Argos Book Of Horrors

If you live outside the UK you may not be familiar with Argos, but it’s basically what Americans would have if Sears hadn’t become a complete disaster after the Internet became popular. While they operate many brick-and-mortar stores and are a formidable online retailer, they still have a large physical catalog that is surprisingly popular. It’s so large, in fact, that interesting (and creepy) things can be done with it using machine learning.

This project from [Chris Johnson] is called the Book of Horrors and was made by feeding all 16,000 pages of the Argos catalog into a machine learning algorithm. The algorithm builds a model of the pages and uses it to generate a series of animations that blend the whole catalog into one flowing, ever-changing document. It borders on creepy, both in the visuals and in the fact that we can’t know exactly what computers are “thinking” when they generate these kinds of images.
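The write-up doesn’t say exactly which model was used, but morphing animations like this typically come from walking through a generative model’s latent space and rendering a frame at each step. Here is a minimal sketch of that idea; the generator below is a placeholder stand-in, not the actual Book of Horrors model:

```python
# Sketch: morph between "pages" by interpolating in a generative model's
# latent space. generate_page() is a placeholder, not the real model.
import numpy as np

LATENT_DIM = 512  # assumed latent size


def generate_page(z: np.ndarray) -> np.ndarray:
    """Placeholder generator: a trained GAN/VAE decoder would go here."""
    rgb = (np.tanh(z[:3]) + 1) / 2          # fake a color from the latent
    return np.ones((64, 64, 3)) * rgb


def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation keeps intermediate points on the latent prior."""
    a, b = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)


# Two random latent points become one smooth, ever-changing "catalog" clip.
rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(LATENT_DIM), rng.standard_normal(LATENT_DIM)
frames = [generate_page(slerp(z_a, z_b, t)) for t in np.linspace(0.0, 1.0, 60)]
```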

The more steps the model was trained for, the creepier the images became. To see more of the project you can follow it on Twitter, where new images are released from time to time. It also reminds us a little of some other machine learning projects that have recently been used to create short films with equally mesmerizing imagery. Continue reading “Argos Book Of Horrors”

Detect COVID-19 Symptoms Using Wearable Device And AI

A new study from the West Virginia University (WVU) Rockefeller Neuroscience Institute (RNI) uses a wearable device and artificial intelligence (AI) to predict COVID-19 up to three days before symptoms occur. The study has been an impressive undertaking, involving over 1,000 healthcare and frontline workers in hospitals across New York, Philadelphia, Nashville, and other critical COVID-19 hotspots.

The digital health platform pairs a custom smartphone application with an Ōura smart ring to monitor biometric signals such as respiration and temperature. The platform also assesses psychological, cognitive, and behavioral data through surveys administered in the same app.

We know that wearables tend to suffer from a lack of accuracy, particularly during activity. However, the Ōura ring appears to take measurements while the user is very still, especially during sleep. This presents an advantage as the accuracy of wearable devices greatly improves when the user isn’t moving. RNI noted that the Ōura ring has been the most accurate device they have tested.

Given that some of the early warning signs of COVID-19 are fever and respiratory distress, it makes sense that a device able to measure respiration and temperature could be used as an early detector of COVID-19. In fact, we’ve seen a few wearable device companies attempt much of what RNI is doing, as well as a few DIY efforts. RNI’s study has probably been the most thorough work released so far, but we’re sure that many more are upcoming.

The initial phase of the study was deployed among healthcare and frontline workers but is now open to the general public. Meanwhile, the National Basketball Association (NBA) is coordinating its re-opening efforts using Ōura’s technology.

We hope to see more results emerge from RNI’s very important work. Until then, stay safe, Hackaday.

Automating The Disinfection Of Large Spaces With Robots

What do you do when you have to disinfect an entire warehouse? You could send a group of people through the place with UV-C lamps, but that would take a long time, as said humans cannot be in the same area as the UV-C radiation, as much as they may like the smell of BBQ chicken. Constantly repositioning the lamps or installing countless lamps would get in the way during normal operation. The answer, according to MIT’s CSAIL, is to strap UV-C lights to a robot and have it ride around the space.

As can be seen in the video (also embedded after the break), a CSAIL group has been working with telepresence robotics company Ava Robotics and the Greater Boston Food Bank (GBFB). Their goal was to create a robotic system that could autonomously disinfect a GBFB warehouse with UV-C without exposing any humans to the harmful radiation. While the robot can be controlled remotely, it can also map the space and navigate between waypoints on its own.

While testing the system, the team used a UV-C dosimeter to confirm the effectiveness of this setup. With the robot driving along at a leisurely 0.22 miles per hour (~0.35 kilometers per hour), it was able to cover approximately 4,000 square feet (~372 square meters) in about half an hour. They estimated that about 90% of viruses like SARS-CoV-2 could be neutralized this way.
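Those figures also let you back out roughly how wide a strip the lamps effectively disinfect on each pass. A quick back-of-the-envelope check (our inference from the published numbers, not a figure from CSAIL):

```python
# Back-of-the-envelope: what effective UV-C swath do the quoted numbers imply?
speed_mph = 0.22      # robot speed
run_time_h = 0.5      # about half an hour
area_sqft = 4000      # floor area covered

distance_ft = speed_mph * 5280 * run_time_h   # ~581 ft driven
swath_ft = area_sqft / distance_ft            # implied effective swath width

print(f"distance driven: {distance_ft:.0f} ft")
print(f"implied swath:   {swath_ft:.1f} ft (~{swath_ft * 0.3048:.1f} m)")
```

That works out to a swath a couple of meters wide, which sounds about right for a mast of UV-C tubes rolling down a warehouse aisle.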

During trial runs, they discovered the need to have the robot adapt to the constantly changing layout of the warehouse, including how much UV-C each aisle needs depending on how full it is. Having multiple robots in the same space coordinate with each other would also be a useful addition.

Continue reading “Automating The Disinfection Of Large Spaces With Robots”

Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips

What has dual compressed-air cannons, 500 roll-on deodorant balls, and a machine-learning brain with a bad attitude? We didn’t know either, until [Leo Fernekes] dropped this video on his autonomous robot sentry gun and saw it in action for ourselves.

Now, we’ve seen tons of sentry guns on these pages before, shooting everything from water to various forms of Nerf. And plenty of those builds have used some form of machine vision to aim the gun onto the target. So while it might appear that [Leo]’s plowing old ground here, this build is chock full of interesting tips and tricks.

It started when [Leo] saw a video on TensorFlow basics from our friend [Edje Electronics], which gave him the boost needed to jump into an AI project. The controller he ended up with looks for humans in the scene and slews the turret onto target, where the air cannons can do their thing. The hefty ammo is propelled by compressed air, which is dumped into the chamber using a solenoid valve with an interesting driver that maximizes the speed at which the valve opens. Style points go to the bacteriophage T4-inspired design, and to the sequence starting at 1:34, which reminded us of the factory scene from RoboCop.
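[Leo]’s controller code isn’t laid out in detail here, but the detect-then-slew loop is close to what you would get from any off-the-shelf person detector. Below is a minimal sketch along those lines, assuming a COCO-trained SSD model from TensorFlow Hub; aim_turret() is a hypothetical stand-in for whatever actually drives the pan/tilt hardware:

```python
# Sketch of a "find a human, slew onto them" loop using a COCO-trained SSD
# detector from TensorFlow Hub. aim_turret() is hypothetical.
import cv2
import tensorflow as tf
import tensorflow_hub as hub

PERSON_CLASS = 1  # 'person' in the COCO label map
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")


def aim_turret(pan_deg: float, tilt_deg: float) -> None:
    """Hypothetical: send angles to the turret's motor controller."""
    print(f"pan {pan_deg:+.1f} deg, tilt {tilt_deg:+.1f} deg")


cap = cv2.VideoCapture(0)
FOV_H, FOV_V = 62.0, 48.0  # assumed camera field of view in degrees

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = detector(tf.expand_dims(rgb, 0))      # uint8 tensor [1, H, W, 3]
    boxes = result["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
    classes = result["detection_classes"][0].numpy()
    scores = result["detection_scores"][0].numpy()

    for box, cls, score in zip(boxes, classes, scores):
        if int(cls) == PERSON_CLASS and score > 0.6:
            ymin, xmin, ymax, xmax = box
            cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
            # Offset from image center, scaled to the camera's field of view
            aim_turret((cx - 0.5) * FOV_H, (0.5 - cy) * FOV_V)
            break  # detections are score-sorted; take the best person only
```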

[Leo] really put a ton of work into this project, and the results show. He is hoping to get an art gallery or museum to show it as an interactive piece to comment on one possible robot-human future, presumably after getting guests to sign a release. Whatever happens to it, the robot looks great and [Leo] learned a lot from it, as did we.

Continue reading “Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips”

Machine Learning Takes The Embarrassment Out Of Videoconference Wardrobe Malfunctions

Telecommuters: tired of the constant embarrassment of showing up to video conferences wearing nothing but your underwear? Save the humiliation and all those pesky trips down to HR with Safe Meeting, the new system that uses the power of artificial intelligence to turn off your camera if you forget that casual Friday isn’t supposed to be that casual.

The following infomercial is brought to you by [Nick Bild], who says the whole thing is tongue-in-cheek but we sense a certain degree of “necessity is the mother of invention” here. It’s true that the sudden throng of remote-work newbies certainly increases the chance of videoconference mishaps and the resulting mortification, so whatever the impetus, Safe Meeting seems like a great idea. It uses a Pi cam connected to a Jetson Nano to capture images of you during videoconferences, which are conducted over another camera. The stream is classified by a convolutional neural net (CNN) that determines whether it can see your underwear. If it can, it makes a REST API call to the conferencing app to turn off the camera. The video below shows it in action, and that it douses the camera quickly enough to spare your modesty.
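[Nick]’s repository has the real thing, but the overall flow is simple enough to sketch: grab a frame from the Pi cam, run it through a binary classifier, and hit the conferencing software’s API if the verdict isn’t office-appropriate. The model file and REST endpoint below are placeholders, not Safe Meeting’s actual interface:

```python
# Sketch of the Safe Meeting flow: classify frames from a secondary camera
# and kill the meeting camera when the classifier flags you. The model file
# and endpoint URL are placeholders, not the project's actual interface.
import time

import cv2
import numpy as np
import requests
import tensorflow as tf

model = tf.keras.models.load_model("underwear_classifier.h5")  # hypothetical CNN
CAMERA_OFF_URL = "http://localhost:8000/api/camera/off"        # hypothetical endpoint

cap = cv2.VideoCapture(0)  # the watchdog camera, not the meeting camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    prob = float(model.predict(x[np.newaxis], verbose=0)[0][0])
    if prob > 0.5:                 # classifier says: not dressed for the meeting
        requests.post(CAMERA_OFF_URL, timeout=2)
    time.sleep(1.0)                # one check per second is plenty
```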

We shudder to think about how [Nick] developed an underwear-specific training set, but we applaud him for doing so and coming up with a neat application for machine learning. He’s been doing some fun work in this space lately, from monitoring where surfaces have been touched to a 6502-based gesture recognition system.

Continue reading “Machine Learning Takes The Embarrassment Out Of Videoconference Wardrobe Malfunctions”

Creating Surreal Short Films From Machine Learning

Ever since we first saw the nightmarish artwork produced by Google DeepDream and the ridiculous faux paintings produced by neural style transfer, we’ve been aware of the ways machine learning can be applied to visual art. With commercially available trained models and automated pipelines for generating images from relatively small training sets, it’s now possible for developers without theoretical knowledge of machine learning to easily produce imagery of their own, provided they have sufficient access to GPUs. Filmmaker [Kira Bursky] took this a step further, creating a surreal short film that features characters and textures produced from image sets.

She began with about 150 photos of her face, 200 photos of film locations, 4600 photos of past film productions, and 100 drawings as the main datasets.

Using GAN models for nebulas, faces, and skyscrapers in RunwayML, she found the results from training on her face set to be disintegrated, realistic, and painterly all at once. Many of the images still evoke aspects of her original face through the distortions, although whether that is the model identifying a feature common to skyscrapers and faces or our own bias towards facial recognition is up to the viewer.

On the other hand, the results of training the film set photos on models of faces and bedrooms produced abstract textures and “surreal and eerie faces like a fever dream”. Perhaps it’s the lack of recognizable characteristics in the transformed images, rather than the familiar anchors of facial features, that gives them such a surreal feel.

[Kira] certainly uses these results to her advantage, brainstorming a concept for a short film that revolves around her main character experiencing nightmares. Although her objective was to use the results to convey a series of emotionally striking scenes, the models she used to produce those scenes are interesting in their own right.

She started off with the MiDaS model, created by a team of researchers from ETH Zurich and Intel, to generate monocular depth maps, which assign each part of an image a depth relative to the rest of the scene. She also used Mask R-CNN to mask out the backgrounds of the generated faces, and combined her generated images in Photoshop to create the main character for her short film.
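Both models are easy to pull down and experiment with yourself. The snippet below loads the small MiDaS variant via torch.hub and a COCO-pretrained Mask R-CNN from torchvision; it mirrors the general approach rather than [Kira]’s exact RunwayML pipeline, and the input filename is just a placeholder:

```python
# Monocular depth with MiDaS plus person masking with Mask R-CNN, along the
# lines described above (not the exact RunwayML setup used in the film).
import cv2
import torch
import torchvision

# MiDaS small model and its matching input transforms, via torch.hub
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image

with torch.no_grad():
    batch = midas_transforms.small_transform(img)
    depth = midas(batch)
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False).squeeze()  # per-pixel relative depth

# Mask R-CNN (COCO-pretrained) to isolate the person from the background
maskrcnn = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()
with torch.no_grad():
    tensor = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
    out = maskrcnn([tensor])[0]

is_person = (out["labels"] == 1) & (out["scores"] > 0.7)  # COCO class 1 = person
mask = out["masks"][is_person][0, 0] > 0.5 if is_person.any() else None
```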

In order to simulate the character walking, she used the Liquid Warping GAN, a framework for human motion imitation and appearance transfer created by a team from ShanghaiTech University and Tencent AI Lab. Using its 3D body mesh recovery module, she could take her original images and synthesize new frames from reference poses of herself going through the motions of walking. Later on, she applied similar motion-tracking techniques to her faces, running them through the First Order Motion Model to simulate different emotions, and then joined the facial movements with her character in After Effects.

Bringing the results together, she animated a 3D camera blur using the depth map videos, which gives the viewer anchor points and makes the result less disorienting, and created a displacement map to heighten the sense of depth and movement within the scenes. In After Effects she also overlaid dust and film grain effects to give the final result a crisper look. The result is a surprisingly cinematic film made entirely of images and videos generated from machine learning models. With the help of the depth adjustments, it almost looks like something you might see in a nightmare.

Check out the result below:

Continue reading “Creating Surreal Short Films From Machine Learning”

Crunching Giant Data From The Large Hadron Collider

Modern physics experiments are often complex, ambitious, and costly. The days when scientific progress could be made with a small tabletop experiment in your lab are mostly over. Especially in fields like astrophysics or particle physics, you need huge telescopes, expensive satellite missions, or giant colliders run by international collaborations with hundreds or thousands of participants. To drive this point home: the largest machine ever built by humankind is the Large Hadron Collider (LHC). You won’t be surprised to hear that even just managing the data it produces is a super-sized task.

Since its start in 2008, the LHC at CERN has received several upgrades to stay at the cutting edge of technology. Currently, the machine is in its second long shutdown and being prepared to restart in May 2021. One of the improvements of Run 3 will be to deliver particle collisions at a higher rate, quantified by the so-called luminosity. This enables the experiments to gather more statistics and to better study rare processes. At the end of 2024, the LHC will be upgraded to the High-Luminosity LHC, which will increase the luminosity by up to a factor of 10 beyond the LHC’s original design value.

Currently, the major experiments ALICE, ATLAS, CMS, and LHCb are preparing themselves to cope with expected data rates in the range of terabytes per second. It is a perfect time to look in more detail at the data acquisition, storage, and analysis of modern high-energy physics experiments. Continue reading “Crunching Giant Data From The Large Hadron Collider”
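To get a feel for what “terabytes per second” means in practice, a quick bit of arithmetic shows why the experiments can’t simply write everything to disk (the 1 TB/s figure below is just the order of magnitude quoted above, not an official number for any one experiment):

```python
# Why you can't just save it all: raw detector output at ~1 TB/s adds up fast.
raw_rate_tb_per_s = 1.0              # order-of-magnitude figure from the text
seconds_per_day = 24 * 3600

per_day_pb = raw_rate_tb_per_s * seconds_per_day / 1000   # TB -> PB
print(f"~{per_day_pb:.0f} PB of raw data per day")         # ~86 PB/day
```

That scale is why trigger systems discard all but a tiny fraction of collision events before anything reaches permanent storage.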