Generate Positivity With Machine Learning

Gesture recognition and machine learning are getting a lot of air time these days, as people understand them better and develop ways to run them on all kinds of platforms. That, in turn, puts the new tools within reach of people well outside strictly academic or business environments. Take, for example, [nate.damen], who rollerblades down the streets of Atlanta with a gesture-recognizing, streaming TV over his head.

He’s known as [atltvhead], and the TV he wears has a functional LED screen on the front. The whole setup reminds us a little of Deep Thought. The screen can display various animations, which are controlled through Twitch chat as he streams his journeys around town. Wanting to add a little more interaction to the animations and simplify his user interface, he set up a gesture-sensing sleeve that augments the animations based on how he’s moving his arm. An Arduino in the arm sensor and a Raspberry Pi in the backpack tie it all together, and he goes deep into the weeds explaining how to use TensorFlow to recognize the gestures. The video linked below also shows many of the training runs for his machine learning system.
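
For the curious, this kind of gesture classifier typically boils down to a small neural network fed fixed-length windows of accelerometer samples. Below is a minimal TensorFlow sketch of the idea; the window length, gesture labels, and layer sizes are our own illustrative assumptions, not the actual model from the build.

```python
# Hypothetical sketch: classifying arm gestures from a 3-axis
# accelerometer stream with TensorFlow/Keras. Window length, gesture
# names, and layer sizes are illustrative, not from [atltvhead]'s build.
import numpy as np
import tensorflow as tf

WINDOW = 50                          # samples per gesture window (assumed)
GESTURES = ["wave", "pump", "idle"]  # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 3)),         # x, y, z per sample
    tf.keras.layers.Conv1D(16, 5, activation="relu"), # pick up short motion patterns
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(GESTURES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use recorded windows shaped (n_samples, WINDOW, 3) with
# integer labels; at runtime the Pi would feed in live sensor windows:
# model.fit(x_train, y_train, epochs=20)
# gesture = GESTURES[np.argmax(model.predict(window[None, ...]))]
```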

[nate.damen] didn’t stop at the cheerful TV head, either. He also wears a backpack that displays uplifting messages to people as he passes them on his rollerblades, not wanting to leave out those who don’t get to see him coming. We think this is a great, uplifting project, and the amount of work that went into getting the machine learning gesture recognition right is impressive on its own. If you’re new to TensorFlow, though, we have featured some projects that can do reliable object recognition using little more than a Raspberry Pi and a camera.

Continue reading “Generate Positivity With Machine Learning”

Ideas To Prototypes Hack Chat With Nick Bild

Join us on Wednesday, July 29 at noon Pacific for the Ideas to Prototypes Hack Chat with Nick Bild!

For most of us, ideas are easy to come by. Taking a shower can generate half a dozen of them, the bulk of which will be gone before your hair is dry. But a few ideas will stick, and eventually make it onto paper or its electronic equivalent, to be played with and tweaked until they coalesce into a plan. And a plan, if we’re lucky, is what’s needed to put that original idea into action, to bring it to fruition and see just what it can do.

No matter what you’re building, the ability to turn ideas into prototypes is what moves projects forward, and it’s what most of us live for. Seeing something on the bench or the shop floor that was once just a couple of back-of-the-napkin sketches, and before that only an abstract concept in your head, is immensely satisfying.

The path from idea to prototype, however, is not always a smooth one, as Nick Bild can attest. We’ve been covering Nick’s work for a while now, starting with his “nearly practical” breadboard 6502 computer, the Vectron, up to his recent forays into machine learning with ShAIdes, his home-automation controlling AI sunglasses. On the way we’ve seen his machine-learning pitch predictor, dazzle-proof glasses, and even a wardrobe-malfunction preventer.

All of Nick’s stuff is cool, to be sure, but there’s a method to his productivity, and we’ll talk about that and more in this Hack Chat. Join us as we dive into Nick’s projects and find out what he does to turn his ideas into prototypes.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, July 29 at 12:00 PM Pacific time. If time zones have you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about. Continue reading “Ideas To Prototypes Hack Chat With Nick Bild”

Argos Book Of Horrors

If you live outside the UK you may not be familiar with Argos, but it’s basically what Americans would have if Sears hadn’t become a complete disaster after the Internet became popular. While they operate many brick-and-mortar stores and are a formidable online retailer, they still have a large physical catalog that is surprisingly popular. It’s so large, in fact, that interesting (and creepy) things can be done with it using machine learning.

This project from [Chris Johnson] is called the Book of Horrors and was made by feeding all 16,000 pages of the Argos catalog into a machine learning algorithm. The computer takes all of the pages and generates a model that ties them together into a series of animations, blending the whole thing into one flowing, ever-changing catalog. It borders on creepy, both in the visuals themselves and in the fact that we can’t know exactly what computers are “thinking” when they generate these kinds of images.
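
Animations like these are characteristic of latent-space interpolation, the standard trick for getting a trained generative model to morph smoothly between outputs. Here’s a rough sketch of that idea, assuming a pre-trained generator and a latent size of our own choosing:

```python
# Hypothetical sketch of latent-space interpolation, the usual way a
# trained generative model (a GAN, say) is turned into a flowing
# animation. The generator itself and its latent size are assumptions.
import numpy as np

LATENT_DIM = 512  # assumed latent vector size

def interpolate(z_a, z_b, steps):
    """Walk linearly between two latent vectors, yielding one per frame."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * z_a + t * z_b

# With a trained generator in hand, each step becomes one video frame:
# z_a, z_b = np.random.randn(LATENT_DIM), np.random.randn(LATENT_DIM)
# frames = [generator(z) for z in interpolate(z_a, z_b, steps=60)]
```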

The more steps the model was trained on, the creepier the images became, too. To see more of the project you can follow it on Twitter, where new images are released from time to time. It also reminds us a little of some other machine learning projects that have recently been used to create short films with equally mesmerizing imagery. Continue reading “Argos Book Of Horrors”

Recreating Paintings By Teaching An AI To Paint

The Timecraft project by [Amy Zhao] and team members uses machine learning to figure out how an existing painting may originally have been painted, stroke by stroke. In their paper titled ‘Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings’, they describe how they trained an ML algorithm on existing time lapse videos of new paintings being created, allowing it to probabilistically generate the steps needed to recreate an already finished painting.

The probabilistic model is implemented using a convolutional neural network (CNN) that outputs a time lapse video spanning many minutes. In the paper they reference how they were inspired by artistic style transfer, where neural networks are used to generate works of art in a specific artist’s style, or to create mash-ups of different artists.
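
At a high level, you can think of the approach as a one-step model rolled forward in time: given the finished painting and the current canvas, predict the next intermediate frame, then repeat. The sketch below captures that structure only; the architecture and sizes are illustrative assumptions, not Timecraft’s actual network.

```python
# Hypothetical sketch of the high-level idea: a CNN that, given the
# finished painting and the current canvas, predicts the next
# intermediate frame; rolling it forward yields a synthetic time lapse.
# Layer choices and resolution are illustrative, not Timecraft's.
import tensorflow as tf

H, W = 64, 64  # assumed working resolution

# Canvas and target painting stacked on the channel axis (3 + 3 = 6)
inputs = tf.keras.layers.Input(shape=(H, W, 6))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
next_frame = tf.keras.layers.Conv2D(3, 1, activation="sigmoid")(x)
step_model = tf.keras.Model(inputs, next_frame)

def synthesize(blank, target, n_frames):
    """Roll the one-step model forward to build a frame sequence."""
    frames, canvas = [], blank
    for _ in range(n_frames):
        canvas = step_model(tf.concat([canvas, target], axis=-1))
        frames.append(canvas)
    return frames
```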

A lot of the complexity comes from the large variety of techniques and materials used in the creation of a painting, such as the exact brush used and the type of paint. Some existing approaches have focused on the fine details here, including physics-based simulation of the paints and brush strokes. These come with significant caveats that Timecraft tried to avoid by going for a more high-level approach.

The time lapse videos generated during the experiment were evaluated through a survey on Amazon Mechanical Turk, in which 158 participants were asked to compare the realism of the Timecraft videos with that of real time lapse videos. Participants preferred the real videos, but mistook the Timecraft videos for the real thing about half the time.

Although perhaps not perfect yet, it does show how ML can be used to deduce how a work of art was constructed, and figure out the individual steps with some degree of accuracy.

Continue reading “Recreating Paintings By Teaching An AI To Paint”

Machine Learning Takes The Embarrassment Out Of Videoconference Wardrobe Malfunctions

Telecommuters: tired of the constant embarrassment of showing up to video conferences wearing nothing but your underwear? Save the humiliation and all those pesky trips down to HR with Safe Meeting, the new system that uses the power of artificial intelligence to turn off your camera if you forget that casual Friday isn’t supposed to be that casual.

The following infomercial is brought to you by [Nick Bild], who says the whole thing is tongue-in-cheek but we sense a certain degree of “necessity is the mother of invention” here. It’s true that the sudden throng of remote-work newbies certainly increases the chance of videoconference mishaps and the resulting mortification, so whatever the impetus, Safe Meeting seems like a great idea. It uses a Pi cam connected to a Jetson Nano to capture images of you during videoconferences, which are conducted over another camera. The stream is classified by a convolutional neural net (CNN) that determines whether it can see your underwear. If it can, it makes a REST API call to the conferencing app to turn off the camera. The video below shows it in action, and that it douses the camera quickly enough to spare your modesty.
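
The detection loop itself is conceptually simple: grab a frame, classify it, and make the API call on a positive hit. Here’s a hedged sketch of what that might look like; the model file, endpoint URL, and threshold are placeholders rather than [Nick]’s actual setup.

```python
# Hypothetical sketch of the Safe Meeting loop: grab a frame from the
# monitoring camera, run it through a binary classifier, and hit a REST
# endpoint to kill the conference camera on a positive detection.
# The model path, endpoint URL, and 0.5 threshold are all assumptions.
import cv2            # pip install opencv-python
import requests
import tensorflow as tf

model = tf.keras.models.load_model("safe_meeting_cnn.h5")  # assumed file
CAMERA_OFF_URL = "http://localhost:8000/api/camera/off"    # placeholder

cap = cv2.VideoCapture(0)  # the monitoring Pi cam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the classifier's input size and scale pixels to [0, 1]
    x = cv2.resize(frame, (224, 224)) / 255.0
    prob = float(model.predict(x[None, ...], verbose=0)[0][0])
    if prob > 0.5:                     # "not dressed for work"
        requests.post(CAMERA_OFF_URL)  # tell the conference app to cut video
```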

We shudder to think about how [Nick] developed an underwear-specific training set, but we applaud him for doing so and coming up with a neat application for machine learning. He’s been doing some fun work in this space lately, from monitoring where surfaces have been touched to a 6502-based gesture recognition system.

Continue reading “Machine Learning Takes The Embarrassment Out Of Videoconference Wardrobe Malfunctions”

Crunching Giant Data From The Large Hadron Collider

Modern physics experiments are often complex, ambitious, and costly. The days when scientific progress could be made with a small tabletop experiment in your lab are mostly over. Especially in fields like astrophysics or particle physics, you need huge telescopes, expensive satellite missions, or giant colliders run by international collaborations with hundreds or thousands of participants. To drive this point home: the largest machine ever built by humankind is the Large Hadron Collider (LHC). You won’t be surprised to hear that even just managing the data it produces is a super-sized task.

Since its start in 2008, the LHC at CERN has received several upgrades to stay at the cutting edge of technology. Currently, the machine is in its second long shutdown, being prepared to restart in May 2021. One of the improvements of Run 3 will be to deliver particle collisions at a higher rate, quantified by the so-called luminosity. This enables experiments to gather more statistics and to better study rare processes. At the end of 2024, the LHC will be upgraded to the High-Luminosity LHC, which will increase the luminosity by up to a factor of 10 beyond the LHC’s original design value.

Currently, the major experiments ALICE, ATLAS, CMS, and LHCb are preparing themselves to cope with expected data rates in the range of terabytes per second. It is a perfect time to take a more detailed look at the data acquisition, storage, and analysis of modern high-energy physics experiments. Continue reading “Crunching Giant Data From The Large Hadron Collider”

Automate Your Xbox

First the robots took our jobs, then they came for our video games. This dystopian future is brought to you by [Little French Kev] who designed this adorable 3D-printed robot arm to interface with an Xbox One controller joystick. He shows it off in the video after the break, controlling a ball-balancing physics demonstration written in Unity.

Hats off to him on the quality of the design. There are two parts that nestle the knob of the thumbstick from either side. He mates those pieces with each other using screws, firmly hugging the stick. Bearings are used at the joints for smooth action of the two servo motors that control the arm. The base of the robotic appendage is zip-tied to the controller itself.

The build targets experimentation with machine learning. Since the computer can control the arm via an Arduino, and has access to metrics of what’s happening in the virtual environment, it’s a perfect setup for training a neural network. Are you thinking what we’re thinking? This is the beginning of hardware speed-running your favorite video games, like [SethBling] did for Super Mario World half a decade ago. It would be even more impressive, since it would be done by automating the mechanical part of the controller rather than operating purely in the software realm. You’ll just need to do your own hack to implement button control.
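
On the PC side, driving the arm could be as simple as streaming servo angles to the Arduino over serial. Here’s a minimal sketch, assuming a two-byte angle protocol that we’ve made up for illustration:

```python
# Hypothetical sketch of the PC side: the Unity metrics come in, a
# trained policy maps them to two servo angles, and the angles go to
# the Arduino over serial. The port name, baud rate, and the
# one-byte-per-servo protocol are assumptions, not the actual firmware.
import serial  # pip install pyserial

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def set_stick(angle_x, angle_y):
    """Send two servo angles (0-180 degrees) as raw bytes."""
    arduino.write(bytes([int(min(max(angle_x, 0), 180)),
                         int(min(max(angle_y, 0), 180))]))

# A trained network would call this inside its control loop, e.g.:
# set_stick(*policy(ball_position, ball_velocity))
```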

Continue reading “Automate Your Xbox”