Machine Learning Algorithm Runs On A Breadboard 6502

When it comes to machine learning algorithms, one’s thoughts do not naturally flow to the 6502, the processor that powered some of the machines in the first wave of the PC revolution. And one definitely does not think of gesture recognition running on a homebrew breadboard version of a 6502 machine, and yet that’s exactly what [Nick Bild] has accomplished.

Before anyone gets too worked up in the comments, we realize that [Nick]’s Vectron breadboard computer is getting a lot of help from other, more modern machines. He’s got a pair of Raspberry Pi 3s in the mix: one to capture and downscale images from a Pi cam, and one to interface with an Atari 2600 emulator and send keypresses that control games based on the gestures seen by the camera. But the logic that converts gestures into control signals is all Vectron, using a k-nearest-neighbor algorithm executed in 6502 assembly. Fifty gesture images are stored in ROM and act as references for the four known gesture classes: up, down, left, and right. When a match between the camera image and a gesture class is found, the corresponding keypress is sent to the game. The video below shows that the whole thing is pretty responsive.
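
For the curious, here’s a minimal sketch of that matching step, in Python rather than 6502 assembly; the flattened-image format, L1 distance, k of 3, and label layout are our assumptions, not a transcription of [Nick]’s code:

```python
# Hypothetical sketch of k-NN gesture matching; not [Nick]'s actual routine.
import numpy as np

GESTURES = ["up", "down", "left", "right"]

def classify(frame, ref_images, ref_labels, k=3):
    """frame: flattened camera image (1-D array); ref_images: the 50 stored
    reference images, one per row; ref_labels: integer class per row."""
    # L1 distance from the live frame to every stored reference image.
    dists = np.abs(ref_images.astype(int) - frame.astype(int)).sum(axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest refs
    votes = np.bincount(ref_labels[nearest], minlength=len(GESTURES))
    return GESTURES[int(np.argmax(votes))]     # majority vote wins
```

L1 distance keeps the arithmetic to additions and subtractions, which is about all a 6502 is happy doing quickly.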

In our original article on [Nick]’s Vectron breadboard computer, [Tom Nardi] said that “You won’t be playing Prince of Persia on it.” That may be true, but a machine learning system running on the Vectron is not too shabby either.

Machine Vision Keeps Track Of Grubby Hands

Can you remember everything you’ve touched in a given day? If you’re being honest, the answer is, “Probably not.” We humans are a tactile species, with an outsized proportion of both our motor and sensory nerves sent directly to our hands. We interact with the world through our hands, and unfortunately that may mean inadvertently spreading disease.

[Nick Bild] has a potential solution: a machine-vision system called Deep Clean, which monitors a scene and records anything in it that has been touched. [Nick]’s system uses a Jetson Xavier and a stereo camera to detect depth in a scene; he built his camera from a pair of Raspberry Pi cams and a Pi 3B+, but other depth cameras like a Kinect could probably do the job. The idea is to watch the scene for human hands (OpenPose is the tool he chose for that job) and correlate their depth in the scene with the depth of objects. Touch a doorknob or a light switch, and a marker is left on the scene. A cleaning crew could then look at the scene to see which areas need extra attention. We can think of plenty of applications that extend beyond the current crisis, as the ability to map touched areas seems generally useful.
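
The core check is easy to picture. Here’s a sketch under our own assumptions; the function names, the 15 mm tolerance, and the data layout are illustrative, not [Nick]’s actual parameters:

```python
# Sketch: a hand keypoint whose depth matches the empty scene's depth at the
# same pixel is treated as a touch rather than a hover. All names assumed.
import numpy as np

TOUCH_MM = 15  # assumed tolerance between hand depth and surface depth

def mark_touches(hand_points, depth_map, background_depth, touched_mask):
    """hand_points: (x, y) keypoints from a pose estimator such as OpenPose;
    depth_map: live per-pixel depth in mm from the stereo camera;
    background_depth: depth of the empty scene, captured with no hands."""
    for x, y in hand_points:
        # If the hand's depth agrees with the surface depth at that pixel,
        # the hand is resting on the surface rather than hovering in front.
        if abs(int(depth_map[y, x]) - int(background_depth[y, x])) < TOUCH_MM:
            touched_mask[y, x] = True
    return touched_mask

# touched = mark_touches(tips, depth, bg, np.zeros(depth.shape, dtype=bool))
```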

[Nick] has been getting some mileage out of that Xavier lately — he’s used it to build an AI umpire and shades that help you find lost stuff. Who knows what else he’ll find to do with them during this time of confinement?

Harry Potter Wand Hack Makes Magic Real

Any sufficiently advanced hack is indistinguishable from magic, a wise man once observed. That’s true with this cool build from [Jasmeet Singh] that magically opens a box when you wave a Harry Potter magic wand in the right way. Is it magic? No, it’s a neat hack that uses computer vision to track the wand and recognize when you make the magic gesture.

The trick is based on the same technique that Universal Studios uses in its Harry Potter theme park, as detailed in a patent with the snappy title of “System and method for tracking a passive wand and actuating an effect based on a detected wand path”. The basic idea is that a retroreflective dot on the end of the wand reflects light from a set of infra-red LEDs around the camera. An infra-red-sensitive camera detects this reflected light as a bright dot. This camera is tied into a computer vision system that tracks the path of the dot, then triggers the action if it follows a certain pattern.
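
In OpenCV terms, that tracking loop can be surprisingly small. Here’s a hedged sketch: the brightness threshold is a guess, and recognize_and_trigger() is a placeholder we flesh out below.

```python
# Sketch of IR wand-tip tracking: under IR illumination the retroreflective
# tip is by far the brightest thing in frame, so locating it is cheap.
import cv2

cap = cv2.VideoCapture(0)      # NoIR camera with IR LEDs ringed around it
path = []                      # the wand tip's trace across frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)     # suppress hot pixels
    _, brightest, _, loc = cv2.minMaxLoc(gray)     # find the brightest spot
    if brightest > 220:        # assumed threshold for the reflective dot
        path.append(loc)       # still tracing: accumulate the path
    elif path:
        recognize_and_trigger(path)  # dot lost: classify the finished trace
        path = []                    # (hypothetical hook, defined below)
```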

The version that [Jasmeet] built uses a Raspberry Pi NoIR camera and a Raspberry Pi 3 running OpenCV. This feeds into a machine learning graph that detects the letters of the alphabet. If the detected letter is an A (for Alohomora, the Harry Potter unlocking spell), the box opens. If it is a C, the box closes. This is all tied together using Python.
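
The final dispatch might look something like this: a hypothetical sketch that fills in the recognize_and_trigger() hook from the loop above, with gpiozero driving the lid servo. The GPIO pin and the classify_letter() recognizer are assumptions.

```python
# Hypothetical glue mapping the recognized letter onto the box-lid servo.
from gpiozero import Servo

lid = Servo(17)    # assumed GPIO pin driving the box-lid servo

def recognize_and_trigger(path):
    letter = classify_letter(path)   # stand-in for the ML letter recognizer
    if letter == "A":                # Alohomora: open the box
        lid.max()
    elif letter == "C":              # close it again
        lid.min()
```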

It’s a neat build that ties together a number of interesting techniques, and which could keep the kids amused for a while. You could also expand it further, such as adding a death ray that triggers if you trace an S for Sectumsempra. That’ll teach them not to mess with the dark arts.

Teaching Robots Workplace Etiquette

Most often, humans and robots do not work directly together; they work on different stages of a production pipeline, or the robot performs a task in place of a human. In such cases any human-robot interaction (HRI) is superficial. But what if humans and robots have to work alongside each other? That is a question a group of students at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has recently been studying.

In their paper on human-robot collaborative tasks (PDF), they cover the three possible models for this kind of interaction: no communication at all (‘silent’), pre-programmed communication (a state machine), or, as in this case, a Markov model-based system. The framework they demonstrate, called CommPlan, uses observation data from human subjects to construct a Markov model that integrates sensor data to decide on the robot’s next action.
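
As a toy illustration of the decision at the heart of such a system (the costs, names, and single-event belief here are entirely ours, far simpler than CommPlan’s actual model): the robot weighs the nuisance of speaking against the expected cost of a silent collision.

```python
# Toy sketch: speak only when the expected cost of silence exceeds the
# fixed cost of interrupting the human. All numbers are illustrative.
COMM_COST = 1.0       # nuisance cost of issuing a voice prompt
CONFLICT_COST = 5.0   # cost if robot and human reach for the same item

def choose_action(belief):
    """belief: probabilities over the human's next move, updated each step
    from sensor observations (the Markov model's job in the paper)."""
    p_conflict = belief.get("reach_for_shared_item", 0.0)
    if p_conflict * CONFLICT_COST > COMM_COST:
        return "speak"         # expected collision cost outweighs nuisance
    return "stay_silent"

# choose_action({"reach_for_shared_item": 0.4})  ->  "speak"
```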

In the experiment they performed (the preparation of a meal; see the embedded video after the break), human subjects had to work alongside a robot. Of the three approaches, CommPlan was the fastest, using voice interaction only when it deemed it necessary. The subjects also expressed a preference for bidirectional communication, much as would occur between human workers.

A Soldering LightSaber For The Speedy Worker

We all have our preferences when it comes to soldering irons, and for [Marius Taciuc] the strongest of all is a quick heat-up. The iron has to be at full temperature in the time it takes him to get to work, or it simply won’t cut the mustard. His solution is a temperature-controlled iron, but one with no ordinary temperature control. Instead of a normal feedback loop, it uses a machine learning algorithm to find the quickest warm-up.

The elements he’s using have a thermocouple in series with the heating element itself, meaning that power must be cut to the element to take a temperature measurement. That duty cycle cannot be cut too short or the measurements become noisy, so under a traditional temperature-control regimen there is a limit on how quickly the iron can heat up. His approach is to turn the element on full-time for a period without stopping to measure the temperature, only measuring after it has had a chance to heat up. The algorithm continually learns how long an on-time yields what temperature, and interpolates between what it has learned to arrive at the desired reading. It’s a clever way to make existing hardware perform new tricks, and we like that.
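
In spirit, the learning step can be as simple as a growing lookup table plus linear interpolation. A sketch under our assumptions, not [Marius]’s actual firmware:

```python
# Sketch: record (temperature reached, on-time) pairs from each heat cycle,
# then interpolate to find the on-time for a requested setpoint.
import bisect

samples = []  # kept sorted by temperature: (temperature_reached, on_time_s)

def record(on_time, temp):
    bisect.insort(samples, (temp, on_time))   # learn from every heat cycle

def on_time_for(target_temp):
    """Interpolate between the two learned points bracketing the target.
    Assumes at least one heat cycle has already been recorded."""
    i = bisect.bisect_left(samples, (target_temp,))
    if i == 0:
        return samples[0][1]        # below everything we've seen: clamp
    if i == len(samples):
        return samples[-1][1]       # above everything we've seen: clamp
    (t0, s0), (t1, s1) = samples[i - 1], samples[i]
    return s0 + (s1 - s0) * (target_temp - t0) / (t1 - t0)
```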

He’s appeared on these pages quite a few times over the years; perhaps you’d like to see the first version of the same hardware. Meanwhile, watch the quick heat-up in action, with a fuller explanation, in the video below.

Silicone And AI Power This Prayerful Robotic Intercessor

Even in a world as far off the rails as this one currently is, we’re going to go out on a limb and say that this machine learning, servo-powered prayer bot is going to be the strangest thing you see today. We’re happy to be wrong about that, though, and if we are, please send links.

“The Prayer,” as [Diemut Strebe]’s work is called, may look strange, but it’s another in a string of pieces by various artists that explore just what it means to be human at a time when machines are blurring the line between them and us. The hardware is straightforward: a silicone rubber representation of a human nasopharyngeal cavity, servos for moving the lips, and a speaker to create the vocals. Those are generated by a machine-learning algorithm trained against the sacred texts of many of the world’s major religions, including the Christian Bible, the Koran, the Bhagavad Gita, Taoist texts, and the Book of Mormon. The algorithm analyzes the structure of sacred verses and recreates random prayers and hymns, voiced with Amazon Polly, that sound a lot like the real thing. That the lips move in synchrony with the ersatz devotions only adds to the otherworldliness of the piece. Watch it in action below.
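
The text-to-speech leg is the easiest part to picture; here’s a minimal sketch using boto3’s Polly client, with a placeholder verse and voice (the generative model that produces the text is not shown):

```python
# Sketch of the Polly leg: hand a generated verse to Amazon Polly and save
# the returned audio for the speaker. Verse and voice are placeholders.
import boto3

polly = boto3.client("polly")

def speak(verse, voice="Joanna"):
    resp = polly.synthesize_speech(Text=verse, OutputFormat="mp3",
                                   VoiceId=voice)
    with open("prayer.mp3", "wb") as f:
        f.write(resp["AudioStream"].read())   # feed this to the speaker
```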

We’ve featured several AI-based projects that poke at some interesting questions. This kinetic sculpture that uses machine learning to achieve balance comes to mind, while AI has even been employed in the search for spirits from the other side.

Assistive Specs Help Jog Your Memory

Forgetting things happens to all of us. Young and old, we know things are on our to-do list, but in the heat of the moment they disappear from our minds and we miss them. There are myriad technological answers to this in the form of reminders and calendars, but [Nick Bild] has come up with possibly the most inventive yet. His Newrons project is a pair of glasses with a machine vision camera; they flash a light when an object associated with a calendar entry comes into view.

At its heart is a JeVois A33 Smart Machine Vision Camera, which runs a neural network trained on an image dataset. It passes its sightings to an Arduino Nano 33 IoT fitted with a real-time clock, which pulls appointment information from Google Calendar and flashes the LED when it detects a match between object and event. In his example, which we’ve placed below the break, a pill bottle triggers a reminder to take the pills.
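
The matching logic in miniature could be a label-in-summary check against events that are currently due; the event tuple format and matching rule here are our assumptions, not the actual firmware:

```python
# Sketch: flash the LED only when an object the camera just saw appears in
# an event that is currently due. Event format is assumed for illustration.
import time

def should_flash(detected_label, events, now=None):
    """events: list of (start_epoch, end_epoch, summary) pulled from
    Google Calendar; detected_label: e.g. 'pill bottle' from the camera."""
    now = now or time.time()
    for start, end, summary in events:
        if start <= now <= end and detected_label in summary.lower():
            return True   # object in view matches a due appointment
    return False
```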

We like this idea, but can’t help thinking it has a flaw: the reminder relies on the object coming into view. A version that tied this in with more conventional calendar-based reminders would address that, and perhaps save the forgetful a few problems.
