JeVois Machine Vision Camera Nails Demo Mode

JeVois is a small, open-source, smart machine vision camera that was funded on Kickstarter in early 2017. I backed it because cameras that embed machine vision elements are steadily growing more capable, and JeVois boasts an impressive range of features. It runs embedded Linux and can process video at high frame rates using OpenCV algorithms. It can run standalone, or as a USB camera streaming raw or pre-processed video to a host computer for further action. In either case it can communicate with (and be controlled by) other devices via a serial port.
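For the curious, here is what the “plain USB camera plus serial control” arrangement can look like from the host side, as a minimal Python sketch; the device index, serial port path, and the command string are assumptions for illustration rather than anything lifted from the JeVois documentation.

```python
# Minimal sketch: treating a JeVois-style camera as a plain UVC webcam on the host
# while talking to its command interface over serial-over-USB. Device paths and the
# example command are illustrative assumptions, not taken from the JeVois docs.
import cv2
import serial

cap = cv2.VideoCapture(0)                                  # camera enumerates as a standard USB webcam
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)    # assumed serial console device

port.write(b"help\n")                                      # ask the camera to describe itself (illustrative)
print(port.read(4096).decode(errors="ignore"))

ok, frame = cap.read()                                     # frames arrive already processed by the active module
if ok:
    cv2.imwrite("jevois_frame.jpg", frame)

cap.release()
port.close()
```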

But none of that is what really struck me about the camera when I received my unit. What really stood out was the demo mode. The team behind JeVois nailed an effective demo mode for a complex device. That didn’t happen by accident, and the results are worth sharing.

Continue reading “JeVois Machine Vision Camera Nails Demo Mode”

Sorting Two Tonnes Of Lego

Have you ever taken an interest in something, and then found it’s got a little out of hand as your acquisitions spiral into a tidal wave of bags and boxes? [Jacques Mattheij] found himself in just that position with Lego. His online purchases had run away with him, and he had a garage packed with “two metric tonnes” of the little coloured bricks.

Disposing of Lego is fairly straightforward; there is a lively second-hand market. But to maximise the return it is important to be in control of what you have, to avoid packaging up fake, discoloured, damaged, or dirty parts. This can become a huge job if you do it by hand, so he built a Lego sorting machine to do the job for him.

The machine starts with a hopper for the loose Lego, and a slow belt that tips individual parts down a chute onto a faster belt derived from a running trainer. On that belt they run past a camera whose images are analysed by a neural net, and based on its identification the parts are directed into the appropriate bins by carefully timed jets of compressed air.
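To make that pipeline concrete, here is a rough Python sketch of a classify-then-divert loop of the kind described above. The model file, class list, and belt-timing constant are all hypothetical; [Jacques Mattheij]’s actual software is not reproduced here.

```python
# Hypothetical classify-then-divert loop: grab a frame, run it through a pretrained
# classifier, wait for the part to travel from camera to air jets, then fire the valve.
import time
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("lego_classifier.h5")        # hypothetical pretrained part classifier
CLASSES = ["2x4_brick", "1x2_plate", "technic_pin", "reject"]
BELT_TRAVEL_S = 0.35                            # assumed camera-to-jet travel time

def divert(label: str) -> None:
    """Fire the compressed-air valve for the chosen bin (stubbed out here)."""
    print(f"puff! -> {label} bin")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (128, 128)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    label = CLASSES[int(np.argmax(probs))]
    time.sleep(BELT_TRAVEL_S)                   # wait for the part to reach the jets
    divert(label)
```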

The result is a surprisingly fast way to sort large amounts of bricks without human intervention. He’s posted some videos, one of which we’ve placed below the break, so you can see for yourselves.

Continue reading “Sorting Two Tonnes Of Lego”

Smartphone Will Destroy You at Air Hockey

Most of us carry a spectacularly powerful computer in our pocket, which we rarely use for much more than web browsing, social media, and maybe the occasional phone call. Our mobile phones are technological miracles, but their potential sometimes seems wasted.

It’s always a pleasure to see something that makes use of a mobile phone to drive some nuts-and-bolts hardware. [Jose Julio]’s project does just that, using the phone as the brains behind a robotic air hockey table.

Readers with long memories will remember previous air hockey tables from [Jose], using 3D printer components controlled by an Arduino Mega with a webcam suspended above the field of play. This version transfers camera, machine vision, and game strategy to an Android app, leaving the Arduino to control the hardware under wireless network command from above.
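As an illustration of that split, here is a hedged Python sketch of what a vision-brain-to-motion-controller command link might look like: the vision side tracks the puck and pushes a target paddle position over the local network. The packet format, port, and address are my assumptions, not [Jose]’s actual protocol.

```python
# Hypothetical command link between the vision/strategy side and the motion controller:
# pack a target paddle position into a small UDP packet and send it over the local network.
import socket
import struct

ROBOT_ADDR = ("192.168.1.50", 2222)             # assumed controller address and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(x_mm: int, y_mm: int) -> None:
    """Send a target paddle position as two little-endian signed shorts."""
    sock.sendto(struct.pack("<hh", x_mm, y_mm), ROBOT_ADDR)

send_target(250, 120)                           # move the paddle toward the predicted puck position
```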

The result, which you can see in the video below the break, is an extremely fast-paced game, with the robot looking unbeatable. If you want to build your own there are full instructions and code on GitHub, or you can follow the link on the page above to buy the project as a kit.

Continue reading “Smartphone Will Destroy You at Air Hockey”

Raspberry Pi Robot That Reads Your Emotions

It’s getting easier and easier to add machine intelligence to your hacks, even to the point where you sometimes don’t have to install any special software. In this case [Dexter Industries] has added the ability to read human emotions to their EmpathyBot robot by making use of Google Cloud Vision.

Press a button on the robot and it moves forward until it’s a certain distance from an object. It then takes a picture and sends it off to Google Cloud Vision along with a request to do face detection. The response that Google returns is in JSON format and, if it finds a face, includes the likelihood of the face being happy, angry, sorrowful, or surprised. The robot parses that response and gives an appropriate canned speech using the eSpeak text-to-speech software, e.g. “You seem happy! Tell me why you are so happy!”.
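For a flavour of how that round trip looks in code, here is a minimal Python sketch using the google-cloud-vision client library and the espeak command-line tool. The canned phrases and the likelihood threshold are assumptions of mine; the robot’s real code is linked below.

```python
# Minimal sketch of the detect-then-speak flow: send a snapshot to Cloud Vision,
# read the emotion likelihoods off the first face, and say a canned phrase with espeak.
import subprocess
from google.cloud import vision

client = vision.ImageAnnotatorClient()           # needs GOOGLE_APPLICATION_CREDENTIALS set

with open("snapshot.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)    # returns FaceAnnotation objects
if response.face_annotations:
    face = response.face_annotations[0]
    threshold = vision.Likelihood.LIKELY         # assumed cutoff for "confident enough"
    if face.joy_likelihood >= threshold:
        phrase = "You seem happy! Tell me why you are so happy!"
    elif face.sorrow_likelihood >= threshold:
        phrase = "You look sad. I hope things get better."
    else:
        phrase = "I see a face, but I am not sure how you feel."
    subprocess.run(["espeak", phrase])           # speak the canned response aloud
```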

[Dexter] has made the source code available on GitHub. It’s written in Python and is easy to read by anyone with even a little programming experience. The video after the break gives a number of demonstrations, including some with non-human subjects.

Continue reading “Raspberry Pi Robot That Reads Your Emotions”

Sort Your Candy With A Raspberry Pi And Google Cloud Vision

If you have been off trick-or-treating and returned home with an embarrassment of candy, what on earth can you do to manage the problem and sort it by brand?

Yes, it’s an issue that so many of us have had to face at this time of year. So much of a challenge, in fact, that the folks at [Dexter Industries] have made a robotic candy-sorter to automate the task.

OK, there’s something tongue-in-cheek about the application. But the technology they’ve used is interesting, and worth a second look. Hardware-wise it’s a Lego Mindstorms conveyor and hopper controlled by a Raspberry Pi through the BrickPi interface. All very well, but it’s in the software that the interest lies. They use the Raspberry Pi’s camera to take a picture, send it off to Google Cloud Vision, and query for a guess at the brand of the candy in question. The value returned is then compared to a list of brands to keep or to donate to another family member, and the hopper tips the bar into the respective pile. They provide full build details and code, as well as the video we’ve put below the break. So simple a child can explain it, sort of.
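If you want to try the software side without the Lego, something along these lines would do it: logo detection on the camera image, then a simple keep-or-donate lookup. The brand list, and the choice of logo detection over label detection, are assumptions on my part rather than details confirmed by [Dexter Industries].

```python
# One plausible way to get a brand guess out of Cloud Vision: run logo detection
# on the image, then decide whether the candy stays or goes to a sibling.
from google.cloud import vision

KEEP = {"Snickers", "Reese's", "Kit Kat"}        # hypothetical keep list

def sort_candy(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.logo_detection(image=image)
    if not response.logo_annotations:
        return "donate"                          # unrecognised candy goes to the other pile
    brand = response.logo_annotations[0].description
    return "keep" if brand in KEEP else "donate"

print(sort_candy("candy.jpg"))
```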

Continue reading “Sort Your Candy With A Raspberry Pi And Google Cloud Vision”

Interactive Dynamic Video

If a picture is worth a thousand words, a video must be worth millions. However, computers still aren’t very good at analyzing video. Machine vision software like OpenCV can do certain tasks like facial recognition quite well. But current software isn’t good at determining the physical nature of the objects being filmed. [Abe Davis, Justin G. Chen, and Fredo Durand] are members of the MIT Computer Science and Artificial Intelligence Laboratory. They’re working toward a method of determining the structure of an object based upon the object’s motion in a video.

The technique relies on vibrations which can be captured by a typical 30 or 60 frames per second (fps) camera. Here’s how it works: A locked-down camera is used to image an object. The object is moved by wind, by someone banging on it, or by any other mechanical means. This movement is captured on video. The team’s software then analyzes the video to see exactly where the object moved, and how much it moved. Complex objects can have many vibration modes. The wire frame figure used in the video is a great example. The hands of the figure will vibrate more than the figure’s feet. The software uses this information to construct a rudimentary model of the object being filmed. It then allows the user to interact with the object by clicking and dragging with a mouse. Dragging the hands will produce more movement than dragging the feet.
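To be clear, the following is not the MIT team’s algorithm, just a rough Python illustration of the first step they describe: measuring how much each region of the frame moves in footage from a locked-down camera, here using dense optical flow from OpenCV.

```python
# Rough illustration only: accumulate per-pixel motion magnitude over a clip from a
# fixed camera, so that strongly vibrating regions (the figure's hands) stand out
# against nearly static ones (its feet). The input filename is hypothetical.
import cv2
import numpy as np

cap = cv2.VideoCapture("wireframe.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
amplitude = np.zeros(prev_gray.shape, dtype=np.float64)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    amplitude += np.linalg.norm(flow, axis=2)    # per-pixel motion this frame
    prev_gray = gray

# Save a normalised motion map; bright areas are the parts that moved the most.
motion_map = cv2.normalize(amplitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("motion_map.png", motion_map)
```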

The results aren’t perfect – they remind us of computer animated objects from just a few years ago. However, this is very promising. These aren’t textured wire frames created in 3D modeling software. The models and skeletons were created automatically using software analysis. The team’s research paper (PDF link) contains all the details of their research. Check it out, and check out the video after the break.

Continue reading “Interactive Dynamic Video”

Next Week in NYC: How the Age of Machine Consciousness is Transforming Our Lives

I’ve developed or have been involved with a number of imaging technologies, everything from DIY synthetic aperture radar and the MIT thru-wall radar to the next generation of ultrasound imaging devices. Imagery is cool, but what the end-user often wants is some way to get an answer, as opposed to just viewing a reconstruction. So let’s figure that out.

We’re kicking off a discussion on how to apply deep learning to more than just beating Jeopardy champions at their own game. We’d like to apply deep learning to hard data, to imagery. Is it possible to get the computer to accurately provide the diagnosis?

I helped to organize a seminar series/discussion panel in New York City on November 13th (you know, for those readers who are closer to New York than to Munich). This discussion panel includes David Ferrucci (the guy who led the IBM Watson program), MIT astrophysicist Max Tegmark, and the person who created genetic sequencing on a chip: Jonathan Rothberg. As the vanguard of creativity and enthusiasm in everything technical, we’d like the Hackaday community to join the conversation.

Continue reading “Next Week in NYC: How the Age of Machine Consciousness is Transforming Our Lives”