Object Detection, With TensorFlow

Getting computers to recognize objects has been a historically difficult problem in computer science, but with the rise of machine learning it is becoming easier to solve. One of the tools that can be put to work in object recognition is an open source library called TensorFlow, which [Evan] aka [Edje Electronics] has put to work for exactly this purpose.

His object recognition software runs on a Raspberry Pi equipped with a webcam, and also makes use of OpenCV. [Evan] notes that this opens up a lot of creative low-cost detection applications for the Pi, such as setting up a camera that detects when a pet is waiting at the door to be let inside or outside, counting the number of bees entering and exiting a beehive, or monitoring parking spaces at an office.

This project uses a number of other toolkits as well, including Protobuf. It also makes extensive use of Python scripts, but if you’re comfortable with that and you have an application for computer vision, [Evan]’s tutorial will get you started.
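
To give a flavor of what such a detection loop looks like, here's a minimal sketch in the TensorFlow 1.x style of the era, not [Evan]'s actual code: OpenCV grabs frames from the webcam, a frozen inference graph scores them, and anything confident enough gets a box drawn around it. The model path and confidence threshold are placeholders for whatever detector you download.

```python
# Minimal sketch, assuming a frozen detection graph from the TF 1.x model zoo.
import cv2
import numpy as np
import tensorflow as tf

MODEL_PATH = "frozen_inference_graph.pb"  # placeholder: your downloaded model

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

cap = cv2.VideoCapture(0)  # the USB webcam on the Pi

with tf.Session(graph=detection_graph) as sess:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Model zoo detectors take a batch of RGB images and return
        # normalized box coordinates plus confidence scores.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        boxes, scores = sess.run(
            ["detection_boxes:0", "detection_scores:0"],
            feed_dict={"image_tensor:0": np.expand_dims(rgb, 0)})
        h, w = frame.shape[:2]
        for box, score in zip(boxes[0], scores[0]):
            if score < 0.6:           # arbitrary confidence threshold
                continue
            y1, x1, y2, x2 = box      # normalized [ymin, xmin, ymax, xmax]
            cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                          (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```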

Continue reading “Object Detection, With TensorFlow”

Talking To Alexa With Sign Language

As William Gibson once noted, the future is already here; it just isn't evenly distributed. That's especially true for those of us with disabilities. [Abishek Singh] wanted to do something about that, so he created a way for the hearing-impaired to use Amazon's Alexa voice service. He did this using a TensorFlow deep learning network to convert American Sign Language (ASL) to speech, and a speech-to-text converter to interpret the response. This all runs on a laptop, so it should work with any voice interface with a bit of tweaking. In particular, [Abishek] seems to have created a custom bit of ASL to trigger Alexa. Perhaps the next step would be to use a robotic arm to create the output directly in ASL and cut out the Echo device completely? [Abishek] has not released the code for this project yet, but he has released the code for other projects, such as Peeqo, the robot that responds with GIFs.
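
Since the code is unreleased, the following is purely an illustrative sketch of how such a pipeline could be wired together: a sign classifier (stubbed out here) labels webcam frames, a text-to-speech engine speaks the request at the Echo, and a speech recognizer transcribes the reply for display. The pyttsx3 and speech_recognition libraries are plausible stand-ins, not necessarily what [Abishek] used.

```python
# Illustrative sketch only -- the classifier is a stub and the libraries
# are common choices, not confirmed details of [Abishek]'s build.
import pyttsx3                    # offline text-to-speech
import speech_recognition as sr   # wraps several speech-to-text backends

def classify_sign(frame):
    # Stand-in for the TensorFlow ASL classifier; it would return the
    # word or phrase recognized from a webcam frame.
    return "what is the weather"

def speak_to_alexa(text):
    engine = pyttsx3.init()
    engine.say("Alexa, " + text)  # wake word plus the signed request
    engine.runAndWait()

def hear_response():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source, phrase_time_limit=10)
    return recognizer.recognize_google(audio)  # cloud speech-to-text

if __name__ == "__main__":
    request = classify_sign(None)  # frame capture omitted in this sketch
    speak_to_alexa(request)
    print(hear_response())         # show the reply for the user to read
```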

[Via FlowingData and [Belg4mit]]

Continue reading “Talking To Alexa With Sign Language”

Modern Wizard Summons Familiar Spirit

In European medieval folklore, a practitioner of magic may call for assistance from a familiar spirit, which often takes the form of an animal. [Alex Glow] is our modern-day Merlin, who invoked the magical incantations of 3D printing, Arduino, and Raspberry Pi to summon her familiar Archimedes: The AI Robot Owl.

The key attraction in this build is Google's AIY Vision kit, specifically the vision processing unit that tremendously accelerates image classification tasks running on the attached Raspberry Pi Zero W. Instead of taking several seconds to analyze each image, classification can now run several times per second, all performed locally with no connection to Google's cloud required. (See our earlier coverage for more technical details.) The default demo application for the Google AIY Vision kit is a "joy detector" that looks for faces and attempts to determine whether each one is happy or sad. We've previously seen this functionality mounted on a robot dog.
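
For reference, the heart of that joy-detector pattern fits in about a dozen lines of Python. The sketch below follows the pattern of the kit's published demos, paraphrased from memory, so treat the exact calls as approximate rather than canonical:

```python
# Condensed version of the AIY Vision kit's joy-detector pattern; see the
# kit's own bundled demos for the canonical code.
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

with PiCamera(sensor_mode=4, resolution=(1640, 1232)) as camera:
    # Inference runs on the Vision Bonnet's VPU, not the Pi Zero's CPU --
    # that is what makes several classifications per second possible.
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            for face in face_detection.get_faces(result):
                # joy_score ranges from 0.0 (sad) to 1.0 (happy)
                print("joy: %.2f" % face.joy_score)
```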

[Alex] aimed to go beyond the default app (and default box) to create Archimedes, who was to reward happy people with a sticker. As a moving robotic owl, Archimedes had far more crowd appeal than the vision kit’s default cardboard box. All the kit components have been integrated into Archimedes’ head. One eye is the expected Pi camera, the other eye is actually the kit’s piezo buzzer. The vision kit’s LED-illuminated button now tops the dapper owl’s hat.

Archimedes was created to join in Google's promotional efforts. Their presence at this Maker Faire consisted of two tents: an introductory "Learn to Solder" tent where people could make a blinky LED badge, and a second tent focused on their line of AIY kits, like this vision kit, filled with demos of what the kits can do aside from really cool robot owls.

Hopefully these promotional efforts helped many AIY kits find new homes in the hands of creative makers. It’s pretty exciting that such a powerful and inexpensive neural net processor is now widely available, and we look forward to many more AI-powered hacks to come.

Continue reading “Modern Wizard Summons Familiar Spirit”


Using TensorFlow To Recognize Your Own Objects

When the time comes to add an object recognizer to your hack, all you need to do is pick one of the many available models and retrain it for your particular objects of interest. To help with that, [Edje Electronics] has put together a step-by-step guide to using TensorFlow to retrain Google's Inception object recognizer. He does it on Windows 10, since there's already plenty of documentation out there for Linux OSes.

You're not limited to just Inception, though. Inception is one of a few models that are very accurate, but it can take a few seconds to process each image, so it's more suited to a fast laptop or desktop machine. MobileNet is an example of one that is less accurate but faster at recognition, making it a better fit for a Raspberry Pi or mobile phone.

You'll need a few hundred images of your objects. These can either be scraped from an online source like Google Images, or you can take your own photos. If you take the latter approach, make sure to shoot from various angles and rotations, and under different lighting conditions. Fill your background with various other things, and even have some things partially obscuring your objects. This may sound like a long, tedious task, but it can be done efficiently. [Edje Electronics] is working on recognizing playing cards, so he first sprinkled them around his living room, added some clutter, and then walked around taking pictures with his phone. Once they were uploaded, some easy-to-use software helped him label them all in around an hour. Note that he trained on 24 different objects, which is the number of different cards you get in a pinochle deck.
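
Annotation tools of this kind typically save one Pascal VOC-style XML file per image, which retraining pipelines then flatten into a CSV before generating the TFRecords TensorFlow trains on. Here's a rough sketch of that conversion step, with an assumed folder layout rather than anything taken verbatim from his guide:

```python
# Sketch of a typical Pascal VOC XML -> CSV flattening step, as used in many
# TensorFlow object-detection retraining workflows (paths are examples).
import csv
import glob
import xml.etree.ElementTree as ET

with open("train_labels.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["filename", "width", "height", "class",
                     "xmin", "ymin", "xmax", "ymax"])
    for xml_file in glob.glob("images/train/*.xml"):
        root = ET.parse(xml_file).getroot()
        size = root.find("size")
        for obj in root.findall("object"):      # one row per labeled card
            box = obj.find("bndbox")
            writer.writerow([
                root.find("filename").text,
                size.find("width").text,
                size.find("height").text,
                obj.find("name").text,           # e.g. "nine of clubs"
                box.find("xmin").text,
                box.find("ymin").text,
                box.find("xmax").text,
                box.find("ymax").text,
            ])
```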

You'll need to install a lot of software and do some configuration, but he walks you through that too. Ideally you'd use a computer with a GPU, but that's optional; the difference is between roughly three and twenty-four hours of training. Be sure to both watch his video below and follow the steps on his GitHub page. The GitHub page is kept the most up-to-date, but his video does a more thorough job of walking you through the software, such as how to use the image labeling program.

Why is he training an object recognizer on playing cards? This is just one more step toward a blackjack-playing robot. He'd previously done an impressive job using OpenCV, though that algorithm handled only non-overlapping cards. Google's Inception, however, recognizes partially obscured cards. This is a very interesting project, and one we'll be keeping an eye on. If you have any ideas for him, leave them in the comments below.

Continue reading “Using TensorFlow To Recognize Your Own Objects”

Five Steps To TensorFlow On The Raspberry Pi

If you have about 10 hours to kill, you can use [Edje Electronics]'s instructions to install TensorFlow on a Raspberry Pi 3. In all fairness, the amount of time you'll have to babysit the process is only about an hour; the rest is spent compiling, and you don't need to watch it run. You can see a video of the steps required below.

You need a Pi with at least a 16 GB SD card and a USB drive with at least 1 GB of free space. The extra storage not only holds the software, but also lets you create a swap file so the Pi will have enough virtual memory to build everything required.

Continue reading “Five Steps To TensorFlow On The Raspberry Pi”

TensorFlow In Your Browser

If you want to explore machine learning, you can now write applications that train and deploy TensorFlow in your browser using JavaScript. We know what you are thinking: that has to be slow. Surprisingly, it isn't, since the libraries use Graphics Processing Unit (GPU) acceleration. Of course, that assumes your browser can use your GPU. There are several demos available, including one where you train a Pac-Man game to respond to gestures captured by your webcam. If you try it and then disable accelerated graphics in your browser options, you'll see just what a speed-up the GPU provides.

Continue reading “TensorFlow In Your Browser”

Neural Network Zaps You To Take Better Photographs

It’s ridiculously easy to take a bad photograph. Your brain is a far better Photoshop than Photoshop, and the amount of editing it does on the scenes your eyes capture often results in marked and disappointing differences between what you saw and what you shot.

Taking your brain out of the photography loop is the goal of [Peter Buczkowski]'s "prosthetic photographer." The idea is to use a neural network to constantly analyze a scene until maximal aesthetic value is achieved, at which point the user involuntarily takes the photograph.

But the human-computer interface is the interesting bit — the device uses a transcutaneous electrical nerve stimulator (TENS) wired to electrodes in the handgrip to involuntarily contract the user’s finger muscles and squeeze the trigger. (Editor’s Note: This project is about as sci-fi as it gets — the computer brain is pulling the strings of the meat puppet. Whoah.)

Meanwhile, back in reality, it’s not too strange a project. A Raspberry Pi watches the scene through a Pi Cam and uses a TensorFlow neural net trained against a set of high-quality photos to determine when to trip the shutter. The video below shows it in action, and [Peter]’s blog has some of the photos taken with it.
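
Reduced to its essentials, the control loop is straightforward. Here's a hypothetical sketch, with an invented GPIO pin and a stubbed-out scoring function standing in for the trained network; frame capture is shown with OpenCV for brevity rather than the picamera library:

```python
# Hypothetical sketch of the control loop -- the scoring function is a stub
# and the GPIO wiring is invented for illustration.
import time
import cv2
import RPi.GPIO as GPIO

TENS_PIN = 18  # example pin; the real build drives the TENS electrodes

def aesthetic_score(frame):
    # Stand-in for the TensorFlow net trained against high-quality photos;
    # it would return a quality estimate between 0.0 and 1.0.
    return 0.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(TENS_PIN, GPIO.OUT)
camera = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = camera.read()
        if ok and aesthetic_score(frame) > 0.9:
            GPIO.output(TENS_PIN, GPIO.HIGH)  # zap: the finger squeezes
            time.sleep(0.2)
            GPIO.output(TENS_PIN, GPIO.LOW)
            time.sleep(2.0)                   # don't fire again immediately
finally:
    camera.release()
    GPIO.cleanup()
```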

We’re not sure this is exactly the next “must have” camera accessory, and it probably won’t help with snapshots and selfies, but it’s an interesting take on the human-device interface. And if you’re thinking about the possibilities of a neural net inside your camera to prompt you when to take a picture, you might want to check out our primer on TensorFlow to get started.

Continue reading “Neural Network Zaps You To Take Better Photographs”