Home Automation At A Glance Using AI Glasses

There was a time when you had to get up from the couch to change the channel on your TV. But then came the remote control, which saved us from having to move our legs. Later still we got electronic assistants from the likes of Amazon and Google, which let us command our home electronics with nothing more than our voice, so now we don’t even have to pick up the remote. Ushering in the next era of consumer gelification, [Nick Bild] has created ShAIdes: a pair of AI-enabled glasses that allow you to control devices by looking at them.

Of course on a more serious note, vision-based home automation could be a hugely beneficial assistive technology for those with limited mobility. Simply look at the device you want to control and wave in its direction, and the system knows which appliance to activate. In the video after the break, you can see [Nick] control lamps and his speakers with such ease that it almost looks like magic; a defining trait of any sufficiently advanced technology.

So how does it work? A Raspberry Pi camera module mounted to a pair of sunglasses captures video, which is sent to an NVIDIA Jetson Nano. There, two separate image-classification convolutional neural network (CNN) models run side by side: one identifies controllable objects in the background, while the other watches for hand gestures in the foreground. When both models report a match, the system fires off the appropriate signal to turn the device on or off. Between the Nano, the camera, and the battery pack that makes it all mobile, [Nick] says the hardware cost about $150 to put together.
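
To make that concrete, here’s a minimal sketch (not [Nick]’s actual code) of the two-model idea: a frame only triggers an action when the object classifier and the gesture classifier both fire. The model files and class list here are hypothetical stand-ins.

```python
# Hedged sketch of the dual-CNN gate: act only when an appliance AND the
# gesture are seen in the same frame. Model files are hypothetical.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

object_model = load_model("appliances.h5")   # e.g. classes: lamp, speaker, none
gesture_model = load_model("gesture.h5")     # single sigmoid output

OBJECT_CLASSES = ["lamp", "speaker", "none"]

def classify(model, frame, size=(224, 224)):
    """Resize a BGR frame and return the model's output probabilities."""
    x = cv2.resize(frame, size).astype(np.float32) / 255.0
    return model.predict(x[np.newaxis, ...], verbose=0)[0]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    target = OBJECT_CLASSES[int(np.argmax(classify(object_model, frame)))]
    # Sigmoid score; which class counts as "positive" depends on your labels.
    gesture_seen = classify(gesture_model, frame)[0] > 0.9

    # Only act when a known device and the gesture appear together.
    if target != "none" and gesture_seen:
        print(f"toggle {target}")  # stand-in for the real on/off signal
```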

But really, the hardware is only one small piece of the puzzle in a project like this, which is why we’re happy to see [Nick] go into such detail about how the software functions and, crucially, how he trained the system. The gesture recognition subroutine alone went through nearly 20,000 images so it could reliably detect an arm extended into the frame.
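
For a flavor of what that training step involves, here’s a rough Keras sketch of fitting a small binary gesture classifier on a folder of labeled frames. The directory layout, architecture, and hyperparameters are our assumptions, not the project’s.

```python
# Hypothetical training sketch: fit a tiny CNN on gesture / no-gesture frames
# sorted into subfolders. The real project's data and model will differ.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "gesture_data/",            # e.g. gesture_data/gesture, gesture_data/none
    image_size=(224, 224),
    batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: gesture or not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("gesture.h5")
```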

If controlling your home with a glance and wave isn’t quite mystical enough, you could always add an infrared wand to the mix for that authentic Harry Potter experience.

Continue reading “Home Automation At A Glance Using AI Glasses”

Build Your Own Selfie Drone With Computer Vision

In late 2013 and early 2014, in the heady days of the drone revolution, there was one killer app — the selfie drone. Selfie sticks themselves had already become a joke, but a selfie drone injected a breath of fresh air into the world of tech. Fidget spinners had yet to be invented, so this was really all we had. It wasn’t quite time for the age of the selfie drone, though, and the Lily camera drone — in spite of $40 million in preorders — became the subject of lawsuits rather than fines from the FAA.

Technology marches ever forward, and now you can build your own selfie drone. That’s exactly what [geaxgx] did, although this build uses an off-the-shelf drone with custom software instead of building everything from scratch.

For hardware, this is a Ryze Tello, a small, $100 quadcopter with a front-facing camera. With the right libraries, you can stream images to a computer and send flight commands back to the drone. Yes, all the processing for the selfie drone happens on a non-flying computer, because computer vision chews through both processing power and battery life.
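
If you want to poke at a Tello yourself, the community djitellopy wrapper (our assumption here — it’s one of several Python libraries that speak the Tello SDK, not necessarily what [geaxgx] used) boils the stream-and-command loop down to a few lines:

```python
# Quick sketch using the djitellopy wrapper (an assumption; the original
# project may use a different Tello library).
from djitellopy import Tello

tello = Tello()
tello.connect()                     # join the drone's Wi-Fi SDK interface
print("battery:", tello.get_battery(), "%")

tello.streamon()                    # start the video feed
reader = tello.get_frame_read()

tello.takeoff()
frame = reader.frame                # latest BGR frame as a numpy array
# ...run your vision pipeline on `frame`, then steer with rc commands...
tello.send_rc_control(0, 0, 0, 30)  # (left/right, fwd/back, up/down, yaw)
tello.land()
```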

The software comes from CMU’s OpenPose library, a real-time solution for detecting a body, face, or hands. With this, [geaxgx] was able to hover the drone and keep his face in the middle of the camera’s frame. While the drone never actually translates through space — it just hovers and rotates left and right — it is effectively a flying selfie stick without the stick. You can check out the video below and find all the code on [geaxgx]’s GitHub.
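
The control idea itself is simple enough to sketch: measure how far the face sits from the center of the frame, then yaw toward it. In the hedged example below, OpenCV’s stock Haar cascade stands in for OpenPose to keep things self-contained, and djitellopy again stands in for whatever Tello library the project actually uses.

```python
# Face-centering sketch: a simple proportional yaw controller. OpenCV's
# Haar cascade is a stand-in for OpenPose here.
import time
import cv2
from djitellopy import Tello

face_finder = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

tello = Tello()
tello.connect()
tello.streamon()
reader = tello.get_frame_read()
tello.takeoff()

try:
    while True:
        frame = reader.frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_finder.detectMultiScale(gray, 1.2, 5)
        yaw = 0
        if len(faces) > 0:
            x, y, w, h = faces[0]
            # Signed error: face center offset from frame center, in pixels.
            error = (x + w / 2) - frame.shape[1] / 2
            yaw = int(max(-50, min(50, 0.25 * error)))  # clamp the response
        tello.send_rc_control(0, 0, 0, yaw)
        time.sleep(0.05)
finally:
    tello.land()
```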

Continue reading “Build Your Own Selfie Drone With Computer Vision”

Finding Plastic Spaghetti With Machine Learning

Among 3D printer owners, “spaghetti” is the common term for the tangled mess of stringy plastic that’s often the result of a failed print. Fear of their print bed turning into a hot plate of PLA spaghetti is enough to keep many users from leaving their machines operating overnight or while they’re out of the house. Accordingly, we’ve seen a number of methods that allow the human operator to watch their print remotely to make sure everything is progressing smoothly.

But unless you plan on keeping your eyes on your phone the entire time you’re out of the house, there’s still a chance some PETG pasta might sneak its way out. Enter the Spaghetti Detective, an open source project that lets machine learning take over when you can’t sit watching the printer all day. Their system plugs into OctoPrint to monitor your print in real time and pause it if things start looking particularly stringy. The concept is still under development, but judging by the gallery of results submitted by users, the system seems to have a knack for identifying non-edible noodles.

Once the software comes out of beta, it looks like the team is going to try to monetize it by providing hosting and monitoring services for a monthly fee, but as it’s an open source project, you’re also able to run the software on your own machine. The documentation notes, though, that the lowly Raspberry Pi doesn’t have quite what it takes to handle the image recognition routines, so you’ll need a proper computer if you want to self-host the service. Could be a good use for that old laptop you’ve got kicking around the lab.

As demonstrated in the video after the break, the system’s “spaghetti confidence” is shown with a simple-to-understand gauge: green is a good-looking print, and red means the detective is getting a sniff of the stringy stuff. If your print dips into the red too much, OctoPrint is commanded to pause the print. The user can then look at the last image from the printer and decide either to cancel the print entirely or to resume it if the Spaghetti Detective got a little overzealous.
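
The pause mechanism is the easy part, since OctoPrint exposes a REST API for exactly this. Here’s a hedged sketch of the pattern: a stand-in classifier scores the webcam feed, and a run of “red” readings triggers a pause. Only the HTTP call reflects OctoPrint’s documented job API; the classifier is a placeholder, not the Spaghetti Detective’s actual model.

```python
# Pause-on-spaghetti pattern. The POST matches OctoPrint's documented job
# API; spaghetti_confidence() is a placeholder stub.
import time
import requests

OCTOPRINT = "http://octopi.local"
API_KEY = "YOUR_OCTOPRINT_API_KEY"  # generated in OctoPrint's settings

def spaghetti_confidence() -> float:
    """Stand-in: grab a webcam frame and score it with a failure model."""
    return 0.0  # replace with a real classifier

def pause_print():
    # OctoPrint's job API: POST /api/job with a pause command.
    requests.post(f"{OCTOPRINT}/api/job",
                  json={"command": "pause", "action": "pause"},
                  headers={"X-Api-Key": API_KEY},
                  timeout=5)

red_streak = 0
while True:
    red_streak = red_streak + 1 if spaghetti_confidence() > 0.8 else 0
    if red_streak >= 3:   # several consecutive red readings, not one blip
        pause_print()
        break
    time.sleep(10)
```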

Frankly, it’s a brilliant idea and we’re very interested to see where it goes from here. Assuming you’ve got OctoPrint controlling your 3D printer, there are already some very clever monitoring systems out there, but since spaghetti isn’t the only thing a rogue 3D printer can cook up, having an extra line of defense sounds like a good idea to us.

Continue reading “Finding Plastic Spaghetti With Machine Learning”

Pixy2 Is Super Vision For Arduino Or Raspberry Pi

A Raspberry Pi with a camera is nothing new. But the Pixy2 camera can interface with a variety of microcontrollers and has enough smarts to detect objects, follow lines, or even read barcodes without help from the host computer. [DroneBot Workshop] has a review of the device and he’s very enthused about the camera. You can see the video below.

When you watch the video, you might wonder how much this camera will cost. Turns out it is about $60, which isn’t cheap, but for the capabilities it offers it isn’t that much, either. The camera can detect lines, intersections, and barcodes, plus any objects you want to train it to recognize. The camera also sports its own light source and a dual servo drive meant for a pan-and-tilt mounting arrangement.
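
On the Raspberry Pi side, the Pixy2 ships with the libpixyusb2 driver and a set of Python demos. The sketch below follows the pattern of those demos for reading color-trained blocks, but treat the exact names as approximate and check the current documentation before relying on them.

```python
# Rough sketch modeled on the Python demos shipped with libpixyusb2; the
# API names here are approximate.
import pixy
from pixy import BlockArray

pixy.init()
pixy.change_prog("color_connected_components")  # blob-tracking mode

blocks = BlockArray(100)          # buffer for up to 100 detected blocks
while True:
    count = pixy.ccc_get_blocks(100, blocks)
    for i in range(count):
        b = blocks[i]
        # Each block carries its trained color signature plus position/size.
        print(f"sig={b.m_signature} x={b.m_x} y={b.m_y} "
              f"w={b.m_width} h={b.m_height}")
```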

Continue reading “Pixy2 Is Super Vision For Arduino Or Raspberry Pi”

The Tiniest Computer Vision Platform Just Got Better

The future, if you believe the ad copy, is a world filled with cameras backed by intelligence, neural nets, and computer vision. Despite the hype, this may actually turn out to be true: drones are getting intelligent cameras, self-driving cars are loaded with them, and in any event it makes a great toy.

That’s what makes this Kickstarter so exciting. It’s a camera module, yes, but there are also some smarts behind it. The OpenMV is a MicroPython-powered machine vision camera that gives your project the power of computer vision without the need to haul a laptop or GPU along for the ride.

The OpenMV actually got its start as a Hackaday Prize entry focused on one simple idea. There are cheap camera modules everywhere, so why not attach a processor to that camera that allows for on-board image processing? The first version of the OpenMV could do face detection at 25 fps, color detection at more than 30 fps, and became the basis for hundreds of different robots loaded up with computer vision.
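
That first-generation trick still makes a nice hello-world. The loop below is adapted from the face-detection example that ships with the OpenMV IDE, and it runs entirely on the camera’s own microcontroller:

```python
# OpenMV MicroPython face detection, adapted from the IDE's stock example.
import sensor
import image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # Haar cascades want grayscale
sensor.set_framesize(sensor.HQVGA)
sensor.skip_frames(time=2000)            # let auto-exposure settle

face_cascade = image.HaarCascade("frontalface", stages=25)

while True:
    img = sensor.snapshot()
    for r in img.find_features(face_cascade, threshold=0.75,
                               scale_factor=1.25):
        img.draw_rectangle(r)            # box each detected face
```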

This crowdfunding campaign is financing the latest version of the OpenMV camera, and there are a lot of changes. The camera module is now removable, meaning the OpenMV now supports global shutter and thermal vision in addition to the usual color/rolling shutter sensor. Since this camera has a faster microcontroller, this latest version can support multi-blob color tracking at 80 fps. With the addition of a FLIR Lepton sensor, this camera does thermal sensing, and thanks to a new library, the OpenMV also does number detection with the help of neural networks.
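
Color blob tracking looks much the same in the MicroPython API. Here’s a short sketch; the LAB threshold tuple is a generic red-ish value you’d tune for your own target:

```python
# Multi-blob color tracking on the OpenMV: find everything matching a LAB
# color threshold and mark it on the frame.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

RED = (30, 100, 15, 127, 15, 127)   # (L_min, L_max, A_min, A_max, B_min, B_max)

while True:
    img = sensor.snapshot()
    for blob in img.find_blobs([RED], pixels_threshold=100,
                               area_threshold=100, merge=True):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
```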

We’ve seen a lot of builds using the OpenMV camera, and it’s getting to the point where you can’t compete in an autonomous car race without this hardware. This new version has all the bells and whistles, making it one of the best ways we’ve seen to add computer vision to any hardware project.

Object Detection, With TensorFlow

Getting computers to recognize objects has been a historically difficult problem in computer science, but with the rise of machine learning it is becoming easier to solve. One of the tools that can be put to work in object recognition is an open source library called TensorFlow, which [Evan] aka [Edje Electronics] has put to work for exactly this purpose.

His object recognition software runs on a Raspberry Pi equipped with a webcam, and also makes use of OpenCV. [Evan] notes that this opens up a lot of creative low-cost detection applications for the Pi, such as setting up a camera that detects when a pet is waiting at the door to be let inside or outside, counting the number of bees entering and exiting a beehive, or monitoring parking spaces at an office.
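
If you’d rather skip a full Object Detection API install, a pretrained SSD MobileNet from TensorFlow Hub gets you to a similar webcam demo with less ceremony. Take this as a sketch of the idea rather than [Evan]’s exact setup:

```python
# Webcam object detection with a pretrained SSD MobileNet from TF Hub.
# A simpler route than the tutorial's original setup, but the same idea.
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = detector(tf.convert_to_tensor(rgb[np.newaxis, ...],
                                           dtype=tf.uint8))
    boxes = result["detection_boxes"][0].numpy()      # normalized y1,x1,y2,x2
    scores = result["detection_scores"][0].numpy()
    classes = result["detection_classes"][0].numpy()  # COCO class ids
    h, w = frame.shape[:2]
    for box, score, cls in zip(boxes, scores, classes):
        if score < 0.5:
            continue
        y1, x1, y2, x2 = box
        cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                      (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
        # COCO id 17 is "cat", 18 is "dog": hook a pet-door alert here.
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```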

This project uses a number of other toolkits as well, including Protobuf. It also makes extensive use of Python scripts, but if you’re comfortable with that and you have an application for computer vision, [Evan]’s tutorial will get you started.

Continue reading “Object Detection, With TensorFlow”

Was The Self Driving Car Invented In The 1980s?

The news is full of self-driving cars, and while there is some bad news, most of it is pretty positive. It seems a foregone conclusion that it is just a matter of time before calling for an Uber doesn’t involve another person. But according to a recent article, [Ernst Dickmanns] — a German aerospace engineer — built three autonomous vehicles starting in 1986, culminating in on-the-road demonstrations for Daimler in 1994.

It is hard to imagine what had to take place to get a self-driving car in 1986. The article asserts that you need computer analysis of video at 10 frames a second minimum. In the 1980s, processing a single frame in 10 minutes was considered an accomplishment. [Dickmanns’] vehicles borrowed tricks from how humans drive: they focused on a small area at any one moment and tried to ignore things that were not relevant.

Continue reading “Was The Self Driving Car Invented In The 1980s?”