The Easiest Thermal Camera Build You’ll Ever See

Thermal cameras are one of those tools we all want but can't quite justify buying. You don't really know what you would do with one, and when even the cheap ones cost a couple hundred dollars, they're a bit outside impulse buy territory. So you just keep waiting and hoping that eventually they'll drop to a price where you can actually own one yourself.

Well, today might be the day you've been waiting for. While it might not be the prettiest build, we think you'll agree it can't get much easier than what [vvkuryshev] has put together. His build only has two components: a Raspberry Pi and a thermal camera module he picked up online for about $80 USD. There isn't even any wiring involved; the camera fits right onto the Pi's GPIO header.

Of course, you probably wouldn’t be seeing this on Hackaday if all he had to do was just buy a module and solder it to the Pi’s header. As with most cheap imported gadgets, the GY-MCU90640 module that [vvkuryshev] bought came with some crusty Windows software which wasn’t going to do him much good on the Raspberry Pi. But after going back and forth a bit with the seller, he was able to get some documentation for the device that put him on the right track to writing a Python script which got it working under Linux.

The surprisingly simple Python script reads a frame from the camera four times a second over serial and runs it through OpenCV. It even adds some useful data, like the minimum and maximum temperatures in the frame, to the top of the image. Normally the script outputs to the Pi's primary display, but if you want to use it remotely, [vvkuryshev] says he's had pretty good luck running it over VNC. In fact, he says that with a VNC application on your phone you could even use this setup on the go, though it's a bit awkward for that in its current incarnation.
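To give a flavor of what such a script looks like, here's a minimal sketch of the read-and-display loop: grab a frame over serial, reshape it into the MLX90640's 32×24 grid, false-color it with OpenCV, and stamp the min/max temperatures onto the image. The serial port, baud rate, raw frame layout, and the 1/100 °C scaling are assumptions about a typical GY-MCU90640-style module, not [vvkuryshev]'s exact protocol.

```python
# Hedged sketch: read thermal frames from a GY-MCU90640-style module over
# serial and display them with OpenCV. Frame layout, port, baud rate, and
# temperature scaling are assumptions, not the reverse-engineered protocol.
import cv2
import numpy as np
import serial

PIXELS = 32 * 24           # MLX90640 resolution
FRAME_BYTES = PIXELS * 2   # assumed: one little-endian 16-bit value per pixel

port = serial.Serial("/dev/serial0", baudrate=115200, timeout=1)

while True:
    raw = port.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:
        continue  # incomplete frame, try again

    # Interpret the payload as temperatures in 1/100 degrees C (assumed).
    temps = np.frombuffer(raw, dtype="<u2").astype(np.float32) / 100.0
    frame = temps.reshape(24, 32)
    t_min, t_max = frame.min(), frame.max()

    # Normalize to 8-bit, upscale, and apply a false-color map.
    norm = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.applyColorMap(
        cv2.resize(norm, (320, 240), interpolation=cv2.INTER_CUBIC),
        cv2.COLORMAP_JET)
    cv2.putText(img, f"min {t_min:.1f}C  max {t_max:.1f}C", (5, 15),
                cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 255, 255), 1)

    cv2.imshow("thermal", img)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```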

This isn’t the first DIY thermal camera build we’ve seen, and it isn’t even the first one we’ve seen that leveraged a commercially available imaging module. But short of buying a turn-key camera, we don’t see how it could get any easier to add heat vision to your bag of tricks.

This Light-Up Sorter Is A Bright Idea

Sorting out a mountain of screws and other workbench detritus by hand is a task that only appeals to a select few of us. [AdrienR] is not one of those people. He believes the job is better suited to a robot, so he built an intelligent and good-looking machine that does just that.

[Adrien]’s sorting bot is capable of organizing a hodgepodge of parts quickly and effectively. He simply scatters the parts on the light box work surface, illuminates it, and takes a picture with a downward-facing webcam. An algorithm finds the parts and their positions using OpenCV image processing, then sends the coordinates to the arm so it can pick and place the parts into laser-cut boxes using a home-brew electromagnet.
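The vision step lends itself to a very short OpenCV routine: with back-lighting, the parts show up as dark blobs, so a threshold plus a contour pass gives each part's centroid. The sketch below illustrates the idea; the pixel-to-arm-coordinate mapping is a placeholder linear transform, not [AdrienR]'s actual calibration.

```python
# Hedged sketch of the light-box vision step (OpenCV 4.x API).
import cv2

def find_parts(image_path, px_to_mm=0.5, origin=(20.0, 20.0)):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Parts are dark against the illuminated light box: invert the threshold.
    _, mask = cv2.threshold(img, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    parts = []
    for c in contours:
        if cv2.contourArea(c) < 50:   # ignore dust and noise
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Convert the pixel centroid to arm workspace coordinates
        # (assumed simple linear scaling for illustration).
        parts.append((origin[0] + cx * px_to_mm,
                      origin[1] + cy * px_to_mm,
                      cv2.contourArea(c)))
    return parts

print(find_parts("lightbox.jpg"))
```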

[Adrien] calls this a work in progress. He plans to control it with a Raspberry Pi so it can be a standalone unit, and will probably move the parts boxes to the outside curve. Drop yourself past the break to see it sort.

If delta robots are more your sort, this one has balls. Colored balls.

Continue reading “This Light-Up Sorter Is A Bright Idea”

High-Style Ball Balancing Platform

If IKEA made ball-balancing PID robots, they’d probably look like this one.

This [Johan Link] build isn’t just about style. A look under the hood reveals not the standard, off-the-shelf microcontroller development board you might expect. Instead, [Johan] designed and built his own board with an ATmega32 to run the three servos that control the platform. The entire apparatus is made from a dozen or so 3D-printed parts that interlock to form the base, the platform, and the housing for the USB webcam that’s perched on an aluminum tube. From that vantage point, the camera’s images are analyzed with OpenCV and the center of the ball is located. A PID loop controls the three servos to center the ball on the platform, or razzle-dazzle it a little by moving the ball in a controlled circle. It’s quite a build, and the video below shows it in action.
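The two halves of a build like this boil down to ball localization and a PID loop. Here's a minimal sketch of both, assuming a color-masked ball and hand-picked gains; in [Johan]'s build the servo side runs on his custom ATmega board, so this sketch just prints the tilt commands instead of driving hardware.

```python
# Hedged sketch: locate a ball with OpenCV and run a PID loop on its offset
# from the platform center. HSV range and gains are illustrative guesses.
import cv2

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid_x, pid_y = PID(0.5, 0.01, 0.2), PID(0.5, 0.01, 0.2)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 120, 120), (25, 255, 255))  # orange-ish ball
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        h, w = frame.shape[:2]
        # Error is the ball's offset from the platform center, in pixels.
        tilt_x = pid_x.update(cx - w / 2, dt=1 / 30)
        tilt_y = pid_y.update(cy - h / 2, dt=1 / 30)
        print(f"tilt command: x={tilt_x:+.1f}  y={tilt_y:+.1f}")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```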

We’ve seen a few balancing platforms before, but few with such style. This Stewart platform comes close, and this juggling platform gets extra points for closing the control loop with audio feedback. And for juggling, of course.

Continue reading “High-Style Ball Balancing Platform”

Robot Can’t Take Its Eyes Off The Bottle

Robots, as we currently understand them, tend to run on electricity. Only in the fantastical world of Futurama do robots seek out alcohol as both a source of fuel and recreation. That is, until [Les Wright] and his beer-seeking robot came along. (YouTube, video after the break.)

A Raspberry Pi 3 provides the brains, with an Intel Neural Compute Stick plugged in as an accelerator for neural network tasks. This hardware, combined with OpenCV's image detection software, enables the tracked robot to identify objects and track their position accordingly.

That a beer bottle was chosen is merely an amusing aside – the software can readily identify many different object categories. [Les] has also implemented a search feature, in which the robot will scan the room until a target bottle is identified. The required software and scripts are available on GitHub for your perusal.
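For a sense of how such a detection loop hangs together, here's a sketch of MobileNet-SSD running through OpenCV's DNN module and offloaded to the Compute Stick via the Inference Engine backend. The model file names, the VOC class list (where "bottle" is class 5), and the steering print-out are assumptions for illustration; [Les]'s actual scripts are the ones on his GitHub.

```python
# Hedged sketch: bottle detection with MobileNet-SSD on the NCS via OpenCV.
import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # run on the Compute Stick

BOTTLE_CLASS = 5  # index of "bottle" in the VOC class list (assumed model)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        class_id = int(detections[0, 0, i, 1])
        if class_id == BOTTLE_CLASS and confidence > 0.5:
            # Bounding box center tells the tracked base which way to turn.
            x1, x2 = detections[0, 0, i, 3] * w, detections[0, 0, i, 5] * w
            offset = (x1 + x2) / 2 - w / 2
            print("bottle found, steer", "right" if offset > 0 else "left")
```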

Over the past few years, we’ve seen an explosion in accelerator hardware for deep learning and neural network computation. This is, of course, particularly useful for robotics applications where a link to cloud services isn’t practical. We look forward to seeing further development in this field – particularly once the robots are able to open the fridge, identify the beer, and deliver it to the couch in one fell swoop. The future will be glorious!


Continue reading “Robot Can’t Take Its Eyes Off The Bottle”

Project Shows How To Use Machine Learning To Detect Pedestrians

Most people are familiar with the idea that machine learning can be used to detect things like objects or people, but anyone who's not clear on how that process actually works should check out [Kurokesu]'s example project for detecting pedestrians. It goes into detail on exactly what software is used, how it is configured, and how to train it with a dataset.

The application uses a USB camera, and the back-end work is done with Darknet, an open source framework for neural networks. Running on that framework is the YOLO (You Only Look Once) real-time object detection system. To get useful results, the system must be trained on large amounts of sample data. [Kurokesu] explains that while pre-trained networks can be used, it is still necessary to fine-tune the system by adding a dataset that more closely models the intended application. Training is itself a bit of a balancing act. A system that has been overly trained on a model dataset (or trained on too small a dataset) will suffer from overfitting, a condition in which the system ends up being too picky and unable to usefully generalize. In terms of pedestrian detection, this results in false negatives — pedestrians that don't get flagged because the system has too strict an idea of what a pedestrian should look like.
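Once a network is trained, running it on live video is the easy part. Below is a minimal sketch of inference with a Darknet-trained YOLO model through OpenCV's DNN module, keeping only "person" detections. The config, weights, and names files here are the stock YOLOv3 release files and are assumptions for illustration; [Kurokesu]'s project fine-tunes its own network on a pedestrian dataset.

```python
# Hedged sketch: person detection with a Darknet/YOLO model via OpenCV DNN.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    classes = [line.strip() for line in f]
PERSON = classes.index("person")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    for out in outputs:
        for det in out:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if class_id == PERSON and scores[class_id] > 0.5:
                # YOLO outputs center x/y and width/height as fractions.
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                cv2.rectangle(frame,
                              (int(cx - bw / 2), int(cy - bh / 2)),
                              (int(cx + bw / 2), int(cy + bh / 2)),
                              (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```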

[Kurokesu]’s walkthrough on pedestrian detection is great, but for those interested in taking a step further back and rolling their own projects, this fork of Darknet contains YOLO for Linux and Windows and includes practical notes and guides on installing, using, and training from a more general perspective. Interested in learning more about machine learning basics? Don’t forget Google has a free online crash course to get you up to speed.

Your Face Is Going Places You May Not Like

Many Chinese cities, among them Ningbo, are investing heavily in AI and facial recognition technology. Uses range from border control — at Shanghai’s international airport and the border crossing with Macau — to the trivial: shaming jaywalkers.

In Ningbo, cameras oversee the intersections and use facial recognition to shame offenders by putting their faces up on large displays for all to see, and presumably mutter “tsk-tsk”. So it shocked Dong Mingzhu, the chairwoman of China’s largest air conditioner firm, to see her own face on the wall of shame when she’d done nothing wrong. The AI had picked up her face from an ad on a passing bus.

False positives in detecting jaywalkers are mostly harmless and maybe even amusing, for now. But the city of Shenzhen has a deal in the works with cellphone service providers to identify the offenders personally and send them a text message, and eventually a fine, directly to their cell phone. One can imagine this getting Orwellian pretty fast.

Facial recognition has been explored for decades, and it is now reaching a tipping point where the impacts of the technology are starting to have real consequences for people, and not just in the ways dystopian sci-fi has portrayed. Whether it’s racist, inaccurate, or easily spoofed, getting computers to pick out faces correctly has been fraught with problems from the beginning. With more and more companies and governments using it, and having increasing impact on the public, the stakes are getting higher.

Continue reading “Your Face Is Going Places You May Not Like”

Turning LEGO Blocks Into Music With OpenCV

We’re not sure what it is, but LEGO and music seem to go together like milk and cookies when it comes to DIY musical projects. [Paul Wallace]’s Lego Music project is a sequencer that uses the colorful plastic pieces to build and control sound, but there’s a twist. The blocks aren’t snapped onto anything; the system is entirely visual. A computer running OpenCV uses a webcam to watch the arrangement of blocks and overlays them onto a virtual grid, where the positions of the pieces are used as inputs for the sequencer. The Y axis represents pitch, and the X axis represents time.
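The vision half of a setup like this can be surprisingly compact: mask each brick color in HSV, snap the centroids to a virtual grid, and read column as time step and row as pitch. The sketch below shows the idea; the color ranges, grid size, and MIDI note mapping are illustrative assumptions rather than [Paul Wallace]'s actual values.

```python
# Hedged sketch: map colored bricks seen by a webcam onto a sequencer grid.
import cv2

GRID_COLS, GRID_ROWS = 16, 8              # sequencer steps x pitches
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, one MIDI note per row

def frame_to_steps(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # One mask per brick color (ranges are rough guesses).
    masks = {
        "red":    cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)),
        "yellow": cv2.inRange(hsv, (20, 120, 80), (35, 255, 255)),
        "blue":   cv2.inRange(hsv, (100, 120, 80), (130, 255, 255)),
    }
    h, w = frame.shape[:2]
    steps = []
    for color, mask in masks.items():
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 100:
                continue
            x, y, bw, bh = cv2.boundingRect(c)
            col = min(GRID_COLS - 1, int((x + bw / 2) / w * GRID_COLS))  # X -> time
            row = min(GRID_ROWS - 1, int((y + bh / 2) / h * GRID_ROWS))  # Y -> pitch
            steps.append((col, SCALE[GRID_ROWS - 1 - row], color))
    return steps

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(frame_to_steps(frame))
```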

Embedded below are two videos. The first demonstrates how the music changes based on which blocks are placed, and where. The second is a view from the software’s perspective, showing how the vision system picks the colored blocks out of the video and uses their positions to change values that affect the composition as a whole.

Continue reading “Turning LEGO Blocks Into Music With OpenCV”