Visually impaired people know something the rest of us often overlook: we don't actually see with our eyes, but with our brains. For his Hackaday Prize entry, [Ray Lynch] is building a tongue vision system that will help blind people see through one of the human brain's auxiliary ports: the taste buds.
Although it might be more accurate to say that this chair dances because no one is watching, the result is still a clever project that [Igor], a maker-in-residence at the National Museum of Decorative Arts and Design in Norway, created recently. Blurring the lines between art, hack, and the ghosts from Super Mario, this chair uses an impressive array of features to “dance”, but only if no one is looking at it.
In order to get the chair to appear to dance, [Igor] added servo motors to all four legs to allow them to bend. A small non-moving dowel was placed on the inside of each leg to keep the chair from falling over during all of the action. It's small enough that it's not immediately noticeable from a distance, which helps maintain the illusion of a dancing chair.
From there, a Raspberry Pi 3 serves as the control center for the chair. It's programmed in Python, using OpenCV for face detection and pigpio to drive the leg servos. There's also a web interface for watching the camera's output, checking the face detection in action, and debugging the program. [Igor]'s chair can process up to 3 frames per second at 800×600 pixels.
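The write-up doesn't include the code itself, but the core loop is easy to picture: grab a frame, look for faces, and only move the legs when nobody is found. Here's a minimal sketch of that idea, assuming a stock Haar cascade for detection and made-up GPIO numbers for the servos:

```python
import time

import cv2
import pigpio

# Hypothetical GPIO assignments for the four leg servos -- the
# write-up doesn't list the actual pins used.
LEG_SERVO_PINS = [17, 18, 22, 23]

pi = pigpio.pi()
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

wiggle = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    if len(faces) == 0:
        # Nobody watching: flap the legs between two pulse widths.
        wiggle = not wiggle
        pulse = 1200 if wiggle else 1800
        for pin in LEG_SERVO_PINS:
            pi.set_servo_pulsewidth(pin, pulse)
    else:
        # Freeze the instant a face appears.
        for pin in LEG_SERVO_PINS:
            pi.set_servo_pulsewidth(pin, 1500)
    time.sleep(0.25)
```

The "dance" here is just a two-position wiggle for illustration; the real choreography, pins, and pulse widths would of course be tuned to the chair.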
Be sure to check out the video after the break to see the chair in action. It’s an interesting piece of art, and if those dowels can support the weight of a person it would be a great addition to any home as well. If it’s not enough chair for you, though, there are some other more dangerous options out there.
Barring the RepRap project, we usually see 3D printers make either replacement parts or small assemblies, not an entire finished product. [Amos] is the exception to this rule with his entirely 3D-printed camera. Everything in this camera is 3D printed, from the shutter to the lightproof box to the lens itself. It’s an amazing piece of engineering, and a testament to how far 3D printing has come in just a few short years.
35mm film is the most common film by far, and the only one that’s still easy to get and have developed at a reasonable price. This 3D-printed camera is based on that standard, making most of the guts extremely similar to the millions of film cameras that have been produced over the years. There’s a film cartridge, a few gears, a film takeup spool, and a lightproof box. So far, this really isn’t a challenge for any 3D printer.
The fun starts with the lens. We’ve seen 3D printers used for lens making before, starting with a 3D print used to create a silicone mold where a lens is cast in clear acrylic, 3D printed tools used to grind glass, and an experiment from FormLabs to 3D print a lens. All of these techniques require some surface finishing, and [Amos]’ lens is no different. He printed a lens on his Form 2 printer, and started polishing with 400 grit sandpaper. After working up to 12000 grit, the image was still a bit blurry, revealing microscopic grooves that wouldn’t polish out. This led him to build a tool to mechanically polish the lens. This tool was, of course, 3D printed. After polishing, the lens was ‘dip polished’ in a vat of uncured resin.
The shutter was the next challenge, and for this [Amos] couldn't rely on the usual mechanisms found in film cameras. He did find a shutter mechanism from 1885 that didn't take up a lot of depth, and after modeling the movement in Blender, designed a reasonable shutter system.
Building an entire camera in a 3D printer is a challenge, but how are the pictures? Not bad, actually. There’s a weird vignetting, and everything’s a little bit blurry. It’s hip, trendy, and lomo, and basically amazing that it works at all.
We are all (hopefully) aware that we can be watched while we're online. Our clicks are all trackable to some extent, whether by a government or an advertiser. What isn't as obvious, though, is that it's just as easy to track our movements in real life. [Saulius] was able to prove this concept by using optical character recognition to track the license plate numbers of passing cars from half a kilometer away.
To achieve such long distances (and still have clear, reliable data to work with), [Saulius] paired a 70-300 mm telephoto lens with a compact USB camera. All of the gear was set up on an overpass and the camera was aimed at cars coming around a corner of a highway. As soon as a car enters the frame, the USB camera feeds the video to a laptop running OpenALPR, which processes and records the license plate data.
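OpenALPR does the heavy lifting here. For anyone wanting to experiment, it ships with Python bindings; a minimal sketch of the recognition step might look like this, assuming the stock install paths and a frame already grabbed from the camera:

```python
from openalpr import Alpr

# Paths below are the default install locations; adjust for your system.
alpr = Alpr("us", "/etc/openalpr/openalpr.conf",
            "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

alpr.set_top_n(3)  # keep the three best guesses per plate

# "frame.jpg" stands in for a frame captured from the USB camera.
results = alpr.recognize_file("frame.jpg")
for plate in results["results"]:
    for candidate in plate["candidates"]:
        print(candidate["plate"], candidate["confidence"])

alpr.unload()
```

In a real pipeline you'd feed frames continuously rather than one file at a time, which is exactly where [Saulius]'s laptop starts to feel the strain.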
The build is pretty impressive, but [Saulius] notes that it isn’t the ideal setup for processing a large amount of information at once because of the demands made on the laptop. With this equipment, monitoring a parking lot would be a more feasible situation. Still, with even this level of capability available to anyone with the cash, imagine what someone could do with the resources of a national government. They might even have long distance laser night vision!
Ever since the Roomba was invented, humanity has been one step closer to a Jetsons-style future with robots performing all of our tedious tasks for us. The platform is so ubiquitous and popular with the hardware hacking community that almost anything that could be put on a Roomba has been done already, with one major exception: a Roomba with heat vision. Thanks to [marcelvarallo], though, there’s now a Roomba with almost all of the capabilities of the Predator.
The Roomba isn’t just sporting an infrared camera, though. This Roomba comes fully equipped with a Raspberry Pi for wireless connectivity, audio in and out, video streaming from a webcam (and the FLiR infrared camera), and control over the motors. Everything is wired to the internal battery, which allows for automatic recharging, but the impressive part of this build is that it’s all done in a non-destructive way so that the Roomba can be reverted back to a normal vacuum cleaner if the need arises.
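The write-up doesn't spell out how the motors are commanded, but the usual non-destructive route is the serial Open Interface that iRobot exposes on the Roomba's DIN port. A rough sketch of driving it from the Pi with pyserial, with the port and baud rate as assumptions (newer models talk at 115200, older ones at 57600):

```python
import struct
import time

import serial

# Roomba Open Interface opcodes, from iRobot's OI spec.
START, SAFE, DRIVE = 128, 131, 137
STRAIGHT = -32768  # 0x8000, the OI's special "drive straight" radius

# Port and baud rate are assumptions for a USB-serial adapter.
ser = serial.Serial("/dev/ttyUSB0", 115200)

def drive(velocity_mm_s, radius_mm):
    """Send an OI Drive command: signed 16-bit velocity and radius."""
    ser.write(bytes([DRIVE]) + struct.pack(">hh", velocity_mm_s, radius_mm))

ser.write(bytes([START, SAFE]))  # wake the OI and allow motor control
time.sleep(0.1)

drive(200, STRAIGHT)  # creep forward at 200 mm/s
time.sleep(2)
drive(0, 0)           # stop
```

Because everything goes over that connector, pulling the plug returns the robot to stock vacuuming duty.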
If sweeping at just the right time is the goal, the heat camera might be the key to the messy problem we discussed on Wednesday.
The only thing keeping this from hunting humans is the lack of some sort of weaponry. Perhaps this sentry gun, or maybe some exploding rope. And, if you don’t want your vacuum cleaner to turn into a weapon of mass destruction, maybe you could just turn yours into a DJ.
It’s no secret that a lot of time, money, and effort goes into photographing and filming all that delicious food you see in advertisements. Mashed potatoes in place of ice cream, carefully arranged ingredients on subs, and perfectly golden french fries are all things you’ve seen so often that they’re taken for granted. But, those are static shots – the food is almost always just sitting on a plate. At most, you might see a chef turning a steak or searing a fillet in a commercial for a restaurant. What takes real skill – both artistic and technical – is assembling a hamburger in mid-air and getting it all in stunning 4k video.
That’s what [Steve Giralt] set out to do, and to accomplish it he had to get creative. Each component of the hamburger was suspended by rubber bands, and a servo system, timed and controlled by an Arduino, cut each rubber band just before that ingredient entered the frame. There’s even a 3D-printed dual-catapult system to fling the condiments, causing them to collide at just the right spot to land on the burger.
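The build runs on an Arduino, but the sequencing logic is simple enough to sketch in a few lines. Here it is in Python with pigpio, to match the other examples in this roundup, with purely illustrative pins and offsets since the real timings were tuned per ingredient and aren't published:

```python
import time

import pigpio

pi = pigpio.pi()

# (GPIO pin, seconds after trigger) for each band-cutting servo.
# Pins and offsets are hypothetical stand-ins.
CUT_SCHEDULE = [(17, 0.00),   # bottom bun
                (18, 0.12),   # patty
                (22, 0.21),   # cheese
                (23, 0.30)]   # top bun

def cut(pin):
    """Swing a servo through its blade arc to sever one rubber band."""
    pi.set_servo_pulsewidth(pin, 2100)

t0 = time.monotonic()
for pin, offset in CUT_SCHEDULE:
    # Busy-wait on a monotonic clock keeps the release timing tight.
    while time.monotonic() - t0 < offset:
        pass
    cut(pin)
```

At 4k frame rates, a few milliseconds of jitter is the difference between a burger and a pile of ingredients, which is why the microcontroller does the timing rather than a human finger.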
We use touch screens all the time these days, and though we all know they support multiple touch events, it is easy for us to take them for granted and forget that they are a rather accomplished sensor array in their own right.
[Optismon] has long held an interest in capacitive touch screen sensors, and has recently turned his attention to the official Raspberry Pi 7-inch touchscreen display. He set out to read its raw capacitance values, and ended up with a fully functional 2D capacitive imaging device able to sense hidden nails and woodwork in his drywall.
Reading the capacitance values is not a job for the faint-hearted, though. The display’s I2C bus is handled by the Pi GPU rather than the processor, and reading it in software would require a change to the Pi’s infamous Broadcom binary blob. His solution, which he agrees is non-optimal, was to take another of the Pi’s I2C lines that he could talk to and connect it in parallel with the display line. As a result he can catch the readings from the screen’s sensors and, with a bit of scripting, make a 2D display on the screen. The outlines of hands and objects on his desk can clearly be seen when he places them on the screen, and when he runs the device over his wall it shows the position of the studding and nails behind the drywall.
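As a toy illustration of the idea, here's what pulling a grid of raw values over I2C and rendering it as a crude heatmap could look like in Python with smbus2. The controller address is the touch chip's usual 0x38, but the register layout below is purely a placeholder, and [Optismon]'s actual trick is to snoop the paralleled bus rather than issue his own reads:

```python
from smbus2 import SMBus

TOUCH_ADDR = 0x38     # typical address of the Pi touchscreen's controller
RAW_DATA_REG = 0x00   # placeholder: the raw-capacitance register map
                      # isn't publicly documented
ROWS, COLS = 12, 21   # assumed dimensions of the sensor grid

with SMBus(1) as bus:
    grid = []
    for row in range(ROWS):
        # SMBus block reads are capped at 32 bytes, so fetch row by row.
        data = bus.read_i2c_block_data(TOUCH_ADDR,
                                       RAW_DATA_REG + row * COLS, COLS)
        grid.append(data)

# ASCII heatmap: denser characters mean higher capacitance.
SHADES = " .:-=+*#%@"
for row in grid:
    print("".join(SHADES[v * len(SHADES) // 256] for v in row))
```

Sweep that grid over drywall and the studs and nails show up as regions of shifted capacitance, which is exactly what his video demonstrates.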
He’s posted his code in a GitHub repository, and put up a YouTube video of his capacitive imaging in action, which you can watch below the break.