RaspiReader, An Open Source Fingerprint Reader

In 2008, the then German interior minister [Wolfgang Schäuble] had his fingerprint reproduced by members of the German Chaos Computer Club, or CCC, and published on a piece of plastic film distributed with their magazine. [Schäuble] was a keen proponent of mass gathering of biometric information by the state, and his widely circulated fingerprint, lifted from a water glass, served as an effective demonstration against the supposed infallibility of biometric information.

Diagram showing the fingerprint reader’s operation.

It was reported at the time that the plastic [Schäuble] fingerprint could fool the commercial scanners of the day, including those used by the German passport agency, and the episode caused significant embarrassment to the politician. The ease with which a fingerprint could be “spoofed” threatened to completely undermine the plans for biometric data collection that were a significant policy feature for several European governments of the day.

It is interesting, then, to read a paper hosted on Cornell University’s arXiv, “RaspiReader: An Open Source Fingerprint Reader Facilitating Spoof Detection” (PDF downloadable from the linked page) by [Joshua J. Engelsma], [Kai Cao], and [Anil K. Jain], which investigates the mechanism of an optical fingerprint reader and presents a design, built around the ever-popular Raspberry Pi, that attempts to detect and defeat spoofing attempts. For the uninitiated it serves as a fascinating primer on FTIR (Frustrated Total Internal Reflection) photography of fingerprints, and describes the authors’ technique of combining an FTIR image with a conventional one to detect spoofing. Best of all, the whole thing is open source, meaning that you too can try building one yourself.
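The gist of the two-image idea is easy to caricature: extract colour statistics from a direct-view capture and an FTIR capture of the same finger, then let a classifier separate live fingers from spoofs. The sketch below is our own illustration, not the paper’s actual pipeline; the file names, 16-bin histograms, and SVM settings are all assumptions.

```python
# Illustration of the two-image spoof-detection idea: colour statistics
# from a direct-view image and an FTIR image of the same finger, fed to
# an SVM. File names, bin counts, and classifier settings are assumed.
import cv2
import numpy as np
from sklearn.svm import SVC

def colour_features(path):
    """Per-channel 16-bin histograms, normalised to sum to one."""
    img = cv2.imread(path)
    hists = [cv2.calcHist([img], [ch], None, [16], [0, 256]) for ch in range(3)]
    feat = np.concatenate(hists).flatten()
    return feat / feat.sum()

def capture_features(direct_path, ftir_path):
    """One feature vector per finger: direct-view plus FTIR statistics."""
    return np.concatenate([colour_features(direct_path),
                           colour_features(ftir_path)])

# Hypothetical training set: ten live captures followed by ten spoofs.
X = [capture_features(f"direct_{i}.png", f"ftir_{i}.png") for i in range(20)]
y = [1] * 10 + [0] * 10
clf = SVC(kernel="rbf").fit(X, y)

# Classify a fresh capture pair.
verdict = clf.predict([capture_features("direct.png", "ftir.png")])[0]
print("live finger" if verdict else "spoof")
```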

If [Cao] and [Jain] sound familiar, maybe it’s from their Samsung Galaxy fingerprint hack last year, so it’s neat to see them at work on the defense side. If you think that fingerprints make good passwords, you’ve got some background reading to do. If you just can’t get enough fingerprints, read [Al Williams]’ fundamentals of fingerprint scanning piece from earlier this year.

Via Hacker News.

Hackaday Prize Entry: Rangefinder + Camera = SmartZoom

The interesting thing about submissions for The Hackaday Prize is seeing unusual projects and concepts that might not otherwise pop up. [ken conrad] has a curious but thoughtfully designed idea for Raspberry Pi-based SmartZoom Imaging that uses a Pi Zero and camera plus some laser emitters to create a device with a very specific capability: a camera that constantly and dynamically resizes the image to make the subject appear consistently framed and sized, regardless of its distance from the lens. The idea brings together two separate functions: rangefinding and automated zooming and re-sampling of the camera image.

The Raspberry Pi uses the camera board plus some forward-pointing laser dots as a rangefinder; as long as at least two laser dots are visible on the subject, the distance between the device and the subject can be calculated. The Pi then uses the knowledge of how near or far the subject is to present a final image whose zoom level has been adjusted to match (and offset) the range of the subject from the camera, in effect canceling out the way an object appears larger or smaller based on distance.
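The rangefinding half is the classic parallel-axis laser trick: the laser sits a fixed distance from the lens, so the dot’s pixel offset from the image centre maps to an angle, and simple trigonometry gives the range. A minimal sketch follows, with calibration constants you would measure for your own rig rather than values from this project:

```python
# Parallel-axis laser rangefinding: the farther the target, the closer
# the laser dot drifts toward the image centre. Calibrate the constants
# against known distances; the numbers below are placeholders.
import cv2
import numpy as np

BASELINE_M = 0.06        # laser-to-lens separation (assumed)
RAD_PER_PIXEL = 0.0012   # angle subtended by one pixel row (calibrated)
RAD_OFFSET = 0.0005      # residual alignment error (calibrated)

def dot_range(frame):
    """Locate the brightest red blob and triangulate its distance."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 200), (10, 255, 255))  # bright red
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                          # no dot on the subject
    px_from_centre = abs(ys.mean() - frame.shape[0] / 2)
    theta = px_from_centre * RAD_PER_PIXEL + RAD_OFFSET
    return BASELINE_M / np.tan(theta)        # range along the optical axis
```

Once the range is known, the compensating zoom is simply proportional to it: a subject twice as far away gets twice the digital zoom, and its apparent size stays put.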

We’ve seen visible laser dots as the basis of rangefinding before, but never tied into a zoom function. Doubtlessly, [ken conrad] will update his project with some example applications, but in the meantime we’re left wondering: is there a concrete, practical use case for this unusual device? We have no idea, but we’d certainly have fun trying to find one.

Detect Cars Running Stop Signs (and Squirrels Running Across the Roof)

There’s a stop sign outside [Devin Gaffney]’s house that, apparently, no one actually stops at. To avoid the traffic and delays on a major thoroughfare, drivers cut down the road behind his house, and he noticed that many of them didn’t bother to stop at the sign. He had a Raspberry Pi and a camera, so he set them up to detect the violating cars.

His setup is pretty standard – a Raspberry Pi and camera pointed out the window at the intersection. He’s running OpenCV and using machine learning to detect the cars and determine whether or not they have run the stop sign. His website has some nice charts showing when the violations occurred, by hour and by day of the week. Also on the site are links you can use to help train the system: spotting cars, flagging cars that run the stop sign, judging whether a clip contains enough video to call a violation, and catching cars going the wrong way through the intersection.
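[Devin Gaffney]’s pipeline layers machine learning on top, but the classical-CV front end — spot a moving blob and watch whether it ever stops inside the stop zone — can be sketched with OpenCV’s stock background subtractor. The stop-zone coordinates and thresholds below are invented for illustration:

```python
# Minimal motion-detection front end for flagging non-stopping cars:
# background-subtract the scene, track the largest moving blob, and
# flag a violation if its speed never drops near zero inside the zone.
import cv2

STOP_ZONE = (200, 300, 440, 360)   # x1, y1, x2, y2 in pixels (invented)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def largest_blob_centroid(mask):
    """Centroid of the biggest moving contour, or None for noise."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    if cv2.contourArea(blob) < 500:          # too small: noise or squirrels
        return None
    x, y, w, h = cv2.boundingRect(blob)
    return (x + w // 2, y + h // 2)

cap = cv2.VideoCapture(0)
prev, zone_speeds = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    centroid = largest_blob_centroid(subtractor.apply(frame))
    if centroid and prev:
        speed = abs(centroid[0] - prev[0]) + abs(centroid[1] - prev[1])
        x1, y1, x2, y2 = STOP_ZONE
        if x1 < centroid[0] < x2 and y1 < centroid[1] < y2:
            zone_speeds.append(speed)        # car is in the stop zone
        elif zone_speeds:                    # car has just left the zone
            if min(zone_speeds) > 2:         # never slowed down: flag it
                print("possible stop sign violation")
            zone_speeds = []
    prev = centroid
```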

This is an interesting use of the Pi and OpenCV; there’s no guarantee that this will help the people of [Devin Gaffney]’s neighborhood, but it hopefully gives them some ammunition (assuming they want something done about the intersection). It’s a cheap and easy setup, and it’s nice to let the community have a hand in training the system. For more OpenCV, check out this article on taking the perfect jump shot or this one which tries to quantify cloudiness. Cool stuff.

[via reddit]

Hackaday Prize Entry: Detecting Adulterated Food Using AI

Adulterated food is food that has had a substance added to it to save on manufacturing costs. The additive can be actively harmful, can reduce the food’s potency, or can have no effect at all. In many cases the practice is illegal. It’s also a widespread problem, one which [G. Vignesh] has decided to take on as his entry for the 2017 Hackaday Prize, an AI Based Adulteration Detector.

On his hackaday.io Project Details page he outlines some existing methods for testing food, some of which you can do at home: adulterated sugar may have chalk added to it, so put it in water and the sugar will dissolve while the chalk will not. His approach is instead to take high-definition photos of the food and, on a Raspberry Pi, apply filters to them to reveal various properties such as density, size, color, texture and so on. He also mentions doing image analysis using a deep learning neural network. This project touches us all and we’ll be watching it with interest.
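As a flavour of what such filtering might look like, here is a hedged sketch that pulls a few simple colour and texture statistics out of a photo and compares them against a known-pure reference sample; the specific features are our guesses, not [G. Vignesh]’s published pipeline:

```python
# Toy feature extraction for spotting adulteration: compare colour and
# texture statistics of a sample photo against a known-pure reference.
# The feature choices and file names are illustrative assumptions.
import cv2

def food_features(path):
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return {
        "mean_hue": float(hsv[..., 0].mean()),          # overall colour
        "saturation": float(hsv[..., 1].mean()),        # washed-out = diluted?
        "texture": float(cv2.Laplacian(gray, cv2.CV_64F).var()),  # graininess
        "bright_fraction": float((gray > 200).mean()),  # e.g. white chalk flecks
    }

sample, reference = food_features("sample.jpg"), food_features("pure.jpg")
for key in sample:
    drift = abs(sample[key] - reference[key]) / (abs(reference[key]) + 1e-9)
    print(f"{key}: {drift:.0%} deviation from reference")
```

Features like these could also serve as the inputs to the deep learning network he mentions, rather than feeding it raw pixels.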

If all this talk of adulterated food makes you nervous about your food supply, then consider growing your own, hacker style. One such project we’ve seen here on Hackaday is Farmbot, an open-source CNC farming robot. Another is MIT’s OpenAg Food Computer, a robotically controlled and monitored growing chamber.

The ‘All-Seeing Pi’ Aids Low-Vision Adventurer

Adventure travel can be pretty grueling, what with the exotic locations and potential for disaster that the typical tourist destinations don’t offer. One might find oneself dangling over a cliff for that near-death-experience selfie or ziplining through a rainforest canopy. All this is significantly complicated by being blind, of course, so a tool like this Raspberry Pi low-vision system would be a welcome addition to the nearly blind adventurer’s well-worn rucksack.

[Dan] has had vision problems since childhood, but one look at his YouTube channel shows that he doesn’t let that slow him down. When [Dan] met [Ben] in Scotland, [Ben] noticed that [Dan] was using his smartphone as a vision aid, looking at the display up close and zooming in to get as much detail as possible from his remaining vision. [Ben] thought he could help, so he whipped up a heads-up display from a Raspberry Pi and a Pi Camera. Mounted to a 3D-printed frame holding a 5″ HDMI display and worn on a GoPro head mount, the camera provides enough detail to help [Dan] navigate, as seen in the video below.
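At its core, such a vision aid is little more than a full-screen camera preview with digital zoom. A minimal sketch using the picamera library follows; the zoom level, resolution, and run time are our assumptions, not [Ben]’s actual code:

```python
# Minimal magnifier: live full-screen preview from the Pi camera with
# a centre crop-zoom via the picamera `zoom` property (a normalised
# region of interest). Zoom level and timings are assumptions.
from picamera import PiCamera
from time import sleep

camera = PiCamera(resolution=(1280, 720))
camera.start_preview()          # renders straight to the HDMI display

def set_zoom(camera, factor):
    """Centre-crop to 1/factor of the frame; the GPU scales it back up."""
    side = 1.0 / factor
    offset = (1.0 - side) / 2
    camera.zoom = (offset, offset, side, side)

set_zoom(camera, 4)             # 4x digital magnification
sleep(600)                      # run for ten minutes, then clean up
camera.stop_preview()
camera.close()
```

Because the preview is composited by the GPU, the zoomed image stays smooth without taxing the Pi’s CPU, which matters for a wearable running on a battery.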

The rig is a bit unwieldy right now, but as proof of concept (and proof of friendship), it’s a solid start. We think a slimmer profile design might help, in which case [Ben] might want to look into this Google Glass-like display for a multimeter for inspiration on version 2.0.

This 3D Printed Microscope Bends for 50nm Precision

Exploiting the flexibility of plastic, a group of researchers has created a 3D-printable microscope with sub-micron accuracy. By bending the supports of the microscope stage, they can manipulate a sample with surprising precision. Coupled with commonly available M3 bolts and gear-reduced stepper motors, they report translational precision as fine as 50 nm. We’ve seen functionality derived from flexibility before, but not at this scale. And while it’s not a scanning electron microscope, 50 nm is the size of a small virus (no, not that kind of virus).
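That headline figure is easy to sanity-check with a back-of-envelope calculation, assuming a standard 0.5 mm-pitch M3 thread, a cheap geared stepper of the 28BYJ-48 variety, and a modest flexure lever reduction — all our assumptions rather than the paper’s exact geometry:

```python
# Back-of-envelope step size for a gear-reduced stepper turning an M3
# bolt through a flexure lever. All three constants are assumptions.
THREAD_PITCH_M = 0.5e-3   # one bolt revolution advances the nut 0.5 mm
STEPS_PER_REV = 4096      # 28BYJ-48-style geared stepper, half-stepping
LEVER_REDUCTION = 2       # flexure geometry divides actuator travel

step_size = THREAD_PITCH_M / STEPS_PER_REV / LEVER_REDUCTION
print(f"{step_size * 1e9:.0f} nm per step")  # prints: 61 nm per step
```

That lands in the same order of magnitude as the reported 50 nm, which is the point: nothing exotic is needed, just a fine thread, lots of gear reduction, and a little flexure geometry.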

OpenFlexure has a travel range of 8 × 8 × 4 mm, which is impressive when the supports only flex 6°. But if 256 mm³ isn’t enough for you, fret not: the designs are all Open Source and are modeled in OpenSCAD, just begging for modification. With only one file for printing, no support material, a wonderful assembly guide, and a focus on PLA and ABS, OpenFlexure is clearly designed for ease of manufacturing. The optics are equally interesting. Using a Raspberry Pi Camera Module with the lens reversed, they achieve a resolution where one pixel corresponds to 120 nm.

The group hopes that their microscopes will reach low-resource parts of the world, and it seems that the design has already started to spread. If you’d like to make one for yourself, you can find all the necessary files up on GitHub.

Objectifier: Director of Domestic Technology

[Bjørn Karmann]’s Objectifier is a device that lets you control domestic objects by allowing them to respond to unique actions or behaviour, using machine learning and computer vision. The Objectifier can turn on a table lamp when you open a book, and turn it off when you close the book. Switch on the coffee maker when you place the mug next to the pot, and switch it off when the mug is removed. Turn on the belt sander when you put on the safety glasses, and stop it when you remove the glasses. Charge the phone when you put a banana in front of it, and stop charging it when you place an apple in front of it. You get the drift — the possibilities are endless. Hopefully, sometime in the (near) future, we will be able to interact with inanimate objects in this fashion. We can get them to learn from our actions rather than us learning how to program them.

The device uses computer vision and a neural network to learn complex behaviours associated with your trigger commands. A training mode, using a phone app, allows you to train it for the On and Off actions. Some actions require more human effort in training it — such as detecting an open and closed book — but eventually, the neural network does a fairly good job.
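The real device delegates the learning to Wekinator (more on that below), but the train-then-trigger loop is easy to caricature in a few lines. This toy stand-in fits a nearest-neighbour classifier on downsampled camera frames and mirrors its prediction onto a relay; the GPIO pin, frame counts, and choice of classifier are all our stand-ins, not [Bjørn]’s implementation:

```python
# Toy Objectifier loop: collect camera frames labelled "on" and "off",
# fit a nearest-neighbour classifier on downsampled pixels, then drive
# a relay from the live prediction. Pin choice and counts are assumed.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from gpiozero import OutputDevice

relay = OutputDevice(17)            # relay module on GPIO 17 (assumed)
cap = cv2.VideoCapture(0)

def grab_feature():
    _, frame = cap.read()
    small = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (32, 24))
    return small.flatten() / 255.0

# Training mode: stage the "on" scene, then the "off" scene.
X, y = [], []
for label in (1, 0):
    input(f"show the {'ON' if label else 'OFF'} scene, then press Enter")
    for _ in range(30):             # thirty example frames per class
        X.append(grab_feature())
        y.append(label)
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

while True:                         # run mode: mirror the prediction
    relay.value = int(clf.predict([grab_feature()])[0])
```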

The current version is the sixth prototype in the series and [Bjørn] has put in quite a lot of work refining the project at each stage. In its latest avatar, the device hardware consists of a Pi Zero, a Raspberry Pi camera module, an SMPS power brick, a relay block to switch the output, a 230 V plug for input power, and a 230 V socket outlet for the final output. All the parts are put together rather neatly using laser-cut acrylic support pieces, and then further enclosed in a nice wooden enclosure.

On the software side, all of the machine learning is taken care of using “Wekinator” — a free, open source tool for building musical instruments, gestural game controllers, and computer vision or computer listening systems using machine learning. The computer vision is handled via Processing. All the code is wrapped using openFrameworks, with ml4a providing apps for working with machine learning.

All of the above is what we could deduce by looking at the pictures and information on his blog post. There isn’t much detail about the hardware, but the pictures tell us most of the story. The software isn’t made available either, but maybe this could spur some of you hackers into action to build another version of the Objectifier. Check out the video after the break, showing humans teaching the Objectifier its tricks.
