Spy Tech: Unshredding Documents

Bureaucracies generate paper, usually lots of paper. Anything you consider private — especially anything that could get you in trouble — should go in a “burn box”, which is usually a locked trash can that is periodically emptied into an incinerator. But what about a paper shredder? Who hasn’t seen a movie or TV show where office staff furiously shred papers as the FBI, SEC, or some other three-letter agency tries to break the door down?

That might have been the scene in late 1989, as East Germany collapsed on the road to German reunification. The East German Ministry of State Security — known as the Stasi — had records of unlawful activity and, probably, information about people of interest. The staff made a best effort to destroy these records, but they did not quite complete the task.

The collapsing East German government ordered documents destroyed, and many were pulped or burned. Many others, however, were shredded by hand, stuffed into bags, and were still awaiting final destruction; the interim government destroyed some more documents in 1990. Today about 16,000 of these bags remain, each holding 2,500 to 3,000 page fragments.

Machine-shredded documents were too small to recover, but the hand-shredded documents should be possible to reconstruct. After all, they do it all the time in spy movies, right? With modern computers and vision systems, it should be a snap.

You’d think so, anyway.

Continue reading “Spy Tech: Unshredding Documents”

Self-Driving Library For Python

Fully autonomous vehicles seem to perennially be just a few years away, sort of like the automotive equivalent of fusion power. But just because robotic vehicles haven’t made much progress on our roadways doesn’t mean we can’t play with the technology at the hobbyist level. You can embark on your own experimentation right now with this open source self-driving Python library.

Granted, this is a library built for much smaller vehicles, but it’s still quite full-featured. Known as Donkey Car, it’s mostly intended for what would otherwise be remote-controlled cars or robotics platforms. The library is built to be as minimalist as possible, with modularity as a design principle, and includes the ability to self-drive with computer vision using machine-learning algorithms. It can also log sensor data and interface with various controllers, either physical devices or something like a browser.
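
To give a feel for that modularity, here’s a minimal sketch of the library’s documented “parts” pattern: you create a Vehicle, add parts that read and write named data channels, and start the loop. The specific part classes, pulse values, and channel names below are illustrative and will vary with your build:

```python
# Minimal Donkey Car vehicle loop, sketched from the project's documented
# "parts" API. Pin and pulse values are illustrative; a real build would
# also add a pilot or controller part that produces the 'angle' and
# 'throttle' values each time around the loop.
import donkeycar as dk
from donkeycar.parts.camera import PiCamera
from donkeycar.parts.actuator import PCA9685, PWMSteering, PWMThrottle

V = dk.vehicle.Vehicle()

# The camera part publishes frames on the 'cam/image_array' channel.
V.add(PiCamera(image_w=160, image_h=120, image_d=3),
      outputs=['cam/image_array'], threaded=True)

# Steering servo and ESC consume the 'angle' and 'throttle' channels.
V.add(PWMSteering(controller=PCA9685(1), left_pulse=460, right_pulse=290),
      inputs=['angle'])
V.add(PWMThrottle(controller=PCA9685(0), max_pulse=500, zero_pulse=370,
                  min_pulse=220),
      inputs=['throttle'])

V.start(rate_hz=20)  # run the sense-think-act loop at 20 Hz
```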

Building a complete platform costs around $250 in parts, but most things needed for a Donkey Car-compatible build are easily sourced. It won’t be too long before your own RC vehicle has more “full self-driving” capability than a Tesla, and potentially less risk of harboring a major security vulnerability as well.

Modern Dance Or Full-Body Keyboard? Why Not Both!

If you felt in your heart that Hackaday was a place that would forever be free from projects that require extensive choreography to pull off, we’re sorry to disappoint you. Because you’re going to need a level of coordination and gross motor skills that most of us probably lack if you’re going to type with this full-body, semaphore-powered keyboard.

This is another one of [Fletcher Heisler]’s alternative input projects, in the vein of his face-operated coding keyboard. The idea there was to be able to code with facial gestures while cradling a sleeping baby; this project is quite a bit more expressive. Pretty much all you need to know about the technical side of the project can be gleaned from the brilliant “Hello world!” segment at the start of the video below. [Fletcher] uses OpenCV and MediaPipe’s Pose library for pose estimation to decode the classic flag semaphore alphabet, which encodes characters in the angle of the signaler’s extended arms relative to their body. To extend the character set, [Fletcher] added a squat gesture for numbers, and a shift function controlled by opening and closing the hands. The jazz-hands thing is just a bonus.
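
For the curious, the decoding step can be surprisingly compact. Below is a toy sketch of the general approach (not [Fletcher]’s actual code): grab shoulder and wrist landmarks with MediaPipe Pose, quantize each arm’s angle into 45-degree bins, and look the pair up in a semaphore table. The octant numbering and table entries here are illustrative placeholders, not the real alphabet:

```python
# Toy semaphore decoder, not [Fletcher]'s actual code. Arm angles are
# quantized into eight 45-degree "octants" (0 = straight down,
# 2 = horizontal, 4 = straight up); the table entries are placeholders.
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
SEMAPHORE = {(1, 0): 'A', (2, 0): 'B', (3, 0): 'C'}  # illustrative only

def arm_octant(shoulder, wrist):
    """Quantize the shoulder-to-wrist direction into a 45-degree bin."""
    # Image y grows downward, so a hanging arm gives an angle of zero.
    angle = math.degrees(math.atan2(wrist.x - shoulder.x,
                                    wrist.y - shoulder.y))
    return round((angle % 360) / 45) % 8

frame = cv2.imread('signaler.jpg')  # or frames from cv2.VideoCapture(0)
with mp_pose.Pose(static_image_mode=True) as pose:
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    lm = results.pose_landmarks.landmark
    right = arm_octant(lm[mp_pose.PoseLandmark.RIGHT_SHOULDER],
                       lm[mp_pose.PoseLandmark.RIGHT_WRIST])
    left = arm_octant(lm[mp_pose.PoseLandmark.LEFT_SHOULDER],
                      lm[mp_pose.PoseLandmark.LEFT_WRIST])
    print(SEMAPHORE.get((right, left), '?'))
```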

Honestly, the hack here is mostly a brain hack — learning a complex series of gestures and stringing them together fluidly isn’t easy. [Fletcher] used a few earworms to help him master the character set and tune his code; the inevitable Rickroll was quite artistic, and watching him nail the [Johnny Cash] song was strangely satisfying. We also thoroughly enjoyed the group number at the end. Ooga chaka FTW.

Continue reading “Modern Dance Or Full-Body Keyboard? Why Not Both!”

Hands-On: NVIDIA Jetson Orin Nano Developer Kit

NVIDIA’s Jetson line of single-board computers is doing something different in a vast sea of relatively similar Linux SBCs. Designed for edge computing applications, such as a robot that needs to perform high-speed computer vision while out in the field, they provide exceptional performance in a board that’s of comparable size and weight to other SBCs on the market. The only difference, as you might expect, is that they tend to cost a lot more: the current top-of-the-line Jetson AGX Orin Developer Kit is $1999 USD.

Luckily for hackers and makers like us, NVIDIA realized they needed an affordable gateway into their ecosystem, so they introduced the $99 Jetson Nano in 2019. The product proved so popular that just a year later the company refreshed it with a streamlined carrier board that dropped the cost of the kit down to an incredible $59. Looking to expand on that success even further, today NVIDIA announced a new upmarket entry into the Nano family that lies somewhere in the middle.

While the $499 price tag of the Jetson Orin Nano Developer Kit may be a bit steep for hobbyists, there’s no question that you get a lot for your money. Capable of performing 40 trillion operations per second (TOPS), NVIDIA estimates the Orin Nano is a staggering 80X as powerful as the previous Nano. It’s a level of performance that, admittedly, not every Hackaday reader needs on their workbench. But the allure of a palm-sized supercomputer is very real, and anyone with an interest in experimenting with machine learning would do well to weigh (literally, and figuratively) the Orin Nano against a desktop computer with a comparable NVIDIA graphics card.

We were provided with one of the very first Jetson Orin Nano Developer Kits before their official unveiling during NVIDIA GTC (GPU Technology Conference), and I’ve spent the last few days getting up close and personal with the hardware and software. After coming to terms with the fact that this tiny board is considerably more powerful than the computer I’m currently writing this on, I’m left excited to see what the community can accomplish with the incredible performance offered by this pint-sized system.

Continue reading “Hands-On: NVIDIA Jetson Orin Nano Developer Kit”


Immersive Virtual Reality From The Humble Webcam

[Russ Maschmeyer] and Spatial Commerce Projects developed WonkaVision to demonstrate how 3D eye tracking from a single webcam can support rendering a graphical virtual reality (VR) display with realistic depth and space. Spatial Commerce Projects is a Shopify lab working to provide concepts, prototypes, and tools to explore the crossroads of spatial computing and commerce.

The system creates a real sense of depth and three-dimensional space using an optical illusion that reacts to the viewer’s eye position: the tracked eye position is used to render view-dependent images. The computer screen comes to feel like a window into a realistic 3D virtual space, where objects beyond the window appear to have depth and objects in front of the window appear to project out into the space before the screen. The downside is that the illusion only works for one viewer at a time.
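
The “window” effect rests on the standard off-axis (asymmetric frustum) projection: once you know where the eye sits relative to the screen, you skew the viewing frustum so the physical screen edges line up with the virtual window. A minimal sketch, assuming the screen is centered at the origin with the eye at a positive distance in front of it:

```python
# Off-axis projection sketch: compute asymmetric frustum bounds from the
# viewer's eye position. Assumes the screen is centered at the origin in
# its own plane and the eye sits at distance eye_z in front of it, all
# in the same units (e.g. millimeters).
def off_axis_frustum(eye_x, eye_y, eye_z, half_w, half_h,
                     near=0.1, far=100.0):
    scale = near / eye_z  # project the screen edges onto the near plane
    left = (-half_w - eye_x) * scale
    right = (half_w - eye_x) * scale
    bottom = (-half_h - eye_y) * scale
    top = (half_h - eye_y) * scale
    # These six values feed directly into e.g. OpenGL's glFrustum.
    return left, right, bottom, top, near, far

# Example: viewer 600 mm back, 50 mm right of center of a 300x200 mm screen.
print(off_axis_frustum(50.0, 0.0, 600.0, 150.0, 100.0))
```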

Eye tracking is performed using Google’s MediaPipe Iris library, which relies on the fact that the human iris is almost exactly 11.7 mm in diameter for nearly everyone. Computer vision algorithms in the library use this geometric fact to efficiently locate and track irises with high accuracy.
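
That fixed diameter is what makes metric depth possible from a single camera: under the pinhole model, distance is simply focal length (in pixels) times the real iris size, divided by the iris’s apparent size in pixels. Here’s a rough sketch using the iris landmark indices commonly cited for MediaPipe’s Face Mesh refinement; the focal length is an assumption you’d calibrate for your own webcam:

```python
# Depth-from-iris sketch (not the project's code). With refine_landmarks
# enabled, MediaPipe Face Mesh adds iris landmarks; indices 474 and 476
# are commonly used as the horizontal extremes of one iris. The focal
# length is a placeholder -- calibrate it for your own camera.
import cv2
import mediapipe as mp

IRIS_DIAMETER_MM = 11.7
FOCAL_LENGTH_PX = 900.0  # assumed; measure with a calibration target

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                            refine_landmarks=True)
frame = cv2.imread('viewer.jpg')
h, w = frame.shape[:2]
results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    lm = results.multi_face_landmarks[0].landmark
    iris_px = abs(lm[474].x - lm[476].x) * w  # apparent iris width
    distance_mm = FOCAL_LENGTH_PX * IRIS_DIAMETER_MM / iris_px
    print(f"Viewer is roughly {distance_mm / 10:.1f} cm from the camera")
```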

Generation of view-dependent images based on tracking a viewer’s eye position was inspired by a classic hack from [Johnny Lee] to create a VR display using a Wiimote. Hopefully, these eye-tracking approaches will continue to evolve and provide improved motion-responsive views into immersive virtual spaces.


Human Vs. AI Drone Racing At The University Of Zurich

[Thomas Bitmatta] and two other champion drone pilots visited the Robotics and Perception Group at the University of Zurich, accepting the challenge to race drones against artificial intelligence “pilots” developed by the UZH research group.

The human pilots took on two different types of AI challengers. The first type leverages 36 tracking cameras positioned above the flight arena. Each camera captures 400 frames per second of video. The AI-piloted drone is fitted with at least four tracking markers that can be identified in the captured video frames. The captured video is fed into a computer vision and navigation system that analyzes the video to compute flight commands. The flight commands are then transmitted to the drone over the same wireless control channel that would be used by a human pilot’s remote controller.
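
In outline, that external pipeline is a classic sense-plan-act loop paced to the cameras’ frame rate. The sketch below is purely illustrative; every function here is a hypothetical stand-in for the group’s actual mocap, state-estimation, and control stack:

```python
# Illustrative sense-plan-act loop for the externally tracked AI pilot.
# All names are hypothetical stand-ins, not UZH's real code.
import time

def read_mocap_frame():
    """Stand-in for the 36-camera, 400 fps tracking feed."""
    return [(0.0, 0.0, 1.5)] * 4  # at least four marker positions

def estimate_pose(markers):
    """Fuse marker positions into a crude position estimate."""
    xs, ys, zs = zip(*markers)
    n = len(markers)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

def compute_command(pose, target):
    """Naive proportional controller chasing the racing line."""
    return tuple(t - p for t, p in zip(target, pose))

def transmit(command):
    """Send the command over the same RC link a human pilot would use."""
    pass

target = (1.0, 0.0, 1.5)  # next point on the planned trajectory
while True:
    pose = estimate_pose(read_mocap_frame())
    transmit(compute_command(pose, target))
    time.sleep(1 / 400)  # pace the loop to the camera frame rate
```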

The second type of AI pilot utilizes an onboard camera and autonomous machine vision processing. The “vision drone” is designed to leverage visual perception from the camera with little or no assistance from external computational power.

Ultimately, the human pilots were victorious over both types of AI pilots. The AI systems do not (yet) robustly accommodate unexpected deviations from optimal conditions; small variations in operating conditions often led to mistakes and fatal crashes for the AI pilots.

Both of the AI pilot systems utilize some of the latest research in machine learning and neural networks to learn how to fly a given track. The systems train for a track using a combination of simulated environments and real-world flight deployments. In their final hours together, the university research team invited the human pilots to set up a new course for a final race. In less than two hours, the AI system trained to fly the new course. In the resulting real-world flight, the AI drone’s performance was quite impressive and showed great promise for the future of autonomous flight. We’re betting on the bots before long.

Continue reading “Human Vs. AI Drone Racing At The University Of Zurich”


Brainstorming

One of the best things about hanging out with other hackers is the freewheeling brainstorming sessions that tend to occur. Case in point: I was at the Electronica trade fair and ended up hanging out with [Stephen Hawes] and [Lucian Chapar], two of the folks behind the LumenPnP open-source pick and place machine that we’ve covered a fair number of times in the past.

Among many cool features, it has a camera mounted on the parts-moving head to find the fiducial markings on the PCB. But of course, this means a camera mounted to an almost general-purpose two-axis gantry, and that sent the geeks’ minds spinning. [Stephen] was talking about how easy it would be to turn it into a photo-stitching macrophotography rig, which could yield amazingly high-resolution photos.
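
The stitching half of that idea is nearly a one-liner these days. A quick sketch using OpenCV’s built-in stitcher, where the file names are placeholders for frames grabbed at each gantry position, and SCANS mode suits the flat, evenly lit tiles a gantry would capture:

```python
# Stitch gantry-captured tiles into one large image with OpenCV.
# File names are placeholders for frames grabbed at each gantry position.
import cv2

images = [cv2.imread(name) for name in ('tile_0.jpg', 'tile_1.jpg',
                                        'tile_2.jpg')]
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # flat-scene mode
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite('macro_panorama.jpg', panorama)
else:
    print(f"Stitching failed with status {status}")
```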

Meanwhile [Lucian] and I were thinking about how similar this gantry was to a 3D printer, and [Lucian] asked why 3D printers don’t come with cameras mounted on their hot ends. He’d even shopped this idea around at the East Coast RepRap Festival and gotten some people excited about it.

So here’s the idea: computer vision near the extruder gives you real-time process control. You could use it to home the nozzle in Z. You could use it to tell when the filament has run out, or when the steppers have skipped steps. If you had it really refined, you could use it to compensate for other printing defects. In short, it would be a simple hardware addition that would open up a universe of computer-vision software improvements, and best of all, it’s easy enough for the home gamer to do – you’d probably only need a 3D printer.
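
As a taste of how little code the simplest of those ideas needs, here is a toy sketch of filament-runout detection by frame differencing: watch a small window around the nozzle and flag when nothing changes while a print is supposedly in progress. The camera index, region, and threshold are all assumptions you would tune for a real setup:

```python
# Toy runout/jam detector: if the patch of image around the nozzle stops
# changing mid-print, no fresh plastic is being laid down. Camera index,
# ROI, and threshold are placeholders to tune for a real machine.
import cv2

cap = cv2.VideoCapture(0)       # assumed: USB camera aimed at the nozzle
roi = (200, 150, 120, 120)      # x, y, w, h window around the nozzle

ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = roi
    a = cv2.cvtColor(prev[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    if cv2.absdiff(a, b).mean() < 1.0:  # threshold is a guess
        print("No motion near the nozzle -- possible runout or jam")
    prev = frame
```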

Now I’ve shared the brainstorm with you. Hope it inspires some DIY 3DP innovation, or at least encourages you to brainstorm along below.