“Did I forget something?” It’s that nagging feeling every engineer gets when a project is about to be deployed – whether it’s a product about to be ramped into production, a low-volume run, or even a one-off like a microsatellite. If you have the time and a few prototypes to spare, though, there are ways to alleviate these worries. The key is a test method which has been used in aerospace, military, and other industries for years – Highly Accelerated Life Testing (HALT).
How to HALT
The idea behind HALT testing can be summed up in a few sentences:

1. Beat your product to death.
2. Figure out what broke.
3. Fix it, and fix the design.
4. Repeat.
Sounds barbaric, and in many cases it is. HALT testing is often associated with giant test chambers designed to torture anything inside them. Liquid nitrogen shock-cools the chamber to as low as -100°C. The Device Under Test (DUT) can soak at that temperature for hours. Powerful heaters then blast the chamber, driving temperature rises of up to 90°C per minute and topping out at up to 200°C. Pneumatic hammers beat on the chamber table, producing vibrations of up to 90 Grms at frequencies up to 10 kHz. Corrosive sprays simulate years of rain and humidity. These chambers are hell on earth for any device unlucky enough to be placed inside them. It’s easy to see why this sort of testing is often referred to as “Shake and Bake”.
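To make the beat-break-fix loop concrete, here’s a toy step-stress schedule in Python. Every limit and step size in it is an illustrative assumption, not a real chamber profile – an actual HALT plan comes from the chamber’s capabilities and the product’s specs.

```python
# Toy HALT step-stress schedule. Every limit and step size here is an
# illustrative assumption, not a real chamber or product specification.
def halt_steps(cold_floor=-100, hot_ceiling=200, vibe_ceiling=50):
    """Yield (stress, level) steps. In a real run you pause at each DUT
    failure, root-cause it, fix the design, then keep stepping."""
    for temp in range(0, cold_floor - 1, -10):    # step colder, 10°C at a time
        yield ("cold soak, °C", temp)
    for temp in range(60, hot_ceiling + 1, 10):   # then step hotter
        yield ("hot soak, °C", temp)
    for grms in range(5, vibe_ceiling + 1, 5):    # then step up the shaking
        yield ("vibration, Grms", grms)

for stress, level in halt_steps():
    print(stress, level)
```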
When you want to protect a computer connected to the Internet against attackers, you usually put it behind a firewall. The firewall controls access to the protected computer. However, any lock can be defeated, and there are ways a dedicated attacker can compromise a firewall. Really critical data is often placed on a computer that is “air gapped.” That is, the computer isn’t connected at all to an insecure network.
An air gap turns a network security problem into a physical security problem. Even if you can infect the target system and collect data, you don’t have an easy way to get the data out of the secure facility unless you are physically present and doing something obvious (like reading from the screen into a phone). Right? Maybe not.
Researchers in Israel have been devising various ways to transmit data from air-gapped computers. Their latest approach? Transmitting data by changing the speed of the cooling fans in the target computer. Software running on a cellphone (or another computer, obviously) can decode the data and exfiltrate it. You can see a video of the process below.
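To get a feel for how such a channel works (and how slow it is), here’s a toy model in Python: bits are sent as two fan RPM levels, and the receiver picks the blade-pass tone out of an FFT of a microphone recording. The RPM levels, blade count, and bit timing are all assumptions for the example – the researchers’ actual modulation scheme is more sophisticated.

```python
import numpy as np

# Toy model of a fan-speed covert channel: bits are sent as two RPM
# levels, and the receiver reads the blade-pass tone out of an FFT of a
# microphone recording. The RPM levels, blade count, and bit timing are
# arbitrary assumptions for the example.
FS = 8000                      # microphone sample rate, Hz
BIT_TIME = 1.0                 # seconds per bit – the channel is slow
BLADES = 7                     # blade-pass frequency = rpm / 60 * blades
RPM = {0: 1000, 1: 1600}       # fan speed used to send a 0 or a 1

def transmit(bits):
    """Simulate the acoustic signature the fan makes for each bit."""
    t = np.arange(int(FS * BIT_TIME)) / FS
    tones = [np.sin(2 * np.pi * (RPM[b] / 60 * BLADES) * t) for b in bits]
    return np.concatenate(tones)

def receive(audio):
    """Recover bits by finding the dominant tone in each bit slot."""
    n = int(FS * BIT_TIME)
    bits = []
    for i in range(0, len(audio) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(audio[i:i + n]))
        peak_hz = np.argmax(spectrum) * FS / n
        # pick whichever RPM level's blade-pass tone is closest
        bits.append(min(RPM, key=lambda b: abs(RPM[b] / 60 * BLADES - peak_hz)))
    return bits

assert receive(transmit([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]
```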
A few years ago, [Artem] learned about ways to focus sound in an issue of Popular Mechanics. If sound can be focused, he reasoned, it could be focused onto a plane of microphones. Get enough microphones, and you have a ‘sound camera’, with each microphone a single pixel.
Movies and TV shows about comic books are now the height of culture, so a device using an array of microphones to produce an image isn’t just an interesting demonstration of FFT, signal processing, and high-speed electronic design. It’s a Daredevil camera, and it’s one of the greatest builds we’ve ever seen.
[Artem]’s build log isn’t a step-by-step process on how to make a sound camera. Instead, he walks through the entire process of building this array of microphones, and as with all amazing builds, the first step never works. The first prototype was based on a flatbed scanner camera – simply a flatbed scanner in a lightproof box with a pinhole. The idea was that by scanning a microphone back and forth, using the pinhole as a ‘lens’, [Artem] could detect where a sound was coming from. He pulled out his scanner and a signal generator, and ran the experiment. It didn’t work. The box wasn’t soundproof, the inner chamber should have been anechoic, and even if it had worked, the camera would only have been able to produce an image or two per minute.
The idea sat on a shelf in [Artem]’s mind for a while, and along the way he learned about the FFT and how the gigantic Duga over-the-horizon radar actually worked. Math was the answer, and by using an FFT to transform a microphone’s signal from up-and-down samples into buckets of frequency and intensity, he could build this camera.
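That transformation is only a few lines of numpy. Here’s a minimal sketch that turns one microphone’s samples into (frequency, intensity) buckets; the 1 kHz test tone and 48 kHz sample rate are assumptions for the example.

```python
import numpy as np

# Minimal sketch: turn one microphone's up-and-down samples into
# (frequency, intensity) buckets. The 1 kHz test tone and 48 kHz sample
# rate are assumptions for the example.
fs = 48000
t = np.arange(1024) / fs
samples = np.sin(2 * np.pi * 1000 * t)         # stand-in for a mic capture

intensity = np.abs(np.fft.rfft(samples))       # intensity per bucket
freqs = np.fft.rfftfreq(len(samples), 1 / fs)  # bucket center frequencies
print(freqs[np.argmax(intensity)])             # ~1 kHz: the loudest bucket
```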
That was the theory, anyway. Practicality has a way of getting in the way, and to build this gigantic sound camera he would need dozens of microphones, dozens of amplifiers, and a controller with enough analog pins, ADCs, and processing power to make sense of all of it.
This complexity collapsed when [Artem] realized there was an off-the-shelf part that made a perfect microphone camera pixel. MEMS microphones, like the kind found in smartphones, take analog sound and turn it into a digital signal. Feed this into a fast enough microcontroller, and you can perform an FFT on the signal and repeat the same process for the next pixel. This was the answer, and the only thing left to do was to build a board with an array of microphones.
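As a sketch of that per-pixel pipeline – one FFT per microphone, one intensity value per pixel – here’s what the math looks like. The channel layout, sample rate, and frequency band of interest are assumptions for illustration, not details from [Artem]’s firmware.

```python
import numpy as np

# Per-pixel pipeline sketch: one FFT per microphone, one intensity per
# pixel. The channel layout, 48 kHz sample rate, and 1-2 kHz band of
# interest are assumptions, not details from [Artem]'s firmware.
FS = 48000
BAND = (1000, 2000)    # image the energy in this frequency band

def frame_from_mics(channels):
    """channels: (64, N) array of samples -> 8x8 image of band energy."""
    spectra = np.abs(np.fft.rfft(channels, axis=1))
    freqs = np.fft.rfftfreq(channels.shape[1], 1 / FS)
    in_band = (freqs >= BAND[0]) & (freqs < BAND[1])
    energy = spectra[:, in_band].sum(axis=1)    # one intensity per mic
    return energy.reshape(8, 8)                 # each microphone is a pixel

frame = frame_from_mics(np.random.randn(64, 1024))  # fake capture, real shapes
```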
[Artem]’s sound camera is constructed from several modules, each consisting of an 8×8 array of MEMS microphones controlled by an FPGA. These individual modules can be chained together, and the ‘big build’ is a 32×32 array. After a few problems with manufacturing, the board actually worked, and he was soon recording 64 channels of audio from a single panel. Turning on the FFT visualization and pointing it at a speaker revealed that yes, he had indeed made a sound camera.
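Chaining modules then becomes a bookkeeping problem: sixteen 8×8 module frames arranged in a 4×4 grid make one 32×32 sound frame. How the real FPGA chain orders its modules is an assumption in this sketch.

```python
import numpy as np

# Stitching sketch: sixteen 8x8 module frames in a 4x4 grid make one
# 32x32 sound frame. How the real FPGA chain orders its modules is an
# assumption; this only shows the bookkeeping.
def stitch(module_frames):
    """module_frames: 4x4 nested list of 8x8 arrays -> one 32x32 array."""
    return np.block(module_frames)

frame = stitch([[np.zeros((8, 8)) for _ in range(4)] for _ in range(4)])
assert frame.shape == (32, 32)
```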
The result is a terribly crude movie with blobs of color, but that’s the reality of a camera with only 32×32 resolution. Right now the sound camera works, the images are crude, and [Artem] has a few ideas of where to go next. A cheap PC is fast enough to record and process all the data, but now it’s an issue of bandwidth: 30 sound frames per second is a total of 64 Mbps of data. That’s doable, but it would need another FPGA implementation.
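The quoted figure is easy to sanity-check with one plausible accounting, where the per-pixel bucket count and sample width are assumptions rather than [Artem]’s published numbers:

```python
# Back-of-envelope for the quoted ~64 Mbps. The 128 frequency buckets
# per pixel and 16-bit values are assumptions chosen to show the scale.
frames_per_s = 30
pixels = 32 * 32           # 1024 microphones, one pixel each
buckets = 128              # FFT buckets kept per pixel (assumed)
bits_per_bucket = 16       # sample width per bucket value (assumed)

mbps = frames_per_s * pixels * buckets * bits_per_bucket / 1e6
print(f"{mbps:.1f} Mbps")  # ~62.9 Mbps – in the ballpark of the quoted 64
```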
Is this sonic vision? Yes, in that the board technically works. No, in that the project is stalled, and it’s expensive by any electronics hobbyist’s standards. Still, it’s one of the best builds to grace our front page.