There are millions of IoT devices out there in the wild and though not conventional computers, they can be hacked by alternative methods. From firmware hacks to social engineering, there are tons of ways to break into these little devices. Now, four researchers at the National University of Singapore and one from the University of Maryland have published a new hack to allow audio capture using lidar reflective measurements.
The hack revolves around the fact that audio waves, i.e. mechanical waves, cause objects inside a room to vibrate slightly. When a lidar device bounces a beam off an object, the accuracy of the receiving system allows for measurement of the slight vibrations caused by the sound in the room. The experiment used a human voice transmitted from a simple speaker as well as a sound bar, and the surfaces used for reflections were common household items such as a trash can, cardboard box, takeout container, and polypropylene bags. Robot vacuum cleaners face such objects on a day-to-day basis.
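As a rough illustration of the principle (not the paper's actual pipeline; the sampling rate, distance, and vibration amplitude below are made-up numbers), the audio rides on the lidar's repeated range readings as a tiny oscillation, which simple filtering can pull out:

```python
import numpy as np

# Hypothetical setup: a lidar ranges a household object whose surface
# vibrates with the sound in the room. A 440 Hz tone shows up as a
# micrometre-scale wobble on top of the nominal distance.
fs = 2000                # assumed ranging rate in Hz
t = np.arange(fs) / fs   # one second of samples
baseline = 1.5           # nominal distance to the object in metres
vibration = 5e-6 * np.sin(2 * np.pi * 440.0 * t)  # ~5 um amplitude
ranges = baseline + vibration

# Remove the DC component to isolate the vibration signal
audio = ranges - ranges.mean()

# The dominant frequency of the recovered signal matches the tone
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(round(peak))   # 440
```

Real captures are far noisier than this, which is exactly why the heavy lifting in the paper is in the filtering and classification stages.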
The bigger challenge is writing a filtering algorithm that can extract the relevant information and separate out the noise, and this is where the bulk of the research paper (PDF) is focused. Current developments in Deep Learning help make the hack easier to implement. Commercial lidar is designed for mapping, and therefore optimized for reflecting off of non-reflective surfaces. This is the opposite of what you want for a laser microphone, which usually targets a reflective surface like a window to pick up latent vibrations from sound inside a room.
Deep Learning algorithms are employed to work around this shortfall, identifying speech as well as audio sequences despite the sensor itself being less than ideal, and the team reports achieving an accuracy of 90%. This lidar-based spying is even possible when the robot in question is docked, since the system can be configured to turn on specific sensors, but the exploit depends on the ability to alter the firmware, something the team accomplished using the Dustcloud exploit which was presented at DEF CON in 2018.
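The paper's actual network isn't reproduced here, but as a general illustration, recovered audio is typically converted into time-frequency features (a spectrogram) before being fed to a deep classifier. A minimal magnitude-STFT sketch, with made-up frame and hop sizes:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Naive magnitude STFT: the kind of time-frequency features a
    deep speech classifier would consume. Illustrative only, not the
    paper's actual feature pipeline."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)   # shape: (num_frames, frame_len // 2 + 1)

# A clean 100 Hz tone sampled at 2 kHz, as a stand-in for recovered audio
sig = np.sin(2 * np.pi * 100 * np.arange(4096) / 2000.0)
spec = spectrogram(sig)
print(spec.shape)   # (31, 129)
```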
You don’t need to tear down your robot vacuum cleaner for this experiment, since there are a lot of lidar-based rovers out there. We’ve even seen open source lidar sensors that are better suited for experimental purposes.
Thanks for the tip [Qes]
HAL are you in there?
When a tech article makes such an egregiously ignorant and wrong statement as “Laser microphones, used in espionage since the 1940s, …”, how much faith can you put in the rest of the article?
So, upshot from the actual paper: The bad guy has to first hijack the roomba, arrange it to find and target a suitable acoustic surface, then carefully calibrate the acoustic response of that target and the room, using swept audio tones in the environment. THEN, he can correlate the roomba-received noise with some previously-known audio input (like a radio program or prerecorded voice samples), and look for positive hits.
There is indeed a difference between a technically possible security exploit and a practical security exploit.
Not all security risks are large enough to be usable in practice.
A good example is Spectre, a security exploit targeting branch prediction in a lot of processors. It is an attack that can very easily be pulled off on almost any system.
But since it practically requires that the attacker is free to run code on the system, one likely has bigger security issues at hand. Not to mention that a Spectre attack also consumes a large amount of CPU performance while being fairly slow at its job… This makes it very easy to detect, and terminating the uninvited process is thereby rather trivial.
Though, if one needs to run untrusted code on a system, then Spectre is a valid concern. (And why running such untrusted code in an intentionally slow emulator/environment is honestly a nice idea, since timing attacks are pointless if everything looks instant as far as execution is concerned.)
So Spectre/Meltdown was demonstrated to work in JavaScript, which could be injected via the browser.
Except that Javascript, HTML5, Java, Python, Lua, etc. all use runtime environments.
I.e., a web browser can intentionally slow that environment down, and in turn make the underlying timing attack unfeasible.
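One concrete way an environment can blunt timing attacks is by quantizing the clock it exposes to scripts. A hedged sketch (the 5 ms granularity is an assumed value, chosen to dwarf cache-timing differences):

```python
import time

GRANULARITY = 0.005  # 5 ms ticks: assumed value, far coarser than cache effects

def quantize(t, g=GRANULARITY):
    """Round a timestamp down to the nearest tick."""
    return (t // g) * g

def coarse_time():
    """A deliberately low-resolution clock, similar in spirit to how
    browsers reduced timer resolution after Spectre was disclosed."""
    return quantize(time.monotonic())

# A sub-microsecond difference (e.g. cached vs. uncached memory access)
# disappears: both timestamps land in the same 5 ms bucket.
print(quantize(1.0000001) == quantize(1.0000002))  # True
```

With no fine-grained clock, the attacker can no longer tell a fast (cached) load from a slow (uncached) one, which is the whole basis of the side channel.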
Performance can be brought back by running multiple scripts in parallel and having the environment take one “tick” at a time on each, while only allowing other scripts to see the update on the next tick.
Implementing this is fairly trivial and can be done in a similar fashion to double buffering. Keep two databases for all variables: only read from the old database, and only write to the new. On the next tick, the new database becomes the old one, and vice versa. One also needs to carry unchanged values over from the old database to the new, but this only has to be done for values changed on tick n-1 (from the current tick’s perspective) when moving to tick n+1, since anything that hasn’t changed since before n-1 is already present in both databases.
Though, such a solution has the downside that it will slightly eat into performance, and much more notably consume twice as much memory… But it does allow one to run untrusted code sufficiently slowly for most timing attacks to be practically impossible.
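The double-buffered scheme described above can be sketched roughly like this (simplified: it copies the whole old database on each tick rather than doing the incremental carry-over):

```python
class TickStore:
    """Double-buffered variable store: reads come from the old
    generation, writes go to the new one, so no script can observe
    another's writes until the next tick."""

    def __init__(self):
        self.old = {}   # visible to reads during this tick
        self.new = {}   # collects writes for the next tick

    def read(self, key):
        return self.old.get(key)

    def write(self, key, value):
        self.new[key] = value

    def tick(self):
        # Carry values forward, then swap generations.
        merged = dict(self.old)
        merged.update(self.new)
        self.old, self.new = merged, {}

store = TickStore()
store.write("x", 42)
print(store.read("x"))   # None: the write is invisible until the tick
store.tick()
print(store.read("x"))   # 42
```

Since every script sees state advance only in whole ticks, the fine-grained timing information a side channel needs never becomes observable.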
Another solution against Spectre is to check whether a thread has the right to read/write certain parts of memory before speculatively executing its code. Cleaning up afterwards isn’t particularly hard, though honestly, checking beforehand does slow down peak serial performance.
But Spectre is just a timing attack exploiting a fairly specific architectural implementation that happened to be exceptionally common. There are plenty of other timing attacks a system can be subject to, but the proposed solution of simply running things slower does solve the vast majority of them, not just Spectre.