Researchers at the University of Edinburgh and Heriot-Watt University have created a sensor that can see around corners using lasers, high-speed cameras, and some intense data processing. They can essentially turn a spot of laser light into a virtual mirror to look through.
Led by [Genevieve Gariepy], the team has proven the concept in a lab setting, and is now trying to refine it to work in the real world. While the animated image above makes the system seem rather simple, the tech behind it makes our heads hurt.
The timing measurement alone, for the laser light to bounce off the hidden object and be reflected to where the camera can see it, needs to be accurate down to the 500 billionth of a second (500 nanoseconds). Five hundred billionths.
That’s not to mention all the data processing after that, and eliminating all the noise…
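To make the geometry a little more concrete, here is a minimal toy sketch of the underlying idea; this is our own illustration with made-up numbers, not the team's algorithm. Each camera pixel that catches the returning flash at time t pins the hidden object to an ellipse whose foci are the laser spot and the observed floor point, since c * t equals the sum of the two path legs; intersecting a few of those ellipses by brute-force backprojection recovers the position.

    import numpy as np

    C = 0.3  # speed of light, metres per nanosecond

    def arrival_times(laser_spot, floor_points, hidden_obj):
        # Time for light to travel laser spot -> hidden object -> floor point.
        # (The known laser->wall and floor->camera legs are assumed to have
        # been subtracted off already.)
        d1 = np.linalg.norm(hidden_obj - laser_spot)
        d2 = np.linalg.norm(floor_points - hidden_obj, axis=1)
        return (d1 + d2) / C  # nanoseconds

    def backproject(laser_spot, floor_points, times, grid):
        # Score every candidate point by how well it explains all the
        # timings; the best-scoring point is the estimate of the hidden object.
        d1 = np.linalg.norm(grid - laser_spot, axis=1)
        err = np.zeros(len(grid))
        for s, t in zip(floor_points, times):
            d2 = np.linalg.norm(grid - s, axis=1)
            err += ((d1 + d2) / C - t) ** 2
        return grid[np.argmin(err)]

    # Toy 2-D scene: laser spot at the origin, three observed floor points,
    # hidden object 2 m around the corner at (1.0, 2.0).
    laser = np.array([0.0, 0.0])
    spots = np.array([[0.5, 0.0], [1.0, 0.0], [1.5, 0.0]])
    hidden = np.array([1.0, 2.0])

    xs, ys = np.meshgrid(np.linspace(0, 3, 301), np.linspace(0, 3, 301))
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    print(backproject(laser, spots, arrival_times(laser, spots, hidden), grid))
    # -> approximately [1. 2.]

The real system has to do this in 3-D, per pixel, with single returning photons buried in noise, which is where the head-hurting part comes in.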
http://www.youtube.com/watch?v=Pi7iCUSXctY
This is somewhat similar to a hack we saw a few years ago, but that relied on actually shining the laser on the object to create the reflection.
[Thanks Amirgon!]
According to Wikipedia, this was done three years ago: https://en.wikipedia.org/wiki/Femto-photography (see cite note #9). But still awesome.
I think it’s the “around corners” that’s the new part.
Yeah, it’s not that new; here’s MIT’s CORNAR:
http://web.media.mit.edu/~raskar/cornar/
The animation makes me wonder how new this actually is.
Timing must be accurate to 500 nanoseconds? Light travels 1 foot per nanosecond. How do you recreate an image with 500 feet of slop in the measurement?
Accuracy vs. resolution?
500 ns accuracy isn’t that impressive; even the average Arduino can time that. A sensitive camera with a shutter speed fast enough to exploit that timing, on the other hand, is.
So don’t try to impress HaD readers with the wrong things. HaD should not try to be MSM (mainstream media).
Ditto, your average precision laser diode can be pulsed in the range of one nanosecond, some in the range of picoseconds.
500 nSec is a moderate amount of time in my world. It’s only half a microSec.
I once had to think carefully and deeply for about 30 minutes to determine that 15 nSec was *fast enough*. It was the fastest I could wiggle a wire. Turns out, 15 nSec was fast enough, but towards the slow end of the scale.
A 32 MHz clock on an Atmel is 31.25 nSec per cycle. An ARM can easily do 80 MHz, which is 12.5 nSec, and commercial chips can do 800 MHz = 1.25 nSec (quick arithmetic check below).
The interesting part is the camera. And the post processing.
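For anyone who wants to check those clock-period figures, the arithmetic is just 1000 divided by the clock in MHz; a throwaway snippet:

    # Period in nanoseconds is 1000 / f_MHz.
    for mhz in (32, 80, 800):
        print(f"{mhz} MHz -> {1000 / mhz:.2f} ns per cycle")
    # 32 MHz -> 31.25 ns, 80 MHz -> 12.50 ns, 800 MHz -> 1.25 ns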
This may be HaD’s creative interpretation of quantity again. To me “down to the 500 billionth of a second” means “500 billion per second”, ie 1/500,000,000,000 or about two picoseconds. One 500-billionth of a second does not equal 500 billionths of a second, the way one ten-thousandth of a second (0.0001s) is not ten thousandths of a second (0.01s). In two picoseconds light travels about half a millimetre, which makes a lot more sense in context.
I’d imagine whoever wrote the article summary and raved so much about “five hundred *billionths*” doesn’t know much about computers, their head would probably explode if they knew the 1GHz processor in a cheap tablet or an early 2000s PC is running an instruction about every billionth of a second and that a $2 AVR chip can time to about the precision they are swooning over.
I take it back, having read the article on the original source sciencealert.com, it seems HaD have just repeated their nonsense verbatim. The article claims the laser pulses are “10 femtoseconds (100,000 billionths of a second)” long. HaD didn’t make up the meaningless timing rubbish, they are just copy/pasting it from a site that very obviously can’t math good.
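To spell out that five-orders-of-magnitude gap in actual numbers, here is a throwaway Python check (our arithmetic, not anything from the article):

    c_mm_per_s = 3e11  # speed of light, millimetres per second

    one_500_billionth = 1 / 500e9        # "a 500-billionth of a second" = 2 ps
    five_hundred_billionths = 500 / 1e9  # "500 billionths of a second" = 500 ns

    print(c_mm_per_s * one_500_billionth)        # ~0.6 mm of light travel
    print(c_mm_per_s * five_hundred_billionths)  # 150,000 mm, i.e. about the
                                                 # 500 feet of slop noted above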
Dave Vandenbout – I don’t think Jim is talking about “recreating” images with a laser. He’s talking about just “tracking” a target from around the corner. I personally think the US Navy has already done this with ultrasonics and/or radar. Using a laser for terrain mapping is awesome too.
I expected some live demo in the video, not just theory.
Indeed, it is worth noting that in all of the video there is one shot in a laboratory of some dots on a screen, but no actual demo showing that the system is capable of seeing around walls. Also, 500 ns is actually pretty poor resolution for a TOF camera; even the $250 DepthSense 325 provides ~0.01 ns resolution, and higher-performance TOF sensors can get down into the few-ps range.
I can’t see this being used for any real long-range applications unless it was dark out, or there is a way to encode a data stream so they know they are getting back the correct pulse. Also, I would think this is extremely weather-dependent and not necessarily workable in rain or heavy fog.
It’s probably best for indoor applications, where lights give off a very specific range of frequencies you can pretty much guarantee to avoid. You could probably do something similar with sound in open areas as well, if you could generate a collimated enough beam at high frequency, and that would probably work well in daylight; I don’t know how big the sensor would have to be to determine the sound’s position (probably a function of wavelength?). The fact that sound waves are slower would make the timing a lot easier to work out (rough numbers below).

I’m not sure if this has been done with sonar yet. The novelty seems to come from using the energy of a beam spreading on contact with a surface, and having it detected at an arbitrary point rather than close to the emitter. I haven’t heard of it before, and the applications to radar seem like they could be pretty damn useful, since you could conceal a detector’s location while still scanning. Imagine a radar beam directed at a specific location so the base station can’t be found by following the source; you could potentially hide radar stations and fake moving them around.
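How much easier the timing gets with sound is simple to estimate; here is a back-of-envelope comparison (our numbers, not from the article) of the timing precision needed for roughly 1 cm of range resolution with light versus with sound in air:

    c_light = 3.0e8   # speed of light, m/s
    c_sound = 343.0   # speed of sound in air, m/s

    resolution = 0.01  # metres
    # The round trip covers the range twice, so dt = 2 * dr / c.
    print(f"light: {2 * resolution / c_light * 1e12:.0f} ps")  # ~67 ps
    print(f"sound: {2 * resolution / c_sound * 1e6:.0f} us")   # ~58 us

About six orders of magnitude more slack: the difference between exotic gated sensors and an ordinary microcontroller.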
I think it’s *a* 500 billionth of a second, not 500 billionths. Which is 2 picoseconds. (I think…)
I think a “500 billionth” isn’t a unit of measure, but would equal 500 billionths, as they’re essentially the same thing.
“A seventh” is 1/7. Likewise “a five-hundred-billionth” is 1/500,000,000,000 (2e-12). This is quite a different thing from “500 billionths” (500/1,000,000,000 or 5e-7). This could create a five-orders-of-magnitude ambiguity.
The original article on which all the others are based (https://theconversation.com/the-amazing-camera-that-can-see-around-corners-51948) gives a resolution of about 50 ps, which translates to a spatial resolution of about 1 cm.
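That translation is easy to verify, dividing by two because the pulse covers the range out and back:

    c = 3e8            # speed of light, m/s
    dt = 50e-12        # the 50 ps resolution quoted above
    print(c * dt / 2)  # 0.0075 m, i.e. roughly the 1 cm figure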
Bootsy Collins did it first.
https://i.imgur.com/DQO9b0D.png
sometimes I hate this website
@1:31
Why are they using Windows for scientific calculations? Linux with Python and Tk or GTK would be cheaper (it’s free) and easier to use (everything is open source). You could even write a kernel module to accurately count to 500 nanoseconds if you needed that. You can’t run a 500 ns counter under Windows; the WinAPI would be too slow and resource-hungry.
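For what it’s worth, stock Python on Linux can already read a nanosecond-resolution clock without any kernel module; a trivial sketch (the usable precision is tens of nanoseconds on typical hardware, and in any case nowhere near the picosecond gating the camera itself needs):

    import time

    # CLOCK_MONOTONIC_RAW is Linux-specific: it ticks in nanoseconds and
    # ignores NTP adjustments.
    t0 = time.clock_gettime_ns(time.CLOCK_MONOTONIC_RAW)
    t1 = time.clock_gettime_ns(time.CLOCK_MONOTONIC_RAW)
    print(f"two back-to-back reads differ by {t1 - t0} ns")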
of all the places to make a video
they did it in a cafe?
Cleaner floors :-)
(need somewhere that looks ‘realistic’ to demonstrate the principle and perhaps they aren’t very good at keeping the lab clean)
“Can be used in rescue missions!”
2012 MIT video describing the same theory, with actual demo footage:
http://web.media.mit.edu/~raskar/cornar/
Coworker saw it in action two years ago; they have the tech down to a package not much bigger than a DSLR.
It can be used in rescue operations and can be used in cars? Who are they trying to kid? This will be used to kill people in military applications and by SWAT teams.
I think we can all agree that this project is best judged after they stop using 500 billionth of a second frickin’ lasers and start using 5 hundred-billionth of a second chirped pulse lasers.
Unless this is just degradation of signal to noise every time someone reflects their work on a speculative surface.