What’s the best way to image a room? A picture? Hah — don’t be so old-fashioned! You want a LIDAR rig to scan the space and reconstruct it as a 3D point map in your computer.
Hot on the heels of [Saulius Lukse]’s scanning thermometer, he’s replaced the thermal camera on his pan/tilt setup with a time-of-flight (TOF) camera, a Garmin LIDAR capable of 500 samples per second, and scanned his room in a mere fifteen minutes. The pan/tilt position data is combined with the ranging information in Python to produce a point cloud. Open that file in a 3D manipulation program and you’ll be treated to a sight like this:
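The math behind the point cloud is just a spherical-to-Cartesian conversion of each pan/tilt/range sample. A minimal sketch in Python; the function names and angle conventions are our assumptions, not [Saulius]’s actual code:

```python
import math

def to_cartesian(pan_deg, tilt_deg, range_m):
    """Convert a pan/tilt angle pair and a LIDAR range reading to XYZ.

    Assumed convention: pan sweeps around the vertical axis,
    tilt is elevation above the horizontal plane.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return x, y, z

def write_xyz(points, path):
    """Dump points in the plain-text .xyz format most 3D viewers accept."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"{x:.4f} {y:.4f} {z:.4f}\n")
```

At 500 samples per second, a fifteen-minute scan works out to roughly 450,000 points, which is plenty for a recognizable room.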
The world of the subsoil is a fascinating place. Our whole ecosystem depends on its variety of fungi, bacteria, and detritivores that break down dead matter and provide the nutrients to sustain the plants that bring in energy from the sun.
It’s easy enough to study what is happening beneath the surface: just reach for a trowel. But of course that’s an imperfect technique, for it only gives a picture of a world you have just destroyed, and even then only a snapshot.
What if you could image underground, take pictures and video of the decay process and the creatures that are its engine? [Josh Williams] was curious how this could be achieved, so after early experiments with buried webcams proved unimpressive he created the Rhizotron: a flatbed scanner waterproofed for burial with plenty of silicone and driven by a Raspberry Pi. The result was particularly successful, and though he has lost several scanners to water ingress, he has collected some impressive imagery which he has posted on the project’s blog. Below the break we’ve included one of his videos taken with the scanner in a compost bucket, in which you can see decomposition aplenty, mating millipedes, spreading fungal hyphae, and much more.
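The software side of a rig like this can be tiny. Here’s a sketch of a time-lapse capture loop, assuming a Raspberry Pi with SANE’s `scanimage` CLI installed; the resolution and interval are made-up values, not [Josh]’s settings:

```python
import subprocess
import time
from datetime import datetime

def frame_name(t=None):
    """Timestamped filename for one scan."""
    t = t or datetime.now()
    return t.strftime("rhizotron_%Y%m%d_%H%M%S.pnm")

def capture_forever(interval_s=3600, dpi="300"):
    """Grab one flatbed scan per interval.

    scanimage writes image data to stdout, so we redirect it to a
    timestamped file; the frames can later be assembled into video.
    """
    while True:
        with open(frame_name(), "wb") as f:
            subprocess.run(["scanimage", "--resolution", dpi],
                           stdout=f, check=True)
        time.sleep(interval_s)
```

A cron job calling a single-shot version of the loop would work just as well, and survives the inevitable reboots better.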
I was a bit of a lost soul after high school. I dabbled with electrical engineering for a semester but decided that it wasn’t for me – what I wouldn’t give for a do-over on that one. In my search for a way to make money, I stumbled upon radiologic technology – learning how to take X-rays. I figured it was a good way to combine my interests in medicine, electronics, and photography, so after a two-year course of study I got my Associate’s degree, passed my boards, and earned the right to put “R.T.(R) (ARRT)” after my name.
That was about as far as that career went. There are certain realities of being in the health care business, and chief among them is that you really have to like dealing with the patients. I found that I liked the technology much more than the people, so I quickly moved on to bigger and better things. But the love of the technology never went away, so I thought I’d take a look at exactly what it takes to produce medical X-rays, and see how it’s changed from my time in the Radiology Department.
While hackers routinely read and write magstripe cards, this is the first time we’ve reported on optically imaging and decoding the data on a magnetic stripe. [anfratuosus] used magnetic developer, a suspension of micron-sized iron particles designed for visual inspection of magnetic media, which is dropped onto the stripe. To the particles, the stripe looks like a series of bar magnets laid end to end: long magnets represent 0s and short magnets 1s, and with each bit the orientation of the magnet reverses, something like the diagram to the right. The magnetic field is strongest where poles meet, so the iron particles are attracted to these flux-reversal points, making the pattern on the stripe visible. There’s an awesome video of the process in action below.
While magnetic developer was designed for debugging faulty recording systems, [anfratuosus] went a step further, scanning the “developed” card and writing a tool to decode the images and extract the card data. [anfratuosus] doesn’t mention any particular application, but we love this circuitous hack anyway!
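The stripe’s encoding, Aiken biphase (also called F2F), is what makes this decodable from spacing alone: a full-length gap between flux reversals is a 0, while a 1 shows up as two consecutive half-length gaps. A toy decoder along those lines; this is our illustration of the scheme, not [anfratuosus]’s tool:

```python
def decode_f2f(intervals, tol=0.25):
    """Decode a list of measured distances between flux reversals.

    A gap near the full bit-cell length is a 0; two consecutive
    half-length gaps are a 1. Clock recovery here is crude: the
    longest gap is taken as the full cell length, which works when
    the image is sampled at an even spatial rate.
    """
    full = max(intervals)
    bits, i = [], 0
    while i < len(intervals):
        if intervals[i] > full * (1 - tol):
            bits.append(0)
            i += 1
        else:
            bits.append(1)
            i += 2          # a 1 consumes the pair of half gaps
    return bits
```

Decoding from a scanned image is actually friendlier than decoding from a hand swipe, since the spatial sampling is uniform and no wow-and-flutter clock recovery is needed.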
In a perfect futuristic world you have pre-emptive 3D scans of your specific anatomy. They’d be useful to compare changes in your body over time, and to have a pristine blueprint to aid in the event of a catastrophe. As with all futuristic worlds there are some problems with actually getting there. The risks may outweigh the rewards, and cost is an issue, but having 3D imaging of a sick body’s anatomy does have some real benefits. Take a journey with me down the rabbit hole of 3D technology and Gray’s Anatomy.
This picture was taken using a DRAM chip as an image sensor (translated). A decapped 64K DRAM chip was combined with optics that focus an image onto the die. By reading data out of the DRAM, the image can be reconstructed.
DRAM is the type of RAM found on the memory modules plugged into your motherboard. It consists of a massive array of capacitors and transistors: each bit requires just one transistor and one capacitor, which is what makes it so dense. The downside is that the memory must be refreshed periodically to keep the capacitors from discharging.
Exposing the capacitor to light causes it to discharge faster. Once it has discharged past a certain threshold, the bit will flip from one to zero. To take a picture, ones are written to every bit in the DRAM array. By timing how long it takes a bit to flip from one to zero, the amount of light exposure can be determined. Since the DRAM is laid out in an array, each bit can be treated as a pixel to reconstruct the image.
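The readout boils down to a decay-time measurement per cell. A small NumPy simulation of the idea (parameters are arbitrary illustration; the original project read out a real 64K chip):

```python
import numpy as np

def dram_expose(light, threshold=0.5, dt=0.01, t_max=1.0):
    """Simulate the DRAM-as-camera trick.

    Every cell starts fully charged (a written 1). Light discharges
    a cell in proportion to its brightness, and we record when each
    bit flips to 0. Bright pixels flip early, so inverting the flip
    time recovers the image.
    """
    charge = np.ones_like(light, dtype=float)
    flip_time = np.full(light.shape, t_max, dtype=float)
    t = 0.0
    while t < t_max:
        charge -= light * dt                      # photo-discharge step
        flipped = (charge < threshold) & (flip_time >= t_max)
        flip_time[flipped] = t
        t += dt
    return t_max - flip_time                      # brighter = larger value
```

Feeding in a 2D `light` array gives back a 2D image, since each DRAM bit maps to one pixel of the array.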
Sure, modern CCDs are better, cheaper, and faster, but this hack is a neat way to totally re-purpose a chip. There’s even Turbo Pascal source if you’d like to recreate the project.
[Ben Krasnow] built his own version of the TSA’s body scanner. The device works by firing a beam of x-rays at a target. Some of the beam goes through the target, some is absorbed by the target, and some reflects back. These reflected x-rays are called ‘backscatter‘, and they are captured to create an image.
In [Ben]’s setup, a rotating disk focuses the x-rays into a beam that sweeps in arcs across the X-axis, while the disk assembly moves along the Y-axis to fill in the scan. A potentiometer on the assembly measures the beam’s Y position, and an optical sensor triggers an oscilloscope to align the left and right sides of each sweep. Using these two sensors, the scope can reconstruct an X-Y plot of the scan.
To detect the x-rays, a phosphor screen converts the backscattered x-rays into visible light, and a photomultiplier amplifies that light. A simple amplifier circuit feeds the photomultiplier’s output to the scope, controlling the brightness at each point.
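Conceptually the scope is binning three signals: beam X position (from the disk’s timing), beam Y position (from the potentiometer), and brightness (from the photomultiplier). A sketch of the same reconstruction done in software; this is our illustration, since [Ben] does it in analog on the scope:

```python
import numpy as np

def reconstruct(samples, width=64, height=64):
    """Accumulate (x, y, brightness) samples into an image.

    x and y are normalized beam positions in [0, 1]; repeated visits
    to the same pixel are averaged. Pixels never visited stay zero.
    """
    acc = np.zeros((height, width))
    hits = np.zeros((height, width))
    for x, y, b in samples:
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        acc[row, col] += b
        hits[row, col] += 1
    return np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)
```

Averaging repeated visits is the digital analogue of the scope phosphor integrating brightness over multiple sweeps.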
The result is very similar to the TSA version, and [Ben] managed to learn a lot about the system from a patent. This isn’t the first body scanner we’ve seen though: [Jeri Ellsworth] built a microwave version a couple years ago.
The impressive build does a great job of teaching the fundamentals of backscatter imaging. [Ben] will be talking about the project at EHSM, which you should check out if you’re in Berlin from December 28th to the 30th. After the break, watch [Ben]’s machine scan a turkey in a Christmas sweater.