The technique is called NLOS, or non-line-of-sight imaging. Previous approaches required scanning a large area to pick up indirect light from hidden objects. This new approach instead uses a laser to find objects that are moving: as the object moves, the indirect measurements change, and an algorithm can invert those changes to recover the object’s characteristics.
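To get a feel for what “inverting the measurements” means, here is a toy sketch in Python. It assumes, as a gross simplification, that each sensor reading is a linear mix of scene patches; the real reconstruction is far more involved, and everything here (the matrix, the scene) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 64))        # toy forward model: scene -> sensor
background = rng.random(64)               # static hidden scene
obj_t0 = np.zeros(64); obj_t0[10] = 1.0   # the moving object at two instants
obj_t1 = np.zeros(64); obj_t1[11] = 1.0

y0 = A @ (background + obj_t0)            # measurement at time 0
y1 = A @ (background + obj_t1)            # measurement at time 1
dy = y1 - y0                              # the static background cancels out

dx, *_ = np.linalg.lstsq(A, dy, rcond=None)
print(np.argsort(np.abs(dx))[-2:])        # recovers positions 10 and 11
```

Because the background term is identical in both snapshots, subtracting them isolates the moving object before the inversion even starts, which is the trick that saves all that scanning.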
A makerspace is a great place to use specialty tools that may be too expensive or too large to own yourself, but there are other perks that come with participation in that particular community. For example, the skills you’ve gained using all that fancy equipment may make you employable in some very niche situations. [lukeiamyourfather] from the Dallas Makerspace recently found himself in just that situation, and was asked to image a two-million-year-old fossil.
The fossil was to be placed into a CT machine for imaging, but it was too thick to scan properly lying flat. These things tend to be fragile, so he spent some time laser-cutting an acrylic stand that would hold the fossil vertically instead of horizontally. Everything that wasn’t fossil also had to be non-conductive for the CT machine, so lots of fishing line and foam were used as well. After the imaging was done, he was also asked to 3D print a model for a display in the museum.
This is all going on at the Perot Museum of Nature and Science, if you happen to be in the Dallas area. It’s interesting to see these skills put to use out in the wild, especially on something as rare and fragile as a two-million-year-old fossil. Also, if you’d like to see whether your local makerspace measures up to the Dallas Makerspace, we featured a tour of it back in 2014, although they have probably made some updates since then.
Medical imaging is one of the very best applications of technology: it allows us to peer inside of the human body without actually performing surgery. It’s non-destructive testing to the extreme, and one of the more interesting projects we’ve seen over the past year uses AC currents and an infinite grid of resistors to image the inside of a living organism. It’s called Spectra, and it is the brainchild of [Jean Rintoul]. Her talk at the Hackaday Superconference is all about low-cost, open-source biomedical imaging.
We’ve seen some interesting medical imaging hacks in the Hackaday Prize over the years. There have been vein finders and even a CT scanner, but when it comes to biomedical imaging, the Spectra project is something different. Right now, it’s just good enough to image organs while they’re still inside your body, and there’s still a lot of potential to do more. Let’s take a closer look at how this works.
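To see why voltages measured at the edge of a body can reveal what’s inside, here is a toy resistor-grid model in Python. This is not Spectra’s code, just a minimal sketch: we inject current at two boundary nodes of an 8×8 grid, solve Kirchhoff’s laws via the graph Laplacian, and watch the boundary voltages shift when one interior “tissue” connection becomes more resistive:

```python
import numpy as np

N = 8                                     # 8x8 grid of nodes

def grid_laplacian(edge_cond):
    """Graph Laplacian of the grid; edge_cond overrides unit conductances."""
    L = np.zeros((N * N, N * N))
    for r in range(N):
        for c in range(N):
            i = r * N + c
            for dr, dc in ((0, 1), (1, 0)):   # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < N and cc < N:
                    j = rr * N + cc
                    g = edge_cond.get((i, j), 1.0)
                    L[i, i] += g; L[j, j] += g
                    L[i, j] -= g; L[j, i] -= g
    return L

def boundary_voltages(edge_cond):
    L = grid_laplacian(edge_cond)
    rhs = np.zeros(N * N)
    rhs[0], rhs[N - 1] = 1.0, -1.0        # inject/extract current at two edge nodes
    L[-1, :] = 0.0; L[-1, -1] = 1.0       # ground the last node to fix the potential
    rhs[-1] = 0.0
    v = np.linalg.solve(L, rhs)
    return v[:N]                          # voltages along the top edge only

baseline = boundary_voltages({})
# Make one connection deep inside the grid ten times more resistive.
perturbed = boundary_voltages({(3 * N + 3, 3 * N + 4): 0.1})
print(np.abs(perturbed - baseline).max())  # nonzero: the boundary sees inside
```

A real electrical impedance tomography system runs this in reverse: it measures those boundary shifts across many electrode pairs and solves the inverse problem for the interior conductivity map.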
The Day of Compulsory Romance is once more upon us, and in recognition of that fact we submit for your approval an alternative look at one of the symbols of romantic love: an X-ray of a rose.
Normally, diagnostic X-rays are somewhat bland representations of differential densities of the tissues that compose various organs and organ systems, generally rendered in shades of gray. But [Linas K], a physicist with time to kill and access to some cool tools, captured the images in the video below not only in vivid color, but as a movie. The imaging side of the project consists of a low-power X-ray tube normally used for non-clinical applications and a CMOS sensor panel. The second video shows that [Linas] did an impressive upgrade on the X-ray controller, rolling his own from an FPGA. This allowed him to tie in a stepper motor to rotate the rose incrementally while taking images, which he stitched together in post.
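The capture loop itself is conceptually simple. Here is a rough Python sketch of the rotate-and-shoot sequence; step_motor() and grab_frame() are hypothetical stand-ins for [Linas]’s FPGA controller and CMOS panel, stubbed out with synthetic frames so the sketch actually runs:

```python
import numpy as np
import imageio.v2 as imageio

def step_motor(degrees):
    pass                                  # would pulse the stepper driver

def grab_frame(angle):
    # Stand-in for an exposure; a real frame would come off the sensor.
    x = np.linspace(0, 1, 128)
    img = np.outer(np.sin(x * 10 + np.radians(angle)), x)
    return (255 * (img - img.min()) / np.ptp(img)).astype(np.uint8)

frames = []
for i in range(180):                      # 2 degrees per exposure, full turn
    step_motor(degrees=2.0)
    frames.append(grab_frame(angle=2.0 * i))

imageio.mimsave("rose.gif", frames)       # stitch the exposures into a movie
```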
Watching the interior structure of the flower as it spins is fascinating and romantic in its own right, for certain subsets of romance. And really, who wouldn’t appreciate the work that went into this? But if you don’t have access to X-ray gear, fear not — a lovely Valentine’s gift is only a bottle of ferric chloride away.
Like many people who have a solar power setup at home, [Jeroen Boeye] was curious to see just how much energy his panels were putting out. But unlike most people, it just so happens that he’s a data scientist with a deep passion for programming and a flair for visualizations. In his latest blog post, [Jeroen] details how his efforts to explain some anomalous data ended with the discovery that his solar array was effectively acting as an extremely low-resolution camera.
It all started when he noticed that in some months, the energy produced by his panels was not following the expected curve. Generally speaking, the energy output of stationary solar panels should follow a clean bell curve: output rises as the sun climbs toward its ideal position, then falls as it moves away. Naturally, cloud cover can impact this, but clouds come and go; they shouldn’t carve the same dip into the data day after day.
[Jeroen] eventually came to realize that the dips in power generation were due to two large trees in his yard. This gave him the idea of seeing if he could turn his solar panels into a rudimentary camera. In theory, if he compared the actual versus expected output of his panels at any given time, the results could be used as “pixels” in an image.
He started by creating a model of the ideal energy output of his panels throughout the year, taking into account not only obvious variables such as the changing elevation of the sun, but also energy losses through atmospheric dispersion. This model was then compared with the actual power output of his solar panels, and periods of low efficiency were plotted as darker dots to represent an obstruction. Finally, the plotted data was placed over a panoramic image taken from the perspective of the solar panels. Sure enough, the periods of low panel efficiency lined up with the trees and buildings that are in view of the panels.
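If you want to play with the idea yourself, here is a condensed sketch of the approach. To be clear, this is not [Jeroen]’s code: the location, the “measured” data, and the fake tree are all invented, and it leans on the pvlib library for solar positions:

```python
import numpy as np
import pandas as pd
import pvlib
import matplotlib.pyplot as plt

times = pd.date_range("2017-06-01", "2017-08-31", freq="15min",
                      tz="Europe/Brussels")
sun = pvlib.solarposition.get_solarposition(times, 51.0, 4.7)
sun = sun[sun["apparent_elevation"] > 2]          # daylight samples only

# Crude clear-sky model: output scales with solar elevation.
expected = np.sin(np.radians(sun["apparent_elevation"]))
# Fake a tree blocking part of the southeastern sky.
shaded = (sun["azimuth"].between(110, 140)
          & sun["apparent_elevation"].between(5, 25))
measured = expected * np.where(shaded, 0.3, 1.0)

eff = measured / expected                          # each ratio is one "pixel"
plt.scatter(sun["azimuth"], sun["apparent_elevation"],
            c=eff, cmap="gray", s=4)
plt.xlabel("azimuth (deg)"); plt.ylabel("elevation (deg)")
plt.savefig("panel_camera.png")
```

Every fifteen-minute efficiency ratio lands at the sun’s sky position for that moment, so anything that shades the panels at a repeatable sun angle shows up as a dark patch, much like a very slow, very blurry pinhole camera.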
One of the modern marvels in our medical toolkit is ultrasound imaging. A drawback, however, is that it only displays 2D images. How expensive do you think it would be to retrofit an ultrasound machine to produce 3D images? Try a $10 chip and pennies’ worth of plastic.
While playing the Wii with his son, of all things, [Joshua Broder, M.D.], an emergency physician and associate professor of surgery at [Duke Health], realized he could port the Wii’s gyroscopic sensor to ultrasound technology. He did just that with the help of [Matt Morgan, Carl Herickhoff and Jeremy Dahl] from [Duke’s Pratt School of Engineering] and [Stanford University]. The team mounted the sensor onto the side of the probe with a 3D-printed collar. The sensor relays orientation data to the computer running software that sutures the images together into a complete 3D image in near real time, turning a $50,000 ultrasound machine into its $250,000 equivalent.
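The stitching step boils down to rotating each 2D frame’s pixels into a shared 3D coordinate system using the sensor’s orientation. Here is a toy sketch of that idea, with made-up names and random stand-in frames, using SciPy’s rotation utilities:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def frame_to_points(frame, quat, px_mm=0.2):
    """Lift a 2D frame's pixel grid into 3D scanner coordinates."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each frame starts life flat in the probe's own plane (z = 0).
    pts = np.stack([xs.ravel() * px_mm, ys.ravel() * px_mm,
                    np.zeros(h * w)], axis=1)
    return Rotation.from_quat(quat).apply(pts)

cloud, vals = [], []
for angle in np.linspace(0.0, np.pi / 2, 10):   # simulate a slow fan sweep
    frame = np.random.rand(64, 64)              # stand-in for real image data
    quat = Rotation.from_euler("x", angle).as_quat()  # what the IMU reports
    cloud.append(frame_to_points(frame, quat))
    vals.append(frame.ravel())

cloud = np.vstack(cloud)                        # an N x 3 point cloud
vals = np.concatenate(vals)                     # one intensity per point
print(cloud.shape, vals.shape)
```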
They say that a picture is worth a thousand words. But what is a picture, exactly? One definition would be a faithful reflection of what we see, like one taken with a basic camera. Our view of the natural world is constrained to wavelengths from roughly 400 to 700 nanometers of the electromagnetic spectrum, so our cameras produce images within this same band.
For example, if I take a picture of a yellow flower with my phone, the image will look just about how I saw it with my own eyes. But what if we could see the flower from a different part of the electromagnetic spectrum? What if we could see below 400 nm or above 700 nm? A bee, like many other insects, can see into the ultraviolet part of the spectrum, the region below 400 nm. That same “yellow” flower looks drastically different to a bee than it does to us.
In this article, we’re going to explore how images can be produced to show spectral information outside of our limited visual capacity, and take a look at the multi-spectral cameras used to make them. We’ll find that while it may be true that an image is worth a thousand words, it is also true that an image taken with a hyperspectral camera can be worth hundreds of thousands, if not millions, of useful data points.
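That claim about data points is easy to make concrete. A hyperspectral image is a cube with two spatial axes plus a spectral one, so every pixel carries an entire spectrum. Here is a short sketch using a synthetic cube; a real one would come off a camera or out of a data file:

```python
import numpy as np

wavelengths = np.linspace(400, 1000, 120)           # 120 bands, in nanometers
cube = np.random.rand(256, 256, wavelengths.size)   # stand-in for real data

def band(cube, wavelengths, nm):
    """Pull out the image plane closest to the requested wavelength."""
    return cube[:, :, np.argmin(np.abs(wavelengths - nm))]

# NDVI, a classic vegetation index, compares near-infrared to red light.
nir = band(cube, wavelengths, 800)
red = band(cube, wavelengths, 660)
ndvi = (nir - red) / (nir + red + 1e-9)

print(f"{cube.size:,} data points in a single image")  # about 7.9 million
```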