One of the core lessons any physics student comes to realize is that the more you know about physics, the less intuitive it seems. Take the nature of light, for example. Is it a wave? A particle? Both? Neither? Whatever the answer, scientists are at least able to exploit some of its characteristics, like its ability to bend and bounce off of obstacles. This camera, for example, is able to image a room without a direct line of sight as a result.
The process works by pointing a camera through an opening in the room and then strobing a laser at the exposed wall. The laser light bounces off of the wall, into the room, off of the objects on the hidden side of the room, and then back to the camera. This concept isn’t new, but the interesting thing this group has done is lift the curtain on the image processing underpinnings. Before, the process required a research team and often the backing of a university, but this project shows off the technique using just a few lines of code.
This project’s page documents everything extensively, including all of the algorithms used for reconstructing an image of the room. And by the way, it’s not a simple 2D image, but a 3D model that the camera can capture. So there should be some good information for anyone working in the 3D modeling world as well.
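While we won’t reproduce the project’s code here, the general flavor of this kind of reconstruction is backprojection: every photon arriving at time t could have come from any hidden point whose round-trip path length matches c·t, so each time bin gets smeared back over the matching voxels, and the overlaps build up the scene. A toy, unfiltered sketch (our own illustration, with hypothetical array shapes, not the project’s actual code) might look like this:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(transient, wall_pts, voxels, bin_time):
    # transient[i, t]: photon counts seen after strobing wall point i,
    # in time bin t (laser-to-wall and wall-to-camera legs assumed
    # already calibrated out); wall_pts: (M, 3); voxels: (N, 3)
    scene = np.zeros(len(voxels))
    bin_len = C * bin_time  # path length covered per time bin
    for i, w in enumerate(wall_pts):
        # round trip from the wall point to each candidate voxel and back
        d = 2.0 * np.linalg.norm(voxels - w, axis=1)
        bins = (d / bin_len).astype(int)
        ok = bins < transient.shape[1]
        scene[ok] += transient[i, bins[ok]]
    return scene  # filter and threshold to pull out the hidden geometry
```

Real implementations add a filtering step and careful calibration, but the core loop really is about this small.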
Filming in slow motion has long since become a standard feature on the higher end of the smartphone spectrum, and can turn the most trivial physical activity into a majestic action shot to share on social media. It also unveils some little wonders of nature that are otherwise hidden to our eyes: the formation of a lightning flash during a thunderstorm, a hummingbird flapping its wings, or an avocado reaching that perfect moment of ripeness. Altogether, it’s a fun way of recording videos, and as [Robert Elder] shows, something you can do with a few dollars’ worth of Raspberry Pi equipment at a whopping rate of 660 FPS, if you can live with some limitations.
Taking the classic 24 FPS as a baseline, this will turn a one-second video into a nearly half-minute-long slo-mo-fest. To achieve such a frame rate in the first place, [Robert] uses [Hermann-SW]’s modified version of raspiraw to get raw image data straight from the camera sensor into the Pi’s memory, leaving all the heavy lifting of processing it into an actual video for after all the frames are retrieved. RAM size is of course one limiting factor for recording length, but memory bandwidth is the bigger problem, restricting the resolution to 64×640 pixels on the cheaper $6 camera model he uses. Yes, sixty-four pixels tall — but hey, look at that super wide-screen aspect ratio!
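The math behind that half-minute claim checks out:

```python
capture_fps = 660    # what the sensor records
playback_fps = 24    # what the finished video plays back at
seconds_recorded = 1.0

slowmo_seconds = seconds_recorded * capture_fps / playback_fps
print(f"{slowmo_seconds:.1f} s of playback")  # 27.5 s, just shy of half a minute
```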
While you won’t get the highest quality out of this, it’s still an exciting and inexpensive way to play around with slow motion. You can always step up your game though, and have a look at this DIY high-speed camera instead. And well, here’s one mounted on a lawnmower blade destroying anything but a printer.
[Cole Price] describes himself as a photographer and a space nerd. We’ll give that to him since his web site clearly shows a love of cameras and a love of the NASA programs from the 1960s. [Cole] has painstakingly made replicas of cameras used in the space program including a Hasselblad 500C used on a Mercury flight and another Hasselblad used during Apollo 11. His work is on display in several venues — for example, the 500C is in the Carl Zeiss headquarters building.
[Cole] has only made a detailed post about the 500C so far, plus a teaser about the Apollo 11 camera. However, there’s a lot of detail about what NASA — and an RCA technician named [Red Williams] — did to get the camera space-ready.
Taking a selfie before the modern smartphone era was a true endeavor. Flip phones didn’t have forward-facing cameras, and if you go really far back to the days of film cameras, you needed to set a timer on your camera and hope for the best, or use a physical remote shutter. You could also try to create a self-portrait on an Etch a Sketch, but that would take a lot of time and artistic skill. Luckily, in the modern world we can bring some of this old technology into the future and add a robot to create interesting retro selfies – without needing to be an artist.
The device from [im-pro] attaches two servos to the Etch a Sketch knobs. This isn’t really a new idea in itself, but the device also includes a front-facing camera, taking advantage of particularly inexpensive ESP32 camera modules. Combining the camera features with [Bart Dring]’s ESP32 Grbl port is a winner. Check out the code on [im-pro]’s GitHub.
Once the picture is taken, the ESP32 at the heart of the build handles the image processing and then draws the image on the Etch a Sketch. The robot needs a black-and-white image to draw, plus an algorithm for drawing it without “lifting” the drawing tool, and these tasks stretch the capabilities of such a small processor. It takes some time to work, but in the end the results speak for themselves.
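To get a feel for that no-lifting constraint, here’s a toy approach (our own sketch, not [im-pro]’s actual algorithm): threshold the photo, then greedily chain the dark pixels into one unbroken path. The O(n²) nearest-neighbor search also hints at why a small processor takes its time.

```python
import numpy as np
from PIL import Image

def continuous_path(img_file, threshold=128, step=4):
    """Order dark pixels into one unbroken path, since an
    Etch a Sketch can never lift its stylus."""
    img = np.array(Image.open(img_file).convert("L"))
    # subsample to keep the point count manageable
    ys, xs = np.nonzero(img[::step, ::step] < threshold)
    pts = list(zip(xs * step, ys * step))
    path = [pts.pop(0)]
    while pts:
        x0, y0 = path[-1]
        # always move to the closest remaining point
        i = min(range(len(pts)),
                key=lambda j: (pts[j][0] - x0) ** 2 + (pts[j][1] - y0) ** 2)
        path.append(pts.pop(i))
    return path  # stream these as X/Y moves to the Grbl port
```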
The final project is definitely worth a look, if not for the interesting ESP32-controlled robot then for the image processing algorithm implementation. The ESP32 is a truly versatile platform, though, and is useful for building almost anything.
Robots have certainly made the world a better place. Virtually everything from automobile assembly to food production uses a robot at some point in the process, not to mention those robots that can clean your house or make your morning coffee. But not every robot needs such a productive purpose. This one lets you punch the world, which, while not producing as much physical value as a welding robot on an assembly line might, certainly seems to have some therapeutic effects at least.
The IoT Planet Puncher comes to us from [8BitsAndAByte], who build lots of different things of equally dubious function. This one allows us to take out our frustrations on the world by punching it (or rather, a small model of it). A small painted sphere sits in front of a 3D-printed boxing glove mounted on a linear actuator, driven by a Raspberry Pi. The Pi’s job doesn’t end there, though, as the project also uses a Pi camera to take video of the globe and serve it on a webpage, through which anyone can control the punching glove.
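As a rough idea of how a punch-on-demand endpoint could work (a minimal sketch with a hypothetical GPIO pin and framework choice, not [8BitsAndAByte]’s actual code):

```python
import time
from flask import Flask
import RPi.GPIO as GPIO

ACTUATOR_PIN = 18  # hypothetical pin driving the actuator's relay/driver
app = Flask(__name__)
GPIO.setmode(GPIO.BCM)
GPIO.setup(ACTUATOR_PIN, GPIO.OUT)

@app.route("/punch")
def punch():
    GPIO.output(ACTUATOR_PIN, GPIO.HIGH)  # extend the boxing glove
    time.sleep(0.3)                       # let the punch land
    GPIO.output(ACTUATOR_PIN, GPIO.LOW)   # retract for the next visitor
    return "The world has been punched."

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Pair that with the Pi camera’s live video stream on the same page and you have the gist of the whole build.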
While not immediately useful, we certainly had fun punching it a few times, and at one point a mysterious hand entered the shot to make adjustments to the system. Projects like this are good fun, and sometimes you just need to build something, even if it’s goofy, because the urge strikes you.
From time to time, we see electronics projects for model rocket instrumentation. Those who have been involved in the hobby for many years may remember when 8-bit microcontrollers like the PIC16F84 were the kind of hardware you might fly on a mission. These days, however, there’s little reason not to send a high-powered processor. This is exactly what [Mohamed Elhariry] has done with his PiX project, which turns a Raspberry Pi Zero W into a neat little flight data recorder.
The hardware has what you might expect from a flight recorder, including accelerometer, gyroscope, and pressure sensor. In addition, it carries temperature and humidity sensors, and of course, a camera. A 64 GB microSD card provides the storage, while a LiPo SHIM board allows the whole thing to run from a 150 mAh battery. All of the components are off-the-shelf breakouts, which makes assembly as easy as soldering a few connections and securing the modules with a little tape.
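For a flavor of what the recording loop might look like, here’s a minimal sketch with hypothetical read_*() helpers standing in for the real breakout drivers (the actual PiX code lives in the repo mentioned below):

```python
import csv
import time

def read_imu():   # placeholder for the accelerometer/gyro breakout driver
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

def read_baro():  # placeholder for the pressure sensor driver
    return 101325.0

with open("/home/pi/flight.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["t", "ax", "ay", "az", "gx", "gy", "gz", "pressure"])
    while True:
        log.writerow([time.monotonic(), *read_imu(), read_baro()])
        time.sleep(0.01)  # ~100 Hz, plenty to catch motor burnout and apogee
```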
The project is on GitHub, including Python code, schematics for the hardware, and detailed instructions. If you ever wanted to get started with instrumenting a model rocket, this looks like a great resource. Also in the repo is captured video from an actual flight [34 MB GIF] if you just want to see the view from one launch.
In the hacker and DIY community, there are people who have exceptional knowledge and fantastic tools. These people are able to do what the rest of us can only dream about, while we browse eBay looking for that one tool we’d need to do the job. One such person is [John McMaster], the resident expert on looking inside integrated circuits. He drops acid on a chip, and he can tell you exactly how it works on the inside.
At the hardwear.io conference, [John] shared one of his techniques for reverse-engineering integrated circuits. He does this by simply looking at the transistors and the light they give off. He’s also looking at the wrong side of the die.
The technique [John] is using is properly called backside analysis: looking at the infrared photons emitted by electron-hole recombination. This happens at the junction of every transistor when it’s active, and the photons are emitted at the bandgap of silicon, around 1088 nm, far into the infrared. This sort of thing has been done before by [nedos] at CCC in 2013, but rarely have we seen a deep dive into the tools and techniques needed to look at the reverse side of an IC and see the photons coming off.
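That wavelength falls straight out of the photon-energy relation λ = hc/E. A quick back-of-the-envelope check:

```python
# photon wavelength corresponding to the silicon bandgap
h = 4.1357e-15   # Planck constant, eV*s
c = 2.9979e8     # speed of light, m/s
E_gap = 1.12     # silicon bandgap at room temperature, eV

wavelength_nm = h * c / E_gap * 1e9
print(f"{wavelength_nm:.0f} nm")  # ~1107 nm; temperature and hot-carrier
                                  # effects shift the observed peak a bit,
                                  # hence the quoted ~1088 nm
```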
There are several tools [John] used for this work, and he actually did a good comparison of different camera technologies used to image infrared photon emissions from integrated circuits. InGaAs cameras are expensive, but they offer high sensitivity. New back-illuminated CMOS cameras and cooled CCDs normally reserved for astrophotography were also tested, and as always, you get what you pay for; the most expensive cameras worked best, but there were ways you could make the cheap ones work.
As with any camera work, preparing the lighting is of utmost importance. This includes an IR pass filter, and using only LED lighting in the lab with no sunlight, incandescent, or halogen light bulbs in the room — you don’t want any stray IR, after all. A NIR objective for the microscope was sourced from eBay for about 1/10th the normal cost, because the objective had a small, insignificant scratch. Using this NIR objective made the image twice as bright as any other method. You can successfully image a chip with this, and [John] tested the setup on a resistor inside a CD4050 chip; the resistor glowed a slight purple, the color you’d expect from an infrared-sensitive camera. But can it capture I/O switching in a more modern chip? Also, yes. It takes some Photoshop work, stretching the 12-bit or 16-bit sensor data into an 8-bit color space, but it does work.
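That bit-depth stretch doesn’t strictly need Photoshop; a minimal numpy version (our guess at the kind of processing described, not [John]’s exact steps) looks like this:

```python
import numpy as np

def stretch_to_8bit(frame, low_pct=1.0, high_pct=99.9):
    """Map a dim 12/16-bit sensor frame onto the full 8-bit display range."""
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    scaled = (frame.astype(np.float32) - lo) / max(hi - lo, 1.0)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)
```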
Finally, there’s the supreme achievement: backside IR analysis. Is that possible with even this minimal setup? It requires some preparation; the silicon substrate in an IC is transparent in IR, but there is attenuation, which matters a great deal when the substrate is 300 µm thick. It needs to be shaved down to about 25 µm, which surprisingly is best done with fine sandpaper and a finger.
While few IR emissions were observed through the backside, the original plan wasn’t to completely analyze the chip, merely to do some floor planning. For that, it worked. It’s a remarkable amount of work to see the inside of a silicon chip.