It’s no secret that a lot of time, money, and effort goes into photographing and filming all that delicious food you see in advertisements. Mashed potatoes in place of ice cream, carefully arranged ingredients on subs, and perfectly golden french fries are all things you’ve seen so often that you take them for granted. But those are static shots – the food is almost always just sitting on a plate. At most, you might see a chef turning a steak or searing a fillet in a commercial for a restaurant. What takes real skill – both artistic and technical – is assembling a hamburger in mid-air and getting it all in stunning 4K video.
That’s what [Steve Giralt] set out to do, and to accomplish it he had to get creative. Each component of the hamburger was suspended by rubber bands, and an Arduino-timed and -controlled servo system cut each rubber band just before that ingredient entered the frame. There’s even a 3D printed dual-catapult system to fling the condiments, causing them to collide at just the right spot to land in place on the burger.
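The timing problem a rig like this has to solve can be sketched with simple free-fall physics. Here’s a minimal Python sketch (drop heights are made-up numbers, and air resistance is ignored) that works out how long after the first cut each remaining band should be cut so every ingredient arrives in frame at the same instant:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m):
    """Time for an object to free-fall a given height (air resistance ignored)."""
    return math.sqrt(2 * height_m / G)

def release_schedule(drop_heights_m):
    """Given each ingredient's drop height above the frame, return the delay
    (seconds) before its rubber band should be cut so that everything
    arrives in frame at the same instant."""
    times = {name: fall_time(h) for name, h in drop_heights_m.items()}
    longest = max(times.values())
    # The ingredient with the longest fall is released first (delay 0);
    # everything else waits out the difference.
    return {name: longest - t for name, t in times.items()}

# Hypothetical heights above the frame for each ingredient, in meters
heights = {"bun_top": 0.30, "patty": 0.50, "lettuce": 0.20}
for name, delay in sorted(release_schedule(heights).items(), key=lambda kv: kv[1]):
    print(f"cut {name} band at t + {delay * 1000:.0f} ms")
```

On an Arduino the same numbers would simply become delays between the servo commands that cut each band.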
We use touch screens all the time these days, and though we all know they support multiple touch events, it’s easy to take them for granted and forget that they are rather accomplished sensor arrays in their own right.
[Optismon] has long held an interest in capacitive touch screen sensors, and has recently turned his attention to the official Raspberry Pi 7-inch touchscreen display. He set out to read its raw capacitance values, and ended up with a fully functional 2D capacitive imaging device able to sense hidden nails and woodwork in his drywall.
Reading the capacitance values is not a job for the faint-hearted, though. The display’s I2C bus is handled by the Pi’s GPU rather than the processor, so reading it in software would require a change to the Pi’s infamous Broadcom binary blob. His solution, which he admits is non-optimal, was to take another of the Pi’s I2C lines, one he could talk to from software, and connect it in parallel with the display line. As a result he can catch the readings from the screen’s sensors and, with a bit of scripting, render a 2D image on the screen. The outlines of hands and objects on his desk can clearly be seen when he places them on the screen, and when he runs the device over his wall it shows the position of the studs and nails behind the drywall.
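As a rough illustration of what that scripting might look like, here’s a minimal Python sketch that turns a frame of raw per-node capacitance values into an ASCII “image”. The hardware read over the parallel I2C connection is stubbed out with fake data, and the electrode grid size is an assumption; the real register layout would have to come from the touch controller’s datasheet:

```python
# Sketch: render raw touchscreen capacitance readings as a coarse 2D image.
# The I2C read is replaced with fake data; COLS/ROWS are assumed values for
# a 7-inch panel, not documented figures.

COLS, ROWS = 24, 14  # assumed electrode grid dimensions

def to_heatmap(raw, cols=COLS, rows=ROWS, shades=" .:-=+*#%@"):
    """Map a flat list of per-node capacitance values to ASCII rows,
    scaling the min..max range onto the shade ramp."""
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1
    def shade(v):
        return shades[int((v - lo) * (len(shades) - 1) / span)]
    return ["".join(shade(raw[r * cols + c]) for c in range(cols))
            for r in range(rows)]

# Fake frame: a "hot spot" in the middle, as a hand on the glass might look
frame = [100] * (COLS * ROWS)
for r in range(5, 9):
    for c in range(10, 14):
        frame[r * COLS + c] = 400

for line in to_heatmap(frame):
    print(line)
```

In the real device the fake frame would be replaced by block reads of the controller’s raw-data registers over the second I2C bus.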
The sensor on your digital camera picks up a lot more than just the light that’s visible to the human eye. Camera manufacturers go out of their way to reduce this to just the visible spectrum in order to produce photos that look right to us. But, what if you want your camera to take photos of the full light spectrum? This is particularly useful for astrophotography, where infrared light dramatically adds to the effect.
Generally, accomplishing this is just a matter of removing the internal IR-blocking filter from your camera. However, most of us are a little squeamish about tearing into our expensive DSLRs. This was the dilemma that [Gavin] faced until a couple of years ago when he discovered the Canon EOS-M.
Now, it’s important to point out that one could do a similar conversion with just about any cheap digital camera and save a lot of money (they practically give those things away now). But, as any photography enthusiast knows, lenses are just as important as the camera itself (maybe even more so).
So, if you’re interested in taking nice pictures, you’ve got to have a camera with an interchangeable lens. Of course, if you’re already into photography, you probably already have a DSLR with some lenses. This was the case for [Gavin], and so he needed a cheap digital camera that used Canon interchangeable lenses like the ones he already had. After finding the EOS-M, the teardown and IR-blocking filter removal was straightforward with just a couple of hiccups.
When [Gavin] wrote his post in 2014, the EOS-M was about $350. Now you can buy them for less than $150 used, so a conversion like this is definitely into the “cheap enough to tinker” realm. Have a Nikon camera? The Nikon 1 J3 is roughly equivalent to the original EOS-M, and is about the same price. Want to save even more money, and aren’t concerned with fancy lenses? You can do a full-spectrum camera build with a Raspberry Pi, with the added benefit of being able to adjust what light is let in.
One of the problems with a cheap drone is getting good video, especially in real time. Cheap hobby quadcopters often have a camera built-in or mounted in a fixed position. That’s great for fun shots, but it makes it hard to get just the right shot, especially as the drone tilts up and down, taking the camera with it. Pricey drones often have a gimbal mount to keep the camera stable, but you are still only looking in one direction.
Some cheap drones now have a VR (virtual reality) mode to feed signal to a headset or a Google Cardboard-like VR setup. That’s hard to fly, though, because you can’t really look around without moving the drone to match. You can mount multiple cameras, but now you’ve added weight and power drain to your drone.
MAGnet Systems wants to change all that with a lightweight spherical camera made to fit on a flying vehicle. The camera is under 2.5 inches square, weighs 62 grams, and draws less than 3 watts at 12 volts. It captures a field of view spanning 360 degrees around the drone’s front and back and 240 degrees centered directly under it, covering everything from 30 degrees above the horizon down to directly below the drone. A different lens can apparently extend coverage to 280 degrees if you need it, although that adds size and weight and is better suited to use on the ground.
The software (see video below) runs on Windows or Android (they’ve promised an iOS version) and there’s no additional image processing hardware needed. The camera can also drive common VR headsets.
One of last year’s Hackaday Prize finalists was the DOLPi, [Dave Prutchi]’s polarimetric camera which used an LCD sheet from a welder’s mask placed in front of a Raspberry Pi camera. Multiple images were taken by the DOLPi at different polarizations and used to compute images designed to show the polarization of the light in each pixel and convey it to the viewer through color.
[Dave] wrote to tip us off about [Paul Wallace]’s take on the same idea: a DOLPi-inspired polarimetric camera built around an iPhone, with an ingenious solution to the problem of calibrating the device to the correct polarization angle for each image, one that requires no electrical connection between the phone and the camera hardware. [Paul]’s camera is calibrated using the iPhone’s flash. Light from the flash passing through the LCD is measured by a phototransistor and an Arduino Mini, which sets the LCD to the correct polarization. The whole setup is taped to the back of the iPhone, though we suspect a 3D-printed holder could be made without too many problems. He provides full details, as well as code for the iPhone app that controls the camera and computes the images, on his blog post.
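The math behind the computed images is compact: from three exposures taken through the polarizer at 0°, 45°, and 90°, the linear Stokes parameters give the degree and angle of linear polarization for every pixel. Here’s a NumPy sketch of that standard computation (the three-angle scheme follows the DOLPi approach; the function and the test values are our own illustration):

```python
import numpy as np

def polarimetry(i0, i45, i90):
    """Compute degree of linear polarization (DoLP) and angle of linear
    polarization (AoLP) per pixel from three intensity images taken
    through a polarizer at 0, 45 and 90 degrees."""
    i0, i45, i90 = (np.asarray(a, dtype=float) for a in (i0, i45, i90))
    s0 = i0 + i90                  # total intensity
    s1 = i0 - i90                  # horizontal vs. vertical preference
    s2 = 2.0 * i45 - i0 - i90      # +45 vs. -45 preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)  # radians
    return dolp, aolp

# Fully horizontally polarized light: all intensity passes at 0 degrees
dolp, aolp = polarimetry(i0=[[1.0]], i45=[[0.5]], i90=[[0.0]])
print(dolp[0, 0], aolp[0, 0])  # DoLP = 1, AoLP = 0
```

Mapping AoLP to hue and DoLP to saturation then produces the kind of false-color images the DOLPi shows its viewers.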
We love horrible hacks like this. It’s a lens and a ring of LEDs, taped to a cell phone. Powered through crocodile clips, also taped to the cell phone. There’s nothing professional here — we can think of a million ways to tweak this recipe. But the proof of the pudding is in the tasting.
[Maurice] is a photographer specializing in micrographs. These very large images of very small things are beautiful, but late last year he found himself limited by his equipment. He needed a new microscope, one designed for photography, that had a scanning stage, and ideally one that was cheap. He ended up choosing a microscope from the 80s. Did it meet all his qualifications? No, but it was good enough, and like all good tools, capable of being modified to make a better tool.
This was a Nikon microscope, and [Maurice] shoots a Canon. This, of course, meant the camera mount was incompatible with a Canon 5D MK III, but with a little bit of milling and drilling, this problem could be overcome.
That left [Maurice] with a rather large project on his hands. He had a microscope that met all his qualifications save for one: he wanted a scanning stage, or a bunch of motors and a camera controller that could scan over a specimen and shoot gigapixel images. This was easily accomplished with a few 3D printed parts, stepper motors, and a Makeblock Orion, an Arduino-based board designed for robotics that also has two stepper motor drivers.
With a microscope that could automatically scan over a specimen and snap a picture, the only thing left to build was a piece of software that automated the entire process. This software was built with Processing. While the sketch is very minimal, it does allow [Maurice] to set the step size and how many pictures to take along the X and Y axes. The result is easy, automated micrographs. You can see a video of the process below.
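The scanning logic itself is simple. Here’s a minimal Python sketch of the same idea: a serpentine raster path over the stage, with hypothetical `move_to()` and `shoot()` stand-ins for the commands the real sketch would send to the Orion’s stepper drivers and the camera:

```python
def scan_positions(nx, ny, step):
    """Yield (x, y) stage positions for a serpentine (boustrophedon) raster
    scan: nx columns by ny rows, `step` units apart, reversing direction on
    alternate rows so the stage never makes a long return move."""
    for row in range(ny):
        cols = range(nx) if row % 2 == 0 else range(nx - 1, -1, -1)
        for col in cols:
            yield col * step, row * step

# Hypothetical driver hooks -- the real versions would command the stepper
# motors via the Makeblock Orion and trigger the camera shutter.
def move_to(x, y): print(f"move to ({x}, {y})")
def shoot():       print("shutter")

for x, y in scan_positions(nx=3, ny=2, step=0.5):
    move_to(x, y)
    shoot()
```

Stitching software then assembles the resulting grid of frames into one gigapixel image.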