We are all (hopefully) aware that we can be watched while we’re online. Our clicks are all trackable to some extent, whether by a government or an advertiser. What isn’t as obvious, though, is that it’s just as easy to track our movements in real life. [Saulius] was able to prove this concept by using optical character recognition to track the license plate numbers of passing cars from half a kilometer away.
To achieve such long distances (and still have clear and reliable data to work with), [Saulius] paired a 70-300 mm telephoto lens with a compact USB camera. All of the gear was set up on an overpass, with the camera aimed at cars coming around a corner of a highway. As soon as a car enters the frame, the USB camera feeds video to a laptop running OpenALPR, which processes and records the license plate data.
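For anyone wanting to experiment with the software side, OpenALPR ships with Python bindings that make the recognition loop quite short. Below is a minimal sketch of the idea; the country code, config paths, and camera index are assumptions for a typical Linux install, not details from [Saulius]’s setup.

```python
# Minimal OpenALPR recognition loop, assuming the Python bindings and
# OpenCV are installed; paths and camera index are placeholders.
import cv2
from openalpr import Alpr

alpr = Alpr("us", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")
alpr.set_top_n(3)  # keep the three best guesses per plate

cap = cv2.VideoCapture(0)  # the USB camera behind the telephoto lens
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = alpr.recognize_ndarray(frame)
        for plate in results["results"]:
            print(plate["plate"], plate["confidence"])
finally:
    cap.release()
    alpr.unload()
```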
The build is pretty impressive, but [Saulius] notes that it isn’t the ideal setup for processing a large amount of information at once because of the demands it places on the laptop. With this equipment, monitoring a parking lot would be a more feasible application. Still, with this level of capability available to anyone with the cash, imagine what someone could do with the resources of a national government. They might even have long-distance laser night vision!
Prosumer DSLRs have been a boon to the democratization of digital media. Gear that once commanded professional prices is now available to those on more modest budgets. Not only has this unleashed a torrent of online content, it has also started a wave of camera hacks and accessories, like this automatic focus puller based on a Kinect and a Raspberry Pi.
For [Tom Piessens], the Canon EOS 5D has been a solid platform, but it suffers from a problem. The narrow depth of field possible with DSLRs makes it difficult to maintain focus on subjects that are moving relative to the camera, making follow-focus scenes like this classic hard to reproduce. Aiming for a better system than the stock autofocus, [Tom] grafted a Kinect sensor and a stepper motor actuator onto a Raspberry Pi, and used the Kinect’s depth map to drive the focus ring. Parts are laser-cut, including a nice enclosure for the Pi and display that makes the whole thing reasonably portable. The video below shows the focus remaining locked on a selected region of interest. It seems that movement along only one axis is tracked; we’d love to see this system expanded to follow a designated object no matter where it moves in the frame.
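The heart of such a rig is a simple control loop: sample the Kinect’s depth map over the region of interest, then drive the stepper to the matching focus position. Here’s a rough sketch of that idea using the libfreenect Python wrapper; the ROI, the depth-to-steps calibration, and the motor driver are hypothetical stand-ins, not [Tom]’s actual code.

```python
# Sketch of the core focus-pulling loop using the libfreenect wrapper.
import numpy as np
import freenect

ROI = (slice(200, 280), slice(280, 360))  # rows, cols of the selected region

def raw_depth_to_steps(raw):
    # Placeholder calibration from the Kinect's 11-bit raw depth value
    # to the stepper position that focuses the lens at that distance.
    return int(raw * 0.5)

def move_stepper_to(steps):
    # Stand-in for the GPIO stepper driver turning the focus ring.
    print("focus motor ->", steps)

while True:
    depth, _ = freenect.sync_get_depth()  # 640x480 array of raw depth values
    subject = np.median(depth[ROI])       # robust estimate of subject distance
    move_stepper_to(raw_depth_to_steps(subject))
```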
If you’re in need of a follow-focus rig but don’t have a geared lens, check out these 3D-printed lens gears. They’d be a great complement to this backwoods focus-puller.
As your builds get smaller and your eyes get older, you might appreciate a little optical assistance around the shop. Stereo microscopes and inspection cameras are great additions to your bench, but often command a steep price. So this DIY PCB inspection microscope might be just the thing if you’re looking to roll your own and save a few bucks.
It’s not fancy, and it’s not particularly complex, but [Saulius]’ build does the job, mainly because he thought the requirements through before starting the build. MDF is used for the stand because it’s dimensionally stable, easy to work, and heavy, which tends to stabilize motion and dampen vibration. The camera itself is an off-the-shelf USB unit with a CS mount that allows a wide range of lenses to be fitted. A $20 eBay macro slider allows for fine positioning, and a ring light stolen from a stereo microscope provides shadow-free lighting.
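Since a camera like this enumerates as a standard UVC device, getting a live view with a bit of digital zoom takes only a few lines of OpenCV. The sketch below is purely illustrative; the device index and resolution are assumptions.

```python
# Minimal live view for a UVC inspection camera, with a 2x center crop
# acting as digital zoom; device index 0 and 1080p are assumptions.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    zoom = frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]  # 2x digital zoom
    cv2.imshow("inspection", cv2.resize(zoom, (w, h)))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```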
We’d say the most obvious area for improvement would be a linkage on the arm to keep the plane of the lens parallel to the bench, but even as it is this looks like a solid build with a lot of utility – especially for hackers looking to age in place at the bench.
Film photography began with a mercury-silver amalgam and ended with strips of nitrocellulose, silver iodide, and dyes. Along the way, there were some very odd chemistries going on in the world of photography, from ferric and silver salts to the Prussian blue found in cyanotypes and blueprints.
Metal salts are fun, and for his Hackaday Prize entry, [David Brown] is building a printer for these alternative photographic processes. It’s not a dark room — it’s a laser printer designed to reproduce images with weird, strange chemistries.
Cyanotypes are made by applying potassium ferricyanide and ferric ammonium citrate to some sort of medium, usually paper or cloth. This is then exposed to UV light (traditionally, the sun), and whatever isn’t exposed is washed off. Instead of the sun, [David] is using a common UV laser diode to expose his photographs. He already has the mechanics of this printer designed, and he should be able to reach his goal of 750 dpi resolution and 8-bit monochrome.
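Hitting 8-bit monochrome with a laser mostly comes down to modulating the UV dose per pixel, for example by varying dwell time during a raster scan. Below is a rough sketch of that mapping; cyanotype is a negative process (exposed areas turn Prussian blue), so darker target pixels get longer exposure. The calibration constant is an assumption, not a figure from [David]’s build.

```python
# Sketch: map an 8-bit grayscale image to per-pixel UV dwell times
# for a raster-scanned cyanotype exposure.
import numpy as np
from PIL import Image

MAX_DWELL_US = 800  # microseconds of UV for full black; assumed calibration

img = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float32)
dwell = (255.0 - img) / 255.0 * MAX_DWELL_US  # darker pixel -> longer dwell

print(f"estimated exposure time: {dwell.sum() / 1e6:.1f} s plus stage moves")

# Serpentine scan so the stage never has to fly back across a full row.
for y, row in enumerate(dwell):
    for t in (row if y % 2 == 0 else row[::-1]):
        pass  # step one pixel, then fire the diode for t microseconds
```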
Digital photography will never go away, but there will always be a few people experimenting with light sensitive chemicals. We haven’t seen many people experiment with these strange alternative photographic processes, and anything that gets these really cool prints out into the world is great news for us.
It’s no secret that a lot of time, money, and effort goes into photographing and filming all that delicious food you see in advertisements. Mashed potatoes in place of ice cream, carefully arranged ingredients on subs, and perfectly golden french fries are all things you’ve seen so often that they’re taken for granted. But those are static shots – the food is almost always just sitting on a plate. At most, you might see a chef turning a steak or searing a fillet in a commercial for a restaurant. What takes real skill – both artistic and technical – is assembling a hamburger in mid-air and getting it all in stunning 4K video.
That’s what [Steve Giralt] set out to do, and to accomplish it he had to get creative. Each component of the hamburger was suspended by rubber bands, and an Arduino-controlled servo system cut each rubber band at a precisely timed moment, just before that ingredient entered the frame. There’s even a 3D-printed dual-catapult system to fling the condiments, causing them to collide in exactly the right spot to land neatly on the burger.
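The actual build runs on an Arduino, but the timing idea is easy to illustrate from Python over the Firmata protocol: a trigger starts a timeline, and each servo fires at its scheduled offset to cut its rubber band. The pin numbers, angles, and offsets below are all hypothetical.

```python
# Illustrative servo-release timeline via pyFirmata (the real rig uses
# a standalone Arduino sketch); all pins and offsets are made up.
import time
from pyfirmata import Arduino

board = Arduino("/dev/ttyACM0")

# (seconds after trigger, servo pin) for each rubber-band cutter
schedule = [(0.00, 9), (0.12, 10), (0.21, 11)]  # e.g. bun, patty, lettuce
servos = {pin: board.get_pin(f"d:{pin}:s") for _, pin in schedule}

t0 = time.time()
for offset, pin in schedule:
    time.sleep(max(0.0, offset - (time.time() - t0)))
    servos[pin].write(90)  # swing the blade to cut this band
```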
We use touch screens all the time these days, and though we all know they support multiple touch events, it’s easy to take them for granted and forget that each one is a rather accomplished sensor array in its own right.
[Optismon] has long held an interest in capacitive touch screen sensors, and has recently turned his attention to the official Raspberry Pi 7-inch touchscreen display. He set out to read its raw capacitance values, and ended up with a fully functional 2D capacitive imaging device able to sense hidden nails and woodwork in his drywall.
Reading the capacitance values is not a job for the faint-hearted, though. The display’s touch controller sits on an I2C bus that is handled by the Pi’s GPU rather than its ARM cores, and reading it in software would require a change to the Pi’s infamous Broadcom binary blob. His solution, which he admits is not optimal, was to take another of the Pi’s I2C buses that he could talk to and connect it in parallel with the display’s line. As a result he can catch the readings from the screen’s sensors and, with a bit of scripting, render a 2D image on the screen. The outlines of hands and objects on his desk can clearly be seen when he places them on the screen, and when he runs the device over his wall it shows the position of the studs and nails behind the drywall.
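To give a flavor of the parallel-bus trick, here’s a hedged sketch that reads the standard touch-point registers of the FT5406-style controller used on the official display, over the ARM-visible bus. The address and register map are the controller’s documented defaults rather than anything from [Optismon]’s code, and the raw per-node capacitance readout (which needs the chip’s factory/test mode) is not shown.

```python
# Sketch: read touch points from the display's controller over the
# second I2C bus wired in parallel; assumes an FT5406 at address 0x38.
from smbus2 import SMBus

TOUCH_ADDR = 0x38  # FT5406 on the official 7-inch display

with SMBus(1) as bus:  # /dev/i2c-1, wired in parallel with the display
    n = bus.read_byte_data(TOUCH_ADDR, 0x02) & 0x0F  # number of active touches
    for i in range(n):
        data = bus.read_i2c_block_data(TOUCH_ADDR, 0x03 + 6 * i, 6)
        x = ((data[0] & 0x0F) << 8) | data[1]
        y = ((data[2] & 0x0F) << 8) | data[3]
        print(f"touch {i}: x={x} y={y}")
```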
He’s posted his code in a GitHub repository and put up a YouTube video of the capacitive imaging in action, which you can watch below the break.
The sensor on your digital camera picks up a lot more than just the light that’s visible to the human eye. Camera manufacturers go out of their way to reduce this to just the visible spectrum in order to produce photos that look right to us. But, what if you want your camera to take photos of the full light spectrum? This is particularly useful for astrophotography, where infrared light dramatically adds to the effect.
Generally, accomplishing this is just a matter of removing the internal IR-blocking filter from your camera. However, most of us are a little squeamish about tearing into our expensive DSLRs. This was the dilemma that [Gavin] faced until a couple of years ago when he discovered the Canon EOS-M.
Now, it’s important to point out that one could do a similar conversion with just about any cheap digital camera and save a lot of money (they practically give those things away now). But, as any photography enthusiast knows, lenses are just as important as the camera itself (maybe even more so).
So, if you’re interested in taking nice pictures, you’ve got to have a camera with an interchangeable lens. Of course, if you’re already into photography, you probably already have a DSLR with some lenses. This was the case for [Gavin], so he needed a cheap digital camera that used Canon interchangeable lenses like the ones he already had. After finding the EOS-M, the teardown and IR-blocking filter removal were straightforward with just a couple of hiccups.
When [Gavin] wrote his post in 2014, the EOS-M was about $350. Now you can buy them for less than $150 used, so a conversion like this is definitely in the “cheap enough to tinker” realm. Have a Nikon camera? The Nikon 1 J3 is roughly equivalent to the original EOS-M, and is about the same price. Want to save even more money, and aren’t concerned with fancy lenses? You can do a full-spectrum camera build with a Raspberry Pi, with the added benefit of being able to adjust what light is let in.