In our info-obsessed culture, hackers are increasingly interested in ways to quantify the world around them. One popular project is to collect data about home energy or water consumption to try to identify trends or potential inefficiencies. For safety and potentially legal reasons, this usually has to be done in a minimally invasive way that doesn’t compromise the metering done by the utility provider. As you might expect, that often leads to some creative methods of data collection.
The latest solution comes courtesy of [Keilin Bickar], who’s using the ESP8266 and a serial TTL camera module to read the characters from the LCD of his water meter. With a 3D printed enclosure that doubles as a light source for the camera, the finished device perches on top of the water meter and sends the current reading to HomeAssistant via MQTT without any permanent wiring or mounting.
Of course, the ESP8266 is not a platform we generally see performing optical character recognition. Some clever programming was required to get the Wemos D1 Mini Lite to reliably read the numbers from the meter without having to push the task to a more computationally powerful device such as a Raspberry Pi. The process starts with a 160×120 JPEG image provided by a VC0706 camera module, which is then processed with the JPEGDecoder library. The top and bottom of the image are discarded, and the center band is isolated into blocks that correspond with the position of each digit on the display.
Within each block, the code checks an array of predetermined points to see if the corresponding pixel is black or not. In theory this allows detecting all the digits between 0 and 9, though [Keilin] says there were still occasional false readings due to instabilities in the camera and its mounting. But with a few iterations of the code and the aid of a Python testing program that let him validate the impact of changes to the algorithm, he was able to greatly improve the detection accuracy. He says it also helps that the nature of the data allows for some basic sanity checks; for example, the reading only ever goes up, and only by a relatively small amount each time.
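The approach can be sketched in a few lines of Python. The sample-point coordinates and bit patterns below are invented for illustration (essentially a seven-segment lookup); [Keilin]'s actual code runs on the ESP8266 against the JPEGDecoder output with its own calibration data:

```python
# Sketch: classify one digit by testing a handful of known pixel positions
# inside its block. SAMPLE_POINTS and PATTERNS are made-up illustrations,
# not [Keilin]'s actual calibration data.

# (x, y) offsets inside one digit block to test for "ink"
SAMPLE_POINTS = [(4, 1), (1, 5), (7, 5), (4, 9), (1, 13), (7, 13), (4, 17)]

# Which sample points are dark for each digit, keyed by the tuple of
# black/white results -- effectively a seven-segment decoder.
PATTERNS = {
    (1, 1, 1, 0, 1, 1, 1): 0,
    (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2,
    (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4,
    (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6,
    (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}

THRESHOLD = 128  # grayscale values below this count as black

def read_digit(block, x0, y0):
    """Classify one digit by sampling a few pixels in its block."""
    key = tuple(1 if block[y0 + dy][x0 + dx] < THRESHOLD else 0
                for dx, dy in SAMPLE_POINTS)
    return PATTERNS.get(key)  # None means an unrecognized pattern

def plausible(new_reading, last_reading, max_step=50):
    """Sanity check: the meter only ever counts up, and only a little."""
    return 0 <= new_reading - last_reading <= max_step
```

A reading that fails the `plausible()` check can simply be discarded and re-tried on the next frame, which is how a handful of per-digit misreads can be tolerated without corrupting the logged data.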
This method might not allow the per-second sampling required to pull off the impressive (if slightly creepy) water usage data mining we saw recently, but as long as you’re not after very high resolution data this is an elegant and creative way to pull useful data from your existing utility meter.
When you hear the term “extension tube”, you probably think of something fairly long, right? But when [Loudifier] needed an extension tube to do extreme close-ups with a wide-angle lens on a Canon EF-M camera, it needed to be small…really small. The final 3D printed extension provides an adjustable length between 0 and 10 millimeters.
But it’s not just an extension tube; that would be too easy. According to [Loudifier], the ideal extension distance would be somewhere around 3 mm, but unfortunately the mounting bayonet for an EF-M lens is a little over 5 mm deep. To get around this, the extension tube also adapts to EF/EF-S lenses, which have a shorter mount and so can be brought in closer than would otherwise be physically possible.
[Loudifier] says the addition of electrical connections between the camera and the lens (for functions like autofocus) would be ideal, but the logistics of pulling that off are a bit daunting. For now, the most reasonable upgrade on the horizon is the addition of some colored dots on the outside to help align the camera, adapter, and lens. As the STLs and Fusion design file are released under a Creative Commons license, perhaps the community will even take on the challenge of adapting it to other lens types.
For the polar opposite of this project, check out the 300 mm long 3D printed extension tube we covered a few weeks back that inspired [Loudifier] to send this project our way.
There is a treasure trove of history locked away in closets and attics, where old shoeboxes hold reels of movie film shot by amateur cinematographers. They captured children’s first steps, family vacations, and parties where [Uncle Bill] was getting up to his usual antics. Little of what was captured on thousands of miles of 8-mm and Super 8 film is consequential, but giving a family the means to see long lost loved ones again can be a powerful thing indeed.
That was the goal of [Anton Gutscher]’s automated 8-mm film scanner. Yes, commercial services exist that will digitize movies, slides, and snapshots, but where’s the challenge in that? And a challenge is what it ended up being. Aside from designing and printing something like 27 custom parts, [Anton] also had a custom PCB fabricated for the control electronics. Film handling is done with a stepper motor that moves one frame into the scanner at a time for scanning and cropping. An LCD display allows the archivist to move the cropping window around manually, and individual images are strung together with ffmpeg running on the embedded Raspberry Pi. There’s a brief clip of film from a 1976 trip to Singapore in the video below; we find the quality of the digitized film remarkably good.
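The final stitching step is a natural fit for ffmpeg's image-sequence input. A minimal sketch of how the Pi might assemble the scanned frames follows; the file names and the 18 FPS playback rate are illustrative (regular 8-mm film runs at roughly 16–18 frames per second), not details from [Anton]'s actual build:

```python
import subprocess

def stitch_frames(pattern="frame_%05d.png", fps=18, out="film.mp4"):
    """Build the ffmpeg invocation that strings scanned frames into a video.

    The frame-file naming pattern here is an assumption for illustration.
    """
    cmd = ["ffmpeg", "-framerate", str(fps), "-i", pattern,
           "-c:v", "libx264", "-pix_fmt", "yuv420p", out]
    return cmd

# On the Pi, once all frames are scanned and cropped:
# subprocess.run(stitch_frames(), check=True)
```

The `yuv420p` pixel format keeps the output playable on most devices and media players, which matters when the whole point is sharing the result with family.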
Hats off to [Anton] for stepping up as the family historian with this build. We’ve seen ad hoc 8-mm digitizers before, but few this polished looking. We’ve also featured other archival attempts before, like this high-speed slide scanner.
When building robots, or indeed other complex mechanical systems, it’s often the case that more and more limit switches, light gates and sensors are amassed as the project evolves. Each addition brings more IO pin usage, cost, potentially new interfacing requirements and accompanying microcontrollers or ADCs. If you don’t have much electronics experience, that’s not ideal. With this in mind, for a Hackaday prize entry [rand3289] is working on FiberGrid, a clever shortcut for interfacing multiple sensors without complex hardware. It doesn’t completely solve the problems above, but it aims to be a cheap, foolproof way to easily add sensors with minimal hardware needed.
The idea is simple: make your sensors from light gates using fiber optics, feed the ends of the plastic fibers into a grid, then film the grid with a camera. After calibrating the software, built with OpenCV, you can “sample” the sensors through a neat abstraction layer. This approach is easier and cheaper than you might think and makes it very easy to add new sensors.
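The abstraction layer can be surprisingly thin. The sketch below uses a plain 2-D brightness array in place of a live camera frame, and the sensor names and pixel positions are invented for illustration; [rand3289]'s real implementation calibrates these positions with OpenCV:

```python
# Each fiber end is calibrated to one (x, y) position in the camera frame.
# Sampling a "sensor" is then just a brightness test at that pixel.
# Names and coordinates below are illustrative, not from the prototype.
CALIBRATION = {
    "limit_switch_x": (12, 40),
    "light_gate_1":   (30, 40),
    "bumper_left":    (48, 40),
}

THRESHOLD = 100  # brightness above this means the light path is unblocked

def sample_sensors(frame, calibration=CALIBRATION):
    """Return the binary state of every fiber in a single camera frame."""
    return {name: frame[y][x] > THRESHOLD
            for name, (x, y) in calibration.items()}
```

Adding a new sensor is then just a matter of gluing another fiber into the grid and adding one calibrated entry, with no extra IO pins or interface hardware.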
Naturally, it’s not fantastic for sample rates, unless you want to splash out on a fancy high-framerate camera, and even then you likely have to rely on an OS being able to process the frames in time. It’s also not very compact, but fortunately you can connect quite a few sensors to one camera – up to 216 in [rand3289]’s prototype.
Of course, this type of setup is best suited to binary sensors and switches where the light path is either blocked or clear, but other uses can be devised, such as rotation sensors made with polarising filters. We’ve even written about optical flex sensors before.
The quality of a photograph is a subjective measure depending upon a multitude of factors of which the calibre of the camera is only one. Yet a high quality camera remains an object of desire for many photographers as it says something about you and not just about the photos you take. [Neutral Gray] didn’t have a Leica handheld camera, but did have a Sony. What’s a hacker to do, save up to buy the more expensive brand? Instead he chose to remodel the Sony into a very passable imitation.
This is a Chinese language page but well worth reading. We can’t get a Google Translate link to work, but in the Chrome browser, right-clicking and selecting “translate” works. If you have a workaround for mobile and other browsers, please leave a comment below.
The Sony A7R is hardly a cheap camera in the first place, well into the four-figure range, so it’s a brave person who embarks on its conversion to match the Leica’s flat-top aesthetic. The Sony was first completely dismantled and it was found that the electronic viewfinder could be removed without compromising the camera. In a bold move, its alloy housing was ground away, and replaced with a polished plate bearing a fake Leica branding.
Extensive remodelling of the hand grip with a custom carbon fibre part followed, with intricate work that achieves an exceptionally high-quality result. Careful choice of paint finish results in a camera that a non-expert would have difficulty recognizing as anything but a genuine Leica, given that it is fitted with a retro-styled lens system.
We’re not so sure we’d like to brave Leica’s lawyers on this side of the world, but we can’t help admiring this camera. If you’re after a digital Leica though, you can of course have a go at the real thing.
Thanks [fvollmer] for the tip.
Filming in slow-motion has long become a standard feature on the higher end of the smartphone spectrum, and can turn the most trivial physical activity into a majestic action shot to share on social media. It also unveils some little wonders of nature that are otherwise hidden to our eyes: the formation of a lightning flash during a thunderstorm, a hummingbird flapping its wings, or an avocado reaching that perfect moment of ripeness. Altogether, it’s a fun way of recording videos, and as [Robert Elder] shows, something you can do with a few dollars worth of Raspberry Pi equipment at a whopping rate of 660 FPS, if you can live with some limitations.
Taking the classic 24 FPS as a baseline, this turns a one-second video into a nearly half-minute-long slo-mo-fest. To achieve such a frame rate in the first place, [Robert] uses [Hermann-SW]’s modified version of raspiraw to get raw image data straight from the camera sensor into the Pi’s memory, leaving all the heavy lifting of processing it into an actual video for after the frames are retrieved. RAM size is of course one limiting factor for recording length, but memory bandwidth is the bigger problem, restricting the resolution to 640×64 pixels on the cheaper $6 camera model he uses. Yes, sixty-four pixels of height — but hey, look at that super wide-screen aspect ratio!
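Some back-of-the-envelope arithmetic shows why bandwidth and RAM bite so hard. The figures below assume 10-bit raw Bayer data (1.25 bytes per pixel) and a 256 MB capture buffer; both are illustrative assumptions, not measurements from [Robert]'s setup:

```python
FPS_CAPTURE = 660
FPS_PLAYBACK = 24
WIDTH, HEIGHT = 640, 64
BYTES_PER_PIXEL = 1.25        # assumed: 10-bit raw Bayer data
BUFFER_BYTES = 256 * 10**6    # assumed: RAM budget for raw frames

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL    # 51,200 bytes per frame
data_rate = frame_bytes * FPS_CAPTURE             # ~33.8 MB/s into RAM
record_seconds = BUFFER_BYTES / data_rate         # ~7.6 s of capture time
stretch = FPS_CAPTURE / FPS_PLAYBACK              # 27.5x slow motion

print(f"{data_rate / 1e6:.1f} MB/s, {record_seconds:.1f} s of recording, "
      f"played back {stretch:.1f}x slower")
```

Even at this tiny resolution the sensor pushes tens of megabytes per second, which is why the processing into a video file has to wait until after capture stops.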
While you won’t get the highest quality out of this, it’s still an exciting and inexpensive way to play around with slow motion. You can always step up your game though, and have a look at this DIY high-speed camera instead. And well, here’s one mounted on a lawnmower blade destroying anything but a printer.
Macro photography — the art of taking pictures of tiny things — can be an expensive pastime. Good lenses aren’t cheap, and greater magnification inflates the price even further. One way to release a bit more performance from your optics comes in the form of an extension tube, which mounts your lens further from the camera to zoom in a little on the image. Back in the day with a film SLR you could make a rough and ready tube with cardboard and tape, but in the age of the digital camera the lens has become as much a computer peripheral as an optical device. [Nicholas Sherlock] has solved this problem by creating a 3D-printed extension tube for his Canon that preserves connections between camera and lens.
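The optics behind the trick are simple: in the thin-lens approximation, each millimeter of extension adds roughly 1/f to the lens's magnification. A quick sketch, with the 50 mm focal length chosen purely as an example:

```python
def added_magnification(extension_mm, focal_length_mm):
    """Extra image magnification from an extension tube (thin-lens approx).

    A tube as long as the lens's focal length adds roughly 1:1 (life-size)
    magnification; longer tubes go beyond life-size.
    """
    return extension_mm / focal_length_mm

# Example values only: a 50 mm lens on a full 300 mm tube
print(added_magnification(300, 50))   # 6.0 -- about 6x extra magnification
```

The same formula explains why short tubes of just a few millimeters, like the EF-M one above, are enough for modest close-up work on wide-angle lenses.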
The 300 mm monster’s construction goes far beyond a plastic tube formed of two threaded sections with adapter plates at the ends. He’s using off-the-shelf metal rings to fit camera and lens just right, but making the electronic contacts is where it gets interesting. One end uses pogo pins, while the other provides a contact block made of nail heads. In both cases the 3D-printed parts are designed to provide mounting points for the pins and nails. The assembly technique is worth a look both because of the design and as an example of how to document all the juicy details we’re constantly looking for in a great hack.
The results speak for themselves, in that the photography provides an impressive level of close-up detail. If you would like to build your own tube, it is available on Thingiverse.
Macro extension tubes seem few and far between around here, but we’ve brought you a few lens repairs in our time.