Macro Photography With Industrial Lenses

Line-scan cameras are advanced devices used for process inspection in industrial applications. Tasked with monitoring the quality of silicon wafers and other high-accuracy jobs, they’re often outfitted with highly specialised, top-quality optics. [Peter] was able to get his hands on a lens for one of these cameras, and decided to put it to work on some macro photography instead.

Macro image taken with the hacked lens.

Judging by the specs found online, this is a fairly serious piece of kit. It easily competes with top-shelf commercial optics, which is what piqued [Peter]’s interest in the part. Being such a specialised piece of hardware, you can’t just cruise over to eBay for an off-the-shelf adapter. Instead, a long chain of parts was used to affix this lens to a Sony A7III mirrorless camera, converting from threaded fittings to a Nikon mount and then finally to a Sony NEX mount.

Further work involved fitting an aperture into the chain to get the lens as close as possible to telecentric. A telecentric lens keeps magnification essentially constant as subject distance changes, which makes focus stacking macro shots more readily achievable – something we’ve seen [Peter] tinker with before.

You never know what you might find when sorting through surplus industrial gear – you could score some high-performance hardware if you know where to look. It’s always great to see a cheap find become a useful instrument in the hacker toolbox!

Image Sensor From Discrete Parts Delivers Glorious 1-Kilopixel Images

Chances are pretty good that you have at least one digital image sensor somewhere close to you at this moment, likely within arm’s reach. The ubiquity of digital cameras is due to how cheap these sensors have become, and how easy they are to integrate into all sorts of devices. So why in the world would someone want to build an image sensor from discrete parts that’s 12,000 times worse than the average smartphone camera? Because, why not?

[Sean Hodgins] originally started this project as a digital pinhole camera, which is why it was called “digiObscura.” The idea was to build a 32×32 array of photosensors and focus light on it using only a pinhole, but that proved optically difficult as the small aperture greatly reduced the amount of light striking the array. The sensor, though, is where the interesting stuff is. [Sean] soldered 1,024 ALS-PT19 surface-mount phototransistors to the custom PCB along with two 32-channel analog multiplexers. The multiplexers are driven by a microcontroller to select each pixel in turn, one row and one column at a time. It takes a full five seconds to scan the array, so taking a picture hearkens back to the long exposures common in the early days of photography. And sure, it’s only a 1-kilopixel image, but it works.
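The row-and-column scan is conceptually simple. A minimal Arduino-style sketch of the idea might look like this; the pin assignments, mux wiring, settle delay, and serial output format are all our own assumptions rather than [Sean]’s actual firmware:

```cpp
// Illustrative scan of a 32x32 phototransistor array through two 32-channel
// analog multiplexers, each addressed by five select lines.
const int ROW_SEL[5] = {2, 3, 4, 5, 6};    // hypothetical row mux address pins
const int COL_SEL[5] = {7, 8, 9, 10, 11};  // hypothetical column mux address pins
const int SENSE_PIN  = A0;                 // column mux output into the ADC

void setMuxAddress(const int sel[5], int channel) {
  for (int bit = 0; bit < 5; bit++) {
    digitalWrite(sel[bit], (channel >> bit) & 1);
  }
}

void setup() {
  for (int i = 0; i < 5; i++) {
    pinMode(ROW_SEL[i], OUTPUT);
    pinMode(COL_SEL[i], OUTPUT);
  }
  Serial.begin(115200);
}

void loop() {
  // One frame: select each row/column pair and sample the phototransistor,
  // streaming readings out as CSV so the host can assemble the image.
  for (int row = 0; row < 32; row++) {
    setMuxAddress(ROW_SEL, row);
    for (int col = 0; col < 32; col++) {
      setMuxAddress(COL_SEL, col);
      delayMicroseconds(100);          // let the analog line settle
      Serial.print(analogRead(SENSE_PIN));
      Serial.print(col == 31 ? '\n' : ',');
    }
  }
  Serial.println();                    // blank line marks end of frame
}
```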

[Sean] has had this project cooking for a while – in fact, the multiplexers he used for the camera came up as a separate project back in 2018. We’re glad to see that he got the rest built, even with the recycled lens he used. One wonders how a 3D-printed lens would work in front of that sensor.


Custom Control Panels With Photogrammetry

One of the best applications for desktop 3D printing is the creation of one-off bespoke components. Most of the time a halfway decent pair of calipers and some patience is all it takes to model up whatever part you’re after, but occasionally things get complex enough that you might need a little help. If you ever find yourself in such a situation, salvation might be just a few marker scribbles away.

As [Mangy_Dog] explains in a recent video, he wanted to model a control panel for a laser cutter he’s been working on, but thought the shapes involved were a bit more than he wanted to figure out manually. So he decided to give photogrammetry a try. For the uninitiated, this process involves taking as many high-resolution images as possible of a given object from multiple angles, and letting the computer stitch them into a three-dimensional model. He reasoned that if he had a 3D model of the laser’s existing front panel, it would be easy enough to 3D print some replacement parts for it.

That would be a neat enough trick on its own, but what we especially liked about this video was the tip that [Mangy_Dog] passed along about increasing visual complexity to improve the final results. Basically, the software is looking for identifiable surface details to piece together, so you can make things a bit easier for it by taking a few different colored markers and drawing all over the surface like a toddler. It might look crazy, but all those lines give the software some anchor points that help it sort out the nuances of the shape.

Unfortunately the markers ended up being a little more permanent than [Mangy_Dog] had hoped, and he eventually had to use acetone to get the stains off. Certainly something to keep in mind. But in the end, the 3D model generated was accurate enough that (after a bit of scaling) he was able to design a new panel that pops right on as if it was a factory component.

Hackaday readers may recall that when we last heard from [Mangy_Dog] he was putting the finishing touches on his incredible “Playdog Blackbone” handheld gaming system, which itself is a triumph of mating 3D printed components with existing hardware.


Build A DSLR Photo Booth The Easy Way

It’s a well-known fact in capitalist societies that any product or service, once it’s destined for a wedding, instantly triples in cost. Wanting to avoid shelling out big money for a simple photo booth for a friend’s big day, [Lewis] decided to build his own.

Wanting quality photo output, [Lewis] selected a Canon DSLR to perform photographic duties, with an Arduino Nano pressed into service to run the show. The Nano is hooked up to a MAX7219-driven LED matrix which feeds instructions to the willing participants, who activate the system with a giant glowing arcade button. When pressed, the Nano waits ten seconds and triggers the camera shutter, doing so three times. Images are displayed on a screen hooked up to the camera’s HDMI port.
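The control logic is simple enough to sketch in full. Something along these lines would cover the core behaviour; the pin choices, the optocoupler across the camera’s remote-shutter jack, and the elided matrix-drawing routine are our assumptions for illustration, not [Lewis]’s actual firmware:

```cpp
// Hypothetical photo booth logic. Pin numbers and shutter interface are
// assumptions; many Canon DSLRs can be triggered by shorting the remote
// shutter jack, here done through an optocoupler driven by SHUTTER_PIN.
const int BUTTON_PIN  = 2;   // giant arcade button, switching to ground
const int SHUTTER_PIN = 3;   // drives the optocoupler "pressing" the shutter

void triggerShutter() {
  digitalWrite(SHUTTER_PIN, HIGH);   // close the shutter contact
  delay(200);
  digitalWrite(SHUTTER_PIN, LOW);    // release it again
}

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(SHUTTER_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {   // someone hit the big button
    for (int shot = 0; shot < 3; shot++) {
      for (int s = 10; s > 0; s--) {
        // showCountdown(s);  // draw remaining seconds on the MAX7219 matrix
        delay(1000);
      }
      triggerShutter();
      delay(2000);   // give the camera a moment before the next countdown
    }
  }
}
```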

It’s a build that keeps things simple. No single-board computers needed – just a camera, an Arduino, and a monitor for the display. We’re sure the wedding-goers had a great time, and we look forward to seeing what [Lewis] comes up with next. We’ve seen a few of his hacks around here before, too.


Autonomous Boat For Awesome Video Hyperlapses

With the ever-increasing capabilities of smartphones, action cameras, and handheld gimbals, the battle for the best shots is intensifying daily on platforms like YouTube and Instagram. Hyperlapse sequences are one of the popular weapons in the armoury, and [Daniel Riley] aka [rctestflight] realised that his autonomous boat could be an awesome hyperlapse platform.

This is the third version of his autonomous boat, with version 1 suffering from seaweed assaults and version 2 almost sleeping with the fishes. The new version is a flat-bottomed craft built almost completely from pink insulation foam, making it stable and unsinkable. It uses the same electronics and airboat propulsion as version 2, with the addition of a GoPro mounted in a smartphone gimbal to film the hyperlapses. Thanks to the high-mounted motors, it has a tendency to push the bow into the water at full throttle, but this was corrected by adding a foam bulge beneath the bow, at the cost of some efficiency.

Getting the gimbal settings tuned to create hyperlapses without panning jumps turned out to be the most difficult part. On calm water the boat is stable enough to fool the IMU into believing that it is not turning, so the gimbal controller falls back on its motor encoders to hold position, which doesn’t allow it to absorb all the small heading corrections the boat is constantly making. Things improved after turning off the encoder integration, but the gimbal would still occasionally bump against the edges of the dead band inside which it does not turn with the boat. In the end [Daniel] settled for slowly panning the gimbal to the left, while plotting a path with carefully calculated left turns to keep the boat itself out of the shot. While not perfect, the sequences still beautifully captured the night-time scenery of Lake Union, Seattle. Getting it to this level cost many hours of midnight testing, since [Daniel] was doing his best to avoid other boat traffic, and we believe it paid off.
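The maths behind those carefully calculated left turns is straightforward: if the boat’s turn rate matches the gimbal’s pan rate, the bearing from camera to hull stays constant and the hull stays out of frame. A back-of-the-envelope sketch, using made-up numbers rather than [Daniel]’s actual figures:

```cpp
#include <cstdio>

int main() {
  const double PI = 3.14159265358979;

  // Assumed values, purely for illustration.
  const double panRateDeg = 1.0;   // gimbal pan rate, degrees per second
  const double boatSpeed  = 1.5;   // boat speed, metres per second

  // Matching the boat's turn rate to the pan rate fixes the turn radius
  // at r = v / omega, so the camera never swings across the hull.
  const double omega  = panRateDeg * PI / 180.0;   // turn rate in rad/s
  const double radius = boatSpeed / omega;

  std::printf("turn radius: %.1f m, full circle in %.0f s\n",
              radius, 360.0 / panRateDeg);
  return 0;
}
```

With these numbers the boat would trace a circle roughly 86 m in radius, completing a lap every six minutes – gentle enough for a hyperlapse, but something to plan around on a busy lake.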

We look forward to his next videos, including an update on his solar plane.

These Lessons Were Learned In Enclosure Design, But Go Far Beyond

[Foaly] has been hard at work making an open-source long-range camera remote, and recently shared a deeply thoughtful post about how it is never too early to consider all aspects of design, lest it cost you in the end. It all started with designing an enclosure for a working prototype, and it led to redesigning the PCB from scratch. That took a lot of guts, and we recommend you make some time to click that link and read up on what he shared. You’ll either learn some valuable tips, or just enjoy nodding sagely as he confirms things you already know. It’s win-win.

Note the awkward buttons right next to the antenna connector, for example.

The project in question is Silver, and calling it a camera remote is selling it a bit short. In any case, [Foaly] had a perfectly serviceable set of prototypes and needed a small batch of enclosures. So far so normal, but in the process of designing possible solutions, [Foaly] ran into a sure-fire sign that a project is in trouble: problems cropping up everywhere, and in general everything just seeming harder than it should be. Holding the mounting-hole-free PCB securely never seemed quite right. Buttons were awkward to reach, ill-proportioned, and didn’t feel good to use. The OLED module was physically centered, but its display area was off-center, which looked wrong no matter how the lines of the bezel were sculpted. The PCB was a tidy rectangle, but the display ended up a bit small, and the enclosures always looked bulky by the time everything was accounted for. The best effort is shown here, and it just didn’t satisfy.

[Foaly] says the real problem was that he designed the electronics and did the layout while giving some thought (but not much thought) to their eventual integration into a case. This isn’t necessarily a problem for a one-off, but from a product design perspective it led to so many problems that it was better to start over, this time being mindful of how everything integrates right from the start: the layout, the components, the mechanical bits, the assembly, and the ultimate user experience. The end result is wonderful, and we’re delighted [Foaly] took the time to document his findings.

Enclosure design is a big deal, and there are many different ways to go about it. For a more unique spin, be sure to check out our guide to making enclosures from the PCBs themselves. For a primer on more traditional enclosure design and manufacture, take a few minutes to familiarize yourself with injection molding.

Robotic Skin Sees When (and How) You’re Touching It

Cameras are getting less and less conspicuous. Now they’re hiding under the skin of robots.

A team of researchers from ETH Zurich in Switzerland has created a multi-camera optical tactile sensor that monitors contact across its surface by reconstructing the distribution of contact forces. The sensor uses a stack-up involving cameras, LEDs, and three layers of silicone to optically detect any disturbance of the skin.

The scheme is modular; this example uses four cameras, but it can be scaled up from there. During manufacture, the camera and LED circuit boards are placed and a layer of firm silicone is poured over them to a thickness of about 5 mm. Next, a 2 mm layer doped with spherical particles is poured, before a final 1.5 mm layer of black silicone goes on top. The cameras track the particles as they move and use that information to infer the deformation of the material and the force applied to it. The sensor is also able to reconstruct the forces causing the deformation and build up a contact force distribution. The demo uses fairly inexpensive cameras — Raspberry Pi cameras monitored by an NVIDIA Jetson Nano Developer Kit — that in total provide about 65,000 pixels of resolution.
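The underlying principle, tracking embedded particles and turning their motion into a deformation estimate, can be roughed out with off-the-shelf computer vision. Here’s a minimal sketch using OpenCV’s dense optical flow, purely to illustrate the idea; the ETH team’s actual pipeline maps the displacement field to contact forces with a trained neural network rather than this crude proxy:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
  cv::VideoCapture cam(0);               // one of the sensor's cameras
  if (!cam.isOpened()) return 1;

  // Reference frame: the particle layer at rest, with nothing touching it.
  cv::Mat frame, gray, restGray, flow;
  cam >> frame;
  cv::cvtColor(frame, restGray, cv::COLOR_BGR2GRAY);

  while (cam.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Dense optical flow against the rest frame gives a per-pixel estimate
    // of how far the embedded particles have shifted under the skin.
    cv::calcOpticalFlowFarneback(restGray, gray, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);

    // Crude stand-in for force estimation: the mean displacement magnitude.
    std::vector<cv::Mat> xy(2);
    cv::split(flow, xy);
    cv::Mat mag;
    cv::magnitude(xy[0], xy[1], mag);
    std::printf("mean particle displacement: %.2f px\n", cv::mean(mag)[0]);
  }
  return 0;
}
```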

Apart from providing more information about the forces applied to its surface, the sensor also offers a larger contact area and a thinner profile than other camera-based systems, since it doesn’t require reflective components. It regularly recalibrates itself using a convolutional neural network pre-trained with data from three cameras and updated with data from all four. Possible future applications include soft robotics and other touch-based sensing enhanced with the aid of computer vision algorithms.

While self-aware robotic skins may not be on the market any time soon, this certainly opens up the possibility of robots that can detect when too much force is being applied to their structures – the machine equivalent of pain.
