A camera slider made from wood and recycled parts

Turning Old Plotter Parts Into A Smooth Camera Slider

Taking apart old stuff and re-using the parts to make something new is how many hackers first got started in the world of mechanical and electronic engineering. But even after years working in industry we still get that tinge of excitement whenever someone offers us an old device “for parts”, and immediately begin to imagine the things we could build with the components inside.

A GoPro mounted on a moving platform made from recycled parts

So when [Victor Frost] was offered an old Cricut cutting plotter, he realized he could use its parts to create the camera slider he’d been planning to build. The plotter’s X stage, controlled by a stepper motor, was ideal for moving a camera platform back and forth. [Victor] wanted to build the entire thing in a “freehand” way, without making a detailed design or purchasing any new parts. So he dived into his parts bin and dug up an Arduino, a 16×2 LCD, some wires and buttons, and a few pieces of MDF.

The camera mount is simply a piece of steel that a GoPro’s magnetic mount can latch onto, but [Victor] keeps open the possibility of mounting a proper tripod ball head. The Arduino drives the stepper motor through an Adafruit Motor Shield, with a simple user interface running on the LCD. The user can set the desired end points and speed, and then run the camera back and forth as often as needed. In this way, the software follows the same “keep it simple” philosophy as the hardware design.
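
Though the firmware itself is a simple Arduino sketch, the end-point-and-speed logic is easy to sketch out. Here’s a rough Python illustration of that control flow, with the stepper driver stubbed out; the calibration constant, function names, and numbers are our own assumptions, not [Victor]’s code:

```python
import time

STEPS_PER_MM = 10  # assumed leadscrew/belt calibration, not [Victor]'s figure

def step_motor(direction: int) -> None:
    """Stand-in for pulsing the stepper once via the motor driver."""
    pass  # on the real hardware this goes through the Adafruit Motor Shield

def run_slider(start_mm: float, end_mm: float, speed_mm_s: float, passes: int) -> None:
    """Shuttle between two user-set end points at a user-set speed."""
    a, b = int(start_mm * STEPS_PER_MM), int(end_mm * STEPS_PER_MM)
    seconds_per_step = 1.0 / (speed_mm_s * STEPS_PER_MM)
    position = a
    for _ in range(passes):
        target = b if position == a else a      # alternate direction each pass
        direction = 1 if target > position else -1
        while position != target:
            step_motor(direction)
            position += direction
            time.sleep(seconds_per_step)

run_slider(start_mm=0, end_mm=300, speed_mm_s=5, passes=2)
```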

If you’re planning to build your own camera slider, [Victor]’s design should be easy to copy, if you happen to have an old cutting plotter. If not, you can try this simple yet well-engineered model. Want even more? Then check out this fancy multi-axis camera motion control rig.

Continue reading “Turning Old Plotter Parts Into A Smooth Camera Slider”

A Soft Thumb-Sized Vision-Based Touch Sensor

A team from the Max Planck Institute for Intelligent Systems in Germany has developed a novel thumb-shaped touch sensor capable of resolving both the force of a contact and its direction, over the whole surface of the structure. Intended for dexterous manipulation systems, it is constructed from easily sourced components, so it should scale up to larger assemblies without breaking the bank. The first step is to place a soft, compliant outer skin over a rigid metallic skeleton, which is then illuminated internally using structured light techniques. From there, machine learning can be used to estimate the shear and normal components of any contact force, over the entire surface, by observing how the internal envelope distorts the structured illumination.

The novelty here is the way the team combines photometric stereo processing with other structured light techniques, using only a single camera. The camera image is fed straight into a pre-trained machine learning system (details on this part of the system are unfortunately a bit scarce) which directly outputs an estimate of the contact shape and force distribution, with spatial accuracy reported to be better than 1 mm and force resolution down to 30 millinewtons. By directly estimating the normal and shear force components, the direction of a contact can be resolved to within 5 degrees. The system is so sensitive that it can reportedly detect its own posture by observing the deformation of the skin under its own weight alone!
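
For a feel of the data flow, here’s a heavily simplified sketch of the inference step. Since details of the actual network are scarce, this PyTorch model is purely illustrative of the mapping, not the team’s architecture: a camera frame of the internally illuminated skin goes in, and a three-channel per-pixel force map (normal plus two shear components) comes out.

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Illustrative stand-in for the paper's (unpublished) network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Three output channels: normal force plus two shear components.
        self.head = nn.Conv2d(64, 3, 1)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frame))

net = ForceMapNet()
frame = torch.rand(1, 3, 240, 320)   # stand-in for one camera image
force_map = net(frame)               # (1, 3, 240, 320): normal, shear-x, shear-y
```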

We’ve not covered all that many optical sensing projects, but here’s one using a linear CIS sensor to turn any TV into a touch screen. And whilst we’re talking about using cameras as sensors, here’s a neat way to use optical fibers to read multiple light-gates with a single camera and OpenCV.

Continue reading “A Soft Thumb-Sized Vision-Based Touch Sensor”

Super Simple Camera Slider With A Neat Twist

When you get into making videos of products or your own cool hacks, at some point you’re going to start wondering how those neat panning and rotating shots are achieved. The answer is quite often some kind of mechanical slider which sends the camera along a predefined path. Buying one can be an expensive outlay, so many people opt to build one. [Rahel zahir Ali] was no different, designing and building a very simple slider, but with a neat twist.

This design uses a geared DC motor taken from a car windscreen wiper; that’s a cost-effective way to get your hands on a nice high-torque motor with an integral reduction gearbox. The added twist is that the camera mount is pivoted and slides along a third, central smooth rod. This guide rod can be offset at either end, allowing the camera to rotate up to thirty degrees as the carriage travels from one end to the other. With a few tweaks, the slider can also be mounted vertically, to give those up-and-over shots. Super simple, low tech, and not an Arduino in sight.
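
The geometry behind that twist is easy to estimate. Here’s a back-of-the-envelope sketch, with all dimensions invented for illustration rather than taken from [Rahel]’s model: the pan angle at any carriage position follows from the guide rod’s lateral offset at that point and the lever arm between the pivot and the rod bushing.

```python
import math

def pan_angle_deg(x: float, rail_mm: float, y0: float, y1: float, arm_mm: float) -> float:
    """Pan angle at carriage position x (mm), given rod end offsets y0/y1 (mm)
    and the lever arm from pivot to guide-rod bushing (mm)."""
    offset = y0 + (y1 - y0) * (x / rail_mm)  # rod's lateral offset at x
    return math.degrees(math.atan2(offset, arm_mm))

# With a hypothetical 50 mm lever arm, a ~29 mm end offset yields roughly
# thirty degrees of rotation over a 1 m travel:
print(pan_angle_deg(x=1000, rail_mm=1000, y0=0, y1=29, arm_mm=50))  # ≈ 30.1
```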

The CAD modelling was done in Fusion 360, with all the models downloadable together with their source files, in case someone needs to adapt the design further. We were expecting just a pile of STLs, so seeing the full source was a nice surprise, given how many open source projects like this (especially on Thingiverse) neglect to include it.

The electronics consist of a simple DC motor controller (although [Rahel] doesn’t mention a specific product, one should not be hard to source) to handle speed control, with a DPDT latching rocker switch handling the motor direction. A pair of microswitches stops the motor at each end of its travel. Other than a 3D printer, nothing special at all is needed to make yourself quite a useful little slider!

We’ve seen a few slider designs, since this is a common problem for content creators. Here’s a more complicated one, and another one.

Continue reading “Super Simple Camera Slider With A Neat Twist”

Game Boy Camera Gets Ridiculously Good Lens

How do you get better pictures from a 20-plus-year-old Game Boy Camera? How about marrying a DSLR lens to it? That’s what [ConorSev] did and, honestly, the results are better than you might expect, as [John Aldred] mentioned in his post about the topic. You can check the camera out in the video below.

A 3D printed adapter lets you mount a Canon EF lens to the Game Boy Camera, a trick that we’ve seen in the past. [ConorSev] looked at the existing adapters floating around, and came up with the revised version you see here. There was still the problem of actually getting the images off the Camera cartridge, but luckily, this isn’t exactly unexplored territory either.
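
Pulling the pictures out usually boils down to dumping the cartridge’s save RAM and decoding the Game Boy’s 2-bit-per-pixel tile format. Here’s a minimal Python sketch based on the commonly documented save layout; treat the offsets and photo dimensions as assumptions, and see the usual homebrew write-ups for the full details:

```python
PHOTO_W, PHOTO_H = 128, 112   # commonly documented Game Boy Camera photo size

def decode_photo(sav: bytes, index: int) -> list[list[int]]:
    """Decode one photo (2bpp tiles) from a .sav dump into 0..3 shades."""
    base = 0x2000 + index * 0x1000        # assumed layout: photos 0x1000 apart
    pixels = [[0] * PHOTO_W for _ in range(PHOTO_H)]
    tiles_per_row = PHOTO_W // 8
    for tile in range(tiles_per_row * (PHOTO_H // 8)):
        tx, ty = (tile % tiles_per_row) * 8, (tile // tiles_per_row) * 8
        for row in range(8):
            lo = sav[base + tile * 16 + row * 2]      # low bit-plane byte
            hi = sav[base + tile * 16 + row * 2 + 1]  # high bit-plane byte
            for col in range(8):
                bit = 7 - col
                pixels[ty + row][tx + col] = ((hi >> bit) & 1) << 1 | ((lo >> bit) & 1)
    return pixels

with open("camera.sav", "rb") as f:   # hypothetical save-dump filename
    photo = decode_photo(f.read(), 0)
```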

While there might not be anything new with this project, using a high-quality lens on the toy makes for some interesting photographs, and it makes you wonder how far you can push this whole idea. Of course, no matter how much of a lens you put on the front, you still have to contend with the original image sensor, which hasn’t exactly aged well. Still, we were impressed at how much better things looked with a high-quality zoom lens.

We bet the original designer of the Game Boy Camera never imagined it would have the kind of zoom capability you can see in the video. We love seeing these little handhelds pushed beyond their limits. Cryptomining? No problem. Morse code? Piece of cake.

Continue reading “Game Boy Camera Gets Ridiculously Good Lens”

X-ray image of a camera lens

Observing A Plant’s Vascular System With X-Ray Video

[Ben Krasnow] has a knack for showing us what’s inside of things while they’re moving. This week’s Applied Science experiment has him making time-lapse X-ray videos, with a plant’s vascular system just one of a few examples; the others are a dial clock and the zoom lens on a DSLR.

X-ray of plant

The trick here is having an X-ray sensing panel that can be reused. It takes around five seconds of exposure to grab each 40×40 cm frame, and the frames are then assembled back into a video.
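
The assembly half of that process is straightforward. This isn’t [Ben]’s actual pipeline, just a minimal OpenCV sketch of the idea: save each long exposure as a still, then stitch the stills into a time-lapse (the filenames and frame rate here are hypothetical).

```python
import glob
import cv2

frames = sorted(glob.glob("exposures/*.png"))   # one still per ~5 s exposure
height, width = cv2.imread(frames[0]).shape[:2]

out = cv2.VideoWriter("timelapse.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), 30, (width, height))
for path in frames:
    out.write(cv2.imread(path))
out.release()
```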

Now, watching mechanisms move is cool; [Ben]’s 2015 video showing what a phonograph needle in the groove of a vinyl record looks like under a scanning electron microscope is still one of the coolest “camera tricks” we’ve ever seen pulled off. But watching the vascular system of a plant function is the recipe for one of those ah-ha educational moments, so we hope that 7th-grade biology teachers everywhere will find their way to this video.

The apparatus is described in great detail, but regular Hackaday readers will most likely want to focus on the teardown of the X-ray panel, which [Ben] describes as a giant digital camera sensor tuned to receive X-rays. The source is a 50 kV, 1 mA tube that he compares to what is used at the dental office. (Obviously this requires forethought to ensure his automated time-lapse setup fails safe with the X-ray tube.) A Cyclone III FPGA drives the panel, communicating with the sensor array via two Ethernet interfaces.

A friend sent the broken panel to [Ben], and he was able to easily repair a MOSFET that had been knocked out of place. [biluni] shows up in the comments of the video, sharing his recollection from working in the industry 15 years ago that a panel like this would have cost $150k! But considering the stellar resolution and repeatable use, it sure as heck beats the old film process.

Continue reading “Observing A Plant’s Vascular System With X-Ray Video”

Eye-Tracking Device Is A Tiny Movie Theatre For Jumping Spiders

The eyes are windows into the mind, and this research into what jumping spiders look at and why required a clever device that performs eye tracking, but for spiders. The eyesight of these fascinating creatures in some ways has a lot in common with our own: we both perceive a wide-angle region of lower visual fidelity, but are capable of directing our attention to areas of interest within it to see greater detail. Researchers have now been able to perform eye tracking on jumping spiders, literally showing exactly where they are looking in real time, with the help of a custom device that works a little bit like a miniature movie theatre.

A harmless temporary adhesive on top (and a foam ball for a perch) holds a spider in front of a micro movie projector and IR camera. Spiders were not harmed in the research.

To do this, researchers had to get clever. The unblinking lenses of a spider’s two front-facing primary eyes do not move. Instead, to look at different things, the cone-shaped inside of each eye is shifted around by muscles, which effectively pulls the retina around to point towards different areas of interest. These primary eyes have boomerang-shaped retinas, and together they give the spider an X-shaped region of higher-resolution vision that it directs as needed.

So how does the spider eye tracker work? The spider perches on a tiny foam ball and is attached, with the help of a harmless and temporary beeswax-based adhesive, to a small bristle. In this way, the spider is held stably in front of a video screen without otherwise being restrained. The spider is shown home movies while an IR camera picks up the reflection of IR off the retinas inside the spider’s two primary eyes. By superimposing the IR reflection onto the displayed video, it becomes possible to literally see exactly where the spider is looking at any given moment. This is similar in some ways to how eye tracking is done for humans, which also uses IR, but watches the position of the pupil.
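
That superimposition step can be sketched in a few lines of OpenCV. To be clear, this is not the researchers’ code, only an illustration of the principle: threshold the IR frame to find the bright retinal reflections, then blend the result over the stimulus frame so the gaze region stands out. The threshold value and frame sources are assumptions.

```python
import cv2
import numpy as np

def overlay_gaze(stimulus_bgr: np.ndarray, ir_gray: np.ndarray) -> np.ndarray:
    """Blend thresholded IR retina reflections over the stimulus video frame."""
    _, mask = cv2.threshold(ir_gray, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.resize(mask, (stimulus_bgr.shape[1], stimulus_bgr.shape[0]))
    highlight = stimulus_bgr.copy()
    highlight[mask > 0] = (0, 255, 0)   # paint the X-shaped reflections green
    return cv2.addWeighted(stimulus_bgr, 0.6, highlight, 0.4, 0)

stimulus = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in stimulus frame
ir = np.zeros((480, 640), dtype=np.uint8)           # stand-in IR camera frame
composite = overlay_gaze(stimulus, ir)
```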

In the short video embedded below, if you look closely you can see the two retinas make an X shape of a faintly lighter color than the rest of the background. Watch the spider find and focus on the silhouette of a tasty cricket; but when a dark oval appears and grows larger (as it would if it were getting closer), the spider’s gaze quickly snaps over to the potential threat.

Feel a need to know more about jumping spiders? This eye-tracking research was featured as part of a larger Science News article highlighting the deep sensory spectrum these fascinating creatures inhabit, most of which is completely inaccessible to humans.

Continue reading “Eye-Tracking Device Is A Tiny Movie Theatre For Jumping Spiders”

OAK-D Depth Sensing AI Camera Gets Smaller And Lighter

The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and the ability to run highly complex image processing tasks all on-board, freeing the host from any of the overhead involved.

Animated face with small blue dots as 3D feature markers.
An example of real-time feature tracking, now in 3D thanks to integrated depth sensing.

The OAK-D Lite camera is actually several elements in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and onboard AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities into a unified whole.

The goal is to give embedded systems access to human-like visual perception in real-time, which at its core means detecting things, and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real-time.

So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
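
As a taste of what that looks like, here’s a minimal pipeline along the lines of the public DepthAI Python examples, streaming RGB preview frames from the camera to the host (node and method names follow the depthai v2 API; check the DepthAI docs for current usage):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Define a color camera node and stream its preview output to the host.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    frame = queue.get().getCvFrame()  # numpy BGR frame, ready for OpenCV
    print(frame.shape)                # (300, 300, 3)
```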

There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.

Earlier this year we saw the OAK-D used in a system to visually identify weeds and estimate biomass in agriculture, and it’s exciting to see a new model being released. If you’re interested, the OAK-D Lite is available at a considerable discount during the Kickstarter campaign.