Super Simple Camera Slider With A Neat Twist

When you get into making videos of products or your own cool hacks, at some point you’re going to start wondering how those neat panning and rotating shots are achieved. The answer is quite often some kind of mechanical slider which sends the camera along a predefined path. Buying one can be an expensive outlay, so many people opt to build their own. [Rahel Zahir Ali] was no different, and designed and built a very simple slider with a neat twist.

This design uses a geared DC motor taken from a car windscreen wiper, a cost-effective way to get your hands on a nice high-torque motor with an integral reduction gearbox. The added twist is that the camera mount is pivoted and slides on a third, central smooth rod. Either end of this guide rod can be offset, allowing the camera to rotate up to thirty degrees as the carriage travels from one end of the slider to the other. With a few tweaks, the slider can be vertically mounted to give those up-and-over shots. Super simple, low tech, and not an Arduino in sight.

The CAD modelling was done in Fusion 360, with all the models downloadable together with the source files in case someone needs to adapt the design further. We were expecting just a pile of STLs, so seeing the full source was a nice surprise, given how many open source projects like this (especially on Thingiverse) neglect to include it.

Electronics consist of a simple DC motor controller (although [Rahel] doesn’t mention a specific product, it shouldn’t be hard to source) to handle speed control, with a DPDT latching rocker switch taking care of motor direction. A pair of microswitches stops the motor at either end of its travel. Other than a 3D printer, there is nothing at all special needed to make yourself quite a useful little slider!

We’ve seen a few slider designs, since this is a common problem for content creators. Here’s a more complicated one, and another one.

Continue reading “Super Simple Camera Slider With A Neat Twist”

Game Boy Camera Gets Ridiculously Good Lens

How do you get better pictures from a 20+ year old Game Boy Camera? How about marrying a DSLR lens to it? That’s what [ConorSev] did and, honestly, the results are better than you might expect as [John Aldred] mentioned in his post about the topic. You can check the camera out in the video below.

A 3D printed adapter lets you mount a Canon EF lens to the Game Boy Camera, a trick that we’ve seen in the past. [ConorSev] looked at the existing adapters floating around, and came up with the revised version you see here. There was still the problem of actually getting the images off the Camera cartridge, but luckily, this isn’t exactly unexplored territory either.
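
For the curious, the dumping step is helped by the fact that the photos live in the cartridge’s battery-backed SRAM as ordinary Game Boy 2-bit-per-pixel tile data, so once you have a raw save dump they can be decoded with a few lines of Python. Here’s a minimal sketch; the save file name and the 0x2000 starting offset are assumptions about a typical dump layout, not [ConorSev]’s actual workflow, so adjust to taste.

```python
# Minimal sketch: decode one Game Boy Camera photo from a raw SRAM dump.
# The save file name and the 0x2000 photo offset are assumptions about a
# typical dump layout; adjust them to match your cartridge dumper's output.
import numpy as np
from PIL import Image

TILES_X, TILES_Y = 16, 14        # a photo is 16 x 14 tiles = 128 x 112 pixels
PHOTO_OFFSET = 0x2000            # assumed offset of the first photo in SRAM

def decode_tile(tile_bytes):
    """Decode one 16-byte Game Boy 2bpp tile into an 8x8 array of 0..3 values."""
    tile = np.zeros((8, 8), dtype=np.uint8)
    for row in range(8):
        lo, hi = tile_bytes[row * 2], tile_bytes[row * 2 + 1]
        for col in range(8):
            bit = 7 - col
            tile[row, col] = ((lo >> bit) & 1) | (((hi >> bit) & 1) << 1)
    return tile

with open("camera_save.sav", "rb") as f:     # hypothetical save dump file
    sram = f.read()

image = np.zeros((TILES_Y * 8, TILES_X * 8), dtype=np.uint8)
offset = PHOTO_OFFSET
for ty in range(TILES_Y):
    for tx in range(TILES_X):
        image[ty * 8:(ty + 1) * 8, tx * 8:(tx + 1) * 8] = decode_tile(sram[offset:offset + 16])
        offset += 16

# Map the four grey levels to 8-bit values (3 is darkest on the Game Boy) and save
Image.fromarray((3 - image) * 85).save("photo.png")
```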

While there might not be anything new with this project, using a high-quality lens on the toy makes for some interesting photographs, and it makes you wonder how far you can push the whole idea. Of course, no matter how much lens you put on the front, you still have to contend with the original image sensor, which is hardly a stellar performer. Still, we were impressed at how much better things looked with a high-quality zoom lens.

We bet the original designer of the Game Boy Camera never imagined it would have the kind of zoom capability you can see in the video. We love seeing these little handhelds pushed beyond their limits. Cryptomining? No problem. Morse code? Piece of cake.

Continue reading “Game Boy Camera Gets Ridiculously Good Lens”

Observing A Plant’s Vascular System With X-Ray Video

[Ben Krasnow] has a knack for showing us what’s inside of things while they’re moving. This week’s Applied Science experiment has him making time-lapse X-ray videos of things. This plant’s vascular system is just one of a few examples, the others being a dial clock and the zoom lens on a DSLR.

The trick here is having an X-ray sensing panel that can be reused. It takes around five seconds of exposure to grab each 40×40 cm frame, and these frames are then assembled back into video.
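
Reassembling those exposures into a time-lapse is the easy part. Something along these lines would do it; the folder name, file naming, and frame rate are our assumptions rather than [Ben]’s actual pipeline.

```python
# Minimal sketch: stitch a folder of time-lapse X-ray frames into a video.
# The folder name, file naming, and frame rate are illustrative assumptions.
import glob
import cv2

frames = sorted(glob.glob("xray_frames/*.png"))    # hypothetical frame folder
height, width = cv2.imread(frames[0]).shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("timelapse.mp4", fourcc, 24, (width, height))

for path in frames:
    writer.write(cv2.imread(path))                 # one ~5 second exposure per frame
writer.release()
```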

Now, watching mechanisms move is cool — [Ben’s] 2015 video showing what a phonograph needle in the groove of a vinyl record looks like under a scanning electron microscope is still one of the coolest “camera tricks” we’ve ever seen pulled off. But watching the vascular system of a plant function is the recipe for one of those ah-ha educational moments, so we hope that 7th-grade biology teachers everywhere will find their way to this video.

The apparatus is described in great detail, but regular Hackaday readers will most likely want to focus on the teardown of the X-ray panel, which [Ben] describes as a giant digital camera sensor tuned for receiving X-rays. The source is a 50 kV, 1 mA tube that he compares to what is used at the dental office. (Obviously this requires some forethought to ensure his automated time-lapse setup fails safe around the X-ray tube.) A Cyclone III FPGA drives the panel, communicating with the sensor array via two Ethernet interfaces.

A friend sent the broken panel to [Ben], and he was able to easily repair a MOSFET that had been knocked out of place. [biluni] shows up in the comments of the video, sharing his recollection from working in the industry 15 years ago that a panel like this would have cost $150k! But considering the stellar resolution and repeatable use, it sure as heck beats the old film process.

Continue reading “Observing A Plant’s Vascular System With X-Ray Video”

Eye-Tracking Device Is A Tiny Movie Theatre For Jumping Spiders

The eyes are windows into the mind, and this research into what jumping spiders look at and why required a clever device that performs eye tracking, but for jumping spiders. The eyesight of these fascinating creatures in some ways has a lot in common with that of humans: we both perceive a wide-angle region of lower visual fidelity, but are capable of directing our attention to areas of interest within it to see greater detail. Researchers have been able to perform eye tracking on jumping spiders, literally showing exactly where they are looking in real time, with the help of a custom device that works a little bit like a miniature movie theatre.

A harmless temporary adhesive on top (and a foam ball for a perch) holds a spider in front of a micro movie projector and IR camera. Spiders were not harmed in the research.

To do this, researchers had to get clever. The unblinking lenses of a spider’s two front-facing primary eyes do not move. Instead, to look at different things, the cone-shaped inside of the eye is shifted around by muscles. This effectively pulls the retina around to point towards different areas of interest. Spiders, whose primary eyes have boomerang-shaped retinas, have an X-shaped region of higher-resolution vision that the spider directs as needed.

So how does the spider eye tracker work? The spider perches on a tiny foam ball and is attached — with the help of a harmless, temporary adhesive based on beeswax — to a small bristle. In this way, the spider is held stably in front of a video screen without otherwise being restrained. The spider is shown home movies while an IR camera picks up the reflection of IR light off the retinas inside the spider’s two primary eyes. By superimposing the IR reflection onto the displayed video, it becomes possible to literally see exactly where the spider is looking at any given moment. This is similar in some ways to how eye tracking is done for humans, which also uses IR, but watches the position of the pupil.
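
To give a rough idea of what that superimposing step might look like in software, here is a minimal OpenCV sketch that thresholds the bright retinal reflections in each IR frame and paints them over the matching stimulus frame. The file names, threshold value, and overlay colour are illustrative assumptions, not the researchers’ actual code.

```python
# Minimal sketch: overlay the bright IR retinal reflections onto the stimulus video.
# File names, the brightness threshold, and the overlay colour are assumptions.
import cv2

ir_cap = cv2.VideoCapture("ir_camera.avi")      # hypothetical IR recording
stim_cap = cv2.VideoCapture("stimulus.avi")     # hypothetical stimulus video
writer = None

while True:
    ok_ir, ir_frame = ir_cap.read()
    ok_stim, stim_frame = stim_cap.read()
    if not (ok_ir and ok_stim):
        break

    gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY)
    # The retinal reflections show up as the brightest blobs in the IR image
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.resize(mask, (stim_frame.shape[1], stim_frame.shape[0]))

    overlay = stim_frame.copy()
    overlay[mask > 0] = (0, 0, 255)              # paint the gaze regions red
    blended = cv2.addWeighted(stim_frame, 0.6, overlay, 0.4, 0)

    if writer is None:
        h, w = blended.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("gaze_overlay.mp4", fourcc, 30, (w, h))
    writer.write(blended)

if writer is not None:
    writer.release()
```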

In the short video embedded below, if you look closely you can see the two retinas make an X-shape of a faintly lighter color than the rest of the background. Watch the spider find and focus on the silhouette of a tasty cricket, but when a dark oval appears and grows larger (as it would look if it were getting closer) the spider’s gaze quickly snaps over to the potential threat.

Feel a need to know more about jumping spiders? This eye-tracking research was featured as part of a larger Science News article highlighting the deep sensory spectrum these fascinating creatures inhabit, most of which is completely inaccessible to humans.

Continue reading “Eye-Tracking Device Is A Tiny Movie Theatre For Jumping Spiders”

OAK-D Depth Sensing AI Camera Gets Smaller And Lighter

The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and an ability to run highly complex image processing tasks all on-board, freeing the host from any of the overhead involved.

An example of real-time feature tracking, now in 3D thanks to integrated depth sensing.

The OAK-D Lite camera is actually several elements together in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and onboard AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities together into a unified whole.

The goal is to give embedded systems access to human-like visual perception in real-time, which at its core means detecting things, and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real-time.

So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
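
To give a feel for how little host-side code is involved, here is a minimal sketch using the lower-level depthai Python package (rather than the higher-level SDK wrapper) to stream the color camera’s preview to the host. Exact method names can shift between releases, so treat it as illustrative rather than canonical.

```python
# Minimal sketch: stream the OAK-D Lite's color camera preview to the host.
# Uses the lower-level depthai pipeline API; details may vary between releases.
import cv2
import depthai as dai

# Build a pipeline: one color camera node streaming its preview back over USB
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

# Everything defined in the pipeline runs on the camera; the host just reads frames
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()         # wait for the next preview frame
        cv2.imshow("OAK-D Lite preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```

The same pipeline model is how the depth and neural network features are exposed: extra nodes (the mono cameras, a stereo depth node, a neural network node) get created and linked in the same way, and all of that work stays on the device.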

There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.

Earlier this year we saw the OAK-D used in a system to visually identify weeds and estimate biomass in agriculture, and it’s exciting to see a new model being released. If you’re interested, the OAK-D Lite is available at a considerable discount during the Kickstarter campaign.

3D Printed Adapter Puts Slides In Their Best Light

If you’ve got old family photos on slides there’s an excellent chance you’ve considered digitizing them at one point or another, but perhaps didn’t know the best way of going about it. In that case, this 3D printed adapter designed by [Rostislav Persion] that lets you photograph slides with a standard DSLR may be exactly what you were waiting for.

The idea is simple enough: you place the slide inside the adapter, get your focus right, and snap a picture. Of course, you’ve also got to provide some illumination. In this case, the camera is mounted on a tripod and pointed at an appropriate light source. Once you’ve experimented a bit and got the image backlit the way you want it, you can lock everything in place and easily power through a stack of vintage family memories in no time.

For such a straightforward concept, we really appreciate the little details in the execution. For example, rather than just sliding a 3D printed cylinder over the DSLR’s lens, [Rostislav] came up with a foam-padded “shim” that’s strong enough to hold the adapter on without marring anything. The two-part slide spacer that features a bit of springiness to hold everything tight is also a very nice touch.

An approach like this should work nicely for the number of slides most families are likely to have, but if you’re in a position where you need to digitize thousands of images, some automation would certainly help things along.

World’s Cheapest And Possibly Worst IR Camera

Don’t blame us for the title. [CCrome] admits it may well be the cheapest and worst IR camera available. The concept is surprisingly simple. Mount a cheap Harbor Freight non-contact thermometer on a 3D printer carriage and use it to scan the target. The design files are available on GitHub.

There is, of course, an Arduino to grab the data and send it to the PC. Some Python code takes care of converting it into an image.
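
The PC side doesn’t need to be anything fancy. A sketch along these lines would turn the incoming readings into a false-colour image; the serial port, scan grid, and the “x,y,temp” line format are illustrative assumptions, not [CCrome]’s actual protocol.

```python
# Minimal sketch: collect scanned IR thermometer readings over serial and plot them.
# The serial port, scan grid size, and "x,y,temp" line format are assumptions.
import numpy as np
import serial                     # pyserial
import matplotlib.pyplot as plt

COLS, ROWS = 64, 48               # assumed scan resolution
image = np.zeros((ROWS, COLS))

with serial.Serial("/dev/ttyUSB0", 115200, timeout=10) as port:
    for _ in range(ROWS * COLS):
        line = port.readline().decode().strip()   # e.g. "12,7,36.4"
        x, y, temp = line.split(",")
        image[int(y), int(x)] = float(temp)

plt.imshow(image, cmap="inferno")
plt.colorbar(label="Temperature (°C)")
plt.savefig("ir_scan.png")
```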

Perhaps you don’t need a camera, but having a way to communicate with an $11 IR temperature sensor might come in handy someday. You do have to mash the measurement button down, so [CCrome] used the 3D printer to make a clamp for the button that also holds the POGO pins to the PCB. We would have been tempted to solder across the switch and also solder the wires to the pad. But, then again, you need a 3D printer for the project anyway.

Don’t expect the results you would get from a real thermal sensor. If you want that, you may have to build it yourself or open your wallet wide. If you need some inspiration for a use case, look at the thermal camera contest from a few years back.