Clever Stereo Camera Uses Sony Wireless Camera Modules

Stereo photography cameras are difficult to find, so we’re indebted to [DragonSkyRunner] for sharing their build of an exceptionally high-quality example. A stereo camera has two separate lenses and sensors a fixed distance apart, such that when the two resulting images are each viewed by the corresponding eye, the brain perceives a 3D effect. This camera mounts two individual Sony cameras on a well-designed wooden chassis, but that simple description hides a much more interesting and complex reality.

Sony once tested the photography waters with the QX series, a pair of unusual mirrorless camera models which took the form of just a lens and sensor, relying on a wireless connection to a smartphone for display and data transfer. This build uses two of these, with a pair of Android-running Odroid C2s standing in for the smartphones. Their HDMI video outputs are captured by a pair of HDMI capture devices hooked up to a Raspberry Pi 4, and a couple of Arduinos simulate mouse inputs to the Odroids. It’s a bit of a Rube Goldberg device, but it allows the system to use Sony’s original camera software. An especially neat feature is that the camera unit and display unit can be separated for remote photography, making it an extremely versatile camera.
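The software side of the build isn’t spelled out in detail, but as a rough sketch of what the Raspberry Pi 4 end of the chain could look like, here’s how two UVC HDMI capture dongles might be read and previewed side by side with OpenCV. The device indices and the preview itself are assumptions rather than details from the actual build.

```python
import cv2

# Hypothetical sketch: read two UVC HDMI capture dongles (one per camera)
# and show the left/right feeds next to each other as a stereo preview.
# Device indices 0 and 1 are assumptions.
left = cv2.VideoCapture(0)
right = cv2.VideoCapture(1)

while left.isOpened() and right.isOpened():
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    pair = cv2.hconcat([frame_l, frame_r])   # assumes both feeds share a resolution
    cv2.imshow("stereo preview", pair)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break

left.release()
right.release()
cv2.destroyAllWindows()
```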

It’s good to see a stereo photography camera designed specifically for high-quality photography; previous ones we’ve seen have been closer to machine vision systems.


Review: Vizy Linux-Powered AI Camera

Vizy is a Linux-based “AI camera” built around the Raspberry Pi 4 that uses machine learning and machine vision to pull off some neat tricks, with a design centered on hackability. I found it ridiculously simple to get up and running, and it was just as easy to make changes of my own and start getting ideas.

Person and cat with machine-generated tags identifying them
Out of the box, Vizy is only a couple lines of Python away from being a functional Cat Detector project.

I was running pre-installed examples written in Python within minutes, and editing that very same code in about 30 seconds more. Even better, I did it all without installing a development environment, or even leaving my web browser, for that matter. I have to say, it made for a very hacker-friendly experience.
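To give a flavor of what a bare-bones cat detector looks like in Python, here’s a generic OpenCV Haar-cascade version. This is a stand-in of my own rather than Vizy’s bundled API, and the image file names are placeholders.

```python
import cv2

# Generic stand-in, not Vizy's own API: find cat faces in a still frame
# using the cat-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalcatface.xml")

frame = cv2.imread("snapshot.jpg")              # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, "cat", (x, y - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

cv2.imwrite("tagged.jpg", frame)                # placeholder output image
```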

Vizy comes from the folks at Charmed Labs; this isn’t their first stab at smart cameras, and it shows. They also created the Pixy and Pixy 2 cameras, of which I happen to own several. I have always devoured anything that makes machine vision more accessible and easier to integrate into projects, so when Charmed Labs kindly offered to send me one of their newest devices, I was eager to see what was new.

I found Vizy to be a highly-polished platform with a number of truly useful hardware and software features, and a focus on accessibility and ease of use that I really hope to see more of in future embedded products. Let’s take a closer look.

Continue reading “Review: Vizy Linux-Powered AI Camera”

Astrophotography On The Game Boy Camera

The Game Boy Camera was the first digital camera that many of us ever interacted with. At the time it was fairly groundbreaking to take pictures without film, even though the resolution was extremely low by modern standards and it could only capture 2-bit grayscale. It’s been long enough since its release that it’s starting to become a popular classic with all kinds of hacks and modifications, like this one which adds modern SLR camera lenses, letting it take pictures of the Moon.

The limitations of the camera make for a fairly challenging build. Settings like exposure are automatic on the Game Boy Camera and can’t be changed; the user can only adjust contrast and brightness. But the small sensor size means that astrophotography can be done with a much smaller lens than a photographer would need on a modern DSLR. Once a mount was 3D printed to allow the lenses to be changed and a tripod mount was built, it was time to take some pictures of the Moon.

Thanks to the interchangeable lenses of this build, the camera can also capture macro images. The build goes into great detail on how to set all of this up, even going as far as giving tips on how to better 3D print interlocking threads, so it’s well worth a look. And for other Game Boy Camera builds, check out this one which allows the platform to send its pictures over WiFi.

Continue reading “Astrophotography On The Game Boy Camera”

A 3D Printed 35mm Movie Camera

Making a camera can be as easy as taking a cardboard box with a bit of film and a pinhole, but making a more accomplished camera requires rather more work. A movie camera has all the engineering challenges of a regular camera, with the added complication of a continuous film transport mechanism and shutter. Too much work? Not if you are [Yuta Ikeya], whose 3D printed movie camera uses commonly-available 35 mm film stock rather than the 8 mm or 16 mm film you might expect.

3D printing might not seem to lend itself to the intricate mechanism of a movie camera; however, with the tech of the 2020s in hand, he’s eschewed a complex mechanism in favour of an Arduino and a pair of motors. The camera is hardly petite, but it’s still small enough to carry comfortably on a shoulder. The film must be loaded into a pair of cassettes, which are pleasingly designed to be reversible, with either able to serve as take-up or dispensing spool.

The resulting images have an extreme wide-screen format and a pleasing artistic feel. Looking at them we’re guessing there may be a light leak or two, but it’s fair to say that they enhance the quality rather than detract from it. Those of us who dabble in movie cameras can be forgiven for feeling rather envious.

We’ve reached out to him asking whether the files might one day be made available; meanwhile, you can see it in action in the video below the break.

Continue reading “A 3D Printed 35mm Movie Camera”

Is The IPhone Camera Too Smart? Or Not Smart Enough?

What is a photograph? Technically and literally speaking, it’s a drawing (graph) of light (photo). Sentimentally speaking, it’s a moment in time, captured for all eternity, or until the medium itself rots away. Originally, these light-drawings were recorded on film that had to be developed with a chemical process, but are nowadays often captured by a digital image sensor and available for instant admiration. Anyone can take a photograph, but producing a good one requires some skill — knowing how to use the light and the camera in concert to capture an image.

Eye-Dynamic Range

The point of a camera is to preserve what the human eye sees in a single moment in space-time. This is difficult because eyes have what is described as high dynamic range. Our eyes can process many exposure levels in real time, which is why we can look at a bright sky and pick out details in the white fluffy clouds. But a camera sensor can only capture one exposure level at a time.

In the past, photographers would create high dynamic range images by taking multiple exposures of the same scene and stitching them together. Done just right, each element in the image looks as it does in your mind’s eye. Done wrong, it robs the image of contrast and you end up with a murky, surreal soup.

Image via KubxLab
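As a rough illustration of the multiple-exposure approach (and emphatically not Apple’s actual pipeline), here’s how a few bracketed shots can be blended with OpenCV’s Mertens exposure fusion. The file names are placeholders.

```python
import cv2

# Placeholder file names: three bracketed exposures of the same scene
exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion keeps the best-exposed parts of each frame,
# without needing to know the exact exposure times.
fused = cv2.createMergeMertens().process(exposures)   # float image, roughly 0..1

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```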

Continue reading “Is The IPhone Camera Too Smart? Or Not Smart Enough?”

OpenCV Brings Pinch To Zoom Into The Real World

Gesture controls arrived in the public consciousness a little over a decade ago as touchpads and touchscreens became more popular. The main limitation of gesture controls, at least as far as [Norbert] is concerned, is that they can only control objects in a virtual space. He was hoping to use gestures to control a real-world object instead, and created this device which uses gestures to control an actual picture.

In this unique augmented reality device, not only is the object being controlled in the real world, but the gestures are monitored there as well, thanks to an OpenCV-based computer vision system watching his hand. The position data is fed into an algorithm which controls a physical picture mounted on a slender robotic arm. Now, when [Norbert] “pinches to zoom”, the servo attached to the picture physically brings it closer to or further from his field of view. He can also use other gestures to move the picture around.
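[Norbert]’s own code isn’t reproduced here, but as a sketch of the general idea, here’s how the thumb-to-index “pinch” distance could be measured from a webcam and mapped onto a servo angle. This sketch leans on MediaPipe’s hand tracker rather than a hand-rolled OpenCV pipeline, and the distance-to-angle mapping numbers are guesses.

```python
import cv2
import mediapipe as mp

# Sketch only: measure the thumb-to-index "pinch" distance with MediaPipe
# Hands and turn it into a 0-180 degree servo angle. The actual servo link
# is not reproduced; we just print the value.
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]          # thumb tip and index fingertip
        pinch = ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5
        # Map a roughly 0.02..0.30 normalized distance onto 0..180 degrees
        angle = max(0.0, min(180.0, (pinch - 0.02) / 0.28 * 180.0))
        print(f"pinch={pinch:.3f} -> servo angle={angle:.0f}")

cap.release()
```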

While this gesture-controlled machine is certainly a proof-of-concept, there are plenty of other uses for gesture controls of real-world objects. Any robotics platform could benefit from an interface like this, or even something slightly more mundane like an office PowerPoint presentation. Opportunity abounds, but if you need a primer for OpenCV take a look at this build which tracks a hand in minute detail.

Continue reading “OpenCV Brings Pinch To Zoom Into The Real World”


Solving Wordle By Adding Machine Vision To A 3D Printer

Truth be told, we haven’t jumped on the Wordle bandwagon yet, mainly because we don’t need to be provided with yet another diversion — we’re more than capable of finding our own rabbit holes to fall down, thank you very much. But the word puzzle does look intriguing, and since the rules and the interface are pretty simple, it’s no wonder we’ve seen a few efforts like this automated Wordle solver crop up lately.

The goal of Wordle is to find a specific five-letter, more-or-less-common English word in as few guesses as possible. Clues are given at each turn by color-coding the letters to indicate whether they appear in the word and whether they’re in the correct position. [iamflimflam1]’s approach was to mount a Raspberry Pi camera over the bed of a 3D printer and fit a phone stylus in place of the print head. A phone running Wordle is placed on the printer bed, and OpenCV is used to find both the screen of the phone and the position of the phone on the printer bed. From there, the robot uses the stylus to enter an opening word, analyzes the colors of the boxes, and narrows in on a solution.
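The full bot handles the camera calibration and stylus control as well; as a simplified sketch of just the “narrowing in” step, each round of color feedback can be used to filter a candidate word list like this. The tiny word list and the feedback string here are made up, and duplicate-letter corner cases are glossed over to keep it short.

```python
def consistent(candidate: str, guess: str, feedback: str) -> bool:
    """Could `candidate` still be the answer, given `guess` and its colors?
    'g' = green (right letter, right spot), 'y' = yellow (in the word,
    wrong spot), 'x' = gray (not in the word)."""
    for i, (letter, color) in enumerate(zip(guess, feedback)):
        if color == "g" and candidate[i] != letter:
            return False
        if color == "y" and (candidate[i] == letter or letter not in candidate):
            return False
        if color == "x" and letter in candidate:
            return False
    return True

# Made-up mini word list; feedback for the guess "raise":
# r = yellow, a/i/s = gray, e = green.
words = ["route", "wrote", "corer", "eerie", "large"]
remaining = [w for w in words if consistent(w, "raise", "yxxxg")]
print(remaining)   # -> ['wrote']
```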

The video below shows the bot in use, and source code is available if you want to try it yourself. If you need a deeper dive into Wordle solving algorithms, and indeed other variant puzzles in the *dle space, check out this recent article on reverse engineering the popular game.

Continue reading “Solving Wordle By Adding Machine Vision To A 3D Printer”