Stereophotography cameras are difficult to find, so we’re indebted to [DragonSkyRunner] for sharing their build of an exceptionally high-quality example. A stereo camera has two separate lenses and sensors a fixed distance apart, such that when each eye views its corresponding image, a 3D effect results. This camera takes two individual Sony cameras and mounts them on a well-designed wooden chassis, but that simple description hides a much more interesting and complex reality.
Sony once tested the photography waters with its QX series, a pair of unusual mirrorless camera models that took the form of just a sensor and lens. A wireless connection to a smartphone provided the display and data transfer. This build uses two of these, with a pair of Android-running Odroid C2s standing in for the smartphones. Their HDMI video outputs are captured by a pair of HDMI capture devices hooked up to a Raspberry Pi 4, and a couple of Arduinos simulate mouse inputs to the Odroids. It’s a bit of a Rube Goldberg device, but it allows the system to use Sony’s original camera software. An especially neat feature is that the camera unit and display unit can be separated for remote photography, making it an extremely versatile camera.
It’s good to see a stereo photography camera designed specifically for high-quality photography; previous ones we’ve seen have been closer to machine vision systems.
13 thoughts on “Clever Stereo Camera Uses Sony Wireless Camera Modules”
My first thought is how well would this work for photogrammetry?
“…there is a 3D effect”
It’s not just an “effect.” What you have here is a 3-D camera. They’ve been around since the 1800s.
To be fair, your eyes are a pair of sensors and lenses set a fixed distance apart.
Nice to see other folks are on DIY stereographic projects as well, I’ve a little project myself on http://www.stereoscope.nl
For good-quality stereo video, one would want some method of synchronizing the image capture of the two cameras. Otherwise, moving objects will appear at the wrong depths due to the slightly different lag.
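To put a rough number on the point above, here is a back-of-the-envelope sketch of how shutter desync turns into depth error, using the classic pinhole stereo relation Z = f·B/d. All of the numbers (baseline, focal length, subject speed) are illustrative assumptions, not figures from this build.

```python
# Rough estimate of the depth error caused by shutter desync between the
# two cameras of a stereo pair. All numbers below are illustrative.

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic pinhole stereo: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def desync_depth_error(baseline_m, focal_px, true_depth_m,
                       lateral_speed_mps, desync_s):
    """Depth error when one image is captured desync_s later than the
    other while the subject moves laterally at lateral_speed_mps."""
    # True disparity for a static subject at this depth.
    d_true = focal_px * baseline_m / true_depth_m
    # Lateral motion during the desync shifts the subject by this many
    # pixels in one image only, which reads as a disparity error.
    shift_px = focal_px * (lateral_speed_mps * desync_s) / true_depth_m
    apparent_depth = depth_from_disparity(baseline_m, focal_px,
                                          d_true - shift_px)
    return apparent_depth - true_depth_m

# Example: 65 mm baseline, 1000 px focal length, subject 2 m away
# walking at 1.5 m/s past the camera, shutters desynced by 10 ms.
err = desync_depth_error(0.065, 1000.0, 2.0, 1.5, 0.010)
print(f"apparent depth error: {err:.3f} m")  # apparent depth error: 0.600 m
```

Even a 10 ms desync puts a walking subject more than half a metre off in apparent depth in this toy scenario, which is why hard shutter sync matters so much for moving scenes.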
This is key! Also needs to be super level and aligned
Was really worried about this when I was testing it initially. The only real thing keeping the shutter in sync is the fact that the same button is wired into both microcontrollers, minus whatever fraction of a second the different wire lengths would induce, and there’s really nothing hard-syncing the two whenever some change happens (the mouse cursor desyncs constantly, though that may be more due to slightly different analog reads between the microcontrollers). So it was a real possibility the whole project would be rendered pointless, more so given the hand-held nature of it, where a desync might result in slightly skewed stereo pairs.
Then I tested it, and I’m happy to report that the photos I’ve taken so far came out excellently, both in general quality and in the fact that the stereo pairs didn’t have any observable wonkiness. Haven’t tested with video yet, but at least there, if the streams were to desync, at worst I could shift the L/R videos a frame or two relative to each other to get them largely back in sync.
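The “shift the L/R videos a frame or two” fix described above can be sketched very simply: given the two frame sequences and a known offset, trim both so the remaining frames line up. This is just an illustrative helper, not part of the actual build.

```python
# Minimal sketch of resyncing two frame sequences by a known offset.

def resync(left_frames, right_frames, offset):
    """Align two frame lists. offset > 0 means the right stream lags by
    that many frames; offset < 0 means the left stream lags."""
    if offset > 0:
        # Right stream started late: drop the first frames on the left.
        left_frames = left_frames[offset:]
    elif offset < 0:
        # Left stream started late: drop the first frames on the right.
        right_frames = right_frames[-offset:]
    # Trim both to the same length so every frame has a partner.
    n = min(len(left_frames), len(right_frames))
    return left_frames[:n], right_frames[:n]

# Toy example with frame indices standing in for images.
L, R = resync([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 2)
print(L, R)  # [2, 3, 4] [0, 1, 2]
```

In practice the same trimming can be done losslessly on the recorded files with a video tool, as long as the offset stays constant for the whole clip.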
Indeed, this is usually the hardest problem to solve when building a 3D twin-rig camera system. If the sync between the two camera shutters is not perfect or near-perfect, you will get an effect known as “binocular rivalry” or “retinal rivalry” if there are any moving objects in the scene. Even leaves on trees, blowing in the breeze, will cause a disquieting flickering effect when the 3D image is viewed. For more info on this effect, check out this Wikipedia article:
True. My experience is that the image can be “liquid” with movement. RPi HQ sensors can be synchronised.
I was hoping this would work by using the API ( https://developer.sony.com/develop/cameras/#overview-content ), since I have a Sony QX10 lying in a drawer waiting for a neat project, but the good news is that somebody else has already made something in Python: https://sourceforge.net/projects/sony-desktop-dsc-qx10/
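For reference, the Camera Remote API linked above is JSON-RPC over HTTP. Below is a hedged sketch of what a shutter trigger looks like; the endpoint URL and method name follow Sony’s published API documentation, but I haven’t verified them against a QX10 specifically, so treat them as assumptions.

```python
# Hedged sketch of triggering a shot via Sony's Camera Remote API
# (JSON-RPC over HTTP). Endpoint and method name are assumptions based
# on Sony's published docs, not verified on a QX10.

import json
import urllib.request

def rpc_payload(method, params=None, call_id=1, version="1.0"):
    """Build the JSON-RPC body the Camera Remote API expects."""
    return {"method": method, "params": params or [],
            "id": call_id, "version": version}

def take_picture(camera_url="http://192.168.122.1:8080/sony/camera"):
    """Fire the shutter and return the parsed JSON response.
    Requires being joined to the camera's Wi-Fi access point."""
    body = json.dumps(rpc_payload("actTakePicture")).encode()
    req = urllib.request.Request(
        camera_url, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

print(rpc_payload("actTakePicture"))
```

Driving both cameras this way would sidestep the Odroid-plus-Arduino mouse emulation entirely, at the cost of depending on the camera’s Wi-Fi link.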
I’d imagine pretty well. No worse than one of the cameras normally, but it has the benefit of being able to take two photos simultaneously if we’re talking about trying to ‘scan’ something out in the field. The only real downside is that it takes an eternity to do its thing between photos, so the actual time to ‘scan’ anything would be kinda long.
Can you share info about getting two captured video signals into the Pi? I tried using OBS to record (yes, on the Pi) and could only see one USB video source.
Literally followed https://www.rickmakes.com/cheap-hdmi-usb-capture-card-on-a-raspberry-pi-4/ and https://www.rickmakes.com/capturing-two-hdmi-streams-at-once-on-a-raspberry-pi-4/ to get it going with some modifications for my use case.
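For anyone following the same write-ups, the gist is running one capture process per dongle. Here is a small Python sketch that builds the ffmpeg command lines; the device paths and the low 640×480 size are assumptions matching the setup described in these comments, so adjust them for your hardware.

```python
# Sketch of capturing two HDMI-to-USB dongles at once by launching one
# ffmpeg process per device. /dev/video0 and /dev/video1 are assumed
# device paths; adjust for your hardware.

import shlex
import subprocess

def capture_cmd(device, out_file, size="640x480", fps=30):
    """ffmpeg v4l2 capture command for one UVC capture dongle.
    MJPEG keeps USB bandwidth low enough for two devices on one bus,
    and -c copy records it without re-encoding."""
    return (f"ffmpeg -f v4l2 -input_format mjpeg "
            f"-video_size {size} -framerate {fps} "
            f"-i {device} -c copy {out_file}")

cmds = [capture_cmd("/dev/video0", "left.mkv"),
        capture_cmd("/dev/video1", "right.mkv")]
print(cmds[0])

# To actually record, launch both in parallel (uncomment on the Pi):
# procs = [subprocess.Popen(shlex.split(c)) for c in cmds]
# for p in procs:
#     p.wait()
```

Note that each capture dongle typically enumerates two /dev/video nodes, so the second physical device may appear as /dev/video2 rather than /dev/video1.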
Though hilariously enough, I LITERALLY found out that “HDMI to RPi Camera CSI” boards are a thing that exists the DAY AFTER finishing the camera, so I’ll be getting a StereoPi + Compute Module + two of those so I can get something higher resolution without the latency (can only capture at 480×640 now because the latency would be too horrific otherwise), whenever the price of Compute Modules comes back down to earth.