Kinect And Raspberry Pi Add Focus Pulling To DSLR

Prosumer DSLRs have been a boon to the democratization of digital media. Gear that once commanded professional prices is now available to those on more modest budgets. Not only has this unleashed a torrent of online content, it has also started a wave of camera hacks and accessories, like this automatic focus puller based on a Kinect and a Raspberry Pi.

For [Tom Piessens], the Canon EOS 5D has been a solid platform but suffers from a problem. The narrow depth of field possible with DSLRs makes it difficult to maintain focus on subjects that are moving relative to the camera, making follow-focus scenes like this classic hard to reproduce. Aiming for a better system than the stock autofocus, [Tom] grafted a Kinect sensor and a stepper motor actuator to a Raspberry Pi, and used the Kinect’s depth map to drive the focus ring. Parts are laser-cut, including a nice enclosure for the Pi and display that makes the whole thing reasonably portable. The video below shows the focus remaining locked on a selected region of interest. It seems like movement along only one axis is allowed; we’d love to see this system expanded to follow a designated object no matter where it moves in the frame.
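[Tom] hasn't published the control loop itself, but the core idea — sample the Kinect's depth map over a region of interest, translate distance into a lens position, and nudge the stepper toward it — can be sketched in a few lines. This is a minimal, hypothetical sketch, not [Tom]'s code: it assumes a 16-bit depth frame in millimeters (as libfreenect-style drivers provide), a hand-measured depth→steps calibration table for the particular lens, and made-up function names throughout.

```python
import numpy as np

def roi_depth_mm(depth_frame, roi):
    """Median depth (mm) inside an (x, y, w, h) region of interest."""
    x, y, w, h = roi
    patch = depth_frame[y:y+h, x:x+w]
    valid = patch[patch > 0]  # Kinect-style sensors report 0 for unknown depth
    return float(np.median(valid)) if valid.size else None

def depth_to_steps(depth_mm, calibration):
    """Interpolate a stepper position from a measured depth->steps table."""
    depths, steps = zip(*sorted(calibration))
    return int(round(np.interp(depth_mm, depths, steps)))

def focus_step_delta(current_steps, target_steps, max_step=20):
    """Clamp each correction so the focus ring racks smoothly, not jerkily."""
    delta = target_steps - current_steps
    return max(-max_step, min(max_step, delta))

# Simulated 480x640 depth frame with the subject about 1.5 m away
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[200:280, 280:360] = 1500
calib = [(500, 0), (1500, 400), (3000, 700)]  # invented depth->steps pairs

d = roi_depth_mm(frame, (280, 200, 80, 80))   # 1500.0
target = depth_to_steps(d, calib)             # 400
print(focus_step_delta(350, target))          # 20 (a +50 error, clamped)
```

The clamp in `focus_step_delta` is one plausible way to get the smooth pulls visible in the video; a real build would tune it (or use a PID loop) against the stepper's speed and the lens's focus throw.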

If you’re in need of a follow-focus rig but don’t have a geared lens, check out these 3D-printed lens gears. They’d be a great complement to this backwoods focus-puller.

https://www.youtube.com/watch?v=SjaeiXMa5cw

19 thoughts on “Kinect And Raspberry Pi Add Focus Pulling To DSLR”

  1. This would be a new take on follow focus; usually it is manually controlled or timed, not sensor driven.
    I like the look of the footage. It does feel a bit like some of the all-in-one robotic camera systems, but that is a question of fiddling with parameters.

    1. I too have recently bought a D500. The main reason behind my purchase was the exceptional AF system. I find tracking moving subjects such as aircraft and cars a breeze.
      This rig looks like it would be better suited to video of subjects such as people, and where repeatability is important (stop motion etc.).

    2. When the Canon 5D mk2 was released in 2008, I was blown away by its video features… It caused a shock wave in the video world, providing independent video makers a high-value-for-money camera. Now 8 years later Canon has completely let me down… The video features of their DSLRs are horribly weak compared with other brands… But having some quite expensive lenses, it is not easy to go for another brand.

    3. 8 years ago, the Canon 5D mk2 was released and caused a shockwave in the video world for its excellent video features. Now 8 years later, the video features of Canon DSLRs have almost not evolved and are very weak compared to other brands. But an expensive collection of lenses makes it hard for me to go to another brand.

        1. When I bought my 6D a few years ago I was in doubt whether to switch to Sony. Sony had much more evolved video features, even the basic ones such as 2K/4K resolution at that time and slomo (frame rate). I read an article a few days ago on the 5D legacy, which completely shared my feeling. The 5D mk2 really was a legend… But there has been almost no evolution in video features from then until the 5D mk4 now, 8 years later. For 4K, the mk4 still crops the sensor, so you lose the shallow depth of field of your full-frame sensor in video. Sony, as I understand it, uses the complete sensor width. Focus tracking in the depth sense is a feature that I also miss; that is why I did this experiment with a Kinect (although I don’t think this feature is available in many DSLRs in video mode… I guess most still work in contrast-mode focusing… and that is not reliable enough). And of course the fact that I can control my Canon camera over wifi for pictures, but not for video, is also something that I really cannot understand. I can see the liveview in picture mode over wifi, but when I switch to movie mode, the camera says wifi is not available in movie mode. Come on… Seriously???

          1. Ha, yes, that liveview thing can be annoying. Many cameras have all kinds of limitations, with only a few being hardware dependent these days.
            I have a camera that has HDMI out, but only for shots and movies already taken — no live view at all on the HDMI. And I used to have a camera that dropped to low-res mode when it was live, but that was some time ago and probably a hardware limitation. Anyway, it’s one of those things you need to check before purchase, and the only way to check reliably is hoping a reviewer mentions it, or downloading the manual and going through that. Which is in fact a generally good idea after you’ve narrowed your choices: get that manual PDF and use it to compare.

    1. Most modern DSLRs do not focus well in video. That back-and-forth lock-in focus technique for still photos is terrible when you shoot video. That is what this project aims to fix.

      1. Some newer DSLRs have phase-type sensors in the main sensor, so they can focus accurately without the hunting you get with contrast-detection focus in standard live view.

  2. I wonder if it would be simpler to just use an ultrasonic sensor like the old Polaroids used to use. Or an infrared laser rangefinder. There are cheap Chinese ones all over eBay.

    1. I tested a lot with ultrasound sensors… they are absolutely not reliable nor accurate. I am convinced that depth-sensing technology will evolve at a fast pace in the coming years, driven by the developments in autonomous vehicles. TOF chips are becoming cheaply available.
      I did some research into lasers, but I could not find one that meets my specs (range + visibility… I don’t want a red dot in my footage). Also, you only get one point of depth. The Kinect is an old piece of hardware that I used because I already had it, but TOF (time of flight) cameras are becoming commonly available. Having a complete depth map of the scene has numerous advantages.

  3. Had a similar idea a while back, but then I came to the conclusion that it doesn’t make too much sense to reinvent the autofocus…
    Also, this construction wouldn’t even work in normal daylight conditions.

    1. Cars need to have a complete and accurate depth map of their environment in all circumstances from broad daylight to dark night to drive autonomously. Depth sensing technology is evolving very rapidly today.
      For me it makes sense to rethink any feature if better technology comes along.
