Prosumer DSLRs have been a boon to the democratization of digital media. Gear that once commanded professional prices is now available to those on more modest budgets. Not only has this unleashed a torrent of online content, it has also started a wave of camera hacks and accessories, like this automatic focus puller based on a Kinect and a Raspberry Pi.
For [Tom Piessens], the Canon EOS 5D has been a solid platform but suffers from a problem. The narrow depth of field possible with DSLRs makes it difficult to maintain focus on subjects that are moving relative to the camera, making follow-focus scenes like this classic hard to reproduce. Aiming for a better system than the stock autofocus, [Tom] grafted a Kinect sensor and a stepper motor actuator to a Raspberry Pi, and used the Kinect’s depth map to drive the focus ring. Parts are laser-cut, including a nice enclosure for the Pi and display that makes the whole thing reasonably portable. The video below shows the focus remaining locked on a selected region of interest. It seems like movement along only one axis is allowed; we’d love to see this system expanded to follow a designated object no matter where it moves in the frame.
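The core control loop here is easy to picture: grab a depth frame from the Kinect, take the median depth over the selected region of interest, look that distance up in a per-lens depth-to-focus calibration, and step the motor toward the matching focus-ring position. Below is a minimal sketch of that idea (not [Tom]'s actual code), assuming libfreenect's Python bindings and a simple step/dir stepper driver on the Pi's GPIO; the pin numbers, ROI, and calibration table are purely illustrative.

```python
# Rough sketch of the follow-focus loop: read the Kinect depth map, take the
# median depth over a selected region of interest, and drive a stepper on the
# lens focus ring toward the matching position. Pins, ROI, and calibration
# values are made up for illustration, not taken from the project.
import time
import numpy as np
import freenect                      # libfreenect Python bindings
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 20, 21           # assumed wiring for a step/dir driver
ROI = (slice(200, 280), slice(280, 360))   # rows, cols of the tracked region

# Calibration: subject distance (mm) -> stepper position (steps from near stop).
# In practice this table would be measured per lens.
CAL_DEPTH = np.array([ 600, 1000, 1500, 2500, 4000], dtype=float)
CAL_STEPS = np.array([   0,  220,  430,  640,  800], dtype=float)

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)
position = 0                         # current focus-motor position in steps

def target_steps(depth_mm):
    """Map a measured subject distance to a focus-ring position."""
    return int(np.interp(depth_mm, CAL_DEPTH, CAL_STEPS))

def step_toward(target):
    """Issue step pulses until the motor reaches the target position."""
    global position
    GPIO.output(DIR_PIN, target > position)
    for _ in range(abs(target - position)):
        GPIO.output(STEP_PIN, True)
        time.sleep(0.0005)
        GPIO.output(STEP_PIN, False)
        time.sleep(0.0005)
    position = target

try:
    while True:
        depth, _ = freenect.sync_get_depth(format=freenect.DEPTH_MM)
        roi = depth[ROI]
        valid = roi[roi > 0]         # Kinect reports 0 where depth is unknown
        if valid.size:
            step_toward(target_steps(np.median(valid)))
        time.sleep(0.03)             # roughly track the Kinect's 30 fps
finally:
    GPIO.cleanup()
```

In practice you would also want some smoothing and a small dead band around the target position, so the motor doesn't chatter every time the depth reading jitters by a millimetre.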
If you’re in need of a follow-focus rig but don’t have a geared lens, check out these 3D-printed lens gears. They’d be a great complement to this backwoods focus-puller.
This would be a new take on follow focus; usually it is manually controlled or timed, not sensor-driven.
I like the look of the footage. It does feel a bit like some of the all-in-one robotic camera systems, but that is a question of fiddling with parameters.
This one was also sensor-driven: http://hackaday.com/2014/11/12/an-external-autofocus-for-dslrs/
The link in the article might need an update since they changed the URL: http://digitalmedia-bremen.de/en/project/sensopoda/
My Nikon D500 has no problem following a moving target in any direction.
I too have recently bought a D500. The main reason behind my purchase was the exceptional AF system. I find tracking moving subjects such as aircraft and cars a breeze.
This rig looks like it would be better suited to video of subjects such as people, and where repeatability is important (stop motion etc.).
When the Canon 5D mk2 was released in 2008, I was blown away by its video features… It caused a shock wave in the video world, providing independent video makers with a high-value-for-money camera. Now, 8 years later, Canon has completely let me down… The video features of their DSLRs are horribly weak compared with other brands… But having some quite expensive lenses, it is not easy to switch to another brand.
So tell me, what features are you thinking of that are missing but available on other brands? Are you talking 4K and bit depth or what?
When I bought my 6D a few years ago I was in doubt whether to switch to Sony. Sony had much more evolved video features, even basic ones such as 2K/4K resolution at that time and slow motion (frame rate). I read an article a few days ago on the 5D legacy, which completely matched my feelings. The 5D mk2 really was a legend… But almost no evolution in video features from then until the 5D mk4 now, 8 years later. For 4K, the mk4 still crops the sensor, so you lose the shallow depth of field of your full-frame sensor in video. Sony, I understood, uses the complete sensor width. Focus tracking in the depth sense is a feature that I also miss; that is why I did this experiment with a Kinect (although I don't think this feature is available in many DSLRs in video mode… I guess most still use contrast-based focusing, and that is not reliable enough). And of course the fact that I can control my Canon camera over WiFi for pictures, but not for video, is also something that I really cannot understand. I can see the live view over WiFi when in picture mode, but when I switch to movie mode, the camera says WiFi is not available in movie mode. Come on… Seriously???
Ha, yes, that live view thing can be annoying. Many cameras have all kinds of limitations, with only a few being hardware-dependent these days.
I have a camera that has HDMI out, but only for shots and movies already taken; no live view at all on the HDMI. And I used to have a camera that dropped to low-res mode when it was live, but that was some time ago and probably a hardware limitation. Anyway, it's one of those things you need to check before purchase, and the only way to check reliably is hoping a reviewer mentions it or downloading the manual and going through that. Which, in fact, is a generally good idea after you've narrowed your choices: get that manual PDF and use it to compare.
Don’t forget about the mandatory LEGO option.
http://hackaday.com/2016/01/01/hillbilly-lego-focus-puller/
Nice project, but modern DSLRs now have touch screens and awesome focus tracking.
Most modern DSLRs do not focus well for video. That back-and-forth lock-in focus technique for still photos is terrible when you do video. That is what this project aims to fix.
Some newer DSLRs have phase-type sensors built into the main sensor, so they can focus accurately without hunting, unlike the contrast-detection focusing you get with standard live view.
I wonder if it would be simpler to just use an ultrasonic sensor like the old Polaroids used to use. Or an infrared laser rangefinder. There are cheap Chinese ones all over eBay.
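For anyone wanting to try the ultrasonic route, reading one of those cheap HC-SR04-style modules from a Pi only takes a few lines; here is a rough sketch (pin numbers assumed), though as the replies below point out, the wide, dumb beam and jittery readings make it a questionable fit for pulling focus.

```python
# Quick sketch: read distance from a cheap HC-SR04-style ultrasonic module on a
# Raspberry Pi. Pin numbers are assumed; the module's 5 V echo pin needs a
# divider or level shifter before it touches the Pi's GPIO.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                  # assumed wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm(timeout=0.05):
    """Trigger one ping and return the distance in centimetres (or None)."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)                # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)

    deadline = time.time() + timeout
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to start
        if time.time() > deadline:
            return None
    start = time.time()
    while GPIO.input(ECHO) == 1:     # wait for the echo pulse to end
        if time.time() > deadline:
            return None
    elapsed = time.time() - start
    return elapsed * 34300 / 2       # speed of sound in cm/s, out and back

try:
    while True:
        d = read_distance_cm()
        print("no echo" if d is None else "%.1f cm" % d)
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```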
Problem is that you might have other objects nearby that disturb that signal since it’s a flat dumb beam.
But for many uses you are right I bet.
Yep. But putting a Kinect on top of a DSLR is a non-starter for me. Plus there are manual follow-focus rigs for DSLRs. Just takes practice.
I tested a lot with ultrasound sensors… they are absolutely not reliable and not accurate. I am convinced that depth-sensing technology will evolve at a fast pace in the coming years, driven by the developments in autonomous vehicles. TOF chips are becoming cheaply available.
I did some research into lasers, but I could not find one that meets my specs (range + visibility… I don't want a red dot in my footage). Also, you only get one point of depth. The Kinect is an old piece of hardware that I used because I already had it, but TOF (time-of-flight) cameras are becoming commonly available. Having a complete depth map of the scene has numerous advantages.
I had a similar idea a while back, but then I came to the conclusion that it doesn't make too much sense to reinvent the autofocus…
Also, this construction wouldn’t even work in normal daylight conditions.
Cars need to have a complete and accurate depth map of their environment in all circumstances from broad daylight to dark night to drive autonomously. Depth sensing technology is evolving very rapidly today.
For me it makes sense to rethink any feature if better technology comes along.