Here are two photographic takes on the same subject, each with a different depth of focus. [Chaos Collective] came up with a way to make interactive still images that let a user adjust the depth of focus by clicking on different objects in the image.
This was inspired by the Lytro camera, which uses an array of lenses to take multiple pictures at once. Each of those images has a slightly different depth of focus. The technique used here doesn’t require you to buy one of those $400+ cameras, though it’s only a cheap hack if you already own a camera that can shoot video and has manual focus.
The technique used by the [Chaos Collective] is to sweep the camera’s manual focus setting from the nearest to the furthest target while capturing a video. That file can then be processed using their browser-based tool, which turns it into an embedded HTML5 image.
Nice. I can see video quality being the one and only problem, as a single still from a video at photo-like resolution will have a lot of noise, and won’t have the same light and color performance.
Otherwise it’s a nice trick.
I’ve kind of wondered for a while now if Lytro was taking a picture where the entire image is in full focus, then applying a blur filter with a non-blurred spot; that would do a really good job of simulating a change in focus. Basically, blur everything past a certain radius from the selected spot.
Er… no. The aperture size required to do that would demand far too much exposure time to be useful. The Lytro uses a technique much like the one outlined here, except I think instead of taking video, there are several imaging sensors which record the picture at exactly the same time, at different focal lengths.
The technique was first proposed several years ago, but the technology has only become commercially available at a consumer price point within the past two years.
Also, if you look at Lytro images on their site, focusing one part of the image causes everything at that distance to become in-focus. You can’t do that by applying vignette blur.
What do you mean “would require far too much exposure time to be useful”? The video sample on his site is seven seconds long.
Photographic theory says that depth of field is related to the aperture of the lens. Aperture is how big the hole at the end of the lens is. In theory a pinhole camera has infinite depth of field: everything is in focus. But under most circumstances a pinhole would require a very long exposure to let in enough light for a correctly exposed picture.
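To put a rough number on that relationship, a textbook approximation (valid when the subject distance is large compared to the focal length) is:

```latex
\mathrm{DoF} \approx \frac{2\,N\,c\,u^{2}}{f^{2}}
```

where N is the f-number, c the acceptable circle of confusion, u the subject distance, and f the focal length. Light gathered scales as 1/N², so required exposure time grows roughly as N²; a pinhole’s enormous effective f-number is exactly why its near-infinite depth of field comes with very long exposures.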
The video length is not related to exposure, as a video is made up of many, many still images, each with its own finite exposure.
No, this is not what Lytro does. The problem is that you cannot take an image with everything in focus (http://en.wikipedia.org/wiki/Depth_of_field). Lytro is a light-field camera – think of every pixel of a traditional camera being a directional 2D light sensor (http://en.wikipedia.org/wiki/Light-field_camera)
I have done this for a while using a process called “focal bracketing”. It also helps to give more realistic bookah, and to give bookah to really close backgrounds while still having the subject all the way in focus.
I believe you mean “bokeh”.
Their website does not show any pictures on my system (Mint 13), neither in FF nor in Chromium… (no AdBlock or Ghostery or the like enabled)
I’m guessing that they overloaded their host, and/or are having server issues.
HaDOS I think
HaDOS indeed. Back up for the time being.
Same…
Apparently the site only works in Google Chrome.
It’s called focus stacking, and it’s nothing new to be honest, except for the low-res video part (let’s be honest, 1080p is worthless for taking photos). This technique is also used to generate 3D depth by analyzing each frame to find sharp areas and matching them with the focus distance from the EXIF data of each image. It doesn’t work at all when there is no texture on the surfaces, i.e. when it all looks blurred even when it’s in focus.
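For the curious, here’s a minimal sketch of that sharpness-mapping step in Python with OpenCV; the frame ordering and smoothing radius are my own assumptions, not details from any particular implementation:

```python
import cv2
import numpy as np

def depth_from_focus(frames):
    """Estimate a per-pixel depth index from a focal sweep.

    frames: grayscale images ordered from near focus to far focus.
    Returns, for every pixel, the index of the frame in which that
    pixel was sharpest; combined with the focus distance per frame
    (e.g. from EXIF), that gives a depth map.
    """
    sharpness = []
    for frame in frames:
        # Absolute Laplacian response as a local sharpness measure,
        # smoothed so single noisy pixels don't win the vote.
        lap = np.abs(cv2.Laplacian(frame.astype(np.float32), cv2.CV_32F))
        sharpness.append(cv2.GaussianBlur(lap, (0, 0), 5))
    # Pick, per pixel, the frame where local sharpness peaked.
    return np.argmax(np.stack(sharpness), axis=0)
```

And as noted above, on textureless surfaces the argmax is basically noise, since no frame ever looks sharp there.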
Lytro doesn’t have multiple sensors or lenses, or any of that junk. It doesn’t capture light direction or anything like that. All Lytro has is a regular 11 MP chip (probably a rolling-shutter CMOS to keep the cost down) and a regular lens. What makes it special is the microlens array right over the image sensor. It can be a lens per pixel, or it can be 11 lenses for 11 focal lengths, hence the 1 MP image resolution. The lens array projects and distributes 11 different focal-length photos to different areas of the image sensor, and the so-called image engine puts those pixels or areas of pixels back together.
Look up “light field”.
Frankly I am quite pissed at them for advertising it as this magical device that can bend light, space, and time… I was expecting a proper time-of-flight camera like the old discontinued, cheap-as-hell ZCam that the jerkoffs at M$ bought out and destroyed to make Kinect. What a waste…
Modern usage would favor “depth of field” over “depth of focus”, the latter referring to the placement of the film or sensor plane relative to the lens.
I’ve seen many people make the same comment about how they think the Lytro camera works. This is incorrect. It does not take a bunch of images at different depths of focus. It instead captures one image that includes information about the wavefront of the light (basically the direction in which the light was traveling) that can then be processed to reconstruct the image with any given depth of focus. More information can be found here: http://electronics.howstuffworks.com/cameras-photography/digital/lytro-camera.htm.
[Mike] & [Volfram], the Lytro does not take several images simultaneously at different focal lengths. It records the direction of light, which allows the scene to be recomputed with any arbitrary focus. It works a lot like ray tracing. And as a result, it can also automatically determine the correct focus for any location in the viewable image you click on.
Whereas with this hack, you have to manually create a depth map; a 20×20 grid in this case. I figured something like that was being used even before I read down to it, because quite a few locations focused incorrectly while I was clicking on the image. While it’s still a cool effect from a normal camera, the requirement for a manually created depth map lessens its appeal.
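For anyone wondering what that depth map amounts to in code, it’s roughly a lookup table; the names here are hypothetical, not from the Chaos Collective tool:

```python
import numpy as np

# Hypothetical 20x20 depth map: each cell holds the index of the video
# frame in which objects in that region of the image are in focus.
depth_map = np.zeros((20, 20), dtype=int)  # filled in by hand in the hack

def frame_for_click(x, y, width, height):
    """Map a click at pixel (x, y) to the video frame to display."""
    col = min(int(x / width * 20), 19)
    row = min(int(y / height * 20), 19)
    return depth_map[row, col]
```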
But it could be improved. Microscopes have an incredibly short depth of focus. The ones at my workplace did exactly as this hack does, taking a series of pictures at different focal lengths under servo control. Then some algorithm automatically determines the correct focus for each location and can generate a picture that can be dynamically refocused, just as seen here. Perhaps less impressive but more useful, it can instead generate a single picture merged together from all the individual shots, where everything is in focus simultaneously. I wish I knew the details of how it worked, but it does demonstrate that full automation is possible.
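A minimal sketch of that merge step, building on the sharpness-map sketch earlier in the thread (my own guess at the general approach, not the vendor’s actual algorithm):

```python
import numpy as np

def all_in_focus(frames, depth_index):
    """Composite a focal sweep into a single all-in-focus image by
    taking each output pixel from the frame where it was sharpest
    (depth_index as produced by the sharpness sketch above)."""
    stack = np.stack(frames)                    # (n_frames, H, W)
    rows, cols = np.indices(depth_index.shape)  # per-pixel coordinates
    return stack[depth_index, rows, cols]
```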
FYI, I should mention that we only had this capability briefly, because shortly after purchase, our vendor was forced to remove it entirely from their software due to patent encumbrance. Leica holds the patents and is actively protecting them. Keep that in mind should anyone attempt to duplicate the capability. I’m glad [Mike] & [Volfram] are wrong, and the Lytro works using an entirely different technique, otherwise Leica would probably shut them down.
Thanks for mentioning that Chris, you saved me quite a bit of typing.
Very neat indeed. If you need higher-def images, I’ve been using this software for a while with good success: http://www.heliconsoft.com/heliconfocus.html
If I remember correctly from reading the thesis on which Lytro is based, they basically have a sensor where the pixels have microlenses. There are multiple kinds of microlenses on the sensor pixels: some shift the focus in front, some shift it behind, compared to what comes from the lens. So at the same time you get multiple images at different focus distances. These are then combined in software to allow you to change the focus. You have an 18 MP sensor that takes pictures at 9 different focus distances, so the final image is just 2 MP.
Regarding this post: macro photography uses multiple shots at different focus distances to get one with a greater DOF. I’m thinking that such a rig (automatic focus shifting, etc.) could be used here to take multiple pictures, but then the post-processing is different.
You might want to check out the thesis paper again, or read Chris C.’s comment above. The microlenses they are using don’t adjust the focus, and all of the lenses are identical. The idea behind the Lytro camera is that it can determine the direction of the light rays hitting each “pixel”, whereas conventional cameras just sum the light rays that hit each pixel together.

In the Lytro camera, each microlens represents one “pixel” in the final image. The difference is that this “pixel” is directionally aware. Each microlens covers a certain number of sensor pixels, so light hitting the microlens spreads out behind it and then hits one of these pixels, which gives information as to which direction the light was coming from. The paper has some really nice graphics that explain this. There are a couple of mathematically identical ways of explaining it, but this is the one I found most intuitive.

It’s a really powerful approach, because when you have the actual direction of the light rays, you can simulate literally any focus distance, or even different focus distances in different parts of the image, all of the image in focus, waves of focus, tilt-shift effects… It’s pretty cool stuff. The issue, of course, is that as you gain directional resolution, you lose spatial resolution, meaning the output files are pretty small (like 2 MP or so) even if you’re using a 6 MP sensor (with reasonable directional resolution). Granted, the Lytro camera they’re selling now is pretty much a proof of concept, with the idea being that eventually 60 MP sensors will get cheap, and then it won’t really matter that you’re dropping down to a 12 MP output file in order to get good directional resolution.
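To make the refocusing step concrete, here’s a rough shift-and-add sketch in Python, assuming the sub-aperture views have already been extracted from the raw sensor data (the subviews structure and the alpha parameter are my own illustration):

```python
import numpy as np
from scipy.ndimage import shift

def refocus(subviews, alpha):
    """Shift-and-add refocusing of a light field.

    subviews maps (u, v) aperture coordinates (centered on zero) to
    sub-aperture images. alpha sets the synthetic focal plane: each
    view is shifted in proportion to its (u, v) offset, so objects at
    the chosen depth line up across views while everything else blurs.
    """
    acc = None
    for (u, v), img in subviews.items():
        # ndimage.shift takes per-axis offsets in (row, col) order.
        shifted = shift(img.astype(np.float64), (alpha * v, alpha * u))
        acc = shifted if acc is None else acc + shifted
    return acc / len(subviews)
```

Sweeping alpha is exactly the “simulate literally any focus distance” part: each value of alpha picks out a different depth at which the shifted views reinforce each other.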
The trouble (for me as a photographer) with the Lytro is that the camera’s resolution is seriously limited: just over a megapixel.
Welp, Instagram resizes images for display on the web, so this really isn’t a problem for anyone, ever.
(Before you shoot me, I was joking. As a fellow photographer, I know real photographers use the posterize filter in PicMonkey!)
I had this idea, but for video instead: basically capture “60fps” video with many focus points by moving the lens with a coil or piezo element, taking 10 or so “frames” per video frame, and then using software to piece them together into a completely in-focus video.
I think someone might be able to make one themselves… get a moderately priced high-fps camera, and modify the focus to work at very high speeds…
The only problem I can think of is vibration from the high-speed moving parts, but I don’t think it will be much of a problem.
I’m about to submit a hack for turning the F-stop knob up a bit!!
Most modern cameras should be able to do this, but at much higher resolution. Some can take as many as 60 frames at full resolution in one second, and they can also go through the entire focal range in less. They should therefore be able, with the right firmware, to take 60 photos at 60 different focal settings, or 20+20+20 at 20 focal settings and 3 different apertures.
Isn’t this just a rubbish version of this? http://magiclantern.wikia.com/wiki/File:DOF_and_Focus_stacking
With a bit of HTML added, granted, but hey ho.
I could be wrong and there may be more to it; it has been known :)
I certainly wouldn’t be so insulting as to call it ‘rubbish’. Perhaps they simply haven’t heard about ML or CHDK or their scripting abilities. What they have found is one way to accomplish what they set out to do, and bravo for them.
This hack simply emulates the basic demonstration of the Lytro technique. The point of the Lytro is that no point in particular has focus, meaning that one can not only choose the depth of field, but also its angle, like a tilt-lens effect, or select multiple DOFs in a single image.
Lytro is only interesting for the NEW applications it can bring. Emulating the early interface gimmicks is not of real interest; it looks cool, but it has no function within the realm of novelties that the Lytro tech will bring.
Only if you combine this hack with focus stacking could you perhaps come close to what Lytro promises at a touch.
Just focus/focal distances, not depth of focus, which means nothing here. The intended meaning was depth of field, which is entirely different, as it relates to aperture.
Needs firmware support then, huh, to automate the shot, or some device that does it via remote control.
Cycling a mechanical lens element at this frequency would be problematic, as you would have to change focus from ‘near’ to ‘far’ and then snap back to ‘near’ at the end of each frame group. Pinging back and forth won’t work: even if you take 60, or 600, frames per second, the images captured will still differ, and it’s essential that sub-frames of equal focal length are regularly periodic. You will see some motion error as you try to change focal length and the subframe’s location in the frame group shifts. A better way is to use optical flow to make all the subframes in a frame group effectively simultaneous; then you are free to shift focus with no temporal bumps. Trying to move a mechanical element this quickly is a messy approach, though. The light-field method used by the Lytro is much more elegant, and the technology for improvement along this vector is much more forthcoming than that needed for moving a lens as described.
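A sketch of that optical-flow alignment for one pair of neighboring subframes, using OpenCV’s Farnebäck flow; treating the motion as linear in time between subframes is my simplification:

```python
import cv2
import numpy as np

def align_subframe(prev_gray, next_gray, t):
    """Warp prev_gray to its estimated position at normalized time
    t in [0, 1], so subframes shot at slightly different moments can
    be treated as simultaneous before stacking."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample each output pixel from where the flow says it came from.
    map_x = (grid_x + t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + t * flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)
```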
I was involved in a project that captured different lighting bases at an accelerated frame rate, flowed them together, stacked them up, and used the images to texture a 3D model (generated from the same dataset) with novel lighting and camera views. You can read about it (and lots of other clever things) here: http://gl.ict.usc.edu/Research/CFPC/ While the results are pretty impressive, it’s still pretty cumbersome: just five minutes of stage time amassed a dataset of over half a million frames. The light-field approach eliminates this burden completely.
Isn’t this one of the many promises of a photon-based image sensor? The sensor captures pure photons and you can apply “photoshop” focus filters in post-production! One for the future, for sure :)