No More Blurry Pictures

Say goodbye to ruined images thanks to this add-on hardware. It measures the camera's movement while a picture is being taken, then removes the resulting motion blur from the image. The high-speed camera you see above is only there for testing and fine-tuning the algorithm that fixes the photos. Once that was dialed in, the rig attached to the camera includes just an Arduino board, a Bluetooth modem, a 3-axis accelerometer, a gyroscope, and a trigger for the camera. You snap each image through the new hardware, which fires the SLR's shutter itself to ensure the inertial data and the image are synchronized correctly.
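The write-up doesn't include code, but the host side of that synchronization could look roughly like the Python sketch below. Everything here — the serial protocol, port name, and sample format — is an assumption for illustration, not the project's actual interface; it just shows the idea of firing the shutter and logging the inertial samples streamed back over the Bluetooth link:

```python
import json
import serial  # pyserial

# Hypothetical setup: the Arduino appears as a Bluetooth serial port and
# speaks a simple line-based protocol (invented here for illustration).
port = serial.Serial('/dev/rfcomm0', 115200, timeout=2)

port.write(b'TRIGGER\n')            # ask the Arduino to fire the SLR shutter

samples = []
while True:
    line = port.readline().decode('ascii').strip()
    if line == 'DONE':              # assumed end-of-exposure marker
        break
    # assumed sample format: "t_us,ax,ay,az,gx,gy,gz"
    t_us, ax, ay, az, gx, gy, gz = map(float, line.split(','))
    samples.append({'t_us': t_us,
                    'accel': [ax, ay, az],
                    'gyro': [gx, gy, gz]})

# keep the inertial log next to the photo for the deblurring step
with open('shot_0001_imu.json', 'w') as f:
    json.dump(samples, f)
```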

[Thanks Rob]

27 thoughts on “No More Blurry Pictures”

  1. That’s pretty neat. It’s basically homebrew image stabilization. It would be a lot easier and less cumbersome to just buy a lens with IS, though. Still… very impressive.

  2. This was presented back at SIGGRAPH. It’s not IS, but it achieves similar results. The way it works is it tracks the motion, and then, as a post-processing step, you remove the blur via software algorithms on your PC. There are advantages and disadvantages to this system. Some of the advantages are that you aren’t reliant on mechanical systems to compensate for the shake, and a significant cost reduction. A disadvantage is that correcting in post-processing means you lose some temporal pixel information, so you can’t do a perfect correction. It won’t work alongside standard IS systems, unless the IS system reports back the corrections it made (highly unlikely) or these sensors are placed on the stabilized surface (even more unlikely).
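    To make that concrete, here is a rough Python sketch of the post-processing step: a hand-rolled Richardson-Lucy deconvolution driven by a blur kernel integrated from the gyro samples. The focal length in pixels, the sample format, and the small-rotation approximation are all assumptions for illustration; the paper's actual model is more sophisticated.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def kernel_from_gyro(gyro_x, gyro_y, dt, focal_px, size=31):
        """Integrate angular velocity into an image-plane blur path
        (small-angle approximation: rotation of theta radians shifts the
        image by roughly focal_px * theta pixels)."""
        xs = np.cumsum(focal_px * np.asarray(gyro_y) * dt)   # pan  -> horizontal
        ys = np.cumsum(focal_px * np.asarray(gyro_x) * dt)   # tilt -> vertical
        k = np.zeros((size, size))
        c = size // 2
        for x, y in zip(xs - xs.mean(), ys - ys.mean()):
            ix, iy = int(round(c + x)), int(round(c + y))
            if 0 <= ix < size and 0 <= iy < size:
                k[iy, ix] += 1.0
        return k / k.sum()

    def richardson_lucy(blurred, psf, iters=30):
        """Classic Richardson-Lucy deconvolution with a known point-spread
        function; `blurred` is assumed to be a float image in [0, 1]."""
        est = np.full_like(blurred, 0.5, dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iters):
            denom = fftconvolve(est, psf, mode='same')
            np.maximum(denom, 1e-12, out=denom)              # avoid divide-by-zero
            est *= fftconvolve(blurred / denom, psf_mirror, mode='same')
        return est
    ```

    The “temporal pixel information” point above is why this can’t be perfect: once two scene points have been averaged into one pixel during the exposure, deconvolution can only redistribute that energy, which is where the ringing artifacts mentioned further down come from.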

  3. The tiny pictures look great, but the high-res ones look like crap. Ghosts all over the place, duplicate rocks, even worse blurring in some of them.

    The small low-res images are impressive. And maybe with more accurate sensors the images can be corrected more accurately. 200 Hz sensor polling sounds fast, but camera-shake blur really sucks in two cases: long exposures and zoom lenses. I get worse blur at 1/100 s and 300mm zoom than I do at 1/50 and 50mm. Are the three data points gathered at 1/100 s going to make a huge difference?

  4. Weird. I was just thinking about this concept the other day when I was trying to take some pics without a tripod, remembering how someone had previously done the same thing: deblurring images by measuring the camera’s movement with accelerometer/gyroscope sensors and ‘undoing’ the movement in post-processing.

    I find this kind of image processing very impressive; does it exist in any consumer digicams yet? I know image stabilisation is done on some cameras by using a slightly larger sensor than the resulting image: the camera tries to keep track of detail and moves the smaller output frame around within the captured frame. That works quite well, but not as well as this technique.
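    For reference, that crop-window trick boils down to something like this toy Python sketch (it assumes the per-frame shake offset has already been estimated by the camera's detail tracking; names and the margin value are made up for illustration):

    ```python
    import numpy as np

    def stabilized_crop(frame, dx, dy, margin=32):
        """Electronic stabilization: read out a slightly larger frame, then
        slide a smaller output window against the estimated shake (dx, dy)."""
        h, w = frame.shape[:2]
        x0 = int(np.clip(margin + dx, 0, 2 * margin))   # clamp so the window
        y0 = int(np.clip(margin + dy, 0, 2 * margin))   # stays on the sensor
        return frame[y0:y0 + h - 2 * margin, x0:x0 + w - 2 * margin]
    ```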

  5. While this is an interesting tech demo (which will probably find its way into gyroscope-equipped smartphones), practically speaking you’re better off building a device to help stabilize the camera to prevent the blurriness. A lightweight monopod or a homebrew steadycam will get you much nicer looking pictures than this software fix.

    Treat the cause, not the symptoms.

  6. “yo dawg, I herd you like cameras, so I put a camera on your camera so you can photograph while you photograph..” Totally what I thought. xP

    @Anonymous: What this means is that you can now attach a small PCB to any camera with a tripod screw and take “stable” pics on the fly, without a cumbersome tripod.

    I think this would be an awesome thing to implement via software on the 3DS (it has an accel & gyro, if I’m not mistaken). Just a thought from a programmer… :D

  7. @Anonymous I think the benefit of this is when you are panning the camera to find a shot. IS can’t help with that blur, and steadycams are for video.

    I’m sure this is being done decently with software alone in post-processing, but adding the gyro data makes it more accurate.

    You could always turn this feature on/off like IS but you should also be able to adjust the degree to which it corrects your photo.

  8. The images that look like crap are using standard deblurring algorithms WITHOUT inertial data. Adding inertial data recorded during image exposure makes a HUGE improvement to the deblurring process.

    The whole point is to show that image post-processing can be improved with inertial data.

  9. Dear Microsoft,

    Impressive project. But get your dirty closed-source hands off my open source hardware. No Arduino for you! Why don’t you use some dumb controller board with WinCE on it or something? Oh wait, because that would FAIL.

  10. Pentax incorporates this feature in most of its high-end cameras now (they compensate for vibration and motion by moving the image sensor). Far too late for a patent.

    The nice thing is that it works for all the old lenses that still fit on the new Pentax DSLR bodies.

  11. I recall reading about this before I bought my first DSLR. Then, when I was looking at how IS worked, I was sort of surprised that this idea had been marketed for such a long time. I guess I hadn’t read the paper closely enough to see that they defend their work as being better than IS in the sense that IS can only predict future motion and can only dampen 2D motion.

    I would like to see a device that contained some sensors and could snap into my camera’s hotshoe. I have no idea if mine has a digital interface like the Samsung “smart shoe”, but that would be a cool way to record data and sync it up with the exposure. I know little about cameras.

    Oh, I was hoping someone might be able to clarify their calibration process. From what I can understand, the high-speed camera is used only for the calibration step. What I don’t understand is whether the exposures from the Canon camera are used in the reconstruction, or if it’s the high-speed camera images. There seem to be two things being calibrated here: they need the intrinsics for the Canon camera, but you don’t need the high-speed camera for that part, and they also need the 3D reconstruction to optimize the sensor parameters. It’s just unclear to me which technique they used to optimize the sensor parameters, or if that’s what they are doing at all. They use the 3D reconstruction to evaluate their results, so maybe that’s all the high-speed camera is for: validation.

    After reading the cited paper, I think their intent is to first perform the 3D reconstruction. Then, during their evaluation, they take the high-speed photos along with the regular exposure and find the state of the high-speed camera for each exposure. I was confused at first about how they were going to perform bundle adjustment with the narrow-baseline exposures taken for each regular exposure, but the paper clearly states that wide-baseline exposures are used. At least this is what I can get from it.
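    For what it’s worth, the intrinsics part is standard: something like the OpenCV chessboard calibration below recovers the Canon’s camera matrix without any help from the high-speed camera. (This is a generic sketch under assumed file paths and target size, not the paper’s actual calibration code.)

    ```python
    import glob
    import cv2
    import numpy as np

    CHESSBOARD = (9, 6)              # inner corners of an assumed chessboard target
    objp = np.zeros((CHESSBOARD[0] * CHESSBOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:CHESSBOARD[0], 0:CHESSBOARD[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob('calib/*.jpg'):    # hypothetical calibration shots
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, CHESSBOARD)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # K holds the intrinsics (focal lengths, principal point); dist the lens
    # distortion. Assumes at least one chessboard was detected above.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print('reprojection error:', rms)
    print('intrinsics:\n', K)
    ```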

  12. @Mike

    I interned in a product group one summer and they were really touchy about anything open source for legal purposes. They are in sort of a position where everyone is just waiting to jump at the chance to catch them accidentally shipping some GPL’d code. That’s just the very narrow perspective I got on the issue in a few months. Actually, I think they open sourced one of my projects on CodePlex.

    My other narrow impression is that MSR is drastically different in culture. This might not be true, but for me, it almost seems like an entirely different company. I think it seems pretty cool based on the publications I see coming out, but then again, I’ve never interned for them, so my opinion is superficial.

  13. This is kind of neat, but it’s really impossible to get perfect results from that sort of postprocessing.

    If you take a look at the high-resolution pictures you will see that there are still very noticeable ringing artifacts. Regardless, I’m impressed at how well this worked.

  14. @Dan L

    I have the Pentax K-7 and love it. Like you said, their IS works for vibration and small motion.

    The way I understand it, this would work for panning motions where the shutter speed isn’t quick enough to stop the action.

    I would guess that this uses the same logic as the motion-deblurring effects in a software package like Photoshop, except here they don’t have to determine the motion from the picture they’re fixing; they can use their gyro data.
