Orientation Aware Camera


[Andrew Magill] just added his Orientation Aware Camera to the Hack a Day Flickr Pool. It uses a 3-axis magnetometer and a 3-axis accelerometer. He didn’t want to spend too much effort on the USB side, so he picked up USBMicro’s U421, a fairly well-documented preprogrammed microcontroller for USB. He later regretted this; his final sample rate was only 5Hz because of all the overhead. Using the positional data, the webcam image can be corrected for any sort of shaking. [Andrew] took this one step further by using OpenGL to stitch all of the video frames together live into a full panorama. Be sure to watch his excellent video demo embedded below.

[flickr video=2610193676]

27 thoughts on “Orientation Aware Camera”

  1. If the accuracy of the orientation detection were better than that of an individual pixel in the image, you could make a higher-quality image than the camera sensor could produce on its own. I wonder how well that could be implemented.

  2. must. build. immediately.
    He seems to use just the angular information from his box. If that were used only as a hint for a real stitcher, the output would be even better. Or use an optical flow algorithm in combination with it (this can be done easily on the GPU in real time).
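    As a concrete version of the refinement idea above: even a simple phase-correlation step (a cheap cousin of full optical flow) can estimate the residual translation between consecutive frames and correct the sensor-derived alignment. A minimal NumPy sketch; the function name and frame sizes are illustrative, not from [Andrew]’s code:

    ```python
    import numpy as np

    def translation_offset(a, b):
        """Estimate the integer (dy, dx) shift of frame b relative to frame a
        by phase correlation. Hypothetical helper, not part of the project."""
        # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
        F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
        corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrap-around peaks to signed shifts.
        h, w = a.shape
        if dy > h // 2:
            dy -= h
        if dx > w // 2:
            dx -= w
        return int(dy), int(dx)

    # Synthetic test: shift a random frame by (3, -5) pixels.
    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    shifted = np.roll(frame, (3, -5), axis=(0, 1))
    print(translation_offset(frame, shifted))  # -> (3, -5)
    ```

    In practice this would only nudge the orientation estimate between frames; a real stitcher would still handle rotation and lens distortion.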

  3. The magnetic sensor is used to detect where the North Pole is; it acts as a compass. It makes the orientation data more accurate than with the accelerometers alone.

  4. Seriously though, this is the first project I’ve seen that uses a magnetic sensor to establish north and uses that to determine orientation. I mean, that’s basically how dolphins and whales orient themselves.
    That’s also why they beach themselves occasionally, as the poles shift slightly every so often.

  5. Hey cool, I’m on hackaday! :D
    Thanks everyone. I’m probably not going to take this iteration of the project any further. The next logical step is autocorrelating the images to refine the alignment, and that’s a wee bit beyond what I want to do. Of course, everyone is more than welcome to make their own and hack up my source code.
    I do notice that my brand-new G1 phone has all the requisite hardware (camera, accelerometer, magnetometer) built in, though. I’ll definitely be investigating that.

  6. Andrew,
    Let me be the first to congratulate and thank you for letting us see this fantastic project. It’s not often you see a video online that gives you a wide smile and makes you keep saying ‘that is fantastic’ every so often!
    Everything about what you’ve done seems so obvious, but of course, it really isn’t obvious at all. Apart from general panoramic photography, I haven’t seen anything like this before; it’s so ‘immediate’ and rewarding! Especially where it can auto-correct camera orientation AND generate full 360-degree views!
    It’s just inspired!
    I for one, though, would love it if you would make it much more accurate, less jittery, and maybe faster, as I’m kind of thinking it’s got a sort of ‘unfinished’ look to it.
    But I do know where you’re coming from with the preference for that abstract visual effect.
    Thanks again! You’ve given me the same big smile I had when I first saw Jeff Han’s multi-touch reel and Johnny Chung Lee’s Wiimote PC-based projects.
    Thanks.

  7. Questioning the need for the magnetometer and assuming this could be done with only an accelerometer comes up a lot, so I’d like to correct that. Linear algebra says that you need two non-parallel reference vectors to fully determine an orientation in 3D space. If I only had the accelerometers, I would only be able to determine pitch and roll, but never yaw. (Without looking, you can tell which position your body is in, but not which direction you’re facing, right?) Similarly, with the magnetometers I could only determine yaw and pitch, but never roll. (Sort of... magnetic north actually points about 45 degrees into the ground around here.) So this wouldn’t really work with a Wiimote, as it only has accelerometers.
    I’m honored to be compared with Johnny Chung Lee- I remember being blown away by the simplicity and brilliance in what he did. In fact, his video inspired me to make this one.
    I think if I had a *MUCH* faster sample rate from the sensors, I could develop an algorithm to smooth out the jitters and improve accuracy. With such a slow sample rate, though, it doesn’t seem like there’s a lot I could do to improve the data from the current hardware without getting into some hardcore image analysis.
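    The two-vector argument above can be made concrete: given the gravity vector from the accelerometer and the field vector from the magnetometer, a full rotation matrix falls out of two cross products. A minimal NumPy sketch; the axis conventions (x north, y east, z down) and the sample sensor values are assumptions for illustration, not from [Andrew]’s code:

    ```python
    import numpy as np

    def orientation_from_sensors(accel, mag):
        """Build a rotation matrix from two non-parallel reference vectors.

        accel: gravity vector in the sensor frame (assumed to point down
               when the sensor is static)
        mag:   magnetic field vector in the sensor frame (points north and,
               at mid latitudes, steeply into the ground)

        Returns a 3x3 matrix whose rows are the sensor's north, east, and
        down axes -- enough to recover pitch, roll, AND yaw.
        """
        down = accel / np.linalg.norm(accel)
        # The cross product discards the field's vertical (dip) component,
        # leaving a horizontal east direction.
        east = np.cross(down, mag)
        east /= np.linalg.norm(east)
        north = np.cross(east, down)
        return np.vstack([north, east, down])

    # A sensor lying flat and pointing magnetic north should yield identity:
    R = orientation_from_sensors(np.array([0.0, 0.0, 9.8]),
                                 np.array([0.2, 0.0, 0.4]))
    ```

    With accelerometers alone, `mag` is missing and yaw is unrecoverable, which is exactly the point made above.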

  8. This is pretty cool, Andrew. I especially like the real-time display with orientation. If you’re prepared to do some off-line number crunching, you can get good image registration using SIFT-based keypoints. An easy way to do this is with Autostitch, which can be automated, although it’s not really designed for this.

    Check out the video example at the link below. I made this by “painting” a background panorama using a video camera, then stitching a live feed into the panorama. Autostitch works using only the images; your camera measures orientation, so if you waded through enough maths you could probably get a better result.

    http://stewartgreenhill.com/blog/2008/07/22/a-batch-controller-for-autostitch/

  9. This is extremely useful for collecting HDRI maps. Also, remember those VR rides at theme parks? Well, this just added a whole new level of reality. Bartsch, I was thinking the same thing, but with a laser rangefinder like a Leica Disto.

  10. Amazing. I had just googled for this kind of thing… and lo, it brings me here before my bi-weekly check of Hackaday.

    Perhaps this could be hacked/used to make a non-iPhone acquire some tilt features?

  11. What if this technology became a household trend? You would have a bunch of monitors surrounding you instead of one or two. I’m still waiting for a concave or convex monitor; imagine playing new video games where you didn’t have to look with the mouse. You’d actually look with your head.

    this is pretty cool!

  12. I’m curious if the inaccuracy is really due to the sensors, or simply the fact that he is holding the camera. While it can sense up and down and its angle, it can’t determine its coordinate location, so movement left or right would change the alignment of the pixels. Perhaps a tripod mount is all it would take to fix this problem.

    Definitely an interesting project that I may be interested in building one day, when I have money.

  13. You don’t need to go through the effort if you just want to make a quick panorama like that. Just shoot in video mode from one pivot point, scanning the area, load the video frames as layers in Photoshop, and have it auto-align them as a panorama. I’ve done this before; it was hi-res and the accuracy was near perfect.
