Automatic Projector Calibration

[Johnny Lee] sent in his awesome projector calibration project (PDF). By embedding optical fibers that feed into a set of USB-connected light sensors, his group's software can determine the exact pixel position of each sensor. Once the positions are determined, the projected image can be dynamically adjusted to fit the screen. The technique can be used to stitch together multiple projectors, and even to calibrate an image for projection onto a three-dimensional model. I know some home theater nuts who would love to have this system for calibrating their CRT projectors.

This is such an excellent project that I want to give credit where it’s due – it was developed by [Johnny C. Lee], [Paul H. Dietz], [Dan Maynes-Aminzade], [Ramesh Raskar] and [Scott E. Hudson].

Be sure to check out the video demo after the break!

40 thoughts on “Automatic Projector Calibration”

  1. I wonder why nobody thought of this before… but the problem with the car demo is that it still requires the geometry to be known beforehand in order to warp the image correctly.
    And how do they want to achieve “interactive” framerates? They have to project log2(width) + log2(height) stripes, which would require about 500 projections/sec for VGA if you refresh the geometry at 25 Hz. One could use a special projector that projects the stripes in infrared and, at the same time, the image that’s for the user to see…
    But wouldn’t it be easier and more flexible, at the expense of 1:1 pixel precision, to point a camera at the projection surface (from the user’s point of view)? You could then project images that look flat from a certain viewpoint onto complex unknown surfaces… but I think that would solve a different problem…
    btw, the setup they use looks quite cheap, just hook up some sensors to the I/O pins of your favorite programmable USB chip…
    If I just had a projector :D
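The stripe arithmetic in the comment above works out as follows. This is just a back-of-the-envelope check of the commenter's numbers, not code from the project:

```python
from math import ceil, log2

def stripe_frames(width, height):
    """Number of binary stripe patterns needed to give every projector
    pixel a unique code: one pattern per bit of the x coordinate plus
    one per bit of the y coordinate."""
    return ceil(log2(width)) + ceil(log2(height))

def projections_per_sec(width, height, geometry_hz):
    """Pattern rate required to refresh the full calibration at the
    given geometry update rate."""
    return stripe_frames(width, height) * geometry_hz

# VGA (640x480) refreshed at 25 Hz needs roughly 500 patterns/sec:
print(stripe_frames(640, 480))            # 19 stripe patterns
print(projections_per_sec(640, 480, 25))  # 475 patterns/sec
```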

  2. The only problem I see with this approach is that you lose a huge amount of the projector’s available pixels. He is using a screen that is about 6″ on a side in a field of view that is about 30″ on a side, which is ‘wasting’ about 96% of the available pixels. When he turns the screen on its side it further complicates the problem, because the screen’s footprint becomes more along the lines of 2″ by 6″, reducing the pixel utilisation to a meager 1%.

    Also, you still have to focus the projector onto the target, and your depth of field is still limited by the lens on your projector…

    This could get really big when laser projectors (using moving mirror galvos to steer the beam) become affordable, as you can then adjust the field of view of the projector on the fly, and the images are always in focus.

    Aside from that, that is an awesome setup. I envision some code to allow beryl/compiz (sorry windows/mac users) to have several small (or large for that matter) panels that can be mapped to an individual window. So you can drag/rotate/etc each of your windows on say a physical desk.
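The pixel-utilisation figures in the comment above come from simple area ratios. A quick sketch (the screen and field-of-view dimensions are the commenter's estimates):

```python
def utilization(screen_w, screen_h, fov_w, fov_h):
    """Fraction of the projector's field of view covered by the screen,
    assuming both are measured in the same units on the same plane."""
    return (screen_w * screen_h) / (fov_w * fov_h)

# 6" x 6" screen inside a 30" x 30" field of view:
print(utilization(6, 6, 30, 30))  # 0.04 -> about 96% of the pixels are wasted
# Turned on its side (2" x 6" footprint):
print(utilization(2, 6, 30, 30))  # ~0.013 -> barely over 1% utilisation
```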

  3. It must be possible to do the calibration in way less time.

    Consider the projector has about a million pixels.

    To uniquely identify each pixel, they must each be given a code about 20 bits (frames) long. The codes would be arranged so two adjacent pixels differ in only one frame (basically Gray code with an extra dimension, so X and Y can be calculated simultaneously). That means positions can be calculated with only 20 frames, and with a 60 Hz refresh rate it should only take a third of a second to identify a pixel (i.e. 3 Hz). That means it becomes practical to track moving objects using an IR projector (because you can do motion prediction, and you can also presume the area certain pixels are in from their previous location, reducing the number of frames required to deduce a location). For fast-moving items, accuracy is also less important, again reducing the number of frames.

    I guess it would be possible to then track an item to about 10 Hz, which could have some interesting uses.

    It would be possible to further speed up the process by using all 3 colour channels separately and having detectors for all 3 colours on the ends of the fibers.

    Another interesting offshoot is that it might be possible to do sub-pixel positions by measuring intensity.
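The Gray-code scheme the commenter describes can be sketched like this. The bit widths and the bit-interleaving are illustrative choices; the paper's actual encoding may differ:

```python
def to_gray(n):
    """Binary-reflected Gray code: adjacent integers differ in one bit,
    so neighbouring pixels differ in only one projected frame."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code to recover the original coordinate."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def pixel_code(x, y, bits=10):
    """Interleave the Gray-coded x and y so both coordinates are
    recovered from one frame sequence (2*bits frames total)."""
    gx, gy = to_gray(x), to_gray(y)
    code = 0
    for i in range(bits):
        code |= ((gx >> i) & 1) << (2 * i)
        code |= ((gy >> i) & 1) << (2 * i + 1)
    return code

# A 1024x1024 projector needs 2*10 = 20 frames per fix:
c = pixel_code(357, 812)
# decode by de-interleaving the bits, then inverting the Gray code
gx = sum(((c >> (2 * i)) & 1) << i for i in range(10))
gy = sum(((c >> (2 * i + 1)) & 1) << i for i in range(10))
assert (from_gray(gx), from_gray(gy)) == (357, 812)
```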

  4. Very impressive. Earlier today, I had an idea conveniently somewhat-relevant to the model car demo, cheap sort-of-invisibility:

    Suppose a camera was added close to the projector, and the background was a random, inconsistent image, rather than a solid-gray one.

    First, the camera takes a picture of the background and remembers it. Then the camera records the binary pattern, from which the offset relative to the projector can be determined; the background should not be a problem to calibration because a previously-recorded image may be used to filter out the graphical background noise.

    Next, the still-flat-gray model car with the optical sensors is placed on the background. The calibration does its thing like in the demo so that the colors can be projected correctly onto it; however, this should also provide enough information to project a piece of the background over the car such that it becomes sort-of-invisible from the point of view of the camera, or, if multiple projector-cameras are used, sort-of-invisible from a single arbitrary point of view.
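A minimal sketch of the camouflage step described above, assuming the camera and projector are already registered pixel-for-pixel (in practice the structured-light calibration would supply that mapping):

```python
import numpy as np

def camouflage_frame(stored_background, object_mask):
    """Build a projector frame that paints the remembered background
    over the masked object, and black (no light) everywhere else."""
    frame = np.zeros_like(stored_background)
    frame[object_mask] = stored_background[object_mask]
    return frame

# Hypothetical setup: a random background image and a mask marking
# where the flat-gray car sits in the frame.
background = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:420] = True
frame = camouflage_frame(background, mask)
```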

  5. Love this post. I can’t help but dig up an older article from Technology Review (late ’04) that focused on one group’s work with handheld projectors, aiming to shrink them enough to replace/augment cellphone/PDA displays. Some of their tricks included adaptive projection onto uneven surfaces, keeping a steady image despite moving the projector (so you could tilt and point the projector like a mouse/laser-pointer), and automatically unifying several projectors into an even larger image. So the baseline was already pretty cool. But they took it a bit further and got RFID tags involved that each had a photodiode sensor. They then ran a similar scanline projection (like this project uses) across a scene, and likewise they could locate exactly where each sensor was. Their working example was a storage shelf with different cardboard boxes and embedded tags + sensors, and they would project graphical information for each box onto the side of each box.

    Now, having recently reformatted my lappy, I’m suddenly lacking the link to the online copy of the article. I’ve found a similar copy, but it lacks the images that went with each number. *grumble* Also, I’m not sure if this is related, but they seem to be doing very similar things: Ditto:

    Although the original focused on what you could do with a palm-sized (“pencil sized” they hoped) projector, I’d always been rather intrigued by the RFID part. These days, we’ve got the multitouch demos and cameras tracking markers in realtime, but the light detector + projector combo still seems like a promising combination. Just wanted to share/connect this to some older research. (…and if they’re the same group, I’m going to go find a wall for my forehead.)

  6. Neat stuff.

    It gets better: a DMD can slam out binary frames at 14,300 Hz, and the sequence is short (on the order of 20 frames), so you can get a fix in about 1.4 milliseconds. After you have an absolute fix you can use coding tricks to drop the number of frames for relative tracking; I’d put it at about 8 frames, with a periodic 20-frame burst to cancel out cumulative errors.

    The actual reprojection is achieved by mapping the camera space to the projector space and fitting the point positions to identical virtual models. But it can get tricky without overdetermined sampling.

    This also burns resolution like nobody’s business. The actual resolution in the close-up shots is pretty miserable. Lots of anti-aliasing going on, too.

    Also check out this honest to goodness projector hack by the same guys:
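The timing claims in the comment above follow directly from the frame rate. A quick check of the numbers (the 14,300 Hz figure and frame counts are the commenter's, not from the paper):

```python
def fix_latency_ms(frames, dmd_hz=14300):
    """Time for one position fix at the DMD's binary frame rate."""
    return frames / dmd_hz * 1000.0

# 20-frame absolute fix:
print(round(fix_latency_ms(20), 1))  # 1.4 ms
# 8-frame relative updates between periodic 20-frame resyncs:
print(round(fix_latency_ms(8), 2))   # 0.56 ms
```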

  7. Some more comments to add to the discussion:

    1). For moving targets you don’t have to use the same search scheme, as you already know roughly where the corners of the targets are (or were). You just need to search those specific areas within a radius.

    2). There is no reason why multiple targets could not be placed in the projection view; once calibrated, each could have a different image projected onto it.

    3). The target screens are currently tethered to a computer; again, not needed. The electronics could be fully self-contained (not that difficult to code) and the results sent to the projector system wirelessly (Bluetooth, for example).

    4). In the same manner that they use two projectors on a wide target with 6 sensors, you could make a modular screen built from square panels, where you just assemble whatever size screen you need and point a barrage of projectors at it to get some really (!!!) high resolutions.

    A fantastically simple idea. Good hack!
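Point 1 above can be quantified: if you know the target is within a small window around its last position, you only need enough stripe bits to distinguish positions inside that window. A sketch of the frame savings (window sizes are illustrative):

```python
from math import ceil, log2

def frames_for_window(window_px):
    """Stripe frames needed to localize a sensor within a known
    square search window: one bit per axis per doubling of size."""
    return 2 * ceil(log2(window_px))

# Full 1024x1024 search vs. a 32-pixel window around the last fix:
print(frames_for_window(1024))  # 20 frames
print(frames_for_window(32))    # 10 frames
```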

  8. Many CRT projectors have had this feature for many, many years and those of us who have set up CRT projectors more than once know that they are not as good as doing the setup manually.

    The CRT projector system consists of about 5 different ways to focus the electron beam and 4 different ways to control optical focus, multiply that by 3 tubes and you have the number of settings that auto-convergence cannot fix.

    Convergence (getting the 3 tubes to overlap) is about 10 settings per tube.

    The tricky part is that several of these settings interact, so several iterations are needed to get the perfect setup.

  9. Embedded optical sensors seem somewhat complex. I wonder why they don’t use reflective targets attached to the screen? Put a 2-D barcode on the target and pick up the reflected light with a single sensor at the projector to identify which target it is, as well as its precise position. (Cube-corner retroreflecting targets might help.)

  10. This type of technology, using optical sensors to warp and blend projectors, is already commercially available from at least two companies, Iridas and 3D Perceptions. I’ve been using them to project onto cylinders and spheres. However, they take about 30 seconds to auto-align, and they are too expensive for everyday use. The speed of this is very impressive. If it’s being done with inexpensive hardware, that’s equally impressive.

    You can’t use reflecting targets because they would be visible to the viewer.

  11. When I worked at IBM we had a system that would project onto multiple interactive surfaces using a projector and mirrors. The problem was always calibration. You’d mount the projector, motorized mirror, and position the surfaces to project onto and then calibrate. If anything got touched then you had to do the calibration all over again. This tech would make it so that all you had to do at the start was a very basic alignment and then the system could handle the calibration as needed.

    The demo itself was very cool. We had a clothing store of the future set up for a major retailer and the projector could put a screen on the wall, on the floor, and on the jeans display depending on where the customer was. The jeans display was interactive so that you could point with your finger and it would show different options.

  12. I think this is amazing, but I don’t understand how it works. I have never seen a projector that has this sort of calibration via user-accessible menus. I mean, unless the team also re-wrote the projector code, I am stumped. I understand *what* the system does, I just don’t understand how it’s able to tell the projector to do anything other than zoom in and out, squeeze the sides, or focus.

  13. I really like the Hack-A-Day community because there is a high density of very smart people with an imagination that rivals or surpasses that of the recognized research community.

    The comments here about using high-speed DLP, infrared patterns, reflective markers, multiple surfaces, lost pixels are all completely spot on. Since this is my thesis work, some of these ideas have been addressed in my followup work. You can find that here (with more video!):

  14. In the case of this demonstration, the calibration is not being done in the projector. The video is passing through a computer first. The computer is processing the video and warping it to fit the projection surface. The projector is just being a dumb projector and not doing anything different.

  15. impressive indeed – it is worth pointing out that this isn’t new research (UIST 2004) – MERL and CMU have done some pretty neat stuff in the past. as a followup, they published a paper at UIST 2005 where they built on this, adding the capability to move the display in real time and increased touch capabilities…

  16. Currently, as far as I know, you can’t buy this software. If you were to check out you can request a copy of their calibration software, and from what I can tell, it is pretty similar to what Johnny was using for his calibration technique (which he worked on with a few of the Shader Lamps people).

    I have tried to contact Johnny directly through the e-mail on his website, but no luck so far. I want to try to use this calibration technique in a real-world entertainment setting.

    Being a VJ, calibration is not really that important, but to be able to transform a stage into something that is immersive for both the audience and the performers, this would be absolutely vital.

    I certainly hope that research on this continues, and that it someday, hopefully soon, becomes available to users.

  17. I want to stack 2 projectors (3200 lumens). What is the cheapest software and hardware solution? Johnny Chung Lee has good methods. This is very important for me; can anyone help? I have read about three options: Vioso Wings, Scalable Displays, and Chung Lee’s sensor method. Does anyone have the software? It is not for commercial use. Please help, it’s urgent!
