Bye Bye Green Screen, Hello Monochromatic Screen

It’s not uncommon in 2024 to have some form of green background cloth for easy background effects when on a Zoom call or similar. This is a technology TV and film studios have used for decades, and it’s responsible for many of the visual effects we see every day on our screens. But it’s not perfect — its use precludes wearing anything green, and it’s very bad at anything transparent.

The 1960s Disney filmmakers seemingly had no problem with this, as anyone who has seen Mary Poppins will tell you, so how did they manage to overlay actors with diaphanous accessories over animation? The answer lies in an innovative process which has largely faded from view, and [Corridor Crew] have rebuilt it.

Green screen, or chroma key to give the effect its real name, relies on the background using a colour not present in the main subject of the shot. This can then be detected electronically or in software, and a switch made between shot and inserted background. It’s good at picking out clean edges between green background and subject, but poor at transparency such as a veil or a bottle of water. The Disney effect instead used a background illuminated with monochromatic sodium light behind a subject illuminated with white light, allowing both a background and a foreground image to be filmed using two cameras and a dichroic beam splitter. The background image, with its black silhouette of the subject, could then be used as a photographic stencil when overlaying a background image.
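
For the curious, the software half of a chroma key can be sketched in a few lines. The following is a minimal illustration using OpenCV and NumPy, not anything from the video: the file names, hue bounds, and blur radius are all assumptions you’d tune per shot, and the hard mask it produces is exactly why veils and water bottles key so badly.

    import cv2
    import numpy as np

    # Load the foreground shot (subject on green) and a replacement background,
    # assumed to be the same size. File names are placeholders.
    fg = cv2.imread("subject_on_green.png")
    bg = cv2.imread("new_background.png")

    # Work in HSV so "green" can be selected by hue, independent of brightness.
    hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)

    # Hue ~60 is green on OpenCV's 0-179 scale; these bounds are rough guesses.
    mask = cv2.inRange(hsv, np.array([40, 60, 60]), np.array([80, 255, 255]))
    mask = cv2.GaussianBlur(mask, (5, 5), 0)  # soften the edge slightly

    # Composite: background where the mask fires, subject everywhere else.
    # Each pixel is (almost) wholly one or the other -- there is no real
    # per-pixel transparency, which is what defeats veils and bottles of water.
    alpha = mask.astype(np.float32)[..., None] / 255.0
    out = (alpha * bg + (1.0 - alpha) * fg).astype(np.uint8)
    cv2.imwrite("composite.png", out)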

Sadly, even Disney found it very difficult to make more than a few of the dichroic prisms, so the much cheaper green screen won the day. But in the video below the break they manage to replicate the process with a standard beam splitter and a pair of filters, successfully filming a colourful clown wearing a veil, and one of them waving their hair around while drinking a bottle of water. It may not find its way back into blockbuster films just yet, but it’s definitely impressive to see in action.

60 thoughts on “Bye Bye Green Screen, Hello Monochromatic Screen”

    1. Sodium vapor lamps are cheap, readily available and have high spectral purity. What benefit would using a laser bring other than extra cost, complexity and safety concerns?

        1. Those LPS lamps provide good enough light to be separated with the correct filter. The only problem is keeping enough separation between background and subject(s) so there is no light spill-over.

          Before the digital age, blue was the color for chroma key because it was the farthest from natural skin tones. With the digital age it switched to green, as camera sensors were more sensitive to that color. Technically speaking one can use any color for chroma; some work better than others. Alternatively one can use rotoscoping, which is tedious but relatively easy, especially with the current state of editing software. Back in the day many of these effects required retouching each frame by hand, using either paints or solvents, depending on the desired result.

          1. From the spectrogram, I saw a little bit of overlap with the red light source. With imperfect filters, that means you have to filter out a little of the higher reds. There was a gap with barely any light around yellow, so that’s where they could have put it and achieved better separation in filtering.

          1. I was thinking infrared LEDs could be a decently efficient source of invisible light that would still show up on digital camera sensors. Just leave the IR cut filter in place on one camera and put an IR pass filter on the other.

          2. I agree with DUDE
            Most glass is opaque to IR, and black polyethylene bin bags are transparent to it. But (there is always a but) the devil is always in the details. Polyethylene with carbon added to make it black is *probably* still opaque within the near-IR spectral sensitivity of all silicon light sensors.

            The human body does emit a lot of IR (it starts at about ~1330 nm, the middle would be ~1200 nm, and then it probably tails off like all blackbody radiation from a body at 310.15 K / 37 °C / 98.6 °F – http://hyperphysics.phy-astr.gsu.edu/hbase/bbrc.html#c4)
            (ref DOI:10.1109/BHI.2012.6211692 – an experimental study of the infrared radiation spectrum of the human body)

            So if you are using a cheap camera with no IR filter (e.g. the Raspberry Pi NoIR camera), it will probably be able to detect at most up to 1100 nm (search for “no-IR camera spectral sensitivity”). Silicon’s spectral sensitivity is restricted to wavelengths between roughly 190 and 1,100 nm.
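
            As a sanity check on those numbers, Wien’s displacement law puts the peak of that blackbody emission far beyond silicon’s reach; a quick back-of-the-envelope in Python (physical constants only, assuming the body radiates as a blackbody):

              # Wien's displacement law: lambda_peak = b / T
              b = 2.897771955e-3   # Wien's displacement constant, m*K
              T = 310.15           # human body temperature, K
              print(f"Peak emission: {b / T * 1e9:.0f} nm")  # ~9343 nm, i.e. ~9.3 um

              # Silicon cuts off near 1100 nm, so thermal emission from skin is
              # effectively invisible to these sensors; any near-IR in the shot
              # has to come from active illumination reflecting off the scene.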

  1. “Sadly even Disney found it very difficult to make more than a few of the dichroic prisms”: wrong! They were only able to build ONE working prism, which is currently inside the only camera that used it, on display at the Walt Disney Museum.

    1. Hyped-up nonsense, of course; making such filters is not that hard at all.
      And vapor deposition is common.
      But you know 10 seconds into this video that you are dealing with a guy who likes to hype things up, and hype he does. (And yes, it was annoying how he and his people pushed it to the brim of the bearable.)

      1. Don’t mistake the original prism made by Disney for the one made by this guy. The original was a dichroic prism that sent ALL the sodium vapor wavelengths to one side and ALL the other wavelengths to the other side, while what this guy did was use a beam splitter and filter the light AFTER it, which means that each side only passes half the original light intensity in the desired wavelengths. This is just speculation, but since we are talking about the ’60s and film, I presume it was paramount to ensure that as much light as possible reached each film strip, and that is why that special prism was made. Also, the history of the unique prism was on Wikipedia at least fifteen years ago, because I remember having read it.

        1. The benefit of doing it with filters is that it’s actually reasonably achievable and able to be integrated into workflows fairly easily. I wonder if we’ll see more people resurrecting this technique.

  2. I want to try to do something similar, but using infrared and a Scotchlite background. I want to use an RGB-IR camera for that, but I can’t find any cheap one. Any ideas?

        1. I wonder if you could get the same effect with one camera and a spinning filter. Every even frame the IR would be filtered and every odd frame the visible light would be filtered. You would need to split the video in post. You might also want to shoot at a higher frame rate so that the video still ends up at 30 fps or whatever you use. As long as the movement is not too quick, it might be okay. There are some cheap motors that go up to 8000 RPM with encoders, so you can make sure the image is being taken at the right time.

          Anybody know how janky it would look to combine video one frame off? Maybe as a test, you could shoot at 60 and average every two frames. If it looks too fuzzy, then you know that the effects probably won’t look great.
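
          Splitting the interleaved capture back into two streams in post would be easy enough; here is a rough sketch with OpenCV, where the file names and codec are placeholder assumptions:

            import cv2

            # Open the interleaved capture; the file name is a placeholder.
            cap = cv2.VideoCapture("interleaved.mp4")
            fps = cap.get(cv2.CAP_PROP_FPS)
            w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

            # Each output stream runs at half the capture rate.
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            vis = cv2.VideoWriter("visible.mp4", fourcc, fps / 2, (w, h))
            ir = cv2.VideoWriter("ir.mp4", fourcc, fps / 2, (w, h))

            i = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                # Even frames pass visible light, odd frames pass IR, assuming
                # the filter wheel is phase-locked to the shutter as described.
                (vis if i % 2 == 0 else ir).write(frame)
                i += 1

            cap.release(); vis.release(); ir.release()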

          1. A simpler idea is to sync the light sources with the vertical sync signal. It’s something another user suggested to me in the comments of the original video.

        2. The focus distance of IR is very much offset from visible light, so you can’t have one lens focused on the same thing in RGB and IR at the same time.
          Although, having said that, I suppose you could focus on one, say RGB, and have the other, IR in that case, just the right distance offset from it, and then have both in focus simultaneously in a way that is not possible with a regular sensor, which might open up new ways of making video. An interesting thing to explore.
          It would put practical limits on the focal lengths you can use and would require preparation, though, and you need lenses that are IR compatible; many are, but not all.

    1. You could try an RGBW camera like the IMX135. It won’t be a clean result straight from the camera, but for wavelengths near the RGB filter boundaries the W channel will have a higher response. A linear mapping should be able to get a reasonable alpha channel out of that.
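
      A minimal sketch of the sort of linear mapping meant here, assuming the RGBW raw has already been demosaicked into four planes; the coefficients are made-up starting points that would need fitting per camera and per backdrop wavelength:

        import numpy as np

        def alpha_from_rgbw(R, G, B, W, a=1.0, b=1.0, gain=4.0):
            """Estimate a matte from the excess W response.

            R, G, B, W are float arrays scaled to [0, 1]. Where the backdrop's
            narrow-band light falls between the RGB passbands, W reads high
            relative to the RGB channels, and that excess becomes the key.
            All coefficients here are illustrative, not calibrated values.
            """
            excess = a * W - b * (R + G + B) / 3.0
            return np.clip(gain * excess, 0.0, 1.0)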

    2. Note that some materials are transparent or reflective to IR in ways they aren’t to visible light. For example, if you film the front panel of your TV with a near-IR camera, you’ll find that the panel that hides the IR receiver becomes transparent (because that’s the point of it).

      There was a scandal back in the day with a Sony point-and-shoot camera that used IR for better night shots. It inadvertently made women’s bras visible under certain kinds of t-shirts.

    3. I’m pretty sure that all camera sensors are sensitive to both IR and UV light. I made an IR lamp for a friend years ago; he had a DSLR camera with the IR filter removed, so he could take some IR photographs. One could use a cheap “security” camera, as these are designed to operate with IR light. Just add a filter that is transparent to IR and blocks visible light. I’d use a light splitter so there would be no need for perspective and parallax correction in post.

      1. Well, I can imagine the proof of concept is mind-boggling, but making a dual-camera rig that allows the proper use of any lens you prefer might be a bit tricky?

    4. Just a note before you go too deep on it: one of the requirements that led to the sodium light is that the ONLY source of that wavelength in the scene is your backdrop. People glow in pretty much all wavelengths of IR, and that is going to mess up your mask (though you can probably still get a pretty decent effect with some careful editing).

    1. I do enjoy their videos about emerging technologies. The one with the drone that could do motion control (I think that’s what it’s called, where it can accurately follow the same path at the same timings each time, usually done with a robotic arm, crane, or slider) was pretty cool. It would allow for some pretty amazing composition shots in movies.

  3. Interesting. Where I grew up, when I was little, the “Blue Screen”* was a thing rather than the “Green Screen”. The “Green Screen” was something I hadn’t heard of before the 2010s or late 2000s. I mean, I was aware that green as a color was in use, because it didn’t interfere with blue pieces of clothing (such as a tie) worn by news anchors etc. But I can’t remember the term being used.

    (*I already see Windows jokes coming)

    1. IIRC, a Banacek episode (starring George Peppard) (1970s) had “the bad guys” wearing blue clothes and exposed skin “painted” blue to be invisible to security cameras. In the show it was referred to as “monochromatic blue”.

      1. I understand what they were trying to do from the writing standpoint, but logically, in that setup, wouldn’t it just make the people appear black to the camera? So instead of people, the camera only sees their silhouettes? I would think it would make them stand out more. Unless… the security lights only put out one wavelength, and the scene was normally black, and only people showed up without the blue paint.

    2. Chroma screens can be blue or green. Or even red, actually. (Or “sand” from Dune, but that’s a different thing.)

      You are supposed to pick the color that is going to work best with your foreground. Green has taken over the post-digital cinema camera world because the Bayer pattern on most sensors has more green sites and less noise than blue. Also, humans have a lot more blue in our skin tones than green, so it’s easier to remove green spill from the foreground than blue spill.

        1. Yes, exactly my point. More green sites means the green channel is better for keying, and so green is by far the most common keying background.

          Though I suppose more commonly 60% of the background is green and 40% is crew, or off the top or sides of the screen, because the DP wanted the angle and the VFX vendor was going to roto everything anyway, right?

    3. Blue screen was better at maintaining convincing skin tones in the original analogue process. Green is better for digital sensors, especially as most Bayer filters have double the number of green subpixels.

  4. When I was a kid (1970’s), the local TV news outlet was a very small operation. For one night’s broadcast, the newsreader lady had a neck scarf and eye shadow that matched the chroma key color. Her head appeared to float above her body, and when she blinked, you could “see right through” her head. Funniest thing for a ~11-12 year old kid!

    1. Except not really. Color reproduction, brightness, fast motion… most of the shots filmed this way still end up as VFX shots where they have to extract the background and replace it. For a show with a shiny reflective helmet it makes sense and actors like it, but a lot of other shows end up having to do tons of cleanup work on those screen shots.

    2. Except as Corridor Crew have said in this video and others, often that live background will actually still be painted out and replaced by VFX – it just helps get the lighting in the scene more accurate than a plain old green screen.

  5. It would seem to me that a similar effect could be achieved with only one camera using modern LED technology. If you had a highly reflective background and backlit it with bright red or green LEDs, you would get a very tight color band. You could then film as usual (though you might need to be careful that your normal flood lights and such didn’t share too many characteristics with your backing light); then, to get your mask, you filter on a very tight red/green band, and similarly filter that out of your final product if there is any spill. You’re probably still going to get some spill issues that have to be cleaned up where things caused the light bands to shift a bit, but this technique sounds like it would be fun to experiment with.

    1. Replying to myself, but I guess I don’t see what is unique about sodium lights, or hardware filtering. In their final video, you can tell that the yellows are a bit washed out (though this could be color-corrected better). Modern green screen suffers from the fact that the “green” or the “blue” is not very precise, being a physical color illuminated by various normal lights. So really the problem is about getting a precise mask color, as computers can obviously mask on any arbitrary color. In the age of super-fast shutter speeds, one option might be to cycle between two LED colors on the back panel at high speed so you get optimal masking. I am guessing that would be quite the nightmare due to synchronizing shutters (rolling shutter) and probably having to build an AI solution to pick the best mask for each frame; also, unless it was extremely fast, it might be seizure-inducing for the crew, but sacrifices have to be made…
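
      Once footage like that is split into backdrop-on and backdrop-off frame pairs, the key reduces to a simple difference matte; a rough sketch, with the file names and threshold being assumptions:

        import cv2
        import numpy as np

        # A consecutive pair of frames with the LED backdrop on and off;
        # the file names are placeholders.
        on = cv2.imread("frame_led_on.png").astype(np.float32)
        off = cv2.imread("frame_led_off.png").astype(np.float32)

        # The backdrop changes a lot between the pair while the subject barely
        # moves (assuming a high capture rate), so a thresholded difference
        # isolates the background region.
        diff = np.abs(on - off).sum(axis=2)
        mask = (diff > 40).astype(np.uint8) * 255  # 255 where backdrop shows
        cv2.imwrite("matte.png", mask)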

  6. What if you use the same setup but with green or blue light… or a color that isn’t going to be knowingly present in said scene?

    You could dumb it down nowadays, having 3D cameras with depth-perception capabilities.

    1. The thing with a blue or green screen is that both the person and the screen have to be illuminated with the same lights and recorded with the same camera.

      What’s happening here is that you separate the camera, lighting, foreground and background into “layers” that can be composited and keyed together without loss of color depth and without screen bleed or screen ghosting from keying errors.

          1. Especially when live actors have to interact with pretend, make-believe drawings they can’t see while actually 🎥

            They’d look insane in person out of context. Imaginary friends?

  7. I think Paul Debevec was also influential in starting the HDR process. The solution described in the video is straightforward and was probably not used by Disney because film at that time lacked sensitivity. And no, it would not be too challenging to produce a beam splitter like Disney’s nowadays – only a little expensive. I think the results would have been even better if they had synchronized the two cameras. The magenta cast in some motion-blurred regions seems to indicate that they were not perfectly synchronized.

  8. Mary Poppins was a brilliant movie that utilized a lot of new techniques. A good friend of mine won two Oscars for his work on the film (music), and I’ve heard lots of stories about the production of Poppins from him first-hand.
    And it’s amazing that the sodium screen technique allowed for so much more wardrobe flexibility than the techniques that became standard for the next 40 years. But they didn’t really go into the reason why the technique wasn’t more broadly used, beyond implying that there was no way to make more of these special prisms. Had the demand actually been there, they could have made them. So why wasn’t the demand there?

    I recall hearing from my instructors when I was in the cinematography program at AFI that it required INSANE amounts of light, and everyone on the stage was miserable, especially the actors who had to be in that light. And these were old-school Technicolor guys who normally key-lit every scene with banks of carbon arcs and used 10Ks for casual fill light. So when they say it was an insane amount of light required, it boggles my mind to consider the volume of illumination they must’ve been referring to.

    They touched on virtual production at the end of their video; it’s definitely the future. It solves every one of those color and transparency problems, plus has the benefit that the actors can see their environment rather than having to pretend it’s there.

    These techniques are in a new category called ICVFX (in-camera visual effects); “in-camera” was how all visual effects were done for decades, going back to the dawn of cinema (think Georges Méliès), so ICVFX is a very high-tech example of how everything old is new again.

    1. LEDs are available with peak wavelengths specified to within 5-10 nm, which also shift a bit with temperature. Say you select them again to a tighter tolerance; they are still relatively broadband emitters, meaning 50-100 nm of spectrum has to be rejected at the camera, which will alter the color tones in the scene a bit.
      For LEDs to be useful, they need lenses on top, followed by 10-20 nm narrowband dielectric filters and then a diffuser.
      Collimation is needed since the pass band shifts towards shorter wavelengths as the angle of incidence deviates from the normal.
      After that, perhaps 20-40% of the light is left.
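
      For anyone wanting numbers: the standard thin-film approximation for that shift is λ(θ) = λ₀·√(1 − (sin θ / n_eff)²), and with an assumed effective index of 2.0 it looks like this:

        import math

        lambda0 = 589.0  # passband centre at normal incidence, nm (sodium D line)
        n_eff = 2.0      # effective index of the filter stack; an assumed value

        for deg in (0, 10, 20, 30):
            theta = math.radians(deg)
            centre = lambda0 * math.sqrt(1 - (math.sin(theta) / n_eff) ** 2)
            print(f"{deg:2d} deg: passband centre ~{centre:.1f} nm")

        # By 30 deg the centre has moved ~19 nm towards blue, which is why the
        # light must be collimated before a 10-20 nm wide filter.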

  9. Well, the result is interesting, but very far from perfect!
    At 10:05, when the girl is turning around and you see her back, the veil in front of her face becomes fully transparent at both sides of her neck, and even before and after that there are some strange “holes” of too much transparency.
    Honestly, the green screen result seems more homogeneous regarding veil transparency, and certainly not “yuck”, “destroyed” or vomit-inducing as overplayed by some of these guys. But it’s true the post-production process is much more involved, and there are other defects, like those lines around objects.

    1. I think you need to watch in higher quality (though even at its best, compression of such details is going to be problematic at times, and your screen may prove a limiting factor to some extent too), as it doesn’t actually become transparent. It certainly blends into the background better, but you can clearly still see it is there. These sorts of thin “transparent” materials very often look exactly like that in reality – the position you are viewing from, the angle it makes to you, and the lighting make a huge difference to how visible they are. How many times have you walked into, or failed to clean out, bits of a spider’s web that was entirely invisible from your angle, for instance?

      In this case, in many places as it wraps her shoulders you are viewing the background through two layers, but at dead centre through only one, as it doesn’t wrap that far – which I suspect, along with the times her veil is in her own shadow, is the cause of most if not all of what you are seeing as “holes”. But even when it’s shadowed and only one layer, so really not very visible (and it really shouldn’t be), you can still see it.
