Bye Bye Green Screen, Hello Monochromatic Screen

It’s not uncommon in 2024 to have some form of green background cloth for easy background effects when in a Zoom call or similar. This is a technology TV and film studios have used for decades, and it’s responsible for many of the visual effects we see every day on our screens. But it’s not perfect — its use precludes wearing anything green, and it’s very bad at anything transparent.

The 1960s Disney filmmakers seemingly had no problem with this, as anyone who has seen Mary Poppins will tell you, so how did they manage to overlay actors with diaphanous accessories over animation? The answer lies in an innovative process which has largely faded from view, and [Corridor Crew] have rebuilt it.

Green screen, or chroma key to give the effect its real name, relies on the background being a colour not present in the main subject of the shot. This can then be detected electronically or in software, and a switch made between the shot and an inserted background. It’s good at picking out clean edges between green background and subject, but poor at transparency such as a veil or a bottle of water. The Disney effect instead used a background illuminated with monochromatic sodium light behind a subject illuminated with white light, allowing a background and a foreground image to be filmed at once using two cameras and a dichroic beam splitter. The background image, with its black silhouette of the subject, could then be used as a photographic stencil when overlaying a new background.
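For the software-minded, the matte logic (though obviously not the optics) translates into a few lines of code. Here’s a minimal sketch of the idea, assuming two pre-aligned frames from the beam splitter saved under made-up file names:

```python
import cv2
import numpy as np

# Hypothetical inputs: two pre-aligned frames from the beam splitter, one in
# full colour under white light, one seeing only the sodium-lit backing.
foreground = cv2.imread("white_light_pass.png").astype(np.float32) / 255.0
backing = cv2.imread("sodium_light_pass.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
new_bg = cv2.imread("painted_background.png").astype(np.float32) / 255.0
new_bg = cv2.resize(new_bg, (foreground.shape[1], foreground.shape[0]))

# The sodium pass is bright wherever the backing shows through and dark where
# the subject blocks it, so it serves directly as a soft matte.
matte = np.clip(backing, 0.0, 1.0)[..., None]   # HxWx1: 0 = subject, 1 = backing

# Composite: keep the subject where the matte is dark, show the new background
# where it is bright, and blend smoothly in between.
composite = foreground * (1.0 - matte) + new_bg * matte
cv2.imwrite("composite.png", (composite * 255).astype(np.uint8))
```

Because the matte comes from brightness rather than hue, a translucent veil simply ends up with an in-between value, which is exactly the trick that defeats a colour-based key.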

Sadly even Disney found it very difficult to make more than a few of the dichroic prisms, so the much cheaper green screen won the day. But in the video below the break they manage to replicate it with a standard beam splitter and a pair of filters, successfully filming a colourful clown wearing a veil, and one of them waving their hair around while drinking a bottle of water. It may not find its way back into blockbuster films just yet, but it’s definitely impressive to see in action.

Continue reading “Bye Bye Green Screen, Hello Monochromatic Screen”

The World Is Your Green Screen

This year has been the year of home video conferencing. If you are really on the ball, you’ve managed to put some kind of green screen up so you can hide your mess and look as though you are in your posh Upper East Side office space. However, most consumer video conferencing software now has some way to try to guess what your background is and replace it even without a green screen. The results, though, often leave something to be desired. A recent University of Washington paper outlines a new background matting procedure using machine learning and, as you can see in the video below, the results are quite good. There’s code on GitHub and even a Linux-based webcam filter.

The algorithm does require a shot of the background without you in it, which we imagine needs to be relatively static. From watching the video, it appears the acid test for this kind of software is spiky hair. There are several comparisons of definitely-not-bald people flipping their hair around using this method and other background replacers, such as the one in Zoom.
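To get a feel for why that clean background plate matters, here’s a naive difference matte in a few lines of Python. It’s nothing like the paper’s neural matting, just an illustration of the clean-plate idea with hypothetical file names:

```python
import cv2
import numpy as np

# Hypothetical inputs: a clean plate of the empty room, and a frame with you in it.
clean_plate = cv2.imread("empty_room.png").astype(np.float32)
frame = cv2.imread("frame_with_person.png").astype(np.float32)

# Anywhere the frame differs enough from the clean plate is assumed to be "you".
diff = np.linalg.norm(frame - clean_plate, axis=2)    # per-pixel color distance
mask = (diff > 30.0).astype(np.uint8) * 255           # crude threshold, tune to taste

# This falls apart on shadows, sensor noise, and of course spiky hair, which is
# exactly the failure mode the learned matting approach sets out to fix.
cv2.imwrite("naive_mask.png", mask)
```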

Continue reading “The World Is Your Green Screen”

Background Substitution, No Green Screen Required

All this working from home that people have been doing has a natural but unintended consequence: revealing your dirty little domestic secrets on a video conference. Face time can come at a high price if the only room you have available for work is the bedroom, with piles of dirty laundry or perhaps the incriminating contents of your nightstand on full display for your coworkers.

There has to be a tech fix for this problem, and many of the commercial video conferencing platforms support virtual backgrounds. But [Florian Echtler] would rather air his dirty laundry than go near Zoom, so he built a machine-learning background substitution app that works with just about any video conferencing platform. Awkwardly dubbed DeepBackSub (he’s working on a better name), the system does the hard work of finding the person in the frame with TensorFlow Lite. After identifying everything in the frame that’s a person, OpenCV replaces everything that’s not with whatever background you choose, and the modified scene is piped over a virtual video device to the videoconferencing software. He’s tested it with Firefox, Skype, and guvcview so far, all running on Linux. The resolution and framerates are limited, but such is the cost of keeping your secrets and establishing a firm boundary between work life and home life.
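In broad strokes, and not to be mistaken for [Florian]’s actual code, the per-frame loop of such a pipeline looks something like the sketch below, where segment_person() is a dummy stand-in for the TensorFlow Lite model:

```python
import cv2
import numpy as np

def segment_person(frame):
    """Placeholder for the TensorFlow Lite segmentation step: here it just marks
    a center rectangle as 'person' so the sketch runs end to end."""
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1.0
    return mask

replacement = cv2.imread("fake_background.jpg")   # hypothetical file name
cap = cv2.VideoCapture(0)                         # the real webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    bg = cv2.resize(replacement, (frame.shape[1], frame.shape[0]))
    mask = segment_person(frame)[..., None]       # HxWx1, 1.0 where a person is
    # Keep the person, swap everything else for the chosen background.
    out = (frame * mask + bg * (1.0 - mask)).astype(np.uint8)
    # DeepBackSub would hand this frame to a virtual video device (v4l2loopback)
    # for the conferencing app to pick up; here we just preview it.
    cv2.imshow("preview", out)
    if cv2.waitKey(1) == 27:                      # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```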

[Florian] has taken the need for a green screen out of what’s formally known as chroma key compositing, which [Tom Scott] did a great primer on a few years back. A physical green screen is the traditional way to do this, but we honestly think this technique is great and can’t wait to try it out with our Hackaday colleagues at the weekly videoconference.


How Green Screen Worked Before Computers

If you know anything about how films are made then you have probably heard about the “green screen” before. The technique is also known as chroma key compositing, and it’s generally used to merge two images or videos together based on color hues. Usually you see an actor filmed in front of a green background. Using video editing software, the editor can then replace that specific green color with another video clip. This makes it look like the actor is in a completely different environment.

It’s no surprise that, with computers, this is a very simple task. Any basic video editing software will include a chroma key function, but have you ever wondered how this was accomplished before computers made it so simple? [Tom Scott] posted a video to explain exactly that.
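For a sense of just how simple the software route is, a bare-bones chroma key fits in a dozen lines of Python with OpenCV; the file names and the green range here are placeholders you would tune for real footage:

```python
import cv2
import numpy as np

# Hypothetical input frames; the green range below is only a starting point
# and would need tuning for real footage and lighting.
frame = cv2.imread("actor_on_green.png")
background = cv2.imread("new_background.png")
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

# Pick out "green enough" pixels in HSV space.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))   # 255 where the screen is

# Wherever the mask says "green screen", take the background pixel instead.
composite = np.where(mask[..., None] > 0, background, frame)
cv2.imwrite("composite.png", composite)
```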

In the early days of film, the studio could film the actor against an entirely black background. Then they would copy the film over and over at higher and higher contrast until they ended up with a black background and a white silhouette of the actor. This film could be used as a matte. Working with an optical printer, the studio could then perform a double exposure to combine film of a background with the film of the actor. You can imagine that this was a much more cumbersome process than making a few mouse clicks.
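In digital terms that whole optical dance collapses into a couple of array operations. Here’s a rough sketch with hypothetical plate file names, where a hard threshold stands in for the repeated high-contrast copies and bitwise masking stands in for the double exposure:

```python
import cv2
import numpy as np

# Hypothetical plates: the actor filmed against black, and the background footage.
actor_on_black = cv2.imread("actor_on_black.png")
background = cv2.imread("background_plate.png")
background = cv2.resize(background, (actor_on_black.shape[1], actor_on_black.shape[0]))

# The repeated high-contrast copies effectively push every pixel to pure black
# or pure white; a hard threshold on brightness does the same job here.
gray = cv2.cvtColor(actor_on_black, cv2.COLOR_BGR2GRAY)
_, matte = cv2.threshold(gray, 20, 255, cv2.THRESH_BINARY)   # white silhouette
matte3 = cv2.merge([matte, matte, matte])

# The "double exposure": hold out an actor-shaped hole in the background,
# then add the actor plate into that hole.
held_out = cv2.bitwise_and(background, cv2.bitwise_not(matte3))
composite = cv2.add(held_out, cv2.bitwise_and(actor_on_black, matte3))
cv2.imwrite("composite.png", composite)
```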

For the green screen effect, studios could actually use specialized optical filters. They could apply one filter that would ignore a specific wavelength of the color green. Then they could film the actor using that filter. The resulting matte could then be combined with the footage of the actor and the background film using the optical printer. It’s very similar to the older style with the black background.

Electronic analog video has some other interesting tricks for performing the same basic effect. [Tom] explains that the analog signal contained information about the various colors that needed to be displayed on the screen. Electronic circuits were built that could watch for a specific color (green) and replace that part of the signal with the one from the background video. Studios even went so far as to record both the actor and a model simultaneously, using two cameras that were mechanically linked together to make the same movements. The signals could then be run through this special circuit, and the combined image recorded live.

There are a few other examples in the video, and the effects [Tom] uses to illustrate these old techniques go a long way toward helping you understand the concepts. It’s crazy to think how complicated this process used to be, when nowadays we can do it in minutes with the computers we already have in our homes. Continue reading “How Green Screen Worked Before Computers”