This year has been the year of home video conferencing. If you are really on the ball, you’ve put up some kind of green screen so you can hide your mess and look as though you are in your posh Upper East Side office. However, most consumer video conferencing software now has some way to guess what your background is and replace it even without a green screen. The results, though, often leave something to be desired. A recent University of Washington paper outlines a new background matting procedure using machine learning and, as you can see in the video below, the results are quite good. There’s code on GitHub and even a Linux-based webcam filter.
The algorithm does require a shot of the background without you in it, which we imagine needs to stay relatively static. Judging from the video, the acid test for this kind of software is spiky hair: there are several comparisons of definitely-not-bald people flipping their hair around, processed with this method and with other background replacers such as the one built into Zoom.
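To see why a clean, static background plate matters, here is a deliberately naive sketch of difference-keying: threshold the per-pixel difference between the live frame and the plate, and treat anything that changed as foreground. This is not the paper's learned matting network, just a minimal illustration (using made-up array data) of the idea that any drift between the plate and the frame shows up as spurious foreground.

```python
import numpy as np

def naive_matte(frame, background, threshold=30):
    """Binary foreground mask: a pixel is foreground if any color channel
    differs from the clean background plate by more than `threshold`.
    frame, background: HxWx3 uint8 arrays."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=2) > threshold

def composite(frame, new_background, mask):
    """Paste the masked foreground pixels over a replacement background."""
    out = new_background.copy()
    out[mask] = frame[mask]
    return out

# Tiny synthetic demo: a flat gray plate with a bright 2x2 "subject" patch.
bg = np.full((4, 4, 3), 100, dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 250                      # the subject
green = np.zeros_like(bg)
green[..., 1] = 255                        # replacement background

mask = naive_matte(frame, bg)
result = composite(frame, green, mask)     # subject over green
```

If the lighting shifts after you capture the plate, the difference everywhere creeps above the threshold and the matte falls apart, which is exactly why the real system still benefits from constant lighting and an occasional fresh background shot.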
The quality of the results depends somewhat on how many pixels the filter has to work with, so 4K video seemed to do better than lower resolutions. Even then it isn’t perfect, but it does seem to do a better job than some of the existing tools. If you try it yourself, we did read that you want to avoid blocking any bright light source, especially during background capture, and to keep the lighting constant. For prolonged use, it may help to retake the background image as the light drifts.
As you might expect, you are going to need a big video card to do this in real time, especially since the algorithm seems to work better with 4K input. However, if you are doing video production in post, you can probably trade hardware for time.