Creating Surreal Short Films From Machine Learning

Ever since we first saw the nightmarish artwork produced by Google DeepDream and the ridiculous faux paintings generated with neural style transfer, we’ve been aware of the ways machine learning can be applied to visual art. With commercially available trained models and automated pipelines for generating images from relatively small training sets, it’s now possible for developers without theoretical knowledge of machine learning to easily generate images, provided they have sufficient access to GPUs. Filmmaker [Kira Bursky] took this a step further, creating a surreal short film whose characters and textures were all produced from her own image sets.

She began with four main datasets: about 150 photos of her face, 200 photos of film locations, 4600 photos of past film productions, and 100 drawings.

via [Kira Bursky]
Using GAN models for nebulas, faces, and skyscrapers in RunwayML, she found the results of training her face set to be disintegrated, realistic, and painterly. Many of the images continue to evoke aspects of her original face, albeit distorted, although whether that is the model identifying a feature common to skyscrapers and faces or simply our own bias towards facial recognition is up to the viewer.

On the other hand, training the film set photos on models of faces and bedrooms produced abstract textures and “surreal and eerie faces like a fever dream”. Perhaps, unlike the familiar anchors of facial features, it’s the lack of recognizable characteristics in the transformed images that gives them such a surreal feel.
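
RunwayML wraps all of this in a GUI, but the underlying step of pulling new frames out of a trained generator is easy to sketch with an off-the-shelf PyTorch GAN. The progressive GAN checkpoint below is only a stand-in for the models [Kira] trained on her own photos, not her actual RunwayML setup:

```python
import torch
import torchvision

# Pretrained progressive GAN (celebA-HQ faces) from the PyTorch hub.
# This checkpoint is a stand-in for a generator trained on a custom image set.
use_gpu = torch.cuda.is_available()
model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'PGAN',
                       model_name='celebAHQ-512', pretrained=True, useGPU=use_gpu)

# Sample random latent vectors and decode them into images.
num_images = 4
noise, _ = model.buildNoiseData(num_images)
with torch.no_grad():
    generated = model.test(noise)

# Save the batch as an image grid for inspection.
grid = torchvision.utils.make_grid(generated.clamp(min=-1, max=1),
                                   scale_each=True, normalize=True)
torchvision.utils.save_image(grid, 'samples.png')
```

Sampling latent vectors independently gives unrelated stills; interpolating between two of them is what produces the melting, morphing transitions that show up so often in GAN-generated video.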

[Kira] certainly used these results to her advantage, brainstorming a concept for a short film in which her main character experiences nightmares. Although her objective was to convey a series of emotionally striking scenes, the models she used to produce those scenes are also quite interesting.

She started off with the MiDaS model, created by a team of researchers from ETH Zurich and Intel, to generate monocular depth maps, which estimate the relative depth of each part of an image from a single frame. She also used Mask R-CNN to mask the backgrounds out of the generated faces, and combined her generated images in Photoshop to create the main character for her short film.
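
Both of those models are freely available in PyTorch, so the depth and masking steps can be reproduced outside of any particular tool. This rough sketch follows the documented torch.hub interface for MiDaS and torchvision’s pretrained Mask R-CNN; the file names are placeholders, and it assumes at least one person is detected in the frame:

```python
import cv2
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# --- Monocular depth with MiDaS (via torch.hub) ---
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)  # placeholder frame
batch = midas_transforms.default_transform(img)  # transform matching the MiDaS v2 model

with torch.no_grad():
    depth = midas(batch)
    # Resize the prediction back to the frame's resolution.
    depth = torch.nn.functional.interpolate(depth.unsqueeze(1), size=img.shape[:2],
                                            mode="bicubic", align_corners=False).squeeze()

# --- Background masking with Mask R-CNN (torchvision, COCO weights) ---
maskrcnn = maskrcnn_resnet50_fpn(pretrained=True).eval()
tensor = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()

with torch.no_grad():
    detections = maskrcnn([tensor])[0]

# Detections come sorted by confidence; grab the best "person" (COCO label 1).
person_idx = int((detections["labels"] == 1).nonzero()[0])
mask = detections["masks"][person_idx, 0].numpy()

# MiDaS predicts relative inverse depth (larger values are closer to the camera).
np.save("frame_depth.npy", depth.numpy())
cv2.imwrite("frame_mask.png", (255 * mask).astype(np.uint8))
```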

via [Vox]
In order to simulate the character walking, she used the Liquid Warping GAN, a framework for human motion imitation and appearance transfer created by a team from ShanghaiTech University and Tencent AI Lab. Its 3D body mesh recovery module let her take her generated images and synthesize the character walking from reference footage of herself going through the motions. Later on, she applied a similar motion-transfer technique to the faces, running them through the First Order Motion Model to simulate different emotions, and then joined the animated faces to her character in After Effects.
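
Both of those projects ship as research repositories with demo scripts rather than polished libraries, so the exact invocation depends on the version you check out. As a rough sketch of the First Order Motion Model step, the project’s demo notebook exposes helpers along these lines (paths, checkpoint names, and even the helper signatures should be treated as assumptions):

```python
import imageio
import numpy as np
from skimage.transform import resize

# Run from a checkout of the first-order-model repository; its demo module
# provides these helpers (names may differ between versions).
from demo import load_checkpoints, make_animation

# A still face generated earlier, plus a driving clip of the filmmaker's own expressions.
source_image = resize(imageio.imread("generated_face.png"), (256, 256))[..., :3]
reader = imageio.get_reader("driving_expressions.mp4")
fps = reader.get_meta_data()["fps"]
driving_video = [resize(frame, (256, 256))[..., :3] for frame in reader]

# Pretrained VoxCeleb face checkpoint distributed with the project.
generator, kp_detector = load_checkpoints(config_path="config/vox-256.yaml",
                                          checkpoint_path="vox-cpk.pth.tar")

# Transfer the driving clip's keypoint motion onto the still image.
predictions = make_animation(source_image, driving_video, generator, kp_detector,
                             relative=True)

imageio.mimsave("animated_face.mp4", [np.uint8(255 * f) for f in predictions], fps=fps)
```

The Liquid Warping GAN side works in a similar spirit, except that the driving signal is a recovered 3D body mesh rather than 2D keypoints, which is what lets it re-pose the whole figure instead of just the face.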

Bringing the results together, she used the depth map videos to animate a 3D camera blur, which gives viewers anchor points and keeps the footage from becoming too disorienting, and to create a displacement map that heightens the sense of depth and movement within the scenes. In After Effects, she also overlaid dust and film grain effects to give the final footage a crisper look. The result is a surprisingly cinematic film made entirely of images and video generated by machine learning models, and with the help of the depth adjustments it almost looks like something you might see in a nightmare.
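
For anyone who wants to try the same trick, After Effects just needs a grayscale layer to drive both the camera blur and the displacement map, so the main preprocessing job is normalizing each depth frame into an 8-bit image. A minimal sketch, with placeholder file names:

```python
import glob
import cv2
import numpy as np

# Turn raw MiDaS depth predictions (one .npy per frame) into 8-bit grayscale
# frames usable as a displacement map or camera-blur depth layer.
for path in sorted(glob.glob("depth_frames/*.npy")):   # placeholder directory
    depth = np.load(path)

    # Rescale the frame's relative depth values to the full 0-255 range.
    lo, hi = float(depth.min()), float(depth.max())
    gray = ((depth - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8)

    # A light blur hides blocky artifacts that become obvious once the map
    # starts displacing pixels.
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    cv2.imwrite(path.replace(".npy", ".png"), gray)
```

Normalizing per frame can make the depth scale flicker from one frame to the next; tracking a single min/max across the whole shot is one way to keep the displacement stable.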

Check out the result below:

11 thoughts on “Creating Surreal Short Films From Machine Learning”

  1. Really wish we’d interrogate whether we should be building all these things instead of just focusing on whether we can. Not talking about the artist’s specific work or whatever, but AI in general. It’s gonna be Butlerian Jihad time soon, Dune fans. Thou shalt not suffer a machine built in the likeness of a human mind.

    1. Ultimately there will come a time when we create self-conscious entities. This, combined with genetic engineering and cyborgs, will deeply disturb our perception of reality as we know it. Whether it kicks us into new realms and horizons or makes our lives a true hell remains to be seen.

  2. I used maths (because technically that is all this sort of work is) to produce over 1000 “painted” portraits of people who never existed, high-quality images too, 3000 x 3000 pixels, and you know what one Twitter troll said? “They are just photo filters,” implying that anyone could do that. Hmmm… but nobody else has done that, then or since! This brings us to what art really is: it is not the artifact, rather it is the idea. If there is anything original about it, then it is art, because the person who created it has just pushed out the surface of that bubble defined by humanity’s collected culture and knowledge and increased its volume. This is how you should judge a person’s efforts. Not sure if this one is art, but I don’t recall seeing anything like it before, and while it was the improvement in technology that enabled my explorations in that area, it was my judgement as to how to use the tools. So again, it is about what the artist was thinking and communicating, its originality, not the technique or even their “skill” with it. https://pbs.twimg.com/media/ECTAQqwUcAAPbKI?format=jpg
