Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that is actually absent, or, in other words, a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to produce tiny videos at random. The hallucination-like, 64×64-pixel clips are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips learns from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of consecutive frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and to extract the overall dynamics of the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
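For a rough sense of how such a foreground/background split can be wired up, here is a minimal sketch in PyTorch (the released code itself is Torch7/LuaJIT): one stream produces a static background image, the other a moving foreground plus a mask that composites the two into a 32-frame, 64×64 clip. The layer sizes, the 100-dimensional noise input, and all names are illustrative assumptions, not the authors’ exact architecture.

```python
# Minimal sketch of a two-stream video generator: a static background image
# plus a moving foreground blended in with a learned mask. Illustrative only;
# not the authors' released Torch7 model.
import torch
import torch.nn as nn

class TwoStreamVideoGenerator(nn.Module):
    def __init__(self, z_dim=100, frames=32):
        super().__init__()
        self.frames = frames
        # Background stream: 2D transposed convolutions -> one static 64x64 RGB image.
        self.background = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                                 # 64x64
        )
        # Foreground stream: 3D transposed convolutions -> shared spatio-temporal features.
        self.foreground = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 512, (2, 4, 4), 1, 0), nn.BatchNorm3d(512), nn.ReLU(True),  # 2x4x4
            nn.ConvTranspose3d(512, 256, 4, 2, 1), nn.BatchNorm3d(256), nn.ReLU(True),            # 4x8x8
            nn.ConvTranspose3d(256, 128, 4, 2, 1), nn.BatchNorm3d(128), nn.ReLU(True),            # 8x16x16
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(True),              # 16x32x32
        )
        self.fg_video = nn.Sequential(nn.ConvTranspose3d(64, 3, 4, 2, 1), nn.Tanh())     # 32x64x64 RGB
        self.fg_mask = nn.Sequential(nn.ConvTranspose3d(64, 1, 4, 2, 1), nn.Sigmoid())   # 32x64x64 mask

    def forward(self, z):
        # z: (batch, z_dim) latent noise vector.
        bg = self.background(z.view(z.size(0), -1, 1, 1))            # (B, 3, 64, 64)
        bg = bg.unsqueeze(2).expand(-1, -1, self.frames, -1, -1)     # replicate the still image over time
        feat = self.foreground(z.view(z.size(0), -1, 1, 1, 1))       # (B, 64, 16, 32, 32)
        fg = self.fg_video(feat)                                     # (B, 3, 32, 64, 64)
        mask = self.fg_mask(feat)                                    # (B, 1, 32, 64, 64)
        # Composite: the mask selects the moving foreground, (1 - mask) keeps the static background.
        return mask * fg + (1 - mask) * bg

if __name__ == "__main__":
    gen = TwoStreamVideoGenerator()
    clip = gen(torch.randn(1, 100))  # one random 32-frame, 64x64 RGB clip
    print(clip.shape)                # torch.Size([1, 3, 32, 64, 64])
```

Generating a clip from a static input image would, in the same spirit, swap the noise vector for features extracted from that image by an encoder.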

Currently, the team limits the clips to a resolution of 64×64 pixels and a duration of 32 frames to keep the amount of required training data down, which still comes to 7 TB. Despite obvious deficiencies in photorealism, the little clips were judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be presented at the NIPS 2016 conference in December.

21 thoughts on “Hallucinating Machines Generate Tiny Video Clips”

  1. They could get more training data by replacing clips with links to YouTube videos. Or better yet: make it watch every YouTube video, in order of generated addresses. Learning capability would be limited only by bandwidth and the number of videos watched simultaneously…

  2. Part of me wants there to be a “Hallucinate your pic” drop box using this software… But I fear they’d need to create an “anatomy” category… O_o
    I WOULD love to be able to play with this, but I have not got the first idea how to even try to make any of the downloadable stuff actually work.

  3. This is really interesting from a VR point of view.

    Imagine for a moment if these videos were generated in real time, based on a few set images of actions or things that are to appear in the VR world, with the hallucinating machine generating context and visuals that are, as far as we’re aware, real and lifelike.

    While the premise of this is about what’s not there, having it fill in what VR does want us to see may be a step toward truly lifelike graphics, perhaps as streamed generated video rather than scenes simply rendered from polygons.

    Super interesting to see where they take this, and what ‘trippy’ VR experiences outside of our normal reality we’ll be able to conceive.

  4. So this will be the next thing in television production? When auto-colorized BW photos and movies started appearing in documentaries in the ’90s, complete reruns of every single celluloid scrap started appearing, repackaged as “WW1: Never before seen footage”, “Nazis in color”, etc. Now we will be able to include still photos from the dawn of photography. Old French postcards will be uploaded as GIFs everywhere!!!

    Another field would be VR. The algorithms might be suitable for removing the “clinical” feel of computer-generated environments?

  5. How did they feed the mushrooms to the computer?
    Or was a test subject smoking pot brain-scanned, and this is how the system interpreted the data?

    This reminds me of a joke about a dude smoking his first joint on the balcony, seeing nice changing orange shades moving up and down and from left to right, then again after taking another puff. After the trip ends, he goes back into the house and his mother asks: “Where have you been?” “On the balcony.” “For two days???”
