Boost Your Animation To 60 FPS Using AI

The uses of artificial intelligence and machine learning continue to expand, with one of the more recent implementations being video processing. A new method can “fill in” frames to smooth out the appearance of video, which [LegoEddy] was able to use in one of his animated LEGO movies with some astonishing results.

His original animation of LEGO figures and sets was created at 15 frames per second. As an animator, he notes that producing many more frames than that with traditional methods is orders of magnitude more difficult, at least in his studio. This is where the artificial intelligence comes in: the program interpolates between existing frames, generating new frames to fill the gaps between the originals. This allowed [LegoEddy] to increase his frame rate from 15 fps to 60 fps without having to actually create the additional frames himself.
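To get a feel for what “filling in” frames means, here is a minimal sketch of the simplest possible interpolator: a per-pixel cross-fade between each pair of originals. To be clear, this is not the depth-aware network [LegoEddy] used; a learned model estimates how objects move between frames rather than just blending pixels, which is what avoids the ghosting a cross-fade produces. The 15 fps to 60 fps bookkeeping, though, is the same either way: three synthetic frames between every original pair.

```python
# Naive frame interpolation for illustration only: a per-pixel cross-fade.
# The AI approach replaces the blend with motion- and depth-aware synthesis,
# but the 15 fps -> 60 fps bookkeeping (3 new frames per pair) is the same.
import numpy as np

def crossfade_frames(frame_a, frame_b, n_between=3):
    """Return n_between blended frames between two HxWx3 uint8 frames."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    blended = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)                  # fractional position, 0 < t < 1
        blended.append(((1 - t) * a + t * b).astype(np.uint8))
    return blended

def upsample_15_to_60(frames):
    """Quadruple the frame rate of a list of frames (15 fps -> 60 fps)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.extend(crossfade_frames(a, b, n_between=3))
    out.append(frames[-1])
    return out
```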

While we’ve seen AI create art before, applying it to traditionally produced video is a dramatic advancement, especially since the AI is aware of depth and preserves information about the distance of objects from the camera. The software is also free, runs on any computer with an appropriate graphics card, and is available on GitHub.

LED Art Reveals Itself In Very Slow Motion

Every bit of film or video you’ve ever seen is a mind trick, an optical illusion of continuous movement based on flashing 24 to 30 slightly different images into your eyes every second. The wetware between your ears can’t deal with all that information individually, so it convinces itself that you’re seeing smooth motion.

But what if you slow down time: dial things back to one frame every 100 seconds, or every 1,000? That’s the idea behind this slow-motion LED art display called, appropriately enough, “Continuum.” It’s the work of [Louis Beaudoin] and it was inspired by the original very-slow-motion movie player and the recent update we featured. But while those players featured e-paper displays for photorealistic images, “Continuum” takes a lower-resolution approach. The display is built from nine HUB75 32×32 RGB LED panels, each with a 5 mm pitch, and the resulting 96×96 pixel display fits nicely within an IKEA RIBBA picture frame.
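The numbers pencil out neatly, too. Here is a quick back-of-the-envelope check, assuming the nine panels sit in a 3×3 grid (the layout isn’t spelled out above, but it’s the only way nine 32×32 panels make a 96×96 display):

```python
# Back-of-the-envelope geometry for "Continuum", using the figures above and
# assuming (not stated explicitly) that the nine panels sit in a 3x3 grid.
PANEL_PX = 32        # pixels per side of one HUB75 panel
GRID = 3             # panels per side of the assumed 3x3 arrangement
PITCH_MM = 5         # spacing between pixel centres

side_px = PANEL_PX * GRID       # 96 pixels per side
side_mm = side_px * PITCH_MM    # 480 mm of active area
print(f"{side_px}x{side_px} px, roughly {side_mm / 10:.0f} cm on a side")
```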

The display is driven by a Teensy 4 and [Louis]’ custom-designed SmartLED Shield that plugs directly into the HUB75s. The rear of the frame is rimmed with APA102 LED strips for an Ambilight-style effect, and the front of the display has a frosted acrylic diffuser. It’s configured to show animated GIFs at anything from 1 frame per second its original framerate to 1,000 seconds per frame times slower, the latter resulting in an image that looks static unless you revisit it sometime later. [Louis] takes full advantage of the Teensy’s processing power to smoothly transition between each pair of frames, and the whole effect is quite wonderful. The video below captures it as best it can, but we imagine this is something best seen in person.
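As a rough illustration of what that slow transition boils down to (a minimal sketch, not [Louis]’s actual firmware, with hypothetical names throughout): the blend between the current GIF frame and the next one is simply the elapsed time divided by the configured seconds-per-frame.

```python
# Minimal sketch of the very-slow cross-fade idea, written in Python for
# clarity rather than Teensy C++; the names here are hypothetical, not from
# [Louis]'s firmware.
import time

SECONDS_PER_FRAME = 1000.0   # hold time per GIF frame at the slowest setting

def fade_progress(frame_start):
    """Progress through the current frame-to-frame transition, 0.0 to 1.0."""
    return min((time.monotonic() - frame_start) / SECONDS_PER_FRAME, 1.0)

def blend_pixel(cur_rgb, nxt_rgb, t):
    """Linearly blend two (r, g, b) tuples; at t = 1.0 the next frame has fully arrived."""
    return tuple(int((1 - t) * c + t * n) for c, n in zip(cur_rgb, nxt_rgb))
```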

Nvidia Transforms Standard Video Into Slow Motion Using AI

Nvidia is back at it again with another awesome demo of applied machine learning: artificially transforming standard video into slow motion – they’re so good at showing off what AI can do that anyone would think they were trying to sell hardware for it.

Though most modern phones and cameras have an option to record in slow motion, it often comes at the expense of resolution, and always at the expense of storage space. For really high frame rates you’ll need a specialist camera, and you often don’t know that you should be filming in slow motion until after an event has occurred. Wouldn’t it be nice if we could just convert standard video to slow motion after it was recorded?

That’s just what Nvidia has done, all nicely documented in a paper. At its heart, the algorithm takes two frames and artificially creates one or more frames in between. This is not a hand-crafted algorithm that interpolates frames; it is a fully fledged deep-learning system. The Convolutional Neural Network (CNN) was trained on over a thousand videos – roughly 300k individual frames.

Since none of the parameters of the CNN are time-dependent, it’s possible to generate as many intermediate frames as required, something that sets this solution apart from previous approaches. In some of the shots in their demo video, 30 fps footage is converted to 240 fps, which requires creating seven additional frames for every pair of consecutive frames.
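The arithmetic is simple, and the time-independence is what makes it work: the network can be asked for a frame at any fractional time t between a pair of inputs, so going from 30 fps to 240 fps just means evaluating it at t = 1/8 through 7/8. Here is a quick sketch of that bookkeeping (the interpolation itself is the CNN’s job):

```python
# Bookkeeping only -- the in-between frames themselves come from the CNN,
# which can be evaluated at any fractional time t between two source frames.
def intermediate_times(src_fps=30, dst_fps=240):
    """Fractional positions t in (0, 1) at which new frames are needed
    between each pair of consecutive source frames."""
    factor = dst_fps // src_fps                    # 240 / 30 = 8x the frame rate
    return [i / factor for i in range(1, factor)]  # 7 values: 1/8 ... 7/8

print(intermediate_times())   # [0.125, 0.25, ..., 0.875] -> 7 new frames per pair
```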

The video after the break is seriously impressive, though if you look carefully you can see the odd imperfection, like the hockey player’s skate or the dancer’s arm. Deep learning is as much an art as a science, and if you understood all of the research paper then you’re doing pretty darn well. For the rest of us, get up to speed by wrapping your head around neural networks and trying out the simplest TensorFlow example.