Advanced Techniques For Realistic Baking Animations

Computer graphics have come a long way since the days of Dire Straits and their first computer-animated music video in 1985. Moving the state of the art forward has taken the labor of countless artists, developers, and technicians. Working in just that field, a group from UCLA have developed an advanced system for simulating baking in computer graphics, and the results look absolutely delicious.

We propose a porous thermo-viscoelastoplastic mixture model.

The work is being presented at SIGGRAPH Asia and, being an academic paper, is dense in arcane terminology. To properly simulate baking, the team had to consider a multitude of interdependent processes. There’s heat transfer to consider, the release of carbon dioxide from leavening agents, the browning of dough due to evaporation of water, and all manner of other complicated chemical and physical interactions.
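For a taste of just the heat-transfer piece, here’s a toy finite-difference diffusion step on a 2D grid of “dough” temperatures. This is our own back-of-the-envelope sketch, not the team’s solver, and every constant in it (grid size, diffusivity, oven temperature) is a made-up stand-in:

```python
import numpy as np

# Toy illustration only: explicit finite-difference heat diffusion on a 2D
# "dough" grid. The UCLA model couples heat transfer with gas release, water
# evaporation, and viscoelastoplastic deformation; none of that is here, and
# every constant below is a made-up stand-in.

N = 64                      # grid resolution (assumed)
alpha = 1.3e-7              # rough thermal diffusivity of dough, m^2/s (assumed)
dx = 0.01 / N               # cell size for a ~1 cm slab of dough
dt = 0.2 * dx**2 / alpha    # time step small enough for explicit stability

T = np.full((N, N), 20.0)                        # dough starts at room temperature, degC
T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 180.0  # "oven" boundary held at 180 degC

def step(T):
    """One explicit diffusion update: T += dt * alpha * laplacian(T)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T_new = T + dt * alpha * lap
    # re-impose the fixed-temperature boundary after the update
    T_new[0, :] = T_new[-1, :] = T_new[:, 0] = T_new[:, -1] = 180.0
    return T_new

for _ in range(2000):
    T = step(T)
print(f"center temperature after {2000 * dt:.1f} s: {T[N // 2, N // 2]:.1f} degC")
```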

With a model that takes all of these factors into account, the results are amazingly realistic. The team have shown off renders of cookies in the oven, freshly baked loaves of bread being torn apart, and even muffins full of melted chocolate chips.

We imagine it would have been difficult not to work up an appetite during the research process. We’ve seen impressive work from SIGGRAPH before, like this method for printing photorealistic images on 3D surfaces. Video after the break.

Continue reading “Advanced Techniques For Realistic Baking Animations”

Neural Networks Walk Better Than Humans For Game Animation

Modern-day video games have come a long way from Mario the plumber hopping across the screen. The incredibly intricate environments of today’s games are part of the lure for new gamers, and that experience is brought to life by the characters interacting with the scene. However, the illusion of the virtual world is disrupted by unnatural movements of the figures when performing actions such as turning around suddenly or climbing a hill.

To remedy the abrupt movements, [Daniel Holden] et al. recently published a paper (PDF) and a video showing a method to greatly improve the real-time character control mechanism. The proposed system uses a neural network trained on a large data set of walking, jumping, and other sequences over various terrains. The key is breaking down the process of bipedal movement and its cyclic behaviour into a series of sub-steps or phases. Each phase translates to a natural posture for the character while moving. The system precomputes the next phases offline to conserve computational resources at runtime. Then, taking into account the user’s control input, the previous pose of the character (including joint positions), and the terrain geometry, the next frame of the animation is computed. The computation is done by a regression network that calculates the future positions of the joints, and a blending function is used for Motion Matching as described in a presentation (PDF) and video by [Simon Clavet].

Continue reading “Neural Networks Walk Better Than Humans For Game Animation”
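To get a feel for the runtime side of the idea, here’s a rough NumPy sketch: blend precomputed per-phase parameters according to the current phase, then regress the next pose from the control input, previous pose, and terrain features. This is our own simplification, not the authors’ code — the layer sizes are invented, there’s a single hidden layer instead of the paper’s deeper network, and the blend here is plain linear interpolation rather than the smoother spline blend the paper describes.

```python
import numpy as np

# Rough sketch (not the paper's code) of a phase-functioned regression step:
# network weights are precomputed offline at a handful of phase values, then
# blended at runtime before regressing the next pose. All sizes are made up.

PHASES = 4                  # number of precomputed phase points (assumed)
IN, HID, OUT = 48, 64, 36   # input/hidden/output sizes (made up)

rng = np.random.default_rng(0)
# One (W0, b0, W1, b1) set per precomputed phase point, as if trained offline.
weights = [
    (rng.standard_normal((HID, IN)) * 0.1, np.zeros(HID),
     rng.standard_normal((OUT, HID)) * 0.1, np.zeros(OUT))
    for _ in range(PHASES)
]

def blend_weights(phase):
    """Linearly blend the two nearest precomputed weight sets for a phase in [0, 1)."""
    x = (phase % 1.0) * PHASES
    i0, t = int(x) % PHASES, x - int(x)
    i1 = (i0 + 1) % PHASES
    return [(1 - t) * a + t * b for a, b in zip(weights[i0], weights[i1])]

def next_frame(user_control, prev_pose, terrain, phase):
    """Regress the next pose from control input, previous pose, and terrain."""
    W0, b0, W1, b1 = blend_weights(phase)
    x = np.concatenate([user_control, prev_pose, terrain])  # feature sizes must sum to IN
    h = np.maximum(W0 @ x + b0, 0.0)   # ReLU hidden layer
    return W1 @ h + b1                 # predicted joint positions, etc.

# Example call with made-up feature sizes: 6 control + 30 pose + 12 terrain = 48.
pose = next_frame(np.zeros(6), np.zeros(30), np.zeros(12), phase=0.25)
print(pose.shape)   # (36,)
```

The point is simply that the expensive training happens offline; each frame at runtime costs only a cheap weight blend and one small forward pass.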

Retrotechtacular: The Early Days Of CGI

We all know what Computer-Generated Imagery (CGI) is nowadays. It’s almost impossible to get away from it in any television show or movie. It’s gotten so good that sometimes it can be difficult to tell the difference between the real world and the computer-generated world when they are mixed together on-screen. Of course, it wasn’t always like this. This 1982 clip from BBC’s Tomorrow’s World shows what the wonders of CGI were capable of in a simpler time.

In the earliest days of CGI, digital computers weren’t even really a thing. [John Whitney] was an American animator and is widely considered to be the father of computer animation. In the 1940s, he and his brother [James] started to experiment with what they called “abstract animation”. They pieced together old analog computers and servos to make their own devices that were capable of controlling the motion of lights and lit objects. While this process may be a far cry from the CGI of today, it is still animation performed by a computer. One of [Whitney’s] best-known works is the opening title sequence to [Alfred Hitchcock’s] 1958 film, Vertigo.

Later, in 1973, Westworld became the very first feature film to use CGI. The film was a science fiction western-thriller about amusement park robots that become evil. The studio wanted footage of the robot’s “computer vision”, but they would need an expert to get the job done right. They ultimately hired [John Whitney’s] son, [John Whitney Jr], to lead the project. The process first required color-separating each frame of the 70mm film because [John Jr] did not have a color scanner. He then used a computer to digitally modify each image to create what we would now recognize as a “pixelated” effect. The computer processing took approximately eight hours for every ten seconds of footage.

Continue reading “Retrotechtacular: The Early Days Of CGI”
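The block-averaging trick behind that pixelated look is trivial to recreate on modern hardware. Here’s a quick NumPy toy — our own illustration with an arbitrary block size and a random test frame, not a reconstruction of the 1973 pipeline:

```python
import numpy as np

# Toy recreation of that "pixelated" look (not the original 1973 pipeline):
# average each block of pixels and paint the whole block with that average.
# The block size and the random test frame are arbitrary choices.

BLOCK = 16  # side length of each mosaic block, in pixels (assumed)

def pixelate(image, block=BLOCK):
    """Return a copy of an H x W x 3 image with block-averaged colors."""
    h, w, _ = image.shape
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1)).astype(image.dtype)
    return out

# Example on a random "frame"; a real frame would come from a scanned film image.
frame = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
mosaic = pixelate(frame)
print(mosaic.shape, mosaic.dtype)   # (128, 128, 3) uint8
```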