Modern-day video games have come a long way from Mario the plumber hopping across the screen. The incredibly intricate environments of today's games are part of the lure for new gamers, and that experience is brought to life by the characters interacting with the scene. However, the illusion of the virtual world is broken when those figures move unnaturally while performing actions such as turning around suddenly or climbing a hill.
To remedy the abrupt movements, [Daniel Holden et al.] recently published a paper (PDF) and a video showing a method to greatly improve the real-time character control mechanism. The proposed system uses a neural network trained on a large data set of walking, jumping, and other sequences over various terrains. The key is breaking down the cyclic process of bipedal movement into a series of sub-steps or phases, with each phase corresponding to a natural posture for the moving character. The system precomputes the phase-dependent network weights offline to conserve computational resources at runtime. Then, given the user controls, the previous pose of the character (including joint positions), and the terrain geometry, the next frame of the animation is computed. The computation is done by a regression network that calculates the future positions of the joints, and a blending function is used for Motion Matching, as described in a presentation (PDF) and video by [Simon Clavet].
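To make that per-frame loop a little more concrete, here is a minimal sketch in Python/NumPy of the general idea: blend a few precomputed weight sets according to the current gait phase, then regress the next pose from the user controls, the previous pose, and terrain samples. The class name, layer sizes, single hidden layer, and the four-way Catmull-Rom blend are illustrative assumptions for this sketch, not the authors' exact architecture or code.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic Catmull-Rom interpolation between p1 and p2 for t in [0, 1]."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

class PhaseFunctionedRegressor:
    """Toy stand-in for the idea described above: the network weights are a
    function of the gait phase, blended from a few precomputed sets."""

    def __init__(self, in_dim, hidden, out_dim, n_phases=4, seed=0):
        rng = np.random.default_rng(seed)
        # One precomputed (W1, b1, W2, b2) set per control phase.
        self.sets = [dict(W1=rng.standard_normal((hidden, in_dim)) * 0.1,
                          b1=np.zeros(hidden),
                          W2=rng.standard_normal((out_dim, hidden)) * 0.1,
                          b2=np.zeros(out_dim))
                     for _ in range(n_phases)]
        self.n = n_phases

    def weights(self, phase):
        # phase in [0, 1) picks a point on the ring of precomputed sets;
        # the runtime weights are a Catmull-Rom blend of the neighbours.
        x = phase * self.n
        i = int(np.floor(x))
        t = x - np.floor(x)
        pick = lambda key, j: self.sets[(i + j) % self.n][key]
        return {key: catmull_rom(pick(key, -1), pick(key, 0),
                                 pick(key, 1), pick(key, 2), t)
                for key in ("W1", "b1", "W2", "b2")}

    def next_pose(self, phase, control, prev_pose, terrain):
        # Regress the next-frame pose from user control, the previous pose,
        # and terrain height samples under the character.
        w = self.weights(phase)
        x = np.concatenate([control, prev_pose, terrain])
        h = np.maximum(0.0, w["W1"] @ x + w["b1"])  # one ReLU hidden layer
        return w["W2"] @ h + w["b2"]

# Example call with made-up sizes: 4 control values, 93 pose values,
# 30 terrain samples (placeholders, not the paper's actual numbers).
net = PhaseFunctionedRegressor(in_dim=4 + 93 + 30, hidden=64, out_dim=93)
pose = net.next_pose(phase=0.37, control=np.zeros(4),
                     prev_pose=np.zeros(93), terrain=np.zeros(30))
```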
This approach proves effective on rough terrain and around obstacles that require interaction, such as circumventing, climbing, jumping, or stepping over them, all while following the user's directions. The end result is a very realistic rendering at a very low computational cost, as shown in the video below. Its applications go beyond games and all the way into the realm of Augmented Reality and Virtual Reality.
Neural networks are all the buzz these days, and with Google's TensorFlow projects coming to DIY robots, it is a sign that a new era in programming is on the horizon.
You don't need neural networks and fancy animations to make an epic game. Minecraft is blocky and simple, but it's better than any other AAA game like GTA, BF, COD, Half Life 2 or Postal 2.
No.
Yes. This is fun.
No, it isn’t
Yes, it is.
yes it's true, but tetris is even better
Ok, but consider: nethack.
dwarf fortress!
postal2 as an AAA game? seriously? also, i love minecraft, and i consider many “simple” indie games as giving a more rewarding experience than many AAA games (mostly because they are made by people who love video games as opposed to people who love money from video games), but compare what's comparable, not a sandbox/building/survival/exploration game to a mainstream fps.
also consider that AAA video game companies are one of the main engines of computing innovation, and this sort of work on neural networks might lead to impressive stuff for a hypothetical minecraft2, or to increased immersion in games where the actual animation sucks (hearing me, andromeda?)
If you say that, you never played those games.
animated walking was perfected back in ’83:
https://en.wikipedia.org/wiki/Manic_Miner
In fact, the best games you can make with a 555!
With IC logic gates you mean
“better”?
You do know that the words “opinion” and “fact” have different meanings?
NO. The great game is the open-source version, Minetest.
The main problem with these systems, and why they never actually become commonplace, is that the only mechanism for artistic control is manipulating a training set.
Or because, you know, they didn’t exist until very very recently.
Um, no.
Neural nets are pretty old hat, but they became *practical* only recently. Computational expense and all that. And we now have chips specialised in running them efficiently.
I disagree with your comment. “Manipulating a training set” is in fact itself “artistic control”. Manipulation allows the vendor to be as creative as he wants to be; there is no limit.
Yeah but you forgot that artists are not allowed to be programmers and vice versa /s
It’s not perfect, but certainly a massive improvement over current game animations. I would love to see this in any game with characters, as the aforementioned immersion will benefit handsomely.
It’s perfect for adventure/exploration games, but for quick action oriented games, you don’t necessarily want your character to play “slow” realistic animations at each direction change :-)
I dunno, a major problem I have with these FPS games is how unnaturally everyone moves. I don’t want instantaneous direction changes.
I don’t know, it seems to me that if you play an FPS game seriously, you disable all details, disable all effects and animations, replace object models with icons, set the field of view to 120° and pick a player character model that is least visible. The initial good-looking graphics are only there to make you buy the game.
For a more abstract competitive FPS, sure. If you want to do something that mimics real life, like Arma, this would be great.
Seems with some AAA titles they've already dealt with the problem. Probably a lot of preprocessing to keep it real, though.
Amazing! It looks like they have finally solved shitty character movement in a way that doesn't require an ungodly amount of animation data.
Instead it requires an ungodly amount of training data! And unless all of the characters walk like a middle-aged man with chronic back pain as in the demo above, you need significantly more training data. Also, now your characters can only do things that people IRL can do, and let me tell you, real life is very boring.
This only needed 1 hour of motion capture. I wouldn’t call that ungodly.
P.S. Hackaday, can you place a yes/no confirmation on the report comment button? It's too easy to hit it when you're right-handed and browsing the site on mobile.
Not only that, but I'm sure it won't be long before you can just download training data libraries for free. The rise of indie games!
classic canned animations would have required a lot more than an hour of mocap
Ugh, video games came a long way TO Mario the plumber hopping across the screen. Adventure, Pong, and Space War 4 the win!
It is replicating the way our brains anticipate future positions and the moves required to get to them, so the end result looks more natural?
This is cool, though time will tell if it's ultimately useful.
It's amazing how character animation is stuck in Lara Croft times and the gaming industry is not penalized for it. Here is the same kind of real-time animation from 11 years ago, combining sampled data + inverse dynamics of joint muscles to achieve realistic running and jumping effects with superpowers.
(Ignore the ragdoll effects.)
https://youtu.be/hdImUIhbG9E
This reminds me of the original Prince of Persia. Talk about fluid animation.
Okay. Here you go – fluid animation.
https://m.youtube.com/watch?v=ureGelZPi3o
So how good does this algorithm have to get before it becomes good enough for a real life giant bipedal mecha?
If only George Lucas had this technology when he made Han Solo walk over Jabba’s tail…
Great work. Keep it up.