Boston Dynamics’ Dancing Bots Beg For Your Love A La Napoleon Dynamite

How do you get people to love you and sidestep existential fear of robots eclipsing humans as the solar system’s most advanced thinking machines? You put on a dance routine to the music of Berry Gordy.

The video published by Boston Dynamics shows off a range of their advanced robots moving as if they were humans, greyhounds, and ostriches made of actual flesh. But of course they aren’t, which explains the safety barriers surrounding the dance floor and the lack of actual audio from the scene. After picking our jaws up off the floor we began to wonder what it sounds like in the room, as the whine of motors must certainly be quite impressive — check out the Handle video from 2017 for an earful of that. We also wonder how long a dance-off of this magnitude can be maintained between battery swaps.

Anthropomorphism (or would it be canine-pomorphism?) is trending this year. We saw the Spot robot as part of a dance routine in an empty baseball stadium back in July. It’s a great marketing move, and this most recent volley from BD shows off some insane stunts like the en pointe work from the dog robot while the Atlas humanoids indulge in some one-footed yoga poses. Seeing this it’s easy to forget that these machines lack the innate compassion and empathy that save humans from injury when bumping into one another. While our robotic future looks bright, we’re not in a rush to share the dance floor anytime soon.

Still, it’s an incredible tribute to the state of the art in robotics — congratulations to the roboticists who have brought us here. Looking back eleven and a half years to the first time we covered these robots here on Hackaday, this seems more like CGI movie footage than real life. What’s more amazing? Hobby builds that are keeping up with this level of accomplishment.

77 thoughts on “Boston Dynamics’ Dancing Bots Beg For Your Love A La Napoleon Dynamite”

    1. Boston Dynamics has a habit of embellishing their results, like the one time when they showed the dog robot opening a door and conveniently neglected to mention that it was operating under radio control.

      There’s a difference between executing a planned canned motion sequence, and having actual dynamic balance and control. The dance routine has exactly defined paths and the robot’s task is to follow those paths, which means someone has already calculated the dynamic balancing problem in advance – you have to because the robot has no mind to improvise the dance on the go. Therefore it is left to the robot to correct the difference between the already planned motions and their physical execution, which is a far simpler problem. The main issue is having motors with enough power and speed to follow along, which is what the previous robots were lacking.
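The distinction the comment draws — the plan is computed offline, and the robot only corrects the error between the planned motion and its physical execution — is essentially trajectory tracking with feedback control. A minimal one-joint sketch (illustrative only, with made-up gains and unit inertia; nothing here is Boston Dynamics’ actual control code) might look like:

```python
# Sketch of trajectory tracking: the "choreography" is a precomputed
# setpoint; the controller only corrects plan-vs-measurement error
# (simple PD control). It never plans or improvises anything itself.

def pd_track(planned, measured, prev_error, kp=8.0, kd=2.0, dt=0.01):
    """Return (corrective torque, current error) for one joint, one timestep."""
    error = planned - measured
    torque = kp * error + kd * (error - prev_error) / dt
    return torque, error

# Hypothetical run: the plan says "hold 0.5 rad"; the joint starts at 0.
angle, velocity, prev_err = 0.0, 0.0, 0.0
for _ in range(1000):                 # 10 s at dt = 0.01 s
    torque, prev_err = pd_track(0.5, angle, prev_err)
    velocity += torque * 0.01         # unit inertia, semi-implicit Euler
    angle += velocity * 0.01
# angle converges toward the planned 0.5 rad
```

The hard offline work is producing a feasible setpoint sequence; the easy online work is this kind of error correction.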

      It’s the same trick as with Google/Waymo self-driving cars: calculate everything you can outside of the vehicle, so all the environment modeling is done by a supercomputer (or a mechanical turk, as it is). As long as the environment doesn’t change, you’re good, but outside the pre-computed areas the car has no clue what’s happening or what anything is.

      So, the robot might dance you a jig, but still fall on its face coming down simple stairs because it doesn’t have a supercomputer for a brain and its motion planning algorithm is too primitive to do that in real time.

      1. Still, it’s pretty cool. I’m not one for dancing and certainly fell down stairs as a kid. Wouldn’t it be better to congratulate the awesomeness rather than pick holes? Sounds like back in the day, someone might be late to the party believing the world is round.

        Personally I prefer the idea of a cube shaped earth. Maps would be way easier to render, seas would be round and mountains would usefully poke through the atmosphere.

      2. For example, you might motion-capture a person doing the same dance, and then force a CAD model of the robot with the correct proportions and masses to dance the same routine. The dynamic simulation gives out the exact forces and the timings the motors need to apply to make the motions.

        Then it’s a matter of breaking the dance down to a number of “checkpoints” where you allow a certain amount of deviation from the plan, which is then corrected the next time you reach a stable waypoint, and the next part can continue from a known fixed state.

        Then you set your computer farm to churn it overnight, press play, and the robot dances.
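The checkpoint scheme described above — canned segments joined at stable waypoints where small errors get absorbed, so deviation can’t accumulate — can be sketched in a few lines. This is purely illustrative (one-dimensional "poses", made-up tolerances, hypothetical function names), not anyone’s real choreography pipeline:

```python
# Each segment ends at a known stable pose with an allowed deviation.
SEGMENTS = [
    (0.0, 0.05),   # (target pose, tolerance) -- 1-D poses for brevity
    (1.0, 0.05),
    (0.5, 0.05),
]

def run_segment(start, target, noise):
    """Execute one canned segment; real-world noise perturbs the end pose."""
    return target + noise

def dance(noises):
    pose = 0.0
    for (target, tolerance), noise in zip(SEGMENTS, noises):
        pose = run_segment(pose, target, noise)
        if abs(pose - target) <= tolerance:
            pose = target   # checkpoint: snap back to the known state
        else:
            raise RuntimeError("fell over; reset and re-shoot the take")
    return pose
```

Small per-segment errors get zeroed at every checkpoint; a large one means a failed take, which matches the "re-start until you get a clean run" point made elsewhere in the thread.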

        In contrast, the real deal would be for the robot itself to calculate the necessary motions just from knowing where its feet and arms should go. Here the robot is still very dumb, and relies on many simplifications and tricks to reduce the calculation effort.

        For example, when the dog robot walks, it always makes diagonal pivots with its legs, so it keeps falling left and forward, right and forward, left and forward… etc. in an alternating pattern. It can’t just put a leg down anywhere and push from that, because that’s not part of the pattern it keeps repeating, which you can see from the way the robot keeps trotting even when it’s moving slowly. It has to switch patterns to do different things. It’s doing dynamic balancing, but the motion planning isn’t dynamic – it’s running a static plan and switching between plans depending on the required actions. In other words, the Boston Dynamics robots can’t “walk and chew bubble gum”.
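The fixed trot pattern being described — diagonal leg pairs swinging together in a repeating two-phase cycle, with no free choice of footholds — reduces to something this simple (an illustrative sketch, not Spot’s actual gait controller):

```python
# A canned trot: the planner just cycles a fixed two-phase pattern.
# It never picks an arbitrary leg to swing; it only steps the cycle.
TROT_PHASES = [
    {"front_left", "rear_right"},   # phase 0: this diagonal pair swings
    {"front_right", "rear_left"},   # phase 1: the other pair swings
]

def swinging_legs(step):
    """Which legs are in the air at a given step of the cycle."""
    return TROT_PHASES[step % 2]
```

Doing something outside the pattern means switching to a different canned pattern, which is the "static plan" point being made.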

          1. Pfft! Some humans couldn’t dance to save their own lives, no matter how much practice (don’t ask me how I know).

            In my single years, I could have benefited from one of these robots as my wing-man. They’re cooler now than I ever was, and they’re still running Beta software.

          2. No! We are doing exactly what Dude above was saying, trying stuff out, then using our (slow, wet) supercomputer to precalculate things (maybe during sleep) so after a number of days practice we start to “get the hang” of it.

            The pathways that allow real-time adjustments are laid down in the sympathetic nervous system (like adding some FPGA code) because our supercomputers aren’t fast enough to do this stuff in real time.

            Then we start to skate, or play snooker, or touch type, reasonably well. Over further weeks of practice we refine our motor skills further, but slowly.

            However, all these precomputed pathways still get tripped up by something untoward happening (a bump in the ice, a kick from the cue ball, a different keyboard layout) and we fall flat on our arses.

            I guess that means we’re only as good as Boston Dynamics’ robots, and the Turing test has been passed!

          3. “Most humans don’t get dancing right the first time. Or diving, or skating, or any other complex motion. Does this mean that we are faking it?”

            I’m certainly faking it when I am dancing!
            What is that saying? Dance as if no one is watching(?)
            B^)

          4. >We are doing exactly what Dude above was saying

            It doesn’t take us days of special programming to learn the basic motions of some task, but that wasn’t even the point. The special programming is the point: instead of learning a new dance, you make one up to match your present skills in dancing.

      3. Sorry, but I don’t believe your claim that this is all dependent on pre-calculated movements or environments, unless you can come up with sources. The claim about the google self-driving cars is especially suspect, you’re about twenty years behind the times on how these systems work.

        Dynamic non-linear systems like this have that pesky “sensitive dependence on initial conditions” problem, they *have* to compensate in realtime for the errors that build up in the system.

        1. Indeed, a slightly different way of looking at and phrasing it than I was going for.

          This isn’t akin to a Blender animation of robots dancing; it’s happening in the real world, with wind, uneven floors, etc. So while the choreography is given to the robot to follow (much like human dancing, come to that), it has to correct for all the little variances the real world throws at it in order to stick with the plan. When remote controlling Spot, the robot is just being told ‘go direction x’ – it has to work out how. And if you watch them interacting with the world, carrying loads, etc., it’s clear the robot is adaptive to the environment and in control of how it moves in that direction – it’s not like the RC operator is manually doing every move of every joint for it.

          Yes, it’s not yet going to understand ‘Find Bert and bring them here’ or other such processing, though that isn’t beyond the realm of the possible now – facial matching and natural language processing are really getting good.

          1. > it has to correct for all the little variances the real world has

            When the routine is expertly planned, the envelope for what variances the robot CAN compensate for can be arbitrarily narrow. You make sure you aren’t accumulating the errors, and then if by random chance the robot does fail, you just reset and re-start until you get through the whole routine without falling down.

            > it’s not like the RC operator is manually doing every move of every joint for it.

            That’s just basic stuff though. In the Spot robot case, the operator was directly radio-controlling the robot to grab a door handle and open the door for a person. Of course they didn’t do it joint-by-joint but through inverse kinematics and the walking routines, but the point is that it was still a mechanical turk, because the person was in the control loop.

            Even when the robot is doing “its own thing”, the person controlling it knows the routine and can add to the robot’s programming by their own experience – like an expert excavator driver who knows the dynamic behavior of their machine very well, and makes the whole operation look like ballet.

        2. >you’re about twenty years behind the times on how these systems work.

          It’s rather that the systems haven’t really changed in the last 20 years, they’ve just gotten more powerful. Google/Waymo still hasn’t got the AI to make the car drive itself, so they’re still relying on environmental mapping, outside processing, and then uploading the cleaned up and annotated model back to the car so it can use the map as a reference to fix its exact location AND identify objects from the background. Of course it does not rely solely on this system anymore (it never did), but the 3D lidar map is the car’s “ground truth” that makes it go.

          >Dynamic non-linear systems like this have that pesky “sensitive dependence on initial conditions” problem

          Hence why you chop the routine up into pieces, where after every short piece you have a small window to correct the posture to a known state to remove the deviation. I already explained this. That’s just one way of doing it. Another one is filming the dance moves in many small cuts and then joining them into what appears to be a seamless dance after the fact, leaving all the bits where the robot fell down in the recycling bin.

      4. Personally, I think the balancing and ranges of motion are the impressive part. I know that the robots aren’t dancing by themselves, I know every move is pre-planned and programmed, and I know that this shoot probably took many, many takes which probably included a lot of expensive plastic hitting the ground.

        I still find it impressive and fun to watch. This isn’t a serious demonstration of their capabilities, it’s a funny video.

  1. As Softbank sells Boston Dynamics to Hyundai, the OG robots are dancing while they can. Unfortunately the company seems to be absolutely brilliant at creating solutions for everything but their own mis-estimations of markets for products.

  2. “Still, it’s an incredible tribute to the state of the art in robotics”

    Nope, doesn’t impress me. We’ve had a history of dancing automata stretching back 400+ years. I see programmed movement, don’t care if it’s programmed on a cam, punched tape or flash memory, doesn’t impress me. Nice flat floor too. I know BD robots can do a lot more things, but this is just bare freaking minimum move how you tell it stuff.

    Now what would be impressive is if they could dodge tires you were rolling at them while dancing, or dance while playing frisbee, anything that indicates capability to react to situations while accomplishing other tasks.

      1. But we infer that from other material. They could be perfectly compensated by precisely calibrated analog feedback mechanisms, that work on a nice ideal flat floor environment such as that. Like the movie ppl say, “You can’t see the money on the screen.”

        1. I don’t see the cheat. If you develop a control system that can handle this level of balance, that should be applicable to ANY motion expected of it, so the product benefits in all applications.

      2. I very much doubt that.

        For one big reason: you have to calculate the “high level motions” in advance anyways to make sure the robot can even perform them. If the motions are outside of its dynamic envelope, it will fail to execute them. In other words, even if the robot could calculate the motions in real time, someone must have computed a model of the robot to design a motion that the robot CAN perform in real time – which is still just showboating.

        It’s kinda like showing off how well your 3D printer works by using a deliberately massaged G-code file that takes into account the wobble in your Z axis and adds precisely calculated deviations to make the print come out straight.

          1. No, it’s not. Any monochromatic printer can print “in color” – just in one color at a time.

            If you were to swap the cartridge and put the same paper through three more times, then claim you have a CMYK color process printer, that would be showboating because the result was obtained by a trick: you had to print a special picture where the precise alignment of the colors don’t matter.

            It may be a fun hack, but when you are selling the printer as something it’s not, that’s when “fun” turns into “fraud”.

            My black and white printer can print ANY color… as long as I correctly spell the name of the color.

            Me (sending text to printer): (print red)

            Printer: red

            Me: (print chartruse)

            Printer: WTF?

            B^)

      1. – It would be interesting to know if that is dynamic solving of a ‘pick up the box’ high-level input command, or someone is just driving it and chasing/picking up the box after it was dropped… Slipping/tripping on hilly, snow-covered terrain on two legs and recovering, though — it has at least some decent dynamic situational response, and that video is from 2017…

    1. Which is like comparing an animation of a rocket booster landing on its tail with the actual thing. Several orders of magnitude difference in difficulty. So if you’re wondering why robots are doing things we “saw” them doing thirty years ago, this.

  3. Impressive work from the first mules and walking pair of legs to the dancing team we’ve seen on video.
    I didn’t like the fat-bottomed ostrich bot, though.
    I don’t fear skynet. Being programmed by humans, the bots most probably will fight with each other before starting to see humans as a threat.

  4. Plenty of party poopers in the comments section, but I was still impressed at the high level of performance not seen before. Despite whatever level of programming was needed to enable the machines to dance, they were able to perform impressive high-speed motions, which stand on their own merit.

    Spot’s performance had an almost CGI look to it.

    There may have been dancing robots before, but none like this. Technology marches on.

    1. The Air Force apparently said yes, according to news in the last week or two.

      So, if you want one, you’re gonna need a couple of those snow speeder drones and some thin but strong wire…

  5. I remember watching with an odd vicarious pride that first slow-motion clip of the first time Asimo had both feet off the ground at the same time- which, technically, was a run… despite the mere 2.4” stride (more like 8”, but whatevs)…

    Now the humanoid bipedal has better dance moves than I do (really not saying a lot), and the dog legit scares me (which is odd — I don’t run. Last time I was concerned was when I was digging a post hole and found myself between a mama Clydesdale and her baby). …I’d be less unsure about the ostrich if they just went ahead and mounted the Hacksmith’s protosabre in place of that Plumbus hanging from the arm…
