Robots of the entertainment industry are given life by character animation, where the goal is to emotionally connect with the audience to tell a story. In comparison, real-world robot movement design focuses more on managing physical limitations like sensor accuracy and power management. Tools for robot control thus tend to resemble engineering control consoles rather than artistic character animation tools. When the goal is to build expressive physical robots, we need tools like the ROBiTS project to bridge the two worlds.
As an exhibitor at Maker Faire Bay Area 2019, this group showed off their first demo: a plugin to Autodesk Maya that translates joint movements into the digital pulses controlling standard RC servos. Maya can import the same STL files fed to 3D printers, making it easy to create a digital representation of a robot. Animators skilled in Maya can then use all the tools they are familiar with, working in full context of a robot’s structure in the digital world. This is a far more productive workflow for animation artists than manipulating a long, flat list of unintuitive slider controls or writing motion code by hand.
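The essence of such a translation step is small: map a joint angle from the animation environment onto the pulse width that drives a standard RC servo. Here is a minimal sketch, assuming a typical 1000–2000 µs pulse range over a ±90° joint (both figures are illustrative assumptions, not taken from the ROBiTS plugin):

```python
def angle_to_pulse_us(angle_deg, min_angle=-90.0, max_angle=90.0,
                      min_pulse=1000, max_pulse=2000):
    """Map a joint angle (degrees) to an RC servo pulse width (microseconds)."""
    # Clamp to the servo's mechanical range so the animator can't
    # command a pose the hardware can't reach.
    angle = max(min_angle, min(max_angle, angle_deg))
    fraction = (angle - min_angle) / (max_angle - min_angle)
    return round(min_pulse + fraction * (max_pulse - min_pulse))

print(angle_to_pulse_us(0))    # center -> 1500
print(angle_to_pulse_us(90))   # full deflection -> 2000
```

In a real plugin this function would be called once per servo on every frame, with the results streamed out over USB or serial to the servo driver.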
Of course, a virtual world offers some freedoms that are not available in the physical world. Real parts are not allowed to intersect, for one, and then there are other pesky physical limitations like momentum and center of gravity. Forgetting to account for them results in a robot that falls over! One of the follow-up projects on their to-do list is a bridge in the other direction: bringing physical-world sensors like an IMU into the digital representation in Maya.
From the banks of levers and steam gauges of 1927’s Metropolis to the multicolored jewels that the crew would knowingly tap on in the original Star Trek, the entertainment industry has always struggled with producing imagery of advanced technology. Whether constrained by budget or imagination, portrayals usually go in one of two directions: they either rely too heavily on contemporary technology, or else they go so far in the opposite direction that it borders on comical.
But it doesn’t always have to be that way. In fact, when technology is shown properly in film it often serves as inspiration for engineers. The portrayal of facial recognition and gesture control in Minority Report was so well done that it’s still referenced today, nearly 20 years after the film’s release. For all its faults, Star Trek is responsible for a number of “life imitating art” creations, such as early mobile phones bearing an unmistakable resemblance to the flip communicators issued to Starfleet personnel.
So when I saw the exceptional use of 3D printing in the Netflix reboot of Lost in Space, I felt it was something that needed to be pointed out. From the way the crew made use of printed parts to the printer’s control interface, everything felt very real. It took existing technology and pushed it forward in a way that was impressive while still being believable. It was the kind of portrayal of technology that modern tech-savvy audiences deserve.
It left such an impression that we decided to reach out to Seth Molson, the artist behind the user interfaces from Lost in Space, and try to gain a little insight from somebody who is fighting the good fight for technology in media: to learn how he creates his interfaces, the pitfalls he navigates, and how the expectations of the viewer have changed now that we all have a touch screen supercomputer in our pocket.
Computer animation is a task both delicate and tedious, requiring the manipulation of a computer model into a series of poses over time saved as keyframes, further refined by adjusting how the computer interpolates between each frame. You need a rig (a kind of digital skeleton) to accurately control that model, and researcher [Alec Jacobson] and his team have developed a hands-on alternative to pushing pixels around.
The skeletal systems of computer animated characters consist of kinematic chains—joints that sprout from a root node out to the smallest extremity. Manipulating those joints usually requires the addition of easy-to-select control curves, which simplify the way joints rotate down the chain. Control curves do some behind-the-curtain math that allows the animator to move a character by grabbing a natural end-node, such as a hand or a foot. Lifting a character’s foot to place it on a chair requires manipulating one control curve: grab foot control, move foot. Without these curves, an animator’s work is usually tripled: she has to first rotate the joint where the leg meets the hip, sticking the leg straight out, then rotate the knee back down, then rotate the ankle. A nightmare.
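That behind-the-curtain math is inverse kinematics. As a rough illustration (not the code of any particular animation package), here is the classic two-link planar solution for placing a foot: given a target point, it recovers the hip and knee angles via the law of cosines. Link lengths and the coordinate convention are assumptions for the sketch:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Solve hip and knee angles so the foot of a two-segment leg
    (thigh length l1, shin length l2) reaches the point (x, y)."""
    d2 = x * x + y * y
    if math.sqrt(d2) > l1 + l2:
        raise ValueError("target out of reach")
    # Law of cosines gives the knee bend directly.
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle: direction to the target, minus the offset
    # introduced by the bent knee.
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee
```

With the curve, the animator drags one point; without it, she would be hand-tuning `hip` and `knee` (and the ankle) on every keyframe.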
[Alec] and his team’s unique alternative is a system of interchangeable, 3D-printed mechanical pieces used to drive an on-screen character. The effect is that of digital puppetry, but with an eye toward precision. Their device consists of a central controller, joints, splitters, extensions, and endcaps. Joints connected to the controller appear in the 3D environment in real-time as they are assembled, and differences between the real-world rig and the model’s proportions can be adjusted in the software or through plastic extension pieces.
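Driving the on-screen character from the physical rig is the forward problem: each measured joint rotation is accumulated down the chain to place every node of the model. A minimal planar sketch of that accumulation (the real device measures three axes per joint, and these segment lengths are purely illustrative):

```python
import math

def chain_positions(segment_lengths, joint_angles):
    """Walk a kinematic chain from the root, accumulating each joint's
    measured rotation (radians) to get every node's 2D position."""
    x = y = 0.0
    heading = 0.0          # running sum of rotations down the chain
    points = [(x, y)]      # root node
    for length, angle in zip(segment_lengths, joint_angles):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points
```

Proportion mismatches between rig and model, as the article notes, can then be handled by scaling `segment_lengths` in software rather than swapping plastic extension pieces.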
The plastic joints rotate on all three axes (X, Y, Z), and record measurements via embedded Hall sensors and permanent magnets. Check out the accompanying article here (PDF) for specifics on the articulation device, then hang around after the break for a demonstration video.
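The Hall sensor and magnet pair effectively forms a contactless rotary encoder: the sensed field varies with joint rotation, and the reading is mapped back to an angle per axis. A hedged sketch of that last step, assuming a 12-bit ADC and a linearized sensor response (both illustrative assumptions, not specifics from the paper):

```python
def hall_to_angle(raw, raw_min=0, raw_max=4095,
                  min_deg=-180.0, max_deg=180.0):
    """Convert a raw ADC reading from a Hall-effect sensor (reading the
    field of the joint's permanent magnet) to a rotation angle in degrees."""
    # Clamp noisy readings to the valid ADC range before scaling.
    raw = max(raw_min, min(raw_max, raw))
    fraction = (raw - raw_min) / (raw_max - raw_min)
    return min_deg + fraction * (max_deg - min_deg)
```

One such conversion per axis, streamed to the host, is all the software needs to mirror the physical pose on screen.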