It seems like modern roboticists have decided to have a competition to see which group can develop the most terrifying robot ever invented. As of this writing the leading candidate seems to be the robot that can fuel itself by “eating” organic matter. We can only hope that the engineers involved will decide not to flesh that one out completely. Anyway, if we can get past the horrifying and/or uncanny valley-type situations we find ourselves in when looking at these robots, it turns out they have a lot to teach us about the theory behind some very complicated electric motors.
This research paper (gigantic PDF warning) focuses on the construction methods behind MIT’s cheetah robot. It has twelve degrees of freedom and uses a number of exceptionally low-cost modular actuators to control its four legs. Compared to other robots of this type, this helps it clear the major hurdle of cost while still retaining an impressive amount of mobility and control. They were able to integrate a brushless motor, a smart ESC system with feedback, and a planetary gearbox into each actuator itself. That alone is worth the price of admission!
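The thesis goes into far more detail, but the heart of the “smart” part is that each actuator closes its own feedback loop around the gearbox output. A common formulation for that kind of loop is an impedance-style control law; here is a minimal sketch in our own notation, with made-up gains and numbers rather than values from the paper:

```python
# Sketch of an impedance-style control law of the kind used by integrated
# "smart" actuators: the controller measures joint position and velocity
# and computes a torque command. Gains and values here are placeholders.
def joint_torque(p_des, v_des, p_meas, v_meas, tau_ff=0.0, kp=20.0, kd=0.5):
    """Torque = stiffness term + damping term + feed-forward torque."""
    return kp * (p_des - p_meas) + kd * (v_des - v_meas) + tau_ff

# Example: hold a leg joint at 0.3 rad while it currently sits at 0.25 rad
print(joint_torque(p_des=0.3, v_des=0.0, p_meas=0.25, v_meas=0.1))
```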
The details on how they did it are well-documented in the 102-page academic document, and the source code is available on GitHub if you need an actuator like this for another project. If you’re here just for the cheetah doing backflips, you can also keep up with the build progress at the project’s blog page. We featured this build earlier in its history as well.
A great many robots exist in our modern world, and the vast majority of them are highly specialized machines. They do a job, and they do it well, but they don’t have much of a personality. [Guilherme Martins] was working on a fun project to build a robot arm that could create chocolate artworks, but it needed something to humanize it a bit more. Thankfully, Jibo was there to lend a hand.
For the uninitiated, Jibo was a companion robot produced by a startup company that later folded. Relying on the cloud meant that when the money ran out and the servers switched off, Jibo was essentially dead. [Guilherme] managed to salvage one of these units, however, and gave it a new life.
With the dead company unable to provide an SDK, the entire brains of the robot were replaced with a LattePanda, which is a Windows 10 single-board computer with an integrated Arduino microcontroller. This was combined with a series of Phidgets motor drivers to control all of Jibo’s joints, and with some Unity software to provide the charming expressions on the original screen.
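There’s no word on exactly which Phidgets boards are inside, but for a sense of what commanding a joint through one looks like, here is a minimal sketch using the Phidget22 Python library’s RC servo class. The channel, the target angle, and even the assumption of an RC-servo-style controller are ours; [Guilherme]’s setup may drive the motors quite differently:

```python
# Rough sketch: command one joint through a Phidget RC servo controller
# using the Phidget22 Python library. Channel number and target angle are
# placeholders, not values from the actual Jibo-arm build.
from Phidget22.Devices.RCServo import RCServo

joint = RCServo()
joint.setChannel(0)                  # which output on the Phidget board
joint.openWaitForAttachment(5000)    # wait up to 5 s for the hardware
joint.setTargetPosition(90)          # degrees
joint.setEngaged(True)               # start driving the servo
```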
With the Jibo body mounted upon the robot arm, a simple chocolate-decorating robot now has a personality. The robot can wave to humans, and emote as it goes about its day. It’s an interesting feature to add to a project, and one that certainly makes it more fun. We’ve seen projects tackle similar subject matter before, attempting to build friendly robot pets as companions. Video after the break.
Reinforcement learning is a subset of machine learning where the machine is scored on its performance by an evaluation function. Over the course of a training session, behavior that improves the final score is positively reinforced, gradually building toward an optimal solution. [Dheera Venkatraman] thought it would be fun to use reinforcement learning to make a little robot lamp move. But before that can happen, he had to build the hardware and prove its basic functionality with a manual test script.
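To make the idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy environment and reward below are placeholders; a hopping lamp would need a physics simulation and a far richer learning setup:

```python
# Minimal tabular Q-learning sketch. The "environment" here is a toy
# placeholder; the point is the update rule that reinforces actions
# which lead to higher scores.
import random

n_states, n_actions = 10, 4
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Toy environment: returns (next_state, reward)."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    for _ in range(50):
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Nudge Q toward the reward plus the discounted value of the next state.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
```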
The project takes its inspiration from the hopping lamp of the Pixar Animation Studios logo, and this particular form of locomotion has a few counterparts in the natural world. But hoppers of the natural world don’t take the shape of a Luxo lamp, making this project an interesting challenge. [Dheera] published all of his OpenSCAD files for this 3D-printed lamp so others could join in the fun. Inside the lamp head is an LED ring to illuminate where we expect a light bulb, while also leaving room in the center for a camera. The articulation servos are driven by a PCA9685 I2C PWM driver board, and he has written and released code to interface such boards with the Robot Operating System (ROS), which orchestrates the lamp’s movements. Together, these form the hardware and software foundations for this robot lamp.
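His released ROS driver is the place to look for the real interface, but driving a PCA9685 directly is simple enough to sketch with Adafruit’s ServoKit library. The channel number and angles below are arbitrary, and the lamp’s actual joint assignments will differ:

```python
# Minimal sketch: sweep one servo on a PCA9685 16-channel PWM board using
# Adafruit's ServoKit library (pip install adafruit-circuitpython-servokit).
# Channel and angles are arbitrary examples, not the lamp's real joint map.
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)        # the PCA9685 exposes 16 PWM channels

for angle in (30, 90, 150, 90):    # a crude "nod" sequence
    kit.servo[0].angle = angle     # channel 0 stands in for one lamp joint
    time.sleep(0.5)
```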
Once all the parts had been printed, the electronics wired, and everything assembled, [Dheera] hacked together a simple “Hello World” script to verify his mechanical design was good enough to get started. The video embedded after the break was taken at OSH Park’s Bring-A-Hack afterparty following Maker Faire Bay Area 2019. The motion sequence was frantically hand-coded in 15 minutes, but these tentative baby hops will serve as a great baseline. The future hopping performance of control algorithms trained by reinforcement learning will show how far this lamp has come from its humble “Hello World” hop.
[Dheera] had previously created the shadow clock and is no stranger to ROS, having created the ROS topic text visualization tool for debugging. We will be watching to see how the robot Luxo evolves; hopefully it doesn’t find a way to cheat! Want to play with reinforcement learning, but prefer wheeled robots? Here are a few options.
At first, we thought this robot was like a rabbit until we realized rabbits have a 300% bonus in the leg department. SALTO, a robot from [Justin Yim], [Eric Wang], and [Ronald Fearing], has only one leg but gets around quite well hopping from place to place. If you can’t picture it, the video below will make it very obvious.
According to the paper about SALTO, existing hopping robots require external sensors and are often tethered. SALTO is self-contained. The robot weighs a tenth of a kilogram and takes its name from the word saltatorial (adapted for leaping), which itself comes from the Latin saltare, meaning to jump or leap.
Robots of the entertainment industry are given life by character animation, where the goal is to emotionally connect with the audience to tell a story. In comparison, real-world robot movement design focuses more on managing physical limitations like sensor accuracy and power consumption. Tools for robot control are thus more likely to resemble engineering control consoles than artistic character animation tools. When the goal is to build expressive physical robots, we’ll need tools like the ROBiTS project to bridge the two worlds.
As an exhibitor at Maker Faire Bay Area 2019, this group showed off their first demo: a plugin for Autodesk Maya that translates joint movements into the digital pulses that control standard RC servos. Maya can import the same STL files fed to 3D printers, easily creating a digital representation of a robot. Animators skilled in Maya can then use all the tools they are familiar with, working in the full context of a robot’s structure in the digital world. This is a far more productive workflow for animation artists than manipulating a long, flat list of unintuitive slider controls or writing code by hand.
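We haven’t dug into the plugin’s internals, but the core idea, reading a joint angle out of Maya and converting it to a servo pulse width, can be sketched in a few lines of Maya Python. The joint name, axis, and 500–2500 µs pulse range here are our assumptions, not necessarily what ROBiTS does:

```python
# Sketch for Maya's script editor: read a joint's rotation and map it to an
# RC servo pulse width. Joint name, axis, and pulse range are assumptions.
import maya.cmds as cmds

def joint_to_pulse_us(joint="shoulder_joint", axis="rotateZ",
                      min_deg=-90.0, max_deg=90.0,
                      min_us=500, max_us=2500):
    angle = cmds.getAttr(joint + "." + axis)       # rotation in degrees
    angle = max(min_deg, min(max_deg, angle))      # clamp to servo travel
    frac = (angle - min_deg) / (max_deg - min_deg)
    return int(min_us + frac * (max_us - min_us))  # pulse width for the servo
```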
Of course, a virtual world offers some freedoms that are not available in the physical world. Real parts are not allowed to intersect, for one, and then there are other pesky physical limitations like momentum and center of gravity. Forgetting to account for them results in a robot that falls over! One of the follow-up projects on their to-do list is a bridge in the other direction: bringing physical-world sensors like an IMU into the digital representation in Maya.
With the wide array of digital entertainment that’s available to young students, it can be difficult for educators to capture their imagination. In decades past, a “volcano” made with baking soda and vinegar would’ve been enough to put a class of 5th graders on the edge of their seats, but those projects don’t pack quite the same punch for students who may have prefaced their school day with a battle royale match. Today’s educators are tasked with inspiring kids who already have the world at their fingertips.
The electronics for the bot consist primarily of an Arduino Uno with a sensor shield, a dual H-bridge motor controller, and a wireless receiver for a PS2 controller. This allows the students to control the bot’s dual drive motors with an input scheme that’s likely very familiar to them already. By mapping the controller’s face buttons to digital pins on the Arduino, additional functions, such as the spinner seen on the bot after the break, can easily be activated.
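The kit’s firmware lives on the Arduino, but the drive mixing itself is simple enough to sketch in a few lines. This is just an illustration of the usual “arcade drive” math in Python, not the kit’s actual code, and the scaling and signs are assumptions:

```python
# Sketch of "arcade drive" mixing: a single stick's forward/back and
# left/right values become speed commands for the two drive motors on the
# H-bridge. Inputs and outputs range from -1.0 (full reverse) to 1.0.
def arcade_drive(throttle, turn):
    left = throttle + turn
    right = throttle - turn
    clamp = lambda x: max(-1.0, min(1.0, x))   # never command past full speed
    return clamp(left), clamp(right)

print(arcade_drive(0.8, 0.3))   # forward with a gentle turn
```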
[Misty] has already done some test runs with an early version of the kit, and so far it’s been a huge success. Students were free to design their own bodies and add-ons for the remote-controlled platform, and it’s fascinating to see how unique the final results turned out to be. We’ve seen in the past how excited students can be when tasked with customizing their own robots, so any entry into that field is a positive development in our book.
I spent a good chunk of Saturday afternoon hanging out at the Homebrew Robotics Club booth at Maker Faire Bay Area. They have a ton of really interesting robot builds on display, and I just loved hearing about what went into these two in particular.
It’s obvious where BugBot gets its name. The six-legged walker is the creation of [Mark Johnston], who built the beast at a time when components for robots were much harder to come by. Each leg is driven by a very thin strand of muscle wire, which contracts when current is run through it. One of the really tricky parts of the build was finding a way to attach this wire. The thin wire is nearly impossible to solder; attempts usually just melt right through it. His technique is to wrap the wire around the leg itself, then slide a small bit of brass tubing over it and make a crimp connection.
PIC microcontroller and muscle wire connections visible in this closeup
The underside of BugBot is impressive too! Two hoops normally hold the battery, which is not shown here
At the heart of the little bug is a PIC microcontroller that is point-to-point soldered to the rest of the components. This only caused real problems once, when [Mark] somehow bricked the chip and had to replace it. Look closely and you’ll see there are a lot of fiddly bits to work around to pull that off. As I said, robot building was more difficult before the explosion of components and breakout modules hit the scene. The wireless control components on this one were actually salvaged from children’s RC toys. They’re not great by any stretch of the imagination, but they were the best source at the time and they work! You can find a demo of the robot embedded after the jump.
Ralph Campbell (left) and Mark Johnston (right)
An android robot was also on display, but of course, I was most interested in seeing what was beneath the skin. In the image above you can see the mask sitting to the left of the “Pat” skeleton. [Ralph Campbell] has been working on this build, and plans to incorporate interactive features like facial recognition and gesture recognition to direct the gaze of the robot.
Overview of “Pat” without skin
Hoops are coat hangers soldered together
Inside each of the ping pong ball eyes is a Raspberry Pi camera (actually the Adafruit Spy Camera, chosen for its small board size). [Ralph] has a separate facial recognition demonstration that he’s in the process of incorporating. But for me, it’s the mechanical design of the bot that I find fascinating.
The structure of the skull is made from coat hangers lashed and soldered together with magnet wire. The eyes move thanks to a clever frame made out of paper clips. The servos to the side of each eye move the gaze up and down, while a servo beneath the eye takes care of left and right. A wooden match stick performs double duty: it keeps the camera in place as the pupil of the eye, and it allows the eye to pivot along the paperclip track of the vertical actuator. It’s as simple as it can be and I find it quite clever!
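Once the cameras and facial recognition come together, the obvious trick is steering those eye servos toward a detected face. Here is a rough sketch of how that could work with OpenCV’s stock Haar cascade; the camera index, gaze range, and mapping are our assumptions, not [Ralph]’s code:

```python
# Rough sketch: find a face with OpenCV's bundled Haar cascade, then map its
# position in the frame to pan/tilt angles for the eye servos. Camera index,
# angle range, and the servo interface itself are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Map the face center across the frame onto roughly +/-30 degrees of gaze.
        pan = ((x + w / 2.0) / frame.shape[1] - 0.5) * 60
        tilt = ((y + h / 2.0) / frame.shape[0] - 0.5) * 60
        print("aim eyes at pan=%.1f, tilt=%.1f degrees" % (pan, tilt))
cap.release()
```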