We have heard bipedal walking referred to as a series of controlled falls, or one continuous fall in which we repeatedly catch ourselves, and it is a long way down at 9.8 m/s². Some of us are more graceful than others, but most grade-schoolers are already more proficient than our most advanced bipedal robots. Legs involve all kinds of tricky joints that bend and twist, and don’t get us started on knees. Folks at Keio University and the University of Tokyo steered toward a robot that doesn’t ride on wheels or treads, walk, or tumble. The Mochibot uses thirty-two telescopic legs to move, and each leg only extends or retracts from the center.
Multi-leg locomotion like this has been done before with tensegrity robots, but in those designs the legs extend only far enough to make the robot tumble in the desired direction. Mochibot doesn’t wait for that controlled fall: it keeps as many downward-facing legs on the ground as possible, retracting the legs in front while the rear legs push it forward. This way the robot is never falling and the motion stays controlled, but the processing load is higher since every leg is being meticulously coordinated. Individual control over so many legs also means turns can be more precise, and any direction can become the front. It also keeps the core at a constant height above the ground. We can’t help but think it would look pretty cool stuffed into a giant balloon.
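The leg-scheduling idea can be sketched in a few lines. This is a hypothetical illustration of the principle described above, not Mochibot's actual controller: each telescopic leg points along a fixed direction from the center, and legs aligned with the travel direction retract while legs pointing away extend to push. The function name, extension limits, and alignment mapping are all invented for illustration.

```python
def leg_targets(leg_dirs, travel_dir, min_len=0.5, max_len=1.0):
    """Map each leg's alignment with the travel direction to a
    target extension: legs in front retract, legs behind extend."""
    targets = []
    for d in leg_dirs:
        # Dot product: +1 means the leg points toward travel (retract),
        # -1 means it points away (extend to push the body forward).
        align = sum(a * b for a, b in zip(d, travel_dir))
        # Map alignment in [-1, 1] linearly to [min_len, max_len].
        targets.append(min_len + (max_len - min_len) * (1 - align) / 2)
    return targets

# Toy example: four unit-vector legs in a plane, body moving toward +x.
legs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(leg_targets(legs, (1, 0)))  # → [0.5, 1.0, 0.75, 0.75]
```

The real robot does this in three dimensions with thirty-two legs while also tracking which legs currently face the ground, but the front-retracts, rear-extends mapping is the core of the motion.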
Some people already know of tensegrity robots from NASA, but they may not know about the toolkit NASA published for it. Okay, seriously, how did knees pass the test of evolution? I guess they work for this jumping robot.
Continue reading “Robot Never Misses Leg Day”
In a recent paper in Bioinspiration & Biomimetics, researchers at Florida Atlantic University describe the process of building and testing five free-swimming soft robotic jellyfish. The paper contains build details and data on how three different variables – tentacle stiffness, stroke frequency, and stroke amplitude – affect the swimming characteristics of each bot. For a more in-depth build log, we found the original master’s thesis by Jennifer Frame to be very thorough, including processes, schematics, parts lists, and even some Arduino code.
Though a landlubber may say the robots look more like a stumpy octopus than a jellyfish, according to the paper the shape is actually most similar to a juvenile “ephyra stage” moon jellyfish, with 8 short tentacles radiating from a central body. The flexible tentacles are made of a silicone rubber material from Smooth-On, and were cast in 3D-printed molds. Inside the waterproof main body is a Teensy 3.2 microcontroller, some flash memory, a nine-axis IMU, a temperature sensor, and a 9 V battery.
There are two flexible resistors embedded in the body to measure tentacle flex, and the actual flexing is done by pumping seawater through open-circuit hydraulic channels cast into the tentacles. Two 3 V mini pumps are sufficient for pumping, and the open-circuit design means that when the pumps turn off, the tentacles bleed off any remaining pressure and quickly snap back to their “neutral” position without the use of complicated valves.
Another simple feature is a pair of Hall effect sensors mounted in the body to enable waterproof “wireless communication” with the microcontroller. The wireless protocol of choice: manually waving magnets over the sensors to switch the robot between a few predefined operating modes.
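The magnet-wave “protocol” amounts to a tiny state machine. Here is a minimal sketch of that idea, assuming one sensor steps forward through a mode list and the other steps back; the mode names, sensor roles, and class structure are all invented for illustration, and the thesis’s actual Arduino code will differ.

```python
# Hypothetical mode switcher driven by Hall effect sensor pulses.
# Waving a magnet over sensor "A" advances the mode; sensor "B" goes back.
MODES = ["idle", "swim_slow", "swim_fast", "log_only"]

class ModeSwitcher:
    def __init__(self):
        self.index = 0  # start in the first mode ("idle")

    def on_hall_pulse(self, sensor):
        """Called whenever a sensor detects a magnet pass; returns
        the newly active mode name."""
        if sensor == "A":
            self.index = (self.index + 1) % len(MODES)
        elif sensor == "B":
            self.index = (self.index - 1) % len(MODES)
        return MODES[self.index]

sw = ModeSwitcher()
print(sw.on_hall_pulse("A"))  # → swim_slow
print(sw.on_hall_pulse("A"))  # → swim_fast
print(sw.on_hall_pulse("B"))  # → swim_slow
```

The appeal of the design is that it needs no radio, no waterproof connectors, and no buttons piercing the hull: the magnetic field passes straight through the sealed body.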
There’s a soothing, atmospheric video after the break, where you can see the robots in action off the coast of Florida.
Continue reading “Soft Robotic Jellyfish Get Pumped in the Atlantic”
Modeling machines off of biological patterns is the dry definition of biomimicry. For most people, this means the structure of robots and how they move, but Christine Sunu makes the argument that we should be thinking a lot more about how biomimicry has the power to make us feel something. Her talk at the 2017 Hackaday Superconference looks at what makes robots more than cold metal automatons. There is great power in designing to complement natural emotional reactions in humans — to make machines that feel alive.
We live in a world that is being filled with robots and increasingly these are breaking out of the confines of industrial automation to take a place side by side with humans. The key to making this work is to make robots that are recognizable as machines, yet intuitively accepted as being lifelike. It’s the buy-in that these robots are more than appliances, and Christine has boiled down the keys to unlocking these emotional reactions.
Continue reading “Christine Sunu Proves the Effect of Being Alive on Hardware Design”