Muscle Wire BugBot And A Raspberry Pi Android With Its Eye On You At Maker Faire

I spent a good chunk of Saturday afternoon hanging out at the Homebrew Robotics Club booth at Maker Faire Bay Area. They have a ton of really interesting robot builds on display, and I just loved hearing about what went into these two in particular.

It’s obvious where BugBot gets its name. The six-legged walker is the creation of [Mark Johnston], who built the beast in a time when components for robots were much harder to come by. Each leg is driven by a very thin strand of muscle wire, which contracts when current is run through it and heats it up. One of the really tricky parts of the build was finding a way to attach this wire: the alloy doesn’t take solder, and trying to solder it usually ends up melting right through the thin strand. His technique is to wrap the wire around the leg itself, then slide a small bit of brass tubing over it and make a crimp connection.

At the heart of the little bug is a PIC microcontroller that is point-to-point soldered to the rest of the components. This only caused real problems once, when [Mark] somehow bricked the chip and had to replace it. Look closely and you’ll see there are a lot of fiddly bits to work around to pull that off. As I said, robot building was more difficult before the explosion of components and breakout modules hit the scene. The wireless control components on this one were actually salvaged out of children’s RC toys. They’re not great by any stretch of the imagination, but they were the best source at the time and they work! You can find a demo of the robot embedded after the jump.

Ralph Campbell (left) and Mark Johnston (right)

An android robot was on display, but of course, I was most interested in seeing what was beneath the skin. In the image above you can see the mask sitting to the left of the “Pat” skeleton. [Ralph Campbell] has been working on this build, and plans to incorporate interactive features like facial and gesture recognition to direct the robot’s gaze.

Inside each of the ping pong ball eyes is a Raspberry Pi camera (actually the Adafruit Spy Camera, chosen for its small board size). [Ralph] has a separate demonstration for facial recognition that he’s in the process of incorporating. But for me, it’s the mechanical design of the bot that I find most fascinating.

The structure of the skull is coat hanger wire, lashed and soldered together using magnet wire. The eyes move thanks to a clever frame made out of paper clips. The servos to the side of each eye move the gaze up and down, while a servo beneath the eye takes care of left and right. A wooden matchstick performs double duty: keeping the camera in place as the pupil of the eye, and allowing it to pivot along the paperclip track of the vertical actuator. It’s as simple as it can be, and I find it quite clever!

Continue reading “Muscle Wire BugBot And A Raspberry Pi Android With Its Eye On You At Maker Faire”

Teardown Video: What’s Inside The Self-Solving Rubik’s Cube Robot

You can find all kinds of robots at Bay Area Maker Faire, but far and away the most interesting bot this year is the Self-Solving Rubik’s Cube built by [Takashi Kaburagi]. Gently mix up the colored sides of the cube, set it down for just a moment, and it will spring to life, sorting itself out again.

I arrived at [Takashi’s] booth at just the right moment: as the battery died. You can see the video I recorded of the battery swap process embedded below. The center tile on the white face of the cube is held on magnetically. Once removed, a single captive screw (nice touch!) is loosened to lift off the top side. From there a couple of lower corners are lifted out to expose the tiny lithium cell and the wire connector that links it to the robot.

Regular readers will remember seeing this robot when we featured it in September. We had trouble learning details about the project at the time, but since then [Takashi] has shared much more about what went into it. The build goes back to 2017, starting with a much larger 3D-printed version of the cube. With that proof of concept in hand, the design was modeled in CAD to ensure everything had a carefully planned place. The result is a hand-wired robotic core that feels like science fiction but is very, very real.

I love seeing all of the amazing robots on the grounds of the San Mateo County Event Center this weekend. There is a giant mech wandering the parking lot at the Faire. There’s a whole booth of heavy-metal quadruped bots the size of dogs. And if you’re not careful where you walk, you’ll step on a scaled-down Mars rover. These are all incredible, out-of-this-world builds and I love them. But the mental leap of moving traditional cube-solvers inside the cube itself, and the craftsmanship necessary to succeed, make this the most under-appreciated engineering at this year’s Maker Faire Bay Area. I feel lucky to have caught it during a teardown phase! Let’s take a look.

Continue reading “Teardown Video: What’s Inside The Self-Solving Rubik’s Cube Robot”

The Kalman Filter Exposed

If we are hiring someone such as a carpenter or an auto mechanic, we always look for two things: what kind of tools they have and what they do when things go wrong. For many types of embedded systems, one important tool that serious developers use is the Kalman filter. It is also something you use when things go “wrong.” [Carcano] recently posted a tutorial on Kalman filter equations that tries to demystify the topic. His example, a case of things going wrong, is a robot that knows how far it is supposed to have moved and also has GPS coordinates of its position. Since the two positions probably don’t agree, you can consider that a problem with the system.

The obvious answer is to average the two positions. That’s fine if the error is small, but a Kalman filter is much more robust across many more situations. [Carcano] does a good job of taking you through the math, but we will warn you: there is plenty of math. If you don’t know what a Gaussian distribution is, or the word covariance makes you think of sailboats, you are going to have to do some reading to get through the post.
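
To make the predict/update cycle concrete, here’s a minimal one-dimensional sketch of the robot-with-GPS scenario in Python. The scalar (1D) simplification and all the noise values are our own illustrative assumptions, not numbers from [Carcano]’s post.

```python
import random

# Minimal 1D Kalman filter sketch: fuse the distance a robot thinks it
# moved (prediction) with a noisy GPS fix (measurement). Q and R values
# below are invented for illustration.

def kalman_step(x, P, u, z, Q=0.05, R=4.0):
    """One predict/update cycle.
    x: position estimate, P: its variance, u: commanded movement,
    z: GPS measurement, Q: process noise, R: measurement noise."""
    # Predict: move by u; uncertainty grows by the process noise
    x_pred = x + u
    P_pred = P + Q
    # Update: the Kalman gain weights prediction against measurement
    K = P_pred / (P_pred + R)        # approaches 1 when GPS is trusted more
    x = x_pred + K * (z - x_pred)    # nudge the estimate toward the fix
    P = (1 - K) * P_pred             # uncertainty shrinks after the update
    return x, P

# Robot commanded to move 1 m per step; simulated GPS noise sigma = 2 m
x, P = 0.0, 1.0
for step in range(1, 6):
    z = step * 1.0 + random.gauss(0, 2.0)  # simulated GPS reading
    x, P = kalman_step(x, P, u=1.0, z=z)
    print(f"step {step}: estimate {x:.2f} m (variance {P:.3f})")
```

Note that when the prediction and the measurement are equally uncertain, the gain works out to 0.5 and the update collapses to the simple average mentioned above; the filter earns its keep by re-weighting the two sources as their uncertainties change.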

Continue reading “The Kalman Filter Exposed”

Nvidia Teaching Robots To Master IKEA Kitchens

The current wave of excitement around machine learning kicked off when graphics processors were repurposed to make training deep neural networks practical. Nvidia found themselves the engine of a new revolution and seized the opportunity to help push the frontiers of research. Their research lab in Seattle will focus on one such frontier: making robots smart enough to work alongside humans in an IKEA kitchen.

Today’s robots are mostly industrial machines that require workspaces designed for robots. They run day and night, performing repetitive tasks, usually inside cages to keep squishy humans out of harm’s way. Robots will need to be a lot smarter about their surroundings before we can safely dismantle those cages. While some industrial robots are making a start in this arena, they have a hard time justifying their price premium (see the financial difficulties of Rethink Robotics, maker of the Baxter and Sawyer robots).

So there’s a lot of room for improvement in this field, and this evolution will need a training environment offering tasks of varying difficulty, anywhere from the rigidly structured environments where robots work well today to the dynamic, unstructured environments where robots are hopelessly lost. Lab lead Dr. Dieter Fox explained how a kitchen is ideal. A meticulously cleaned and organized kitchen is very similar to an industrial setting. From there, we can gradually make the kitchen more challenging for a robot. For example, today’s robots can easily pick up a can, with its rigid, regular shape, but what about a half-full bag of flour? And from there, a robot can learn to pick up a piece of fresh fruit without bruising it. These tasks share challenges with many other tasks outside of a kitchen.

This isn’t about building a must-have home cooking robot; it’s about working through the range of challenges shared with common kitchen tasks. The lab has a lot of neat hardware, but its success will be measured by the software, and like all research, published results should be reproducible by other labs. You don’t have a high-end robotics lab in your house, but you do have a kitchen. That’s why it’s not just any kitchen, but an IKEA kitchen: they are standardized, affordable, and available around the world for other robot researchers to benchmark against.

Most of us can experiment in a kitchen, IKEA or not. We have access to all the other tools we need: affordable AI hardware from Google, from BeagleBone, and from Nvidia. And we certainly have no shortage of robot arms and manipulators on these pages, ranging from a small laser-cut MeArm to our 2018 Hackaday Prize winner Dexter.

Robot Hummingbird Imitates Nature

Purdue’s Bio-Robotics lab has been working on a robotic hummingbird and, as you can see in the videos below, has had a lot of success. What’s more, they’ve shared that success on GitHub. If you want to make a flapping-wing robot, this is definitely where you start.

If you’ve ever watched a hummingbird, you know their flight capability is nothing short of spectacular. The Purdue robot flies in a similar fashion (although on a tether to get both power and control information) and relies on each wing having its own motor. The motors not only propel the wings but also act as sensors. For example, they can detect if a wing is damaged, has made contact with something, or has changed performance due to atmospheric conditions.
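
As a toy illustration of the motors-as-sensors idea, the sketch below flags a wingbeat whose current draw strays from a healthy baseline. The numbers and the simple three-sigma threshold are invented for the example; the real controller infers far more (contact, damage, air conditions) from richer motor feedback.

```python
import statistics

# Toy "motor as sensor" check: compare each wingbeat's current draw
# against a healthy baseline and flag outliers. Values are invented
# for illustration only.

baseline = [0.41, 0.40, 0.42, 0.39, 0.41, 0.40]  # amps, healthy wingbeats
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def wing_ok(current_amps, sigma=3.0):
    # Flag a wingbeat whose current deviates more than 3 sigma from normal
    return abs(current_amps - mean) <= sigma * stdev

print(wing_ok(0.41))  # True: normal beat
print(wing_ok(0.55))  # False: possible damage or contact
```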

In addition to the tethered control system, the hummingbird requires a motion capture sensor external to itself and some machine learning. The researchers note that there is sufficient payload capacity to put batteries onboard, though totally free flight would also require additional sensors. It is amazing when you realize that a real hummingbird manages all this with a little bitty brain.

The published code is in Python and is part of three presentations later this month at a technical conference (the IEEE International Conference on Robotics and Automation). If you don’t want to wait on the papers, there’s a post on IEEE Spectrum about the robotic beast available now, and that article contains preprint versions of the papers. The Python code takes a fair bit of horsepower to run, so expect a significant flight computer.

The last hummingbird bot we saw was a spy. We’ve also seen robots that were like bees — sort of.

Continue reading “Robot Hummingbird Imitates Nature”

New Part Day: Lynxmotion Smart Servos

Anyone who shops for robotics kits has probably come across a few designed by Lynxmotion. They’ve been helping people build robots since 1995, from robot arm kits to hexapod chassis and everything in between. We would expect these people to know their motors, so when they launched their own line of servo motors called Lynxmotion Smart Servos (LSS), it was worth spending a bit of time looking over what they offer.

While these new devices have a PWM mode compatible with classic remote control servos, unleashing their full power requires bidirectional communication over a serial bus. We’ve previously given an overview of three serial bus servos already on the market for comparison. A quick look at the $68-$100 price tags listed on the site of Lynxmotion’s parent company, RobotShop, made it clear they do not intend to compete on price, so what interesting features do these new kids on the block have?

Digging into the product documentation turned up some great details. Acceleration and deceleration rates are adjustable, which can help with smoother robot movement. There’s also an adjustable level of “stiffness” that adds some “give” (compliance) so a robot won’t have to be as stiff as… well, a robot!

Mechanically, the most interesting internal component is the magnetic position sensor. It is far more precise than a potentiometer, but more importantly, it allows positioning anywhere within a full 360 degrees. Many other serial bus servos are constrained to an arc of less than 360 degrees, leaving a blind spot.

An interesting quirk of the LSS offerings is that the serial communication protocol uses human-readable text characters, so sending the number 255 means transmitting the three-byte string ‘2’, ‘5’, ‘5’ instead of the single byte 0xFF. This would make debugging our custom robot code far easier, at the cost of reduced bandwidth efficiency and the loss of a checksum for detecting communication errors. It’s a trade-off that some robot builders would be happy to make, but others might not.
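
Here’s a short sketch of what that looks like in practice over pyserial. The command syntax shown (a ‘#’ prefix, servo ID, command letter, value in tenths of a degree, carriage return) is our reading of Lynxmotion’s LSS documentation, and the port name and baud rate are assumptions for a typical setup, so check the official protocol reference before relying on it.

```python
import serial  # pyserial

# A hedged sketch of the text-based protocol: the "#<id>D<tenths>\r" move
# command reflects our reading of the LSS docs, and the port/baud values
# are assumptions. Verify against Lynxmotion's protocol reference.
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.1)

def move_degrees(servo_id: int, degrees: float) -> None:
    # The value travels as readable ASCII digits: 90.0 degrees becomes
    # the characters '9', '0', '0' rather than a packed binary word.
    cmd = f"#{servo_id}D{int(degrees * 10)}\r"
    port.write(cmd.encode("ascii"))

move_degrees(1, 90.0)  # easy to eyeball on a serial sniffer or terminal
```

The payoff is exactly the debugging story described above: every byte on the wire is a printable character you can read in any serial terminal.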

Externally, these servos have bountiful mounting options, including some we didn’t know to ask for. Historically, Lynxmotion kits have used a wide variety of servo mounting brackets, so they are motivated to make mechanical integration easy. The most novel offering is the ability to bolt external gears to the servo body: a set of 1:3 gears allows for gearing the servo up or down, or a set of 1:1 gears can be used for a compact gripper.

As you’d expect of servos in this price range, they all have metal gears, and they can also power the motor directly from a battery pack (a 3-cell lithium polymer is recommended). There are additional features, like an RGB LED for visual feedback, which we didn’t cover here, so dig into the documentation for more. We look forward to seeing how these interesting little actuators perform in future robotics projects.

Self-Aware Robotic Arm

If you’ve ever tried to program a robotic arm, or almost any robotic mechanism with more than three degrees of freedom, you know that a big part of the work goes into programming the movements themselves. What if you could build a robot, connect its motors and joints however you like, and have the robot, with no prior knowledge of itself, become aware of the way it is physically built?

That is what Columbia Engineering researchers have done by creating a robot arm that learns how it is put together, with zero prior knowledge of physics, geometry, or motor dynamics. At first, the robot has no idea what its shape is, how its motors work, or how they affect its movement. After one day of trying out its own outputs in a pretty much random fashion and getting feedback on its actions, the robot creates an accurate internal simulation of itself using deep-learning techniques.
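
To give a flavor of the “random babbling plus deep learning” recipe, here’s a toy sketch in Python (PyTorch) where a small network learns to predict a two-joint planar arm’s hand position from its joint angles. The arm, link lengths, and network sizes are all our own illustrative assumptions and are not taken from the actual Columbia model.

```python
import math
import torch
import torch.nn as nn

# Toy "self-modeling" sketch: the robot babbles random joint angles,
# observes where its hand ends up, and trains a network to predict that
# mapping. The 2-joint planar arm below is an invented stand-in for the
# real four-degree-of-freedom hardware.

LINK1, LINK2 = 1.0, 0.8  # assumed link lengths of the toy arm

def observe_hand_position(q):
    # Ground truth the robot doesn't know: end-effector (x, y) from angles
    x = LINK1 * torch.cos(q[:, 0]) + LINK2 * torch.cos(q[:, 0] + q[:, 1])
    y = LINK1 * torch.sin(q[:, 0]) + LINK2 * torch.sin(q[:, 0] + q[:, 1])
    return torch.stack([x, y], dim=1)

# Babbling phase: random joint commands and the observed outcomes
q = (torch.rand(5000, 2) - 0.5) * 2 * math.pi
xy = observe_hand_position(q)

# The self-model: a small network trained to imitate the arm's behavior
model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(q), xy)
    loss.backward()
    optimizer.step()

print(f"self-model prediction error (MSE): {loss.item():.5f}")
```

Once such a model exists, detecting damage amounts to noticing that predictions and observations have started to disagree and retraining on fresh data, which is essentially what the following paragraphs describe.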

The robotic arm used in this study by [Hod Lipson] and his PhD student [Robert Kwiatkowski] is a four-degree-of-freedom articulated robotic arm. The first self-models were inaccurate, as the robot did not know how its joints were connected. After about 35 hours of training, the self-model became consistent with the physical robot to within four centimeters. The robot then performed a pick-and-place task, recalibrating its position between each step along the trajectory based entirely on the internal self-model.

To test whether the robot could detect damage to itself, the researchers 3D-printed a deformed part, and the robot was able to detect the change and retrain its self-model. The new self-model enabled the robot to resume its pick-and-place tasks with little loss of performance.

Since the internal representation is not static, this not only helps the robot improve its performance over time but also allows it to adapt to damage and changes in its own structure. This could help robots continue to function reliably as their parts start to wear out or, for example, when replacement parts are not exactly the same size or shape.

Of course, it will be a long time before this arm reaches precision anywhere near that of Dexter, the 2018 Hackaday Prize winner, but it is still pretty cool to see the video of this research: