The video in question was of [The 8-bit Guy] doing a small restoration of a 1984 Radio Shack Armatron toy. Expecting a mess of wiring, we were surprised to discover that the internals of the arm are entirely mechanical, driven by a single electric motor. Perhaps motors were more expensive back then?
The arm is driven by a Sarlacc Pit of planetary gears, which are in turn driven by a clever synchronized transmission. It's very, very cool. We, admittedly, fell down the Google rabbit hole; there are some great pictures of the internals here. Whoever designed this was very clever.
The robot arm can do full 360° rotations at every joint that supports it, without slip rings. The copper shafts were also interesting; the whole mechanism is a sort of history lesson on the prices of metals and components at the time.
Regardless, the single-motor drive is what attracted [crabfu], a full ten years ago, to attach a steam engine to the device. A quick cut through the side of the case, a tiny chain drive, and a Jensen steam engine were all it took to convert the toy over. Potato-quality video after the break.
Robots are increasingly seeing the world outside of laboratories and factories, and most of us think we would be able to spot one relatively quickly. What if you walked past one on the street — would you recognize it for what it was? How long would it take for you to realize that homeless organ grinder was a robot?
The brainchild of [Fred Ables], Dirk the homeless robot will meander through a crowd, nodding at passers-by and occasionally, with a tilt of his hand, asking for change, churning out a few notes on his organ for those who oblige him. [Ables] controls Dirk's interactions remotely from nearby, blending into the crowds that flock to see the lifelike automaton and selling the illusion that Dirk is a real human. The illusion often holds because, as with most homeless people, pedestrians won't spare Dirk a second glance; those who do look closer react with anything from confusion to anger or mirth at being so completely duped, then start scanning the crowd for the puppeteer.
For her Hackaday Prize entry, [ThunderSqueak] is building an artificial intelligence. P.A.L., the Self-Programming AI Robot, builds on the kind of intelligence displayed by Amazon's Alexa, Apple's Siri, and whatever the Google thing is called, aiming for a robot that can learn from its environment, track objects, judge distances, and perform simple tasks.
As with any robotic intelligence, the first question that comes to mind is, 'What does it look like?' The answer here is, 'a little bit like Johnny Five.' [ThunderSqueak] has designed a robotic chassis that uses treads for locomotion and a head that can emote by moving its eyebrows. Those treads are not a trivial engineering task (the tracks are 3D printed and bolted onto a chain), and building them has been a very, very annoying part of the build.
But an advanced, intelligent robot isn't judged by how it moves. The real trick here is the software, and for this [ThunderSqueak] has a few tricks up her sleeve. She's doing voice recognition on a microcontroller, correlating phonemes to their spectral signatures without using much power.
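The post doesn't spell out the firmware, but the spectral-correlation idea is easy to sketch. Here's a minimal, hypothetical Python version of that approach: take the magnitude spectrum of each incoming audio frame and pick the stored phoneme template it correlates with best. All names, frame sizes, and parameters below are our own assumptions, not [ThunderSqueak]'s code.

```python
import numpy as np

# Hypothetical sketch: match an audio frame to stored phoneme templates
# by correlating magnitude spectra. Illustrative only.

FRAME_LEN = 256        # samples per analysis frame (assumed)
SAMPLE_RATE = 8000     # Hz; a low rate keeps the math microcontroller-friendly

def spectrum(frame):
    """Magnitude spectrum of one windowed audio frame."""
    windowed = frame * np.hamming(len(frame))
    return np.abs(np.fft.rfft(windowed))

def best_phoneme(frame, templates):
    """Return the phoneme label whose template correlates best with the frame.

    templates: dict mapping phoneme label -> stored magnitude spectrum.
    """
    spec = spectrum(frame)
    spec /= np.linalg.norm(spec) + 1e-9          # normalize away loudness
    scores = {}
    for label, tmpl in templates.items():
        tmpl = tmpl / (np.linalg.norm(tmpl) + 1e-9)
        scores[label] = float(np.dot(spec, tmpl))  # cosine similarity
    return max(scores, key=scores.get)
```

Normalizing both spectra first means the match depends on the shape of the spectrum rather than how loudly the phoneme was spoken, which is the kind of cheap trick that keeps this feasible on a microcontroller.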
The purpose of P.A.L. isn't to have a conversation with a robotic friend in a weird 80s escapade. The purpose of P.A.L. is to build a machine that can learn from its mistakes and pick up a little about its environment. That's where the really cool stuff in artificial intelligence happens, and it makes for an excellent entry for the Hackaday Prize.
The coolest part about Zizzy is the 3D-printable pneumatic artificial muscles. Project creator [Michael Roybal] said it took over a year of development to arrive at the design.
The muscles are hollow bellows printed in NinjaFlex with carefully calibrated settings; a lot of work must have gone into making sure they were printable at all. After printing, the muscles are painted with a mixture of fabric glue and MEK solvent. If all is done correctly, the bellows should hold 20 PSI without any problem.
The result is a robot with very smooth, precise movement. It has none of the usual gear noise, and it can give way when it collides with a user, a compliance feature typically found only in very expensive motor systems. If [Michael] can find a quiet compressor system, the robot will be nearly silent.
There is a significant constituency among hackers and makers for whom it is not the surroundings in which the drink is served, or the character of the person serving it, that is important, but the quality of its preparation. Not for them the distilled wit and wisdom of a bartender who has seen it all; instead, they prefer the computer-controlled accuracy of a precisely prepared drink. They are the creators of bartending robots, and maybe some day all dank taverns will be replaced with their creations.
Drinkro is a bartending robot built by the team at [Synchro Labs]. It uses a Raspberry Pi 3 and a custom motor controller board driving a brace of DC peristaltic pumps that lift a variety of constituent beverages into the user's glass. There is a multi-platform app through which multiple thirsty drinkers can place their orders, and all the source code and hardware files can be found in GitHub repositories. The robot possesses a fairly meagre repertoire of just vodka and three mixers, but perhaps it will be expanded with more motor-driver and pump combinations.
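The real source lives in the GitHub repositories linked above, but the core of time-based peristaltic dispensing fits in a few lines. Here's a hypothetical Raspberry Pi sketch, not Drinkro's actual code; the pin numbers, flow rate, and recipe are all made up for illustration.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical dispensing sketch (not Drinkro's code). A peristaltic pump
# moves a roughly fixed volume per second, so a pour is simply
# "run pump N for volume / flow-rate seconds".

PUMP_PINS = {"vodka": 17, "orange": 27, "cranberry": 22}  # made-up BCM pins
ML_PER_SECOND = 1.6                                       # assumed flow rate

GPIO.setmode(GPIO.BCM)
for pin in PUMP_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def pour(ingredient, millilitres):
    """Run one pump long enough to dispense the requested volume."""
    pin = PUMP_PINS[ingredient]
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(millilitres / ML_PER_SECOND)
    GPIO.output(pin, GPIO.LOW)

def make_screwdriver():
    pour("vodka", 45)    # one shot
    pour("orange", 90)

make_screwdriver()
GPIO.cleanup()
```

Expanding the repertoire is then just a matter of more pump channels and more entries in the pin table, which is presumably why extra motor-driver and pump combinations are the obvious upgrade path.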
This collaboration between ETH and the Disney empire's research arm is an ultra-light robot that can roll across horizontal surfaces, then transition to and climb vertical walls.
The robot has four wheels, one set of which is steerable, but its secret sauce is the two gimbaled propellers on its back. These propellers can push it across the ground, and, when it approaches a wall, provide enough thrust to overcome gravity and press the robot against the vertical surface.
Naturally, the lighter the robot, the less force is needed to keep it on the wall. That's why the frame is made from carbon fiber corrugated sandwich panels, and the motors, batteries, and controllers are all similarly small and light.
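As a back-of-the-envelope check on why the weight matters so much (with our own assumed numbers, not figures from Disney Research): for the robot to hold position on a vertical wall, the friction generated by the propellers pressing it into the wall has to at least cancel its weight.

```python
# Back-of-the-envelope statics with assumed numbers (not from the paper):
# the propellers press the robot into the wall with thrust T, and the
# resulting friction mu * T must support the robot's weight m * g.

g = 9.81      # m/s^2
mass = 0.5    # kg -- assumed robot mass for illustration
mu = 0.8      # assumed wheel/wall friction coefficient

required_thrust = mass * g / mu
print(f"Thrust needed to hold the wall: {required_thrust:.2f} N")
# ~6.1 N here. Halve the mass and the required thrust halves too,
# which is exactly why the carbon fiber sandwich construction pays off.
```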
We liked how the robot apparently uses its propellers to provide additional stability even while on the ground. There is a video after the break, and more information can be found on the Disney Research webpage.
For humans, reaching out to grab an object is pretty easy, but for robot manipulators, it's a different story. Once we've found the object we want our robot to pick up, we still need to plan a path from the robot's hand to the object, all the while lugging the remaining limbs along for the ride without snagging them on any nearby obstacles. The space of all possible joint configurations is called the "joint configuration space." Planning a collision-free path through it is called path planning, and it's a tricky problem to solve quickly in the world of robotics.
These days, roboticists have nailed down a few algorithms, but executing them takes hundreds of milliseconds to compute. The result? Robots spend most of their time "thinking" about moving rather than actually moving.
It's worth asking: why is this problem so hard, and how does custom hardware make it faster? There are a few layers here, but it's worth investigating the big ones. Planning a path from point A to point B usually happens probabilistically (randomly sampling the space until a route to the finishing point emerges), and if a path exists, the algorithm will eventually find it. The issue, however, arises when we need to lug our remaining limbs through the space to reach that object. The entire shape that our 'bot's limbs sweep out while getting from A to B is called the swept volume, and it means we need a collision-free path not just for the hand, but for the entire set of joints.
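To make "probabilistically" concrete, here's a toy rapidly-exploring random tree (RRT) for a point robot in 2D; a point has no limbs, so there's no swept volume to worry about yet. Everything here is illustrative, not the planner from the Duke work.

```python
import math
import random

# Toy RRT for a point robot in a 2D box with one circular obstacle.
# Real manipulator planners run this same sample/extend loop in the
# much higher-dimensional joint configuration space.

OBSTACLES = [((5.0, 5.0), 2.0)]          # (center, radius)
START, GOAL = (1.0, 1.0), (9.0, 9.0)
STEP, GOAL_TOL = 0.5, 0.5

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def steer(a, b):
    """Take one STEP-sized step from a toward b."""
    d = math.dist(a, b)
    if d == 0.0:
        return b
    t = min(1.0, STEP / d)
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def rrt(max_iters=5000):
    parent = {START: None}
    for _ in range(max_iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(parent, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        if not collision_free(new):          # toy check: endpoint only
            continue
        parent[new] = nearest
        if math.dist(new, GOAL) < GOAL_TOL:  # close enough: trace path back
            path, node = [], new
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None

print(rrt())
```

Even in this trivial setting, most of the work is the repeated collision checking, and that cost explodes once every candidate motion drags a swept volume along with it. That's the part the hardware attacks.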
Encoding a map on a computer means discretizing the space into 3D voxels at a sufficient resolution. If a voxel is occupied by an obstacle, it gets one state; if not, it gets the other. To check whether a path is OK, the set of voxels representing the swept volume is compared against the voxels representing the environment. Here's where the FPGA delivers its speedup: in the hardware implementation, voxel occupancy is encoded in bits, and the entire volume comparison happens in parallel. Nifty to have custom hardware for this, right?
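You can fake the spirit of that parallelism in software with wide bitwise operations: pack one voxel per bit, and a single AND tests the whole grid at once. A hypothetical sketch of the idea, not Duke's implementation:

```python
# Hypothetical sketch of the bit-parallel idea (not Duke's implementation):
# pack one voxel per bit, so a single AND tests a whole grid at once.
# The FPGA effectively does this for every candidate motion in hardware.

GRID = 16  # 16 x 16 x 16 voxel grid -> 4096 bits per volume

def voxel_bit(x, y, z):
    """Bit position for voxel (x, y, z) in the packed integer."""
    return (z * GRID + y) * GRID + x

def pack(voxels):
    """Pack an iterable of (x, y, z) voxel coordinates into one integer."""
    bits = 0
    for v in voxels:
        bits |= 1 << voxel_bit(*v)
    return bits

# Environment: a wall of occupied voxels at x == 8.
environment = pack((8, y, z) for y in range(GRID) for z in range(GRID))

# Swept volume of one candidate motion (a little 3-voxel sweep through x).
swept = pack([(7, 5, 5), (8, 5, 5), (9, 5, 5)])

# One bitwise AND answers "does this motion hit anything?" for all voxels.
print("collision!" if swept & environment else "motion is safe")
```

A CPU still chews through that big integer word by word; the FPGA's trick is evaluating all those bit comparisons, for many motions at once, in a single clock-parallel pass.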
We applaud the folks at Duke University for getting this up-and-running, and we can’t wait to see custom “robot path-planning chips” hit the market some day. For now, though, if you’d like to sink your teeth into seeing how FPGAs can parallelize conventional algorithms, check out our linear-time sorting feature from a few months back.