The Robot That Lends The Deaf-Blind Community A Hand

The loss of one’s hearing or vision is devastating in the way it impacts daily life. Fortunately, many workarounds exist that use the remaining senses, such as sign language. But what if you have lost not only your hearing, but your sight as well? Here, too, a workaround exists in the form of tactile signing, which is akin to visual sign language except that it relies on the sense of touch. This generally requires someone who knows tactile sign language to translate from spoken or written forms into tactile signs. Yet what if you’re deaf-blind and without human assistance? This is where a new robotic system could conceivably fill in.

The Tatum T1 in use, with a more human-like skin covering the robot. (Credit: Tatum Robotics)

Developed by Tatum Robotics, the Tatum T1 is a robotic hand and associated software intended to provide this translation function. It takes in natural language information, whether spoken, written, or in some digital format, and uses a number of translation steps to produce tactile sign language as output, whether in the ASL format, the BANZSL alphabet, or another system. These tactile signs are then expressed by the robotic hand, with a connected arm as needed, ideally using ASL gloss to convey as much information as quickly as possible, not unlike visual ASL.
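To get a feel for what such a pipeline involves, here’s a minimal sketch of a text-to-fingerspelling loop. Everything in it, the pose table, the gloss rule, and the send_pose interface, is a simplifying assumption for illustration, not Tatum Robotics’ actual software.

```python
import time

# Map each letter to a hand pose, expressed here as joint angles (degrees)
# for five hypothetical servos. Real tactile ASL handshapes involve far
# more joints than this toy table suggests.
ASL_POSES = {
    "a": (90, 90, 90, 90, 0),    # closed fist, thumb to the side
    "b": (0, 0, 0, 0, 80),       # fingers extended, thumb folded in
    "c": (45, 45, 45, 45, 45),   # curved "C" shape
    # ... remaining letters omitted for brevity
}

def gloss(text: str) -> list[str]:
    """Crude stand-in for ASL gloss: drop filler words to sign faster."""
    fillers = {"the", "a", "an", "is", "are", "of"}
    return [w for w in text.lower().split() if w not in fillers]

def sign(text: str, send_pose) -> None:
    """Fingerspell each glossed word, pausing so the reader can feel it."""
    for word in gloss(text):
        for letter in word:
            pose = ASL_POSES.get(letter)
            if pose is not None:
                send_pose(pose)   # push joint targets to the hand
                time.sleep(0.15)  # hold the handshape briefly
        time.sleep(0.4)           # slightly longer pause between words

# Example: print poses instead of driving real servos.
sign("the cab", print)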

The emphasis on speed also answers the question of why one wouldn’t just press a simple braille cell into the hand: signing speed is essential to keeping up with real-time conversation, unlike when, say, reading a book or an email. A robotic companion like this could provide deaf-blind individuals with a critical bridge to the world around them. Currently the Tatum T1 is still in the testing phase, but hopefully before long it will be another tool for the tens of thousands of deaf-blind people in the US today.

Humans And Balloon Hands Help Bots Make Breakfast

Breakfast may be the most important meal of the day, but who wants to get up first thing in the morning and make it? Well, there may come a day when a robot can do the dirty work for you. This is Toyota Research Institute’s vision with their innovatively-trained breakfast bots.

Going way beyond pick-and-place tasks, TRI has so far taught robots more than 60 different tasks using a new method for teaching dexterous skills like whisking eggs, peeling vegetables, and applying hazelnut spread to a substrate. Their method is built on a generative AI technique called Diffusion Policy, which they use to create what they’re calling Large Behavior Models.
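In broad strokes, a diffusion policy starts from pure noise and iteratively denoises it into a short sequence of robot actions, conditioned on what the robot currently observes. The toy network, dimensions, and denoising schedule below are illustrative assumptions, not TRI’s actual model.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HORIZON, STEPS = 32, 7, 16, 50

class NoisePredictor(nn.Module):
    """Guesses the noise present in a noisy action sequence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM * HORIZON + 1, 256),
            nn.ReLU(),
            nn.Linear(256, ACT_DIM * HORIZON),
        )

    def forward(self, obs, noisy_actions, t):
        x = torch.cat([obs, noisy_actions.flatten(1), t], dim=1)
        return self.net(x)

@torch.no_grad()
def sample_actions(model, obs):
    """Denoise pure noise into a HORIZON-step sequence of joint actions."""
    acts = torch.randn(1, HORIZON, ACT_DIM)
    for step in reversed(range(STEPS)):
        t = torch.full((1, 1), step / STEPS)
        eps = model(obs, acts, t).view(1, HORIZON, ACT_DIM)
        acts = acts - eps / STEPS  # crude fixed-step denoising update
    return acts

model = NoisePredictor()
obs = torch.randn(1, OBS_DIM)            # stand-in for encoded camera/touch data
print(sample_actions(model, obs).shape)  # torch.Size([1, 16, 7])
```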

Instead of hours of coding and debugging, the robots learn differently. Essentially, the robot gets a large, flexible balloon hand with which to feel objects, their weight, and their effect on other objects (like flipping a pancake). Then a human shows it how to perform a task before the demonstrations are handed off to train an AI model. After a number of hours, say overnight, the bot has a new working behavior.

Now, since TRI claims that their aim is to build robots that amplify people and not replace them, you may still have to plate your own scrambled eggs and apply the syrup to that short stack yourself. But they plan to have over 1,000 skills in the bag of tricks by the end of 2024. If you want more information about the project and to learn about Diffusion Policy without reading the paper, check out this blog post.

Perhaps the robotic burger joint was ahead of its time, but we’re getting there. How about a robot barista?

Continue reading “Humans And Balloon Hands Help Bots Make Breakfast”

Hackaday Prize 2023: Computer Vision Guides This Farm Mower

It’s a problem common to small-scale mixed agriculture worldwide: small areas of grass and weeds that need mowing. If you have a couple of sheep and enough electric fence, there’s one way to do it; otherwise, if you rely on machinery, there’s a lot of hefting and pushing a mower in your future. Help is at hand, though, thanks to [Yuta Suito], whose pylon-guided mower is a lightweight device that mows an area defined by a set of orange traffic cones. Simply set the cones around the edge of the plot, place the mower within them, and it does the rest.

At its heart is a computer vision system that detects the cones and estimates the distance to them from their apparent size. It mows in a spiral pattern by steadily decreasing the apparent cone height at which it turns, thus covering the whole area staked out. Inside is a Raspberry Pi doing the heavy lifting, and because the mower is designed for farmland rather than lawns, it has an adaptive track system to deal with obstacles. In its native Japan the rural population is ageing, so it’s built to be particularly suitable for operation by an older person. See it in action in the video below the break.
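That size-to-distance trick is essentially the pinhole camera model. Here’s a rough sketch of how it might look with OpenCV; the HSV threshold, cone height, and focal length are assumed placeholder values, not [Yuta Suito]’s actual code.

```python
import cv2

CONE_HEIGHT_M = 0.45     # real height of an orange cone (assumed)
FOCAL_LENGTH_PX = 700.0  # from camera calibration (assumed)

def cone_distances(frame_bgr):
    """Return an estimated distance (m) for each orange blob found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Threshold for "traffic cone orange"; tune for your camera and lighting.
    mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    distances = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h < 10:  # ignore tiny specks of noise
            continue
        # Pinhole model: distance = real_height * focal_length / pixel_height
        distances.append(CONE_HEIGHT_M * FOCAL_LENGTH_PX / h)
    return sorted(distances)
```

Lowering the pixel-height threshold at which the mower turns makes it turn while the cones still appear smaller, i.e. while it’s farther from them, which tightens the spiral inward lap by lap.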

A robotic mower aimed at farms is certainly unusual here, but we’ve seen a lot of more conventional lawnmowers.

Continue reading “Hackaday Prize 2023: Computer Vision Guides This Farm Mower”

3D Printed Robot Wants To Be Your Pet

Robots are cool. Robots you build yourself are cooler, especially ones that use stuff you have lying around already. Snoopy is a new open-source robot that uses an Arduino as a brain but with a 3D printed body and a short list of parts that can probably be sourced from the junk drawer. It’s still being developed, but it looks like a cool project heading in the right direction to produce an interesting robot.

It’s based on a new robot software platform called Kaia.ai that is built on top of the Robot Operating System 2 (ROS2), but with a more friendly, beginner-focused interface. Currently, the Snoopy project includes enough to get up and running: a printed frame, plus the electronics to install the Arduino that controls it and tie it into ROS2. That’s an excellent place to start if you want to get into robotics without diving straight into the technical challenges of working with real-time operating systems.
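For a taste of what sitting on top of ROS2 looks like, here’s a minimal rclpy node that drives a robot base by publishing velocity commands. It’s a generic ROS2 sketch, not Kaia.ai’s actual API; the topic name and speed are assumptions.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class SnoopyDriver(Node):
    """Publishes a slow forward crawl on /cmd_vel ten times a second."""
    def __init__(self):
        super().__init__('snoopy_driver')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.05  # m/s, gentle enough for a desk pet
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(SnoopyDriver())

if __name__ == '__main__':
    main()
```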

It’s also interesting that the creator’s previous project, Kiddo, fell into the complexity trap: features kept being added until the design was overly complex and a pain to build. Hopefully the designers have learned from Kiddo and will keep Snoopy simple.

We’ve covered plenty of other robot projects here at Hackaday, from ones that venture into nuclear reactors to ones that write your thank-you notes for you or give you hugs. We’ve even looked at how to give your robots a personality. Combine all those together with Snoopy and you could build a hugging, compassionate robot that has nice handwriting and can repair a nuclear reactor. And if you do, write it up and send it to our tips line!

Helping Robots Learn By Letting Them Fail

The [MIT Technology Review] has just released its annual list of the top innovators under the age of 35, and there are some interesting people on this list of the annoyingly accomplished at a young age. Like [Lerrel Pinto], an associate professor of computer science at New York University. His work focuses on teaching robots how to do things in the home by letting them fail.

Continue reading “Helping Robots Learn By Letting Them Fail”

Open Source Rover Gets An Update For Easier Building

Once upon a time, NASA-JPL put out a design for an open-source rocker-bogie rover. It was an impressive and capable thing, albeit a little expensive and difficult to build. Now, the open source community has dived in and refreshed the design, making it cheaper and more accessible than ever before.

Many parts of the original design have become prohibitively expensive, gone out of stock, or been discontinued entirely. The new version, developed by the community that formed around the project, focuses on off-the-shelf parts to bring costs down. Where the original design could cost as much as $3,000 to build, the new model cuts that bill almost in half. It also eliminates the need for custom fabrication, with no machined or 3D-printed parts required.

Other optimizations include cutting the rover’s head from the basic model, as it isn’t necessary for many applications. There’s also better fluid and dust ingress protection, and improved serviceability. The entire rover model can be loaded in OnShape by anyone wanting to inspect it or make their own modifications.

Parts lists are on GitHub for those desiring to build their own. Alternatively, check out the original design to learn more. Video after the break.

Continue reading “Open Source Rover Gets An Update For Easier Building”

Machine Learning Robot Runs Arduino Uno

When we think about machine learning, our minds often jump to datacenters full of sweating, overheating GPUs. However, lighter-weight hardware can also be used to these ends, as demonstrated by [Nikodem Bartnik] and his latest robot.

The robot is charged with autonomously navigating a simple racetrack delineated by cardboard barriers. It’s a two-wheeled design with tank-style steering, controlled by an Arduino Uno and using a Slamtec RPLIDAR sensor to help map out its surroundings. The microcontroller is also armed with a Bluetooth link and an SD card for storage.

The robot was first driven around the racetrack multiple times under manual control, all the while collecting LIDAR data. That data was combined with the control inputs to create a data set for training a machine learning model. Feature selection techniques then pared the collected data points down to those most relevant to the driving task. [Nikodem] explains how the model was created and then refined until it could drive the robot by itself around a variety of racetrack designs.
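As a sketch of the kind of pipeline described, the snippet below buckets each LIDAR sweep into sectors, keeps the most informative ones, and fits a model small enough to run on an Uno. The file names, array shapes, and steering labels are all assumptions for illustration, not [Nikodem]’s actual code.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier

# scans: one row per timestep, 360 range readings (m); labels: 0=straight,
# 1=left, 2=right, recorded while driving the track manually.
scans = np.load("scans.npy")    # shape (N, 360), hypothetical file
labels = np.load("labels.npy")  # shape (N,)

# Downsample 360 beams into 12 sector minimums, cheap to compute on an Uno.
sectors = scans.reshape(len(scans), 12, 30).min(axis=2)

# Feature selection: keep the sectors that best separate the steering classes.
selector = SelectKBest(f_classif, k=6).fit(sectors, labels)
X = selector.transform(sectors)

# A shallow tree compiles down to a handful of nested if/else statements,
# which is about all the Uno's 2 KB of RAM has room for.
model = DecisionTreeClassifier(max_depth=4).fit(X, labels)
print("train accuracy:", model.score(X, labels))
```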

It’s a great primer on machine learning techniques applied to a small embedded platform.

Continue reading “Machine Learning Robot Runs Arduino Uno”