Hackaday Prize 2023: An Agricultural Robot That Looks Ready For The Field

In the world of agriculture, not all enterprises are large arable cropland affairs upon which tractors do their work traversing strip by strip under the hot sun. Many farms raise far more intensive crops on a much smaller scale, and across varying terrain. When it comes to automation these farms offer their own special challenges, but with the benefit that a smaller machine reduces some of the engineering tasks. There’s an entry in this year’s Hackaday Prize which typifies this: [KP]’s Agrofelis robot is a small four-wheeled carrier platform designed to deliver autonomous help on smaller farms. It’s shown servicing a vineyard in one of the most badass pictures you could imagine, as the pesticide duster on its implement platform makes it look for all the world like a futuristic weapon.

A sturdy tubular frame houses the battery bank and brains, while motive power comes from four bicycle-derived motorized wheels with disk brakes. Interestingly, this machine steers mechanically rather than using the skid-steering found in so many such platforms. On top is a rotating mount with two degrees of freedom which serves as the implement system — akin to a three-point linkage on a tractor. This is the basis of the badass pesticide duster turret mentioned above. Running it all is an Nvidia Jetson Nano, with input from a range of sensors including global positioning and LIDAR.

The attention to detail in this agricultural robot is clearly very high, and we could see machines like it becoming indispensable in the coming decades. Many tasks on a small farm are time-consuming and involve carrying or wheeling a small machine around performing the same task over and over. Something like this could take that load off the farmer. We’ve been there, and sure would appreciate something to do the job.

While we’re on the subject of farm robots, this one’s not the only Prize entry this year.

How Warehouse Robots Actually Work, As Explained By Amazon

Amazon has been using robots to manage and automate their warehouses for years. Here’s a short feature on their current robot, Hercules. This is absolutely Amazon tooting their own horn, but if you have been curious about what exactly such robots do, and how exactly they help a busy warehouse work better, it’s a good summary with some technical details.

Amazon claims to have over 750,000 robots across their network.

The main idea is that goods are stored on four-sided shelves called pods. Hercules can scoot underneath to lift and move these pods, a little like a robotic forklift, except much smaller and more nimble. Interestingly, the robots are designed to avoid rotating the pods as much as possible. To change direction, Hercules sets the pod down, turns in place, then picks the pod back up.
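That set-down/turn/pick-up maneuver can be sketched as a tiny state machine. This is an illustrative toy, not Amazon’s actual control code; the class and action names are made up for the example. The point is that the drive unit changes heading while the pod’s orientation never changes:

```python
from dataclasses import dataclass

@dataclass
class DriveUnit:
    heading: int = 0      # degrees; the drive unit's own orientation
    pod_lifted: bool = False
    pod_heading: int = 0  # the pod never rotates in this scheme

    def turn_to(self, new_heading: int) -> list:
        """Return the action sequence for a heading change while carrying a pod."""
        steps = []
        if self.pod_lifted:
            steps.append("set pod down")      # decouple the pod before turning
            self.pod_lifted = False
        steps.append(f"rotate drive unit to {new_heading} deg")
        self.heading = new_heading
        steps.append("lift pod")              # re-couple; pod heading is unchanged
        self.pod_lifted = True
        return steps

bot = DriveUnit(pod_lifted=True)
actions = bot.turn_to(90)
print(actions)
print("pod heading still", bot.pod_heading)
```

Running the sketch shows the pod’s heading stays at zero even though the drive unit now faces 90 degrees — exactly the behavior the article describes.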

The overall system is centralized, but Hercules itself navigates autonomously thanks to a depth-sensing camera and a grid of navigation markers on the floor throughout the facility. Hercules can also wirelessly sense and communicate with nearby human-worn vests and other robots outside its line of sight.

Essentially, instead of human workers walking up and down aisles of shelves to pick products, the product shelves come to the humans. This means the organization and layout of the shelves can be dynamic, higher-density, and optimized for efficient robotic access. Shelves need not sit in fixed rows or aisles, conform to a human-readable categorical layout, or leave walking space between them.

Sometimes robots really are the right tool for the job, and our favorite product-retrieval robot remains [Cliff Stoll]’s crawlspace warehouse bot, a diminutive device made to access boxes of product — in [Cliff]’s case, Klein bottles — stored in an otherwise quite claustrophobic crawlspace.

Micro Robot Disregards Gears, Embraces Explosions

Researchers at Cornell University have developed a tiny proof-of-concept robot that moves its four limbs by rapidly igniting a mixture of methane and oxygen inside flexible joints.

The device can’t do much more than blow each limb outward with a varying amount of force, but that’s enough to steer and move the little unit, and it has enough power to make some very impressive jumps. The ability to navigate even with such limited actuators is reminiscent of hopped-up bristlebots.

Electronic control of the combustions in the joints allows for up to 100 explosions per second, which provides enough force to do useful work. The prototype is just 29 millimeters long and weighs a mere 1.6 grams, yet it can jump up to 56 centimeters and move at almost 17 centimeters per second.

The prototype is tethered, so those numbers don’t include having to carry its own power or fuel supply, but as a proof of concept it’s pretty interesting. Reportedly a downside is that the process is rather noisy, which we suppose isn’t surprising.

Want to see it in action? Watch the video (embedded below) to get an idea of what it’s capable of. More details are available from the research paper, as well.

Continue reading “Micro Robot Disregards Gears, Embraces Explosions”

FedEx Robot Solves Complex Packing Problems

Despite the fact that it constantly seems like we’re in the midst of a robotics- and artificial-intelligence-driven revolution, a number of tasks continue to elude even the best machine learning algorithms and robots. The clothing industry is an excellent example, where flimsy materials can easily trip up robotic manipulators. But one such task that seems like it might soon be solved is packing cargo into trucks, as FedEx is attempting with one of their new robots.

Part of the reason this task is so difficult is that packing problems, like the famous “traveling salesman” problem, are surprisingly complex. The packages are not presented to the robot in any particular order, and need to be placed efficiently according to weight and size. This robot, called DexR, uses artificial intelligence paired with an array of sensors to gauge each package’s dimensions, which allows it to plan stacking and ordering configurations and ensure a secure fit among all the other packages. The robot must also be capable of adapting quickly, re-ordering or re-stacking packages if any shift during loading.
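To give a feel for why this is hard: bin packing is NP-hard in general, and even the classic first-fit-decreasing heuristic — shown below in a one-dimensional simplification (package volume against a fixed trailer-section capacity) — only approximates the optimum. This is an illustrative textbook heuristic, not FedEx’s DexR planner:

```python
def first_fit_decreasing(volumes, capacity):
    """Assign each volume to the first trailer section with room for it."""
    sections = []   # remaining free capacity per open section
    placement = []  # (volume, section index) for each package
    for v in sorted(volumes, reverse=True):  # biggest packages first
        for i, free in enumerate(sections):
            if v <= free:
                sections[i] -= v
                placement.append((v, i))
                break
        else:
            sections.append(capacity - v)    # no room anywhere: open a new section
            placement.append((v, len(sections) - 1))
    return placement, len(sections)

placement, used = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(used, placement)
```

On this toy input the heuristic happens to find a perfect pack (two sections of exactly 10), but adversarial orderings can force it well away from optimal — which is why DexR must re-plan as packages arrive and shift.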

As robotics platforms and artificial intelligence continue to improve, it’s likely we’ll see a flurry of complex problems like these solved by machines instead of humans. Real-world tasks are often more complex than they seem: as anyone with a printer and a PC LOAD LETTER error can attest, even handling single sheets of paper can be a difficult task for a robot. Interfacing with these types of robots can be a walk in the park, though, provided you read the documentation first.

A badminton shuttle launcher loaded with shuttles

Hackaday Prize 2023: Automated Shuttle Launcher Enables Solo Badminton Practice

If you want to get better at your favorite sport, there’s really no substitute for putting in more training hours. For solo activities like running or cycling that’s simple enough: the only limit to your training time is your own endurance. But if you’re into games that require a partner, their availability is another limiting factor. So what’s a badminton enthusiast like [Peter Sinclair] to do when they don’t have a club nearby? Build a badminton training robot, of course.

Automatic shuttlecock launchers are available commercially, but [Peter] found them very expensive and difficult to use. So he set himself a target to design a 3D-printable, low-cost, safe machine that would still be of real use in badminton training. After studying an apparently defunct open-source shuttle launcher called Baddy, he came up with the basic design: a vertical shuttle magazine, a loading mechanism to extract one shuttle at a time and position it for launch, and two wheels spinning at high speed to launch the shuttle forward. Video after the break. Continue reading “Hackaday Prize 2023: Automated Shuttle Launcher Enables Solo Badminton Practice”
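For a rough feel of the two-wheel launcher physics: the shuttle’s exit speed is bounded above by the wheel surface speed, v = 2πr × (RPM / 60), with slip and shuttle drag reducing it in practice. The wheel size and RPM below are illustrative guesses, not [Peter]’s actual design numbers:

```python
import math

def surface_speed_m_s(rpm: float, wheel_diameter_m: float) -> float:
    """Tangential speed at the wheel rim, the upper bound on launch speed."""
    return 2 * math.pi * (rpm / 60) * (wheel_diameter_m / 2)

# Hypothetical example: 60 mm wheels spun at 3000 RPM
v = surface_speed_m_s(rpm=3000, wheel_diameter_m=0.06)
print(f"max launch speed ~ {v:.1f} m/s")
```

Plugging in plausible hobby-motor numbers like these yields launch speeds on the order of 10 m/s, which is why fast printed wheels and a rigid frame matter for a safe, repeatable machine.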

The Robot That Lends The Deaf-Blind Community A Hand

The loss of one’s sense of hearing or vision is likely to be devastating in the way that it impacts daily life. Fortunately many workarounds exist using one’s remaining senses — such as sign language — but what if not only your sense of hearing is gone, but you are also blind? Fortunately here, too, a workaround exists in the form of tactile signing, which is akin to visual sign language except that it uses one’s sense of touch. This generally requires someone who knows tactile sign language to translate from spoken or written forms into tactile signing. Yet what if you’re deaf-blind and without human assistance? This is where a new robotic system could conceivably fill in.

The Tatum T1 in use, with a more human-like skin covering the robot. (Credit: Tatum Robotics)

Developed by Tatum Robotics, the Tatum T1 is a robotic hand and associated software intended to provide this translation function by taking in natural-language information — whether spoken, written, or in some digital format — and using a number of translation steps to create tactile sign language as output, whether ASL, the BANZSL alphabet, or another. These tactile signs are then expressed using the robotic hand, and a connected arm as needed, ideally using ASL gloss to convey as much information as quickly as possible, not unlike visual ASL.
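The front end of such a pipeline — text in, sequence of hand poses out — can be sketched in a few lines. The pose values below are made-up placeholders for illustration, not the Tatum T1’s real joint angles or API:

```python
# Finger flexion per letter: (thumb, index, middle, ring, pinky), 0 = open, 1 = curled.
# Hypothetical values standing in for real fingerspelling handshapes.
FINGER_POSES = {
    "a": (0.2, 1.0, 1.0, 1.0, 1.0),   # fist with thumb alongside
    "b": (1.0, 0.0, 0.0, 0.0, 0.0),   # flat hand, thumb tucked
    "c": (0.5, 0.5, 0.5, 0.5, 0.5),   # curved "C" shape
}

def to_pose_sequence(text: str):
    """Turn text into the sequence of hand poses to sign, skipping unknown characters."""
    return [FINGER_POSES[ch] for ch in text.lower() if ch in FINGER_POSES]

seq = to_pose_sequence("Cab")
print(len(seq), "poses")
```

An actuator loop would then play each pose out against the user’s palm; the hard engineering is in that playback speed, which is exactly why signing rate matters so much, as noted below.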

This also answers the question of why one wouldn’t just use a simple braille cell on the hand: signing speed is essential to keep up with real-time communication, unlike when, say, reading a book or email. A robotic companion like this could provide deaf-blind individuals with a critical bridge to the world around them. Currently, the Tatum T1 is still in the testing phase, but hopefully before long it may be another tool for the tens of thousands of deaf-blind people in the US today.

Humans And Balloon Hands Help Bots Make Breakfast

Breakfast may be the most important meal of the day, but who wants to get up first thing in the morning and make it? Well, there may come a day when a robot can do the dirty work for you. This is Toyota Research Institute’s vision with their innovatively-trained breakfast bots.

Going way beyond pick-and-place tasks, TRI has so far taught robots how to do more than 60 different things, using a new method to teach dexterous skills like whisking eggs, peeling vegetables, and applying hazelnut spread to a substrate. Their method is built on a generative AI technique called Diffusion Policy, which they use to create what they’re calling Large Behavior Models.

Instead of hours of coding and debugging, the robots learn differently. Essentially, the robot gets a large, flexible balloon hand with which to feel objects, their weight, and their effect on other objects (like flipping a pancake). Then a human shows it how to perform a task before the bot is let loose on an AI model. After a number of hours — say, overnight — the bot has a new working behavior.
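Diffusion Policy itself is a more involved generative model, but the demonstrate-then-train idea above boils down to imitation learning: fit a model mapping sensed states to the actions a human demonstrator took. As a minimal sketch (with synthetic stand-in data, and a linear least-squares fit standing in for the real generative model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake demonstration data: states (e.g. balloon-hand pressure readings) and the
# actions a human teleoperator produced. Hypothetical numbers for illustration.
true_policy = np.array([[0.5, -1.0], [2.0, 0.3]])   # action = state @ true_policy
states = rng.normal(size=(200, 2))
actions = states @ true_policy + rng.normal(scale=0.01, size=(200, 2))

# "Training overnight" reduces here to solving a least-squares problem.
learned_policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The learned behavior reproduces the demonstrator's state-to-action mapping.
print(np.round(learned_policy, 2))
```

The real system replaces that linear fit with a diffusion model that can capture multi-modal behaviors (there’s more than one valid way to flip a pancake), but the training signal — human demonstrations — is the same.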

Now, since TRI claims that their aim is to build robots that amplify people and not replace them, you may still have to plate your own scrambled eggs and apply the syrup to that short stack yourself. But they plan to have over 1,000 skills in the bag of tricks by the end of 2024. If you want more information about the project and to learn about Diffusion Policy without reading the paper, check out this blog post.

Perhaps the robotic burger joint was ahead of its time, but we’re getting there. How about a robot barista?

Continue reading “Humans And Balloon Hands Help Bots Make Breakfast”