Robots Collaborate To Localize Themselves Precisely

Here’s the thing about robots. It’s hard for them to figure out where to go or what they should be doing if they don’t know where they are. Giving them some method of localization is key to their usefulness in almost any task you can imagine. To that end, [Guy Elmakis], [Matan Coronel] and [David Zarrouk] have been working on methods for pairs of robots to help each other in this regard.

As per the research paper, the goal is to perform real-time 3D localization between a pair of robots in a shared environment. The basic idea is that the robots take turns moving: while one robot moves, the other stays put and effectively acts as a landmark. Each robot carries inertial measurement units and a camera on a turret, which it uses to track the other robot and its own movements, along with a Raspberry Pi 4 for processing image data and computing positions. The two robots communicate via Bluetooth to coordinate their efforts.
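To make the turn-taking concrete, here’s a minimal sketch of the scheme in Python. Everything in it is our own simplification, not code from the paper: the pose fusion is a naive complementary blend, while the real system performs full 3D pose estimation from the IMU and turret camera and coordinates the handoff over Bluetooth.

```python
# Illustrative sketch of turn-taking cooperative localization.
# The "fuse" step and noise model are stand-ins for the paper's
# actual 3D estimation pipeline.
import numpy as np

def fuse(pose_imu, pose_cam, alpha=0.7):
    """Blend a drift-prone IMU pose with a camera-derived pose."""
    return alpha * pose_cam + (1 - alpha) * pose_imu

# True and believed (x, y, z) poses for robots A and B.
true = {"A": np.zeros(3), "B": np.array([1.0, 0.0, 0.0])}
belief = {name: pose.copy() for name, pose in true.items()}
mover, landmark = "A", "B"

for leg in range(4):
    step = np.array([0.1, 0.0, 0.0])
    true[mover] += step
    # IMU dead-reckoning: the commanded step plus a little drift.
    imu_pose = belief[mover] + step + np.random.normal(0, 0.01, 3)
    # The camera turret sights the stationary robot, whose believed
    # pose serves as the landmark for correcting the mover's estimate.
    cam_pose = belief[landmark] - (true[landmark] - true[mover])
    belief[mover] = fuse(imu_pose, cam_pose)
    mover, landmark = landmark, mover  # roles swap for the next leg
```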

It’s an interesting technique that could have some real applications in swarm robotics, and in operations in areas where satellite navigation and other typical localization techniques are not practical. If you’re looking for more information, you can find the paper here. We’ve seen some other neat localization techniques for small robots before, too. Video after the break.

Continue reading “Robots Collaborate To Localize Themselves Precisely”

Supercon 2023: Soft Actuators As Assistive Tech

When we think of assistive prostheses or braces, we often think of hard and rigid contraptions. After all, it wasn’t that long ago that prosthetic limbs were still being made out of wood. Even devices made of more modern materials tend to have a robotic quality that inevitably limits their dexterity. However, advancements in soft robotics could allow for assistive devices that more closely mimic their organic counterparts.

At Supercon 2023, Benedetta Lia Mandelli and Emilio Sordi presented their work in developing a soft actuator orthosis — specifically, a brace that helps tetraplegics with limited finger and thumb control. Individuals with certain spinal cord injuries can move their arms and wrists but are unable to grasp objects.

A traditional flexor hinge brace

Existing braces can help restore this ability, but they are heavy and limited by the fact that the wearer needs to hold their wrist in a specific position to keep pressure on the mechanism. Replacing the rigid linkage used in the traditional orthosis with a soft actuator improves the experience of using the device in many ways.

Not only is it lighter and more comfortable to wear, but the grip strength can also be more easily adjusted. The most important advancement, however, is how the user operates the device.

As with the more traditional designs, the wearer controls the grip through the position of their wrist. But the key difference with the soft actuator version is that the user doesn’t need to maintain that wrist position to keep the grip engaged. Once the inertial measurement units (IMUs) detect that the user has put their wrist into the proper position, the electronics maintain the pressure inside the actuator until commanded otherwise. This means the user can freely move their wrist after gripping an object without inadvertently dropping it.
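A latched-grip controller along these lines might look like the sketch below. The angle thresholds, pressure value, and actuator interface are all invented for illustration; the brace’s actual firmware isn’t published in the talk.

```python
# Hypothetical latched-grip logic: arm the grip with a wrist gesture,
# then hold actuator pressure until a deliberate release gesture.
GRIP_ANGLE_DEG = 30.0      # wrist extension that engages the grip
RELEASE_ANGLE_DEG = -20.0  # wrist flexion that releases it

class SoftActuator:
    def set_pressure(self, kpa):
        print(f"actuator holding {kpa} kPa")

class GripController:
    def __init__(self, actuator):
        self.actuator = actuator
        self.engaged = False

    def update(self, wrist_angle_deg):
        """Called each time the IMUs report a new wrist angle."""
        if not self.engaged and wrist_angle_deg >= GRIP_ANGLE_DEG:
            # Latch: pressurize and hold, regardless of where the wrist
            # moves afterwards -- the key difference from a rigid brace.
            self.actuator.set_pressure(kpa=40.0)
            self.engaged = True
        elif self.engaged and wrist_angle_deg <= RELEASE_ANGLE_DEG:
            self.actuator.set_pressure(kpa=0.0)
            self.engaged = False

ctrl = GripController(SoftActuator())
for angle in (0, 35, 10, -25):  # extend, relax wrist, then flex to release
    ctrl.update(angle)
```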

Continue reading “Supercon 2023: Soft Actuators As Assistive Tech”

Achieving Human Level Competitive Robot Table Tennis

A team at Google has spent a lot of time recently playing table tennis, purportedly only for science. Their goal was to see whether they could construct a robot that would not only play table tennis but also keep up with practiced human players. In the paper, available on arXiv, they detail what it took to make it happen. The team also set up a site with a simplified explanation and some videos of the robot in action.

Table tennis robot vs human match outcomes. B is beginner, I is intermediate, A is advanced. (Credit: Google)

In the end, it took twenty motion-capture cameras, a pair of 125 FPS cameras, a 6 DOF robot on two linear rails, a special table tennis paddle, and a very large annotated dataset on which to train the multiple convolutional neural networks (CNNs) that analyze the incoming visual data. This visual data was then combined with details like the paddle’s position to produce a value for the look-up table that forms the core of the high-level controller (HLC). The look-up table then selects which low-level controller (LLC) performs a given action. To prevent the CNNs of the LLCs from ‘forgetting’ their training data, a total of 17 different CNNs were used, one per LLC.
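In pseudo-Python, the HLC/LLC split reduces to something like the toy below. The state features, skill names, and table entries are placeholders we made up, since the paper’s actual table is built from learned match statistics.

```python
# Toy version of the HLC look-up: a discretized game state indexes
# into a table that names which LLC (each with its own trained CNN)
# should handle the incoming ball.
llc_table = {
    ("forehand", "topspin"): "llc_fh_topspin_drive",
    ("forehand", "underspin"): "llc_fh_push",
    ("backhand", "topspin"): "llc_bh_drive",
    ("backhand", "underspin"): "llc_bh_push",
}

def high_level_controller(ball_side, spin):
    """Pick the low-level controller for the incoming ball."""
    return llc_table[(ball_side, spin)]

print(high_level_controller("backhand", "underspin"))  # -> llc_bh_push
```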

The robot was tested against a range of players from a local table tennis club, which made clear that while it could easily defeat beginners, intermediate players posed a serious threat. Advanced players completely demolished the table tennis robot. Clearly we do not have to fear our robotic table tennis playing overlords just yet, but the robot did receive praise for being an interesting practice partner. Continue reading “Achieving Human Level Competitive Robot Table Tennis”

Re-imagining Telepresence With Humanoid Robots And VR Headsets

Don’t let the name of the Open-TeleVision project fool you; it’s a framework for improving telepresence and making robotic teleoperation far more intuitive than it otherwise would be. It accomplishes this in part by taking advantage of the remarkable technology packed into modern VR headsets like the Apple Vision Pro and Meta Quest. There are loads of videos on the project page, many of which demonstrate successful teleoperation across vast distances.

Teleoperation of robotic effectors typically takes some getting used to. The camera views are unusual, the limbs don’t move the same way arms do, and intuitive human things like looking around to get a sense of where everything is don’t translate well.

A stereo camera with gimbal streaming to a VR headset complete with head tracking seems like a very hackable design.

To address this, researchers provided the user with a robot-mounted, real-time stereo video stream (through which the user can turn their head and look around normally) and mapped the user’s arm and hand movements to their humanoid robotic counterparts. This provides the feedback needed to manipulate objects and perform tasks in a much more intuitive way. In short, when our eyes, bodies, and hands look and work more or less the way we expect, it turns out it’s far easier to perform tasks.

The research paper goes into detail about the different systems, but in essence, a stereo depth-and-RGB camera sits on a 3D printed gimbal atop a humanoid robot frame like the Unitree H1, equipped with high dexterity hands. A VR headset takes care of displaying the real-time stereoscopic video stream and letting the user look around. The user’s hand movements are tracked and mapped to the dexterous hands and fingers. This lets a person look at, manipulate, and handle things without in-depth training. Perhaps slower and more clumsily than they would like, but in an intuitive way all the same.
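The head-tracking half of that pipeline boils down to mapping headset orientation onto two gimbal motors. The sketch below is a generic version with stand-in headset and servo interfaces, not Open-TeleVision’s actual code; for that, see the project’s GitHub repository.

```python
# Map a VR headset's orientation quaternion to a two-motor camera
# gimbal. The Servo class is a stand-in for a real motor driver.
import math

class Servo:
    def __init__(self, name):
        self.name = name
    def goto(self, degrees):
        print(f"{self.name} -> {degrees:.1f} deg")

def quat_to_yaw_pitch(w, x, y, z):
    """Extract yaw and pitch (radians) from a unit quaternion (ZYX order)."""
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    return yaw, pitch

def update_gimbal(headset_quat, yaw_servo, pitch_servo):
    yaw, pitch = quat_to_yaw_pitch(*headset_quat)
    # Clamp to the gimbal's mechanical limits before commanding servos.
    yaw = max(-math.pi / 2, min(math.pi / 2, yaw))
    pitch = max(-math.pi / 4, min(math.pi / 4, pitch))
    yaw_servo.goto(math.degrees(yaw))
    pitch_servo.goto(math.degrees(pitch))

# Example: head tilted ~30 degrees down drives the pitch motor.
update_gimbal((0.966, 0.0, 0.259, 0.0), Servo("yaw"), Servo("pitch"))
```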

Interested in taking a closer look? The GitHub repository has the necessary code, and while most of us will never be mashing ADD TO CART on something like the Unitree H1, the reference design for a stereo camera streaming to a VR headset and mirroring head tracking with a two-motor gimbal looks like the sort of thing that would be useful for a telepresence project or two.

Continue reading “Re-imagining Telepresence With Humanoid Robots And VR Headsets”

A bright orange sailboat with solar panels on the wing sail and the hull of the craft. A number of protuberances containing instruments and radio equipment are visible on the wing.

Saildrones Searching The Sea For Clues To Hurricane Behavior

Hurricanes can cause widespread destruction, so early forecasting of their strength is important to protect people and their homes. The US National Oceanic and Atmospheric Administration (NOAA) is using saildrones to get better data from inside these monster storms.

Rising ocean temperatures due to climate change are causing hurricanes to intensify more rapidly than in the past, although modeling these changes is still a difficult task. People on shore need to know whether they’re in for a tropical storm or a high-strength hurricane so they can take the right precautions. Evacuating an area is expensive and disruptive, so it’s understandable that people want to know if it’s necessary.

Starting with five units in 2021, the fleet has gradually grown to twelve as of last summer. These 23ft (7m), 33ft (10m), or 65ft (20m) long vessels are propelled by wing sails and power their radio and telemetry systems with a combination of solar and battery power. No fossil-fueled vessel can match the up to 370 days at sea without refueling that these drones can achieve, and their ability to withstand hurricane winds and sea conditions gives scientists an up-close-and-personal look at a hurricane without risking human lives.

We’ve covered how the data gets from a saildrone to shore before, and if you want to know how robots learn to sail, there’s a Supercon talk for that.

Thanks to [CrLz] for the tip!

On the left, a translucent, yellowy-tan android head with eyes set behind holes in the face. On the right, a bright pink circle with small green eyes, its topography manipulated into the image of a smiling face.

A Robot Face With Human Skin

Many scifi robots have taken the form of their creators. In the increasingly blurry space between the biological and the mechanical, researchers have found a way to affix human skin to robot faces. [via NewScientist]

Previous attempts at affixing a skin equivalent, “a living skin model composed of cells and extracellular matrix,” to robots worked, even on moving parts like fingers, but typically relied on protrusions that limited range of motion and raised aesthetic concerns, which are pretty high on the list for robots designed primarily to interact with humans. Inspired by skin ligaments, the researchers have developed “perforation-type anchors” that use v-shaped holes in the underlying 3D printed surface to keep the skin equivalent taut and pliable like the real thing.

The researchers then designed a face that takes advantage of the attachment method to give their robot a convincing smile. Combined with other research, this could soon give robots skin with touch, sweat, and self-repair capabilities, much like Data’s partial transformation in Star Trek: First Contact.

We wonder what this extremely realistic humanoid hand might look like with this skin on the outside. Of course, that raises the question of whether we even need humanoid robots. If you want something less uncanny, maybe try animating your stuffed animals with this robotic skin instead?

Supercon 2023: Jesse T. Gonzalez Makes Circuit Boards That Breathe And Bend

Most robots are built out of solid materials like metal and plastic, giving them rigid structures that are easy to work with and understand. But you can open up much wider possibilities if you explore alternative materials and construction methods. As it turns out, that’s precisely what [Jesse T. Gonzalez] specializes in.

Jesse is a PhD candidate at Carnegie Mellon’s Human-Computer Interaction Institute, and an innovator to boot. His talk at the 2023 Hackaday Supercon covers his recent work on making circuit boards that can breathe and bend. You might not even call them robots, but his creations are absolutely robotic.

Continue reading “Supercon 2023: Jesse T. Gonzalez Makes Circuit Boards That Breathe And Bend”