Will Embodied AI Make Prosthetics More Humane?

Building a robotic arm and hand that matches human dexterity is tougher than it looks. We can create aesthetically pleasing ones, very functional ones, but the perfect mix of both? Still a work in progress. Just ask Sarah de Lagarde, who in 2022 literally lost an arm and a leg in a life-changing accident. In this BBC interview, she shares her experiences openly – highlighting both the promise and the limits of today’s prosthetics.

The problem is that our hands aren’t just grabby bits. They’re intricate systems of nerves, tendons, and ridiculously precise motor control. Even the best AI-powered prosthetics rely on crude muscle signals, while dexterous robots struggle with the simplest things — like tying shoelaces or flipping a pancake without launching it into orbit.

That doesn’t mean progress isn’t happening. Researchers are training robotic fingers on real-world data, moving from ‘oops’ to actual precision. Embodied AI, i.e. machines that learn by physically interacting with their environment, is bridging the gap. Soft robotics with AI-driven feedback loops mimic how our fingers instinctively adjust grip pressure. If haptics are your thing, we have posted about them before.
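The grip-pressure feedback loop mentioned above can be sketched in a few lines. This is a minimal illustration with made-up names and numbers (nothing here comes from a real prosthetic API): a proportional controller nudges the motor’s force command toward a target, the way a finger instinctively tightens or relaxes its grip.

```python
def adjust_grip(current_force: float, target_force: float, gain: float = 0.5) -> float:
    """One control step: nudge the motor force command toward the target.

    A hypothetical proportional controller -- the correction is
    proportional to the error between sensed and desired grip force.
    """
    error = target_force - current_force
    return current_force + gain * error


def settle(initial: float, target: float, steps: int = 20) -> float:
    """Run the loop for a fixed number of steps and return the final force."""
    force = initial
    for _ in range(steps):
        force = adjust_grip(force, target)
    return force
```

Each step halves the remaining error (with `gain = 0.5`), so the commanded force converges smoothly on the target instead of overshooting and crushing the object. Real systems would add slip detection, sensor noise handling, and learned rather than hand-tuned gains.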

The future isn’t just robots copying our movements; it’s about them understanding touch. Instead of machine learning alone, we might want to shift focus to how humans learn. If AI cracks that, we’re one step closer to prosthetics that are truly humane.

Original photo by Marco Bianchetti on Unsplash


4 thoughts on “Will Embodied AI Make Prosthetics More Humane?”

  1. This is the sort of thing AI is actually great for.

    It’s like games using AI to generate extra frames because of the PCIe bottleneck leaving the GPU idle for milliseconds between frames. Since the CPU can’t communicate with the GPU fast enough, the GPU uses AI to infer what the CPU would be saying if it could.

    Or using AI in media decompression, intelligently reconstructing details crushed in lossy compression that’s required by low bandwidth connections.

    If we can’t match organic nerves for signal propagation and decoding, AI can fill the gaps.

  2. To improve prosthetics they need only to be able to accurately communicate information to and from the brain. The brain will be able to work out how to control the prosthetic better from there. What they suggest is making a shortcut by using AI that doesn’t actually improve communication. To me this feels like a cop-out instead of doing the hard work (R&D) that is needed to improve man-machine communications.

    1. (complete amateur feedback here)

> The brain will be able to work out how to control the prosthetic better from there.

      Agreed. Isn’t this an ongoing process in all our lives with all our sensors and “actuators”? Constantly learning, re-learning and adjusting “what is where” (the spatial orientation/position of our own extremities), “how we feel what”, “what signals need to be sent to make a specific move” and so forth …

      I’d assume no one’s brain signals to the hand, and the sensory information coming back, are identical to anyone else’s when doing the same task.
      They probably even change during a single individual’s lifetime.

      -> Using ML algorithms to “train” chips for delicate fine control in a closed loop with the sensors in a prosthetic hand may be a good idea.
      Even when “fitting”(?) one to a patient, as a limited initial learning stage between human and machine.

      But after that, I’d think the input/output should be “static”.
      The AI should, at most, compensate internally for the prosthesis’s own wear and tear.
