Several video clips of a robot arm manipulating objects in a kitchen environment, demonstrating some of the 12 generalized skills

RoboAgent Gets Its MT-ACT Together

Researchers at Carnegie Mellon University have shared a pre-print paper on generalized robot training within a small “practical data budget.” The team developed MT-ACT (Multi-Task Action Chunking Transformer), a system that breaks manipulation tasks into 12 “skills” (e.g., pick, place, slide, wipe) that can be combined to create new, complex trajectories in at least somewhat novel scenarios. The authors write:

Trained merely on 7500 trajectories, we are demonstrating a universal RoboAgent that can exhibit a diverse set of 12 non-trivial manipulation skills (beyond picking/pushing, including articulated object manipulation and object re-orientation) across 38 tasks and can generalize them to 100s of diverse unseen scenarios (involving unseen objects, unseen tasks, and to completely unseen kitchens). RoboAgent can also evolve its capabilities with new experiences.
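If “action chunking” sounds opaque, the core trick is simple: instead of predicting one action per timestep, the policy predicts a short chunk of future actions at once and executes it before replanning, which smooths trajectories and cuts down on inference calls. Here’s a minimal Python sketch of that loop; the `policy` stub, the chunk length, and the 7-DoF action shape are our own illustrative assumptions, not the paper’s actual model:

```python
# A minimal sketch of the action-chunking idea, assuming a hypothetical
# `policy` that maps an observation to a chunk of H future actions.
import numpy as np

CHUNK = 8  # H: actions predicted per inference, executed before replanning


def policy(observation: np.ndarray) -> np.ndarray:
    """Stand-in for the learned model: returns a (CHUNK, dof) action chunk."""
    rng = np.random.default_rng(0)  # fixed seed: placeholder output only
    return rng.normal(size=(CHUNK, 7))  # e.g. 7-DoF arm joint targets


def rollout(observation: np.ndarray, steps: int = 32) -> list:
    """Execute chunks open-loop, replanning every CHUNK steps."""
    executed = []
    while len(executed) < steps:
        chunk = policy(observation)
        for action in chunk:
            executed.append(action)
            # observation = env.step(action)  # advance the (omitted) environment
    return executed
```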



Researchers Seek To Create The Digital Smell Interface

We hear digital audio, we see digital video, and we feel digital haptic feedback. However, we don’t have an analog for the sense of smell. [Kasun] and his team of researchers from the Imagineering Institute in Malaysia are in the midst of changing that reality. Their project aims to transmit fragrances via electronic stimulation, though it’s really more of a step toward creating a multi-sensory internet.

The team’s “electric smell machine” consists of a variable power supply connected to silver electrodes wrapped around an endoscopic camera. The camera is necessary to ensure the electrodes make contact with the user’s olfactory bulb as current pulses through them. The current varies based upon the scent being replicated and sits in the 0.2 mA neighborhood. Early trials of the machine revealed that around one-quarter of test subjects were able to identify the smells being replicated. They reported smells being fruity, sweet, and woody, though all had a chemical-like odor attached.
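To get a feel for how such a rig might be driven, here’s a purely hypothetical sketch of per-scent stimulation profiles. The only number drawn from the write-up is the roughly 0.2 mA current range; every scent name, pulse timing, and driver call below is invented for illustration:

```python
# Hypothetical per-scent stimulation profiles; only the ~0.2 mA operating
# range comes from the write-up. Timings, names, and the set_current
# driver are stand-ins, not the team's actual hardware interface.
from dataclasses import dataclass
import time


@dataclass
class StimulusProfile:
    current_ma: float  # electrode current in milliamps
    pulse_ms: int      # on-time per pulse
    gap_ms: int        # off-time between pulses


PROFILES = {
    "fruity": StimulusProfile(current_ma=0.18, pulse_ms=200, gap_ms=100),
    "woody":  StimulusProfile(current_ma=0.22, pulse_ms=300, gap_ms=150),
}


def set_current(milliamps: float) -> None:
    print(f"electrode current -> {milliamps:.2f} mA")  # stand-in for the supply


def stimulate(scent: str, pulses: int = 5) -> None:
    """Pulse the electrodes with the profile for the requested scent."""
    p = PROFILES[scent]
    for _ in range(pulses):
        set_current(p.current_ma)
        time.sleep(p.pulse_ms / 1000)
        set_current(0.0)
        time.sleep(p.gap_ms / 1000)
```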

The concept of “smell-o-vision” is not a new one; it has been around longer than motion pictures with sound. Previous attempts at accompanying film and television with scent have relied on chemical reactions, and devices from those experiments typically involved cartridges that needed to be replaced when the chemical substances were depleted. [Kasun]’s team instead avoids chemicals altogether in favor of directly stimulating the olfactory receptors. Those interested in the gritty details can read the research paper on digitizing smell.

[Kasun] and his team uploaded a video on the project that you can view below. It’s all a work in progress at this point, but sign me up for a trial when they pinpoint the true essence of new car smell.


3D Printed Robotic Arms For Sign Language

A team of students in Antwerp, Belgium is responsible for Project Aslan, which is exploring the feasibility of using 3D printed robotic arms for assisting with and translating sign language. The idea came from the fact that sign language translators are few and far between, and it’s a task that robots may be able to help with. In addition to translation, robots may be able to assist with teaching sign language as well.

The project set out to use 3D printing and other technology to explore whether low-cost robotic signing could be of any use. So far the team has an arm that can convert text into finger spelling and counting. It’s an interesting use for a robotic arm; signing is an application for which range of motion is important, but there is no real need to carry or move any payloads whatsoever.

Closeup of hand actuators and design.

A single articulated hand is a good proof of concept, and these early results show promise, but there is still a long way to go. Sign language involves more than just the hands: it is performed using both hands, arms, and shoulders, and incorporates motions and facial expressions. Also, the majority of sign language is not finger spelling (which is reserved primarily for proper names or specific nouns), but a robot hand that is able to finger spell is an important first step toward everything else.
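As a rough illustration of that text-to-fingerspelling step, the sketch below maps letters to preset servo angles and plays them in sequence. The handshape table and servo interface are hypothetical stand-ins; Project Aslan’s actual joint layout and firmware aren’t covered in this excerpt:

```python
# A sketch of text-to-fingerspelling: look up a preset pose per letter
# and drive the servos in sequence. All angles are invented examples.
import time

# Hypothetical servo targets (degrees), one entry per finger group.
HANDSHAPES = {
    "a": [170, 170, 170, 170, 40],  # fist with thumb alongside
    "b": [10, 10, 10, 10, 150],     # flat fingers, thumb across palm
    "c": [90, 90, 90, 90, 90],      # curved "C" shape
}


def move_servos(angles: list) -> None:
    print("servo targets:", angles)  # stand-in for the real servo driver


def fingerspell(word: str, hold_s: float = 0.6) -> None:
    """Play each letter's handshape in sequence, skipping unknown letters."""
    for letter in word.lower():
        pose = HANDSHAPES.get(letter)
        if pose is None:
            continue  # no handshape defined for this letter in the sketch
        move_servos(pose)
        time.sleep(hold_s)


fingerspell("cab")
```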

Future directions for the project include adding a second arm, adding expressiveness, and exploring the use of cameras for teaching new signs. The ability to teach different signs is important, because any project that aims to act as a translator or facilitator needs the ability to learn and update. There is a lot of diversity among sign languages across the world. For people unfamiliar with signing, it may come as a surprise that, for example, not only is American Sign Language (ASL) related to French Sign Language, but both are entirely different from British Sign Language (BSL). A video of the project is embedded below.


Smart Eyeglasses That Auto Focus Where You Look

A University of Utah team has a working prototype of a new twist on fluid-filled lenses for correction of vision problems: automatic adjustment and refocus depending on what you’re looking at. The glasses have a distance sensor embedded in the front of the frame and continually adjust the focus of the lenses. An 8 gram, 110 mAh battery powers the prototype for roughly 6 hours.

Eyeglasses that can adapt on the fly to different focal needs are important because many people with degraded vision suffer from more than one condition at the same time, which makes addressing their vision problems more complex than a single corrective lens. For example, many people who are nearsighted or farsighted (where far objects or near objects, respectively, appear out of focus) also suffer from presbyopia, an age-related loss of the eye’s ability to change focus. As a result, people require multiple sets of eyeglasses for different conditions. Bifocal, trifocal, and progressive lenses are really just multiple sets of lenses squashed into a smaller form factor, and they greatly reduce the wearer’s field of view, which is itself a significant vision impairment. A full field of view could be restored if eyeglass lenses were able to adapt to different needs based on object distance, and that is what this project achieves.
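The refocusing math itself is straightforward: to focus an object at distance d, the lens needs roughly 1/d diopters of power on top of the wearer’s distance prescription, so a 40 cm reading distance works out to a 2.5 D add. Below is a minimal sketch of the resulting control loop under that relation; the sensor read and lens-driver hooks are hypothetical stand-ins, since the prototype’s firmware isn’t described here:

```python
# A minimal sketch of the adaptive-focus loop, using the standard diopter
# relation: required lens power ~ 1 / object distance (meters), added to
# the wearer's distance prescription. Sensor and actuator are hypothetical.

def required_power(distance_m: float, prescription_d: float = 0.0) -> float:
    """Lens power in diopters needed to focus an object at distance_m."""
    distance_m = max(distance_m, 0.25)  # clamp: treat anything closer as 25 cm
    return prescription_d + 1.0 / distance_m

# e.g. reading at 40 cm calls for a 2.5 D add for an emmetropic wearer:
print(required_power(0.40))  # -> 2.5

def control_loop(read_distance_m, set_lens_power_d, prescription_d=0.0):
    """Continually poll the rangefinder and drive the fluid lens."""
    while True:
        d = read_distance_m()  # hypothetical time-of-flight sensor read
        set_lens_power_d(required_power(d, prescription_d))
```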
