Four images in as many panes. Top left: a fuchsia bottle with a QR code that only shows up on the smartphone screen held above it. Top right: a person holding a smartphone over a red wristband; the phone displays a QR code that its camera can see but that is invisible at visible wavelengths. Bottom left: a closeup of the red wristband in visible light. Bottom right: the same wristband in IR, showing the three QR codes embedded in the object.

Fluorescent Filament Makes Object Identification Easier

QR codes are a handy way to embed information, but they aren’t exactly pretty. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to produce high-contrast QR codes that are invisible to the naked eye. [PDF]

If this sounds familiar, you may remember CSAIL’s previous project embedding QR codes into 3D prints via IR-transparent filament. This follow-up to that research improves detection of the embedded tags by using an IR-fluorescent filament. Another benefit of this new approach is that while the InfraredTags could be any color you wanted as long as it was black, BrightMarkers can be embedded in objects of any color, since the important IR component is now embedded in traditional filament instead of the other way around.

One of the more interesting applications is privacy-preserving object detection since the computer vision system only “sees” the fluorescent objects. The example given is marking a box of valuables in a home to be detected by interior cameras without recording the movements of the home’s occupants, but the possibilities certainly don’t end there, especially given the other stated application of tactile interfaces for VR or AR systems.
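On the detection side, the fluorescence does most of the filtering work: anything in an IR frame that isn’t glowing can be discarded before the QR decoder ever sees it. A minimal sketch of that idea using OpenCV follows – the function name and the brightness threshold are our own illustrative choices, not values from the paper.

```python
import cv2

def read_brightmarker(ir_frame, threshold=200):
    """Decode a fluorescent marker from a single IR camera frame.

    Pixels below the brightness threshold are blanked out, so the QR decoder
    only ever sees the glowing marker and none of the rest of the scene.
    The threshold value of 200 is an illustrative guess, not a paper figure.
    """
    gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(mask)
    return data or None  # decoded payload, or None if nothing was found
```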

We’re interested to see if the researchers can figure out how to tune the filament to fluoresce in more colors to increase the information density of the codes. Now, go forth and 3D print a snake with Snake in a QR code inside!


MIT’s Hair-Brushing Robot Untangles Difficult Robotics Problem

Whether you care to admit it or not, hair is important to self-image, and not being able to deal with it yourself feels like a real loss of independence. To help people with limited mobility, researchers at MIT CSAIL have created a hair-brushing robot that combines a camera with force feedback and closed-loop control to adjust on the fly to any hair type, from straight to curly. They achieved this by treating hair as double helices of soft fibers and developing a mathematical model to untangle them much like a human would — by working from the bottom up.

It may look like a hairbrush strapped to a robot arm, but there’s more to it than that. Before it ever starts brushing, the robot’s camera takes a picture that gets cropped down to a rectangle of pure hair data. This image is converted to grayscale, and then the program analyzes the x/y image gradients. The straighter the hair, the more edges it has in the x-direction, whereas curly hair is more evenly distributed. Finally, the program computes the ratio of straightness to curliness, and uses this number to set the pain threshold.
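The exact formula isn’t spelled out in the post, but the gradient trick can be sketched in a few lines of Python. Everything below – the function names and the 0.5 N and 2.0 N force endpoints – is our own illustrative guess, not CSAIL’s code.

```python
import numpy as np

def curliness(gray: np.ndarray) -> float:
    """Rough curliness estimate from a cropped grayscale image of hair.

    Straight hair concentrates its edges along one axis, so the gradient
    energy is lopsided; curly hair spreads edges evenly, pushing the ratio
    toward 1.0.
    """
    gy, gx = np.gradient(gray.astype(float))      # per-pixel image gradients
    ex, ey = np.abs(gx).sum(), np.abs(gy).sum()   # total edge energy per axis
    return min(ex, ey) / (max(ex, ey) + 1e-9)     # ~0 straight, ~1 very curly

def force_limit(curl: float, straight_n: float = 0.5, curly_n: float = 2.0) -> float:
    """Map curliness to a maximum brushing force in newtons.

    The endpoints are made-up placeholders; the real system calibrates
    against forces a person reports as comfortable.
    """
    return straight_n + curl * (curly_n - straight_n)
```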

The brush is equipped with sensors that measure the forces being exerted on the hair and scalp as it’s being brushed, and compares this input to a baseline established by a human who used it to brush their own hair. We think it would be awesome if the robot could grasp the section of hair first so the person can’t feel the pull against their scalp, and start by brushing out the ends before brushing from the scalp down, but we admit that would be asking a lot. Maybe they could get it to respond to exclamations like ‘ow’ and ‘ouch’. Human trials are still in the works. For now, watch it gently brush out various wigs after the break.
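In rough pseudocode terms, the closed-loop part of a single stroke might look like the sketch below; the arm and sensor interfaces are hypothetical stand-ins, not the actual CSAIL stack.

```python
def brush_stroke(arm, force_sensor, baseline_n, limit_n):
    """One downward brush stroke with force feedback.

    baseline_n: typical force (newtons) recorded while a person brushed
    their own hair; limit_n: cap derived from the curliness estimate.
    The `arm` and `force_sensor` objects are hypothetical interfaces.
    """
    while not arm.stroke_finished():
        f = force_sensor.read()          # force on the brush head right now
        if f > limit_n:                  # snag: back the brush off entirely
            arm.retract_slightly()
        elif f > baseline_n:             # pulling harder than a human would
            arm.slow_down()
        else:
            arm.continue_stroke()
```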

Even though we have wavy hair that tangles quite easily, we would probably let this robot brush our hair. But this haircut robot? We’re not that brave.


MIT Prints Robots With Lasers

MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) wants to convert laser cutters into something more. By attaching a head to a commercial laser cutter and adding software, they combine the functions of a cutter, a conductive printer, and a pick and place system. The idea is to enable construction of entire devices such as robots and drones.

The concept, called LaserFactory, sounds like a Star Trek-style replicator, but it doesn’t create things like circuit elements and motors. It simply picks them up, places them, and connects them using silver conductive ink. You can get a good idea of how it works by watching the video below.
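As a purely illustrative way to picture the workflow – none of these names come from the LaserFactory software – a build is essentially an ordered list of cut, trace, and place operations handed to one machine:

```python
# A hypothetical job description; LaserFactory's actual file format and API
# are not shown in the article, so every name here is illustrative only.
drone_job = [
    ("cut",      {"path": "frame_outline.svg"}),             # laser-cut the body
    ("dispense", {"trace": "battery_to_esc", "ink": "Ag"}),   # silver conductive traces
    ("place",    {"part": "motor", "at": (40, 12)}),          # pick and place parts
    ("place",    {"part": "flight_controller", "at": (0, 0)}),
]

def run(job, machine):
    for op, args in job:
        getattr(machine, op)(**args)   # e.g. machine.cut(path="frame_outline.svg")
```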


Automating The Disinfection Of Large Spaces With Robots

What do you do when you have to disinfect an entire warehouse? You could send a group of people through the place with UV-C lamps, but that would take a long time, as said humans cannot be in the same area as the UV-C radiation, as much as they may like the smell of BBQ chicken. Constantly repositioning the lamps or installing countless fixed lamps would get in the way during normal operation. The answer, according to MIT’s CSAIL, is to strap UV-C lights to a robot and have it ride around the space.

As can be seen in the video (also embedded after the break), a CSAIL group has been working with telepresence robotics company Ava Robotics and the Greater Boston Food Bank (GBFB). Their goal was to create a robotic system that could autonomously disinfect a GBFB warehouse using UV-C without exposing any humans to the harmful radiation. While the robot can be controlled remotely, it can also map the space and navigate between waypoints on its own.

While testing the system, the team used a UV-C dosimeter to confirm the effectiveness of the setup. With the robot driving along at a leisurely 0.22 miles per hour (~0.35 km/h), it was able to cover approximately 4,000 square feet (~372 square meters) in about half an hour. They estimated that about 90% of viruses like SARS-CoV-2 could be neutralized this way.
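Those numbers hang together with some back-of-the-envelope math, if we assume the lamps effectively disinfect a swath roughly seven feet wide as the robot drives past (the swath width is our assumption; the speed, time, and area come from the article):

```python
# Back-of-the-envelope check of the coverage figures.
speed_mph = 0.22                                 # from the article
minutes   = 30                                   # from the article
swath_ft  = 7.0                                  # assumed effective UV-C width

distance_ft = speed_mph * 5280 * (minutes / 60)  # distance travelled in feet
area_sqft   = distance_ft * swath_ft             # area swept by the lamps

print(f"~{distance_ft:.0f} ft travelled, ~{area_sqft:.0f} sq ft covered")
# ~581 ft travelled, ~4066 sq ft covered – in line with the reported 4,000 sq ft
```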

During trial runs, they discovered the need to have the robot adapt to the constantly changing layout of the warehouse, including which aisles require which UV-C dose depending on how full they are. Having multiple of these robots coordinate with each other in the same space would also be a useful feature to add.


An Algorithm For De-Biasing AI Systems

A fundamental truth about AI systems is that training the system with biased data creates biased results. This can be especially dangerous when the systems are being used to predict crime or select sentences for criminals, since the models can hinge on unrelated traits such as race or gender to make their determinations.

A group of researchers from the Massachusetts Institute of Technology (MIT) CSAIL is working on a solution to “de-bias” data by resampling it to be more balanced. The paper published by PhD students [Alexander Amini] and [Ava Soleimany] describes an algorithm that can learn a specific task – such as facial recognition – as well as the structure of the training data, which allows it to identify and minimize any hidden biases.
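The full method learns that latent structure with a variational autoencoder, but the resampling idea on its own is easy to sketch: estimate how common each training example’s latent features are, then draw under-represented examples more often. The code below is our simplification of that idea, not the paper’s exact algorithm.

```python
import numpy as np

def debias_weights(latents: np.ndarray, bins: int = 10) -> np.ndarray:
    """Sampling weights that favor under-represented regions of latent space.

    latents: (N, D) array of latent features learned for each training image.
    Examples in crowded regions get low weight, rare ones get high weight,
    so a resampled batch ends up more balanced. This is a simplified stand-in
    for the paper's learned-latent resampling.
    """
    density = np.ones(len(latents))
    for d in range(latents.shape[1]):
        # Approximate the joint density as a product of per-dimension histograms.
        hist, edges = np.histogram(latents[:, d], bins=bins, density=True)
        idx = np.digitize(latents[:, d], edges[1:-1])   # histogram bin per example
        density *= hist[idx] + 1e-9
    weights = 1.0 / density
    return weights / weights.sum()

# Usage: draw a "de-biased" training batch by sampling with these weights, e.g.
# batch_idx = np.random.choice(len(latents), size=64, p=debias_weights(latents))
```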

Testing showed that the algorithm reduced “categorical bias” by over 60% compared with other widely cited facial detection models, all while maintaining the same detection precision. The result held up when the team evaluated the algorithm on a facial-image dataset from the Algorithmic Justice League, a spin-off group from the MIT Media Lab.

The team says that their algorithm would be particularly relevant for large datasets that can’t easily be vetted by a human, and can potentially rectify algorithms used in security, law enforcement, and other domains beyond facial detection.

How To Telepathically Tell A Robot It Screwed Up

Training machines to effectively complete tasks is an ongoing area of research. This can be done in a variety of ways, from complex programming interfaces to systems that understand commands in natural language. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wanted to see if it was possible for humans to communicate more directly when training a robot. Their system allows a user to correct a robot’s actions using only their brain.

The concept is simple – using an EEG cap to detect brainwaves, the system measures a special type of brain signal called an “error-related potential”. The human simply noticing the robot making a mistake allows the robot to correct itself and, for a nice extra touch, blush in embarrassment.

This interface allows for a very intuitive way of working with a robot – upon noticing a mistake, the robot is able to automatically stop or correct its behaviour. Currently the system is only capable of being used for very simple tasks – the video shows the robot sorting objects of two types into corresponding bins. The robot knows that if the human has detected an error, it must simply place the object in the other bin. Further research seeks to expand the possibilities of using this automatic brainwave feedback to train robots for more complex tasks. You can read the research paper here.
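Since the feedback amounts to a single bit per trial – “error” or “no error” – the correction logic itself is trivial. A sketch of the loop, with the EEG classifier and robot interface as hypothetical placeholders:

```python
def sort_object(robot, eeg, classify_errp, item):
    """Place an item, then flip the choice if the human's EEG flags a mistake.

    classify_errp: a model returning True when an error-related potential is
    detected in the EEG window following the robot's action. Every interface
    here is an illustrative placeholder, not the CSAIL code.
    """
    bin_choice = robot.guess_bin(item)            # robot's initial sorting decision
    robot.move_toward(bin_choice)
    window = eeg.read_window(ms=500)              # brain response right after the move
    if classify_errp(window):                     # the human noticed a mistake
        bin_choice = robot.other_bin(bin_choice)  # binary task: just switch bins
        robot.move_toward(bin_choice)
    robot.drop(item)
```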

MIT’s CSAIL works on lots of exciting projects – their video microphone technology is truly astounding.

[Thanks to Adam Connor-Simmons for the tip!]