Recycling is, on paper at least, a wonderful thing. Taking waste and converting it into new usable material is generally more efficient than digging up more raw materials. Unfortunately, though, sorting this waste material is a labor-intensive process. With China implementing bans on waste imports, the rest of the world is suddenly struggling to find anywhere to accept its waste for reprocessing. In an attempt to help solve this problem, MIT’s CSAIL group has developed a recycling robot.
The robot aims to reduce the reliance on human sorters and thus improve the viability of recycling operations. It does this with a novel approach: special actuators that sort by material stiffness and conductivity. The actuators are known as handed shearing auxetics – a type of actuator that expands in width when stretched. By having two of these oppose each other, they can grip a variety of objects without the careful control of orientation and grip force that conventional rigid grippers require. With pressure sensors to determine how much a material squishes, and a capacitive sensor to determine conductivity, it’s possible to sort materials into paper, plastic, and metal bins.
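If you want a feel for the decision the gripper has to make, here’s a minimal sketch of that two-sensor classification in Python. The thresholds, the normalized sensor readings, and the function names are all invented for illustration – this isn’t code from the paper.

```python
# Hypothetical sketch of the sorting decision: a pressure reading stands in
# for stiffness, a capacitive reading for conductivity. Thresholds assumed.

def classify(squish: float, conductivity: float) -> str:
    """Classify a gripped item as 'metal', 'paper', or 'plastic'.

    squish       -- normalized 0..1, how much the object deforms under grip
    conductivity -- normalized 0..1, reading from the capacitive sensor
    """
    CONDUCTIVE_THRESHOLD = 0.5  # assumed: metals read high on the capacitive sensor
    SQUISH_THRESHOLD = 0.6      # assumed: paper deforms far more than plastic

    if conductivity > CONDUCTIVE_THRESHOLD:
        return "metal"
    if squish > SQUISH_THRESHOLD:
        return "paper"
    return "plastic"


if __name__ == "__main__":
    print(classify(squish=0.2, conductivity=0.9))   # a soda can -> metal
    print(classify(squish=0.8, conductivity=0.1))   # crumpled sheet -> paper
    print(classify(squish=0.3, conductivity=0.05))  # rigid bottle -> plastic
```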
The research paper outlines the development of the gripper in detail. Care was taken to build something that is robust enough to deal with the recycling environment, as well as capable of handling the sorting tasks. There’s a long way to go to take this proof of concept to the commercially viable stage, but it’s a promising start to a difficult resource problem.
Robot design traditionally separates the body geometry from the mechanics of the gait, but they both have a profound effect upon one another. What if you could play with both at once, and crank out useful prototypes cheaply using just about any old 3D printer? That’s where Interactive Robogami comes in. It’s a tool from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) that aims to let people design, simulate, and then build simple robots with a “3D print, then fold” approach. The idea behind the system is partly to take advantage of the rapid prototyping afforded by 3D printers, but mainly it’s to change how the design work is done.
To make a robot, the body geometry and limb design are all done and simulated in the Robogami tool, where different combinations can have a wild effect on locomotion. Once a design is chosen, the end result is a 3D printable flat pack which is then assembled into the final form with a power supply, Arduino, and servo motors.
A white paper is available online and a demonstration video is embedded below. It’s debatable whether these devices on their own qualify as “robots” since they have no sensors, but as a tool to quickly prototype robot body geometries and gaits it’s an excitingly clever idea.
There are a lot of ways to try to mathematically quantify how healthy a person is. Things like resting pulse rate, blood pressure, and blood oxygenation are all quite simple to measure and can be used to predict various clinical outcomes. However, one you may not have considered is gait velocity, or the speed at which a person walks. Gait velocity turns out to be a viable way to predict the onset of a wide variety of conditions, such as congestive heart failure or chronic obstructive pulmonary disease. As people become sick, elderly, or infirm, they tend to walk slower – just like the little riflemen in your favourite RTS when their health bar’s way in the red. But how does one measure this? MIT’s CSAIL has stepped up, with a way to measure walking speed completely wirelessly.
You can read the paper here (PDF). The WiGate device sends out a low-power radio signal, then measures the reflections to determine a person’s location over time. Alone, however, this is not enough – it’s important to isolate walking speed specifically, to avoid false positives from a person who is simply sitting still watching television, for example. Algorithms separate the walking activity from the rest of the data, allowing the device to sit in the background, recording walking speed with no user interaction required whatsoever.
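As a rough illustration of the idea, here’s a toy sketch that turns a position track into a gait velocity estimate, throwing away samples that don’t look like walking. It assumes the radio hardware already yields (time, x, y) position estimates; the thresholds and the simple median are stand-ins for the far more sophisticated algorithms in the paper.

```python
# Minimal sketch, assuming a time series of (t_seconds, x_meters, y_meters)
# position estimates is already available. Thresholds are invented.
import math

def walking_speeds(track, min_speed=0.4, max_speed=2.5):
    """Return per-sample speeds (m/s) that plausibly correspond to walking.

    Samples slower than min_speed (standing, fidgeting on the couch) or
    faster than max_speed (tracking glitches) are discarded.
    """
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        v = math.hypot(x1 - x0, y1 - y0) / dt
        if min_speed <= v <= max_speed:
            speeds.append(v)
    return speeds

def gait_velocity(track):
    """Median walking speed over the track, or None if no walking was seen."""
    speeds = sorted(walking_speeds(track))
    return speeds[len(speeds) // 2] if speeds else None

if __name__ == "__main__":
    # Fake track: 10 s of standing still, then 10 s of walking at ~1 m/s.
    track = [(t, 0.0, 0.0) for t in range(10)]
    track += [(10 + t, float(t), 0.0) for t in range(10)]
    print(gait_velocity(track))  # ~1.0 m/s; the standing period is ignored
```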
This form of passive monitoring could have great applications in nursing homes, where staff often have a huge number of patients to monitor. It would allow the collection of clinically relevant data without the need for any human intervention; the device could simply alert staff when a patient’s walking pattern is indicative of a bigger problem.
Training machines to effectively complete tasks is an ongoing area of research. This can be done in a variety of ways, from complex programming interfaces to systems that understand commands in natural language. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wanted to see if it was possible for humans to communicate more directly when training a robot. Their system allows a user to correct a robot’s actions using only their brain.
The concept is simple – using an EEG cap to detect brainwaves, the system watches for a special type of brain signal called an “error-related potential”. The human operator merely has to notice the robot making a mistake; the detected signal prompts the robot to correct itself and – for a nice extra touch – blush in embarrassment.
This makes for a very intuitive way of working with a robot – as soon as the human notices a mistake, the robot automatically stops or corrects its behaviour. Currently the system only handles very simple tasks – the video shows the robot sorting objects of two types into corresponding bins. The robot knows that if the human has detected an error, it must simply place the object in the other bin. Further research seeks to expand this automatic brainwave feedback to train robots for more complex tasks. You can read the research paper here.
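For the two-bin case, the correction logic really is that simple – something like the toy sketch below, where detect_errp() is merely a placeholder for the real EEG classification pipeline and all the names are made up for illustration.

```python
# Toy sketch of the correction loop for a two-bin sorting task.
# detect_errp() stands in for the actual EEG classifier, which is far more
# involved; nothing here is the paper's implementation.
import random

BINS = ("bin_A", "bin_B")

def detect_errp() -> bool:
    """Placeholder: returns True if an error-related potential was detected
    in the observer's brainwaves after the robot committed to a bin."""
    return random.random() < 0.3  # pretend the robot guesses wrong 30% of the time

def other_bin(bin_name: str) -> str:
    return BINS[1 - BINS.index(bin_name)]

def sort_object(initial_guess: str) -> str:
    """Commit to a bin, then flip the choice if the human's ErrP says so."""
    choice = initial_guess
    if detect_errp():
        # With only two bins, "you got it wrong" fully specifies the fix.
        choice = other_bin(choice)
    return choice

if __name__ == "__main__":
    for _ in range(5):
        print(sort_object("bin_A"))
```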
MIT’s Computer Science and Artificial Intelligence Laboratory, CSAIL, recently put out a paper about an interesting advance in 3D printing. Naturally, being the computer science and AI lab, the paper has a robotic bent to it. In summary, they can 3D print a robot with a rubber skin of arbitrarily varying stiffness. The end goal? Shock-absorbing skin!
They modified an Objet printer to print with three materials simultaneously: a UV-curing solid, a UV-curing rubber, and an unreactive liquid. By carefully depositing these in a pattern, they can print a material with almost any stiffness they like. In doing so they have been able to print monobody robots that, simply put, crash into the ground better. There are other uses of course, from joints to sensor housings. There’s more in the paper.
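One way to picture how a pattern of droplets can grade stiffness is simple stochastic dithering: the larger the fraction of a region’s droplets that end up as cured rubber rather than trapped liquid, the stiffer the bulk material. The sketch below is our own illustration of that idea, not the deposition strategy from the paper.

```python
# Illustrative sketch only: dither between rubber and liquid droplets so the
# local cured fraction tracks a target stiffness. Not the paper's method.
import random

def material_for_voxel(target_stiffness: float) -> str:
    """Pick a droplet material for one voxel.

    target_stiffness -- 0.0 (as soft as possible) .. 1.0 (fully cured rubber).
    Over many voxels the cured fraction approaches the target, so the bulk
    stiffness varies smoothly across the part.
    """
    return "rubber" if random.random() < target_stiffness else "liquid"

def skin_row(width: int, stiffness_profile) -> list:
    """Build one row of voxels whose stiffness follows stiffness_profile(u), u in [0, 1]."""
    return [material_for_voxel(stiffness_profile(x / (width - 1))) for x in range(width)]

if __name__ == "__main__":
    # Soft at the impact face, stiff toward the body: a simple linear gradient.
    row = skin_row(40, lambda u: u)
    print("".join("R" if m == "rubber" else "." for m in row))
```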
We’re not sure how this compares to the Objet’s existing ability to mix flexible resins together to produce different Shore ratings. Likely this offers more seamless transitions and a wider range of material properties. From the paper it also appears to dampen better than the alternatives. Either way, it’s an interesting advance and approach. We wonder if it’s possible to reproduce on a larger scale with FDM.
If a picture is worth a thousand words, a video must be worth millions. However, computers still aren’t very good at analyzing video. Machine vision software like OpenCV can do certain tasks like facial recognition quite well. But current software isn’t good at determining the physical nature of the objects being filmed. [Abe Davis, Justin G. Chen, and Fredo Durand] are members of the MIT Computer Science and Artificial Intelligence Laboratory. They’re working toward a method of determining the structure of an object based upon the object’s motion in a video.
The technique relies on vibrations, which can be captured by a typical 30 or 60 frames-per-second (fps) camera. Here’s how it works: a locked-down camera images an object, the object is moved by wind, someone banging on it, or any other mechanical means, and this movement is captured on video. The team’s software then analyzes the video to see exactly where the object moved, and how much it moved. Complex objects can have many vibration modes. The wire frame figure used in the video is a great example: the hands of the figure will vibrate more than the figure’s feet. The software uses this information to construct a rudimentary model of the object being filmed. It then allows the user to interact with the object by clicking and dragging with a mouse. Dragging the hands will produce more movement than dragging the feet.
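To get a feel for one piece of the pipeline, here’s a simplified stand-in: given a point’s displacement tracked across frames, an FFT picks out its dominant vibration frequency. The real work uses phase-based motion extraction and full modal analysis; the synthetic data and numbers below are purely illustrative.

```python
# Simplified sketch: find the dominant vibration frequency of a tracked point
# from its per-frame displacement. Synthetic data stands in for real tracking.
import numpy as np

FPS = 60  # typical consumer camera frame rate

def dominant_frequency(displacement: np.ndarray, fps: float = FPS) -> float:
    """Return the strongest vibration frequency (Hz) in a 1-D displacement signal."""
    signal = displacement - displacement.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return float(freqs[spectrum[1:].argmax() + 1])  # skip the DC bin

if __name__ == "__main__":
    # Synthetic stand-ins for a tracked "hand" (large 5 Hz sway) and "foot"
    # (small 12 Hz jitter) of the wire-frame figure.
    t = np.arange(0, 4, 1.0 / FPS)
    hand = 3.0 * np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(len(t))
    foot = 0.3 * np.sin(2 * np.pi * 12 * t) + 0.1 * np.random.randn(len(t))
    print(dominant_frequency(hand))  # ~5 Hz
    print(dominant_frequency(foot))  # ~12 Hz
```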
The results aren’t perfect – they remind us of computer animated objects from just a few years ago. However, this is very promising. These aren’t textured wire frames created in 3D modeling software. The models and skeletons were created automatically using software analysis. The team’s research paper (PDF link) contains all the details of their research. Check it out, and check out the video after the break.
A group of MIT, Microsoft, and Adobe researchers have managed to reproduce sound using video alone. The sounds we make bounce off every object in the room, causing microscopic vibrations. The Visual Microphone utilizes a high-speed video camera and some clever signal processing to extract an audio signal from these vibrations. Using video of everyday objects such as snack bags, plants, Styrofoam cups, and water, the team was able to reproduce tones, music, and speech. Capturing audio from light isn’t exactly new – laser microphones have been around for years. The difference here is that the visual microphone is a completely passive device: no laser or special illumination is required.
The secret is in the signal processing, which the team explains in their SIGGRAPH paper (PDF link). They used a complex steerable pyramid along with wavelet filters to obtain local pixel motion values, which are then averaged into a global motion value. From this global motion value the team is able to measure movement down to 1/1000 of a pixel – plenty of resolution to decode audio data.
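As a much cruder stand-in for the steerable-pyramid processing, here’s a sketch that estimates a single sub-pixel shift per frame with a Lucas-Kanade-style least-squares fit and treats that shift-over-time signal as the recovered audio. It is not the team’s method – just an illustration of how sub-pixel motion can fall out of a least-squares fit.

```python
# Crude illustration: recover a vibration signal by estimating one global
# sub-pixel horizontal shift per frame against a reference frame.
import numpy as np

def global_shift(frame: np.ndarray, reference: np.ndarray) -> float:
    """Least-squares (Lucas-Kanade style) estimate of horizontal shift."""
    gx = np.gradient(reference, axis=1)    # spatial gradient of the reference
    dt = frame.astype(float) - reference   # temporal difference
    denom = np.sum(gx * gx)
    return float(np.sum(gx * dt) / denom) if denom > 0 else 0.0

def recover_signal(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, height, width) array from a high-speed camera."""
    reference = frames[0].astype(float)
    motion = np.array([global_shift(f, reference) for f in frames])
    return motion - motion.mean()          # remove the DC offset

if __name__ == "__main__":
    # Synthetic test: a blurry edge shaking horizontally at 440 Hz,
    # "filmed" at 2200 frames per second with ~1/20th-pixel amplitude.
    fps, n, w = 2200, 2200, 64
    x = np.arange(w)
    t = np.arange(n) / fps
    shifts = 0.05 * np.sin(2 * np.pi * 440 * t)
    frames = np.array([1.0 / (1.0 + np.exp(-(x - w / 2 - s))) * np.ones((w, 1))
                       for s in shifts])
    audio = recover_signal(frames)
    peak = np.fft.rfftfreq(n, 1 / fps)[np.abs(np.fft.rfft(audio))[1:].argmax() + 1]
    print(f"dominant recovered frequency: {peak:.0f} Hz")  # ~440
```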
Most of the research was performed with high-speed video cameras, which are well outside the budget of the average hacker. Don’t despair, though – the team proved that the same magic can be performed with consumer cameras, albeit with lower-quality results. The trick is to take advantage of the rolling shutter found in most of today’s CMOS-sensor consumer cameras. Rolling-shutter sensors capture an image one row at a time, so each row can be processed in a similar fashion to the frames of the high-speed camera, although there are inter-frame gaps when the camera isn’t recording anything. Even with the reduced resolution, it’s easy to pick out “Mary had a little lamb” in the video below.
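To see why a rolling shutter helps, consider the timing: every row readout is effectively another sample. The numbers below (row readout time, row count) are assumptions for illustration, not measurements from the paper.

```python
# Back-of-the-envelope sketch of rolling-shutter timing, with assumed numbers.
FPS = 60
ROWS = 720
ROW_TIME = 15e-6        # assumed: time between successive row readouts, seconds
FRAME_PERIOD = 1 / FPS  # ~16.7 ms between frame starts

def sample_times(n_frames: int):
    """Timestamps of every row readout across n_frames frames."""
    times = []
    for f in range(n_frames):
        start = f * FRAME_PERIOD
        times.extend(start + r * ROW_TIME for r in range(ROWS))
    return times

if __name__ == "__main__":
    t = sample_times(2)
    readout = ROWS * ROW_TIME  # time spent actually reading one frame
    print(f"effective samples/s: {len(t) / (2 * FRAME_PERIOD):.0f}")        # ~43200 vs 60 fps
    print(f"dead time per frame: {(FRAME_PERIOD - readout) * 1e3:.1f} ms")  # the inter-frame gap
```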
We’re blown away by this research, and we’re sure certain organizations will be looking into it for their own use. Don’t pull out your tin foil hats yet though. Foil containers proved to be one of the best sound reflectors.