MIT Cryptographers Are No Match For A Determined Belgian

Twenty years ago, a cryptographic puzzle was included in the construction of a building on the MIT campus. The structure that houses what is now MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) includes a time capsule designed by the building’s architect, [Frank Gehry]. It contains artifacts related to the history of computing, and was meant to be opened only once someone solved the accompanying cryptographic puzzle, or after 35 years had elapsed.

The puzzle was not expected to be solved early, but [Bernard Fabrot], a developer in Belgium, has managed it using not a supercomputer but a run-of-the-mill Intel i7 processor. The capsule will be opened later in May.

The famous cryptographer [Ronald Rivest] put together what we now know is a deceptively simple challenge. It involves a successive squaring operation, and since each squaring depends on the result of the one before it, the computation is inherently sequential: there is no possibility of using parallel computing techniques to take any shortcuts. [Fabrot] used the GNU Multiple Precision Arithmetic Library in his code, and the computation took over three years to complete. Meanwhile, another team using an FPGA expects a solution within months, though they have been pipped to the post by the Belgian.
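For the curious, the core of such a time-lock puzzle is easy to sketch in a few lines of Python. The snippet below is an illustration with toy parameters, not the real puzzle values; the actual challenge used a much larger modulus and an enormous squaring count.

```python
# A minimal sketch of an LCS35-style time-lock puzzle: compute 2^(2^t) mod n
# by t sequential squarings. The modulus and squaring count below are toy
# values for illustration, not the actual puzzle parameters.

def solve_time_lock(n: int, t: int, base: int = 2) -> int:
    """Perform t successive squarings of `base` modulo n.

    Each squaring depends on the previous result, so the work cannot be
    split across cores -- exactly the property the puzzle relies on.
    """
    w = base % n
    for _ in range(t):
        w = (w * w) % n
    return w

if __name__ == "__main__":
    # Toy parameters; the real puzzle used a 2048-bit modulus and
    # roughly 80 trillion squarings.
    n = 143   # 11 * 13, tiny demo modulus
    t = 1000
    print(solve_time_lock(n, t))
```

Whoever knows the factorization of the modulus can take a shortcut by reducing the exponent first, which is how the puzzle setter can verify the answer without waiting decades; everyone else is stuck doing the squarings one at a time.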

The original specification document is a fascinating read, both for the details of the puzzle itself and for [Rivest]’s predictions as to the then-future direction of computing power. He expected the puzzle to take the full 35 years to solve, and that there would be 10 GHz processors by 2012, when Moore’s Law would begin to tail off, but he is reported as saying that he underestimated the corresponding advances in software.

Header image: Ray and Maria Stata Center, Tafyrn (CC BY 3.0)

This Bot Might Be The Way To Save Recycling

Recycling is, on paper at least, a wonderful thing. Taking waste and converting it into new usable material is generally more efficient than digging up more raw materials. Unfortunately, though, sorting this waste material is a labor-intensive process. With China implementing bans on waste imports, the world is suddenly struggling to find anywhere that will accept its waste for reprocessing. In an attempt to help solve this problem, MIT’s CSAIL group has developed a recycling robot.

The robot aims to reduce the reliance on human sorters and thus improve the viability of recycling operations. This is achieved through a novel approach of using special actuators that sort by material stiffness and conductivity. The actuators are known as handed shearing auxetics – a type of actuator that expands in width when stretched. By having two of these oppose each other, they can grip a variety of objects without having to worry about orientation or grip strength the way conventional rigid grippers must. With pressure sensors to determine how much a material squishes, and a capacitive sensor to determine conductivity, it’s possible to sort materials into paper, plastic, and metal bins.
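The decision logic itself can be imagined as something quite compact. Here’s a hedged sketch in Python; the sensor fields and thresholds are made up for illustration, and the real system derives its cutoffs from the gripper’s actual pressure and capacitance readings.

```python
# A hedged sketch of the sorting decision described above. The GripReading
# fields and the 0.4 threshold are hypothetical stand-ins for the gripper's
# real pressure and capacitive measurements.

from dataclasses import dataclass

@dataclass
class GripReading:
    conductive: bool   # capacitive sensor detected a conductive surface
    stiffness: float   # normalized 0.0 (squishy) .. 1.0 (rigid) from pressure sensors

def classify(reading: GripReading) -> str:
    """Route an object to the paper, plastic, or metal bin."""
    if reading.conductive:
        return "metal"
    # Non-conductive: distinguish by how much the object deforms in the grip.
    if reading.stiffness < 0.4:   # illustrative threshold
        return "paper"
    return "plastic"

print(classify(GripReading(conductive=False, stiffness=0.2)))  # -> paper
print(classify(GripReading(conductive=True, stiffness=0.9)))   # -> metal
```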

The research paper outlines the development of the gripper in detail. Care was taken to build something that is robust enough to deal with the recycling environment, as well as capable of handling the sorting tasks. There’s a long way to go to take this proof of concept to the commercially viable stage, but it’s a promising start to a difficult resource problem.

MIT’s CSAIL is a hotbed of interesting projects, developing everything from visual microphones to camouflage for image recognition systems. Video after the break.

Continue reading “This Bot Might Be The Way To Save Recycling”

Design And 3D Print Robots With Interactive Robogami

Internals of 3D printed “print and fold” robot. [Image source: MIT CSAIL]
Robot design traditionally separates the body geometry from the mechanics of the gait, but they both have a profound effect upon one another. What if you could play with both at once, and crank out useful prototypes cheaply using just about any old 3D printer? That’s where Interactive Robogami comes in. It’s a tool from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) that aims to let people design, simulate, and then build simple robots with a “3D print, then fold” approach. The idea behind the system is partly to take advantage of the rapid prototyping afforded by 3D printers, but mainly it’s to change how the design work is done.

To make a robot, the body geometry and limb design are worked out and simulated in the Robogami tool, where different combinations can have a wild effect on locomotion. Once a design is chosen, the end result is a 3D-printable flat pack, which is then assembled into the final form with a power supply, an Arduino, and servo motors.

A white paper is available online and a demonstration video is embedded below. It’s debatable whether these devices on their own qualify as “robots” since they have no sensors, but as a tool to quickly prototype robot body geometries and gaits it’s an excitingly clever idea.

Continue reading “Design And 3D Print Robots With Interactive Robogami”

Measuring Walking Speed Wirelessly

There are a lot of ways to try to mathematically quantify how healthy a person is. Things like resting pulse rate, blood pressure, and blood oxygenation are all quite simple to measure and can be used to predict various clinical outcomes. One you may not have considered, however, is gait velocity, or the speed at which a person walks. It turns out gait velocity is a viable way to predict the onset of a wide variety of conditions, such as congestive heart failure or chronic obstructive pulmonary disease. As people become sick, elderly, or infirm, they tend to walk slower – just like the little riflemen in your favourite RTS when their health bar’s way in the red. But how does one measure this? MIT’s CSAIL has stepped up, with a way to measure walking speed completely wirelessly.

You can read the paper here (PDF). The WiGate device sends out a low-power radio signal, and then measures the reflections to determine a person’s location over time. Alone, however, this is not enough – it’s important to measure walking speed specifically, so that a person simply sitting still in front of the television, for example, isn’t logged as a slow walker. Algorithms separate walking activity from the rest of the data set, allowing the device to sit in the background, recording walking speed with no user interaction required whatsoever.
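To get a feel for the post-processing side, here’s a minimal sketch of turning a stream of position estimates into a gait velocity figure. The speed thresholds used to reject non-walking activity are assumptions for illustration, not values from the paper.

```python
# A minimal sketch of extracting gait velocity from a stream of
# (timestamp, x, y) position estimates, in the spirit of the approach above.
# The 0.2-2.0 m/s walking window is an assumption for illustration only.

import math

def gait_speeds(track, min_speed=0.2, max_speed=2.0):
    """Yield instantaneous speeds (m/s) for samples that look like walking.

    `track` is a list of (t_seconds, x_metres, y_metres) tuples. Samples
    slower than `min_speed` (standing, sitting, watching TV) or faster than
    `max_speed` (not plausibly walking) are discarded.
    """
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if min_speed <= speed <= max_speed:
            yield speed

def mean_gait_velocity(track):
    """Average walking speed over a recording, or None if no walking was seen."""
    speeds = list(gait_speeds(track))
    return sum(speeds) / len(speeds) if speeds else None
```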

This form of passive monitoring could have great applications in nursing homes, where staff often have a huge number of patients to monitor. It would allow the collection of clinically relevant data without the need for any human intervention; the device could simply alert staff when a patient’s walking pattern is indicative of a bigger problem.

We see some great health research here at Hackaday – like this open source ECG. Video after the break.

Continue reading “Measuring Walking Speed Wirelessly”

How To Telepathically Tell A Robot It Screwed Up

Training machines to effectively complete tasks is an ongoing area of research. This can be done in a variety of ways, from complex programming interfaces to systems that understand commands in natural language. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wanted to see if it was possible for humans to communicate more directly when training a robot. Their system allows a user to correct a robot’s actions using only their brain.

The concept is simple – using an EEG cap to detect brainwaves, the system measures a particular type of brain signal called an “error-related potential”. The human merely has to notice the robot making a mistake for the signal to appear, allowing the robot to correct itself and, as a nice extra touch, blush in embarrassment.

This interface allows for a very intuitive way of working with a robot – the moment the human notices a mistake, the robot is able to automatically stop or correct its behaviour. Currently the system is only capable of being used for very simple tasks – the video shows the robot sorting objects of two types into corresponding bins. The robot knows that if the human has detected an error, it must simply place the object in the other bin. Further research seeks to expand the possibilities of using this automatic brainwave feedback to train robots for more complex tasks. You can read the research paper here.
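The two-bin version of that feedback loop is simple enough to sketch. In the snippet below the EEG classifier is a stand-in, represented only by the probability it outputs, and the decision threshold is an assumption; the point is just how little information the human needs to supply.

```python
# A hedged sketch of the closed loop described above: the robot commits to a
# bin, an EEG classifier reports how strongly an error-related potential was
# detected, and a confident detection flips the choice. The classifier and
# the 0.5 threshold are stand-ins, not the team's actual model.

BINS = ("left", "right")

def other(bin_name: str) -> str:
    """Return the opposite bin."""
    return BINS[1 - BINS.index(bin_name)]

def sort_object(initial_choice: str, errp_probability: float,
                threshold: float = 0.5) -> str:
    """Return the bin the robot should actually use.

    With only two bins, a detected error means the other bin must be correct,
    so no further input from the human is needed.
    """
    if errp_probability >= threshold:
        return other(initial_choice)
    return initial_choice

print(sort_object("left", errp_probability=0.8))  # human winced -> "right"
print(sort_object("left", errp_probability=0.1))  # no error signal -> "left"
```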

MIT’s CSAIL works on lots of exciting projects – their visual microphone technology is truly astounding.

[Thanks to Adam Connor-Simmons for the tip!]

Robots With 3D Printed Shock Absorbing Skin

MIT’s Computer Science and Artificial Intelligence Laboratory, CSAIL, recently put out a paper about an interesting advance in 3D printing. Naturally, coming from the computer science and AI lab, the paper has a robotic bent to it. In summary, they can 3D print a robot with a rubber skin of arbitrarily varying stiffness. The end goal? Shock absorbing skin!

They modified an Objet printer to print with three materials simultaneously: a UV-curing solid, a UV-curing rubber, and an unreactive liquid. By carefully depositing these in a pattern, they can print a material with any property they like. In doing so they have been able to print single-body robots that, simply put, crash into the ground better. There are other uses of course, from joints to sensor housings. There’s more in the paper.
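One way to picture the “pattern” part is as a per-voxel material assignment, where the ratio of rigid, rubbery, and liquid droplets sets the local stiffness and damping. The sketch below is purely illustrative; mapping target volume fractions to a random dither is our assumption, not the deposition strategy from the paper.

```python
# A rough, illustrative sketch of tuning bulk stiffness by how three inks are
# interleaved voxel by voxel. The random-dither mapping here is an assumption
# for illustration; the actual deposition patterns come from the paper.

import random

def voxel_pattern(shape, rigid_frac, liquid_frac, seed=0):
    """Assign one material to each voxel in a `shape = (nx, ny, nz)` grid.

    `rigid_frac` and `liquid_frac` are target volume fractions; whatever
    remains is filled with rubber. Softer, more damped regions get less
    rigid ink and more unreacted liquid.
    """
    rng = random.Random(seed)
    nx, ny, nz = shape
    grid = {}
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                r = rng.random()
                if r < rigid_frac:
                    grid[(i, j, k)] = "rigid"
                elif r < rigid_frac + liquid_frac:
                    grid[(i, j, k)] = "liquid"
                else:
                    grid[(i, j, k)] = "rubber"
    return grid

# A small, squishy skin patch: 10% rigid, 30% liquid, the rest rubber.
skin = voxel_pattern((10, 10, 4), rigid_frac=0.1, liquid_frac=0.3)
```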

We’re not sure how this compares to the Objet’s existing ability to mix flexible resins together to produce different Shore ratings. Likely this offers more seamless transitions and a wider range of material properties. From the paper it also appears to dampen better than the alternatives. Either way, it’s an interesting advance and approach. We wonder if it’s possible to reproduce on a larger scale with FDM.

Interactive Dynamic Video

If a picture is worth a thousand words, a video must be worth millions. However, computers still aren’t very good at analyzing video. Machine vision software like OpenCV can do certain tasks like facial recognition quite well. But current software isn’t good at determining the physical nature of the objects being filmed. [Abe Davis, Justin G. Chen, and Fredo Durand] are members of the MIT Computer Science and Artificial Intelligence Laboratory. They’re working toward a method of determining the structure of an object based upon the object’s motion in a video.

The technique relies on vibrations that can be captured by a typical 30 or 60 frames-per-second (fps) camera. Here’s how it works: a locked-down camera images an object, and the object is set in motion by wind, someone banging on it, or any other mechanical means. This movement is captured on video. The team’s software then analyzes the video to see exactly where the object moved, and how much it moved. Complex objects can have many vibration modes. The wire frame figure used in the video is a great example: the hands of the figure will vibrate more than the figure’s feet. The software uses this information to construct a rudimentary model of the object being filmed. It then allows the user to interact with the object by clicking and dragging with a mouse. Dragging the hands will produce more movement than dragging the feet.
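In very reduced form, the analysis can be pictured as: track how a point on the object moves from frame to frame, pull the dominant vibration frequencies out of that motion, and replay a damped combination of those frequencies when the user tugs on the model. The snippet below does exactly that for a single one-dimensional signal; the real method works on dense image-space motion and recovers full mode shapes, so treat this as an illustration only.

```python
# A very reduced sketch: recover dominant vibration frequencies from one
# tracked point's motion, then synthesise a damped response at those
# frequencies when the user "plucks" the object. The single-point model
# and the damping constant are assumptions for illustration.

import numpy as np

def dominant_modes(displacement, fps, n_modes=3):
    """Return the n strongest vibration frequencies (Hz) in a 1-D motion signal."""
    spectrum = np.abs(np.fft.rfft(displacement - np.mean(displacement)))
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fps)
    order = np.argsort(spectrum[1:])[::-1] + 1   # skip the DC bin
    return freqs[order[:n_modes]]

def simulate_pluck(modes_hz, amplitude=1.0, fps=60, seconds=2.0, damping=1.5):
    """Synthesise the object's motion after the user drags and releases it."""
    t = np.arange(0, seconds, 1.0 / fps)
    response = sum(np.cos(2 * np.pi * f * t) for f in modes_hz)
    return amplitude * np.exp(-damping * t) * response / max(len(modes_hz), 1)

# Example: a 60 fps recording of a point wobbling at roughly 2 Hz and 5 Hz.
fps = 60
t = np.arange(0, 5, 1.0 / fps)
signal = np.sin(2 * np.pi * 2 * t) + 0.4 * np.sin(2 * np.pi * 5 * t)
print(dominant_modes(signal, fps))
```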

The team’s results aren’t perfect – they remind us of computer-animated objects from just a few years ago. However, this is very promising. These aren’t textured wire frames created in 3D modeling software. The models and skeletons were created automatically using software analysis. The team’s research paper (PDF link) contains all the details of their research. Check it out, and check out the video after the break.

Continue reading “Interactive Dynamic Video”