MIT Is Building a Better 3D Printer

Traditional desktop 3D printing technology has effectively hit a wall. The line between a $200 and a $1,000 printer is blurrier now than ever before, and there’s a fairly prevalent argument in the community that you’d be better off buying and upgrading two cheap printers, pocketing the change, than springing for a single high-end machine if the final results are going to be so similar.

The reason for this is simple: physics. Current printers have essentially hit the limits of how fast the gantry can move, how fast plastic filament can be pushed through the extruder, and how fast that plastic can be melted. To move forward, we’re going to need something altogether different. Recently, a team from MIT took the first steps down that path by unveiling a fundamental rethinking of 3D printing that specifically addresses the issues holding our machines back, with a claimed 10-fold increase in performance over traditional printing methods.
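To put rough numbers on that wall: a conventional hot end can only melt so many cubic millimetres of plastic per second, and that volumetric ceiling caps print speed no matter how fast the gantry flies. A quick back-of-the-envelope sketch in Python, using typical community figures rather than anything from the MIT work:

```python
# Rough numbers only -- typical desktop-printer figures, assumed here.
NOZZLE_W = 0.4    # extrusion width, mm
LAYER_H = 0.2     # layer height, mm
MELT_RATE = 15.0  # mm^3/s a common hot end can plasticize (assumed)

# The melt rate caps linear print speed regardless of gantry speed:
max_speed = MELT_RATE / (NOZZLE_W * LAYER_H)
print(f"max print speed ≈ {max_speed:.0f} mm/s")  # ≈ 188 mm/s
```

Push past that and the extruder simply can’t keep up, which is exactly the failure mode described below.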

MIT’s revolutionary laser-assisted hot end.

As anyone who’s pushed their 3D printer a bit too hard can tell you, the first thing that usually happens is the extruder begins to slip and grind the filament down. As the filament is ground down, it starts depositing plastic on the hobbed gear, further reducing grip in the extruder and ultimately leading to under-extrusion or a complete print failure. To address this issue, MIT’s printer does away with the “pinch wheel” extruder design entirely and replaces it with a screw mechanism that pulls special threaded filament down into the hot end. The vastly increased surface area between the filament and the extruder allows for much higher extrusion pressure.

An improved extruder doesn’t do any good if you can’t melt the incoming plastic fast enough to keep up with it, and to that end MIT has pulled out the really big guns. Between the extruder and the traditional heater block, the filament passes through a gold-lined optical cavity where it is blasted with a pulse-modulated 50 W laser. By closely matching the laser wavelength to the optical properties of the plastic, the beam penetrates the filament and evenly brings it up to nearly its melting point, all without physically touching the filament or incurring frictional losses.
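It’s worth sanity-checking that 50 W figure against the claimed 10-fold speedup. A crude energy balance, using handbook-ish values for PLA that are our assumptions rather than numbers from the paper, lands right in that ballpark:

```python
# Crude energy balance: how much plastic can 50 W of absorbed laser
# power bring up to printing temperature? Material figures are rough
# handbook values for PLA (our assumptions, not from the paper).
P = 50.0      # laser power, W (assuming most of it is absorbed)
RHO = 1250.0  # density of PLA, kg/m^3
C = 1800.0    # specific heat of PLA, J/(kg*K)
DT = 180.0    # room temperature up to ~200 C, K

flow = P / (RHO * C * DT)  # achievable volumetric melt rate, m^3/s
print(f"≈ {flow * 1e9:.0f} mm^3/s")  # ≈ 123 mm^3/s, ~10x a stock hot end
```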

There are still technical challenges to face, but this research may well represent the shape of things to come for high-end printers. In other words, don’t expect a drop-in laser hot end replacement for your $200 printer anytime soon; the line is about to get blurry again.

Speeding up 3D printing is a popular topic lately, and for good reason. While 3D printing is still a long way off from challenging traditional manufacturing in most cases, it’s an outstanding tool for use during development and prototyping. The faster you can print, the faster you can iterate your design.

Thanks to [Maave] for the tip.

Continue reading “MIT Is Building a Better 3D Printer”

AI Watches You Sleep; Knows When You Dream

If you’ve never been a patient at a sleep laboratory, monitoring a person as they sleep is an involved process of wires, sensors, and discomfort. Seeking a better method, MIT researchers — led by [Dina Katabi] and in collaboration with Massachusetts General Hospital — have developed a device that can non-invasively identify the stages of sleep in a patient.

Approximately the size of a laptop and mounted on a wall near the patient, the device measures the minuscule changes in reflected low-power RF signals. The wireless signals are analyzed by a deep neural network, which predicts the patient’s sleep stage — light, deep, or REM — and so does away with manually combing through the data. Despite the sensitivity of the device, it is able to filter out irrelevant motions and interference, focusing on the patient’s breathing and pulse.

What’s novel here isn’t so much the hardware as it is the processing methodology. The researchers use both convolutional and recurrent neural networks along with what they call an adversarial training regime:

Our training regime involves 3 players: the feature encoder (CNN-RNN), the sleep stage predictor, and the source discriminator. The encoder plays a cooperative game with the predictor to predict sleep stages, and a minimax game against the source discriminator. Our source discriminator deviates from the standard domain-adversarial discriminator in that it takes as input also the predicted distribution of sleep stages in addition to the encoded features. This dependence facilitates accounting for inherent correlations between stages and individuals, which cannot be removed without degrading the performance of the predictive task.
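For readers who’d rather see that three-player game as code, here’s a minimal sketch built on tf.keras. Only the structure — a CNN-RNN encoder, a stage predictor, and a subject discriminator that also sees the predicted stage distribution — follows the quoted description; every layer size, shape, and the loss weighting below are placeholder assumptions of ours.

```python
import tensorflow as tf

N_STAGES, N_SUBJECTS = 4, 25  # placeholder dimensions, not the paper's

# CNN-RNN feature encoder, per the quoted description.
encoder = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPool1D(2),
    tf.keras.layers.GRU(64),
])
predictor = tf.keras.layers.Dense(N_STAGES, activation="softmax")
# The discriminator guesses the source subject from the encoded
# features *and* the predicted stage distribution.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_SUBJECTS, activation="softmax"),
])

opt_task = tf.keras.optimizers.Adam(1e-3)
opt_disc = tf.keras.optimizers.Adam(1e-3)
xent = tf.keras.losses.SparseCategoricalCrossentropy()
LAMBDA = 0.1  # adversarial weight -- our guess, not a published value

def train_step(x, stage_labels, subject_ids):
    """x: (batch, time, features) windows of preprocessed RF data."""
    # 1. The discriminator learns to identify the source subject.
    with tf.GradientTape() as tape:
        feats = encoder(x)
        stages = predictor(feats)
        d_out = discriminator(tf.concat([feats, stages], axis=-1))
        d_loss = xent(subject_ids, d_out)
    grads = tape.gradient(d_loss, discriminator.trainable_variables)
    opt_disc.apply_gradients(zip(grads, discriminator.trainable_variables))

    # 2. Encoder and predictor cooperate on sleep stages while playing
    #    a minimax game against the discriminator (note the minus sign).
    with tf.GradientTape() as tape:
        feats = encoder(x)
        stages = predictor(feats)
        d_out = discriminator(tf.concat([feats, stages], axis=-1))
        loss = xent(stage_labels, stages) - LAMBDA * xent(subject_ids, d_out)
    task_vars = encoder.trainable_variables + predictor.trainable_variables
    opt_task.apply_gradients(zip(tape.gradient(loss, task_vars), task_vars))
```

Alternating those two updates over batches of labeled sleep epochs is the standard recipe for training this kind of adversarial pair.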

Anyone out there want to give this one a try at home? We’d love to see a HackRF and GNU Radio used to record RF data. The researchers compare the RF to WiFi, so repurposing a 2.4 GHz radio to send out repeating, uniform transmissions is a good place to start. Dump it into TensorFlow and report back.
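If you do take a swing at it, something like the following GNU Radio sketch is one plausible starting point for the transmit side: a bare carrier near 2.4 GHz whose reflections off a sleeper you’d then capture with a second receiver. This is our own untested idea — the researchers’ hardware is considerably more sophisticated — and do mind your local RF regulations before keying up.

```python
#!/usr/bin/env python3
# Untested sketch of our own: have a HackRF radiate a steady carrier
# whose reflections you then record and analyze on the receive side.
from gnuradio import gr, analog
import osmosdr  # gr-osmosdr bindings, which drive the HackRF

class CwBeacon(gr.top_block):
    def __init__(self, center_freq=2.45e9, samp_rate=2e6):
        gr.top_block.__init__(self, "cw_beacon")
        # A complex cosine offset 50 kHz from the tuned frequency keeps
        # the signal clear of the HackRF's DC spike.
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 50e3, 0.7)
        sink = osmosdr.sink(args="hackrf=0")
        sink.set_sample_rate(samp_rate)
        sink.set_center_freq(center_freq)
        sink.set_gain(14)
        self.connect(src, sink)

if __name__ == "__main__":
    tb = CwBeacon()
    tb.start()
    input("Transmitting; press Enter to stop.")
    tb.stop()
    tb.wait()
```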

Continue reading “AI Watches You Sleep; Knows When You Dream”

Design and 3D Print Robots with Interactive Robogami

Internals of 3D printed “print and fold” robot. [Image source: MIT CSAIL]
Robot design traditionally separates the body geometry from the mechanics of the gait, but they both have a profound effect upon one another. What if you could play with both at once, and crank out useful prototypes cheaply using just about any old 3D printer? That’s where Interactive Robogami comes in. It’s a tool from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) that aims to let people design, simulate, and then build simple robots with a “3D print, then fold” approach. The idea behind the system is partly to take advantage of the rapid prototyping afforded by 3D printers, but mainly it’s to change how the design work is done.

To make a robot, the body geometry and limb design are all done and simulated in the Robogami tool, where different combinations can have a wild effect on locomotion. Once a design is chosen, the end result is a 3D printable flat pack which is then assembled into the final form with a power supply, Arduino, and servo motors.
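To give a feel for what tweaking a gait actually means, here’s a toy illustration of our own (not code from the Robogami tool): each leg’s servo follows the same sine wave with a per-leg phase offset, and shuffling those offsets is what turns one gait into another. It only prints angles; on the real robot the Arduino would push them to the servos.

```python
import math

# Toy gait generator -- our own illustration, not Robogami code.
AMPLITUDE = 30.0                 # degrees of servo swing (assumed)
PERIOD = 1.0                     # seconds per stride (assumed)
PHASES = [0.0, 0.5, 0.25, 0.75]  # phase offset per leg; a walk-like gait

def servo_angles(t):
    """Commanded angle for each leg's servo at time t, centered on 90°."""
    return [90 + AMPLITUDE * math.sin(2 * math.pi * (t / PERIOD + p))
            for p in PHASES]

for step in range(5):
    t = step * 0.1
    print(f"t={t:.1f}s", ["%5.1f" % a for a in servo_angles(t)])
```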

A white paper is available online and a demonstration video is embedded below. It’s debatable whether these devices on their own qualify as “robots” since they have no sensors, but as a tool to quickly prototype robot body geometries and gaits it’s an excitingly clever idea.

Continue reading “Design and 3D Print Robots with Interactive Robogami”

The PDP-1: The Machine that Started Hacker Culture

One of my bucket list destinations is the Computer History Museum in Mountain View, California — I know, I aim high. I’d be chagrined to realize that my life has spanned a fair fraction of the Information Age, but I think I’d get a kick out of seeing the old machines, some of which I’ve actually laid hands on. But the machines I’d most like to see are the ones that predate me, and the ones that contributed to the birth of the hacker culture in which I and a lot of Hackaday regulars came of age.

If you were to trace hacker culture back to its beginning, chances are pretty good that the machine you’d find at the root of it all is the Digital Equipment Corporation’s PDP-1. That’s a tall claim for a machine that was introduced in 1959 and only sold 53 units, compared to contemporary offerings from IBM that sold tens of thousands of units. And it’s true that the leading edge of the explosion of digital computing in the late 50s and early 60s was mainly occupied by “big iron” machines, and that mainframes did a lot to establish the foundations for all the advances that were to come.

Continue reading “The PDP-1: The Machine that Started Hacker Culture”

How To Telepathically Tell A Robot It Screwed Up

Training machines to effectively complete tasks is an ongoing area of research. This can be done in a variety of ways, from complex programming interfaces to systems that understand commands in natural language. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wanted to see if it was possible for humans to communicate more directly when training a robot. Their system allows a user to correct a robot’s actions using only their brain.

The concept is simple: using an EEG cap to detect brainwaves, the system watches for a particular type of brain signal called an “error-related potential”, which appears when the observer merely notices the robot making a mistake. That’s enough for the robot to correct itself and, as a nice extra touch, blush in embarrassment.

This interface allows for a very intuitive way of working with a robot: when the user notices a mistake, the robot automatically stops or corrects its behaviour. Currently the system only handles very simple tasks; the video shows the robot sorting objects of two types into corresponding bins. The robot knows that if the human has detected an error, it simply needs to place the object in the other bin. Further research seeks to expand this automatic brainwave feedback to training robots for more complex tasks. You can read the research paper here.
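For the curious, the signal-processing side is approachable: error-related potentials show up as a characteristic deflection in the few hundred milliseconds after the observer sees the mistake, so a classic pipeline band-passes the EEG, cuts epochs around each robot action, and feeds them to a simple linear classifier. The sketch below is our own generic take on that pipeline — sampling rate, filter band, and epoch length are all assumptions, not the CSAIL team’s parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256               # EEG sampling rate, Hz (assumed)
EPOCH = int(0.8 * FS)  # 800 ms window after each robot action (assumed)

def preprocess(epochs):
    """Band-pass 1-10 Hz (where ErrPs live) and flatten each epoch."""
    b, a = butter(4, [1 / (FS / 2), 10 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return filtered.reshape(len(epochs), -1)

# epochs: (n_trials, n_channels, EPOCH) cut around robot actions;
# labels: 1 where the observer saw the robot err, else 0.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 8, EPOCH))  # stand-in data
labels = rng.integers(0, 2, 40)

clf = LinearDiscriminantAnalysis().fit(preprocess(epochs), labels)
# At run time: if clf flags an ErrP, flip the robot's bin choice.
```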

MIT’s CSAIL works on lots of exciting projects – their video microphone technology is truly astounding.

[Thanks to Adam Connor-Simmons for the tip!]

Owning Hacker As A Word

To a casual observer it might seem as though our community is in the news rather a lot at the moment. It’s all about hacks on our TV screens in the soap opera of Washington politics, who hacked this, whether those people over there helped that lot hack the other lot, or even whether that person’s emails could have been hacked on that server. Keeping up with it as an outsider can become a full-time job.

XKCD 932 says it all. (CC BY-NC 2.5)

Of course, as we all know even if the mainstream journalists (or should I refer to them colloquially as “hacks”?) don’t, it’s not us they’re talking about. Their hackers are computer criminals, while we are people with some of the hardware and software skills to bend technology to our will, even beyond what its designers might have intended. And that divergence between the way we use the word in a sense of reappropriation and the way they use it in disapprobation sometimes puts us in an odd position. Explaining, as the director of a hackspace, to a sober-suited businessman that no, we’re not *those* hackers can feel like skating on thin ice.

Continue reading “Owning Hacker As A Word”

Nylon Fibre Artificial Muscles — Powered by Lasers!

If only we had affordable artificial muscles, we might see rapid advances in prosthetic limbs, robots, exoskeletons, implants, and more. With cost being one of the major barriers — in addition to the sheer difficulty of replicating the musculature many of us take for granted — a workable solution seems a long way off. A team of researchers at MIT presents a potential answer to these problems by showing that nylon fibres can be used as synthetic muscles.

Some polymer fibres have the curious property of increasing in diameter while decreasing in length when heated. Taking advantage of this, the team at MIT was able to sculpt nylon fibre and — using a number of heat sources, namely lasers — direct it to bend in a specific direction. More complex movement requires an array of heat sources, which isn’t practical yet, but seeing a nylon fibre dance tickles the imagination.
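The bending trick follows from simple beam mechanics: heat one side of the fibre and that side contracts while the far side doesn’t, so the fibre curls toward the heat source. A quick order-of-magnitude sketch, where the diameter and strain are assumed values rather than measurements from the MIT work:

```python
# Differential-strain bending estimate (assumed numbers, not from MIT):
# if the heated side contracts by strain EPS relative to the cool side,
# the fibre bends toward the heat with curvature of roughly EPS / D.
D = 0.5e-3  # fibre diameter, m (assumed)
EPS = 0.02  # 2% differential contraction (assumed)

kappa = EPS / D  # curvature, 1/m
print(f"bend radius ≈ {100 / kappa:.1f} cm")  # ≈ 2.5 cm
```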

Continue reading “Nylon Fibre Artificial Muscles — Powered by Lasers!”