Many people got their start with 3D printing by downloading designs from Thingiverse, and some of those designs could be modified in the browser using the Thingiverse Customizer. The mechanism behind this powerful feature is OpenSCAD’s parametric design capability, which offers great flexibility but is still limited by 3D printer size. In the interest of going bigger, a team at MIT built a system that adapts the parametric design idea to woodworking.
The “AutoSaw” has software and hardware components. The software side is built on the web-based CAD package Onshape. First, an expert user builds a flexible design with parameters that can be customized; then one or more end users specify their own custom configuration.
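To make that split concrete, here is a hypothetical sketch of how it divides the work: the expert encodes the design rules once, and the end user only picks dimensions. The function name, parts, and numbers below are invented for illustration and have nothing to do with the actual AutoSaw or Onshape implementation.

```python
# A hypothetical illustration of the expert/end-user split; none of these names or
# numbers come from the actual AutoSaw or Onshape implementation.

def table_cut_list(width_mm=900, depth_mm=600, height_mm=750, leg_stock_mm=40):
    """The 'expert' part: turn a few user-facing dimensions into a full cut list."""
    return [
        {"part": "leg",         "length_mm": height_mm,                   "count": 4},
        {"part": "long apron",  "length_mm": width_mm - 2 * leg_stock_mm, "count": 2},
        {"part": "short apron", "length_mm": depth_mm - 2 * leg_stock_mm, "count": 2},
    ]

# The 'end user' part: just pick the numbers you want and hand the list to the robots.
for cut in table_cut_list(width_mm=1200, depth_mm=700):
    print(f"{cut['count']} x {cut['part']}: {cut['length_mm']} mm")
```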
Once the configuration is approved, the robots go to work. AutoSaw has two robotic woodworking systems: the simpler one is a jigsaw mounted on a Roomba to cut patterns out of flat sheets. The more complex system involves two robot arms on wheels (Kuka youBot) working with a chop saw to cut wooden beams to length. These pieces are then assembled by the end user with dowel pegs.
AutoSaw is a fun proof of concept and a glimpse at a potential future: one where a robotic wood shop is part of your local home improvement store’s lumber department, ready to cut, drill, and route pieces for you to take home and assemble.
As a civilization, we are proficient with the “boil water, make steam” method of turning various heat sources into power for our infrastructure. Away from that, we can use solar panels. But what if direct sunlight is not available either? A team at MIT demonstrated how to extract power from daily temperature swings.
Running on the temperature difference between day and night is arguably a very indirect form of solar energy. It could work in shaded areas where solar panels would not. But lacking a time machine, or an equally improbable portal to the other side of the planet, how did they bring the temperatures of day and night together to create a thermal gradient?
This team called their invention a “thermal resonator”: an assembly of materials tuned to work over a specific range of time and temperature. When successful, the device’s output temperature is out of phase with its input: one section is cold while the other is hot, and vice versa. Energy can then be harvested from that temperature differential via “conventional thermoelectrics”.
Power output of the initial prototype is modest. Given a 10 degree Celsius daily swing in temperature, it could produce 1.3 milliwatts at a maximum potential of 350 millivolts. While participants in the Hackaday coin-cell challenge and other pioneers of low-power electronics could probably do something interesting with that, the rest of us will have to wait for thermal resonator designs to evolve and improve on their way out of the lab.
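For a sense of scale, here is a quick back-of-the-envelope calculation using the figures quoted above. It optimistically assumes the prototype holds its peak output around the clock, and the coin-cell capacity is a nominal CR2032 figure, not something from the article.

```python
# Back-of-the-envelope numbers based on the prototype figures quoted above.
peak_power_w = 1.3e-3          # 1.3 mW from a 10 C day/night swing
seconds_per_day = 24 * 3600

energy_per_day_j = peak_power_w * seconds_per_day   # optimistic: assumes peak power all day
print(f"Energy per day: {energy_per_day_j:.0f} J")  # ~112 J

# For scale: a CR2032 coin cell holds roughly 225 mAh at ~3 V (nominal figure, not from the article).
coin_cell_j = 0.225 * 3 * 3600
print(f"Days to match one coin cell: {coin_cell_j / energy_per_day_j:.0f}")  # ~22 days
```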
How does one go about programming a drone to fly itself through the real world to a location without crashing into something? This is a tough problem, made even tougher if you’re pushing speeds higher and higher. But any article that mentions “MIT” implies the problems being engineered are not trivial.
The folks over at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have put their considerable skill set to work in tackling this problem. And what they’ve come up with is (not surprisingly) quite clever: they’re embracing uncertainty.
Why Is Autonomous Navigation So Hard?
Suppose we task ourselves with building a robot that can insert a key into the ignition switch of a motor vehicle and start the engine, and can do so in roughly the same time frame that a human could, let’s say 10 seconds. It may not be an easy robot to create, but we can all agree that it is very doable. With foreknowledge of the coordinates of the vehicle’s ignition switch relative to our robotic arm, we can place the key in the switch with 100% accuracy. But what if we wanted our robot to succeed in any car with a standard ignition switch?
Now the location of the ignition switch will vary slightly (and not so slightly) for each model of car. That means we’re going to have to deal with this in real time and develop our coordinate system on the fly. This would not be too much of an issue if we could slow down a little, but keeping the process limited to 10 seconds is extremely difficult, perhaps impossible. At some point, the amount of environmental information and computation becomes so large that the task becomes computationally unwieldy.
This problem is analogous to autonomous navigation. The environment is always changing, so we need sensors to constantly monitor the state of the drone and its immediate surroundings. If the obstacles become too numerous, another problem arises, this one computational: there is just too much information to process. The only solution is to slow the drone down. NanoMap is a new modeling method that breaks the artificial speed limit normally imposed by on-the-fly environment mapping.
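NanoMap’s trick is, roughly, to skip building a single fused map at all. As a very rough illustration of that idea (a hypothetical sketch, not the authors’ code or data structures), imagine keeping only a short history of raw depth frames, remembering how uncertain the drone’s pose has become since each one was captured, and answering “is this spot safe?” by consulting the freshest frame that actually saw it, with the required clearance padded by that uncertainty:

```python
# A very rough, hypothetical sketch of the idea, not the authors' code or data structures.
import numpy as np

class FrameHistory:
    def __init__(self, max_frames=20):
        self.max_frames = max_frames
        self.frames = []   # newest first: (obstacle points, sensor origin, pose uncertainty in m)

    def add(self, points, sensor_origin, pose_uncertainty_m):
        self.frames.insert(0, (np.asarray(points, float),
                               np.asarray(sensor_origin, float),
                               float(pose_uncertainty_m)))
        del self.frames[self.max_frames:]

    def is_safe(self, query_point, clearance_m=0.5, sensor_range_m=10.0):
        q = np.asarray(query_point, float)
        for points, origin, uncertainty in self.frames:            # prefer the freshest view
            if np.linalg.norm(q - origin) > sensor_range_m:
                continue                                           # this frame never saw that spot
            nearest = np.linalg.norm(points - q, axis=1).min() if len(points) else np.inf
            # pad the required clearance by how far this frame's pose may have drifted
            return nearest > clearance_m + uncertainty
        return False                                               # unobserved space: stay cautious

# Usage: add a frame per depth image, then test points along a candidate trajectory.
history = FrameHistory()
history.add(points=[[2.0, 0.0, 1.5]], sensor_origin=[0.0, 0.0, 1.0], pose_uncertainty_m=0.2)
print(history.is_safe([1.0, 0.0, 1.2]))   # True: roughly 1 m of clearance
```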
The team from MIT, led by [Xuanhe Zhao] and [Timothy Lu], has programmed bacterial cells to respond to specific compounds. To demonstrate, they printed a temporary tattoo in the shape of a tree, formed from the sturdy bacteria and a nutrient-loaded hydrogel ‘ink’, which lights up over a few hours when adhered to skin swabbed with those compounds.
So far, the team has been able to produce objects several centimetres across, capable of acting as active materials when printed and integrated into wearables, displays, sensors, and more.
Rapid Liquid Printing (RLP) is being developed collaboratively by the Michigan-based company [Steelcase] and [Skylar Tibbits’] Self-Assembly Lab at MIT. RLP touts advantages over traditional 3D printing technology such as reduced print times, higher print quality, and larger prints, all without supports!
The printing material, which can be rubber, plastic, or foam, is injected through a nozzle into a basin of industrial gel. That gel suspends the print throughout the process without bonding to it, and the finished product is simply lifted out of the gel and rinsed off. The process was shown off at the Design Miami event earlier this month, where onlookers could pick up finished lampshades and tote bags after mere minutes.
Traditional desktop 3D printing technology has effectively hit a wall. The line between a $200 and a $1000 printer is blurrier now than ever before, and there’s a fairly prevalent argument in the community that you’d be better off buying two cheap printers and pocketing the change than springing for a single high-end printer if the final results are going to be so similar.
As anyone who’s pushed their 3D printer a bit too hard can tell you, the first thing that usually happens is the extruder begins to slip and grind the filament down. As the filament is ground down, it starts depositing plastic on the hobbed gear, further reducing grip in the extruder and ultimately leading to under-extrusion or a complete print failure. To address this issue, MIT’s printer does away with the “pinch wheel” extruder design entirely and replaces it with a screw mechanism that pulls special threaded filament down into the hot end. The vastly increased surface area between the filament and the extruder allows for much higher extrusion pressure.
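To get a feel for where that claim comes from, here is a rough geometric comparison. Every dimension below is an assumption chosen purely for illustration; the paper’s actual filament and screw geometry may differ considerably.

```python
# A rough geometric comparison with made-up dimensions (not figures from the paper).
import math

filament_d_mm = 6.0    # assumed diameter of the threaded filament (illustrative only)
engagement_mm = 25.0   # assumed length of filament engaged by the drive screw

# Pinch-wheel drive: the hobbed gear only bites a small patch of the filament surface.
pinch_contact_mm2 = 3.0 * 2.0                        # ~3 mm x 2 mm contact patch (assumed)

# Screw drive: roughly the whole cylindrical surface over the engagement length shares the load.
screw_contact_mm2 = math.pi * filament_d_mm * engagement_mm

print(f"pinch wheel: ~{pinch_contact_mm2:.0f} mm^2, screw: ~{screw_contact_mm2:.0f} mm^2")
print(f"roughly {screw_contact_mm2 / pinch_contact_mm2:.0f}x more gripping area")
```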
An improved extruder doesn’t do any good if you can’t melt the incoming plastic fast enough to keep up with it, and to that end MIT has pulled out the really big guns. Between the extruder and the traditional heater block, the filament passes through a gold-lined optical cavity where it is blasted with a pulse-modulated 50 W laser. By closely matching the laser wavelength to the optical properties of the plastic, the beam is able to penetrate the filament and evenly bring it up to nearly the melting point, all without physically touching the filament and incurring frictional losses.
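A quick sanity check shows why tens of watts is the right ballpark. Using typical handbook values for PLA (these material numbers are not from the MIT paper), the power needed just to warm the plastic scales directly with throughput:

```python
# Typical handbook values for PLA, not figures from the MIT paper.
density_g_per_mm3 = 1.25e-3     # ~1.25 g/cm^3
specific_heat_j_per_gk = 1.8    # approximate specific heat of PLA
delta_t_k = 180 - 25            # room temperature up to roughly the melting point

for throughput_mm3_s in (10, 50, 100):     # 10 mm^3/s is ordinary desktop territory
    power_w = density_g_per_mm3 * throughput_mm3_s * specific_heat_j_per_gk * delta_t_k
    print(f"{throughput_mm3_s:>3} mm^3/s -> ~{power_w:.0f} W just to heat the plastic")
# -> roughly 3 W, 17 W, and 35 W: a 50 W laser leaves headroom for losses at high throughput.
```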
There are still technical challenges to face, but this research may well represent the shape of things to come for high-end printers. In other words, don’t expect a drop-in laser hot end replacement for your $200 printer anytime soon; that blurry line between cheap and expensive machines may be about to sharpen again.
If you’ve never been a patient at a sleep laboratory, know that monitoring a person as they sleep is an involved process of wires, sensors, and discomfort. Seeking a better method, MIT researchers, led by [Dina Katabi] and working in collaboration with Massachusetts General Hospital, have developed a device that can non-invasively identify the stages of sleep in a patient.
Approximately the size of a laptop and mounted on a wall near the patient, the device measures the minuscule changes in reflected low-power RF signals. The wireless signals are analyzed by a deep neural network that predicts the patient’s sleep stages (light, deep, and REM), eliminating the task of manually combing through the data. Despite the sensitivity of the device, it is able to filter out irrelevant motions and interference, focusing on the breathing and pulse of the patient.
What’s novel here isn’t so much the hardware as it is the processing methodology. The researchers use both convolutional and recurrent neural networks along with what they call an adversarial training regime:
Our training regime involves 3 players: the feature encoder (CNN-RNN), the sleep stage predictor, and the source discriminator. The encoder plays a cooperative game with the predictor to predict sleep stages, and a minimax game against the source discriminator. Our source discriminator deviates from the standard domain-adversarial discriminator in that it takes as input also the predicted distribution of sleep stages in addition to the encoded features. This dependence facilitates accounting for inherent correlations between stages and individuals, which cannot be removed without degrading the performance of the predictive task.
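To unpack that, here is a minimal PyTorch sketch of the three-player arrangement. Layer sizes, input shapes, the number of subjects, and the loss bookkeeping are all illustrative assumptions; this is not the researchers’ code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """CNN over short RF windows, then an RNN over time (the CNN-RNN feature encoder)."""
    def __init__(self, feat=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, feat, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.rnn = nn.GRU(feat, feat, batch_first=True)

    def forward(self, x):                       # x: (batch, time, 1, samples)
        b, t = x.shape[:2]
        z = self.cnn(x.flatten(0, 1)).squeeze(-1).view(b, t, -1)
        return self.rnn(z)[0]                   # (batch, time, feat)

encoder = Encoder()
predictor = nn.Linear(64, 4)                    # 4 sleep stages: wake, light, deep, REM
discriminator = nn.Linear(64 + 4, 25)           # guesses which of (say) 25 subjects it is watching

def losses(x, stage_labels, subject_labels, lam=0.1):
    feats = encoder(x)
    stage_logits = predictor(feats)
    # the discriminator sees the encoded features AND the predicted stage distribution
    subj_logits = discriminator(torch.cat([feats, stage_logits.softmax(-1)], dim=-1))
    ce = nn.functional.cross_entropy
    loss_stage = ce(stage_logits.flatten(0, 1), stage_labels.flatten())
    loss_subject = ce(subj_logits.flatten(0, 1), subject_labels.flatten())
    # minimax: update encoder+predictor to minimize (loss_stage - lam * loss_subject),
    # and update the discriminator alone to minimize loss_subject.
    return loss_stage - lam * loss_subject, loss_subject

# e.g. 8 recordings, 30 time steps of 256 RF samples each:
x = torch.randn(8, 30, 1, 256)
adv_loss, disc_loss = losses(x, torch.randint(0, 4, (8, 30)), torch.randint(0, 25, (8, 30)))
```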
Anyone out there want to give this one a try at home? We’d love to see a HackRF and GNU Radio used to record RF data. The researchers compare the RF signals to WiFi, so repurposing a 2.4 GHz radio to send out repeating uniform transmissions is a good place to start. Dump it into TensorFlow and report back.
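If you do try it, a hypothetical first step might look something like the sketch below: pull a raw capture off the SDR and shape it into something a network can chew on. The file name, window size, and the assumption that the capture came from hackrf_transfer (which writes interleaved signed 8-bit I/Q samples) are all placeholders to adapt to your own setup.

```python
import numpy as np

# Assumes a raw recording from hackrf_transfer: interleaved signed 8-bit I/Q samples.
raw = np.fromfile("capture.iq", dtype=np.int8).astype(np.float32) / 128.0
iq = raw[0::2] + 1j * raw[1::2]                      # de-interleave into complex samples

# Chop into fixed-length windows and take magnitude spectra a model could consume.
window = 4096
frames = iq[: len(iq) // window * window].reshape(-1, window)
spectra = np.abs(np.fft.rfft(frames, axis=1)).astype(np.float32)
np.save("spectra.npy", spectra)                      # ready to feed into TensorFlow/Keras
```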