Researchers over at MIT are hard at work upgrading their Robotic Cheetah. After studying how real cheetahs run in the wild, they are developing an algorithm for bounding movement.
Mach 2 is fully electric and battery-powered. It can currently run at 10 MPH (though the team predicts it will eventually reach 30 MPH) and can even jump over obstacles 33 cm tall.
We originally saw the first robotic cheetah from Boston Dynamics, in cooperation with DARPA, two years ago — it could run faster than any human alive (28.3 MPH), but in its tests it was tethered to its hydraulic power pack and running on a treadmill. It's unclear whether MIT's Cheetah is a direct descendant of that one, but both projects are supported by DARPA.
The technology in this project is nothing short of amazing — its electric motors are custom parts designed by MIT Electrical Engineering professor [Jeffrey Lang]. To keep the robot running smoothly, the bounding algorithm commands each leg to exert a precise amount of force during every footstep, ensuring the robot holds its set speed.
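MIT hasn't published the controller itself, but the idea of trimming per-footstep force to hold a set speed can be sketched with a simple proportional rule; every name and gain below is an assumption for illustration, not MIT's actual algorithm:

```python
# Hypothetical sketch of per-footstep force control. This is NOT MIT's
# actual bounding algorithm, just a minimal proportional rule showing
# how a leg's stance force could be trimmed to hold a set speed.

def stance_force(target_speed, measured_speed, base_force, kp=40.0):
    """Ground-reaction force (N) for the next stance phase.

    base_force is the nominal force that supports the body; the
    proportional term adds or removes force as the speed drifts.
    """
    speed_error = target_speed - measured_speed  # m/s
    return base_force + kp * speed_error

# Running slightly slow: command a bit more force than nominal.
print(stance_force(target_speed=4.5, measured_speed=4.3, base_force=120.0))
```

A real controller would fold in stance timing and body pitch as well, but the core idea is the same: each footstep's force is computed fresh from the current speed error.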
Continue reading “MIT’s Robotic Cheetah is Getting Even Scarier”
MIT engineers have developed a technique that addresses the challenge of manufacturing robots cheaply and accessibly. Like a plant unfolding its petals, a protein folding into shape, or an insect unveiling its wings, this autonomous origami design demonstrates a mechanical creature's ability to assemble itself and walk away. The technique opens up the possibility of unleashing swarms of flat robots into hard-to-reach places. Once on site, the robots build themselves up from the ground.
The team behind the project used flexible printed circuit boards made of paper and polystyrene, a synthetic aromatic polymer best known as the children's toy Shrinky Dinks™. Each hinge had embedded circuits mechanically programmed to fold at a certain angle. Applying heat to the composite structure triggered the folding process; after about four minutes the hinges would cool, allowing the polystyrene to harden. Some issues did arise during the initial design phase, though: the electrical current running through the robots was about ten times that of a regular light bulb, which caused the original prototypes to burn up before assembly was complete.
In the long term, Core Faculty Member [Robert] would like to have a facility that provides everyday robotic assistance to anyone in the surrounding community. The place would be open to everyone in the neighborhood, helping to solve whatever problems might arise, which sounds awfully like a hackerspace to us. Whether someone needs a device to detect gas leaks or a porch-sweeping robot, the facility would be there to aid the members living nearby.
A video of [Robert] and [Sam] describing the project is after the break:
Continue reading “Self-Assembling Origami Robots”
A group of MIT, Microsoft, and Adobe researchers has managed to reproduce sound using video alone. The sounds we make bounce off every object in the room, causing microscopic vibrations. The Visual Microphone uses a high-speed video camera and some clever signal processing to extract an audio signal from these vibrations. Using video of everyday objects such as snack bags, plants, Styrofoam cups, and water, the team was able to reproduce tones, music, and speech. Capturing audio from light isn't exactly new; laser microphones have been around for years. The difference here is that the Visual Microphone is a completely passive device: no laser or special illumination is required.
The secret is in the signal processing, which the team explains in their SIGGRAPH paper (pdf link). They used a complex steerable pyramid along with wavelet filters to obtain local pixel-motion values, which are then averaged into a single global motion value. From this global motion value the team is able to measure movement down to 1/1000 of a pixel. That's plenty of resolution to decode audio data.
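As a rough illustration of that local-to-global averaging step (our own toy version, not the paper's steerable-pyramid pipeline), each pixel's motion estimate can be weighted by its local signal strength before averaging:

```python
import numpy as np

# Toy version of the local-to-global averaging step: weight each pixel's
# motion estimate by the square of its local amplitude, so well-textured
# regions dominate and flat, noisy regions contribute little. This is an
# illustrative sketch, not the paper's steerable-pyramid implementation.

def global_motion(local_motion, local_amplitude):
    weights = local_amplitude ** 2
    return np.sum(weights * local_motion) / np.sum(weights)

motion = np.array([1.0, 1.0, 3.0])     # per-pixel motion estimates
amplitude = np.array([1.0, 1.0, 0.0])  # the third pixel is textureless
print(global_motion(motion, amplitude))  # the textureless pixel is ignored
```

Running this averaging once per frame yields one motion sample per frame, and that sequence of samples is the recovered audio waveform.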
Most of the research was performed with high-speed video cameras, which are well outside the budget of the average hacker. Don't despair, though: the team proved that the same magic can be performed with consumer cameras, albeit with lower-quality results. They took advantage of the rolling shutter found in most of today's CMOS-sensor consumer cameras. Rolling-shutter sensors capture an image one row at a time, and each row can be processed much like a frame from the high-speed camera, though there are gaps between frames when the camera isn't recording anything. Even with the reduced resolution, it's easy to pick out “Mary Had a Little Lamb” in the video below.
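The arithmetic behind the rolling-shutter trick is easy to sketch. The numbers below are illustrative, not from the paper: rows act as extra audio samples, with gaps between frames.

```python
# Sketch of the rolling-shutter sampling model with illustrative numbers:
# each row in a frame is exposed slightly later than the last, so a
# 60 fps camera with 1000 usable rows behaves like a ~60 kHz (gappy)
# audio sampler rather than a 60 Hz one.

def effective_sample_rate(fps, rows_per_frame):
    return fps * rows_per_frame

def sample_times(fps, rows_per_frame, row_time, frames=2):
    """Timestamp of each row sample. When the rows don't fill the whole
    frame period, a gap appears between frames (the interval where the
    camera isn't recording anything)."""
    frame_period = 1.0 / fps
    return [f * frame_period + r * row_time
            for f in range(frames)
            for r in range(rows_per_frame)]

print(effective_sample_rate(60, 1000))  # 60000 row-samples per second
```

The inter-frame gaps mean the audio has to be reconstructed from a non-uniformly sampled signal, which is part of why the consumer-camera results are lower quality.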
We’re blown away by this research, and we’re sure certain organizations will be looking into it for their own use. Don’t pull out your tin foil hats just yet, though: foil containers proved to be among the best sound reflectors.
Continue reading “Focus Your Ears with The Visual Microphone”
Three MIT students decided that 3D printers just aren’t interesting enough on their own anymore. They wanted to design a new type of printer that would really engage young kids. What’s more engaging to children than sugary treats? The team got together to develop a new 3D printer that prints ice cream.
The machine is built around a Solidoodle, a line of “accessible” 3D printers. The printer is enclosed inside a small freezer to keep things cold during the printing process, and on top of the machine sits a hacked Cuisinart ice cream maker. The machine also contains a canister of liquid nitrogen, which blasts the cream as it leaves the print head, keeping it frozen for the 15-minute duration of the print.
It sounds like the team ran into trouble with the ice cream melting, even with the liquid nitrogen. For a single-semester project, though, this isn’t a bad start. Be sure to watch the clip of the machine running below.
Continue reading “Print Tasty Treats With MIT’s Ice Cream Printer”
Have you ever heard of a cryotron computer before? Of course not. Silicon killed the radio star: this is a story of competing technologies back in the day. The hand above holds the competitors: the bulkiest is obviously the vacuum tube, and the three-legged device is the transistor that became a household name. But to the right of the tube is another technological marvel that can also be combined into computing machines: the cryotron.
[Dudley Allen Buck] and his contributions to early computing are a tale of an alternate universe in which computers might have been built from cryotrons instead of silicon transistors. The theory ultimately points to exotic superconducting materials, but we were delighted to find that in the conception and testing stages [Buck] was hacking: he made his first experimental electronic switches by wrapping copper wire around tantalum wire and dunking the assembly in liquid helium. The tantalum is the circuit path; current through the copper coil generates a magnetic field that switches the tantalum between its superconducting and resistive states.
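As a back-of-the-envelope model (our own illustration; Buck's actual devices and operating numbers differ), the copper coil can be treated as a solenoid whose field quenches the tantalum gate once it exceeds the critical field:

```python
import math

# Toy model of a wire-wound cryotron. The copper control coil acts as a
# solenoid (B = mu0 * n * I); when its field exceeds tantalum's critical
# field, the gate goes resistive. All values are illustrative only; real
# critical fields also depend on the operating temperature.

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
B_CRITICAL = 0.08         # rough zero-temperature critical field of Ta, T

def gate_is_superconducting(control_current, turns_per_meter):
    field = MU0 * turns_per_meter * control_current
    return field < B_CRITICAL

print(gate_is_superconducting(1.0, 10_000))   # small field: still superconducting
print(gate_is_superconducting(10.0, 10_000))  # field exceeds B_CRITICAL: quenched
```

The appeal as a logic element is that the control coil and gate are just wires: one cryotron's gate can carry the control current for the next, letting the switches be chained into logic.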
The name comes from the low-temperature bath needed to make the switches work properly. Miniaturization was the key, as it always is; the device above is a relatively small wire-wound Cryotron, but the end goal was a process very familiar to us today. [Buck] was searching for thin-film fabrication techniques that would let him shoehorn 75,000 or more of the devices into a single computing platform. Guess who came knocking on his door during this period of his career? The NSA. The story gets even more interesting from there, but lest we rewrite the article we leave you with this: the technology may beat out silicon in the end. It is currently one of the cool kids on the block for companies racing to the quantum computing finish line.
Retrotechtacular is a weekly column featuring hacks, technology, and kitsch from ages of yore. Help keep it fresh by sending in your ideas for future installments.
Remember last week’s post on the inFORM, MIT’s morphing table? Well they just released a new video showing off what it can do, and it’s pretty impressive.
The new setup features two separate interfaces, and they’ve added a display so you can see the person manipulating the surface. This opens up a whole new realm of possibilities for the tactile digital experience. The inFORM also has a projector shining on the surface, so objects shown from the other side can be both seen and touched — they use the example of opening a book and displaying its pages on the surface. To track hand movements they use a plain old Microsoft Kinect, which works extremely well. They also show off the table as a standalone unit, an interactive table. Now all they need to do is make the pixels smaller…
Stick around after the break to see some more awesome examples of the possibilities of this new tactile-digital interface. There are also some great clips near the end of the video showing off the complex linkage system that makes it all work.
Continue reading “inFORM the Morphing Table Gets Even More Interactive”
Have you ever wished your dinner table could pass the salt? Advancements at MIT may soon make this a reality — although it might spill the salt everywhere. Enter the inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation.
While the MIT paper doesn’t go into much detail about the hardware itself, there are a few juicy tidbits that explain how it works. The surface is made up of 900 individually actuated white polystyrene pins in a 30 × 30 array. An overhead projector provides visual guidance for the system. Each pin has 100 mm of travel and can exert a force of up to 1.08 N. Push-pull rods link the pins to their actuators, enabling the dense pin arrangement seen here and making the display size independent of the size of the actuators. Actuation comes from motorized slide potentiometers grouped in sets of six on custom PCBs driven by ATmega2560s — the potentiometers provide position feedback straight off the actuators, making for an excellent PID control loop. There is an excellent image of the entire system on page 8 of the paper that shows both the scale and complexity of the build. Sadly it does not look like something that could be easily built at home, but hey, we’d love for someone to prove us wrong!
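The paper doesn't publish the firmware, but the slide-potentiometer feedback loop it describes can be sketched as a textbook PID controller; the gains, update rate, and output scaling below are our assumptions, not values from the paper:

```python
# Hedged sketch of one pin's position loop, assuming the motorized slide
# potentiometer reports pin height in mm and the ATmega firmware runs a
# textbook PID at a fixed update rate. Gains, dt, and the output scaling
# are illustrative assumptions only.

class PinPID:
    def __init__(self, kp=2.0, ki=0.1, kd=0.05, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_mm, measured_mm):
        """Return a signed motor drive command (e.g. PWM duty)."""
        error = target_mm - measured_mm
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PinPID()
drive = pid.update(target_mm=50.0, measured_mm=40.0)  # pin is 10 mm low
```

With the potentiometer doubling as both actuator and position sensor, each of the 900 pins gets closed-loop control without any extra encoder hardware, which is likely why the design scales to a 30 × 30 array.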
Stick around after the break to see this fascinating piece of technology in action. The video was posted by a random Russian YouTube account, and we couldn’t find the original source; if you can, let us know in the comments!
Continue reading “inFORM: MIT’s Morphing Table”