Roomba Now Able To Hunt Arnold Schwarzenegger

Ever since the Roomba was invented, humanity has been one step closer to a Jetsons-style future with robots performing all of our tedious tasks for us. The platform is so ubiquitous and popular with the hardware hacking community that almost anything that could be put on a Roomba has been done already, with one major exception: a Roomba with heat vision. Thanks to [marcelvarallo], though, there’s now a Roomba with almost all of the capabilities of the Predator.

The Roomba isn’t just sporting an infrared camera, though. This Roomba comes fully equipped with a Raspberry Pi for wireless connectivity, audio in and out, video streaming from a webcam (and the FLiR infrared camera), and control over the motors. Everything is wired to the internal battery, which allows for automatic recharging, but the impressive part of this build is that it’s all done non-destructively, so the Roomba can be reverted to a normal vacuum cleaner if the need arises.

If it sweeps at just the right time, the heat camera might be the key to the messy problem we discussed on Wednesday.
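
The write-up focuses on the hardware, so take this as a sketch rather than [marcelvarallo]’s actual code: driving a Roomba’s motors from a Pi generally goes through iRobot’s documented Open Interface (OI) serial protocol. Assuming pyserial, a Pi UART wired to the Roomba’s mini-DIN port, and a newer model that talks at 115200 baud (older ones use 57600), the basics look something like this:

```python
# Minimal sketch: driving a Roomba from a Raspberry Pi over the iRobot
# Open Interface (OI). Port name and baud rate are assumptions; check
# your model's OI spec before trusting the numbers.
import struct
import time

import serial

OI_START, OI_SAFE, OI_DRIVE = 128, 131, 137

def drive(port, velocity_mm_s, radius_mm):
    """Send an OI Drive command: signed 16-bit velocity and turn radius."""
    port.write(bytes([OI_DRIVE]) + struct.pack(">hh", velocity_mm_s, radius_mm))

with serial.Serial("/dev/ttyAMA0", 115200, timeout=1) as roomba:
    roomba.write(bytes([OI_START]))  # wake up the OI
    time.sleep(0.1)
    roomba.write(bytes([OI_SAFE]))   # safe mode: motors live, cliff sensors active
    time.sleep(0.1)
    drive(roomba, 200, 0x7FFF)       # 200 mm/s; 0x7FFF is the "drive straight" radius
    time.sleep(2)
    drive(roomba, 0, 0x7FFF)         # stop
```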

The only thing stopping this from hunting humans is the addition of some sort of weapons. Perhaps this sentry gun or maybe some exploding rope. And, if you don’t want your vacuum cleaner to turn into a weapon of mass destruction, maybe you could just turn yours into a DJ.

Atari Archaeology Without Digging Up Landfill Sites

We are fortunate to live in an age of commoditized high-power computer hardware and driver abstraction, in which most up-to-date computers can handle more or less any task that has to keep up with a human’s attention without breaking a sweat. Processors are very fast, memory is plentiful, and 3D graphics acceleration is both speedy and ubiquitous.

Thirty years ago it was a different matter on the desktop. Even the fastest processors of the day would struggle to perform on their own all the tasks demanded of them by a 1980s teenager who had gained a taste for arcade games. The manufacturers rose to this challenge by surrounding whichever CPU they had chosen with custom co-processors: ASICs that took on the heavy lifting of 2D graphics acceleration, audio, and music synthesis.

One of the 1980s objects of computing desire was the Atari ST, featuring a Motorola 68000 processor, a then-astounding 512k of RAM, a GUI OS, high-res colour graphics, and 3.5″ floppy drive storage. Had you opened up the case of your ST, you’d have found those ASICs we mentioned, responsible for its impressive spec.

Jumping forward three decades, [Christian Zietz] found that there was frustratingly little information on the internal workings of the ST ASICs. Since a trove of backed-up data became available when Atari closed down, he thought it would be worth digging through it to see what he could find. His write-up is a story of detective work in ancient OS and backup-software archaeology, but it paid off: he found schematics not only for an ASIC from an unreleased Atari product but also for the early ST ASICs he was looking for. There are hundreds of pages of schematics and timing diagrams, which will surely take the efforts of many Atari enthusiasts to fully understand, and best of all he thinks there are more to be unlocked.

We’ve covered a lot of Atari stories over the years, but many of them have related to their other products such as the iconic 2600 console. We have brought you news of an open-source ST on an FPGA though, and more recently the restoration of an ST that had had a hard life. The title of this piece refers to the fate of Atari’s huge unsold stocks of 2600 console cartridges, such a disastrous marketing failure that unsold cartridges were taken to a New Mexico landfill site in 1983 and buried. We reported on the 2013 exhumation of these video gaming relics.

A tip of the hat to Hacker News for bringing this to our attention.

Atari ST image, Bill Bertram (CC-BY-2.5) via Wikimedia Commons.

Kinect And Raspberry Pi Add Focus Pulling To DSLR

Prosumer DSLRs have been a boon to the democratization of digital media. Gear that once commanded professional prices is now available to those on more modest budgets. Not only has this unleashed a torrent of online content, it has also started a wave of camera hacks and accessories, like this automatic focus puller based on a Kinect and a Raspberry Pi.

For [Tom Piessens], the Canon EOS 5D has been a solid platform, but it suffers from a problem. The narrow depth of field possible with DSLRs makes it difficult to maintain focus on subjects that are moving relative to the camera, making follow-focus scenes like this classic hard to reproduce. Aiming for a better system than the stock autofocus, [Tom] grafted a Kinect sensor and a stepper-motor actuator to a Raspberry Pi and used the Kinect’s depth map to drive the focus ring. Parts are laser-cut, including a nice enclosure for the Pi and display that makes the whole thing reasonably portable. The video below shows the focus remaining locked on a selected region of interest. It seems like movement along only one axis is tracked; we’d love to see this system expanded to follow a designated object no matter where it moves in the frame.
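
[Tom]’s code isn’t reproduced here, but the control loop is easy to picture: sample the depth over a region of interest and chase it with the stepper. Here’s a minimal sketch of that idea, assuming the libfreenect Python bindings and RPi.GPIO; the pins, ROI, and steps-per-millimeter gain are placeholders, and a real lens would want a calibration table rather than a linear gain:

```python
# Toy follow-focus loop: median Kinect depth over an ROI drives a stepper
# on the focus ring. Pins, gain, and ROI below are made-up placeholders.
import time

import freenect
import numpy as np
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 23, 24                # hypothetical driver wiring
STEPS_PER_MM = 0.5                        # hypothetical focus-ring gain
ROI = (slice(200, 280), slice(280, 360))  # rows, cols covering the subject

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

def step(n):
    """Pulse the driver |n| steps; the sign picks the direction."""
    GPIO.output(DIR_PIN, n > 0)
    for _ in range(abs(n)):
        GPIO.output(STEP_PIN, True)
        time.sleep(0.0005)
        GPIO.output(STEP_PIN, False)
        time.sleep(0.0005)

pos_steps = None
while True:
    depth, _ = freenect.sync_get_depth(format=freenect.DEPTH_MM)
    roi = depth[ROI]
    valid = roi[roi > 0]                  # zero pixels mean no depth reading
    if valid.size == 0:
        continue                          # subject lost; hold focus
    target = int(round(float(np.median(valid)) * STEPS_PER_MM))
    if pos_steps is None:
        pos_steps = target                # first frame: assume focus starts here
    step(target - pos_steps)
    pos_steps = target
```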

If you’re in need of a follow-focus rig but don’t have a geared lens, check out these 3D-printed lens gears. They’d be a great complement to this backwoods focus-puller.


Much More Than A Desktop Mill: The DIY VMC Build

A VMC (vertical machining center) is essentially a CNC vertical milling machine on steroids. Many CNC mills are just manual milling machines that have been converted to CNC control. They work nicely, but have a number of drawbacks when it comes to real-world CNC milling: manual tool changes, lack of chip collection, lack of coolant containment, and backlash issues (which a manual machinist normally compensates for).

These problems are solved with a VMC, which will usually have an automatic tool changer and an enclosure to contain coolant and wash chips down into a collection pan. They are, however, very expensive, very big, and very heavy. Building one from scratch is a massive undertaking, but one which [Chris DePrisco] was brave enough to take on.


EL Wire Gets Some Touching After Effects

If you thought glowy wearables had had their time, guess again! After a few years designing on the side, [Josh] and [his dad] have created a nifty feature for EL wire: they’ve made it touch-sensitive. But, of course, rather than simply show it off to the world, they’ve launched a Kickstarter campaign to put touch-sensitive EL wire in the hands of any fashion-inspired electronics enthusiast.

EL wire (and tape) is composed of two conducting wires separated by a phosphor layer. (Starting to sound like a capacitor?) While the details are, alas, closed for now, the interface is Arduino compatible, making it wide open to a general audience of enthusiasts without the need for years of hard-won programming experience. The unit itself, dubbed the Whoaboard, contains the EL wire drivers for four channels at about 10 ft of wire length.

EL wire has always been a crowd favorite around these parts (especially in Russia). We love that [Josh’s] Whoaboard takes a conventional material that might already be lying around your shelves and transforms it into a fresh new interface. With touch sensitivity, we can’t wait to see the community start rolling out everything from costumes to glowy alien cockpits.

Have a look at [Josh’s] creation after the break!


Shop-built Inspection Camera Lends Optical Help On A Budget

As your builds get smaller and your eyes get older, you might appreciate a little optical assistance around the shop. Stereo microscopes and inspection cameras are great additions to your bench, but often command a steep price. So this DIY PCB inspection microscope might be just the thing if you’re looking to roll your own and save a few bucks.

It’s not fancy, and it’s not particularly complex, but [Saulius]’ build does the job, mainly because he thought the requirements through before starting the build. MDF is used for the stand because it’s dimensionally stable, easy to work, and heavy, which tends to stabilize motion and dampen vibration. The camera itself is an off-the-shelf USB unit with a CS mount that allows a wide range of lenses to be fitted. A $20 eBay macro slider allows for fine positioning, and a ring light stolen from a stereo microscope provides shadow-free lighting.

We’d say the most obvious area for improvement would be a linkage on the arm to keep the plane of the lens parallel to the bench, but even as it is this looks like a solid build with a lot of utility – especially for hackers looking to age in place at the bench.

Interactive Dynamic Video

If a picture is worth a thousand words, a video must be worth millions. However, computers still aren’t very good at analyzing video. Machine vision software like OpenCV can do certain tasks like facial recognition quite well. But current software isn’t good at determining the physical nature of the objects being filmed. [Abe Davis, Justin G. Chen, and Fredo Durand] are members of the MIT Computer Science and Artificial Intelligence Laboratory. They’re working toward a method of determining the structure of an object based upon the object’s motion in a video.

The technique relies on vibrations, which can be captured by a typical 30 or 60 frames-per-second (fps) camera. Here’s how it works: a locked-down camera images an object. The object is moved by wind, someone banging on it, or any other mechanical means, and this movement is captured on video. The team’s software then analyzes the video to see exactly where the object moved, and by how much. Complex objects can have many vibration modes. The wire-frame figure used in the video is a great example: the hands of the figure will vibrate more than the figure’s feet. The software uses this information to construct a rudimentary model of the object being filmed. It then allows the user to interact with the object by clicking and dragging with a mouse. Dragging the hands will produce more movement than dragging the feet.
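
The paper has the real pipeline, but the first ingredient, measuring how much each point in a locked-down video wiggles over time, is something you can toy with using off-the-shelf tools. Here’s a crude stand-in (emphatically not the CSAIL method) that assumes OpenCV and a tripod-shot clip named object.mp4: it tracks feature points and looks at the frequency content of their motion.

```python
# Toy motion analysis: track feature points through a tripod-shot video and
# FFT their displacement to find candidate vibration frequencies. A real
# implementation would filter on tracking status and work sub-pixel.
import cv2
import numpy as np

cap = cv2.VideoCapture("object.mp4")      # hypothetical input clip
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Pick well-textured points to follow, then track them frame to frame.
points = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01, minDistance=8)
tracks = [points.reshape(-1, 2)]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, points, None)
    tracks.append(nxt.reshape(-1, 2))
    prev, points = gray, nxt

motion = np.stack(tracks)                 # shape: (frames, points, xy)
displacement = motion - motion[0]         # wiggle relative to the first frame
spectrum = np.abs(np.fft.rfft(displacement[:, :, 0], axis=0))
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
freqs = np.fft.rfftfreq(displacement.shape[0], d=1.0 / fps)
dominant = freqs[np.argmax(spectrum[1:], axis=0) + 1]  # skip the DC bin
print("median dominant frequency: %.1f Hz" % np.median(dominant))
```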

The results aren’t perfect – they remind us of computer-animated objects from just a few years ago. However, this is very promising. These aren’t textured wire frames created in 3D modeling software; the models and skeletons were created automatically by software analysis. The team’s research paper (PDF link) contains all the details. Check it out, and check out the video after the break.
