Portabilizing The Kinect

Way back when the Kinect was first released, it was hailed as the future of everything 3D. It was augmented reality, it was a new computer interface, it was a cool sensor for robotics applications, and it was a 3D scanner. When the first open source driver for the Kinect was released, we were assured this was how we would get 3D data from real objects into a computer.

Since then, not much has happened. We’re not using the Kinect for a UI, potato gamers were horrified they would be forced to buy the Kinect 2 with the new Xbox, and you’d be hard-pressed to find a Kinect in a robot. 3D scanning is the only field where the Kinect hasn’t been overhyped, and even there it still requires a relatively complex setup.

This doesn’t mean a Kinect 3D scanner isn’t an object of desire for some people, or that it’s impossible to build a portabilized version. [Mario]’s girlfriend works as an archaeologist, and having a tool to scan objects and places in 3D would be great for her. Because of this, [Mario] is building a handheld 3D scanner with a Raspberry Pi 2 and a Kinect.

This isn’t the first time we’ve seen a portabilized Kinect. Way back in 2012, the Kinect was made handheld with the help of a Gumstix board. Since then, a million tiny ARM single-board computers have popped up, and battery packs are readily available. It was only a matter of time until someone stepped up to the plate, and [Mario] was the guy.

The problem facing [Mario] isn’t hardware. Anyone can pick up a Kinect at GameStop, the Raspberry Pi 2 should be more than capable of reading the depth sensor on the Kinect, and these parts can be tied together with 3D-printed mounts. The real problem is software, and so far [Mario] has libfreenect compiling without a problem on the Pi 2. The project still requires a lot of additional libraries, including some OpenCV components, but everything is working so far.
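For anyone following along at home, grabbing depth frames once libfreenect is built comes down to registering a callback and pumping the library’s event loop. Here’s a minimal sketch of that pattern (the header path varies by install, and the printout is just a stand-in for real processing):

```cpp
#include <cstdio>
#include <libfreenect/libfreenect.h>  // sometimes installed as "libfreenect.h"

// Called by libfreenect whenever a new 640x480 depth frame arrives.
// In 11-bit mode each pixel is a raw disparity value in a uint16_t.
static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
    uint16_t *frame = static_cast<uint16_t *>(depth);
    printf("depth frame at %u, center pixel: %u\n",
           timestamp, frame[240 * 640 + 320]);
}

int main()
{
    freenect_context *ctx;
    freenect_device *dev;

    if (freenect_init(&ctx, nullptr) < 0) return 1;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;  // first Kinect on the bus

    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_depth_mode(dev, freenect_find_depth_mode(
        FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_start_depth(dev);

    // Pump USB events; the callback fires from inside this loop.
    while (freenect_process_events(ctx) >= 0) { }

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
```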

You can check out his video of the proof of concept below.

Continue reading “Portabilizing The Kinect”

Fur Mirror

Interactive Fur Mirror Follows Your Every Move

We think artist [Daniel Rozin] spent a bit too much time wondering if he could make an interactive fur mirror, without wondering if he should. The result is strange, to say the least.

It’s called the PomPom Mirror, and it’s one of many interactive installations in the Descent With Modification exhibition at Bitforms; there’s even a super cute flock of penguins that spin around to create the same effect.

The mirror is 4 feet by 4 feet and 18″ deep. It holds 928 faux-fur pom-poms controlled by 464 motors, each effectively with an “on” and an “off” state. A Microsoft Kinect tracks movement and creates a black-and-white binary image of what it sees. The artist also programmed in a few animation sequences that make the mirror come alive, like some weird furry alien/plant thing…
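The depth-to-fur mapping is conceptually simple, even if the mechanics aren’t. As a rough sketch of the idea (not [Rozin]’s actual code, with a made-up 29×16 motor grid that happens to multiply out to 464, and assuming the depth stream is configured for millimeters), you threshold each depth pixel into body/background and downsample to one bit per motor:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical 29x16 grid (29 * 16 = 464 motors); the mirror's actual
// row/column layout isn't documented in the post.
constexpr int GRID_W = 29;
constexpr int GRID_H = 16;
constexpr int KINECT_W = 640;
constexpr int KINECT_H = 480;

// Reduce a depth frame (one uint16_t of millimeters per pixel) to one
// on/off state per motor: a cell is "on" when most of its pixels fall
// inside a near-field slab where a viewer would be standing.
std::vector<bool> pomPomStates(const uint16_t *depth_mm,
                               uint16_t near_mm = 500,
                               uint16_t far_mm = 1500)
{
    std::vector<bool> states(GRID_W * GRID_H, false);
    const int cell_w = KINECT_W / GRID_W;   // pixels per grid cell, truncated
    const int cell_h = KINECT_H / GRID_H;

    for (int gy = 0; gy < GRID_H; ++gy)
        for (int gx = 0; gx < GRID_W; ++gx) {
            int hits = 0;
            for (int y = gy * cell_h; y < (gy + 1) * cell_h; ++y)
                for (int x = gx * cell_w; x < (gx + 1) * cell_w; ++x) {
                    uint16_t d = depth_mm[y * KINECT_W + x];
                    if (d >= near_mm && d <= far_mm) ++hits;
                }
            // Majority vote: flip the pom-pom if over half the cell is "body".
            states[gy * GRID_W + gx] = hits > (cell_w * cell_h) / 2;
        }
    return states;
}
```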

Continue reading “Interactive Fur Mirror Follows Your Every Move”

3D Scanning Rotary Table

3D Scanning Rig And DIY Turntable

It seems almost every day 3D scanning becomes more accessible to the general DIYer. The hardware required is minimal, and there are several scanning programs and workflows to choose from. However, if you have ever slowly walked around a subject while holding a Kinect and trying to get a good scan, you know this is not an easy task. A quick internet search turns up several DIY scanning setups that have been cobbled together and lack substantial documentation… until now! [aldricnegrier] is fighting back and has designed and documented a rotary table that spins at a constant speed while a subject is 3D scanned, making scanning a person that much easier.

The project starts off with a plywood base with a Lazy Susan bearing assembly attached to the top. The Lazy Susan supports the rotating platform for the subject to stand on, but it’s not just a platform, it’s also a huge gear! The platform’s teeth mesh with a much smaller 3D-printed gear mounted on the shaft of a DC motor and reduction gearbox assembly.

Another goal of the project was to make the rotary table autonomous. An ultrasonic sensor mounted to the base is aimed at the space above the rotating platform. The sensor is connected to an Arduino, and if the system senses someone or something on the platform for 3 seconds, the Arduino commands a DC motor driver to start spinning the platform.
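That detection logic fits in a few lines of Arduino code. Here’s a rough sketch of the idea (the pin numbers, distance threshold, and motor speed are placeholders, not values from [aldricnegrier]’s build):

```cpp
// Hypothetical pin assignments -- adjust to match the actual wiring.
const int TRIG_PIN = 9;     // HC-SR04 style ultrasonic trigger
const int ECHO_PIN = 10;    // HC-SR04 style ultrasonic echo
const int MOTOR_EN = 5;     // PWM enable pin on the motor driver

unsigned long presentSince = 0;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  // Echo pulse width in microseconds; roughly 58 us per cm, round trip.
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // 0 on timeout
  return duration / 58;
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(MOTOR_EN, OUTPUT);
}

void loop() {
  long cm = readDistanceCm();
  bool present = (cm > 0 && cm < 100);   // someone is on the platform

  if (!present)               presentSince = 0;
  else if (presentSince == 0) presentSince = millis();

  // Only spin after 3 s of continuous presence, as in the build.
  bool spin = present && (millis() - presentSince > 3000);
  analogWrite(MOTOR_EN, spin ? 128 : 0);   // half speed via PWM
  delay(50);
}
```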

As cool as this project is so far, [aldricnegrier] wanted to make it even cooler: he added speech recognition. Using Microsoft’s Speech Toolkit, saying the words ‘Start Skanect‘ starts the scanning process on the PC. Now a single person can scan themselves easily and reliably.

[aldricnegrier] has made all of his CAD files, STL files and Arduino code available so anyone wanting to build this clearly capable setup can do so!

Printing Photorealistic Images On 3D Objects

Hydrographic printing is a technique for transferring colored inks from a film to the surface of an object. The film is floated on water and activated with a chemical that allows it to adhere to an object physically pushed down onto it. Researchers at Zhejiang University and Columbia University have taken hydrographic printing to the next level (pdf link). In a technical paper to be presented at ACM SIGGRAPH 2015 in August, they explain how they developed a computational method to create complex patterns that are precisely aligned to the object.

Typically, repetitive patterns are used because the object stretches the adhesive film; anything complex would distort unpredictably during this imprecise process. It’s commonly used to decorate car parts, especially rims and grilles. If you’ve ever seen a carbon-fiber pattern without the actual fiber, it was probably applied with hydrographic printing.

The physical setup for this hack is fairly simple: a vat of water, a linear motor attached to a gripper, and a Kinect. The object is attached to the gripper, and the Kinect measures its location and orientation. This data is applied to a 3D scan of the object along with the desired texture map to be printed onto it. A program creates a virtual simulation of the printing process, outputting a specific pattern onto the film that accounts for the warping inherent to the process. The pattern is then printed onto the film using an ordinary inkjet printer.
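To get a feel for what that simulation has to do, consider the much simpler problem with all film stretching ignored: for each point of the film, find which spot on the object will land there during a straight vertical dip, then print that spot’s target color. A sketch of that bare inverse mapping follows; the ray cast against the scanned mesh is abstracted as a caller-supplied callback, and the paper’s actual method additionally simulates how the film stretches as the object descends, which this deliberately omits.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

struct Color { uint8_t r, g, b; };

// Returns true if a -Z ray dropped at film position (x, y) hits the
// scanned mesh, writing the surface texture coordinates into (u, v).
using RaycastDown = std::function<bool(float x, float y, float &u, float &v)>;

// Build the image to send to the inkjet: one film pixel per loop
// iteration, colored by whatever surface point will land on it.
std::vector<Color> buildFilmImage(const RaycastDown &raycast,
                                  const std::vector<Color> &tex,
                                  int tex_w, int tex_h,
                                  int film_w, int film_h,
                                  float mm_per_pixel)
{
    std::vector<Color> film(film_w * film_h, Color{255, 255, 255});
    for (int py = 0; py < film_h; ++py) {
        for (int px = 0; px < film_w; ++px) {
            float u, v;  // texture coordinates of the surface hit, if any
            if (raycast(px * mm_per_pixel, py * mm_per_pixel, u, v)) {
                // Nearest-neighbor texture lookup at (u, v) in [0, 1].
                int tx = std::min(int(u * tex_w), tex_w - 1);
                int ty = std::min(int(v * tex_h), tex_h - 1);
                film[py * film_w + px] = tex[ty * tex_w + tx];
            }
        }
    }
    return film;
}
```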

The tiger mask is our personal favorite, along with the leopard cat. They illustrate just how complex the surface patterns can get using single or multiple immersions, respectively. This system also accounts for objects of a variety of shapes and sizes, though the researchers admit there is a physical limit to how concave the parts of an object can be. Colors will fade or the film will split if stretched too thin. Texture mapping can now be physically realized in a simple yet effective way, with amazing results.

Continue reading “Printing Photorealistic Images On 3D Objects”

Take A Spin On This Voice-Controlled 3D Scanning Rig

[Aldric Negrier] wanted to make 3D-scanning a person streamlined and simple. To that end, he created this voice-controlled 3D-scanning rig.

[Aldric] used a variety of hacking skills to make this project, and his thorough Instructable illustrates this nicely. Everything from CNC milling to Arduino programming to 3D-printing was incorporated into the making of this rig. Plywood was used to construct the base and the large toothed gear. A 12″ Lazy Susan bearing was attached to this gear to allow smooth rotation. In order to automate the rig, a 12V DC geared motor was attached to a smaller 3D-printed gear and positioned on the base. When the motor is on, the smaller gear’s teeth take the larger gear for a spin. He used a custom dual H-bridge motor driver made by a friend, which is connected to an Arduino Nano. The Nano is also connected to a Bluetooth module and an ultrasonic range finder. When an object is detected within 1 to 35 cm of the sensor for 3 seconds, the motor starts to spin, stopping when the object is no longer detected. A typical scan takes about 60 seconds.
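On the Arduino side, the Bluetooth link boils down to reading characters from the module’s serial stream and poking the H-bridge accordingly. Something along these lines would do it; the pin assignments and the one-letter command protocol here are invented for illustration, not taken from [Aldric]’s code:

```cpp
#include <SoftwareSerial.h>

// Hypothetical wiring: HC-05 style Bluetooth module on pins 2/3,
// H-bridge direction inputs and PWM enable as below.
SoftwareSerial bt(2, 3);         // RX, TX
const int DIR_A = 7, DIR_B = 8;  // H-bridge direction inputs
const int EN_PWM = 5;            // H-bridge enable (PWM speed)

int motorSpeed = 128;            // half speed by default
bool running = false, forward = true;

void applyMotor() {
  digitalWrite(DIR_A, forward ? HIGH : LOW);
  digitalWrite(DIR_B, forward ? LOW : HIGH);
  analogWrite(EN_PWM, running ? motorSpeed : 0);
}

void setup() {
  pinMode(DIR_A, OUTPUT);
  pinMode(DIR_B, OUTPUT);
  pinMode(EN_PWM, OUTPUT);
  bt.begin(9600);
  applyMotor();
}

void loop() {
  if (!bt.available()) return;
  switch (bt.read()) {               // made-up one-letter commands
    case 'g': running = true;  break;                    // go
    case 's': running = false; break;                    // stop
    case 'r': forward = !forward; break;                 // reverse
    case '+': motorSpeed = min(motorSpeed + 25, 255); break;  // faster
    case '-': motorSpeed = max(motorSpeed - 25, 0);   break;  // slower
  }
  applyMotor();
}
```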

This alone would have been a great project, but [Aldric] did not stop there. He wanted to be able to step on the rig and issue commands while being scanned. It makes sense if you want to scan yourself: get on the rig, assume the desired position, and then initiate the scan. He used the Windows speech recognition SDK to develop an application that issues commands via Bluetooth and to Skanect, a piece of 3D-scanning software. The commands are as simple as saying “Start Skanect.” You can also tell the motor to switch on or off and change its speed or direction without breaking form. [Aldric] used an Asus Xtion as his 3D scanner, but a Kinect will also work. Afterwards, he smoothed his scans using MeshMixer, a program featured in previous hacks.

Check out the videos of the rig after the break. The voice commands are difficult to make out over the background music in one of the videos, but if you listen carefully you can catch them. You can also see more of [Aldric’s] projects here or on this YouTube channel.

Continue reading “Take A Spin On This Voice-Controlled 3D Scanning Rig”

ANUBIS, A Natural User Bot Interface System

[Matt], [Andrew], [Noah], and [Tim] have a pretty interesting build for their capstone project at Ohio Northern University. They’re using a Microsoft Kinect and a Leap Motion to create a natural user interface for controlling humanoid robots.

The robot the team is using for this project is a tracked humanoid robot they’ve affectionately come to call Johnny Five. Johnny takes commands from a computer, Kinect, and Leap Motion to move the chassis, arm, and gripper around in a way that’s somewhat natural, and surely a lot easier than controlling a humanoid robot with a keyboard.
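The core of any skeletal teleoperation scheme like this is easy to sketch: take three tracked joint positions, compute the angle between the limb segments, and send that angle to the corresponding servo. Here’s a hedged example of that one piece, not the team’s actual implementation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float len(Vec3 a)         { return std::sqrt(dot(a, a)); }

// The elbow servo angle is just the angle between the upper arm and
// forearm vectors, both measured from the elbow joint.
float jointAngleDeg(Vec3 shoulder, Vec3 elbow, Vec3 wrist)
{
    Vec3 upper = sub(shoulder, elbow);
    Vec3 fore  = sub(wrist, elbow);
    float c = dot(upper, fore) / (len(upper) * len(fore));
    if (c >  1.0f) c =  1.0f;   // clamp numeric noise before acos
    if (c < -1.0f) c = -1.0f;
    return std::acos(c) * 180.0f / 3.14159265f;
}

// Map the 0-180 degree joint angle onto the servo's usable range.
int toServoCommand(float deg, int servo_min = 10, int servo_max = 170)
{
    float t = deg / 180.0f;
    return servo_min + int(t * (servo_max - servo_min));
}
```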

The team has also released all their software onto GitHub under an open source license. You can grab that over on the Gits, or take a look at some of the pics and videos from the Columbus Mini Maker Faire.

Seeing The World Through Depth Sensing Cameras

The Oculus Rift and all the other 3D video goggle solutions out there are great if you want to explore virtual worlds with stereoscopic vision, but until now we haven’t seen anyone exploring real life with digital stereoscopic viewers. [pabr] combined the Kinect-like sensor in an ASUS Xtion with a smartphone in a Google Cardboard-like setup for 3D views the human eye can’t naturally experience: a third-person view, a radar-like display, and a look at what the world would be like with your eyes 20 inches apart.

[pabr] is using an ASUS Xtion depth sensor connected to a Galaxy SIII via the USB OTG port. With a little bit of code, the output from the depth sensor can be pushed to the phone’s display. The hardware setup consists of a VR-Spective, a rather expensive bit of plastic, but with the right mechanical considerations, a piece of cardboard or some foam board and hot glue would do quite nicely.
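Under the hood, every trick in this project starts the same way: back-projecting the depth image into a metric point cloud that can be re-rendered from arbitrary virtual viewpoints. A sketch of that step (the intrinsics below are ballpark figures for this class of sensor, not calibrated values):

```cpp
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

// Approximate pinhole intrinsics for the 640x480 depth stream on
// Kinect/Xtion class sensors.
constexpr float FX = 570.0f, FY = 570.0f;  // focal lengths in pixels
constexpr float CX = 320.0f, CY = 240.0f;  // principal point

// Back-project a depth frame (millimeters per pixel) into a metric
// point cloud. Once the scene is 3D points, it can be re-rendered from
// any virtual viewpoint, which is all the "eyes 20 inches apart" trick
// needs: render the cloud twice with the cameras offset by ~50 cm.
std::vector<Point3> depthToCloud(const uint16_t *depth_mm, int w, int h)
{
    std::vector<Point3> cloud;
    cloud.reserve(w * h);
    for (int v = 0; v < h; ++v)
        for (int u = 0; u < w; ++u) {
            uint16_t d = depth_mm[v * w + u];
            if (d == 0) continue;       // 0 marks "no reading"
            float z = d * 0.001f;       // mm -> meters
            cloud.push_back({(u - CX) * z / FX,
                             (v - CY) * z / FY,
                             z});
        }
    return cloud;
}
```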

[pabr] put together a video demo of his build, along with a few examples of what this project can do. It’s rather odd, and surprisingly not a superfluous way to see in 3D. You can check out that video below.

Continue reading “Seeing The World Through Depth Sensing Cameras”