3D Popup Cards From 3D Photos

The world of 3D printing is growing rapidly. Some might say it's growing layer by layer. But there was one area [Ken] wanted to improve upon: 3D photos. Specifically, printing a 3D pop-up-style photograph that collapses flat to save space so you can easily carry it around.

It’s been possible to take 3D scans of objects and render a 3D print for a while now, but [Ken] wanted something a little more portable. His 3D pop-up photographs are similar to pop-up books for children, in that when the page is unfolded a three-dimensional shape distances itself from the background.

The process works by taking a normal 3D photo. With the help of some software, points that fall within the same distance band from the camera are grouped into a layer. From there, the layers can be printed in the old two-dimensional fashion and then connected to achieve the 3D effect. Using a Kinect or similar depth camera would allow for any number of layers and ways of using this method. So we're throwing down the gauntlet: we want to see an arms race of pop-up photographs. Who will be the one to have the most layers, and who will find a photograph subject that makes the most sense in this medium? Remember how cool those vector-cut topographical maps were? There must be a similarly impressive application for this!
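To get a feel for how the layering step could work, here is a minimal Python sketch (our illustration, not [Ken]'s software) that slices a depth image into equal-width bands and writes one printable silhouette per band. The file names and the five-band count are placeholder assumptions.

```python
import numpy as np
from PIL import Image

N_LAYERS = 5                                   # hypothetical layer count

# Hypothetical input: a depth image saved alongside the colour photo.
depth = np.array(Image.open("photo_depth.png"), dtype=float)
valid = depth > 0                              # zero usually means "no reading"

# Equal-width depth bands between the nearest and farthest valid points.
edges = np.linspace(depth[valid].min(), depth[valid].max(), N_LAYERS + 1)
band = np.digitize(depth, edges[1:-1])         # band index 0..N_LAYERS-1 per pixel

for i in range(N_LAYERS):
    mask = valid & (band == i)
    # One black-and-white silhouette per band, ready to print and stack.
    Image.fromarray((mask * 255).astype(np.uint8)).save(f"layer_{i}.png")
```

Each output image is a flat cutout for one depth band; spacing the printed layers apart gives the pop-up effect.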

[Ken] isn’t a stranger around these parts. He was previously featured for his unique weather display and his semi-real-life Mario Kart, so be sure to check those out as well.

Head Gesture Tracking Helps Limited Mobility Students

There is a lot of helpful technology for people with mobility issues. Even something that can help people do something most of us wouldn’t think twice about, like turn on a lamp or control a computer, can make a world of difference to someone who can’t move around as easily. Luckily, [Matt] has been working on using webcams and depth cameras to allow someone to do just that.

[Matt] found that webcams tend to be less obtrusive than depth cameras (like the Kinect), but they are limited in their ability to distinguish individual users and, of course, don't have the same 3D capability. With either technology, though, the software implementation is similar: the camera detects head motion and controls software accordingly by emulating keystrokes. Depth cameras end up a little more user-friendly, allowing users to move in whichever way feels comfortable for them.
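As a rough idea of the webcam approach, here's a minimal Python sketch (not [Matt]'s software) that tracks the user's face with OpenCV and emulates arrow-key presses when the head moves well off-center. The 35%/65% trigger thresholds and the keys chosen are made-up values.

```python
import cv2
import pyautogui

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        center_x = x + w / 2
        width = frame.shape[1]
        # A head position well off-center triggers an arrow-key press.
        if center_x < width * 0.35:
            pyautogui.press("left")
        elif center_x > width * 0.65:
            pyautogui.press("right")
    cv2.imshow("head tracker", frame)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

The same structure works with a depth camera; you'd simply swap the face detector for a skeleton or head-position stream.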

This isn't the first time something like a Kinect has been used to track motion, but for [Matt] and his work at Beaumont College it has been an important area of ongoing research. It's especially helpful since the campus has many things on network switches (like lamps), so this software can be used to help people interact much more easily with the physical world. This project could be very useful to anyone curious about tracking motion, even if they're not using it for mobility reasons.

Augmented Reality Sandbox Using A Kinect

Want to make all your 5-year-old son's friends jealous? What if he told them he could make REAL volcanoes in his sandbox? Will this be the future of sandboxes, digitally enhanced with augmented reality?

It's not actually that hard to set up! The system consists of a good computer running Linux, a Kinect, a projector, a sandbox, and sand. And that's it! The University of California, Davis has set up a few of these systems now to teach children about geography, which is a really cool demonstration of both 3D scanning and projection mapping. As you can see in the animated GIF above, the Kinect tracks the topography of the sand, and the projector maps its "reality" back onto it. In this case, a mini volcano.
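For a sense of how little glue code the basic loop needs, here's a stripped-down Python sketch (not the UC Davis software) that grabs a Kinect depth frame, colors it by elevation, and pushes it fullscreen to the projector. It skips the projector/Kinect calibration entirely, and the depth-scaling constant is an assumption.

```python
import cv2
import freenect

cv2.namedWindow("sandbox", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("sandbox", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    depth, _ = freenect.sync_get_depth()            # raw 11-bit Kinect depth, 640x480
    depth8 = cv2.convertScaleAbs(depth, alpha=255.0 / 2048.0)
    # Nearer sand reads as a lower value, so invert it: tall piles get "hot" colours.
    relief = cv2.applyColorMap(255 - depth8, cv2.COLORMAP_JET)
    cv2.imshow("sandbox", relief)
    if cv2.waitKey(30) & 0xFF == 27:                # Esc quits
        break

cv2.destroyAllWindows()
```

The real installation also warps the image so the projected colors line up with the sand, which is where most of the setup effort goes.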


Virtual Physical Rehab With Kinect

Web sites have figured out that “gamifying” things increases participation. For example, you’ve probably boosted your postings on a forum just to get a senior contributor badge (that isn’t even really a badge, but a picture of one). Now [Yash Soni] has brought the same idea to physical therapy.

[Yash]'s father had to go through boring physical therapy to treat a slipped disk, and that prompted [Yash] to develop KinectoTherapy, which aims to make therapy more like a video game. They claim it can be used to help many types of patients, ranging from stroke victims to those with cerebral palsy.

Patients see their on-screen avatar duplicate their motions, and the system provides audio and visual feedback when the player makes a move correctly or incorrectly. Statistical data is also available to the patient's health care professionals.
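To illustrate the kind of check a Kinect rehab game might make, here's a hypothetical Python snippet (not KinectoTherapy's code) that measures the elbow angle from three skeleton joints and decides whether a repetition counts. The joint coordinates, target angle, and tolerance are all assumed values.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by segments b->a and b->c."""
    ba = [a[i] - b[i] for i in range(3)]
    bc = [c[i] - b[i] for i in range(3)]
    dot = sum(ba[i] * bc[i] for i in range(3))
    mag = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(dot / mag))

def check_rep(shoulder, elbow, wrist, target=170.0, tolerance=10.0):
    """Return feedback for one arm-extension repetition."""
    angle = joint_angle(shoulder, elbow, wrist)
    if abs(angle - target) <= tolerance:
        return "good", angle     # play the 'correct' cue and score the rep
    return "try again", angle    # play the corrective prompt instead

# Example frame: a nearly straight arm, joint positions in metres.
print(check_rep((0.0, 1.4, 2.0), (0.3, 1.4, 2.0), (0.6, 1.42, 2.0)))
```

Logging the angle from each repetition is also what makes the statistics for the therapist possible.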


Portabilizing The Kinect

Way back when the Kinect was first released, there was a realization that this device would be the future of everything 3D. It was augmented reality, it was a new computer interface, it was a cool sensor for robotics applications, and it was a 3D scanner. When the first open source driver for the Kinect was released, we were assured that this is how we would get 3D data from real objects into a computer.

Since then, not much has happened. We're not using the Kinect for a UI, gamers were horrified they would be forced to buy the Kinect 2 with the new Xbox, and you'd be hard-pressed to find a Kinect in a robot. 3D scanning is the only field where the Kinect hasn't been overhyped, and even there it's still a relatively complex setup.

This doesn't mean a Kinect 3D scanner isn't an object of desire for some people, or that it's impossible to build a portabilized version. [Mario]'s girlfriend works as an archaeologist, and having a tool to scan objects and places in 3D would be great for her. Because of this, [Mario] is building a handheld 3D scanner with a Raspberry Pi 2 and a Kinect.

This isn't the first time we've seen a portabilized Kinect. Way back in 2012, the Kinect was made handheld with the help of a Gumstix board. Since then, a million tiny ARM single-board computers have popped up, and battery packs are readily available. It was only a matter of time until someone stepped up to the plate, and [Mario] was the guy.

The problem facing [Mario] isn't hardware. Anyone can pick up a Kinect at GameStop, the Raspberry Pi 2 should be more than capable of reading the Kinect's depth sensor, and everything can be tied together with 3D-printed parts. The real problem is the software. So far [Mario] has libfreenect compiling without a problem on the Pi 2; the project still requires a number of additional libraries, including some OpenCV stuff, but everything is working.
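If you're following along at home, a quick sanity check on a setup like this might look something like the Python below (our example, not [Mario]'s code): grab a single depth frame through the libfreenect bindings and dump it to disk.

```python
import cv2
import freenect
import numpy as np

depth, timestamp = freenect.sync_get_depth()   # blocks until a frame arrives
print("Got a", depth.shape, "depth frame at timestamp", timestamp)

# Shift the 11-bit readings down to 8 bits so the frame can be viewed as an image.
depth8 = (depth >> 3).astype(np.uint8)
cv2.imwrite("depth_frame.png", depth8)
```

Once frames are coming in reliably, the heavier lifting of registering them into a single 3D model is where the extra libraries come in.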

You can check out his video of the proof of concept below.


Interactive Fur Mirror Follows Your Every Move

We think artist [Daniel Rozin] spent a bit too much time wondering if he could make an interactive fur mirror, without wondering if he should. The result is… strange — to say the least.

It's called the PomPom Mirror, and it's one of many interactive installations in the Descent With Modification exhibit at Bitforms; there's even a super cute flock of penguins which spin around to create the same effect.

The mirror is 4 by 4 feet and 18″ deep. It has 928 faux-fur pom-poms controlled by 464 motors, each effectively with an "on" and "off" state. A Microsoft Kinect tracks movement and creates a black-and-white binary image of what it sees. The artist also programmed in a few animation sequences which make the mirror come alive, like some weird furry alien/plant thing…
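For a rough picture of the sensing side, here's a hypothetical Python sketch (not [Daniel]'s code) that turns a Kinect depth frame into a binary silhouette and downsamples it to a small grid of on/off actuator states. The grid size and the depth cutoff are made-up numbers.

```python
import cv2
import freenect
import numpy as np

GRID_W, GRID_H = 29, 16             # hypothetical actuator layout
NEAR_RAW = 700                      # rough "standing close to the mirror" cutoff

depth, _ = freenect.sync_get_depth()                 # raw values grow with distance
silhouette = (depth < NEAR_RAW).astype(np.float32)   # 1.0 where a viewer is close

# Average each grid cell; a cell switches "on" if most of its pixels see the viewer.
cells = cv2.resize(silhouette, (GRID_W, GRID_H), interpolation=cv2.INTER_AREA)
states = cells > 0.5                # one boolean per motor: fluff out or pull back
print(states.astype(int))
```

The real artwork layers smoothing and the canned animation sequences on top of this, so the fur ripples rather than snapping between states.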


Printing Photorealistic Images On 3D Objects

Hydrographic printing is a technique for transferring colored inks on a film to the surface of an object. The film is floated on water and activated with a chemical that allows it to adhere to an object as it is physically pushed down into it. Researchers at Zhejiang University and Columbia University have taken hydrographic printing to the next level (PDF link). In a technical paper to be presented at ACM SIGGRAPH 2015 in August, they explain how they developed a computational method to create complex patterns that are precisely aligned to the object.

Typically, repetitive patterns are used because the object stretches the adhesive film; anything more complex would distort unpredictably as the film stretches. It's commonly used to decorate car parts, especially rims and grilles. If you've ever seen a carbon-fiber pattern without the actual fiber, it's probably been applied with hydrographic printing.

The physical setup for this hack is fairly simple: a vat of water, a linear motor attached to a gripper, and a Kinect. The object is attached to the gripper. The Kinect measures its location and orientation. This data is applied to a 3D scan of the object along with the desired texture map to be printed onto it. A program creates a virtual simulation of the printing process, outputting a specific pattern onto the film that accounts for the warping inherent to the process. The pattern is then printed onto the film using an ordinary inkjet printer.
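To make the idea concrete, here's a deliberately oversimplified Python sketch of the projection step. The researchers' actual pipeline simulates how the film stretches; this version ignores stretching and just stamps each scanned vertex's texture color where it would first touch the film. All file names, the dip pose, and the film resolution are assumptions.

```python
import numpy as np
from PIL import Image

PIXELS_PER_MM = 4
FILM_SIZE_MM = 300                      # square film sheet, hypothetical

verts = np.load("scan_vertices.npy")    # (N, 3) points from the 3D scan, in mm
colors = np.load("vertex_colors.npy")   # (N, 3) uint8 RGB from the texture map

# Dip pose measured by the Kinect: rotation R and translation t (assumed known here).
R = np.eye(3)
t = np.array([150.0, 150.0, 80.0])      # object centred over the film, 80 mm up

world = verts @ R.T + t                 # where each point sits at dip time

px = (world[:, 0] * PIXELS_PER_MM).astype(int)
py = (world[:, 1] * PIXELS_PER_MM).astype(int)
res = FILM_SIZE_MM * PIXELS_PER_MM

film = np.full((res, res, 3), 255, dtype=np.uint8)   # blank (white) film
order = np.argsort(-world[:, 2])        # paint high points first...
for i in order:                         # ...so the lowest point at a pixel wins
    if 0 <= px[i] < res and 0 <= py[i] < res:
        film[py[i], px[i]] = colors[i]

Image.fromarray(film).save("film_pattern.png")       # pattern for the inkjet
```

The hard part the paper solves is replacing the straight-down projection with a physical simulation of the stretching film, so the colors still land in the right place on curved and concave surfaces.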

The tiger mask is our personal favorite, along with the leopard cat. They illustrate just how complex the surface patterns can get using single or multiple immersions, respectively. This system also accounts for objects of a variety of shapes and sizes, though the researchers admit there is a physical limit to how concave the parts of an object can be. Colors will fade or the film will split if stretched too thin. Texture mapping can now be physically realized in a simple yet effective way, with amazing results.
