3D Scanning Entire Rooms with a Kinect

Almost by definition, the coolest technology and bleeding-edge research are locked away in universities. While this is great for post-docs and their grant-writing abilities, it’s not the best system for people who actually want to use this technology. A few years ago, and many times since then, we’ve seen research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper release has been held up by licensing and a general IP-rights rigmarole. Now the source for this technology, Kintinuous and ElasticFusion, is available on GitHub, free for everyone to use (non-commercially).

We’ve seen Kintinuous a few times before – first in 2012, when the possibilities of mapping large areas with a Kinect were shown off, then in an improvement that mapped a 300-meter-long path through a building. With the introduction of the Oculus Rift, inhabiting these scanned virtual spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements. A very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.

You can check out a few videos of ElasticFusion and Kintinuous below.

Continue reading “3D Scanning Entire Rooms with a Kinect”

Laser Cut-and-Weld Makes 3D Objects

Everybody likes 3D printing, right? But it’s slow compared to 2D laser cutting. If only there were a way to combine multiple 2D slices into a 3D model. OK, we know that you’re already doing it by hand with glue and/or joints. But where’s the fun in that?

LaserStacker automates the whole procedure for you. They’ve tweaked their laser cutter settings to allow not just cutting but also welding of acrylic. This lets them build up 3D objects out of acrylic slices with no human intervention: a first pass cuts each slice at one depth, and a second pass selectively welds the slices back together at another. They’ve also built up some software, along with a library of functional elements, that makes designing these sorts of parts easier.

There’s hardly any detail on their website about how it works, so you’ll have to watch the video below the break and make some educated guesses. It looks like they raise the cutter head to make the welding passes, probably spreading the beam out a bit. Do they also run it at lower power, or slower? We demand details!
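If you want to play with the idea yourself, the job seems to boil down to running the same vector paths twice with different settings. Here’s a minimal sketch of what that might look like; the `Pass` structure and every number in it are our own guesses for illustration, not LaserStacker’s actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Pass:
    name: str
    z_offset_mm: float   # how far the head is raised above focus
    power_pct: float     # laser power as a percentage
    speed_mm_s: float    # travel speed

# Hypothetical job: a focused, full-power pass cuts each slice outline,
# then a defocused, gentler pass melts the slice edges back together.
job = [
    Pass("cut",  z_offset_mm=0.0,  power_pct=100.0, speed_mm_s=20.0),
    Pass("weld", z_offset_mm=10.0, power_pct=35.0,  speed_mm_s=5.0),
]

for p in job:
    print(f"{p.name}: raise head {p.z_offset_mm} mm, "
          f"{p.power_pct}% power at {p.speed_mm_s} mm/s")
```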

Anyway, check out the demo video at 3:30, where they put the slice-to-depth and heal modes through their paces. It’s pretty impressive.

Continue reading “Laser Cut-and-Weld Makes 3D Objects”

Converting Live 2D Video to 3D

Here’s some good news for all the fools who thought 3D TV was going to be the next big thing back in 2013. Researchers at MIT have developed a system that converts 2D video into 3D. The resulting 3D video can be played on an Oculus Rift, a Google Cardboard, or even that 3D TV sitting in the living room.

Right now the system only works on 2D broadcasts of football, but that’s merely a product of how the researchers solved the problem. They first approached it by looking at screencaps of the game FIFA 13. Using an analysis tool called PIX, the researchers both stored the display data and extracted the corresponding 3D map of the pitch, players, ball, and stadium. To turn a 2D football broadcast into 3D, the system then looks at every frame of the broadcast and searches for the 3D dataset that best corresponds to the action on the field. That depth information is added to the video feed, producing a 3D broadcast using only traditional 2D cameras.
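The matching step is essentially a nearest-neighbor lookup against a big library of (game frame, depth map) pairs. Here’s a rough Python sketch of that idea; the thumbnail descriptor and brute-force search are stand-ins we made up for illustration, and the researchers’ actual features and indexing are certainly more sophisticated.

```python
import numpy as np

def frame_descriptor(frame):
    """Downsample a video frame into a coarse color thumbnail used as a search key."""
    h, w, _ = frame.shape
    small = frame[::h // 16, ::w // 16].astype(np.float32)
    return small.ravel() / 255.0

def nearest_depth(frame, library):
    """Return the depth map of the stored game frame most similar to the
    broadcast frame (library = list of (descriptor, depth_map) pairs)."""
    d = frame_descriptor(frame)
    best = min(library, key=lambda item: np.linalg.norm(item[0] - d))
    return best[1]

# For every broadcast frame, borrow the depth map of the closest game frame,
# then hand (frame, depth) to a stereo renderer for the Rift, Cardboard, or 3D TV.
```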

Grab your red and blue filter shades and check out the product of their research below.

Continue reading “Converting Live 2D Video to 3D”

Teardown of Intel RealSense Gesture Camera Reveals Projector Details

[Chipworks] has just released the details of their latest teardown, this time of an Intel RealSense gesture camera built into a Lenovo laptop. Teardowns are always interesting (and we suspect that [Chipworks] can’t eat breakfast without tearing it down), but this one reveals some fascinating details on how you build a projector into a module that fits into a laptop bezel. While most structured-light projectors use a single, static pattern projected through a mask, this one uses a real projection mechanism to send different patterns that help the device detect gestures faster, all in a package thinner than a poker chip.

It does this with an impressive miniaturized projector made of three tiny components: an IR laser, a line lens, and a resonant micromirror. The line lens takes the point of light from the IR laser and turns it into a flat horizontal line. This is bounced off the resonant micromirror, which is manufactured as a single piece and twisted by a torsional electrostatic drive. The system is described in more detail in this PDF of a presentation by its makers, ST Micro. The combination of lens and rapidly moving mirror projects a pattern of light, and the reflection is detected by the IR camera on the other side of the module, which is used to create a 3D model for detecting gestures, faces, and other objects. It’s a neat insight into how you can miniaturize things by approaching them in a different way.
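To get a feel for how a single resonant mirror turns one laser line into a whole field of patterns, here’s a back-of-the-envelope sketch. The resonant frequency and scan angle below are assumed values for illustration, not figures from ST Micro’s presentation.

```python
import numpy as np

# Assumed values, for illustration only.
F_RES_HZ = 20_000          # resonant frequency of the micromirror
MAX_TILT_DEG = 12          # mechanical half-angle of the mirror swing
FRAME_RATE_HZ = 60         # depth frames per second

def mirror_angle(t):
    """Mirror tilt (degrees) at time t: the torsional resonator swings sinusoidally."""
    return MAX_TILT_DEG * np.sin(2 * np.pi * F_RES_HZ * t)

def projected_line_position(t, distance_m=1.0):
    """Vertical position (meters) of the laser line on a wall at the given distance.
    The optical deflection is twice the mechanical tilt of the mirror."""
    return distance_m * np.tan(np.radians(2 * mirror_angle(t)))

# In one 60 Hz depth frame the line sweeps the scene hundreds of times,
# so the projector can draw a different stripe pattern on every sweep.
sweeps_per_frame = 2 * F_RES_HZ / FRAME_RATE_HZ
print(f"~{sweeps_per_frame:.0f} line sweeps per depth frame")
```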

3D Miniature Chess Pieces Made With A Laser Cutter

When you think of laser cutters, you generally don’t think of 3D parts. Well, at least not without using something like glue, nuts and bolts, or tabs and slots to hold multiple pieces together. [Steve Kranz] shows you how to make these very tiny 3D chess pieces by making two passes at right angles through thick acrylic. The first pass cuts one side’s profile, then the part is rotated 90 degrees and the second profile is cut, giving the part a much more “real” 3D look than something cut from a flat sheet. If you’re having a hard time imagining how it works, his pictures do a great job of explaining the process. He even added some engraving to give the chess pieces a selectively frosted look. We think it’s a cool idea, and well executed too!
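If you want to convince yourself why the two orthogonal cuts look so convincingly 3D, note that the finished part is just the intersection of two extruded silhouettes. Here’s a quick voxel sketch of that idea; the crude pawn profile is a made-up placeholder, not [Steve]’s actual artwork.

```python
import numpy as np

N = 64  # voxel resolution of the acrylic block

def pawn_silhouette(x, z):
    """Placeholder 2D profile: a crude pawn made of a base, a stem, and a head."""
    base = (z < 0.2) & (np.abs(x) < 0.35)
    stem = (z < 0.7) & (np.abs(x) < 0.12)
    head = (x**2 + (z - 0.8)**2) < 0.15**2
    return base | stem | head

# Coordinates spanning the block: x, y in [-0.5, 0.5], z (height) in [0, 1].
x, y, z = np.meshgrid(np.linspace(-0.5, 0.5, N),
                      np.linspace(-0.5, 0.5, N),
                      np.linspace(0.0, 1.0, N), indexing="ij")

# The first pass removes everything outside the profile seen from the front;
# the second pass (block rotated 90 degrees) does the same from the side.
piece = pawn_silhouette(x, z) & pawn_silhouette(y, z)
print(f"{piece.sum()} of {N**3} voxels remain after both passes")
```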

But that got us thinking (always dangerous): we’ve seen rotary attachments for laser cutters, but they are mainly for etching cylindrical objects like champagne flutes and beer bottles. What if you added a rotating third axis to a laser cutter that could hold a block of material and rotate it while being cut, much like a traditional 4th axis on a CNC machine? Would the material also need to be raised and lowered to keep the laser focused? Surely software aimed at 3D CNC would be needed, something like Mach3 perhaps. A quick Google search shows that there are some industrial machines that more or less do 3D laser cutting, but if you, or someone you know of, has attached a third axis to a desktop laser, let us know in the comments; we would love to see it.

(via Adafruit)

Portabilizing The Kinect

Way back when the Kinect was first released, there was a realization that this device would be the future of everything 3D. It was augmented reality, it was a new computer interface, it was a cool sensor for robotics applications, and it was a 3D scanner. When the first open source driver for the Kinect was released, we were assured that this was how we would get 3D data from real objects into a computer.

Since then, not much has happened. We’re not using the Kinect for a UI, potato gamers were horrified they would be forced to buy the Kinect 2 with the new Xbox, and you’d be hard pressed to find a Kinect in a robot. 3D scanning is the only field where the Kinect hasn’t been overhyped, and even there it’s still a relatively complex setup.

This doesn’t mean a Kinect 3D scanner isn’t an object of desire for some people, or that it’s impossible to build a portabilized version. [Mario]’s girlfriend works as an archaeologist, and having a tool to scan objects and places in 3D would be great for her. Because of this, [Mario] is building a handheld 3D scanner with a Raspberry Pi 2 and a Kinect.

This isn’t the first time we’ve seen a portabilized Kinect. Way back in 2012, the Kinect was made handheld with the help of a Gumstix board. Since then, a million tiny ARM single board computers have popped up, and battery packs are readily available. It was only a matter of time until someone stepped up to the plate, and [Mario] was the guy.

The problem facing [Mario] isn’t hardware. Anyone can pick up a Kinect at GameStop, the Raspberry Pi 2 should be more than capable of reading the Kinect’s depth sensor, and the whole thing can be tied together with a few 3D printed parts. The real problem is software, and so far [Mario] has libfreenect compiling without a problem on the Pi 2. The project still requires a lot of additional libraries, including some OpenCV stuff, but so far [Mario] has everything working.
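To give a flavor of the software side once libfreenect is built, here’s a minimal depth-grab using the Python bindings that ship with libfreenect. This is a generic example of reading the sensor, not [Mario]’s actual scanning code.

```python
import numpy as np
import freenect  # Python bindings shipped with libfreenect

def grab_depth_frame():
    """Return one 640x480 raw depth frame from the Kinect, or None on failure."""
    result = freenect.sync_get_depth()
    if result is None:
        return None
    depth, _timestamp = result
    return depth

frame = grab_depth_frame()
if frame is not None:
    # Raw Kinect depth is an 11-bit value; 2047 marks "no reading".
    valid = frame[frame < 2047]
    print(f"captured {frame.shape} depth frame, "
          f"median raw depth {np.median(valid):.0f}")
else:
    print("no Kinect found")
```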

You can check out his video of the proof of concept below.

Continue reading “Portabilizing The Kinect”

Retrotechtacular: The Early Days of CGI

We all know what Computer-Generated Imagery (CGI) is nowadays. It’s almost impossible to get away from it in any television show or movie. It’s gotten so good that sometimes it can be difficult to tell the difference between the real world and the computer-generated world when they are mixed together on-screen. Of course, it wasn’t always like this. This 1982 clip from BBC’s Tomorrow’s World shows what the wonders of CGI were capable of in a simpler time.

In the earliest days of CGI, digital computers weren’t even really a thing. [John Whitney] was an American animator and is widely considered to be the father of computer animation. In the 1940s, he and his brother [James] started to experiment with what they called “abstract animation”. They pieced together old analog computers and servos to make their own devices capable of controlling the motion of lights and lit objects. While this process may be a far cry from the CGI of today, it is still animation performed by a computer. One of [Whitney’s] best-known works is the opening title sequence of [Alfred Hitchcock’s] 1958 film, Vertigo.

Later, in 1973, Westworld became the first feature film to use CGI. The film was a science fiction western-thriller about amusement park robots that become evil. The studio wanted footage of the robot’s “computer vision”, but they would need an expert to get the job done right. They ultimately hired [John Whitney’s] son, [John Whitney Jr.], to lead the project. The process first required color-separating each frame of the 70mm film because [John Jr.] did not have a color scanner. He then used a computer to digitally modify each image to create what we would now recognize as a “pixelated” effect. The computer processing took approximately eight hours for every ten seconds of footage.

Continue reading “Retrotechtacular: The Early Days of CGI”
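The “robot vision” mosaic itself is easy to reproduce today: average each block of pixels and paint the whole block with that average. Here’s a quick numpy sketch of that block-averaging idea running on a synthetic frame; it takes milliseconds rather than the hours the 1973 process needed.

```python
import numpy as np

def pixelate(frame, block=16):
    """Replace each block x block tile of an RGB frame with its average color,
    reproducing the Westworld-style 'robot vision' mosaic."""
    h, w, c = frame.shape
    h2, w2 = h - h % block, w - w % block           # crop to a whole number of tiles
    tiles = frame[:h2, :w2].reshape(h2 // block, block, w2 // block, block, c)
    means = tiles.mean(axis=(1, 3), keepdims=True)  # average color per tile
    return np.broadcast_to(means, tiles.shape).reshape(h2, w2, c).astype(frame.dtype)

# Example: pixelate a synthetic 480x640 gradient "frame".
frame = np.dstack([np.tile(np.linspace(0, 255, 640), (480, 1))] * 3).astype(np.uint8)
mosaic = pixelate(frame, block=16)
print(mosaic.shape)
```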