Building A Passive 3D Projection System In Your Attic

While the whole 3D movie/game craze seems to be ramping up, it really isn’t a new thing. We all recall those fancy red-blue glasses that were popular in theaters for a while, but I’m not talking about that. Passive 3D projection (using polarized glasses) has been around for a while too. Many people have figured out cheap ways to build these systems in their homes, but only recently have we seen media created for them in quantity. Now that you can buy 3D games and movies at your local box store, the temptation to have a 3D system in your home is stronger than ever.

Here’s a great read on how to put together a fairly simple projection system that uses two identical projectors with polarizing filters. Basically, all you need are two projectors, two filters, a screen, and the glasses. There are plenty of tips for mounting and setup in the thread to help alleviate any headaches you might encounter.

This system is primarily used with a PC, because it requires two video feeds to function. A cost breakdown might make you wonder why you wouldn’t just jump on Amazon and grab a 32″ 3D TV for under $400, but sitting in front of a projection screen that size should make the appeal obvious.

Can A Robot Be A Safe And Cost-effective Alternative To Guide Dogs?

[Tom Ladyman] is making the case that a robot can take the place of a guide dog. According to his presentation, guide dogs cost about £45,000 (around $70k) to train and their working life is only about six years. On the other hand, he believes this robot can be put into service for about £1,000 (around $1500). The robots are aimed at blind and visually impaired people. This makes sense, because the robot can’t match a dog’s ability to assist in other ways (locating and returning items to its companion, etc.); the main need it addresses is independent travel.

He starts with the base of an electric wheelchair — a time-tested and economy-of-scale platform. The robot navigates based on images from four downward-facing cameras mounted on the pole seen above. The X at the top of the pole allows for a much wider range of sight. The robot identifies its companion via a tag on their shoe, but it’s got another trick up its sleeve. The camera feeds go to a set of four BeagleBoards, which work together to process the images into a 3D map at about 12 FPS, allowing for obstacle avoidance.
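
The post doesn’t go into the navigation code itself, but the kind of decision a live 3D map enables is easy to sketch. Below is a minimal, made-up example (not [Tom]’s actual algorithm) that scans a band of a depth grid for the nearest obstruction and picks a heading; the grid layout and thresholds are assumptions for illustration only.

#include <algorithm>
#include <cstdio>
#include <vector>

enum class Steer { Forward, Left, Right, Stop };

// Pick a heading from a row-major grid of depths (in metres); pixels with
// no depth data are stored as 0 and skipped.
Steer chooseHeading(const std::vector<float>& depth, int width, int height,
                    float stopDist = 0.5f, float slowDist = 1.5f) {
    // Only examine a horizontal band roughly at obstacle height.
    int top = height / 3, bottom = 2 * height / 3;
    float nearestLeft = 1e9f, nearestRight = 1e9f;

    for (int y = top; y < bottom; ++y) {
        for (int x = 0; x < width; ++x) {
            float d = depth[y * width + x];
            if (d <= 0.0f) continue;                        // no data here
            if (x < width / 2) nearestLeft  = std::min(nearestLeft, d);
            else               nearestRight = std::min(nearestRight, d);
        }
    }

    float nearest = std::min(nearestLeft, nearestRight);
    if (nearest < stopDist) return Steer::Stop;             // too close: halt
    if (nearest > slowDist) return Steer::Forward;          // path is clear
    // Something is in the slow-down zone: turn toward the clearer side.
    return (nearestLeft > nearestRight) ? Steer::Left : Steer::Right;
}

int main() {
    // Toy 4x4 "map" with an obstacle looming on the right-hand side.
    std::vector<float> depth = { 3, 3, 3, 3,
                                 3, 3, 1, 1,
                                 3, 3, 1, 1,
                                 3, 3, 3, 3 };
    std::printf("decision = %d\n", static_cast<int>(chooseHeading(depth, 4, 4)));
}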

Check out the video after the break for a bit more information. The 3D guidance system is also explained in detail at the link above.

Continue reading “Can A Robot Be A Safe And Cost-effective Alternative To Guide Dogs?”

Drag And Drop Images For 3D Printing

This piece of software called OmNomNom works with OpenSCAD to turn 2D images into 3D models. It’s literally a drag-and-drop process that renders almost instantly.

Here the example is a QR code, which is perfect for the software since it’s a well-defined black and white outline in the source image. But the video after the break shows several other examples that don’t rely on this simplicity. For instance, the Superman logo, which uses four different colors, is converted quite easily. There’s also a depth map of [Beethoven’s] bust that is converted into a 3D object. The same technique can be used to create terrain from topographic source images.

Once the file has been converted to a model it can still be tweaked like any other. This allows you to customize size and depth to suit your needs. This is where OpenSCAD comes into play, but if you don’t use that program you can still export an STL file directly from OmNomNom for use on your 3D printer.
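
To give a feel for what a conversion like this involves, here’s a rough sketch of the underlying heightmap trick: treat pixel brightness as height and hand the grid to OpenSCAD’s surface() for extrusion. This is not OmNomNom’s code; the file names, the 5 mm height scale, and the assumption of a comment-free ASCII PGM are all choices made just for the example.

#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: img2scad input.pgm\n"; return 1; }

    std::ifstream in(argv[1]);
    std::string magic;
    int width = 0, height = 0, maxval = 255;
    in >> magic >> width >> height >> maxval;
    if (magic != "P2") { std::cerr << "expected an ASCII PGM (P2)\n"; return 1; }

    // Write each pixel as a height value; dark pixels become tall features,
    // which suits black-on-white sources like QR codes and logos.
    std::ofstream dat("heightmap.dat");
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int pixel = 0;
            in >> pixel;
            double h = 5.0 * (maxval - pixel) / maxval;     // 0..5 mm tall
            dat << h << (x + 1 < width ? ' ' : '\n');
        }
    }

    // A tiny companion .scad file so the result opens directly in OpenSCAD.
    std::ofstream scad("heightmap.scad");
    scad << "surface(file = \"heightmap.dat\", center = true, convexity = 10);\n";
    std::cout << "wrote heightmap.dat and heightmap.scad\n";
    return 0;
}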

Continue reading “Drag And Drop Images For 3D Printing”

Multitouch Table Uses A Kinect For A 3D Display

[Bastian] sent in a coffee table he built. This isn’t a place to set your drinks and copies of Make, though: it’s a multitouch table with a 3D display. Since no description can do this table justice, take a look at the video.

The build was inspired by the subject of this Hackaday post where [programming4fun] was able to build a ‘holographic display’ using a regular 2D projector and a Kinect. Both builds work on the principle of redrawing the 3D space in relation to the user’s head – as [Bastian] moves his head around the coffee table, the Kinect tracks his location and moves the three-dimensional grid of boxes in the opposite direction. It’s extremely clever, and looks to be a promising user interface.
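
For the curious, the core of the head-coupled effect boils down to a few lines. The sketch below is a simplified stand-in, not [Bastian]’s code: it just shifts the scene opposite to the tracked head position, with the gain and coordinate conventions picked arbitrarily.

#include <cstdio>

struct Vec3 { float x, y, z; };

// Given the head position relative to the screen centre (metres), return
// the translation to apply to the 3D grid of boxes before rendering.
Vec3 sceneShiftForHead(const Vec3& head, float gain = 1.0f) {
    // Move the scene the opposite way the head moved; depth is untouched.
    return Vec3{ -gain * head.x, -gain * head.y, 0.0f };
}

int main() {
    // Pretend the Kinect reported the viewer 20 cm to the right of centre.
    Vec3 head{ 0.20f, 0.0f, 0.8f };
    Vec3 shift = sceneShiftForHead(head);
    std::printf("shift scene by (%.2f, %.2f, %.2f) m\n", shift.x, shift.y, shift.z);
}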

In addition to a Kinect, the coffee table uses a Microsoft Surface-like display; four infrared lasers are placed at the corners and detected with a camera next to the projector in the base.
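
Turning those laser reflections into touch points is classic blob tracking. As a rough illustration (not the table’s actual software), an OpenCV loop like the one below could threshold the camera image and report the centroid of each bright blob; the camera index, threshold, and minimum blob size are placeholders.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::VideoCapture cam(0);                    // camera watching the surface
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray, mask;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, mask, 200, 255, cv::THRESH_BINARY);  // keep bright spots

        std::vector<std::vector<cv::Point>> blobs;
        cv::findContours(mask, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& blob : blobs) {
            if (cv::contourArea(blob) < 30) continue;        // ignore speckle
            cv::Moments m = cv::moments(blob);
            double x = m.m10 / m.m00, y = m.m01 / m.m00;     // blob centroid
            std::printf("touch at (%.0f, %.0f)\n", x, y);    // hand off to the UI
        }
    }
    return 0;
}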

After the break you can see the demo video and a gallery of the images [Bastian] put up on the NUI Group forum.

Continue reading “Multitouch Table Uses A Kinect For A 3D Display”

Scanning Turntable Digitizes Objects As 3D Models

This turntable can automatically digitize objects for use in 3D rendering software like Blender3D. [James Dalby] built it using a high-quality DSLR and some bits and pieces out of his junk box. The turntable itself is a Lazy Susan turned on its head: the base that would normally sit on the table becomes the platform for the model, while the larger portion acts as a mounting surface for the drive mechanism.

He used the stepper motor from a scanner, as well as the belt and tension hardware from a printer to motorize the platform. This is driven by a transistor array (a ULN2003 chip) connected to an Arduino. The microcontroller also controls the shutter of the camera. We’ve included his code after the break; you’ll find his demo video embedded there as well.
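
His firmware isn’t reproduced here, but the job it does is straightforward: step, shoot, repeat. The following Arduino-style sketch is only an illustration of that loop; the pin assignments, step counts, and delays are invented for the example, so grab [James]’s linked code for the real thing.

// Advance the stepper a fixed angle through the ULN2003, fire the shutter, repeat.
const int coilPins[4] = {8, 9, 10, 11};   // ULN2003 inputs IN1..IN4
const int shutterPin  = 7;                // drives the camera remote/optocoupler
const int photosPerRev = 40;              // one shot every 9 degrees
const int stepsPerRev  = 200;             // a typical scanner stepper

// Full-step drive sequence, two coils energized at a time.
const byte sequence[4] = {0b1100, 0b0110, 0b0011, 0b1001};

void stepMotor(int steps) {
  static int phase = 0;
  for (int i = 0; i < steps; i++) {
    phase = (phase + 1) % 4;
    for (int c = 0; c < 4; c++)
      digitalWrite(coilPins[c], bitRead(sequence[phase], c));
    delay(10);                            // slow enough not to skip steps
  }
}

void takePhoto() {
  digitalWrite(shutterPin, HIGH);         // "press" the shutter
  delay(200);
  digitalWrite(shutterPin, LOW);
  delay(2000);                            // give the DSLR time to write the file
}

void setup() {
  for (int c = 0; c < 4; c++) pinMode(coilPins[c], OUTPUT);
  pinMode(shutterPin, OUTPUT);
}

void loop() {
  for (int shot = 0; shot < photosPerRev; shot++) {
    takePhoto();
    stepMotor(stepsPerRev / photosPerRev);
  }
  while (true) ;                          // one full revolution, then stop
}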

The concept is the same as other turntable builds we’ve seen, but [James] takes the post-processing one step further. Rather than just making a rotating GIF, he uses Autodesk 123D to create a digital model from the set of images.

Continue reading “Scanning Turntable Digitizes Objects As 3D Models”

3D Whiteboard Without The Whiteboard

This one is so simple, and works so well, we’d call it a hoax if April 1st hadn’t already passed us by. But we’re confident that what [William Myers] and [Guo Jie Chin] came up with exists, and we want one of our own. The project is a method of drawing in 3 dimensions using ultrasonic sensors.

They call it 3D Paint, and that’s fitting since the software interface is much like the original MS Paint. It can show you the movements of the stylus in three axes, but it can also assemble an anaglyph — the kind of 3D that uses those red and blue filter glasses — so that the artists can see the 3D rendering as it is being drawn.

The hardware depends on a trio of sensors and a stylus that are all controlled by an ATmega644. That’s it for hardware (to be fair, there are a few trivial amplifier circuits too), making this an incredibly affordable setup. The real work, and the reason the input is so smooth and accurate, comes from the MATLAB code that does the trilateration. If you like to get elbow-deep in the math, the article linked above has plenty to interest you. If you’re more of a visual learner, just skip down after the break for the demo video.
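
For readers who just want the gist of the trilateration, here it is in closed form (written in C++ here rather than the project’s MATLAB): with three sensors in one plane at known positions, the three measured distances pin down the stylus location. The sensor coordinates and test values below are made up for the example.

#include <cmath>
#include <cstdio>

struct Point { double x, y, z; };

// Sensors at (0,0,0), (d,0,0) and (i,j,0); r1..r3 are distances to the stylus.
Point trilaterate(double d, double i, double j,
                  double r1, double r2, double r3) {
    double x = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d);
    double y = (r1 * r1 - r3 * r3 + i * i + j * j) / (2.0 * j) - (i / j) * x;
    double z2 = r1 * r1 - x * x - y * y;
    double z = z2 > 0.0 ? std::sqrt(z2) : 0.0;   // clamp small negative noise
    return {x, y, z};                            // take z >= 0 (above the desk)
}

int main() {
    // Sensors 0.5 m apart along x, with the third 0.5 m up the y axis.
    // The distances below correspond to a stylus at (0.2, 0.1, 0.3).
    Point p = trilaterate(0.5, 0.0, 0.5,
                          std::sqrt(0.2 * 0.2 + 0.1 * 0.1 + 0.3 * 0.3),
                          std::sqrt(0.3 * 0.3 + 0.1 * 0.1 + 0.3 * 0.3),
                          std::sqrt(0.2 * 0.2 + 0.4 * 0.4 + 0.3 * 0.3));
    std::printf("stylus at (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
}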

Continue reading “3D Whiteboard Without The Whiteboard”

View Gerber Files In 3D In Your Browser

[Mark] wrote in, eager to show off the new tool he’s created to view your Gerber files in 3D. He also wrote an Instructable to go along with it, to help you figure out how to use the tool. Being an in-browser tool also means you can shoot it over to your friends for a quick 3D review. Some of you may not feel that the 3D view is all that helpful to the process, but we think this is a welcome feature that just might get some use around here.

[Mark] points out that it is still being actively developed, so if you run into any bugs, please report them to him via the form on the website.