Anyone can grab a projector, plug it in, and fire a movie at the wall. If, however, you want to add some depth to your work, both metaphorical and physical, you’d better start projection mapping. Intricate surfaces like these slabs of styrofoam are excellent candidates for a stunning display, but not without introducing additional complexity to your setup. [Grady] hopes to alleviate some tedium with the TightLight (Warning: “music”).
The video shows the entire mapping process, in which the Arduino plays its specific role toward the end. Before tackling any projector calibration, [Grady] needs an accurate 3D model of the projection surface, and boy does it look complicated. Good thing he has a NextEngine 3D laser scanner, which you’ll see lighting the surface red as it cruises along.
Enter the TightLight: essentially 20 CdS photocells hooked up to a Duemilanove, each of which is placed at a previously-marked point on the 3D surface. A quick calibration scan scrolls light from the projector across the X then Y axis, hitting each sensor to determine its exact position. [Grady] then merges the photocell location data with the earlier 3D model using the TouchDesigner platform, and bam: everything lines up and plays nice.
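The clever part is that the sweep turns time into position: the frame on which each photocell lights up tells you where that cell sits in the projector’s image. Here’s a minimal host-side sketch of that decode, assuming the Arduino streams raw sensor readings over serial; the protocol, threshold, and frame counts are our own invention, not [Grady]’s code.

```python
# Minimal host-side decode of a calibration sweep, assuming the Arduino
# streams one line per projector frame: "frame,s0,s1,...,s19" with raw
# photocell readings. The protocol and constants are invented for this sketch.
import serial

NUM_SENSORS = 20
SWEEP_FRAMES = 1024    # frames in one full sweep across the image (assumed)
THRESHOLD = 600        # ADC reading that counts as "lit" (assumed)

def sweep(port, n_frames):
    """Record the frame index at which each sensor first sees the light bar."""
    hits = [None] * NUM_SENSORS
    with serial.Serial(port, 115200, timeout=2) as link:
        for _ in range(n_frames):
            fields = link.readline().decode(errors='ignore').strip().split(',')
            if len(fields) != NUM_SENSORS + 1:
                continue                      # skip partial/garbled lines
            frame = int(fields[0])
            for i, raw in enumerate(map(int, fields[1:])):
                if hits[i] is None and raw > THRESHOLD:
                    hits[i] = frame
    return hits

# One sweep along X, then one along Y; frame index maps linearly to pixels.
xs = sweep('/dev/ttyUSB0', SWEEP_FRAMES)
ys = sweep('/dev/ttyUSB0', SWEEP_FRAMES)
# Normalized projector-space coordinates (assumes every sensor saw the bar).
coords = [(x / SWEEP_FRAMES, y / SWEEP_FRAMES) for x, y in zip(xs, ys)]
```

A binary (Gray code) sweep would cut the scan from a frame per column down to a logarithmic number of patterns, which is how structured-light scanners usually do it, but a linear sweep is hard to beat for simplicity.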
This is a pretty intricate camera mount. Not only does it provide pan and tilt as the subtitles state, but it moves along a track and offers zoom and focus controls. It’s great, but you’ll need an equally complex set of controls to do anything meaningful with it. That’s where the real hack comes into play. The entire system is controlled by its virtual model in Blender 3D.
You probably already know that Blender 3D is an open-source 3D modeling suite. It’s got a mountain of features, including a framework for animating virtual objects. The camera rig was replicated inside the software, complete with a skeleton that moves just like the real thing. You can make an animation of how the camera should move, then export and play back those motions on the physical hardware.
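If you want to try the same trick, the export side is only a few lines of Blender’s Python API. Here’s a minimal sketch that steps through an animation and dumps pan, tilt, and track values per frame; the object and bone names, and the CSV format, are assumptions rather than the builder’s actual setup.

```python
# Minimal sketch: step through a Blender animation and dump per-frame pose
# data for hardware playback. Object/bone names and the CSV format are
# assumptions, not the builder's actual rig. Run inside Blender's Python.
import csv
from math import degrees

import bpy

scene = bpy.context.scene
rig = bpy.data.objects['CameraRig']        # assumed armature object name

with open('/tmp/camera_moves.csv', 'w', newline='') as f:
    out = csv.writer(f)
    out.writerow(['frame', 'pan_deg', 'tilt_deg', 'track_pos'])
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)             # advance the animation one frame
        # Bones assumed to use Euler rotation mode.
        pan = rig.pose.bones['pan'].rotation_euler.z
        tilt = rig.pose.bones['tilt'].rotation_euler.x
        track = rig.location.x             # dolly position along the track
        out.writerow([frame, degrees(pan), degrees(tilt), round(track, 4)])
```

A microcontroller can then replay the file, stepping each motor toward that frame’s targets at the animation’s frame rate.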
Now if you need help making 3D models of your hardware perhaps you should try scanning them.
Modeling simple objects in 3D can take some time. Modeling complex items… well, you can get a college degree in that sort of thing. This method side-steps the artistic skill necessary to make the real virtual by using a laser and camera to map a three-dimensional object.
[Alessandro Grossi] is breaking the rules by using a 100mW laser for the project. He thinks that the Italian government prohibits anything over 5mW, but also mentions that the lens used to turn the laser dot into a vertical line drops the power dramatically. The beefy diode does still pay off, providing an incredibly intense line of light on the subject being mapped. The high-end DSLR camera mounted on the same arm as the laser captures a detailed image, which can be processed to dump everything other than the laser line itself. Because the camera and laser are mounted on different axes, the line’s sideways displacement in each image encodes depth. That translates to the 3D coordinates used in the captured model shown in the inlaid image.
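The math behind this kind of scanner is classic laser triangulation: find the line in each photo, and where it lands horizontally tells you the depth for every row. Here’s a rough Python sketch of the idea with completely made-up geometry (baseline, laser angle, focal length); it’s an illustration, not [Alessandro]’s actual processing pipeline.

```python
# Rough sketch of laser-line extraction and triangulation. Geometry: camera
# at the origin looking down +z, laser at x = BASELINE with its light sheet
# tilted inward so its points obey x = BASELINE - z*tan(LASER_ANGLE).
# All three constants are made up for the example.
import cv2
import numpy as np

BASELINE = 0.20                   # metres between camera and laser (assumed)
LASER_ANGLE = np.radians(30.0)    # sheet tilt toward the camera axis (assumed)
FOCAL_PX = 3000.0                 # DSLR focal length in pixels (assumed)

def laser_column(image_bgr):
    """For each image row, return the column where the red line peaks."""
    red = image_bgr[:, :, 2].astype(np.float32) - image_bgr[:, :, 1]
    cols = np.argmax(red, axis=1)                    # brightest-red pixel per row
    good = red[np.arange(red.shape[0]), cols] > 40   # rows that actually saw the line
    return cols, good

def depth_per_row(cols, width):
    """Pinhole model: x' = x/z plus the laser-sheet equation gives z directly."""
    x_norm = (cols - width / 2.0) / FOCAL_PX
    return BASELINE / (np.tan(LASER_ANGLE) + x_norm)   # depth in metres

img = cv2.imread('scan_0001.jpg')
cols, good = laser_column(img)
z = depth_per_row(cols.astype(np.float32), img.shape[1])
print(z[good][:10])    # depths for the first few valid rows
```

Repeat for every stepper increment, tag each profile with its position along the travel, and the stack of row-depths becomes the point cloud.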
We’ve seen 3D scanners that move the subject; they usually rotate it to map every side. This method only captures one side, but the stepper motor moves in such small increments that the final resolution is astounding. See for yourself in the video after the break.
Touch screens are nice; we still can’t live without a keyboard, but they suffice when on the go. It is becoming obvious, though, that the end goal of user interface design is to remove the need to touch a piece of hardware in order to interact with it. One avenue toward that goal is voice commands via software like Siri; another is 3D sensing hardware like the Kinect or Leap Motion. This project uses the latter to control the image shown on a 3D display.
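For the curious, getting hand data out of the Leap is refreshingly simple, since the SDK of that era shipped Python bindings. The toy sketch below polls the palm position and maps it to rotation angles for whatever’s on screen; the mapping and constants are our own, not this project’s code.

```python
# Toy example with the legacy Leap Motion SDK's Python bindings: poll the
# palm position and map it to rotation angles for an on-screen model. The
# mapping and constants are our own invention, not this project's code.
import time

import Leap    # ships with the old Leap Motion SDK

controller = Leap.Controller()

while True:
    frame = controller.frame()
    if not frame.hands.is_empty:
        palm = frame.hands[0].palm_position    # millimetres above the device
        yaw = (palm.x / 100.0) * 45.0          # ~10 cm of travel -> +/-45 deg
        pitch = ((palm.y - 200.0) / 100.0) * 45.0
        print('rotate model: yaw=%+.1f pitch=%+.1f' % (yaw, pitch))
    time.sleep(1.0 / 30)                       # poll at roughly 30 Hz
```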
[Dino’s] hack this week seeks to create sunglasses that dim based on the intensity of ambient light. The idea is that this should give you the best light level even as the brightness changes, like when the sun ducks behind a cloud or you walk from inside to outside. He started with a pair of 3D shutter glasses. Each lens is a liquid crystal pane. The glasses monitor an IR signal coming from a 3D TV, then alternately black out the lenses so that each eye sees a different frame of video, creating the stereoscopic effect. In the video after the break he tears down the hardware and builds it back up with his own ambient light sensor circuit.
It only takes 6V to immediately darken one of the LCD panes. The interesting thing is that, left alone, a pane takes a few seconds to become clear again; it turns out you need to bleed off the voltage in the pane through a resistor to get a fast response in both directions. Above you can see the light-dependent resistor in the bridge of the frame that is used to trigger the panes. [Dino] shows at the end of his video that they work. But the main protective feature of sunglasses is that they filter out UV rays, and he’s not sure whether these do that at all.
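The pane behaves like a small capacitor: it charges up (and darkens) almost instantly, but without a discharge path the stored charge leaks away slowly. A parallel bleed resistor sets the time constant. Here’s a back-of-the-envelope sketch; the 6V figure comes from the video, but the pane capacitance and the “looks clear” threshold are pure assumptions.

```python
# Back-of-the-envelope numbers: the LC pane acts like a small capacitor, so a
# parallel bleed resistor sets how quickly it clears. The 6V drive figure is
# from the video; the capacitance and "looks clear" voltage are assumptions.
import math

C_PANE = 100e-9     # assumed pane capacitance, 100 nF
V_DRIVE = 6.0       # voltage that fully darkens a pane
V_CLEAR = 0.5       # assumed voltage below which the pane looks clear

for r_bleed in (10e3, 100e3, 1e6):
    tau = r_bleed * C_PANE                        # RC time constant
    t_clear = tau * math.log(V_DRIVE / V_CLEAR)   # V(t) = V0 * exp(-t/RC)
    print('R = %7.0f ohm -> clears in ~%.4f s' % (r_bleed, t_clear))
```

Too small a resistor loads down whatever is driving the pane, so there’s a trade-off between switching speed and drive current.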
While the whole 3D movie/game craze seems to be ramping up, it really isn’t a new thing. We all recall those fancy red-blue glasses that were popular in theaters for a while, but I’m not talking about that. Passive 3D projection (using polarized glasses) has been around for a while too. Many people have figured out cheap ways to build these systems in their homes, but only recently have we seen media created for them in quantity. Now that you can buy 3D games and movies at your local box store, the temptation to have a 3D system in your home is much higher.
Here’s a great read on how to put together a fairly simple projection system that uses two identical projectors with polarizing filters. Basically, all you need are two projectors, two filters, a screen that preserves polarization (a so-called silver screen), and the glasses. There are plenty of tips for mounting and setup in the thread to help alleviate any headaches you might encounter.
This system is primarily used with a PC, because it requires two video feeds to function. A cost breakdown might make you wonder why you wouldn’t just jump on Amazon and get a 32″ 3D TV for under $400, but sitting in front of that giant projected image might make you understand.
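Generating those two feeds is mostly a software problem. As a minimal sketch of one approach, the Python/OpenCV snippet below splits a side-by-side stereo video into left and right halves and sends each half fullscreen to a different display; the file name and the second projector’s desktop offset are placeholders for your own setup.

```python
# Hedged sketch of one way to generate the two feeds: split a side-by-side
# stereo video into halves and push each half fullscreen to one projector.
# The file name and the second display's desktop offset are placeholders.
import cv2

RIGHT_DISPLAY_X = 1920    # x-offset of the second projector's desktop

cap = cv2.VideoCapture('movie_sbs.mp4')
for name, x in (('left', 0), ('right', RIGHT_DISPLAY_X)):
    cv2.namedWindow(name, cv2.WINDOW_NORMAL)
    cv2.moveWindow(name, x, 0)    # park the window on that projector
    cv2.setWindowProperty(name, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    half = frame.shape[1] // 2
    cv2.imshow('left', frame[:, :half])    # left-eye half to projector 1
    cv2.imshow('right', frame[:, half:])   # right-eye half to projector 2
    if cv2.waitKey(16) & 0xFF == 27:       # Esc quits; ~60 fps ceiling
        break
```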
[Tom Ladyman] is making the case that a robot can take the place of a guide dog. According to his presentation, guide dogs cost about £45,000 (around $70k) to train and their working life is only about six years. On the other hand, he believes this robot can be put into service for about £1,000 (around $1500). The target group for the robots is blind and visually impaired people. This makes sense, because the robot lacks a dog’s ability to assist in other ways (locating and returning items to its companion, etc.). The main need here is independent travel.
He starts with the base of an electric wheelchair, a time-tested platform that benefits from economies of scale. The robot navigates based on images from four downward-facing cameras mounted on the pole seen above; the X at the top of the pole allows for a much wider field of view. The robot identifies its companion via a tag on their shoe, but it’s got another trick up its sleeve: the cameras feed a set of four BeagleBoards, which work together to process the images into a 3D map at about 12 FPS, allowing for obstacle avoidance.
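The write-up doesn’t include source, but as an illustration of the shoe-tag idea, here’s a stand-in sketch that tracks a brightly colored tag with OpenCV and turns its offset into a steering command. The color range and gain are invented, and the real robot fuses four camera views on its BeagleBoards rather than this single-camera toy.

```python
# Stand-in illustration of the shoe-tag idea: track a brightly coloured tag
# with OpenCV and turn its horizontal offset into a steering command. The
# colour range and gain are invented; the real robot fuses four camera views.
import cv2
import numpy as np

LOWER = np.array([40, 80, 80])      # assumed HSV range for a green tag
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m['m00'] > 1e5:                        # enough lit pixels to trust?
        cx = m['m10'] / m['m00']              # tag centroid, x pixels
        error = cx - frame.shape[1] / 2.0     # offset from image centre
        steer = -0.002 * error                # proportional steering (assumed gain)
        print('steer command: %+.3f' % steer)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc quits
        break
```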
Check out the video after the break for a bit more information. The 3D guidance system is also explained in detail at the link above.