Can a robot be a safe and cost-effective alternative to guide dogs?

[Tom Ladyman] is making the case that a robot can take the place of a guide dog. According to his presentation, guide dogs cost about £45,000 (around $70k) to train, and their working life is only about six years. On the other hand, he believes this robot can be put into service for about £1,000 (around $1500). The target group for the robots is blind and visually impaired people. This makes sense, because the robot lacks a dog's ability to assist in other ways (locating and returning items to its companion, etc.); the main need it addresses is independent travel.

He starts with the base of an electric wheelchair, a time-tested platform that benefits from economies of scale. The robot navigates based on images from four downward-facing cameras mounted on the pole seen above; the X on top of the pole allows for a much wider field of view. The robot identifies its companion via a tag on their shoe, but it's got another trick up its sleeve: the cameras feed a set of four BeagleBoards which work together to process the images into a 3D map at about 12 FPS, allowing for obstacle avoidance.
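For the curious, the obstacle-avoidance side of that 3D map boils down to checking whether anything sits too close to the robot's path. Here's a minimal sketch of the idea in C++, assuming the fused camera data arrives as a simple grid of distances; the grid layout, threshold, and function name are our illustration, not [Tom]'s actual pipeline:

```cpp
// A sketch of depth-map obstacle detection, assuming the fused output of the
// four cameras arrives as a grid of distances in meters. Illustration only,
// not the robot's actual pipeline.
#include <vector>

// Returns true if anything in the forward corridor is closer than stopDist,
// in which case the drive should slow down or steer around the obstruction.
bool pathBlocked(const std::vector<std::vector<float>> &depth,
                 float stopDist = 1.0f) {
    if (depth.empty()) return false;
    int rows = depth.size(), cols = depth[0].size();
    // Only the center third of the frame matters for straight-line travel;
    // the lower half of the image is the ground directly ahead.
    for (int r = rows / 2; r < rows; ++r)
        for (int c = cols / 3; c < 2 * cols / 3; ++c)
            if (depth[r][c] > 0.0f && depth[r][c] < stopDist)
                return true;
    return false;
}
```

In the real system, a decision like this would feed the wheelchair base's motor controller on every frame of that 12 FPS map.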

Check out the video after the break for a bit more information. The 3D guidance system is also explained in detail at the link above.


Drag and drop images for 3D printing

This piece of software called OmNomNom works with OpenSCAD to turn 2D images into 3D models. It’s literally a drag-and-drop process that renders almost instantly.

Here the example is a QR code, which is perfect for the software since it’s a well-defined black and white outline in the source image. But the video after the break shows several other examples that don’t rely on this simplicity. For instance, the Superman logo, which uses four different colors, is converted quite easily. There’s also a depth map of [Beethoven's] bust that is converted into a 3D object. The same technique can be used to create terrain from topographic source images.

Once the file has been converted to a model, it can still be tweaked like normal, letting you customize size and depth to suit your needs. This is where OpenSCAD comes into play, but if you don't use that program you can still export an STL file directly from OmNomNom for use on your 3D printer.
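If you're wondering what an image-to-model conversion amounts to under the hood, OpenSCAD can already extrude a grid of height values with its surface() function, so the job can be as simple as writing one number per pixel. Here's a minimal C++ sketch of that heightmap idea, assuming a plain ASCII PGM as input; this is our illustration of the general technique, not OmNomNom's actual code:

```cpp
// height2scad.cpp: turn a plain ASCII PGM (P2) grayscale image into a
// heightmap that OpenSCAD's surface() can read. Illustration of the
// technique, not OmNomNom's internals.
// Build with: g++ -o height2scad height2scad.cpp
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char **argv) {
    if (argc != 3) {
        std::cerr << "usage: height2scad in.pgm out.dat\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::ofstream out(argv[2]);
    std::string magic;
    int w, h, maxval;
    in >> magic >> w >> h >> maxval;
    if (magic != "P2") {
        std::cerr << "expected a plain (ASCII) PGM\n";
        return 1;
    }
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int v;
            in >> v;
            // Map 0..maxval to 0..10 units of height: dark pixels stay low,
            // bright pixels get extruded upward.
            out << (10.0 * v / maxval) << (x + 1 < w ? ' ' : '\n');
        }
    }
    return 0;
}
```

From there, surface(file = "out.dat"); in OpenSCAD turns the grid into a solid you can scale, combine with other geometry, or export as an STL.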


Multitouch table uses a Kinect for a 3D display

[Bastian] sent in a coffee table he built. This isn’t a place to set your drinks and copies of Make, though: it’s a multitouch table with a 3D display. Since no description can do this table justice, take a look at the video.

The build was inspired by the subject of this Hackaday post, where [programming4fun] was able to build a 'holographic display' using a regular 2D projector and a Kinect. Both builds work on the principle of redrawing the 3D space in relation to the user's head: as [Bastian] moves his head around the coffee table, the Kinect tracks his location and moves the three-dimensional grid of boxes in the opposite direction. It's extremely clever, and looks to be a promising user interface.
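The core of the trick is surprisingly little math: track the head, then shift the virtual scene the opposite way. A minimal sketch, with the axes, gain, and function names as our own assumptions rather than [Bastian]'s code:

```cpp
// Head-coupled perspective reduced to its core: shift the rendered scene
// opposite to the viewer's head motion so the display acts like a window.
// The gain and axis conventions are assumptions, not [Bastian]'s values.
struct Vec3 { float x, y, z; };

// headNow:  current head position from the Kinect skeleton tracker.
// headRest: calibrated position for looking straight down at the table.
// Returns the offset applied to the grid of boxes before each frame.
Vec3 sceneOffset(const Vec3 &headNow, const Vec3 &headRest, float gain) {
    return Vec3{
        -gain * (headNow.x - headRest.x),  // head right -> scene left
        -gain * (headNow.y - headRest.y),  // head up    -> scene down
        -gain * (headNow.z - headRest.z),  // leaning in deepens the parallax
    };
}
```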

In addition to the Kinect, the coffee table uses a Microsoft Surface-like display: four infrared lasers are placed at the corners and detected with a camera next to the projector in the base.

After the break you can see the demo video and a gallery of the images [Bastian] put up on the NUI Group forum.


Scanning turntable digitizes objects as 3D models

This turntable can automatically digitize objects for use in 3D rendering software like Blender3D. [James Dalby] built it using a high-quality DSLR and some bits and pieces out of his junk box. The turntable itself is a Lazy Susan turned on its head: the base that would normally sit on the table becomes a rest for the model, while the larger portion acts as a mounting surface for the drive mechanism.

He used the stepper motor from a scanner, as well as the belt and tension hardware from a printer to motorize the platform. This is driven by a transistor array (a ULN2003 chip) connected to an Arduino. The microcontroller also controls the shutter of the camera. We’ve included his code after the break; you’ll find his demo video embedded there as well.
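The control loop is about as simple as camera automation gets: nudge the motor, wait for the model to settle, fire the shutter, repeat. Here's a minimal Arduino-style sketch of that loop; the pin assignments, step counts, and delays are placeholders, and [James]'s actual code is the one to grab after the break:

```cpp
// Arduino-style sketch of the capture loop: step, settle, shoot, repeat.
// Pin numbers, step counts, and delays are assumptions, not [James]'s code.
const int coilPins[4] = {8, 9, 10, 11};  // ULN2003 inputs IN1..IN4
const int shutterPin = 12;               // camera's remote-trigger input

// Full-step drive sequence for a unipolar stepper, one coil at a time.
const byte phases[4] = {B1000, B0100, B0010, B0001};

void setup() {
  for (int i = 0; i < 4; i++) pinMode(coilPins[i], OUTPUT);
  pinMode(shutterPin, OUTPUT);
}

void stepMotor(int count) {
  for (int s = 0; s < count; s++) {
    for (int i = 0; i < 4; i++)
      digitalWrite(coilPins[i], bitRead(phases[s % 4], 3 - i));
    delay(5);  // give the rotor time to follow each step
  }
}

void loop() {
  stepMotor(50);                   // advance the platform one increment
  delay(500);                      // let the model stop wobbling
  digitalWrite(shutterPin, HIGH);  // fire the DSLR
  delay(200);
  digitalWrite(shutterPin, LOW);
  delay(1000);                     // wait out the exposure and file write
}
```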

The concept is the same as other turntable builds we've seen, but [James] takes the post-processing one step further. Rather than just making a rotating GIF, he uses Autodesk 123D to create a digital model from the set of images.


3D whiteboard without the whiteboard

This one is so simple, and works so well, we'd call it a hoax if April 1st hadn't already passed us by. But we're confident that what [William Myers] and [Guo Jie Chin] came up with exists, and we want one of our own. The project is a method of drawing in three dimensions using ultrasonic sensors.

They call it 3D Paint, and that's fitting, since the software interface is much like the original MS Paint. It can show the movements of the stylus in three axes, but it can also assemble an anaglyph (the kind of 3D that uses those red and blue filter glasses) so that the artist can see the 3D rendering as it is being drawn.
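Assembling an anaglyph is one of the simpler pieces of the puzzle: render the scene from two eye positions, then take the red channel from the left view and the remaining channels from the right. A generic illustration in C++, not the authors' code:

```cpp
// Red-cyan anaglyph assembly: red channel from the left-eye render, green
// and blue from the right-eye render. Generic illustration only.
struct RGB { unsigned char r, g, b; };

void makeAnaglyph(const RGB *left, const RGB *right, RGB *out, int pixels) {
    for (int i = 0; i < pixels; ++i) {
        out[i].r = left[i].r;   // the red filter passes only the left view
        out[i].g = right[i].g;  // the cyan filter passes the right view
        out[i].b = right[i].b;
    }
}
```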

The hardware depends on a trio of ultrasonic sensors and a stylus, all controlled by an ATmega644. That's it for hardware (to be fair, there are a few trivial amplifier circuits too), making this an incredibly affordable setup. The real work, and the reason the input is so smooth and accurate, is in the MATLAB code that does the trilateration, computing the stylus position from its distance to each sensor. If you like to get elbow-deep in the math, the article linked above has plenty to interest you. If you're more of a visual learner, just skip down after the break for the demo video.
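For those who want the flavor of the math without opening MATLAB, trilateration has a tidy closed form when the receivers sit in convenient positions. The sketch below is the textbook derivation, not the authors' code; the receiver spacing d and the third receiver's position (i, j) are assumed known from the physical setup:

```cpp
// Closed-form trilateration with the receivers in convenient spots:
// R1 at the origin, R2 at (d, 0, 0), R3 at (i, j, 0). Textbook math,
// not the authors' MATLAB implementation.
#include <cmath>

struct Point { double x, y, z; };

// r1..r3: stylus-to-receiver distances from ultrasonic time-of-flight.
Point trilaterate(double r1, double r2, double r3,
                  double d, double i, double j) {
    double x = (r1 * r1 - r2 * r2 + d * d) / (2 * d);
    double y = (r1 * r1 - r3 * r3 + i * i + j * j - 2 * i * x) / (2 * j);
    double z = std::sqrt(r1 * r1 - x * x - y * y);  // stylus above the plane
    return Point{x, y, z};
}
```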


View Gerber files in 3D in your browser

[Mark] wrote in, eager to show off a new tool he's created to view your Gerber files in 3D. He also wrote an Instructable to go along with it, to help you figure out how to use the tool. Since it runs in the browser, you can also send it to your friends for a quick 3D review. Some of you may not feel that the 3D view adds much to the process, but we think this is a welcome feature that just might get some use around here.

[Mark] points out that it is still being actively developed, so please send him bug reports via the form on the website if you encounter any.

V-sync detector lets you use 3D shutter glasses on Linux systems

This circuit is how [John Tsiombikas] makes his cheap 3D shutter glasses work with a Linux machine. It's not that they're incompatible with Linux; the issue is that only certain video cards have the stereo port necessary to drive the head-mounted hardware.

Shutter glasses block light from one eye at a time, so that a different rendering can be shown to each eye to create the stereoscopic effect. For the illusion to hold, the glasses have to switch in perfect time with the video signal. His circuit watches for the V-sync signal, then uses it to toggle the shutter glasses. Since the hardware has no way of knowing whether the left or right frame is being generated, he included a toggle switch as a user-controlled adjustment: if the 3D isn't coming together, you're probably viewing the frames with the wrong eye and need to flip the switch.
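The behavior is easy to express in code even though [John]'s build is a circuit. Here's the same logic as an Arduino-style sketch, treating the glasses as a single left/right select line; the pin numbers and the glasses' drive requirements are assumptions for illustration:

```cpp
// The toggle logic as an Arduino-style sketch. Pin numbers are assumptions;
// [John]'s build is a dedicated circuit, and real LCD shutter glasses may
// need a different drive signal than a bare logic level.
const int vsyncPin = 2;    // V-sync pulse tapped from the video connector
const int glassesPin = 3;  // selects which shutter is open
const int swapPin = 4;     // user switch to swap the eyes

volatile bool leftEye = false;

void onVsync() {
  leftEye = !leftEye;  // each vertical blank starts the other eye's frame
}

void setup() {
  pinMode(vsyncPin, INPUT);
  pinMode(glassesPin, OUTPUT);
  pinMode(swapPin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(vsyncPin), onVsync, RISING);
}

void loop() {
  // The switch XORs the parity: if the 3D looks wrong, flip it.
  bool swapped = (digitalRead(swapPin) == LOW);
  digitalWrite(glassesPin, (leftEye ^ swapped) ? HIGH : LOW);
}
```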

There’s really no way to show the effect without trying out the hardware in person. But [John] reports that it works like a charm when used with the OpenGL stereo wrapper.