Take a Spin on this Voice-Controlled 3D Scanning Rig

[Aldric Negrier] wanted to make 3D scanning a person simple and streamlined. To that end, he created this voice-controlled 3D-scanning rig.

[Aldric] used a variety of hacking skills to make this project, and his thorough Instructable illustrates this nicely. Everything from CNC milling to Arduino programming to 3D printing went into the rig. Plywood was used to construct the base and the large toothed gear, and a 12″ Lazy Susan bearing was attached to this gear to allow smooth rotation. To automate the rig, a 12V DC geared motor was attached to a smaller 3D-printed gear and positioned on the base; when the motor is on, the smaller gear’s teeth take the larger gear for a spin. A custom dual H-bridge motor driver, made by a friend, is connected to an Arduino Nano, which is also connected to a Bluetooth module and an ultrasonic range finder. When an object between 1 and 35 cm away is detected on the rig for 3 seconds, the motor starts to spin, stopping when the object is no longer detected. A typical scan takes about 60 seconds.
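The actual firmware runs on the Arduino Nano, but the trigger behaviour described above is simple enough to sketch in a few lines of Python. The distance and motor functions below are placeholders standing in for the ultrasonic ping and the H-bridge driver; this is an illustration of the logic, not [Aldric]'s code.

```python
import time

SCAN_TRIGGER_S = 3.0        # object must sit on the rig this long before spinning
MIN_CM, MAX_CM = 1, 35      # detection window of the ultrasonic range finder

def read_distance_cm():
    """Placeholder for the ultrasonic ping measurement."""
    raise NotImplementedError

def motor(on):
    """Placeholder for switching the H-bridge motor driver."""
    pass

present_since = None
spinning = False

while True:
    in_range = MIN_CM <= read_distance_cm() <= MAX_CM

    if in_range and present_since is None:
        present_since = time.monotonic()        # start the 3 second countdown
    elif not in_range:
        present_since = None
        if spinning:
            motor(False)                        # object gone: stop the turntable
            spinning = False

    if present_since is not None and not spinning \
            and time.monotonic() - present_since >= SCAN_TRIGGER_S:
        motor(True)                             # object held in range for 3 s: start spinning
        spinning = True

    time.sleep(0.05)
```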

This alone would have been a great project, but [Aldric] did not stop there. He wanted to be able to step on the rig and issue commands while being scanned. It makes sense if you want to scan yourself – get on the rig, assume the desired position, and then initiate the scan. He used the Windows speech recognition SDK to develop an application that issues commands via Bluetooth to Skanect, a piece of 3D-scanning software. The commands are as simple as saying “Start Skanect.” You can also tell the motor to switch on or off, or change its speed or direction, without breaking form. [Aldric] used an Asus Xtion as the 3D scanner, but a Kinect will also work. Afterwards, he smoothed his scans using MeshMixer, a program featured in previous hacks.

Check out the videos of the rig after the break. Voice commands are difficult to hear due to the background music in one of the videos, but if you listen carefully, you can hear them. You can also see more of [Aldric’s] projects here or on this YouTube channel.

Continue reading “Take a Spin on this Voice-Controlled 3D Scanning Rig”

ANUBIS, A Natural User Bot Interface System

[Matt], [Andrew], [Noah], and [Tim] have a pretty interesting build for their capstone project at Ohio Northern University. They’re using a Microsoft Kinect and a Leap Motion to create a natural user interface for controlling humanoid robots.

The robot the team is using for this project is a tracked humanoid robot they’ve affectionately come to call Johnny Five. Johnny takes commands from a computer, Kinect, and Leap Motion to move the chassis, arm, and gripper around in a way that’s somewhat natural, and surely a lot easier than controlling a humanoid robot with a keyboard.

The team has also released all their software on GitHub under an open source license. You can grab it there, or take a look at some of the pics and videos from the Columbus Mini Maker Faire.

Seeing The World Through Depth Sensing Cameras

The Oculus Rift and all the other 3D video goggle solutions out there are great if you want to explore virtual worlds with stereoscopic vision, but until now we haven’t seen anyone exploring real life with digital stereoscopic viewers. [pabr] combined the Kinect-like sensor in an ASUS Xtion with a smartphone in a Google Cardboard-like setup to get 3D views the human eye can’t naturally experience: a third-person view, a radar-like display, and what the world would look like with your eyes 20 inches apart.

[pabr] is using an ASUS Xtion depth sensor connected to a Galaxy SIII via the USB OTG port. With a little bit of code, the output from the depth sensor can be pushed to the phone’s display. The hardware setup uses a VR-Spective headset, a rather expensive bit of plastic, though with the right mechanical considerations a piece of cardboard or some foam board and hot glue would do quite nicely.

[pabr] put together a video demo of his build, along with a few examples of what this project can do. It’s rather odd, and surprisingly not a superfluous way to see in 3D. You can check out that video below.

Continue reading “Seeing The World Through Depth Sensing Cameras”

Stereo Vision and Depth Mapping with Two Raspi Camera Modules

The Raspberry Pi has a port for a camera connector, allowing it to capture 1080p video and stream it to a network without having to deal with the craziness of webcams and the improbability of capturing 1080p video over USB. The Raspberry Pi Compute Module is a little more advanced; it breaks out two camera connectors, theoretically giving the Raspberry Pi stereo vision and depth mapping. [David Barker] put a Compute Module and two cameras together, making this build a reality.

Stereo vision has been used in computer vision and robotics research far longer than newer depth-mapping methods like a repurposed Kinect, but so far the hardware to do it has been a little hard to come by. You need two cameras, obviously, and the software techniques are well understood in the relevant literature.

[David] connected two cameras to a Pi Compute Module and implemented the software three different ways: once in Python and NumPy running on a 3 GHz x86 box, once in C running on both x86 and the Pi’s ARM core, and once in assembler for the VideoCore GPU on the Pi. Assembly is the way to go here – on the x86 platform, Python did the parallax computations in 63 seconds, while C managed it in 56 milliseconds. On the Pi, C took 1 second, and the VideoCore took 90 milliseconds. That translates to a frame rate of about 12 FPS on the Pi, more than enough for some very, very interesting robotics work.
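We couldn't find the team's code (more on that below), but OpenCV's stock block matcher gives a feel for the parallax computation being benchmarked here. This is a generic sketch rather than [David]'s pipeline; the image files and matcher parameters are placeholders.

```python
import cv2

# Left and right frames from the two camera modules (file names are placeholders).
left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: for each pixel, slide a small window along the epipolar line
# in the other image and record the horizontal shift (disparity) that matches best.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # StereoBM returns disparity * 16

# Depth falls out of the disparity: Z = f * B / d, with focal length f (in pixels)
# and baseline B (the distance between the two cameras).
out = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", out)
```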

There are some better pictures of what this setup can do over on the Raspi blog. We couldn’t find a link to the software that made this possible, so if anyone has a link, drop it in the comments.

Super Smash Bros Gets a Revamp with the Microsoft Kinect

[Eric] just sent in this awesome Kinect hack that he and a few friends worked on: playing Super Smash Bros with a Kinect.

The system makes use of two Kinects and three PCs. The first Kinect records each individual player’s moves, while the second Kinect watches both players “fight” each other. The first PC runs a Nintendo 64 emulator to play the game.

The second PC runs a camera with OpenCV to add another cool, if perhaps unnecessary, feature: even character selection is a physical process, adding to the idea of playing the entire game with your body. A glass table lets each player set a 3D-printed token onto the glass, effectively placing it on the character they would like to use.

And when the match ends, a windshield wiper knocks the losing player’s token off the table.
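The write-up doesn't include the vision code, but the character-select step boils down to finding the token's blob in the camera frame and mapping its centroid onto the character grid. Here's a rough OpenCV sketch of that idea; the threshold, grid dimensions, and file name are all guesses.

```python
import cv2

# One frame from the camera watching the glass table (file name is a placeholder).
frame = cv2.imread("table.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The printed token shows up as a dark blob against the bright glass.
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature

if contours:
    token = max(contours, key=cv2.contourArea)          # biggest blob is the token
    m = cv2.moments(token)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    # Map the centroid onto the character-select grid (grid size is a guess).
    cols, rows = 8, 3
    col = cx * cols // frame.shape[1]
    row = cy * rows // frame.shape[0]
    print(f"Token is over character cell ({row}, {col})")
```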

The third PC is responsible for running both Kinects, and it sends the resulting commands back to the first PC over a TCP connection for input into the game.
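The article doesn't spell out the wire format, but the Kinect PC only needs to push short controller commands to the emulator PC. A minimal version of that link might look like the following, with the address, port, and command strings invented purely for illustration.

```python
import socket

# Address of the PC running the N64 emulator (host, port, and commands are assumptions).
EMULATOR = ("192.168.1.10", 5555)

def send_command(cmd: str) -> None:
    """Push one newline-terminated controller command to the emulator PC."""
    with socket.create_connection(EMULATOR, timeout=1.0) as s:
        s.sendall((cmd + "\n").encode("ascii"))

send_command("P1 A_PRESS")   # e.g. player one's punch gesture becomes an A press
```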

They introduced it to the public at MHacks Fall 2014, a hacking competition sponsored by Dell and Intel. Video below.

Continue reading “Super Smash Bros Gets a Revamp with the Microsoft Kinect”

Using Kinect To Play Super Mario Bros 3 On NES Ensures Quick Death

Why do only the new game consoles get all the cool peripherals? Being a man of action, [Paul] set out to change that. He had a Kinect V2 and an original Nintendo and thought it would be fun to get the two to work together.

Thinking it would be easiest to emulate a standard controller, [Paul] surfed the ’net a bit until he found an excellent article that explained how the NES controller works. It turns out that besides the buttons, there’s only one shift register chip and some pull-up resistors in the controller. Instead of soldering leads to a cannibalized NES controller, he decided to stick another shift register and some resistors down on a breadboard, with a controller cable connected directly to the chip.


An Arduino is used to emulate the button presses. The Arduino runs the Firmata sketch, which allows its pins to be toggled from a host computer. That host computer runs an application [Paul] wrote himself using the Kinect V2 SDK; it converts the player’s gestures into controller commands and tells the Arduino which buttons to ‘push’. This is definitely a pretty interesting and involved project, even if the video does make it look very challenging to rescue Princess Toadstool from Bowser and the Koopalings!
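[Paul]'s host application is C# built on the Kinect V2 SDK, but the Firmata side of the conversation is easy to show. The Python sketch below uses pyFirmata to 'press' a button by toggling the Arduino pin wired to the corresponding shift register input; the serial port, pin numbers, and signal polarity are assumptions about the wiring, not details from the build.

```python
import time
from pyfirmata import Arduino

board = Arduino("/dev/ttyUSB0")   # serial port is a placeholder

# Each NES button maps to the Arduino pin driving the matching parallel input
# of the shift register on the breadboard (pin assignments are guesses).
BUTTONS = {"A": 2, "B": 3, "START": 4, "RIGHT": 5}

def press(button, duration=0.1):
    pin = board.digital[BUTTONS[button]]
    pin.write(1)        # assert the button line (may be active-low, depending on the wiring)
    time.sleep(duration)
    pin.write(0)        # release it

press("RIGHT", 0.5)     # a detected lean becomes a run to the right
press("A")              # a detected hop becomes a jump
```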

If you’d like to help the project or just build one for yourself, check out the source files on the Kinect4NES GitHub page.

Continue reading “Using Kinect To Play Super Mario Bros 3 On NES Ensures Quick Death”

Robotic Terminator Teddy Will Protect You While You Sleep

This animatronic teddy bear is the stuff of nightmares… or dreams if you’re into mutant robot toys. In either case, this project by [Erwin Ried] is charming and creepy, as he gives life to an unassuming stuffed animal by implanting it with motorized parts.

[Erwin] achieves several degrees of motion throughout the bear’s body by filling the skin with a series of 3D-printed bones, joined by servo motors at the shoulders, elbows, and neck. The motors are controlled by an Arduino running slave to a custom application written in C#. This application uses the motion tracking and facial recognition features of the Xbox Kinect, mapping the puppeteer’s movements onto the motors of the doll’s skeleton. Additionally, two red LEDs illuminate under the bear’s cheeks in response to the facial expression of the person controlling it, a reminder that teddy feels what you feel.
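[Erwin]'s application does this mapping in C# with the Kinect SDK, but the core of it is turning three tracked joint positions into one servo angle and shipping that to the Arduino. The Python sketch below shows the idea; the serial protocol and the example joint coordinates are made up for illustration.

```python
import numpy as np
import serial  # pyserial

# Serial link to the Arduino driving the servos (port and baud rate are placeholders).
arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def joint_angle(a, b, c):
    """Angle at joint b (e.g. the elbow), given three 3D joint positions."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example positions standing in for what the skeleton tracker would report.
shoulder, elbow, wrist = (0.10, 0.40, 2.0), (0.20, 0.20, 2.0), (0.35, 0.25, 1.9)
angle = int(joint_angle(shoulder, elbow, wrist))       # 0..180 degrees
arduino.write(f"E{angle}\n".encode())                  # e.g. "E97" positions the elbow servo
```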


In [Erwin’s] video, he demonstrates what his application sees through the Kinect’s camera side-by-side with the mechanical skeleton it’s controlling. The finished product isn’t something I’d soon cuddle up to at night, but it looks amazing and is fun to watch in action:

Continue reading “Robotic Terminator Teddy Will Protect You While You Sleep”