The Raspberry Pi has a port for a camera connector, allowing it to capture 1080p video and stream it to a network without having to deal with the craziness of webcams and the improbability of capturing 1080p video over USB. The Raspberry Pi compute module is a little more advanced; it breaks out two camera connectors, theoretically giving the Raspberry Pi stereo vision and depth mapping. [David Barker] put a compute module and two cameras together, making this build a reality.
Stereo vision has been used in computer vision and robotics research far longer than newer methods of depth mapping like a repurposed Kinect, but so far the hardware to do it has been a little hard to come by. You need two cameras, obviously; the software techniques, at least, are well understood in the relevant literature.
[David] connected two cameras to a Pi compute module and implemented the depth-mapping code three different ways: in Python and NumPy, running on a 3 GHz x86 box; in C, running on both x86 and the Pi’s ARM core; and in assembler for the Pi’s VideoCore GPU. Assembly is the way to go here: on the x86 platform, Python did the parallax computations in 63 seconds, while C managed it in 56 milliseconds. On the Pi, the C version took 1 second, and the VideoCore version took 90 milliseconds. That works out to a frame rate of about 12 FPS on the Pi, more than enough for some very, very interesting robotics work.
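For a sense of what all that horsepower is chewing on, here is a minimal sketch of brute-force block matching, the classic stereo technique, in Python and NumPy. This is our illustration, not [David]’s code; the function name, block size, and disparity range are all arbitrary, and the inputs are assumed to be rectified grayscale frames.

```python
import numpy as np

def disparity_map(left, right, block=8, max_disp=32):
    """Brute-force SAD block matching on two rectified grayscale frames.
    For each block in the left image, slide along the same row of the
    right image and keep the horizontal offset with the lowest sum of
    absolute differences."""
    h, w = left.shape
    rows, cols = h // block, w // block
    disp = np.zeros((rows, cols), dtype=np.float32)
    left, right = left.astype(np.int32), right.astype(np.int32)
    for by in range(rows):
        for bx in range(cols):
            y, x = by * block, bx * block
            patch = left[y:y + block, x:x + block]
            best_sad, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):  # shift candidate leftwards
                cand = right[y:y + block, x - d:x - d + block]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp
```

Depth then follows from disparity as depth = focal length × baseline / disparity, and the triple loop above makes it obvious why the naive Python version needed a minute per frame.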
There are some better pictures of what this setup can do over on the Raspi blog. We couldn’t find a link to the software that made this possible, so if anyone has a link, drop it in the comments.
The system makes use of two Kinects and three PCs. The first Kinect records each individual player’s moves, while the second Kinect watches both players “fight” each other. The first PC runs a Nintendo 64 emulator to play the game.
The second PC runs a camera with OpenCV to add another cool, if perhaps unnecessary, feature: even the character selection is a physical process, adding to the idea of playing the entire game with your body. A glass table lets players set a 3D-printed token onto the glass, effectively placing it on the character they would like to use.
And when the match ends, a windshield wiper knocks the losing player’s token off the table.
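We haven’t seen the character-select code, but spotting a round token resting on a glass plate is bread-and-butter OpenCV; a Hough circle search like this hypothetical sketch would do it, with the grid dimensions and all the detector parameters being our guesses:

```python
import cv2

def select_character(frame, grid_cols=4, grid_rows=2):
    """Find a round token in the overhead camera frame and map its
    centre onto a character-select grid. Grid size, blur, and Hough
    parameters are all invented for this sketch."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress glare on the glass
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=15, maxRadius=60)
    if circles is None:
        return None
    x, y, _r = circles[0][0]
    h, w = gray.shape
    return int(y * grid_rows / h), int(x * grid_cols / w)  # (row, col) cell
```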
The third PC is responsible for running both Kinects, and it sends the resulting commands back to the first PC over a TCP connection for input into the game.
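The write-up doesn’t say what actually travels over that TCP link, but the plumbing could be as simple as newline-delimited button names; a hypothetical Python sketch of both ends, with the address and the `press_emulator_button` hook invented for illustration:

```python
import socket

# Kinect PC side: push a recognized move to the emulator PC.
def send_command(cmd, host='192.168.1.10', port=9000):
    with socket.create_connection((host, port)) as s:
        s.sendall((cmd + '\n').encode())

# Emulator PC side: accept commands and feed them into the game.
def serve(port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', port))
    srv.listen(1)
    while True:
        conn, _addr = srv.accept()
        with conn, conn.makefile() as f:
            for line in f:  # one command per line
                press_emulator_button(line.strip())  # hypothetical emulator hook
```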
Why do only the new game consoles get all the cool peripherals? Being a man of action, [Paul] set out to change that. He had a Kinect V2 and an original Nintendo and thought it would be fun to get the two to work together.
Thinking it would be easiest to emulate a standard controller, [Paul] surfed the ‘net a bit until he found an excellent article that explained how the NES controller works. It turns out that besides the buttons, there’s only a single shift register chip and some pull-up resistors inside the controller. Instead of soldering leads to a cannibalized NES controller, he stuck another shift register and a few resistors down on a breadboard, with a controller cable connected directly to the chip.
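That shift register (a CD4021 in the original controller) makes the protocol charmingly simple: the console pulses a latch line to grab all eight buttons in parallel, then clocks the bits out one at a time in the order A, B, Select, Start, Up, Down, Left, Right. A toy Python model of that behavior, for illustration only:

```python
# Toy model of the controller's parallel-in, serial-out shift register.
# Button order on the wire: A, B, Select, Start, Up, Down, Left, Right.
BUTTONS = ['A', 'B', 'Select', 'Start', 'Up', 'Down', 'Left', 'Right']

class ControllerShiftRegister:
    def __init__(self):
        self.bits = [1] * 8  # pull-ups: lines idle high (1 = not pressed)

    def latch(self, pressed):
        # LATCH pulse: parallel-load all eight buttons at once.
        # Pressed buttons short their input to ground, so 0 = pressed.
        self.bits = [0 if name in pressed else 1 for name in BUTTONS]

    def clock(self):
        # CLOCK pulse: the console reads one bit, then the register shifts.
        return self.bits.pop(0) if self.bits else 1

reg = ControllerShiftRegister()
reg.latch({'A', 'Right'})
print([reg.clock() for _ in range(8)])  # -> [0, 1, 1, 1, 1, 1, 1, 0]
```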
An Arduino is used to emulate the button presses. It runs the standard Firmata sketch, which allows its pins to be toggled from a host computer. That host computer runs an application [Paul] wrote himself using the Kinect V2 SDK; it converts the player’s gestures into controller commands and tells the Arduino which buttons to ‘push’. This is definitely a pretty interesting and involved project, even if the video does make it look very challenging to rescue Princess Toadstool from Bowser and the Koopalings!
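[Paul]’s host application is on GitHub (link below), but just to show how little glue Firmata needs, here is a hedged Python equivalent of the ‘push a button’ step using the pyfirmata client; the serial port and pin mapping are invented for the example:

```python
import time
from pyfirmata import Arduino  # host-side client for the Firmata protocol

board = Arduino('/dev/ttyACM0')  # serial port is system-dependent

# Invented mapping from NES buttons to the Arduino pins wired to the
# breadboarded shift register's parallel inputs. Active-low, like the
# real controller's buttons shorting their inputs to ground.
PINS = {'A': 2, 'B': 3, 'Select': 4, 'Start': 5,
        'Up': 6, 'Down': 7, 'Left': 8, 'Right': 9}

for p in PINS.values():
    board.digital[p].write(1)  # idle high = nothing pressed

def press(button, hold=0.1):
    pin = board.digital[PINS[button]]
    pin.write(0)       # drive low: 'button pressed'
    time.sleep(hold)
    pin.write(1)       # release

press('A')  # make Mario jump
```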
If you’d like to help the project or just build one for yourself, check out the source files on the Kinect4NES GitHub page.
This animatronic teddy bear is the stuff of nightmares… or dreams if you’re into mutant robot toys. In either case, this project by [Erwin Ried] is charming and creepy, as he gives life to an unassuming stuffed animal by implanting it with motorized parts.
[Erwin] achieves several degrees of motion throughout the bear’s body by filling the skin with a series of 3D-printed bones, joined by servo motors at the shoulders, elbows, and neck. The motors are controlled by an Arduino running as a slave to a custom application written in C#. This application uses the motion-tracking and facial-recognition features of the Xbox Kinect, mapping the puppeteer’s movements onto the motors of the doll’s skeleton. Additionally, two red LEDs light up under the bear’s cheeks in response to the facial expression of the person controlling it, as an added reminder that teddy feels what you feel.
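The C# source isn’t shown here, but the heart of any such mapping, turning three tracked joint positions into one servo angle, is just the angle between two vectors. A hedged Python sketch of the idea, not [Erwin]’s actual code:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (e.g. an elbow) in degrees, given the 3D
    positions of three tracked joints a-b-c from the skeleton stream."""
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g. shoulder, elbow, wrist positions from the Kinect skeleton:
servo_deg = joint_angle((0, 0, 0), (0.25, 0, 0), (0.25, 0.25, 0))
print(round(servo_deg))  # 90 -- this is what gets sent to the elbow servo
```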
In [Erwin’s] video, he demonstrates what his application sees through the Kinect’s camera side-by-side with the mechanical skeleton it’s controlling. The finished product isn’t something I’d soon cuddle up to at night, but it looks amazing and is fun to watch in action:
Kyocera is vastly expanding their product lineup with the Shop Sink 3530. The perfect addition to your copiers, fax machines, and laser printers.
About a year and a half ago, and over the objections of the editorial staff, we did a Top 10 hacking fails in movies and TV post. The number one fail is, “Stupid crime shows like NCIS, CSI, and Bones.” A new show on CBS just topped this list. It’s named Scorpion, and wow: dropping a Cat5 cable from an airplane doing an almost-touch-and-go because something is wrong with the computers in the tower. Four million adults aged 18-49 watched this.
[Derek] found something that really looks like the Hackaday logo in a spacer of some kind. It’s been sitting on his shelf for a few months, and he’s only now sending it in. He picked it up in a pile of scrap metal, and he (and we) really have no idea what this thing is. Any guesses?
[Art] has another ‘what is this thing’. He has two of them, and he’s pretty sure it’s some sort of differential, but other than that he’s got nothing. The only real clue is that [Art] lives near a harbor on the Northern California coast. Maybe it’s from a navigation system, or a governor from a weird diesel?
So you have a Kinect sitting on a shelf somewhere. That’s fine, we completely understand that. Here’s something: freeze yourself in carbonite. Yeah, it turns out having a depth sensor is exactly what you need to make a carbonite copy of yourself.
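The principle, at least, is straightforward: squash the Kinect’s depth frame into a shallow relief and print or mill the result. A back-of-the-napkin NumPy sketch, with every threshold invented for illustration:

```python
import numpy as np

def carbonite_relief(depth_mm, near=800, far=1200, relief_mm=20):
    """Collapse a Kinect depth frame (one millimetre reading per pixel)
    into a shallow heightmap: nearer surfaces stand prouder of the slab,
    anything beyond `far` flattens into it. All values are guesses."""
    d = np.clip(depth_mm.astype(np.float32), near, far)
    return (far - d) / (far - near) * relief_mm  # heights from 0 to relief_mm

frame = np.random.randint(700, 1400, (480, 640))  # stand-in for a real frame
print(carbonite_relief(frame).max())              # about 20.0 mm
```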
The people at Two Bit Circus are at it again, this time with a futuristic racing simulator where the user controls the experience. It was developed by [Brent Bushnell] and [Eric Gradman] along with a handful of engineers and designers in Los Angeles, California. The immersive gaming chair is built around an actual racing seat, and foot pedals were added to give the driver more of a feeling of being in a real race. Cooling fans were mounted up top for haptic feedback, and a Microsoft Kinect was integrated into the system to detect the hand gestures that control what appears on the various screens.
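The team’s gesture code isn’t public, but with the Kinect’s skeleton stream, a serviceable gesture check can be as crude as comparing joint heights; the joint names, coordinates, and thresholds below are our own stand-ins:

```python
def gesture(skeleton):
    """Map hand-above-head poses to screen commands. Joint positions
    mimic Kinect skeleton space (metres, y pointing up)."""
    head_y = skeleton['head'][1]
    if skeleton['hand_right'][1] > head_y:
        return 'next_screen'
    if skeleton['hand_left'][1] > head_y:
        return 'prev_screen'
    return None

print(gesture({'head': (0.0, 0.6, 2.0),
               'hand_right': (0.4, 0.9, 2.0),
               'hand_left': (-0.4, 0.1, 2.0)}))  # -> 'next_screen'
```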
The team completed the project within thirty days during a challenge from Best Buy, who wanted to see if they could create the future of viewing experiences. Problems surfaced throughout the time frame, though, creating obstacles with video cards, monitors, and shipping dates. They got it done, and are looking toward integrating their work into restaurants like Dave & Buster’s and other venues like arcades and bars (at least that’s the rumor going around town). The five-part mini-series produced around this device can be seen after the break:
Ever feel like someone is watching you? Like, somewhere in the back of your mind, you can feel the peering eyes of something glancing at you? Tapping into that paranoia is this computer science graduate project, created during a “Tangible Interactive Computing” class at the University of Maryland by two bright young students named [Josh] and [Richard], with the help of the HCIL hackerspace.
Their professor, [Dr. Jon Froehlich], wanted the students to ‘seamlessly couple the dual worlds of bits and atoms’ and create something that would ‘explore the materiality of interactive computing.’ And this relatively simple idea does just that, guaranteeing some good reactions.
As you’ve probably gathered from the title, this project uses a Microsoft Kinect to track the movement of nearby people. The output is then translated into control signals for the mounted eyeballs, producing a creepy vibe that radiates out from the feline robot poster.
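The students’ code isn’t linked, but the core translation, from a tracked person’s position to an eye-servo angle, is a one-liner of trigonometry; here is a hedged Python stand-in, with all the numbers being our guesses:

```python
import math

def eye_angle(person_x, person_z, center_deg=90, span_deg=60):
    """Turn a Kinect skeleton position (metres; x across the room,
    z out from the sensor) into a hobby-servo angle so the poster's
    eyes appear to follow the viewer."""
    bearing = math.degrees(math.atan2(person_x, person_z))
    bearing = max(-span_deg / 2, min(span_deg / 2, bearing))  # clamp to travel
    return center_deg + bearing  # 90 degrees = staring straight ahead

print(round(eye_angle(0.5, 2.0)))  # viewer half a metre to the right -> 104
```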