Yesterday Microsoft announced their new cable box, the Xbox One. Included in the announcement is a vastly improved Kinect sensor. It won’t be available until next Christmas, but now the question is what are we going to do with it?
From the initial specs that can be found, the new version of the Kinect will output 1080p RGB video over a USB 3.0 connection to the new Xbox. The IR depth camera of the original Kinect has been replaced with a time-of-flight camera – a camera that sends out a pulse of light and times how long it takes for the photons to be reflected back to the sensor. While there have been some inroads into making low-cost ToF cameras – namely Intel and Creative’s Interactive Gesture Camera Development Kit and the $250 DepthSense 325 from SoftKinetic – the Kinect 2.0 will be the first time-of-flight camera you’ll be able to buy for a few hundred bucks at any Walmart.
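The time-of-flight principle itself boils down to simple arithmetic: depth is half the distance light covers during the measured round trip. A minimal sketch of the idea (our own illustration, not Microsoft’s implementation):

```python
# Depth from a time-of-flight measurement: light travels out to the scene
# and back, so depth is half the round-trip distance.
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Convert a measured photon round-trip time (seconds) to depth (meters)."""
    return C * round_trip_s / 2.0

# A round trip of ~13.3 nanoseconds corresponds to roughly 2 m of depth,
# which shows why the timing electronics need picosecond-class precision.
print(tof_depth_m(13.34e-9))
```

The tiny time scales involved are exactly why ToF sensors have historically been expensive – the hard part is the timing circuitry, not the math.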
We’ve seen a ton of awesome Kinect hacks over the years: everything from a ‘holographic display’ that turns any TV into a 3D display, to computer vision for robots, to a 3D scanner. A new Kinect sensor with better 3D resolution can only improve existing projects, and the time-of-flight sensor – the same principle behind the laser rangefinders on Google’s driverless car – opens the door for a whole bunch of new projects.
So, readers of Hackaday: assuming someone can write a driver within a few days of release, as happened with the original Kinect, what are we going to do with it?
While we’re at it, keep in mind we made a call for Wii U controller hacks. If somebody can crack that nut, it’ll make an awesome remote for robots, FPV airplanes, and drones.
The game of Anti-Tetris is played by standing in front of a monitor and watching falling Tetris pieces overlaid on a video image of your body. Each hand is used to make pieces disappear so that they don’t stack up to the top of the screen. We don’t see this as the next big indie game; what we do see are some very interesting techniques for hand tracking.
An FPGA drives the game, using a camera as input. To track your hands, the Cornell students figured out that skin tones fall within a specific range of values in the YUV color space, which can be coded as a filter to direct cursor placement. But they needed a bit of a hack to get at those values: they patched into the camera circuit before the YUV signal is converted to RGB for the NTSC output.
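A skin-tone filter in YUV space can be surprisingly compact, since skin chroma clusters into a narrow U/V band regardless of brightness. Here’s a hedged sketch of the idea in Python – the threshold values and the `hand_centroid` helper are our own illustration; the students tuned their own ranges in FPGA logic:

```python
def is_skin_yuv(y: int, u: int, v: int) -> bool:
    """Classify a pixel as skin by its chroma alone -- luma (Y) is ignored,
    which is what makes the YUV approach robust to lighting changes.
    The U/V ranges here are hypothetical; the project tuned its own."""
    return 80 <= u <= 130 and 135 <= v <= 175

def hand_centroid(frame):
    """frame: iterable of (x, y_pos, Y, U, V) pixel tuples.
    Returns the (x, y) centroid of skin-classified pixels, or None."""
    xs, ys = [], []
    for x, y_pos, Y, U, V in frame:
        if is_skin_yuv(Y, U, V):
            xs.append(x)
            ys.append(y_pos)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The same test runs per-pixel in hardware, which is why working in YUV – the camera’s native space – saves the FPGA an entire colorspace conversion.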
Registering hand movement perpendicular to the screen was another challenge they faced. Because the hand’s location had already been established, they were able to measure the distance between its upper and lower boundaries. If that distance changes fast enough, it is treated as an input, making the current block disappear.
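In code, that depth gesture could look something like the sketch below: track the hand’s apparent height frame to frame and fire when it changes by more than a threshold fraction. The numbers are our own guesses, not the project’s tuned values:

```python
def make_depth_gesture_detector(change_fraction=0.25):
    """Return a per-frame callback. Feed it the hand's upper and lower
    boundary rows each frame; it returns True when the apparent height
    changes fast enough to count as a push toward or away from the
    camera. The 25% threshold is hypothetical."""
    state = {"height": None}

    def update(upper: int, lower: int) -> bool:
        height = lower - upper
        prev = state["height"]
        state["height"] = height
        if prev is None or prev <= 0:
            return False
        return abs(prev - height) / prev > change_fraction

    return update
```

Working from the relative change rather than an absolute pixel count means the gesture works whether the player stands close to the camera or far from it.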
Continue reading “Anti-Tetris project is a study in hand tracking”
A while back we toyed with the idea of doing a look back on Hackaday history. We weren’t sure how often to publish it, or what exactly to publish. Now we’ve decided that this will be the main part of the Hackaday newsletter. You can sign up here if you haven’t already, but hurry: I’m sending out today’s newsletter in a couple of hours!
Each email (1-2 a week) will feature that day’s history going all the way back to roughly the beginning. It will also have a quick blurb about what video I’m working on, plus any other little Hackaday news bits.
This camera rig uses a Raspberry Pi to send a camera down fifty meters (mirror on the RPi blog) in order to spy on sharks. We got really excited at first, thinking it might be using the camera module from the Raspberry Pi Foundation, but that isn’t the case. Do keep reading though; there’s a lot of cool stuff involved in this one.
The project uses a collection of camera units spread over a large area to monitor shark activity. Each is mounted on an anchored buoy, using solar panels and a lead-acid gel battery for power. The RPi itself remains topside in a waterproof box and connects to the camera using a 50-foot Ethernet patch cable.
We figure the challenge of building the hardware parallels that of designing an underwater ROV. The camera needs an enclosure that can stand up to the pressure at that depth while still allowing the cable to pass through. There is also an interesting note in the project log about getting the camera’s exposure settings to behave.
A lot of awesome stuff happened in [Bruce Land]’s lab at Cornell this last semester. Three students – [Pat], [Ed], and [Hanna] – put in hours of work to come up with a few algorithms that are able to simulate stereo audio from monophonic sound. It’s enough work for three semesters of [Dr. Land]’s ECE 5030 class, and while it’s impossible to truly appreciate this project through a YouTube video, we’re assuming it’s an awesome piece of work.
The first part of the team’s project was to gather data about how the human ear hears in 3D space. To do this, they mounted microphones in a team member’s ears, sat them down on a rotating stool, and played a series of clicks. Tons of MATLAB later, the team had an averaged model of how their team member’s head heard sound. Basically, they created an algorithm for how binaural recording works.
To prove their algorithm worked, the team took a piece of music, squashed it down to mono, and played it through an MSP430 microcontroller. With a good pair of headphones, they’re able to virtually place the music in a stereo space.
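The team’s measured head-related responses are the real product here, but the core idea – that a delay and a level difference between the ears place a sound in space – can be sketched with a crude interaural model. This is a simplified stand-in of our own, far rougher than the team’s measured filters:

```python
import math

def place_mono(mono, azimuth_deg, rate=44100):
    """Pan a mono signal (list of float samples) into a left/right pair
    using an interaural time delay (ITD) and level difference (ILD).
    The constants are rough textbook figures, not the Cornell team's
    measured head-related responses."""
    az = math.radians(azimuth_deg)             # positive = source to the right
    itd_s = 0.00066 * math.sin(az)             # ~660 us max delay at +/-90 deg
    delay = int(round(abs(itd_s) * rate))      # far-ear delay, in whole samples
    far_gain = 0.5 + 0.5 * math.cos(az)        # simple head-shadow attenuation
    near = mono + [0.0] * delay                          # near ear: full level, no delay
    far = [0.0] * delay + [far_gain * s for s in mono]   # far ear: delayed, quieter
    if azimuth_deg >= 0:
        return far, near   # source on the right: left ear is the far ear
    return near, far
```

The real algorithm convolves the signal with measured impulse responses instead of applying a single delay and gain, which is what captures elevation cues and the coloration of the outer ear.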
The video below covers the basics of their build, but because of the limitations of [Bruce]’s camera and YouTube, you won’t be able to experience the team’s virtual stereo for yourself. You can, however, put on a pair of headphones and listen to this, a good example of what can be done with this sort of setup.
Continue reading “Adding stereo to monophonic audio”
The Vine app is all the rage these days. It lets you shoot six-second videos on your iPhone and easily post them on the Internet. The problem is that [Sean Hodgins] doesn’t find the time limit useful for traditional video. But you can cram a lot more info into a half-dozen seconds if you make it a time-lapse. The rig above is his solution for making the Vine app act as a time-lapse recorder.
The trick is in how the app itself works: it only records video while you’re touching the screen. So you record one second of video, then remove your finger, and it ‘pauses’ the recording until you’re ready for the next scene. [Sean] automated this by adding a servo motor and a stylus. An Arduino drives the servo, making quick taps on the screen to fit as many different frames into the six seconds as possible. He had a bit of trouble registering quick taps at first; his solution was to inject 3.3V into the stylus he had gutted for the project. Click through the link above to see some example videos, or watch the embedded video to see the hardware at work:
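The arithmetic behind the rig is worth spelling out: Vine’s clock only runs while the screen is touched, so the per-tap contact time sets how many distinct frames fit in the six seconds. A quick sketch with hypothetical timing numbers (our own, not [Sean]’s measurements):

```python
import math

def max_timelapse_frames(record_budget_s=6.0, tap_contact_s=0.12):
    """Vine records only while the stylus touches the screen, so only the
    contact time of each servo tap consumes the six-second budget; the
    servo's travel time between taps is free. The 120 ms contact time is
    a hypothetical figure."""
    return math.floor(record_budget_s / tap_contact_s)
```

Shorter taps mean more frames but blurrier captures, so the real rig is a trade-off between servo speed and how reliably the screen registers each touch.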
Continue reading “Vine app hack on iPhone makes time-lapse movies”