Kinect Visualizer Demo Gives Winamp a Run for Its Money

Winamp, eat your heart out: thanks to a Microsoft Kinect in the hands of [Samarth], there’s a new way to make your screen dance along with you. He created a music visualizer demo that takes advantage of the Kinect’s 3D depth camera, outputting a fun pixelated silhouette and a color-changing strobe. When there are big hi-hat hits or bass thumps, the camera feed reacts accordingly (as any good visualizer would). He’s even uploaded the project’s code in case anyone would like to take a look.

The visualizer utilizes the OpenKinect-Processing library, which has provided the backbone for many similar Kinect art projects. It was created specifically to give coders a quicker way to access the raw color and depth data output by the Kinect. Its creator, Daniel Shiffman, has posted a number of tutorials to aid anyone looking to create their own real-time animations as well.
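If you’d rather poke at the raw data from Python instead of Processing, the core of the effect is easy to sketch with the libfreenect bindings: grab a depth frame, threshold it into a near/far silhouette, and downsample it for that chunky pixelated look. The threshold and block size below are hypothetical (and the audio-reactive strobe is left out), so treat this as an illustration of the idea rather than [Samarth]’s actual code.

```python
import freenect
import numpy as np

NEAR_RAW = 700   # hypothetical cutoff in raw 11-bit depth units (smaller = closer)
BLOCK = 16       # pixelation factor; 640x480 collapses to 40x30 blocks

def silhouette_frame():
    # sync_get_depth() returns a (480, 640) array of raw 11-bit depth values
    depth, _ = freenect.sync_get_depth()
    # Foreground = close to the camera; invalid readings come back as 2047,
    # so they fall on the "far" side of the threshold automatically
    mask = depth < NEAR_RAW
    # Collapse each BLOCK x BLOCK tile to one cell: "on" if mostly foreground
    tiles = mask.reshape(480 // BLOCK, BLOCK, 640 // BLOCK, BLOCK)
    return tiles.mean(axis=(1, 3)) > 0.5      # (30, 40) boolean silhouette
```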

The visualizer demo (see video below) was created as part of Maker Faire Hyderabad, which is happening over the weekend. The expo is the city’s first Maker Faire and is set to feature over 200 maker exhibits across multiple disciplines. It’s always great to see maker communities beyond those closest to you geographically, so hopefully we’ll see many more people like [Samarth] taking part in events like this in the future.


Microsoft Kinect Episode IV: A New Hope

The history of the Microsoft Kinect has been that of a technological marvel in search of the right market niche. Coming out of Microsoft’s Build 2018 developer conference, we learn Kinect is making another run. This time it’s taking on the Internet of Things mantle as Project Kinect for Azure.

Kinect was revolutionary in making a quality depth camera system available at a consumer price point. The first- and second-generation Kinects were peripherals for Microsoft’s Xbox gaming consoles. They wowed the world with possibilities and, thanks in large part to an open source driver bounty spearheaded by Adafruit, found an appreciative audience in robotics, interactive art, and other hacking communities. Sadly, that novelty never translated into lasting success in the core gaming market, and the Kinect as a gaming peripheral was eventually discontinued.

For its third generation, Kinect retreated from gaming and found a role in Microsoft’s HoloLens AR headset, running “backwards”: tracking the user’s environment instead of the user’s movement. The high cost of a HoloLens put it out of reach of most people, but as a head-mounted, battery-powered device, it pushed Kinect technology to shrink in both physical size and power consumption.

This upcoming fourth generation takes advantage of that evolution, and the launch picture is worth a thousand words all on its own: instead of a slick end-user commercial product, we see a populated PCB awaiting integration. The quoted power draw of 225–950 mW is high by modern battery-powered device standards, but undeniably a huge reduction from previous generations’ household AC power requirement.

Microsoft’s announcement heavily emphasized how this module will work with their cloud services, but we hope it can be persuaded to run independently of Microsoft’s cloud, just as its predecessors could run independently of the game consoles. This will be a big factor in adoption by our community, second only to the obvious consideration of price.

[via Engadget]

Rejecting Microsoft’s Phaseout of the Kinect

You might not be aware unless you’re up on the latest gaming hardware, but Microsoft is trying to kill the Kinect. While the Xbox One famously included it as a mandatory pack-in accessory at launch (a requirement later abandoned to get the cost down), the latest versions of the console don’t even have the proprietary port to plug it in. For a while Microsoft offered an adapter that let you plug it into one of the console’s USB ports, but now even that has been discontinued. Owners of the latest Xbox One consoles who still want to use the Kinect are left to hunt for an adapter on eBay, where prices have naturally skyrocketed.

Recently [Eagle115] decided to open up his Kinect and see if he couldn’t figure out a way to hook it up to his new Xbox One. The port on the Kinect is a female USB 3.0 Type-B connector, but the device requires 12 V to operate. The official Kinect adapter took the form of a separate AC adapter and a “tap” that provided the Kinect with 12 V over USB, so he reasoned he could pop open the device and supply power directly to the pads on the PCB.

[Eagle115] bought a 12 V wall adapter and a USB 3.0 Type-B cable and got to work. Once the Kinect was popped open, he found that he needed to supply power on pin 10 (which is helpfully labeled on the PCB). There’s just enough room to snake the cable from the AC adapter through the same hole in the case where the USB cable connects.

With the Kinect getting 12 V from the AC adapter, the Xbox has no problem detecting it, just as if you were using the official adapter. At least for now, Microsoft hasn’t removed support for the Kinect from the Xbox’s operating system.

The Kinect has always been extremely popular with hackers (it even has its own category here on Hackaday), so it’s definitely sad to see Microsoft walking away from the product. The community will no doubt continue pulling off awesome hacks with it, but it’s looking increasingly likely we won’t be getting a next-generation Kinect.

[via /r/DIY]

Kinect and Raspberry Pi Add Focus Pulling to DSLR

Prosumer DSLRs have been a boon to the democratization of digital media. Gear that once commanded professional prices is now available to those on more modest budgets. Not only has this unleashed a torrent of online content, it has also started a wave of camera hacks and accessories, like this automatic focus puller based on a Kinect and a Raspberry Pi.

For [Tom Piessens], the Canon EOS 5D has been a solid platform but suffers from a problem. The narrow depth of field possible with DSLRs makes it difficult to keep focus on subjects that are moving relative to the camera, making follow-focus scenes like this classic hard to reproduce. Aiming for something better than the stock autofocus, [Tom] grafted a Kinect sensor and a stepper-motor actuator onto a Raspberry Pi and used the Kinect’s depth map to drive the lens’s focus ring. Parts are laser-cut, including a nice enclosure for the Pi and display that makes the whole thing reasonably portable. The video below shows the focus staying locked on a selected region of interest. It seems the rig only tracks movement along one axis; we’d love to see the system expanded to follow a designated object no matter where it moves in the frame.
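The basic control loop is easy to picture. Here’s a rough Python sketch of the idea, assuming the libfreenect bindings for the Kinect (including their millimeter depth format) and a step/direction stepper driver on two Raspberry Pi GPIO pins; the pin numbers, region of interest, and depth-to-steps mapping are all hypothetical rather than taken from [Tom]’s build:

```python
import time

import freenect            # libfreenect Python bindings
import numpy as np
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 20, 21                 # hypothetical BCM pin assignments
STEPS_PER_MM = 4                           # hypothetical focus-ring gearing
ROI = (slice(200, 280), slice(280, 360))   # depth-map region to keep in focus

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

def roi_depth_mm():
    # Assumes the bindings expose libfreenect's millimeter depth format
    depth, _ = freenect.sync_get_depth(format=freenect.DEPTH_MM)
    patch = depth[ROI]
    valid = patch[patch > 0]               # zero means "no reading"
    return int(np.median(valid)) if valid.size else None

current_mm = None
while True:
    target = roi_depth_mm()
    if target is not None and current_mm is not None:
        steps = (target - current_mm) * STEPS_PER_MM
        GPIO.output(DIR_PIN, steps > 0)    # direction follows the depth change
        for _ in range(abs(steps)):        # pulse the driver toward focus
            GPIO.output(STEP_PIN, GPIO.HIGH)
            time.sleep(0.001)
            GPIO.output(STEP_PIN, GPIO.LOW)
            time.sleep(0.001)
    if target is not None:
        current_mm = target
    time.sleep(0.05)
```

Tracking the median depth of a fixed region is what limits a rig like this to one axis of motion; following an arbitrary object around the frame would mean moving the ROI with some kind of object tracker.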

If you’re in need of a follow-focus rig but don’t have a geared lens, check out these 3D-printed lens gears. They’d be a great complement to this backwoods focus-puller.


Using RealSense Cameras With OS X and Linux

The original Microsoft Kinect was a revolution in computer vision. For less than one hundred dollars, the Kinect gave everyone a webcam with a depth sensor. If you’re doing anything with robots, 3D scanning, or any other task where a computer needs to know where it is in 3D space, it’s awesome. These depth-mapping cameras have improved over the years, with the latest and most capable hardware being Intel’s RealSense 3D camera.

Despite the RealSense being a very capable depth camera, official support for Linux and OS X doesn’t exist. Researchers, roboticists, and IoT developers are slightly miffed about this, and it seems like Intel doesn’t care about people using their hardware on platforms that aren’t Windows.

Now, finally, that’s changed. A few developers have taken it upon themselves to build a cross-platform library for the F200, SR300, and R200 Intel RealSense depth cameras.

The librealsense library features proper RealSense camera support for Linux, OS X, and Windows and provides all the functionality of the official Intel SDK. This functionality includes native depth, color, and infrared streams, synthetic streams for rectified images, calibration information, and the most interesting feature: multi-camera capture.
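librealsense itself is a C++ library, but to give a feel for how simple capture is, here’s a minimal sketch using pyrealsense, a community Python binding around librealsense; treat the exact API here as an assumption rather than gospel:

```python
import pyrealsense as pyrs   # community binding around librealsense

with pyrs.Service() as serv:       # wraps the librealsense context
    with serv.Device() as dev:     # first connected RealSense camera
        for _ in range(60):        # grab a couple seconds of frames
            dev.wait_for_frames()  # block until a synchronized frame set
            color = dev.color      # (480, 640, 3) uint8 color image
            depth = dev.depth * dev.depth_scale  # depth converted to meters
```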

The hardware required to use the RealSense camera is somewhat lightweight – any recent laptop should be able to capture depth images with a RealSense camera. The camera itself requires USB 3, though, so you won’t be building a 3D scanner with a RealSense camera and a Raspberry Pi quite yet. Still, it’s the latest advancement for giving robots 3D vision and building cheap, portable 3D scanners.

Polarizing 3D Scanner Gives Amazing Results

What if you could take a cheap 3D sensor like a Kinect and increase its depth resolution by three orders of magnitude? The Kinect is great, of course, but it has limited resolution. To get around this, MIT researchers are using polarized light measurements to deduce 3D form.

The Fresnel equations describe how a surface’s shape changes the polarization of light reflecting off it, and the researchers use the received polarization to infer that shape. The polarizing sensor is nothing more than a DSLR camera and a polarizing filter, and scanning resolution is down to 300 microns.

The problem with the Fresnel equations is that they’re ambiguous: a single polarization measurement doesn’t uniquely identify the shape. The novel work here is using information from depth sensors like the Kinect to select among the alternatives.
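For a sense of where the measurement and the ambiguity come from, here’s the standard shape-from-polarization formulation (a general sketch, not equations lifted from the MIT paper). Rotating the filter through several angles and fitting a sinusoid to each pixel’s intensity yields the degree and angle of polarization:

```latex
% Intensity as the polarizer rotates through angle \phi_{pol}:
I(\phi_{pol}) = \frac{I_{\max} + I_{\min}}{2}
              + \frac{I_{\max} - I_{\min}}{2}\cos\!\big(2(\phi_{pol} - \varphi)\big)

% The degree of polarization \rho constrains the surface zenith angle:
\rho = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}

% The phase \varphi pins down the surface azimuth only up to \varphi or
% \varphi + \pi; that is the ambiguity a coarse Kinect depth map resolves.
```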


3D Scanning Entire Rooms with a Kinect

Almost by definition, the coolest technology and bleeding-edge research is locked away in universities. While this is great for post-docs and their grant-writing abilities, it’s not the best system for people who want to use this technology. A few years ago, and many times since then, we’ve seen research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper release has been held up by licenses and a general IP-rights rigamarole. Now the source for this technology, Kintinuous and ElasticFusion, is available on Github, free for everyone to use (non-commercially).

We’ve seen Kintinuous a few times before – first in 2012, when the possibilities for mapping large areas with a Kinect were shown off, then in an improvement that mapped a 300-meter-long path through a building. With the introduction of the Oculus Rift, inhabiting these virtual scanned spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements. A very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.

You can check out a few videos of ElasticFusion and Kintinuous below.
