New Kinect Sensor Switches Focus From Gamers To Developers

Microsoft’s Kinect may not have found success as a gaming peripheral, but Microsoft recognized that a depth sensor is too cool to leave for dead, and development continued even after the Xbox gaming peripheral was discontinued. This week the latest iteration emerged in the form of the Azure Kinect DK. This is a developer’s kit focused on exploring new applications for the technology, not a gaming peripheral we have to hack before we can use it in our own projects.

Packaged into a peripheral that plugs into a PC via USB-C, it is more than the core depth sensor module announced last year but less than a full consumer product. Browsing its 10-page specification (PDF), with comparisons to the second-generation Kinect sensor bar, we can see how this technology has evolved. Physical size, weight, and power consumption have all dropped. Auxiliary capabilities have improved: the microphone array has been expanded, the IMU adds a gyroscope alongside the accelerometer, and the RGB camera has been upgraded to 4K resolution.

But the star of the show is a new continuous-wave time-of-flight depth sensor, presented at the 2018 IEEE ISSCC conference. (Full text requires IEEE membership, but a digest form is available via ResearchGate.) Among its many advancements, we expect the biggest impact to be its field of view. The default of 75 x 65 degrees is already better than its predecessors (64 x 45 for the first-generation Kinect, 70 x 60 for the second), but there is also an option to trade resolution for coverage by switching to a wide-angle mode covering 120 x 120 degrees. That is significantly wider than other depth cameras like Intel’s RealSense D400 series or Occipital’s Structure sensor.
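
Judging from the newly published Sensor SDK documentation, switching between those modes is just a device configuration choice. Here is a minimal sketch in C using the SDK’s documented mode names and resolutions; the `depth_config` helper is our own invention, not part of the SDK:

```c
#include <k4a/k4a.h>
#include <stdbool.h>

// Hypothetical helper (not part of the SDK): choose a depth mode.
// wide == true trades per-pixel resolution for the 120 x 120 degree view.
static k4a_device_configuration_t depth_config(bool wide)
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = wide
        ? K4A_DEPTH_MODE_WFOV_2X2BINNED   // 120 x 120 degrees at 512 x 512
        : K4A_DEPTH_MODE_NFOV_UNBINNED;   //  75 x  65 degrees at 640 x 576
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    return config;
}
```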

Another interesting feature is built-in synchronization. Many projects using multiple Kinect sensors ran into problems because the units interfered with each other. People hacked around the problem, of course, but now they don’t have to: commodity 3.5 mm jacks allow multiple Azure Kinect DK units to be daisy-chained together so they play nicely and take turns.
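
In SDK terms, one unit in the chain acts as master and the rest run as subordinates, each delayed slightly so the time-of-flight laser pulses don’t collide. A sketch of how that might look, using the documented configuration fields; the `chain_config` helper and the 160 µs stagger are our own assumptions:

```c
#include <k4a/k4a.h>
#include <stdbool.h>
#include <stdint.h>

// Hypothetical helper: configure one unit in a daisy chain. The unit
// with a cable only in its "sync out" jack drives the chain; everyone
// else listens on "sync in". device_index is this unit's position in
// the chain (0 = master), and the 160 us stagger is our guess at a
// safe offset to keep the IR laser pulses from overlapping.
static k4a_device_configuration_t chain_config(k4a_device_t device,
                                               uint32_t device_index)
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    bool sync_in = false, sync_out = false;
    k4a_device_get_sync_jack(device, &sync_in, &sync_out);

    if (sync_out && !sync_in) {
        config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;
    } else {
        config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
        config.subordinate_delay_off_master_usec = 160 * device_index;
    }
    return config;
}
```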

From its name, we were worried this product would require Microsoft’s Azure cloud service in some way and be crippled without it. Based on information released so far, developers have access to all the same data streams as with previous sensors. The Azure tie-in takes the form of optional SDKs that make it easier to do things like upload data for processing by Azure cloud-based recognition services.
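
So pulling frames off the hardware should look much like it always has: open the device, start the cameras, and read captures locally, with no network in sight. A minimal sketch, assuming the Sensor SDK behaves as documented:

```c
#include <k4a/k4a.h>
#include <stdio.h>

// A minimal local capture: open the device, start the cameras, grab
// one depth frame. No cloud account or network connection required.
int main(void)
{
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device))) {
        fprintf(stderr, "No Azure Kinect DK found\n");
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    if (K4A_FAILED(k4a_device_start_cameras(device, &config))) {
        k4a_device_close(device);
        return 1;
    }

    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED) {
        k4a_image_t depth = k4a_capture_get_depth_image(capture);
        printf("depth frame: %d x %d pixels\n",
               k4a_image_get_width_pixels(depth),
               k4a_image_get_height_pixels(depth));
        k4a_image_release(depth);
        k4a_capture_release(capture);
    }

    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```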

And finally, the Azure Kinect DK’s price tag of $399 is significantly higher than that of a Kinect game peripheral, but this is a low-volume product for developers. Perhaps high-volume consumer products built on this technology will cost less, but that remains to be seen. In the meantime, there are alternative tools for solving similar problems. For example, if you are building your own AR headset, you might use Intel’s latest RealSense camera for vision-based inside-out motion tracking.

Flying Human Head Lands Just In Time For Halloween

We love the fall here at Hackaday. The nights are cooler, the leaves are changing, and our tip line starts lighting up with some of the craziest things we’ve ever seen. Something about terrifying children of all ages just really speaks to the hacker mindset. That sounds bad, but we’re sure there’s a positive message in there someplace if you care to look hard enough.

Today’s abomination is a truly horrifying human head quadcopter, which exists for literally no other reason than to freak people out. We love it. Created by [Josh] and a few friends, the “HeadOCopter” is built around a meticulously detailed 3D print of his own head. This thing is so purpose-built that they didn’t even put landing gear on it: there’s no point sitting on the ground when you’re in the business of terrorizing people from above.

Sure, you could do this project with a cheap plastic skull, but there’s no way it would have the same effect. [Josh] created this monstrosity by scanning his own head with the Microsoft Kinect, cleaning the model up in ZBrush, adding in mounts for the hardware, and 3D printing the result. After some smoothing and filling, the head got passed off to artist [Lisa Svingos] for the final painting. He even thought to include an FPV camera where one of his eyes should be, giving a whole new meaning to the term “first-person view.”

As for the quadcopter hardware itself, it uses a BrainFPV RADIX flight controller (get it?) and 12×5 props on SunnySky V3508 motors with 30A BLHeli ESCs. Measuring 1 meter (3.2 feet) from motor to motor, it’s an impressive piece of hardware in itself, head or no head.

This project reminds us of the flying ghost we saw years back, but we have to admit, this raises the bar pretty high. We’re almost afraid to see what comes next.


Microsoft Kinect Episode IV: A New Hope

The history of the Microsoft Kinect has been that of a technological marvel in search of the perfect market niche. Coming out of Microsoft’s Build 2018 developer conference, we learn that Kinect is making another run. This time it’s taking on the Internet of Things mantle as Project Kinect for Azure.

Kinect was revolutionary in making a quality depth camera system available at a consumer price point. The first- and second-generation Kinect were peripherals for Microsoft’s Xbox gaming consoles. They wowed the world with possibilities and, thanks in large part to an open source driver bounty spearheaded by Adafruit, Kinect found an appreciative audience in robotics, interactive art, and other hacking communities. Sadly, its novelty never translated to great success in its core gaming market, and Kinect as a gaming peripheral was eventually discontinued.

For its third generation, Kinect retreated from gaming and found a role in Microsoft’s HoloLens AR headset, running “backwards”: tracking the user’s environment instead of the user’s movement. The high cost of a HoloLens put it out of reach of most people, but as a head-mounted, battery-powered device, it pushed Kinect technology to shrink in physical size and power consumption.

This upcoming fourth generation takes advantage of that evolution, and the launch picture is worth a thousand words all on its own: instead of a slick end-user commercial product, we see a populated PCB awaiting integration. The quoted power draw of 225–950 mW is high by modern battery-powered device standards but undeniably a huge reduction from previous generations’ household AC power requirements.

Microsoft’s announcement heavily emphasized how this module will work with their cloud services, but we hope it can be persuaded to run independently of Microsoft’s cloud, just as its predecessors could run independently of game consoles. This will be a big factor in adoption by our community, second only to the obvious consideration of price.

[via Engadget]

Rejecting Microsoft’s Phaseout Of The Kinect

You might not be aware unless you’re up on the latest gaming hardware, but Microsoft is trying to kill the Kinect. While the Xbox One famously included it as a mandatory pack-in accessory at launch (this was later abandoned to get the cost down), the latest versions of the system don’t even have the proprietary port to plug it in. For a while Microsoft was offering an adapter that would let you plug it into one of the console’s USB ports, but now even that has been discontinued. Owners of the latest Xbox One consoles who still want to use the Kinect are left to find an adapter on eBay, where the prices have naturally skyrocketed.

Recently [Eagle115] decided to open up his Kinect and see if he couldn’t figure out a way to hook it up to his new Xbox One. The port on the Kinect is a USB 3.0 B female, but it requires 12V to operate. The official Kinect adapter took the form of a separate AC adapter and a “tap” that provided the Kinect with 12V over USB, so he reasoned he could pop open the device and provide power directly to the pads on the PCB.

[Eagle115] bought a 12V wall adapter and a USB 3.0 B cable and got to work. Once the Kinect was popped open, he found that he needed to supply power on pin 10 (which is helpfully labeled on the PCB). There’s just enough room to snake the cable from the AC adapter through the same hole in the case where the USB cable connects.

With the Kinect getting 12V from the AC adapter, the Xbox has no problem detecting it as if you were using the official adapter. At least for now, they haven’t removed support for the Kinect in the Xbox’s operating system.

The Kinect has always been extremely popular with hackers (it even has its own category here on Hackaday), so it’s definitely sad to see that Microsoft is walking away from the product. The community will no doubt continue pulling off awesome hacks with it, but it’s looking increasingly likely we won’t be getting a next-generation Kinect.

[via /r/DIY]

Using RealSense Cameras With OS X And Linux

The original Microsoft Kinect was a revolution in computer vision. For less than one hundred dollars, the Kinect gave everyone a webcam with a depth sensor. If you’re doing anything with robots, 3D scanning, or any other project where a computer needs to know where it is in 3D space, it’s awesome. These depth-mapping cameras have improved over the years, with the latest and most capable hardware being Intel’s RealSense 3D camera.

Despite being a very capable depth camera, the RealSense has no official support for Linux or OS X. Researchers, roboticists, and IoT developers are slightly miffed about this, and it seems like Intel doesn’t care about people using its hardware on platforms that aren’t Windows.

Now, finally, that’s changed. A few developers have taken it upon themselves to build a cross-platform library for the F200, SR300, and R200 Intel RealSense depth cameras.

The librealsense library features proper RealSense camera support for Linux, OS X, and Windows and provides all the functionality of the official Intel SDK. This functionality includes native depth, color, and infrared streams, synthetic streams for rectified images, calibration information, and the most interesting feature: multi-camera capture.
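
Getting a depth frame out of it takes only a handful of calls. Here’s a minimal sketch in C, modeled on the library’s own tutorial code; per-call error checking is trimmed for brevity, so treat it as a starting point rather than production code:

```c
#include <librealsense/rs.h>
#include <stdio.h>
#include <stdint.h>

// Grab a single depth frame from the first attached RealSense camera.
int main(void)
{
    rs_error * e = NULL;
    rs_context * ctx = rs_create_context(RS_API_VERSION, &e);
    if (e || rs_get_device_count(ctx, &e) == 0) {
        fprintf(stderr, "No RealSense camera found\n");
        return 1;
    }

    // Open the first camera and ask for a 640 x 480 Z16 depth stream.
    rs_device * dev = rs_get_device(ctx, 0, &e);
    rs_enable_stream(dev, RS_STREAM_DEPTH, 640, 480, RS_FORMAT_Z16, 30, &e);
    rs_start_device(dev, &e);

    // Block until a coherent frame set arrives, then read the depth map.
    rs_wait_for_frames(dev, &e);
    const uint16_t * depth =
        (const uint16_t *)rs_get_frame_data(dev, RS_STREAM_DEPTH, &e);
    printf("center pixel depth: %u (raw Z16 units)\n",
           depth[240 * 640 + 320]);

    rs_stop_device(dev, &e);
    rs_delete_context(ctx, &e);
    return 0;
}
```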

The hardware required is somewhat lightweight – any recent laptop should be able to capture depth images from a RealSense camera. The camera itself requires USB 3, though, so you won’t be building a 3D scanner with a RealSense and a Raspberry Pi quite yet. Still, it’s the latest advancement for giving robots 3D vision and building cheap, portable 3D scanners.

Polarizing 3D Scanner Gives Amazing Results

What if you could take a cheap 3D sensor like a Kinect and increase its effectiveness by three orders of magnitude? The Kinect is great, of course, but it does have limited resolution. To augment it, MIT researchers are using polarized light measurements to deduce 3D forms.

The Fresnel equations describe how the shape of an object changes the polarization of the light it reflects, and the researchers use the received polarization to infer the shape. The polarizing sensor is nothing more than a DSLR camera with a polarizing filter, and scanning resolution is down to 300 microns.

The problem with the Fresnel equations is that they are ambiguous: a single polarization measurement doesn’t uniquely identify the shape. The novel work here is to use information from depth sensors like the Kinect to select among the alternatives.
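
To sketch the standard shape-from-polarization setup: as the filter rotates, each pixel’s brightness traces a sinusoid whose phase encodes the surface azimuth and whose contrast (the degree of polarization) ties back, via the Fresnel equations, to the surface tilt. Writing φ for the surface azimuth angle:

```latex
% Pixel intensity as the polarizer is rotated to angle \phi_{pol}:
I(\phi_{pol}) = \frac{I_{max} + I_{min}}{2}
              + \frac{I_{max} - I_{min}}{2}\,\cos\!\bigl(2(\phi_{pol} - \varphi)\bigr)

% Degree of polarization, related to surface tilt by the Fresnel equations:
\rho = \frac{I_{max} - I_{min}}{I_{max} + I_{min}}
```

Because the cosine repeats every half turn, φ and φ + 180° fit the measurements equally well; that is exactly the ambiguity a coarse Kinect depth map can break.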


3D Scanning Entire Rooms With A Kinect

Almost by definition, the coolest technology and bleeding-edge research is locked away in universities. While this is great for post-docs and their grant-writing abilities, it’s not the best system for people who want to use this technology. A few years ago, and many times since then, we’ve seen research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper distribution has been held up by licenses and general IP-rights rigamarole. Now the source for this technology, Kintinuous and ElasticFusion, is available on GitHub, free for everyone to use (non-commercially).

We’ve seen Kintinuous a few times before – first in 2012, when the possibilities for mapping large areas with a Kinect were shown off, then in an improvement that mapped a 300-meter-long path through a building. With the introduction of the Oculus Rift, inhabiting these virtual scanned spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements: a very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.

You can check out a few videos of ElasticFusion and Kintinuous below.
