Microsoft’s Kinect may not have found success as a gaming peripheral, but recognizing that a depth sensor is too cool to leave for dead, Microsoft continued development even after the Xbox gaming peripherals were discontinued. This week the latest iteration emerged in the form of the Azure Kinect DK. This is a developer kit focused on exploring new applications for the technology, not a gaming peripheral we have to hack before we can use it in our own projects.
Packaged into a peripheral that plugs into a PC via USB-C, it is more than the core depth sensor module announced last year but less than a full consumer product. Browsing its 10-page specification (PDF), with comparisons to the second-generation Kinect sensor bar, we can see how this technology has evolved. Physical size and weight have dropped, as has power consumption. Auxiliary capabilities have improved: an expanded microphone array, an IMU with a gyroscope in addition to an accelerometer, and an RGB camera upgraded to 4K resolution.
But the star of the show is a new continuous-wave time-of-flight depth sensor, presented at the 2018 IEEE ISSCC conference. (Full text requires IEEE membership, but a digest form is available via ResearchGate.) Among its many advancements, we expect the biggest impact to come from its field of view. The default of 75 x 65 degrees is already better than its predecessors (64 x 45 for the first-generation Kinect, 70 x 60 for the second), but there is also an option to trade resolution for coverage by switching to a wide-angle mode of 120 x 120 degrees. That is significantly wider than other depth cameras like Intel’s RealSense D400 series or Occipital’s Structure.
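To get a feel for what those field-of-view numbers mean in practice, here is a small sketch computing how much linear scene width each mode covers at a given distance, using the simple pinhole relation width = 2 · d · tan(FOV/2). The distances and the pinhole assumption are ours for illustration, not from the spec sheet:

```python
import math

def coverage(fov_deg: float, distance_m: float) -> float:
    """Linear extent (in meters) spanned by a field of view at a given
    distance, assuming a simple pinhole model: 2 * d * tan(fov / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# Horizontal coverage at 2 m for each mode mentioned above
for name, h_fov in [("Kinect v1", 64), ("Kinect v2", 70),
                    ("Azure Kinect default", 75), ("Azure Kinect wide", 120)]:
    print(f"{name:22s} {coverage(h_fov, 2.0):.2f} m")
```

At 2 m the 120-degree mode sees more than twice the width of the 75-degree default, which is why wide-angle mode matters for room-scale applications.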
Another interesting feature is built-in synchronization. Many projects using multiple Kinect sensors ran into problems because the sensors interfered with each other. People hacked around the problem, of course, but now they don’t have to: commodity 3.5 mm jacks allow multiple Azure Kinect DK units to be daisy-chained together so they play nicely and take turns.
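The "take turns" idea amounts to staggering each subordinate device's depth capture by a small delay after the master's trigger, so their time-of-flight illumination pulses never overlap. Here is a minimal sketch of that scheduling; the 160 µs step and the master/subordinate framing are our assumptions for illustration, not the SDK's actual API:

```python
# Sketch: stagger depth captures across daisy-chained devices so their
# time-of-flight illumination pulses don't overlap. The 160 us step is an
# assumed illumination window, not a value taken from the SDK documentation.

def subordinate_delays_us(num_devices: int, step_us: int = 160) -> list:
    """Per-device capture delay relative to the master's sync pulse.
    Device 0 is the master and fires at t=0; each subordinate waits
    one more step so no two lasers are active at the same time."""
    return [i * step_us for i in range(num_devices)]

print(subordinate_delays_us(4))  # master plus three subordinates
```

One master plus three subordinates would thus fire at 0, 160, 320, and 480 µs into each frame period, with the shared frame rate unchanged as long as all delays fit inside one frame interval.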
From its name, we were worried this product would require Microsoft’s Azure cloud service in some way and be crippled without it. Based on information released so far, it appears developers have access to all the same data streams as with previous sensors. The Azure tie-in takes the form of optional SDKs that make it easier to do things like upload data for processing by Azure cloud-based recognition services.
And finally, the Azure Kinect DK’s price tag of $399 is significantly higher than a Kinect game peripheral, but it is a low-volume product for developers. Perhaps high-volume consumer products built on this technology will cost less, but that remains to be seen. In the meantime, you have alternative tools for solving similar problems. For example, if you are building your own AR headset, you might use Intel’s latest RealSense camera for vision-based inside-out motion tracking.
31 thoughts on “New Kinect Sensor Switch Focus From Gamers To Developers”
The Sensor SDK has a Win32 C API with preview support for Linux… Oh dear!
Also, have people seen the prices for using Azure cloud services? Last time I looked it was something like $300 per month versus $3 per hour on AWS. Considering one training routine only takes 3 hours on AWS… WTF?
Azure is usually slightly cheaper. https://www.cloudhealthtech.com/blog/azure-vs-aws-pricing or google for more comparisons
“Microsoft’s Kinect may not have found success as a gaming peripheral, but recognizing that a depth sensor is too cool to leave for dead, development continued even after Xbox gaming peripherals were discontinued.”
Those other applications are probably why they continued development. It’s all a bit of a niche, but a profitable niche, especially if it leads to bigger things.
It’s not that much of a niche. Leap Motion and the iPhone X use the same tech for face unlock and face tracking: an IR dot projector with a camera tracking the dots. Fun fact: the original Kinect projects ~50,000 dots and tracks a whole room. The iPhone X projects ~30,000 to track just a person’s face.
“System Requirements: Windows® 10 PC with 7th Generation Intel® CoreTM i3 Processor (Dual Core 2.4 GHz with HD620 GPU or faster), USB 3.0 Port, 4 GB RAM. Not available on Windows 10 in S mode.”
Pitching this as a sensor and development tool aimed at engineers for industrial computer vision embedded applications and then insisting you are required to use Windows is ridiculous.
From the specification PDF (link in article): section “3.1 Supported operating systems and architectures” includes “Linux Ubuntu 18.04 (x64) with OpenGLv4.4 or later GPU driver”
Like the other Kinects before it, this hardware is a clusterfuck. First, some genius thought it was a good idea to put the color and depth cameras on USB 3, and the microphones and IMU on USB 2. This COMPLETELY screws over the use of fiber-based USB extenders, which are a requirement for the vast majority of artists and scientists using the Kinect.
Second, it only works with TWO different USB 3.x chipsets: TI and Renesas. It just doesn’t work with any other USB controller. Period.
If you ask me, the real reason why the Kinect failed is because Microsoft’s engineers just don’t get it. People don’t want to spend hundreds of dollars on a device that’s only going to work with a tiny subset of the computing hardware and software on the market.
Anyone know if this will work with AMD? I had the second gen kinect and found out it only worked with Intel USB3 controllers (https://support.xbox.com/en-US/xbox-on-windows/accessories/kinect-for-windows-v2-known-issues#e73780d4545543179d40ffcdab52e1c5).
All I want to do is be able to scan someone and make a 3D model of them. Doesn’t even have to be a kinect.
Very. @besenyeim: send an e-mail to firstname.lastname@example.org so you get credit for it.
I guess you don’t mean credit as cash. :-)
Seriously, I don’t think I deserve credit for posting a YouTube link. The first version of this Prusa video popped up in my suggestions months ago. Reading Dustin’s problem just recalled the memory, because I thought at the time that it could be used for this.
You know what? Here’s a business idea:
1. build a camera rig, to automate capture
2. build a hw/sw toolset to automate the conversion
3. gather skills and practices to fine tune the 3d models
4. use 3d printing services to print the models
5. get a buttload of cash from people who want small statues of their children, pets, and other loved ones.
If anyone gets that money doing this, remember, I accept donations. :-D
Dammit. I’ve been fighting with this for days. Is this why?? Argh. Well thanks anyway, you saved me a few additional nights of headache. I wonder if I can just buy a USB 3 card and plug that in.
Yes you can–I’m using a Kinect v2 connected through a USB 3.0 PCIe card with a Renesas USB chip and it works perfectly.
Glad I saved you some pain. I spent around $200 in a weekend trying different sensors and stuff before I found out. Luckily, I was able to return most of the stuff to Amazon.
Calling marketing drivel on this one. The Intel RealSense T265 camera has a 165-degree FOV, yet this ad only compares against the Intel RealSense D200. And at $199, the T265 is about half the price of this Microsoft product.
Make that the Intel RealSense D400, not D200.
Calling apples vs. oranges on this one. The Intel RealSense T265 is a motion tracking camera, not a depth camera. The very first item on their FAQ is “Is this a depth camera?” and their answer starts with “Intel® RealSense™ T265 is not a depth camera.”
The Intel RealSense D400 series are depth cameras, field of view comparisons against the Azure Kinect DK are thus valid apples vs. apples comparisons.
$399 is really the only thing here that I needed to know.
From Ebay’s completed listings it looks like they can be had for about 1/10 that.
And if that were not true, if it had to be $399 or nothing
well, if my interest was developing open source / hobbyist hacks (and it would be) I would never pay that much. If I worked down my project list all the way to Kinect and the only option was a $399 piece of kit my project would be to try to design my own open source Kinect replacement with a goal of costing much less.
And if my interest was more commercial? Does anybody really think a commercial Kinect comeback is going to be the foundation of the next corporate empire? Scratch that, is it even going to support a mom and pop shop through to retirement? Past next year?
If anybody wants to produce a Kinect-like object, they need to do it at a hobbyist price level, because that is their only real market for the foreseeable future. Sure, some hobbyist might someday come up with the killer app, but you have to get it to the hobbyists first, and even then I wouldn’t bet my R&D budget on a Kinect clone being anything more than a hobbyist toy.
In other words they have to get it down to Raspi prices.
Are you saying you’d make your own Kinect? No way. If you don’t understand how powerful this piece of hardware is, it’s not for you. $400 is not a bad price for a professional-level depth sensor if it’s able to generate a robust point cloud out of the box. At such a high resolution, and if processing can be done onboard, you can build a very intense SLAM algorithm and deploy it at a relatively low computational cost. The D435 is good, but the quality of the onboard D4 ASIC and the SDK they give you makes a lot of post-processing necessary before you have a clean point cloud, and a structured-light sensor isn’t the best outdoors. This RGB-D sensor is gonna give you lidar-quality depth info in a 3D environment for a fraction of what the Velodyne Puck costs. This is gonna be amazing for robotics if it executes at the level it is projected to.
Sadly, I don’t think this works outdoors. While researching this post I found mention of an ambient light ceiling of 2500 lux,[*] above which the sensor starts having problems reading back its signals for ToF calculation. This is good enough for most indoor situations, but not outdoors in full sunlight.
[*] citation needed. I knew I saw it, because I went and looked up lux level range for outdoor use, but I can’t find the source right this second.
Yeah, you’re right about it being bad for outdoor usage. I must have been misleading with that; I was just mentioning flaws of the D435, but they exist with all the sensors I’ve researched. The only sensor I’ve seen be relatively decent in full bright daylight is, surprisingly, the Structure sensor, but even that has its issues. They have made progress in using ICP algorithms to make up for the bad quality, but it seems to require a bunch of collected frames. I’m looking into this problem for a project, so hopefully I can find some kind of workaround, but I have noticed, at least with the D435, that its outdoor capability at short range isn’t too bad. The max range I’m using it for is 2.5 m, but yeah, I figured the new Kinect will have the same issues. I still cannot wait to try this out. I dunno how the Azure online computation affects it, but the demo of this thing looks so beasty.
As I’ve said in the article, it’s a low-volume product aimed at professional developers and aspiring entrepreneurs, whose challenge is to build a successful business case to increase production volume and drop cost. If interested hobbyists want to spend money now, they are welcome to join the fun, but the DK is not for everyone the way the Kinect gaming peripheral tried to be.
Saying “It’s too expensive for me today” is totally valid, we can wait and hope prices drop in the future.
Saying “The whole thing will fail because they’re not catering to me today” is a stretch.
Hmm, interesting. I bet hooking up a few of these could make for a comparatively inexpensive motion capture studio.
It’s trivial to capture skeletal motion with Kinect/Kinect v2 and this new version should be the same.
Anyone know the max range?
The Azure Kinect DK materials don’t mention anything about depth measurement details. Since it’s going to provide 1024×1024 px depth video at 15 fps, it’ll be great for capturing a detailed point cloud map for 3D scanning. But I need to know the minimum and maximum distances it can be used at for depth capture, in both indoor and outdoor environments. This camera has a $399 price tag. There is another camera from Stereolabs, the ZED stereo camera, with depth perception up to 20 m as mentioned on their website. The ZED uses advances in stereo-based algorithms, while the new Kinect uses a ToF depth sensor, so I guess the ZED camera will be better for long-distance outdoor depth measurement. Anyone with more detailed info about my query, please let me know.
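Whatever the range limits turn out to be, going from a 1024×1024 depth frame to a point cloud is the standard pinhole back-projection. Here is a minimal sketch; the intrinsics (fx, fy, cx, cy) below are made-up placeholder values for illustration, not the sensor’s real calibration:

```python
def unproject(u: int, v: int, depth_m: float,
              fx: float, fy: float, cx: float, cy: float):
    """Back-project one depth pixel (u, v) to a 3D point in camera space
    using a pinhole model. fx, fy are focal lengths in pixels and
    (cx, cy) is the principal point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics for a 1024x1024 depth frame (NOT real calibration):
fx = fy = 504.0
cx = cy = 512.0
print(unproject(512, 512, 2.0, fx, fy, cx, cy))  # center pixel -> (0.0, 0.0, 2.0)
```

Applying this to every valid pixel of a frame yields up to ~1M points; in practice you would read the real intrinsics from the device’s calibration data rather than hard-coding them.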