Microsoft Kinect Episode IV: A New Hope

The history of Microsoft Kinect has been that of a technological marvel in search of the perfect market niche. Coming out of Microsoft’s Build 2018 developer conference, we learn Kinect is making another run. This time it’s taking on the Internet of Things mantle as Project Kinect for Azure.

Kinect was revolutionary in making a quality depth camera system available at a consumer price point. The first- and second-generation Kinects were peripherals for Microsoft’s Xbox gaming consoles. They wowed the world with possibilities and, thanks in large part to an open source driver bounty spearheaded by Adafruit, Kinect found an appreciative audience in robotics, interactive art, and other hacking communities. Sadly, its novelty never translated into great success in its core gaming market, and Kinect was eventually discontinued as a gaming peripheral.

For its third generation, Kinect retreated from gaming and found a role in Microsoft’s HoloLens AR headset, running “backwards”: tracking the user’s environment instead of the user’s movements. The high cost of a HoloLens put it out of reach of most people, but as a head-mounted, battery-powered device, it pushed Kinect technology to shrink in physical size and power consumption.

This upcoming fourth generation takes advantage of that evolution, and the launch picture is worth a thousand words all on its own: instead of a slick end-user commercial product, we see a populated PCB awaiting integration. The quoted power draw of 225–950 mW is high by modern battery-powered device standards, but it is undeniably a huge reduction from previous generations’ household AC power requirement.

Microsoft’s announcement heavily emphasized how this module will work with their cloud services, but we hope it can be persuaded to run independently of Microsoft’s cloud, just as its predecessors could run independently of game consoles. This will be a big factor for adoption by our community, second only to the obvious consideration of price.

[via Engadget]

31 thoughts on “Microsoft Kinect Episode IV: A New Hope”

      1. I have never felt cheated by Microsoft hardware (mice and keyboards). Kinect seemed to be liked by many, so as long as the Microsoft software marketing “redacted” haven’t got their “redacted” hands on it, you may find it to be a nice toy. Here’s hoping.

      2. I got rid of Microsoft years ago, and will mostly get rid of Google soon (LineageOS without GAPPS), thanks to the Xiaomi Redmi Note 4 Qualcomm 3GB 32GB Global Version Snapdragon 625….

      3. Lately they’ve been making a lot of awful fecalware, but they still have some promising bits here and there. I keep their products on a short leash, but my toolset wouldn’t be complete without a Windows machine as well as a Mac and multiple Linux distros.

  1. I find this interesting. I’d love to see a bunch of these strapped to a drone and flown autonomously through abandoned mines. A lot less risky than exploring them yourself, but it still is able to give a sense of what is actually underground. Even if your flight time was < 20 minutes, I bet you could still cover quite a bit of ground. Subsequent flights could use the maps generated by prior flights to avoid exploring areas that have already been visited.

    1. A Kinect is kinda overkill for exploring a mine. Things like Structure from Motion will perform adequately in situations like that. A Kinect’s claim to fame is quickly detecting fairly fine details in low light. To explore a mine, it’d be a lot easier to just put a bright light on the drone.

  2. “This will be a big factor for adoption by our community, second only to the obvious consideration of price.”

    I’d say price should be first since that’s what’s been repeatedly killing it.

    1. The nice thing about the xbox version was that a bunch of used ones suddenly appeared for very cheap for us to play with. A bunch of gamers had their mommies buy them up and then got bored of them. It’s a good system.

  3. Cloud processing the environment is great. It can capture details of where you live, what furniture you have, what brands you purchase, who lives with you and other details that can be sold to advertisers. Tracing static users in a static position doesn’t sell ad space.

    Selling the fact that the user lives in the basement, has a Samsung TV, 2 cats, drinks Jolt Cola, owns a PS4 and an XBOX, 3 bongs, etc, sells the adspace.

  4. It’s behind a paywall, but this should be the article describing the ToF imaging sensor itself.

    I really hope there will be more low-level access to the sensor functions (i.e. control over the scheduling of the different frames, their modulation frequency/pattern, …), but it seems aimed more at data mining, so they won’t have much use for non-depth-camera applications; I suspect they won’t expose these facets…

  5. Have to hack this for standalone use if needed. Amazing the “cloud” constraints for surveillance devices, like everyone had a bankrupt fiber optics company implement the internet in their community. Most have the worst.

    This will be interesting to use for “ghost hunting” and more-so directed energy weapons detection as I’m sure that is what the ghost hunters are really observing.
