New Kinect Sensor Switches Focus From Gamers To Developers

Microsoft’s Kinect may not have found success as a gaming peripheral, but recognizing that a depth sensor is too cool to leave for dead, development continued even after the Xbox gaming peripherals were discontinued. This week the latest iteration emerged in the form of the Azure Kinect DK. This is a developer’s kit focused on exploring new applications for the technology, not a gaming peripheral we had to hack before we could use it in our own projects.

Packaged into a peripheral that plugs into a PC via USB-C, it is more than the core depth sensor module announced last year but less than a full consumer product. Browsing its 10-page specification (PDF), with comparisons to the second generation Kinect sensor bar, we see how this technology has evolved. Physical size and weight have dropped, as has power consumption. Auxiliary capabilities have improved as well: the microphone array has been expanded, the IMU gains a gyro in addition to the accelerometer, and the RGB camera has been upgraded to 4K resolution.

But the star of the show is a new continuous-wave time-of-flight depth sensor, presented at the 2018 IEEE ISSCC conference. (Full text requires IEEE membership, but a digest form is available via ResearchGate.) Among its many advancements, we expect the biggest impact to be its field of view. The default of 75 x 65 degrees is already better than its predecessors (64 x 45 for the first generation Kinect, 70 x 60 for the second), but there is an option to trade resolution for coverage by switching to a wide-angle mode of 120 x 120 degrees. That's significantly wider than other depth cameras like Intel’s RealSense D400 series or Occipital’s Structure.
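If the Sensor SDK Microsoft has described for the kit works the way its documentation suggests, choosing between the narrow and wide field-of-view modes should be a one-line configuration change. Here's a minimal sketch, assuming the k4a C API and its depth-mode enums (check the exact names against whatever SDK version you install):

```cpp
// Minimal sketch: open an Azure Kinect DK and grab one wide-FOV depth frame.
// Assumes the Azure Kinect Sensor SDK (k4a) is installed; link with -lk4a.
#include <k4a/k4a.h>
#include <cstdio>

int main() {
    k4a_device_t device = nullptr;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device))) {
        std::fprintf(stderr, "No Azure Kinect DK found\n");
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    // Trade resolution for coverage: the wide 120 x 120 degree mode (2x2 binned).
    // Swap in K4A_DEPTH_MODE_NFOV_UNBINNED for the narrower, higher-resolution view.
    config.depth_mode = K4A_DEPTH_MODE_WFOV_2X2BINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    if (K4A_SUCCEEDED(k4a_device_start_cameras(device, &config))) {
        k4a_capture_t capture = nullptr;
        if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED) {
            k4a_image_t depth = k4a_capture_get_depth_image(capture);
            if (depth) {
                std::printf("Depth frame: %d x %d\n",
                            k4a_image_get_width_pixels(depth),
                            k4a_image_get_height_pixels(depth));
                k4a_image_release(depth);
            }
            k4a_capture_release(capture);
        }
        k4a_device_stop_cameras(device);
    }
    k4a_device_close(device);
    return 0;
}
```

The binned wide mode keeps the frame rate up at the cost of per-pixel resolution, which is exactly the trade-off described above.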

Another interesting feature is built-in synchronization. Many projects using multiple Kinect sensors ran into problems because the sensors interfered with each other. People hacked around the problem, of course, but now they don’t have to: commodity 3.5 mm jacks allow multiple Azure Kinect DK units to be daisy-chained together so they play nicely and take turns.
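In software, the daisy chain boils down to declaring one camera the master and the rest subordinates, each with a small capture delay so their infrared illuminators take turns. Here's a rough sketch using the wired-sync fields in the Azure Kinect Sensor SDK; treat the exact field and enum names, and the 160 µs stagger, as assumptions to verify against the SDK docs:

```cpp
// Sketch: wired sync for two daisy-chained Azure Kinect DK units, with the
// master's 3.5 mm sync-out wired into the subordinate's sync-in.
// Error checking is omitted; the field/enum names follow the Sensor SDK.
#include <k4a/k4a.h>
#include <cstdint>

static k4a_device_configuration_t make_config(bool is_master, uint32_t delay_usec) {
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    // The sync pulse is driven off the color camera, so keep it running.
    config.color_format = K4A_IMAGE_FORMAT_COLOR_MJPG;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.wired_sync_mode = is_master ? K4A_WIRED_SYNC_MODE_MASTER
                                       : K4A_WIRED_SYNC_MODE_SUBORDINATE;
    // Stagger each subordinate's depth capture so the IR illuminators fire
    // at different times instead of blinding each other.
    config.subordinate_delay_off_master_usec = is_master ? 0 : delay_usec;
    return config;
}

int main() {
    // Which index is physically the master depends on your wiring.
    k4a_device_t master = nullptr, sub = nullptr;
    k4a_device_open(0, &master);
    k4a_device_open(1, &sub);

    k4a_device_configuration_t sub_cfg    = make_config(false, 160); // ~160 us stagger
    k4a_device_configuration_t master_cfg = make_config(true, 0);

    // Subordinates start first and wait for the master's pulse.
    k4a_device_start_cameras(sub, &sub_cfg);
    k4a_device_start_cameras(master, &master_cfg);

    // ... capture from both devices here ...

    k4a_device_stop_cameras(master);
    k4a_device_stop_cameras(sub);
    k4a_device_close(master);
    k4a_device_close(sub);
    return 0;
}
```

Adding a third or fourth camera is just more of the same, with a larger delay per subordinate.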

From its name, we were worried this product would require Microsoft’s Azure cloud service in some way and be crippled without it. Based on information released so far, it appears developers have access to all the same data streams as previous sensors. The Azure tie-in takes the form of optional SDKs that make it easier to do things like upload data for processing by Azure cloud-based recognition services.

And finally, the Azure Kinect DK’s price tag of $399 is significantly higher than that of a Kinect game peripheral, but this is a low-volume product for developers. Perhaps high-volume consumer products built on this technology will cost less, but that remains to be seen. In the meantime, there are alternative tools for solving similar problems. For example, if you are building your own AR headset, you might use Intel’s latest RealSense camera for vision-based inside-out motion tracking.

Microsoft Kinect Episode IV: A New Hope

The history of Microsoft Kinect has been that of a technological marvel in search of the perfect market niche. Coming out of Microsoft’s Build 2018 developer conference, we learn that Kinect is making another run. This time it’s taking on the Internet of Things mantle as Project Kinect for Azure.

Kinect was revolutionary in making a quality depth camera system available at a consumer price point. The first and second generation Kinect were peripherals for Microsoft’s Xbox gaming consoles. They wowed the world with possibilities and, thanks in large part to an open source driver bounty spearheaded by Adafruit, Kinect found an appreciative audience in robotics, interactive art, and other hacking communities. Sadly its novelty never translated to great success in its core gaming market and Kinect as a gaming peripheral was eventually discontinued.

For its third generation, Kinect retreated from gaming and found a role in Microsoft’s HoloLens AR headset, running “backwards”: tracking the user’s environment instead of the user’s movement. The high cost of a HoloLens put it out of reach of most people, but as a head-mounted, battery-powered device, it pushed Kinect technology to shrink in physical size and power consumption.

This upcoming fourth generation takes advantage of that evolution and the launch picture is worth a thousand words all on its own: instead of a slick end-user commercial product, we see a populated PCB awaiting integration. The quoted power draw of 225-950mW is high by modern battery-powered device standards but undeniably a huge reduction from previous generations’ household AC power requirement.

Microsoft’s announcement heavily emphasized how this module will work with their cloud services, but we hope it can be persuaded to run independently from Microsoft’s cloud just as its predecessors could run independent of game consoles. This will be a big factor for adoption by our community, second only to the obvious consideration of price.

[via Engadget]

Intel’s Vision for Single Board Computers is to Have Better Vision

At the Bay Area Maker Faire last weekend, Intel was showing off a couple of sexy newcomers in the Single Board Computer (SBC) market. It’s easy to get trapped into thinking that SBCs are all about simple boards with a double-digit price tag like the Raspberry Pi. How can you compete with a $35 computer that has a huge market share and a gigantic community? You compete by appealing to a crowd not satisfied with these entry-level SBCs, and for that Intel appears to be targeting a much higher-end audience that needs computer vision along with the speed and horsepower to do something meaningful with it.

I caught up with Intel’s “Maker Czar”, Jay Melican, at Maker Faire Bay Area last weekend. A year ago, it was a Nintendo Power Glove controlled quadcopter that caught my eye. This year I only had eyes for the two new computing modules on offer, the Joule and the Euclid. They both focus on connecting powerful processors to high-resolution cameras and using a full-blown Linux operating system for the image processing. But it feels like the Joule is meant more for your average hardware hacker, and the Euclid for software engineers who are pointing their skills at robots but don’t want to get bogged down in the first principles of hardware. Before you rage about this in the comments, let me explain.


Using RealSense Cameras With OS X and Linux

The original Microsoft Kinect was a revolution in computer vision. For less than one hundred dollars, the Kinect gave everyone a webcam with a depth sensor. If you’re doing anything with robots, 3D scanning, or anything else where a computer needs to know where it is in 3D space, it’s awesome. These depth-mapping cameras have improved over the years, with the latest and most capable hardware being Intel’s RealSense 3D camera.

Despite the RealSense being a very capable depth camera, official support for Linux and OS X doesn’t exist. Researchers, roboticists, and IoT developers are slightly miffed about this, and it seems like Intel doesn’t care about people using its hardware on platforms that aren’t Windows.

Now, finally, that’s changed. A few developers have taken it upon themselves to build a cross-platform library for the F200, SR300, and R200 Intel RealSense depth cameras.

The librealsense library features proper RealSense camera support for Linux, OS X, and Windows and provides all the functionality of the official Intel SDK. This functionality includes native depth, color, and infrared streams, synthetic streams for rectified images, calibration information, and the most interesting feature: multi-camera capture.
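To give a feel for the API, here is a minimal depth capture against the original librealsense (the cross-platform library discussed here, not the later librealsense2). The calls are written from memory of its C++ header, so consider it a sketch rather than copy-paste-ready code:

```cpp
// Sketch: grab a depth frame from an F200/SR300/R200 using librealsense (v1).
// Build with something like: g++ demo.cpp -lrealsense
#include <librealsense/rs.hpp>
#include <cstdio>
#include <cstdint>

int main() {
    rs::context ctx;
    if (ctx.get_device_count() == 0) {
        std::fprintf(stderr, "No RealSense camera detected\n");
        return 1;
    }

    rs::device * dev = ctx.get_device(0);
    dev->enable_stream(rs::stream::depth, 640, 480, rs::format::z16, 30);
    dev->start();

    // Let auto-exposure settle before trusting the data.
    for (int i = 0; i < 30; ++i) dev->wait_for_frames();

    const uint16_t * depth =
        reinterpret_cast<const uint16_t *>(dev->get_frame_data(rs::stream::depth));
    float scale = dev->get_depth_scale();  // meters per depth unit

    // Report the distance to whatever is in the middle of the frame.
    int cx = 640 / 2, cy = 480 / 2;
    std::printf("Center pixel is %.3f m away\n", depth[cy * 640 + cx] * scale);

    dev->stop();
    return 0;
}
```

Multi-camera capture is the same pattern repeated over the other devices the context reports.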

The hardware required to use the RealSense camera is somewhat lightweight – any recent laptop should be able to capture depth images with a RealSense camera. The camera itself requires USB 3, though, so you won’t be building a 3D scanner with a RealSense camera and a Raspberry Pi quite yet. Still, it’s the latest advancement for giving robots 3D vision and building cheap, portable 3D scanners.

Head Gesture Tracking Helps Limited Mobility Students

There is a lot of helpful technology for people with mobility issues. Even something that helps with a task most of us wouldn’t think twice about, like turning on a lamp or controlling a computer, can make a world of difference to someone who can’t move around as easily. Luckily, [Matt] has been working on using webcams and depth cameras to allow someone to do just that.

[Matt] found that webcams, compared to depth cameras (like the Kinect), tend to be less obtrusive but are limited in their ability to distinguish individual users and, of course, don’t have the same 3D capability. With either technology, though, the software implementation is similar: the camera detects head motion and controls software accordingly by emulating keystrokes. The depth cameras are a little more user-friendly and allow users to move in whichever way feels comfortable for them.
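As a rough illustration of the webcam side (this is not [Matt]'s code, and the cascade path and key mapping are made up for the example), detecting a face and turning sideways head motion into left/right keystrokes only takes a stock OpenCV face detector:

```cpp
// Sketch: track the face with a webcam and map sideways head motion to
// left/right "keystrokes" (printed here rather than injected, since key
// injection is OS-specific). Uses OpenCV's bundled Haar face detector.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::CascadeClassifier face;
    // Example path only; point this at your OpenCV data directory.
    if (!face.load("haarcascade_frontalface_default.xml")) return 1;

    cv::VideoCapture cam(0);
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces, 1.2, 4);

        if (!faces.empty()) {
            // Compare the face center against a dead zone around the frame center.
            int cx = faces[0].x + faces[0].width / 2;
            int mid = frame.cols / 2, dead = frame.cols / 10;
            if (cx < mid - dead)      std::puts("would send LEFT keystroke");
            else if (cx > mid + dead) std::puts("would send RIGHT keystroke");
            cv::rectangle(frame, faces[0], cv::Scalar(0, 255, 0), 2);
        }

        cv::imshow("head tracking", frame);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```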

This isn’t the first time something like a Kinect has been used to track motion, but for [Matt] and his work at Beaumont College it has been an important area of ongoing research. It’s especially helpful since the campus has many things on network switches (like lamps), so this software can be used to help people interact much more easily with the physical world. This project could be very useful to anyone curious about tracking motion, even if they’re not using it for mobility reasons.

AquaTop: a gaming touch display that looks like demon-possessed water

Are you ready to make a utility-sink-sized pool of water the location of your next living room game console? This demonstration is appealing, but maybe not ready for widespread adoption. AquaTop is an interactive display that combines water, a projector, and a depth camera.

The water has bath salts added to it, which turn it a milky white. This does double duty, making it a reasonably reflective surface for the projector and hiding your hands when they're below the surface. The video below shows several different games being played, but the most compelling demonstration involves individual finger tracking when your digits break the surface of the water (shown on the right above).
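The finger tracking comes down to classic depth segmentation: the milky water forms a roughly constant-depth plane, so any pixel that reads noticeably closer to the camera than that plane is a finger poking through. A toy version of the idea, independent of any particular camera SDK, might look like this:

```cpp
// Toy depth segmentation in the spirit of AquaTop: given a depth frame in
// millimeters and the known distance to the water surface, flag pixels that
// sit far enough above the surface to count as fingers breaking through.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> segment_fingers(const std::vector<uint16_t>& depth_mm,
                                     uint16_t water_plane_mm,
                                     uint16_t min_height_mm = 15) {
    std::vector<uint8_t> mask(depth_mm.size(), 0);
    for (std::size_t i = 0; i < depth_mm.size(); ++i) {
        uint16_t d = depth_mm[i];
        // 0 means "no reading"; anything at least min_height_mm closer to the
        // camera than the water surface is treated as a candidate fingertip.
        if (d != 0 && d + min_height_mm < water_plane_mm)
            mask[i] = 255;
    }
    return mask;
}

// A real system would then clean the mask up (connected components, size
// filtering) to get individual fingertip positions for the projected UI.
```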

There is also a novel feedback system. The researchers hacked some speakers so they could be submerged in the tank, and added a large speaker with LEDs on it in the same manner. When fed a 50 Hz signal, they make the surface of the pool dance.


Update: using your forearms as a UI

This image should look familiar to regular readers. It’s a concept that [Chris Harrison] has been working on for a while, and this hardware upgrade uses equipment with which we’re all familiar.

The newest rendition, named OmniTouch, uses a shoulder-mounted system for both input and output. The functionality is the same as his Skinput project, but the goal is achieved in a different way. Skinput used an arm cuff to electrically sense when and where you were touching your arm or hand; OmniTouch uses a depth camera to do the sensing. In both cases, a pico projector provides the interactive feedback.

There are a couple of really neat things about this upgrade. First, it has pretty accurate multitouch capability. Second, it allows more surfaces to be used than just your arm. In fact, it can track moving surfaces and adjust accordingly. This is shown in the clip after the break, when a printed document is edited in real time. Pretty neat stuff!
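The multitouch part is essentially the opposite test from hover detection: a fingertip only counts as touching when the depth camera sees it within a few millimeters of the surface underneath it. A simplified sketch of that comparison, against a reference depth map of the surface (which the real system keeps updated as the surface moves), could look like this; the thresholds are illustrative guesses, not values from the paper:

```cpp
// Simplified touch test in the spirit of OmniTouch: a pixel belongs to a
// touching finger when it sits above the surface (closer to the camera) by
// no more than a small contact threshold. All depths are in millimeters.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> touch_mask(const std::vector<uint16_t>& live_mm,
                                const std::vector<uint16_t>& surface_mm,
                                uint16_t contact_mm = 8,
                                uint16_t hover_mm = 30) {
    std::vector<uint8_t> mask(live_mm.size(), 0);
    for (std::size_t i = 0; i < live_mm.size() && i < surface_mm.size(); ++i) {
        if (live_mm[i] == 0 || surface_mm[i] == 0) continue;  // no reading
        uint16_t height = surface_mm[i] > live_mm[i] ? surface_mm[i] - live_mm[i] : 0;
        if (height > 0 && height <= contact_mm)
            mask[i] = 255;  // finger in contact with the surface
        else if (height > contact_mm && height <= hover_mm)
            mask[i] = 128;  // finger hovering above it
    }
    return mask;
}
```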
