Microsoft Discontinues Kinect, Again

The Kinect is a depth-sensing camera peripheral originally designed as an accessory for the Xbox gaming console, and it quickly found its way into hobbyist and research projects. After a second version, Microsoft abandoned the idea of using it as a motion sensor for gaming and discontinued it. The sensor technology lived on, however, evolving into what eventually became the Azure Kinect DK (spelling out ‘developer kit’ presumably made the name too long). Sadly, it too has now been discontinued.

The original Kinect was a pretty neat piece of hardware for the price, and a few years ago we noted that the newest version was considerably smaller and more capable. It had a depth sensor with selectable field of view for different applications, a high-resolution RGB video camera that integrated with the depth stream, and a built-in IMU and microphone array. It also leveraged machine learning for better processing and easy integration with Azure, and even provided a simple way to sync multiple units together for unified processing of a scene.

In many ways the Kinect gave us all a glimpse of the future, because at the time a depth-sensing camera with a synchronized video stream was just not a normal thing to get one’s hands on. It was also one of the first consumer hardware items to contain a microphone array, which allowed it to better record voices, localize them, and isolate them from other noise sources in a room. It led to many, many projects, and we hope there are more to come: Microsoft might not be making the hardware anymore, but it is licensing the technology to companies that want to build similar devices.

Screenshot of the demonstration video that shows the desktop being unlocked with face recognition, with a camera feed and a terminal showing how the software works.

Open-Source FaceID With RealSense

RealSense cameras have been a fascinating piece of tech from Intel — we’ve seen a number of cool applications in the hacker world, from robots to smart appliances. Unfortunately, Intel did discontinue parts of the RealSense lineup at one point, specifically the LiDAR and face-tracking models. Apparently these weren’t popular, and we hadn’t seen them in hacks either. Until now, that is. [Lina] brings us a real-world application for the RealSense face-tracking cameras: a FaceID application for Linux.

The project is as simple as it sounds: if the camera’s built-in face recognition module recognizes you, your lockscreen is unlocked. With the target being Linux, it has to tie into the Pluggable Authentication Modules (PAM) subsystem for authentication, and of course there’s a PAM module for RealSense to go with it, aptly named pam_sauron. This module is written in Zig, a modern C-like language, so it’s both a good example of how to create your own PAM integrations and a path towards doing that in a different language for once. As usual there are TODOs, like improving the UX and taking advantage of some security features RealSense cameras have, but it’s nevertheless a fun and self-sufficient application for one of the F4XX-series RealSense cameras, in case you happen to own one.
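For context, hooking a module like this into the PAM stack is typically just a line in the relevant service file. Here’s a minimal sketch of what that might look like, marking the module sufficient so a failed face check falls through to the usual password prompt; the file name and the include target are assumptions (they vary by distribution), not details taken from the project:

```
# /etc/pam.d/your-lockscreen  (illustrative; actual file name varies by distro)
auth  sufficient  pam_sauron.so
auth  include     system-auth
```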

Ever since the introduction of RealSense, we’ve seen these cameras used in robotics and 3D scanning, thanks at least in part to their Linux support. Thankfully, Intel only discontinued the less popular RealSense cameras, which didn’t affect the main RealSense lineup, and the hacker-beloved depth cameras are still available for all of our projects. Wondering about the tech behind it? Here’s a teardown of a RealSense camera module intended for laptop use.

OAK-D Depth Sensing AI Camera Gets Smaller And Lighter

The OAK-D is an open-source, full-color depth-sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and the ability to run highly complex image processing tasks entirely on-board, freeing the host from that overhead.

An example of real-time feature tracking, now in 3D thanks to integrated depth sensing.

The OAK-D Lite camera actually packs several elements into one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and on-board AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities into a unified whole.

The goal is to give embedded systems access to human-like visual perception in real time, which at its core means detecting things and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real time.

So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
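To give a flavor of what that looks like in practice, here’s a minimal sketch of a DepthAI pipeline in Python that runs an object detector on the color camera and streams the results back over USB. It follows the depthai library’s documented patterns, but the model blob path is an assumption for illustration:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeding a small on-device detection network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)   # match the network's input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # pre-compiled model (path is an assumption)
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

# Send detections back to the host over USB
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections")
    while True:
        for det in q.get().detections:
            print(det.label, f"{det.confidence:.2f}")
```

Swap in the spatial variant of the detection node and the same loop hands back x/y/z coordinates derived from the stereo pair, which is the “where it is in physical space” part of the pitch.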

There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.

Earlier this year we saw the OAK-D used in a system to visually identify weeds and estimate biomass in agriculture, and it’s exciting to see a new model being released. If you’re interested, the OAK-D Lite is available at a considerable discount during the Kickstarter campaign.

Intel RealSense D435 Depth Camera

RealSense No Longer Makes Sense For Intel

We love depth-sensing cameras and every neat hack they enabled, but this technological novelty has yet to break through to high volume commercial success. So it was sad but not surprising when CRN reported that Intel has decided to wind down their RealSense product line.

As of this writing, one of the better confirmations for this report can be found on the RealSense SDK GitHub repository README. The good news is that core depth-sensing RealSense products will continue business as usual for the foreseeable future, balanced by the bad news that some interesting offshoots (facial authentication, motion tracking) will be declared “End of Life” immediately and phased out over the next six months.

This information tells us that while those living out on the bleeding edge will have to scramble, there is no immediate crisis for everyone else, whether they be researchers, hobbyists, or product planners. But it also means there will be no future RealSense cameras, kicking off many “What’s Next?” discussions in various communities, like this thread on ROS (Robot Operating System) Discourse.

Three popular alternatives offer distinctly different tradeoffs. The “Been Around The Block” option is Occipital, with their more expensive Structure Pro sensor. The “Old Name, New Face” option is the Microsoft Azure Kinect, the latest non-gaming-focused successor to the gaming peripheral that started it all. And let’s not forget the OAK-D, the “New Kid On The Block” that started with a crowdfunding campaign and built a user community by doing things like holding contests. Each of these will appeal to a different niche, and we’ll keep our eyes open in the future. Let’s see if any of them find the success that eluded the original Kinect, Google’s Tango, and now Intel’s RealSense.

[via Engadget]

Machine-Vision Archer Makes You The Target, If You Dare

We’ll state right up front that it’s a really, really bad idea to let a robotic archer shoot an apple off of your head. You absolutely should not repeat what you’ll see in the video below, and if you do, the results are all on you.

That said, [Kamal Carter]’s build is pretty darn cool. He wisely chose to use just about the weakest bow you can get: the kind with a string that’s basically a big, floppy elastic band, shooting arrows with suction-cup tips. They’re so harmless they’re intended for children to play with, and you just know the kids are going to shoot each other the minute you turn your back, no matter what you told them. Target acquisition is the job of an Intel RealSense depth camera, which finds targets and calculates the distance to them. An aluminum extrusion frame holds the bow and adjusts its elevation, while a long leadscrew and a servo draw and release the string.

With the running gear sorted, [Kamal] turned to high school physics for the calculations, using the spring constant of the bow to determine the arrow’s initial velocity and the ballistics formula to determine the angle needed to hit the target. And hit it he does — mostly. We’re actually surprised by how many on-target shots he got. And yes, he did eventually get it to pull off the [William Tell] apple trick — although we couldn’t help but notice from his, ahem, hand posture that he wasn’t exactly filled with self-confidence about where the arrow would end up.
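For the curious, the math is simple enough to fit in a few lines. Here’s our own back-of-the-envelope sketch (not [Kamal]’s actual code) that ignores drag: the bow’s spring energy sets the launch speed, and the flat-ground range formula gives the launch angle. All the numbers are made up for illustration:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def launch_speed(k, draw, arrow_mass):
    """Spring PE (1/2 k x^2) converted to arrow KE (1/2 m v^2)."""
    return draw * math.sqrt(k / arrow_mass)

def launch_angle(v0, distance):
    """Flat-ground projectile range R = v0^2 * sin(2*theta) / g, solved for theta."""
    s = g * distance / v0**2
    if s > 1:
        raise ValueError("Target out of range for this launch speed")
    return 0.5 * math.asin(s)

# Example numbers (pure assumptions for illustration)
v0 = launch_speed(k=60.0, draw=0.3, arrow_mass=0.02)   # about 16 m/s
theta = launch_angle(v0, distance=3.0)
print(f"v0 = {v0:.1f} m/s, angle = {math.degrees(theta):.1f} degrees")
```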

[Kamal] says he drew inspiration both from [Mark Rober]’s dart-catching dartboard and [Shane Wighton]’s self-dunking basketball hoop for this build. We’d say his results put him in good standing with the skill-optional sports community.


New Part Day: Onion Tau LiDAR Camera

The Onion Tau LiDAR Camera is a small, time-of-flight (ToF) depth-sensing camera that looks and works a little like a USB webcam, but with a really big difference: frames from the Tau include 160 x 60 “pixels” of depth information as well as greyscale. This data is easily accessed via a Python API, and example scripts make it easy to get up and running quickly. The goal is to be an affordable and easy-to-use option for projects that could benefit from depth sensing.
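Getting frames out of it really is only a few lines. This sketch is based on the examples shipped with Onion’s tau-lidar-camera Python package, but treat the exact method and attribute names as assumptions and check the published samples:

```python
from TauLidarCommon.frame import FrameType
from TauLidarCamera.camera import Camera

camera = Camera.open()                 # connect to the first Tau found
camera.setModulationChannel(0)         # defaults borrowed from the examples
camera.setIntegrationTime3d(0, 1000)

frame = camera.readFrame(FrameType.DISTANCE)        # one 160 x 60 depth frame
print(f"Got {len(frame.data_depth)} depth values")  # data_depth is an assumption

camera.close()
```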

When the Tau was announced on Crowd Supply, I immediately placed a pre-order for about $180. Since then, the folks at Onion were kind enough to send me a pre-production unit, and I’ve been playing around with the device to get a sense of how it behaves and what kinds of projects it would be a good fit for. Here is what I’ve learned so far.


OpenCV And Depth Camera Spots Weeds

Using vision technology to identify weeds in agriculture is an area of active development, and a team of researchers recently shared their method of combining machine vision with depth information to identify and map weeds, with the help of OpenCV, the open-source computer vision library. Agriculture is how people get fed, and improving weed management is one of its most important challenges.

Many current efforts at weed detection and classification use fancy (and expensive) multispectral cameras, but PhenoCV-WeedCam relies primarily on an OAK-D stereo depth camera. The system is still being developed, but is somewhat further along than a proof of concept. The portable setups use a Raspberry Pi, stereo camera unit, power banks, an Android tablet for interfacing, and currently require an obedient human to move and point them.

It’s an interesting peek at the kind of hands-on work that goes into data gathering for development. Armed with loads of field data from many different environments, the system can identify grasses, broad-leaf plants, and soil in every image. This alone is useful, but depth information also allows the system to estimate overall plant density as well as try to determine the growth center of any particular plant. Knowing that a weed is present is one thing, but to eliminate it with precision — for example with a laser or mini weed whacker on a robot arm — knowing where the weed is actually growing from is an important detail.
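As a generic illustration of the idea (our own sketch, not the PhenoCV-WeedCam pipeline itself): vegetation can be segmented from soil with a simple excess-green index, and an aligned depth map then gives a crude stand-in for the growth center, such as the highest point of each plant blob as seen by a downward-facing camera. The threshold and blob-size cutoff below are assumptions:

```python
import cv2
import numpy as np

def vegetation_mask(bgr):
    """Segment plants from soil using a simple excess-green index (ExG)."""
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b
    mask = (exg > 0.1).astype(np.uint8) * 255   # threshold is an assumption
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def growth_centers(mask, depth_mm):
    """For each plant blob, take the point closest to a downward-facing
    camera (smallest valid depth) as a rough proxy for the growth center."""
    depth = depth_mm.astype(np.float32)
    centers = []
    n_labels, labels = cv2.connectedComponents(mask)
    for i in range(1, n_labels):
        blob = labels == i
        if blob.sum() < 100:                    # skip tiny specks
            continue
        d = np.where(blob & (depth > 0), depth, np.inf)  # zero depth = invalid
        y, x = np.unravel_index(np.argmin(d), d.shape)
        centers.append((int(x), int(y)))
    return centers
```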

PhenoCV-WeedCam (GitHub repository) is not yet capable of real-time analysis, but the results are promising and that’s the next step. The system currently must be carried by people, but could ultimately be attached to a robotic platform made specifically to traverse fields.