OAK-D Depth Sensing AI Camera Gets Smaller And Lighter

The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and the ability to run highly complex image processing tasks entirely on-board, freeing the host from that overhead.

[Image: An example of real-time feature tracking, now in 3D thanks to integrated depth sensing.]

The OAK-D Lite camera combines several elements in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and on-board AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities into a unified whole.
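For the curious, here is roughly what wiring the stereo pair into the on-board depth engine looks like with the depthai Python package. This is a minimal sketch; exact node names and calls vary a bit between DepthAI releases:

```python
import depthai as dai

# Build an on-device pipeline: the two mono cameras feed the
# Myriad X's stereo depth engine, and the result streams to the host
pipeline = dai.Pipeline()

left = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.create(dai.node.MonoCamera)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
left.out.link(stereo.left)
right.out.link(stereo.right)

# Send the computed depth map to the host over USB
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("depth", maxSize=4, blocking=False)
    depth_frame = q.get().getFrame()  # 16-bit depth values in millimeters
```

All of the disparity matching happens on the Myriad X; the host simply receives a ready-made depth frame.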

The goal is to give embedded systems access to human-like visual perception in real time, which at its core means detecting objects and identifying where they are in physical space. The platform does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking.
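To make “where they are in physical space” concrete: DepthAI’s spatial detection nodes fuse a detection network’s bounding boxes with the stereo depth map on-device, so each detection comes back with XYZ coordinates in millimeters. A sketch along these lines, where the model blob path is a placeholder for a compiled network and API details may differ between versions:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera provides the frames the neural network runs on
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)  # input size expected by MobileNet-SSD
cam.setInterleaved(False)

# Mono cameras feed the on-device stereo depth engine
left = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.create(dai.node.MonoCamera)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
left.out.link(stereo.left)
right.out.link(stereo.right)
# Align depth to the color camera so boxes map onto depth correctly
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

# Spatial detection fuses detections with depth to give XYZ per object
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder path to a compiled model
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            c = det.spatialCoordinates
            print(f"label {det.label}: x={c.x:.0f} y={c.y:.0f} z={c.z:.0f} mm")
```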

So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible to get started. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides common functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
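To give a feel for how little code that means in practice, something like the following is enough to stream the color camera through a pretrained MobileNet-SSD running on the device and display the results. This assumes the depthai_sdk package (installed separately from the core depthai library), whose API has shifted between releases:

```python
from depthai_sdk import OakCamera

# Connect to the camera, run a pretrained MobileNet-SSD on-device,
# and show the annotated video stream; the heavy lifting happens on
# the camera itself, so the host just displays results
with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)
    oak.visualize(nn)  # draw detections over the color stream
    oak.start(blocking=True)
```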

There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.

Earlier this year we saw the OAK-D used in a system to visually identify weeds and estimate biomass in agriculture, and it’s exciting to see a new model being released. If you’re interested, the OAK-D Lite is available at a considerable discount during the Kickstarter campaign.

OpenCV And Depth Camera Spots Weeds

Using vision technology to identify weeds in agriculture is an area of active development, and a team of researchers recently shared their method of using a combination of machine vision and depth information to identify and map weeds with the help of OpenCV, the open-source computer vision library. Agriculture is how people get fed, and improving weed management is one of its most important challenges.

Many current efforts at weed detection and classification use fancy (and expensive) multispectral cameras, but PhenoCV-WeedCam relies primarily on an OAK-D stereo depth camera. The system is still being developed, but it is somewhat further along than a proof of concept. Each portable setup uses a Raspberry Pi, a stereo camera unit, power banks, and an Android tablet for interfacing, and currently requires an obedient human to move and point it.

It’s an interesting peek at the kind of hands-on work that goes into data gathering for development. Armed with loads of field data from many different environments, the system can identify grasses, broadleaf plants, and soil in every image. This alone is useful, but depth information also allows the system to estimate overall plant density, as well as try to determine the growth center of any particular plant. Knowing that a weed is present is one thing, but to eliminate it with precision (for example, with a laser or a mini weed whacker on a robot arm), knowing where the weed is actually growing from is an important detail.
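The write-up doesn’t spell out the exact algorithm, but as a hypothetical sketch of how depth can help find a growth point: segment vegetation by color, take the median depth of the non-plant pixels as the soil plane, and pick the plant pixel closest to that plane. Everything here (the analyze_plant helper, the HSV thresholds, the assumption of a top-down, depth-aligned view) is invented for illustration and is not taken from PhenoCV-WeedCam:

```python
import cv2
import numpy as np

def analyze_plant(bgr: np.ndarray, depth_mm: np.ndarray):
    """Illustrative heuristic: segment green vegetation, estimate canopy
    density, and guess the growth center as the plant pixel whose depth
    is nearest the soil plane in an aligned top-down depth map."""
    # Simple HSV threshold for green vegetation (placeholder values)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))

    # Plant density: fraction of the frame covered by vegetation
    density = float(np.count_nonzero(mask)) / mask.size

    # Soil depth: median depth of the non-plant pixels
    soil_depth = np.median(depth_mm[mask == 0])

    # Growth center guess: the stem meets the ground roughly where a
    # plant pixel's depth comes closest to the soil plane
    plant_ys, plant_xs = np.nonzero(mask)
    if plant_xs.size == 0:
        return density, None
    diffs = np.abs(depth_mm[plant_ys, plant_xs].astype(np.int32) - int(soil_depth))
    i = int(np.argmin(diffs))
    return density, (int(plant_xs[i]), int(plant_ys[i]))
```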

PhenoCV-WeedCam (GitHub repository) is not yet capable of real-time analysis, but the results are promising and that’s the next step. The system currently must be carried by people, but could ultimately be attached to a robotic platform made specifically to traverse fields.