They say that a picture is worth a thousand words. But what is a picture, exactly? One definition would be a faithful reflection of what we see, like a photo taken with a basic camera. Our view of the natural world is confined to a narrow band of the electromagnetic spectrum, roughly 400 to 700 nanometers, so our cameras produce images within that same band.
For example, if I take a picture of a yellow flower with my phone, the image will look just about how I saw it with my own eyes. But what if we could see the flower from a different part of the electromagnetic spectrum? What if we could see below 400 nm or above 700 nm? A bee, like many other insects, can see into the ultraviolet, the part of the spectrum below 400 nm. That “yellow” flower looks drastically different to a bee than it does to us.
In this article, we’re going to explore how images can be produced to show spectral information outside of our limited visual capacity, and take a look at the multi-spectral cameras used to make them. We’ll find that while it may be true that an image is worth a thousand words, it is also true that an image taken with a hyperspectral camera can be worth hundreds of thousands, if not millions, of useful data points.
The Data Cube
Spectroscopy is the study of how light interacts with materials. Generally, light reflected or emitted by a material is passed through a prism to separate it into its spectral components, and each component is then analyzed for information. In a high-end three-CCD digital camera, each image is separated into its red, green, and blue spectral components. Each RGB component is then assigned a value for every pixel. The resulting image is more or less a reflection of what you see.
It should be possible to follow this same process but look at spectral components other than the visible RGB spectrum. And this is precisely what a hyperspectral camera does. Instead of acquiring three data points per pixel as with an RGB camera, a hyperspectral camera might acquire tens or hundreds of data points per pixel. The complete hyperspectral image is three-dimensional and is represented as a data cube. The X and Y axes of the cube represent the spatial part of the image, and the spectral information is recorded along the Z axis: a full spectrum for every pixel.
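To make the data cube concrete, here is a minimal NumPy sketch with made-up dimensions (a 100 × 100 pixel scene with 50 bands; real cameras and band counts vary). It shows the two natural slices of a cube: the spectrum at one pixel, and a single-band grayscale image.

```python
import numpy as np

# Hypothetical cube: 100 x 100 pixels, 50 spectral bands per pixel.
rows, cols, bands = 100, 100, 50
cube = np.zeros((rows, cols, bands))

# Wavelengths sampled evenly across 400-700 nm (illustrative only).
wavelengths = np.linspace(400, 700, bands)

# Fill one pixel with a fake reflectance spectrum for demonstration:
# a Gaussian bump centered on green (550 nm).
cube[42, 17, :] = np.exp(-((wavelengths - 550) / 40) ** 2)

# The spectrum at any (x, y) is a 1-D slice along the Z (spectral) axis.
spectrum = cube[42, 17, :]
print(spectrum.shape)   # (50,)

# A single band, e.g. the one nearest 550 nm, is a 2-D grayscale image.
band_index = int(np.argmin(np.abs(wavelengths - 550)))
band_image = cube[:, :, band_index]
print(band_image.shape) # (100, 100)
```

An RGB image is just the degenerate case of this structure with `bands = 3`.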
For a practical example, the National Ecological Observatory Network (NEON) uses a hyperspectral camera covering 380 to 2510 nm with a spectral resolution of five nanometers. Flown 1,000 m above the ground, its spatial resolution is about one meter per pixel, and each pixel contains 426 data points. From all of this information, different types of vegetation, moisture content, and more can be gleaned.
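The back-of-the-envelope arithmetic from those NEON figures shows where the "millions of data points" claim comes from:

```python
# Spectral range and sampling from the NEON example above.
low_nm, high_nm, step_nm = 380, 2510, 5

# Number of spectral samples per pixel at 5 nm spacing.
bands = (high_nm - low_nm) // step_nm
print(bands)        # 426

# A 1000 x 1000 pixel scene (one square kilometer at 1 m/pixel)
# then holds hundreds of millions of data points.
pixels = 1000 * 1000
data_points = pixels * bands
print(data_points)  # 426000000
```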
Hyperspectral cameras are highly useful in ecological studies, geographic monitoring, and agriculture management. A farmer can use a hyperspectral camera to see which areas of her crops need fertilizer, pesticides, or water. She can then apply what is needed to a specific area instead of the whole field, saving a lot of time and money.
Why Are You Telling Me This?
As you probably already know or have guessed, hyperspectral imaging has been around for a long time. But the hardware and software needed to create a hyperspectral image used to be prohibitively expensive, and like many technologies of the past, costs have plummeted. Now anyone with a few thousand dollars to play with can get in on the multispectral game. (Multispectral imaging is similar to hyperspectral, but captures fewer, broader bands.) And before you scoff at that price, consider the amount of valuable information you can acquire with a multispectral setup.
The cheapest multispectral camera we could find is the Parrot Sequoia, which carries a $3,500 price tag. This particular camera is geared toward the agricultural industry.
Its sensors capture green, red, and two “invisible” bands: red edge and near-infrared. It has its own GPS and other guidance hardware built in, so you don’t have to worry about keeping your drone’s altitude and speed fixed; the camera adjusts accordingly, within limits, of course. It can operate from an altitude of 30 ft to 500 ft and weighs just under 80 g, so mounting one on just about any well-made drone is doable.
The other, slightly more expensive camera is the RedEdge by MicaSense, which comes in at just north of five grand. It’s a bit heavier at 180 g and carries the same bands as the Parrot, but adds blue to the mix. At a height of 400 ft, it can resolve 8 cm per pixel. The software is cloud-based, and they have a well-done online demonstration.
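As a rough rule of thumb, ground resolution for a fixed lens and sensor scales linearly with altitude. Taking the quoted 8 cm per pixel at 400 ft as a reference point, a hypothetical helper (a simple linear model, not anything from the vendor's documentation) can estimate the resolution at other legal flight altitudes:

```python
# Reference figure quoted above: 8 cm/pixel at 400 ft.
REF_ALTITUDE_FT = 400
REF_GSD_CM = 8.0

def gsd_cm(altitude_ft):
    """Approximate ground sample distance (cm/pixel), assuming it
    scales linearly with altitude for a fixed camera."""
    return REF_GSD_CM * altitude_ft / REF_ALTITUDE_FT

print(gsd_cm(400))  # 8.0
print(gsd_cm(200))  # 4.0 -- fly lower, resolve finer detail
```

The trade-off is coverage: halving the altitude halves the pixel size but also halves the swath width, so the same field takes more passes to map.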
Hyperspectral sensing is not new. But just as 3D printing has been around for ages and is now a booming industry thanks to expiring patents, hyperspectral imaging could follow the same path as prices drop. The obvious market is small farms. But hackers have a knack for pushing tech well beyond its intended audience and stretching the boundaries of what’s possible. What could you do with a hyperspectral or multispectral camera? Or better yet, how are you going to make your own?
Header Image via Vespadrones.
Thumbnail Image via NIST.