Hackaday Prize Entry: Multispectral Imaging For A UAV

At least part of the modern agricultural revolution that is now keeping a few billion people from starving to death can be attributed to remote sensing of fields and crops. Images from Landsat and other earth imaging satellites have been used by farmers and anyone interested in agriculture policy for forty years now, and these strange, false-color pictures are an invaluable resource for keeping the world’s population fed.

The temporal resolution of these satellites is poor, however; it may be a few weeks before an area can be imaged a second time. For some uses that might be enough, but for frequent crop monitoring it isn’t.

For his Hackaday Prize entry (and his university thesis), [David] is working on attaching the same kinds of multispectral imaging payloads found on Earth sensing satellites to a UAV. Putting a remote control plane up in the air is vastly cheaper than launching a satellite, and being able to download pictures from a thumb drive is much quicker than a downlink to an Earth station.

Right now, [David] is working with a Raspberry Pi and a camera module, but this is just experimental hardware. The real challenge is in the code, and for that, he’s simulating multispectral imaging using Minecraft. Yes, it’s just a simulation, but it’s an extremely clever use of a video game as a stand-in for flying over real terrain. You can see a video of that footage separated into red, green, and blue channels below.
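The channel separation in that video is conceptually simple; as a rough illustration (not [David]’s actual code), OpenCV can split a frame of the test footage into its red, green, and blue planes. The filename here is just a placeholder.

```python
# Minimal sketch: split one frame of simulated flyover footage into its
# red, green, and blue channels with OpenCV. The filename is a placeholder.
import cv2

cap = cv2.VideoCapture("minecraft_flyover.mp4")  # hypothetical test clip
ok, frame = cap.read()
cap.release()

if ok:
    b, g, r = cv2.split(frame)              # OpenCV frames are BGR-ordered
    cv2.imwrite("channel_red.png", r)       # each channel saved as greyscale
    cv2.imwrite("channel_green.png", g)
    cv2.imwrite("channel_blue.png", b)
```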



21 thoughts on “Hackaday Prize Entry: Multispectral Imaging For A UAV”

  1. I’ve actually put some real footage through the system now (although it was taken from a panning tripod rather than a UAV), and the image at the top of this post shows the results. There are some issues at the moment: errors in tracking tend to cause a kind of anaglyph effect in the images, and the system can’t process footage with rotations in it, but I’m hoping to come up with solutions to these problems in the next few weeks.

    1. David, check out some of the stuff coming from Utah State University, USA. They have been putting multispectral imaging cameras on UAVs for many years now and have a great many publications covering their uses, applications, and various solutions.

      1. The article is perhaps a little misleading; I’m aware that there are already a fair number of small UAVs built specifically for remote sensing and multispectral imaging. My real aim with this project is to make something that is cheap and easy to produce for someone with relatively limited resources. I’m trying to move all the complexity of the system out of the hardware and into the software (which anyone can simply download and run).

        1. I’ve had a look at your project, and I’m not entirely sure what the spectral imaging architecture is — from the video it looks like right now you’re placing separate R/G/B filters over a b/w imager, but in an unusual way — placing (say) the R filter over the entire top third of the field of view, green over the middle third, and blue over the bottom third. But because the perspective is changing, and different areas become occluded as the UAV moves through the scene, I don’t see how it’s possible to use this method to accurately reconstruct a spectral image.

          The common way to avoid this problem is something conceptually similar called a “pushbroom” spectral imager, where you combine a slit and a disperser with a 2D CCD imager, such that one dimension of the imager is spatial and the second is spectral. Since the pushbroom only captures one spatial dimension, you “sweep” it through the other spatial dimension to create the image, either by placing it on a pan head or (in the case of a UAV) just flying in a straight line while capturing data. This avoids the perspective/occlusion issues with reconstruction (since you’re only ever scanning one line — say, one line horizontally across the center of your video), and it also affords many more spectral channels than using only a couple of narrow-band filters. A rough sketch of the accumulation loop follows this comment.

          You likely already know about this, but there’s some publicly available data (data cubes — two spatial dimensions, one spectral dimension) from NASA’s AVIRIS airborne spectral imager available here, in case it’s helpful to your project: http://aviris.jpl.nasa.gov/data/free_data.html
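For anyone curious what that sweep looks like in code, here is a minimal sketch (not any particular instrument’s software), assuming the sensor’s rows map to wavelength bins and its columns to the spatial axis along the slit; the input filename is a placeholder.

```python
# Rough pushbroom sketch: each frame is one (spectral x spatial) slice of the
# scene; flying forward sweeps out the second spatial dimension of the cube.
import cv2
import numpy as np

cap = cv2.VideoCapture("pushbroom_capture.mp4")  # hypothetical recording
slices = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # assume rows = wavelength bins, columns = position along the slit
    slices.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

# cube axes: (along-track position, wavelength bin, across-track position)
cube = np.stack(slices, axis=0)
```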

          1. You’ve hit on quite a serious issue here, and probably the biggest problem that I’ve run into so far. There are a few things that I’m hoping will mitigate this as the project develops. Firstly, I want to use plenty of zoom on the camera and fly reasonably high; I’m pretty sure this will help to reduce the effects of perspective. Secondly, I’d like to be a bit more selective about which rows of pixels I use to create the mosaics. At the moment I’m basically just using the bottom row of each filter region to build up my images, and I think using pixels closer to the centre of the image will help greatly (a rough sketch of this row-sampling idea follows this comment). Thirdly, I’m hoping (perhaps quite lazily) that my target scenes will be relatively flat! The terrain in the Minecraft test footage I uploaded has quite a lot of variation in height, so I don’t think it’s particularly representative of the use case I have in mind (fields of wheat, corn, or other grains), and as such it has far more perspective-based alignment issues than real data would, but I’d be interested to know if this isn’t generally the case.

            I looked into pushbroom sensors while I was doing my background reading for the project. I can definitely see why they are used on high-end systems, and I think if you have the resources they are going to be the most powerful and flexible multispectral sensors, especially with a disperser as you described. I’m really tempted to have a go at bodging something together some time to see if it can be done using something like a CD as a diffraction grating. For my dissertation I decided that the approach I took was probably the best, mainly because I had access to a lot of computer vision expertise from the staff in my department, and because it seemed like something that no one else has really tried. I accept that there may be a reason for that, but I’d really like to give it a go and see how far I can take it!
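As a concrete (and heavily simplified) illustration of the “one row per filter region” approach described a couple of comments up: sample a single row of pixels from each third of the frame and append it to a growing per-band mosaic. The region layout, the sampled rows, and the input filename are assumptions for illustration, not the project’s actual code, and no registration between rows is attempted.

```python
# Hedged sketch of building per-band mosaics from one row per filter region.
# Region layout, sampled rows, and the input file are illustrative only.
import cv2
import numpy as np

rows = {"red": [], "green": [], "blue": []}

cap = cv2.VideoCapture("uav_footage.mp4")  # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # b/w sensor behind filters
    third = grey.shape[0] // 3
    # sample a row near the middle of each filter region
    rows["red"].append(grey[third // 2])
    rows["green"].append(grey[third + third // 2])
    rows["blue"].append(grey[2 * third + third // 2])
cap.release()

# stack the sampled rows into one image per band (no registration applied)
mosaics = {band: np.stack(lines) for band, lines in rows.items()}
```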

          2. Unfortunately zoom will just increase the problem, since it will increase the perspective shifts. Aside from having a perfect spatial/height map of the scene that you’re measuring at each frame, or making unrealistic assumptions about the scene (that it’s completely flat, or completely stationary), I don’t think that there is a way for the spectra to be accurately reconstructed. It’s one of the reasons why spectral imaging architectures are designed as they are.

            On one end this idea is similar to a narrow-band camera, where a single narrow-band filter is placed over the entire field of view (or a multitude of filters on a filter wheel, though the scene must be static for that to work). On the other end it’s inching towards spectral imaging architectures that place a bunch of narrow-band filters in a Bayer-like pattern over the CMOS imager, to target specific spectral classification or concentration estimation tasks — but with such patterns the different filters have to be on neighbouring pixels to reduce the perspective shift to near zero, rather than sitting most of the imager apart. A bunch of folks have been working on the latter kind of device for the last five years or so, but there are significant technical challenges with the fab process and I haven’t seen anyone get them to market yet.

            It likely wouldn’t be too bad to put together a simple pushbroom spectral imager, especially as a student project, and you could potentially cobble one together from inexpensive surplus optics (surplusshed has a wide variety). In a pinch, Amazon also has inexpensive diffraction gratings mounted on slides for educational experiments, so you wouldn’t have to chop down a CD or deal with it being a curved grating/having the plastic material in front of the grating. Even building a basic instrument with a dozen useful spectral channels that you’re confident in would be better than spending a bunch of time on a reconstruction algorithm for your camera and not having high confidence in the results. Or maybe for your application, you could get away with two narrow-band filtered cameras mounted beside each other, if the wavelengths of the different filters allowed you to infer the agricultural measures of interest.

            best of luck!

  2. If GPS co-ords are the only tracking points you’re using in your reconstruction, you’re gonna run into issues when you get this on a moving platform. Without an IMU to refine it, your GPS isn’t accurate enough for your uses.

    1. GPS plays no role in image registration at the moment; I’m using a pure vision approach (which has its own issues). What I hope to do is cross-check the visual tracking with GPS over longer distances to try to correct the compounding error that builds up.
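A toy sketch of what that cross-check could look like, assuming both positions are already expressed in a common planar frame; the blend weight is arbitrary, and a Kalman filter would be the more principled tool for this kind of fusion.

```python
# Toy drift correction: keep the frame-to-frame visual estimate, but nudge
# the accumulated position toward the GPS fix to bleed off compounding error.
# The blend weight is arbitrary; a Kalman filter would be the usual tool.
def correct_drift(visual_pos, gps_pos, weight=0.05):
    """Pull the visual position estimate slightly toward the GPS fix."""
    x_v, y_v = visual_pos
    x_g, y_g = gps_pos
    return ((1 - weight) * x_v + weight * x_g,
            (1 - weight) * y_v + weight * y_g)
```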

      1. At some point you’re going to import this into GIS though, whether it’s as a full quad mosaic or single stacked originals. Your data layer may be internally consistent, but you’re introducing error if you hope to use this with existing LiDAR or ortho datasets. Without a co-ord system for each photo I don’t see how you can automatically register the mosaic with any degree of useful accuracy. I suppose that’s not a deal breaker for farmers, but any sort of NGO-scale work tracking vegetation would probably need that ability.
        Maybe if there were some precisely known point in at least one of the composite photos (i.e. a picture of a surveyed pin), but any transforms or attempts to make the mosaic 3D would obliterate/skew the useful data.

        1. In all honesty I haven’t given this a huge amount of thought yet, but I will. I’m not trying to make a super high-end system; I’d be very happy to finish with something adequate for rudimentary crop monitoring, perhaps giving farmers a little bit of early warning about disease in their fields so they can deal with it early enough for it (hopefully) not to be too much of a problem.

          1. It’s a cool project that has the potential to attract attention.
            As others have mentioned, university research groups are working on similar problems. I’m sure paper companies or Big Ag would like cheaper spectral data, even though they can afford dedicated LiDAR flights. The US Forest Service and a few research institutes put out software to manage this kind of data; maybe they have some thoughts on how best to integrate.
            Or perhaps I’m just adding to feature creep. What’s the use of planning for v2.0 when you’re still in the alpha stages?

  3. A thought perhaps on alternative software: have you looked into Cycling ’74’s Max/MSP or Pure Data (its open-source cousin)? Both offer real-time video processing from a camera or webcam, and from my own experimentation I think it’s entirely possible to achieve multispectral imaging with them.

  4. I’m working on something similar with a different approach, using [multiplexing instead of line scanning](http://www.khufkens.com/2015/04/24/tetrapi-a-well-characterized-multispectral-camera/). I think the multiplexer can handle fairly fast (50 ms), continuous switching of up to 4 video streams. I haven’t played with that as I don’t have the need. The only problem I see with most of these cheaper solutions is the lack of a spectral response curve for most of the sensors. I’m trying to figure this out for the standard Raspberry Pi cameras (with and without IR-cut filter) as well. More on this [here](http://www.khufkens.com/2015/04/19/raspberry-pi-camera-spectral-response-curve-intro/).

    I have a plant physiology background, and this was a natural extension of a different project of mine on [phenology monitoring](http://www.khufkens.com/2015/04/16/phenopi-low-cost-phenology-monitoring/). So I don’t necessarily intend to put it on a UAV, but I’m working on a field (mobile) version which should be light and flexible enough to handle such tasks.

    You can find some of the code I used on [my Bitbucket page](https://bitbucket.org/khufkens/) (the PhenoPi and TetraPi projects, respectively), so use whatever is needed; anyone interested in these projects can follow my blog or contact me.

    Hope it can be of use!

    1. I really like your approach. I looked at doing a multiple-camera system when I was planning the project, and now that I’ve got to the end of my dissertation I think it probably is the better way of doing things. Having said that, I am definitely enjoying trying something different and seeing how far I can push it, and whether I can get it to a stage where it is a viable technique!

      1. It’s just another way of approaching the problem, and it wouldn’t be possible without another Hackaday submission: the multiplexer I use featured on last year’s list. From a consistency point of view you are probably better off with your approach (the spectral characteristics of your sensor will be easier to quantify; I have to do this four times over to be precise). Sadly I don’t have the computer science background or the time to invest in matching the slivers of streaming footage, so I took a rather pragmatic approach to the problem, and I’m not sure how well it would work on a moving platform (it might be too slow). As I mentioned, my requirements are aimed at rather static plant science applications.

        Are you opening up the code that does the processing? It’s an interesting approach, and I would be interested to see if I can apply your method with my hardware setup (switching between cameras instead of taking pieces of the same image).

        1. Yes, absolutely, the code is all on GitHub here: https://github.com/thip/multispectral/. I have to say there are a lot of technical improvements to be made, but at the moment it’s literally in the state it was when I submitted it to my markers last week, so it’s pretty tidy and well commented (at least I hope it comes across that way to other people, or I might have a bit of a shock coming my way when I graduate in a couple of months…). If you want more details about how it works or what everything is, you can either wait for me to get around to writing it up on the project page (which may take a little while as I’m not the best at documenting things), or you can send me a message and I’d be happy to send you a copy of the report I wrote up for university.


