Set Phone To… Hyperspectral

While our eyes are miraculous little devices, they aren’t very sensitive outside of the normal old red, green, and blue spectra. The camera in your phone is far more sensitive, and scientists want to use those sensors in place of expensive hyperspectral ones. Researchers at Purdue have a cunning plan: use a calibration card.

The idea is to take a snap of the special card and use it to understand the camera’s exact response to different colors in the current lighting conditions. Once calibrated to the card, they can detect differences as small as 1.6 nanometers in light wavelengths. That’s on par with commercial hyperspectral sensors, according to the post.
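To make the calibration step concrete, here is a minimal sketch with entirely made-up patch spectra and response curves (not the paper's data or method): if the card carries more patches than you have wavelength bins, the RGB readings off the card determine the combined camera-and-illuminant response by ordinary least squares.

```python
# Hypothetical calibration sketch: recover the camera's effective
# spectral response from a card of patches with known reflectances.
# All spectra here are random stand-ins, not real measurements.
import numpy as np

rng = np.random.default_rng(0)
N = 32   # wavelength bins across the visible range
K = 96   # patches on the hypothetical card (K > N)

R = rng.random((K, N))            # known patch reflectance spectra
S_true = rng.random((3, N))       # unknown camera+illuminant response

rgb = R @ S_true.T                # K x 3: what the phone records per patch

# Overdetermined, consistent system: least squares recovers the response
S_est, *_ = np.linalg.lstsq(R, rgb, rcond=None)
print(np.allclose(S_est.T, S_true))   # True
```

Real dye spectra are far less independent than random vectors, which is part of why the actual work needs a carefully designed chart rather than any old colour card.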

You may wonder why you would care. Sensors like this are useful for medical diagnostic equipment, analysis of artwork, monitoring air quality, and more. Apparently, high-end whisky has a distinctive color profile, so you can now use your phone to tell if you are getting the cheap stuff or not.

We also imagine you might find a use for this in phone-based spectrometers. There is plenty to see in the hyperspectral world.

21 thoughts on “Set Phone To… Hyperspectral”

  1. The actual paper’s here: https://ieeexplore.ieee.org/document/11125864

    I have no idea why media outlets feel the need to hyperlink random definitions in their text, making it virtually impossible to find the original source to translate their weird analogies.

    The odd “your eyes are RGB but camera sensors are better!” comment in the article is super-strange, because, uh, it’s… not true? As in, it’s entirely backwards – camera sensors are just RGB and your eyes aren’t (since you’ve got rods as well). The difference is the sensors output data you can do math on, that’s all. They’re deconvolving the wavelength response of the sensors.

    Overall, from the paper, it looks like it’s great for emission lines, okay-ish for wideband spectral shape, and basically no chance for narrow absorption lines.

    1. Regarding “camera sensors are just RGB and your eyes aren’t (since you’ve got rods as well)” – despite the four types of cells, the first few layers of neurons in the eye’s retina, in effect, transform the signal to three channels: luminance, blue/yellow, and red/green. (Diagrams of Oklab give a good idea of the transformation.) So the fourth dimension is lost by the time you get to the retinal ganglion cells and the optic nerve. That is why we can approximate color with three channels.

      It seems to be, in effect, a regression fit on the three channels. I suspect in general, if there’s a situation with a strong prior and SNR (known monospectral or thermal output, for example), it might be good enough.

      I can imagine an ambitious science fair project to make an app that identifies metals by their flame test color. That raises the idea that you’d probably get better results from directly calibrating using exemplars and nearest-neighbor matching… But if one could use only reference spectral data with no exemplars, that would be a neat trick.
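The nearest-neighbour version of that science fair app could be as small as the sketch below; the exemplar RGB values are illustrative guesses, not measured flame colours.

```python
# Toy flame-test matcher for the science fair idea above. The exemplar
# RGB values are illustrative guesses, not measured flame colours.
import math

EXEMPLARS = {
    "sodium":    (255, 180,  40),   # yellow-orange (assumed)
    "copper":    ( 40, 200, 120),   # green-ish (assumed)
    "potassium": (190, 120, 255),   # lilac (assumed)
    "strontium": (230,  40,  40),   # red (assumed)
}

def identify(rgb):
    """Return the exemplar metal nearest to the observed RGB reading."""
    return min(EXEMPLARS, key=lambda m: math.dist(EXEMPLARS[m], rgb))

print(identify((250, 60, 50)))   # strontium
```

A real app would calibrate against exemplars photographed on the same phone, since the raw RGB values shift with sensor and white balance.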

  2. Yeah, not so much hyper in that spectrum… “hyper” by definition means beyond: so, beyond the normal spectra (as in IR => VIS => UV). Normal cameras are by design limited to visible light, because that’s what we see: they have IR cutoff filters, and the lenses take care of the UV part. So no, it’s not hyperspectral; it’s high-resolution spectral at best. Cool tech for sure, but this is not hyperspectral.

    1. “yeah not so much hyper in that spectrum”

      Hyperspectral imaging refers to spatially-resolved (as in per-pixel resolved) spectra. It’s “hyper” because the image information has 3 dimensions (intensity, x/y location) and once you add spectral information it’s now a 4-dimensional hypercube. You could also just call it multispectral imaging, but that usually implies a limited number of bands vs. a large number making it seem continuous.

      1. “You could also just call it multispectral imaging, but that usually implies a limited number of bands vs. a large number making it seem continuous.”
        That’s kind of my point: even leaving IR/UV aside, it is multispectral and not continuous. This might be pedantic, but to get a continuous spectrum you would need a continuous reference card for correction, which is infeasible. I also doubt the precision of the resulting spectral information, since it comes from a Bayer sensor, but that’s a different topic altogether. Useful, cool tech, yes, but to call this hyperspectral is a stretch.

        1. No, you can resolve finer than the number of calibration colors if you know the response of the sensor. It’s a deconvolution process.

          And yes, hyperspectral/multispectral is just convention, but they’re resolving things finely enough (at least for emission lines) to consider it hyperspectral.
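A small sketch of that deconvolution point, under the strong assumption that the source is a single monochromatic line and that the channel response curves (invented Gaussians here) are known exactly: three readings then pin the wavelength to a 1 nm grid, far finer than “three colours” suggests.

```python
# Locate a monochromatic line from three channel readings, given known
# response curves. The Gaussian responses below are invented stand-ins.
import numpy as np

wl = np.linspace(400, 700, 301)              # 1 nm wavelength grid
centers = np.array([460.0, 540.0, 610.0])    # assumed RGB response peaks
S = np.exp(-((wl[None, :] - centers[:, None]) / 40.0) ** 2)  # 3 x 301

y = np.exp(-((589.0 - centers) / 40.0) ** 2)  # readings for a 589 nm line

# For each candidate wavelength, fit the best scale and score the residual
scale = (S * y[:, None]).sum(axis=0) / (S ** 2).sum(axis=0)
resid = np.linalg.norm(S * scale - y[:, None], axis=0)
print(wl[np.argmin(resid)])   # 589.0
```

The monochromatic prior does all the work here; with a broadband or absorption-line spectrum the same three readings are badly underdetermined, which matches the paper's reported weaknesses.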

      1. Not easy with phones (which is what we are talking about) due to tiny mechanisms, and especially with ones that have OIS.
        I wonder if it was ever tried with a phone, and with one with OIS specifically. Theoretically possible, but I imagine only one in a hundred million people would have the finesse and skill to even try. Those focus and OIS coils are so small, with such thin wires and delicate attachments, I bet.
        Of course there are very old cheap phones with fixed-focus lenses.
        Or you can just sacrifice the focus system and OIS and turn it into a fixed focus camera. There are sellers on aliexpress that sell cheap laptop camera modules that have disabled focus systems and they just sell them as fixed focus after locking the assembly down with some PCB glue.

        But it’s easier to mod actioncams because they tend to have fixed-focus lenses, as your link attests.

        DiodeGoneWild on YT modded a cheap phone to go IR once, and then spied on what his cat was up to :)
        Oh and there is now some wild Chinese phone that has IR (not the FLIR kind) and a humongous battery and flashlight, for camping they suggest.
        But adding IR isn’t full spectrum of course.

  3. What’s the minimum wavelength difference the human visual system can discern? Kind of tough to measure, but I’ve seen figures of around 1-2 nanometers. Maybe we need a calibration card.

    1. The “just noticeable difference” in the CIELAB color space is 2.3 according to CIE76. I’ll leave it as an exercise to the reader to find the two closest wavelengths that achieve this ΔE at equal intensity.
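For anyone attempting the exercise: CIE76 ΔE is just the Euclidean distance between two CIELAB coordinates, so the check itself is one line.

```python
# CIE76 colour difference: Euclidean distance in CIELAB. A delta-E of
# about 2.3 is the commonly quoted just-noticeable difference.
import math

def delta_e_cie76(lab1, lab2):
    """Delta-E*ab (CIE76) between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

print(round(delta_e_cie76((50.0, 0.0, 0.0), (50.0, 2.3, 0.0)), 6))   # 2.3
```

The hard part of the exercise is the wavelength-to-Lab conversion (via the CIE colour matching functions), not the distance.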

    2. Where do you see such claims? Nobody knows yet. The CMF is tabulated in 5 nm steps; the 1 nm CMF is INTERPOLATED. This article is complete bullshit. If you go from 320 dimensions (the spectrum) down to 3 dimensions (RGB), you lose dimensionality, so metamerism can appear (same RGB response, different spectrum); you simply cannot fit a spectrum into 3 dimensions, even more so with this many unknowns. Also, you cannot make prints with a 1 nm reflectance step and a 1.6 nm FWHM; that’s already the territory of gratings, prisms, or interference filters. And it cannot distinguish pure colors. In short, a fake bullshit “paper”. I really wonder how it passed peer review. Did the reviewers actually know the subject?

  4. “The camera in your phone is far more sensitive, and scientists want to use those sensors in place of expensive hyperspectral ones.”

    Let’s play “I didn’t read the paper, but”:

    They don’t attempt to replace a hyperspectral imager with an RGB / RGBW CMOS image sensor.
    One cannot “calibrate” an RGB image sensor to somehow retrieve hyperspectral image cubes. In a compressive imaging scheme, there’s another parameter being swept in the data set (e.g. https://hackaday.com/2014/05/18/hyperspectral-imaging-with-a-dslr/ where the reconstruction of a much smaller 2D hyperspectral image is performed from geometric projections of the hyperspectral cube laid out in a 2D diffraction pattern)
    What seems to happen here is that a 2D imager is being used to obtain 1D spectral information. For this to be possible, the color reference card needs to be printed with dyes or pigments that have spectral responses that are linearly independent from other patches on the chart.
    There is no “hidden information” in the image, it’s encoded as a snapshot measurement with an array of filters in the path between the ambient light source and dedicated sensor pixels.

    What’s more, the technique would be more useful if we had RGBW / IR image sensors, or better yet, QD SWIR sensors in our smartphones. Then the same technique described in their work could help identify fake pharmaceuticals that only produce visible-range “clear” solutions.

    1. “One cannot “calibrate” an RGB image sensor to somehow retrieve hyperspectral image cubes.”

      It’s just deconvolution. It’s going to have obvious limitations when you’ve got certain spectral features (obviously absorption lines, which you can see in the paper).

      But of course you can get spectral models that reproduce the observations. It’ll be probabilistic, but that’s good enough for plenty of situations.

      1. Perhaps a clearer demo would be to use the sample surface (including food, skin…) to bounce a light source (e.g. a Xenon photography flash) into a blackened shoe box, which would hold the calibration chart and potentially another diffuser.

        That way, it would be clear that the sample isn’t actually being imaged spatially, but the chart.

    1. That tends to require additional optical elements to work. Sure you can do the old “look at a slit via AOL CD surface” trick, but even that requires building a setup, and calibrating the position of the spectral lines using calibration light sources.

      Such effort is only really justified when one can get more out of it than just a spot measurement. Linked below is one such example which manages to maintain high light throughput and a 2D image, but does not excel at spectral resolution.

      “A notch-mask and dual-prism system for snapshot spectral imaging” (open access)
      https://doi.org/10.1016/j.optlaseng.2023.107544

  5. For all those folks saying “It’s just deconvolution”, I’d like to remind them that what they are saying is akin to talking to an audio engineer who’s using 44.1 kHz sampling to reconstruct an audio signal with 20 kHz bandwidth, and trying to tell them that they could use 10 kHz sampling — “It’s just deconvolution!”

    CPS is a sparsity-based solution. It builds in assumptions about the sparsity of the spectral content. It also requires the inclusion of the colour chart within the captured image (minimally to compensate for the variability of illumination spectrum in reflection mode). It’s not simply replacing hyperspectral cameras with calibrated RGB cameras.
    “In transmission mode, data acquisition involves photographing the spectral color chart through the sample of interest. In reflection mode, placing the sample of interest alongside the spectral color chart enables recovery of the sample’s spectral hypercube without the use of a hyperspectral imaging system.”

    In transmission mode, the different colours in the colour chart can be viewed conceptually as providing a set of diverse colour filters applied to the content. In a simple spectrometer, one might sweep a sample with a narrow spectral band of illumination and record the output with a monochromatic sensor, directly providing a high resolution spectrum. With the CPS approach, each patch on the color chart has its own spectral mix, with sufficiently varied spectral content that the single RGB image when broken up into a set of patches provides sufficient information for decoding the high resolution spectrum. (They note the requirement for even illumination across the colour chart.)

    In short, you can’t derive a high resolution visible spectrum from just 3 samples using deconvolution. There is simply not enough information. You need to (a) build in some a priori knowledge of the spectrum or its characteristics, and (b) provide additional high resolution spectral information in the images (i.e. a carefully constructed colour chart). When you do that, you can get reasonably reliable sparse reconstructions of the content spectra, albeit with far more uncertainties than you’d get with a hyperspectral camera.
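A toy version of the transmission-mode argument, with synthetic spectra: three raw channels can never give a 60-bin spectrum, but photographing the sample through K spectrally independent patches yields 3K linear measurements, which is already enough for a plain least-squares inversion. (The real CPS method additionally leans on a sparsity prior, since real dye spectra are nowhere near as independent as random vectors.)

```python
# Transmission-mode toy: each card patch acts as a different colour
# filter in series with the sample, so K patches x 3 channels give 3K
# linear measurements of the sample spectrum. All spectra are synthetic.
import numpy as np

rng = np.random.default_rng(1)
N, K = 60, 40                      # wavelength bins, card patches
S = rng.random((3, N))             # camera response curves (made up)
patches = rng.random((K, N))       # patch transmission spectra (made up)

x_true = rng.random(N)             # the sample's unknown spectrum

# Forward model: one RGB triple per patch; filters multiply per-bin
A = np.vstack([S * p for p in patches])   # (3K) x N, with 3K >= N
y = A @ x_true

x_est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x_est, x_true, atol=1e-6))   # True
```

With realistic, highly correlated dye spectra the matrix A becomes badly conditioned, which is exactly where the sparsity assumption (and the resulting extra uncertainty) comes in.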
