DIY Thermal Imager Uses DIY Gaussian Blur

Under the right circumstances, Gaussian blurring can make an image seem more clearly defined. [DZL] demonstrates exactly this with a lightweight and compact Gaussian interpolation routine that makes low-resolution thermal sensor data display much better on a small OLED.

[DZL] used an MLX90640 sensor to create a DIY thermal imager with a small OLED display, but since the sensor is relatively low-resolution at 32×24, displaying the data directly looks awfully blocky. Smoothing the display with Gaussian interpolation looks really good, but it turns out that full Gaussian interpolation isn’t a trivial calculation to write on your own. Since [DZL] wanted to implement it on a microcontroller, a lightweight implementation was born. The project page walks through the details of Gaussian interpolation and the effective shortcuts that were made, so be sure to give it a look.
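
The project page has the real details, but as a rough illustration of the general idea, here is a minimal C++ sketch of Gaussian-weighted upscaling for a 32×24 frame. The output size, sigma, and 3×3 source neighbourhood are assumptions for illustration, not [DZL]’s actual routine, which takes further shortcuts to stay microcontroller-friendly.

```cpp
// Hypothetical sketch: Gaussian-weighted upscaling of a 32x24 thermal frame.
// Sizes and sigma are assumptions, not the project's actual parameters.
#include <cmath>
#include <vector>

constexpr int SRC_W = 32, SRC_H = 24;   // MLX90640 resolution
constexpr int DST_W = 128, DST_H = 96;  // assumed OLED-friendly output size
constexpr float SIGMA = 0.6f;           // width of the Gaussian in source pixels

// Upscale src (SRC_W*SRC_H floats, row-major) into dst (DST_W*DST_H floats).
void gaussianUpscale(const std::vector<float>& src, std::vector<float>& dst) {
    dst.assign(DST_W * DST_H, 0.0f);
    const float sx = float(SRC_W) / DST_W;
    const float sy = float(SRC_H) / DST_H;
    for (int oy = 0; oy < DST_H; ++oy) {
        for (int ox = 0; ox < DST_W; ++ox) {
            // Position of this output pixel in source coordinates.
            float fx = (ox + 0.5f) * sx - 0.5f;
            float fy = (oy + 0.5f) * sy - 0.5f;
            int cx = int(std::floor(fx));
            int cy = int(std::floor(fy));
            float sum = 0.0f, wsum = 0.0f;
            // Gaussian-weighted sum over a 3x3 neighbourhood of source pixels.
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int x = cx + dx, y = cy + dy;
                    if (x < 0 || x >= SRC_W || y < 0 || y >= SRC_H) continue;
                    float d2 = (x - fx) * (x - fx) + (y - fy) * (y - fy);
                    float w = std::exp(-d2 / (2.0f * SIGMA * SIGMA));
                    sum += w * src[y * SRC_W + x];
                    wsum += w;
                }
            }
            dst[oy * DST_W + ox] = (wsum > 0.0f) ? sum / wsum : 0.0f;
        }
    }
}
```

On a real microcontroller the exp() calls would likely be replaced by a small precomputed weight table, since the scale factors and kernel are fixed at compile time.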

The MLX90640 sensor also makes an appearance in the Open Thermal Camera, one of the entries for the 2019 Hackaday Prize. If you’re interested in thermal imaging, don’t miss this teardown of a thermal imaging camera.

18 thoughts on “DIY Thermal Imager Uses DIY Gaussian Blur”

  1. A 3×3 kernel is pretty fast, but if one needs a wider blur, CIC filters will work well for this. They need only addition and subtraction, no multiplication, and a third-order or higher CIC filter has a response close to Gaussian. Gaussian blur is separable, so one can filter first in the X direction and then in the Y direction, needing N+N instead of N×N calculations.
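
     For concreteness, here is a hedged C++ sketch of the repeated-box-filter idea: a running-sum box blur needs only adds and subtracts per sample, and three passes of it approximate a Gaussian, which is essentially the response a cascade of boxcar stages (as in a CIC filter) gives. The radius and 16-bit sample type are placeholders.

     ```cpp
     // Sketch only: box blur via a running sum, then repeated to approximate
     // a Gaussian. Only additions and subtractions in the inner loop.
     #include <cstdint>
     #include <vector>

     // One box-blur pass over a single row, radius r, using a running sum.
     void boxBlurRow(const std::vector<uint16_t>& in, std::vector<uint16_t>& out, int r) {
         const int n = int(in.size());
         long sum = 0;
         int count = 0;
         // Prime the window for index 0.
         for (int i = 0; i <= r && i < n; ++i) { sum += in[i]; ++count; }
         for (int i = 0; i < n; ++i) {
             out[i] = uint16_t(sum / count);
             int add = i + r + 1;           // sample entering the window
             int sub = i - r;               // sample leaving the window
             if (add < n) { sum += in[add]; ++count; }
             if (sub >= 0) { sum -= in[sub]; --count; }
         }
     }

     // Three box passes per row approximate a Gaussian; for a full 2D blur,
     // repeat the same passes down the columns (the filter is separable).
     void approxGaussianRow(std::vector<uint16_t>& row, int r) {
         std::vector<uint16_t> tmp(row.size());
         for (int pass = 0; pass < 3; ++pass) {
             boxBlurRow(row, tmp, r);
             row.swap(tmp);
         }
     }
     ```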

  2. I recently implemented a 2D Gaussian blur as a composition of two 1D Gaussian blurs, making it ~O(n) rather than ~O(n^2). I also used a look-up table for the 0…255 possible RGB values to avoid the need for multiplies when applying the kernel, but a microcontroller might be a bit too resource-constrained to allow this method with sizeable kernels, although symmetry can be taken advantage of.
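
     A rough sketch of the look-up-table trick described above might look like this; the 5-tap binomial kernel (as an integer stand-in for a Gaussian) and the table layout are assumptions rather than the commenter’s exact code.

     ```cpp
     // Sketch: precompute weight*value for every 8-bit pixel value so the
     // convolution needs only table reads and additions.
     #include <cstdint>

     // 5-tap binomial kernel (1 4 6 4 1)/16 -- a common integer Gaussian stand-in.
     constexpr int KW = 5;
     constexpr uint16_t KERNEL[KW] = {1, 4, 6, 4, 1};   // sums to 16
     static uint16_t LUT[KW][256];                      // LUT[k][v] = KERNEL[k] * v

     void buildLut() {
         for (int k = 0; k < KW; ++k)
             for (int v = 0; v < 256; ++v)
                 LUT[k][v] = uint16_t(KERNEL[k] * v);
     }

     // Horizontal pass over one row of one colour channel; clamp at the edges.
     void blurRow(const uint8_t* in, uint8_t* out, int w) {
         for (int x = 0; x < w; ++x) {
             uint32_t acc = 0;
             for (int k = 0; k < KW; ++k) {
                 int xx = x + k - KW / 2;
                 if (xx < 0) xx = 0;
                 if (xx >= w) xx = w - 1;
                 acc += LUT[k][in[xx]];   // table lookup replaces the multiply
             }
             out[x] = uint8_t(acc / 16);  // divide by the kernel sum
         }
     }
     ```

     A vertical pass over the columns with the same tables completes the separable 2D blur, and since the kernel is symmetric, only the distinct weights actually need their own tables.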

    1. If I recall correctly, God of War on PS3 used a coprocessor to do a post-processing Gaussian blur, where the horizontal operation in memory using SIMD was much faster. So they did it in several steps: a 1D horizontal pass, a complete rotate by 90 degrees, another 1D horizontal pass, and a rotate 90 degrees back. A key to this obviously was a fast rotate, but I feel like they were able to do that with hardware assist, like programmable DMA.

      1. If you start with an image h x w with pixels (x, y)

        Apply 1D kernel to columns (or rows), and output results to w x h array as pixel (y, x) = (x’, y’)

        Apply 1D kernel to columns (or rows) of this intermediate w x h array with pixels (x’, y’), and output results to h x w array pixel (y’, x’) = (x, y)

        For extra points, divide columns being done into n stripes, for distribution across n threads or processes.
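
        A compact sketch of the recipe above, using a 3-tap (1 2 1)/4 kernel as a stand-in for a real Gaussian and a plain transpose in place of a DMA-assisted rotate:

        ```cpp
        // Sketch: separable blur done as two horizontal passes separated by
        // transposes, so every pass is sequential in memory.
        #include <cstdint>
        #include <vector>

        // Horizontal (1 2 1)/4 pass over every row of a w x h image.
        void blurRows(std::vector<uint8_t>& img, int w, int h) {
            std::vector<uint8_t> row(w);
            for (int y = 0; y < h; ++y) {
                uint8_t* p = &img[y * w];
                for (int x = 0; x < w; ++x) {
                    int xl = (x > 0) ? x - 1 : 0;          // clamp at the edges
                    int xr = (x < w - 1) ? x + 1 : w - 1;
                    row[x] = uint8_t((p[xl] + 2 * p[x] + p[xr]) / 4);
                }
                for (int x = 0; x < w; ++x) p[x] = row[x];
            }
        }

        // Out-of-place transpose: pixel (x, y) of the w x h input becomes
        // pixel (y, x) of the h x w output.
        void transpose(const std::vector<uint8_t>& in, std::vector<uint8_t>& out, int w, int h) {
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    out[x * h + y] = in[y * w + x];
        }

        // Full 2D blur: horizontal pass, transpose, horizontal pass, transpose back.
        void blur2D(std::vector<uint8_t>& img, int w, int h) {
            blurRows(img, w, h);                 // 1D horizontal
            std::vector<uint8_t> t(img.size());
            transpose(img, t, w, h);             // columns become rows
            blurRows(t, h, w);                   // second 1D pass (formerly vertical)
            transpose(t, img, h, w);             // back to the original layout
        }
        ```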

  3. So the programmable refresh rate on that sensor is up to 64Hz. I wonder if MEMS mirror arrays are reflective in the far infrared? If so you could conceivably increase the resolution a fair bit through scanning.

    1. Interesting. Was also thinking about the ol’ fad of removing the IR filters from regular cameras. Was thinking it might be possible to subtract visible-light from the [nearby?] pixels to reveal only the IR part… but how? Was thinking two cameras, one lens, but there may be an answer here.

      1. You can replace the IR blocking filter with a visible blocking ( and IR passing) filter, either on the sensor or on the lens (or anywhere in between, if that can work with your lens/camera). Low cost absorbing filters (look like black glass) are available in standard lens filter sizes. Higher cost interference filters (look like mirrors) are available from scientific optics suppliers (Edmund, Thor Labs, Newport). The higher cost ones offer much better transmission and rejection, but have some other tradeoffs.

        1. That’d be far too easy… that’s what I get for reading HaD before coffee.
          But, now that I have, I take it normal ccds don’t pick up a wide range in the IR spectrum, otherwise IR/thermal cams would be more common, eh?

    2. I was looking at creating a single pixel imager in the IR at one point, and I went so far as to contact TI about their DMD devices. The mirrors will reflect fine until the wavelength gets to be on the order of the mirror dimensions. However, the problem is that the windows on the packages are not IR transparent, so the light won’t make it to the mirror anyway.

    1. Maybe because bilinear (and Gaussian to an extent) just blur the image.
      Here, this technique kind of uses weighted-neighbor scaling, so you keep some sharpness while enlarging your image.
