Seeing the unseen is one of the great things about using an infrared (IR) camera, and even the cheap-ish ones that plug into a smartphone can dramatically improve your hardware debugging game. But even fancy and expensive IR cameras have their limits, and may miss subtle temperature changes that indicate a problem. Luckily, there’s a trick that improves the thermal resolution of even the lowliest IR camera, and all it takes is a little tweak to the device under test and some simple math.
According to [Dmytro], “lock-in thermography” is so simple that his exploration of the topic was just a side quest in a larger project that delved into the innards of a Xinfrared Xtherm II T2S+ camera. The idea is to periodically modulate the heat produced by the device under test, typically by ramping the power supply voltage up and down. IR images are taken in sync with the modulation, with a sine and a cosine scaling factor applied to each pixel of each frame. The frames are averaged together over an integration period to create both in-phase and out-of-phase images, which can reveal thermal details that were previously unseen.
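As a rough sketch of the math involved (this is not [Dmytro]’s actual code; the frame-stack, frame-rate, and modulation-frequency names are illustrative), the per-pixel sine/cosine weighting and averaging might look like this in Python with NumPy:

```python
import numpy as np

def lockin_images(frames, frame_rate, mod_freq):
    """Return (in_phase, quadrature) images from an (N, H, W) frame stack."""
    n = frames.shape[0]
    t = np.arange(n) / frame_rate                 # timestamp of each frame
    w = 2 * np.pi * mod_freq
    # Scale each frame by sin/cos of the modulation phase, then average.
    in_phase   = np.tensordot(np.sin(w * t), frames, axes=1) * 2 / n
    quadrature = np.tensordot(np.cos(w * t), frames, axes=1) * 2 / n
    return in_phase, quadrature

# Synthetic check: one pixel heats in phase with a 1 Hz modulation,
# buried in noise that swamps any single frame.
rng = np.random.default_rng(0)
t = np.arange(250) / 25.0                         # 25 fps, 10 s integration
frames = rng.normal(0.0, 1.0, (250, 8, 8))
frames[:, 4, 4] += 0.5 * np.sin(2 * np.pi * 1.0 * t)
I, Q = lockin_images(frames, 25.0, 1.0)
amplitude = np.hypot(I, Q)                        # sqrt(I^2 + Q^2) per pixel
print(np.unravel_index(np.argmax(amplitude), amplitude.shape))  # hottest pixel
```

Because uncorrelated noise averages toward zero while the modulated heating accumulates, the amplitude image pulls the heated pixel well clear of the noise floor after only a few seconds of integration.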
With some primary literature in hand, [Dmytro] cobbled together some simple code to automate the entire lock-in process. His first test subject was a de-capped AD9042 ADC, with power to the chip modulated by a MOSFET attached to a Raspberry Pi Pico. Integrating the images over just ten seconds provided remarkably detailed images of the die of the chip, far more detailed than the live view. He also pointed the camera at the Pico itself, programmed it to blink the LED slowly, and was clearly able to see heating in the LED and onboard DC-DC converter.
The potential of lock-in thermography for die-level debugging is pretty exciting, especially given how accessible it seems to be. The process reminds us a little of other “seeing the unseeable” techniques, like those neat acoustic cameras that make diagnosing machine vibrations easier, or even measuring blood pressure by watching the subtle change in color of someone’s skin as the capillaries fill.
Love this content, thank you!
That’s a good technique. However, don’t be fooled by what it shows; it doesn’t necessarily match the physics. Getting better resolution doesn’t mean the image is correlated with the underlying heat emitter.
The “high resolution” image falsely pinpoints multiple areas as hotter, but those are simply hotter for a short duration (which can be completely expected when a MOSFET is switching). If you had to debug such a board, the obvious conclusion that the short is there would be wrong, since a perfectly working circuit will emit heat for short durations in normal operation. Meanwhile, the heat accumulated in a shorted trace or component isn’t visible with this technique because it ramps up slowly; it isn’t correlated with the phase of the power source but governed by the thermal physics (conduction and convection), which are orders of magnitude slower to propagate.
Moreover, components don’t really like having their power source vary while operating; they might start heating when they are just close to their brownout state (power is just enough to switch them on, but as soon as they do, the current draw makes the source voltage drop, switching the circuit off again; rinse, repeat).
The heat going into the traces can be observed in the quadrature component image. If you look at some of the images on my website, where I connected a Pi Pico to this setup, you’ll see in the quadrature image that the traces connected to a voltage divider appear to get hotter than the component itself. This occurs because the quadrature component is weighted by a cos() function relative to the modulation signal, highlighting the dynamics around the point where the power supply is turned off, allowing heat to propagate through the DUT.
If you’re looking at raw heat sources, such as in cases with higher-than-expected leakage current, then the in-phase image would be used, as it shows, as you mentioned, the raw heat output when the supply is on.
The original paper mentions the same:
“For non-destructive testing purposes, the phase (quadrature) image is often more informative than the amplitude (in-phase) one, which strongly depends on the local IR emissivity. For the detection of local heat sources in electronic devices, on the other hand, the amplitude signal is the more informative one, since it is directly related to the locally dissipated power.”
“Burn!” -Kelso
How are the results different from oversampling, like in deep-space photography?
The basic principle is that of a Lock-in amplifier (https://en.wikipedia.org/wiki/Lock-in_amplifier).
An overly simplified explanation:
The device is driven with a periodic signal (here: the power supply), in order to make the measured signal (temperature) periodic as well. Each measured sample is multiplied by sin(t * frequency), and accumulated. The desired signal will be accumulated over a large number of periods, and any noise not correlated to the drive signal will be averaged away.
What I find curious is that it seems to work with a “slowly propagating” signal such as heat transfer.
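The multiply-and-accumulate step described above can be demonstrated in a few lines of NumPy (a toy sketch; the frequency, duration, and noise level are made up for illustration):

```python
import numpy as np

# Lock-in principle: a weak signal at a known reference frequency is
# recovered from heavy noise by multiplying with a reference sinusoid
# and averaging over many periods.
rng = np.random.default_rng(1)
f_ref = 1.0                                      # reference frequency, Hz
t = np.arange(0, 600, 0.01)                      # 600 s sampled at 100 Hz
signal = 0.05 * np.sin(2 * np.pi * f_ref * t)    # weak in-phase signal
noisy = signal + rng.normal(0.0, 1.0, t.size)    # well below the noise floor

# Multiply by the reference and average: the correlated signal survives,
# uncorrelated noise averages toward zero.
recovered = 2 * np.mean(noisy * np.sin(2 * np.pi * f_ref * t))
print(f"{recovered:.3f}")                        # close to the true 0.05
```

The same idea carries over to the thermal case: heat transfer is slow, but as long as some component of the temperature response tracks the drive frequency, it accumulates coherently while everything else washes out; the slow propagation mostly shows up as a phase lag, which is exactly what the quadrature image captures.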
There’s a similar situation in particle physics. Take lots of samples to trade temporal resolution for something else and enhance a weak signal. But you need to factor in variability in the behaviour of unknowns. Things quickly become a black-box scenario, and you need to be careful with assumptions.
That probably has something to do with the low modulation frequency of 1 Hz.
In theory this could be applied to a larger scale too.
If this modulation technique was used for passive WiFi radar, the resolution for seeing through the walls of buildings could be increased considerably.
If applied from orbit, being able to see more details of say, the heat from underground magma chambers would be useful in determining the likelihood of a volcanic eruption.
Something tells me there will be a few who’ll increase the resolution to view individual memory location changes on running devices or to more effectively target power glitching processors to skip security checks.
You have to be able to modulate the source and the receiver (camera.)
I don’t see anyone being able to modulate magma chamber temperatures in the near future.
Similarly, good luck modulating the temperature of a microprocessor.
Perhaps you can, if you use the difference in density. Taking images at the same time of day, using the sun to modulate the local gravity.
That’s one valid point.
do you mean “at different times of day”?
because if it’s always the same time of day, isn’t the sun always at the same distance, and hence the local gravity stays the same?
Obviously gravity varies over time; that’s why there are ocean tides, and why there are neap tides when the sun and moon pull at right angles vs. spring tides when they are in line.
One must take into account varying coefficients of thermal conductivity. Even a slight difference in values due to material variation, or an entirely different material altogether, will yield misleading results. In short, the variable not considered here is q dot (heat transfer rate).