For years [Edward] has been building professional-grade underwater sensing nodes at prices approachable for an interested individual without a government grant. An important measurement for these nodes is temperature, and he has been on a quest to get the highest-accuracy temperature readings from whatever parts hit that sweet spot between cost and complexity. First there were traditional temperature sensor ICs, but after deploying numerous nodes [Edward] was running into the limit of their accuracy. Could he use clever code and circuitry to get better results? The short answer is yes, but the long answer is a multi-part series of posts starting in 2016 detailing [Edward]’s exploration to get there.
The first step is a thermistor, a conceptually simple device: resistance varies with temperature (seriously, how much simpler can a sensor get?). You can measure one by tapping the center of a voltage divider, the same way you’d measure any other resistance, but [Edward] had discarded this idea because the naive approach combined with his Arduino’s 10-bit ADC yielded resolution too poor to be worthwhile for his needs. But by using the right analog reference voltage and adjusting the voltage divider he could get a 20x improvement in resolution, down to 0.05°C in the relevant temperature range. This and more is the subject of the first post.
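To get a feel for why the reference and divider choices matter, here is a minimal sketch (assuming a generic 10 kΩ beta-model NTC and illustrative resistor values, not [Edward]’s actual figures) that estimates how many degrees one ADC count spans in each configuration:

```python
import math

B, R0, T0 = 3950.0, 10e3, 298.15  # assumed beta-model NTC: 10 k at 25 C

def thermistor_r(t_c):
    """NTC resistance at t_c degrees C (beta equation)."""
    return R0 * math.exp(B * (1.0 / (t_c + 273.15) - 1.0 / T0))

def degc_per_count(r_ballast, v_aref, v_supply=5.0, counts=1024, t_c=25.0):
    """Temperature span of one ADC count near t_c for a given divider."""
    def v_out(t):  # ballast on the low side of the divider
        return v_supply * r_ballast / (r_ballast + thermistor_r(t))
    dv_dt = (v_out(t_c + 0.01) - v_out(t_c - 0.01)) / 0.02  # volts per degree
    return abs((v_aref / counts) / dv_dt)

naive = degc_per_count(r_ballast=10e3, v_aref=5.0)   # divider read against Vcc
tuned = degc_per_count(r_ballast=1.2e3, v_aref=1.1)  # smaller ballast, 1.1 V internal ref
print(f"naive: {naive:.3f} degC/count, tuned: {tuned:.3f} degC/count")
```

With these particular values the tuned configuration resolves roughly 0.05 °C per count, in the same ballpark as the post’s figure, though the exact gain depends on the thermistor and the temperature range chosen.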
What comes next? Oversampling. Apparently fueled by a project featured on Hackaday back in 2015, [Edward] embarked on a journey to apply it to his thermistor problem. To quote [Edward] directly, to get “n extra bits of resolution, you need to read the ADC four to the power of n times”. Three extra bits gives about an order of magnitude better resolution. This effectively lets you resolve signals smaller than a single sample, but only if there is some jitter in the signal you’re measuring: reading the same analog line with no perturbation gives no benefit. The rest of the post deals with the process of artificially perturbing the signal, which turns out to be significantly complex, but the result is roughly 16 bit accuracy from a 10 bit ADC!
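A toy simulation makes the recipe concrete: quantize a dithered input 4^n times, sum the codes, and shift right by n. The reference voltage, noise level, and input below are assumptions for illustration only:

```python
import random

random.seed(42)

def adc10(v, vref=1.1):
    """Ideal 10-bit ADC: quantize v (volts) to a 0..1023 code."""
    return max(0, min(1023, int(round(v / vref * 1024))))

def oversample(v, extra_bits, noise_sd=1.5e-3):
    """Take 4**n dithered readings, sum them, and shift right by n
    for a (10 + n)-bit result."""
    reads = 4 ** extra_bits
    total = sum(adc10(v + random.gauss(0.0, noise_sd)) for _ in range(reads))
    return total >> extra_bits  # decimation

v_true = 0.51237                           # falls between 10-bit codes
code13 = oversample(v_true, extra_bits=3)  # 64 reads -> 13-bit code
v_est = code13 * 1.1 / (1024 << 3)
print(f"13-bit code {code13} -> {v_est:.5f} V (true {v_true} V)")
```

Setting `noise_sd` to zero makes every read return the same code, so the extra bits carry no new information, which is exactly the “no perturbation, no benefit” point above.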
What’s the upside? High-quality sensor readings from a few passives and a cheap Arduino. If that’s your jam, check out this excellent series when designing your next sensing project!
23 thoughts on “An Epic Tale Of Thermistors: Tricks For Much Better Temperature Sensing”
It’s commonly called “dithering”; it comes standard in some NI cards, for example.
Dithering is adding noise in order to make oversampling work when the signal doesn’t have enough of its own to support the required extra bit depth.
If you’re building “professional grade underwater sensing nodes”, you should not be trying to do it on the cheap.
Get a dedicated sensor that has its own thermally compensated power, and is calibrated at the factory. Maxim has many parts in this realm. A “professional grade” unit should never be using a thermistor.
Also, a “professional grade” device should not be based around an Arduino, with its internal ADC and the poor-quality bypass capacitors on the board. I did two designs where I needed high-accuracy measurements. In both cases I used LC filters on the supply for the ADC/op-amp section, quality references for the ADC, quality 12- or 20-bit delta-sigma ADCs, and even considered resistor orientation in one of those projects, because the sensor would heat up the PCB a bit and thus cause thermoelectric effects across the solder joints…
For temperature sensing I’d go with either thermocouple or PT100 resistive sensor. If I didn’t want to spend money on additional ADC, I’d use CTMU module from one of many PICs. And I’d rather program in COBOL than put Arduino in anything for “pro” market…
Thermocouples aren’t very good for high accuracy temperature sensing. They only tell you the temperature difference between two junctions, so you still need a cold-side junction compensation circuit. A good PT100 or properly characterised thermistor will outperform a thermocouple in nearly all cases. Thermocouples are best for harsh environments; temperature extremes, primarily.
Use an integrator and feed back PWM to make a cheap delta-sigma.
Then you get 9 dB per octave rather than just 3. Easy to get to more than 16 bits.
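The arithmetic behind that suggestion can be sketched in software. A hypothetical first-order modulator in accumulator form (the analog version would use an op-amp integrator with the MCU’s PWM as the feedback path) looks roughly like:

```python
def delta_sigma_bits(x, n):
    """First-order delta-sigma modulator, accumulator form: add the
    input (0..1) each step, and emit a 1 while subtracting full scale
    whenever the integrator crosses it."""
    acc, bits = 0.0, []
    for _ in range(n):
        acc += x
        if acc >= 1.0:
            bits.append(1)
            acc -= 1.0
        else:
            bits.append(0)
    return bits

bits = delta_sigma_bits(0.3721, 10_000)
estimate = sum(bits) / len(bits)
print(f"bitstream mean = {estimate:.4f}")
```

Averaging (really, low-pass filtering) the bitstream recovers the input; the noise shaping of the loop is what buys the 9 dB per octave mentioned above rather than plain oversampling’s 3 dB.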
So if you are changing the measuring devices how do you compare the meas
Respectfully, Ed’s WordPress post is a great example; projects shared in this manner are my preference. Projects posted on hackaday.io are often more difficult to sort through than some posts at instructables.com.
Measurements? I’m sure they calibrate ALL the devices the same way.
“but the result is roughly 16 bit accuracy from a 10 bit ADC!” : no way on God’s green earth.
You’re confusing accuracy and precision.
You might get 16 bits of data. That’s trivial. You might even be able to reliably distinguish a real change in the measurand in that least significant bit (though that’s highly unlikely).
But 16 bits of accuracy? No way, no how.
Many, many articles and app notes can be found to support this assertion, like https://e2e.ti.com/blogs_/archives/b/precisionhub/archive/2014/10/07/adc-accuracy-part-1-is-accuracy-different-from-resolution
And, btw: a quick and not-so-dirty way to linearize a thermistor over a decent range: use a ballast resistor about three times the thermistor value at the middle of the intended temperature range. This linearizes it to within a couple of degrees (actual) accuracy over 10-50C, for example, and tenth-degree precision with just a normal microcontroller ADC: no other components needed, no curve fitting needed, and dithering optional.
Nice method to linearise a thermistor! I have to try it out so it sticks in my mind! Thanks a lot.
It is little nuggets of information that keep me reading hack a day comments.
Correction (and more detail) to my above comment: I misremembered exactly how it went.
To linearize a thermistor with a single resistor: Wire the thermistor between the voltage reference and the ADC input, and a ballast resistor from the ADC input to ground. Use a ballast resistor of *one third* the resistance of the thermistor.
The ADC reading will be very close to linear with temperature over a reasonable range.
Tweak the ballast resistor higher for lower temperatures and/or better precision.
There’s enough noise on a typical 10 or even 8-bit micro’s ADC to reliably eke tenth-degree precision out of it by averaging a few samples.
If linearization is important, a 2nd order poly takes out most of the residual nonlinearity, and the square term is 6 orders of magnitude less than the linear term, so there’s not much nonlinearity to remove.
In practice, on a Teensy 3.2, set to 12 bit ADC resolution, with a 2nd-order polyfit, I get linearity and precision to a tenth of a degree over the 10-50C range, with a single ADC sample. Absolute accuracy is harder to measure, but it’s around a degree, even unit-to-unit: I’ve made around a dozen of these so far.
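As a sanity check on the one-third rule, a short simulation (beta-model NTC with an assumed B of 3950 and 10 k at 25 °C; all values illustrative) measures how far the divider output strays from a straight line over 10-50 °C:

```python
import math

B, R25 = 3950.0, 10e3    # assumed NTC beta model, 10 k at 25 C
R_BALLAST = R25 / 3.0    # the one-third rule from the comment above

def r_ntc(t_c):
    """NTC resistance at t_c degrees C (beta equation)."""
    return R25 * math.exp(B * (1.0 / (t_c + 273.15) - 1.0 / 298.15))

def adc_fraction(t_c):
    """Thermistor from Vref to the ADC pin, ballast from pin to ground:
    the reading is Rb / (Rb + Rt) as a fraction of full scale."""
    return R_BALLAST / (R_BALLAST + r_ntc(t_c))

# Worst deviation from a straight line through the 10 C and 50 C endpoints,
# converted back into degrees via the average slope
lo, hi = 10.0, 50.0
slope = (adc_fraction(hi) - adc_fraction(lo)) / (hi - lo)
worst = max(abs(adc_fraction(t) - (adc_fraction(lo) + slope * (t - lo))) / slope
            for t in range(10, 51))
print(f"worst-case nonlinearity: {worst:.2f} degC over {lo:.0f}-{hi:.0f} C")
```

With these assumed values the raw reading stays within a couple of degrees of linear, consistent with the claims above, and a small 2nd-order correction in firmware takes out most of what remains.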
It would be better to use a stable current source, and turn it on only when performing the measurement to avoid self-heating. Also, how can [Edward] be sure he has both precision and accuracy, if calibrating any instrument requires an instrument at least one order of magnitude more precise?
Of course, the other option is to keep the current source on all the time and compensate for the self-heating by proper calibration or similar means. I suppose it depends on how the timescale for self-heating compares to the timescale of your measurement and how often you want to measure.
Having been down this road myself, perhaps I can offer a couple useful points…
The ADC: Consider using a delta-sigma ADC. While not terribly fast, they provide many bits of resolution. For example: I’m using an MCP3551, runs off 3V or 5V, has high impedance inputs for connecting directly across a bridge, provides 22-bits of results, but requires 80msec/conversion.
What they didn’t mention in the datasheet was that the first conversion after power-up provides very poor quality results. The ADC responds correctly to commands after POR delays, but the data from the first conversion is crap. The second conversion is much more stable. I needed to figure that out as the ADC/bridge is unpowered for 15 seconds, powered up long enough to do 2 conversions (ignore the first result), then back to power-off. I needed to power-off the ADC/bridge because 1) battery powered system, and 2) thermistor self heating.
The bridge: I tried using 1% tolerance resistors, but the results were not repeatable between duplicate circuits. 0.1% tolerance resistors in the bridge means I can just build the circuit and get a result within about 1C of every other duplicate circuit. The bridge is balanced at 25C, and uses resistors the same value as the thermistor at 20 or 25C. The raw data is NOT LINEAR, but that’s taken care of in firmware.
Firmware: While the ADC produces 22-bit data, I use just the 16 msb for calculations. I also ignored the Steinhart-Hart equation. Instead, I pump the raw ADC data into a 6th-order polynomial to produce the uncalibrated temperature in degrees C. When first prototyping the circuit, I built a dozen or so copies. I subjected them to various temperatures in the range of 0C to 100C, recorded the raw data and the actual temperature. The raw data for all circuits was averaged together for each temperature. Excel was used to compute the polynomial constants (graphed data on X-axis, temperature on Y-axis, used polynomial trendline (6th order) to determine constants to at least 6 significant places). These constants are then hard-coded into the firmware for a rough calibration. Firmware pumps the ADC raw data into the polynomial, and out pops a temperature in degC. At this point, the temperature measurements are consistently within a degree or so of the actual temperature across all duplicate circuits across the temperature range of 0-100C. Outside that temperature range, measurements deviate more the further you go.
Calibration: Each sensor requires a 2 point calibration. One of those points is 37C (my desired target temperature). The other is 5-10 degrees away. These temperatures need only be approximate, but stable and measured by your reference thermometer as well as possible. Soak the sensor at these temperatures, capture the actual (thermometer) and reported (sensor) temperatures. In Excel, plot the reported temperature (X-axis) vs the actual temperature (Y-axis), add a linear trendline, capture the trendline constants (Y=mX+b form). Apply this to the uncalibrated temperature to calibrate your sensor. Repeat for each sensor.
Done well, this gets all my sensors to match to much better than 0.1C in the 25-40C range.
The next step: replace thermistors with Pt RTDs, and adjust the bridge resistors to match. Repeat the firmware rough calibration when developing the circuit/firmware. The advantages: fine calibration of each device may not be required (TBD, depending on the bridge resistors), and sensors can probably be replaced if/when damaged without requiring recalibration (TBD). If a fine calibration is required, consider connecting various precision resistors in place of the RTD instead of actual temperature soaks (Pt RTDs are that repeatable.)
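The two-point calibration step described above reduces to fitting a line through the two soak points; a minimal sketch with made-up soak data (not the commenter’s numbers) would be:

```python
def two_point_cal(ref1, raw1, ref2, raw2):
    """Fit T_actual = m * T_reported + b through two calibration soaks
    (reference-thermometer temperature vs. sensor-reported temperature)."""
    m = (ref2 - ref1) / (raw2 - raw1)
    b = ref1 - m * raw1
    return m, b

# Hypothetical soak data: one point at the 37 C target, one a few degrees away
m, b = two_point_cal(ref1=37.00, raw1=36.62, ref2=30.00, raw2=29.81)
corrected = m * 33.00 + b  # apply the correction to a later reading
print(f"m={m:.4f}, b={b:.3f}; 33.00 reported -> {corrected:.2f} C actual")
```

The same m and b are then hard-coded per sensor, exactly as the comment describes doing with an Excel linear trendline.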
Wow this is an awesome comment, thanks for writing up your process in such detail!
Your first bad conversion might be because the sinc filter, which I’m assuming it has, is slow to react to the big step change between whatever the power-on state is and the real world.
Pt RTDs are the cat’s meow, and a lot cheaper than they used to be. But why the 6th-order polynomial after a long sigma-delta? Do you need a quieter interface or shielding?
I would try measuring the voltage drop across a diode or transistor junction. That works really well, and you can treat it as linear over those temperatures, correct it in software, or even model the Ebers-Moll equation. I like the quantum temperature relation better than the resistance style. Plus you can do it with practically zero current, given the converter’s input impedance.
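As a rough illustration of the idea (the ~0.6 V at 25 °C and -2 mV/°C slope below are generic silicon-junction assumptions; a real design would calibrate both per device):

```python
# Generic silicon-junction assumptions; calibrate both numbers in practice
V25 = 0.600      # forward voltage at 25 C, volts
SLOPE = -0.0020  # roughly -2 mV per degree C

def diode_temp_c(v_forward):
    """Treat the junction's forward voltage as linear in temperature
    over a modest range, and invert it to get degrees C."""
    return 25.0 + (v_forward - V25) / SLOPE

print(diode_temp_c(0.580))  # 20 mV below V25 -> 10 degrees above 25 C
```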
That seems like a lot of work to get extra precision from something that is inherently not accurate. Thermistors are not linear, suffer from self-heating, and have manufacturing variances that need to be calibrated/compensated for.
When I first read the blurb, I was thinking to myself that over a wide range a semiconductor sensor will outperform a thermistor. However, over a very narrow range and with the right thermistor, I can see the thermistor being more sensitive. I would suspect this would be over a band of perhaps 10 degrees or less. I suspect the semiconductor sensor would also have a linear output over that range, but not as exaggerated. By carefully changing the reference voltage I can see the thermistor being much easier to use than a semiconductor sensor and an amp.
All of that being said, when you get down to shaving hairs, you have to be very careful in every aspect. Little changes in the reference voltage will make big changes in your readings. Thermal effects on parts become critical, and even the self-heating of the thermistor from the current flowing through it becomes more of an issue.
For every digit you eke out, everything gets 10x more critical. This is why, when I see people doing things like display hacking on a 6.5-digit DVM, I think it is a bad idea. You are splitting hairs at about the practical limit of hair splitting, and everything is critical.
Yep. We are currently trying to decide between thermistors & diodes for temperature sensing, as the two devices are ‘close enough’ that I can even use the same calculations on the MCU over about 30-40C.
And as I get more practice, I’m also learning about how dependent everything is on those little changes you mention. For environmental temp monitoring with our DIY loggers, there are lots of options that are good enough. But every time I think I’ve developed a ‘better method’ to measure temperature, it usually ends up being ‘about the same’ in reality due to those dependencies.