Researchers at UCLA recently developed what they are calling a thermal transistor: a solid-state device able to control the flow of heat with an electric field. This opens the door to controlling the transfer of heat in some of the same ways we are used to controlling electronics.
Heat management can be a crucial task, especially where electronics are involved. The usual way to manage heat is to draw it out with things like heat sinks. If heat isn’t radiating away fast enough, a fan can be turned on (or sped up) to meet targets. Compared to the precision and control with which modern semiconductors shuttle electrons about, the ability to actively manage heat seems lacking.
This new device can rapidly adjust the thermal conductivity of a channel based on an electric field input, which is very similar to what a transistor does for electrical conductivity. Applying an electric field modifies the strength of molecular bonds in a cage-like array of molecules, which in turn adjusts their thermal conductivity.
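As a rough sketch of what that switching buys you, here's a Fourier-conduction toy model. All numbers (conductivities, geometry, modulation ratio) are made up for illustration, not figures from the UCLA paper:

```python
# Toy model of a "thermal transistor": a conduction channel whose
# thermal conductivity k is switched by an applied gate field.
# All numbers are illustrative, not from the UCLA paper.

def heat_flow(k, area, thickness, delta_t):
    """Fourier conduction: Q = k * A * dT / L, in watts."""
    return k * area * delta_t / thickness

K_OFF = 0.1    # W/(m*K) with the field off (assumed)
K_ON = 100.0   # W/(m*K) with the field on (assumed 1000x modulation)

area = 1e-6       # 1 mm^2 channel cross-section
thickness = 1e-4  # 0.1 mm channel length
delta_t = 50.0    # 50 K across the channel

q_off = heat_flow(K_OFF, area, thickness, delta_t)
q_on = heat_flow(K_ON, area, thickness, delta_t)
print(f"off: {q_off:.2f} W, on: {q_on:.1f} W, ratio: {q_on / q_off:.0f}x")
```

The point is just that a purely electrical input changes how many watts flow for the same temperature difference, with no fans or moving parts.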
It’s still early, but this research may open the door to better control of heat within semiconductor systems. This is especially interesting considering that 3D chips have been picking up speed for years (stacking components is already a thing, it’s called Package-on-Package assembly) and the denser and deeper semiconductors get, the harder it is to passively pull heat out.
Thanks to [Jacob] for the tip!
I wonder if their new transistor works for phonons.
I clicked on the link in the article, and then from there went to this link just by happenstance:
https://spectrum.ieee.org/thermal-transistor-the-worlds-tiniest-refrigerator
in which they mention phonons:
> In a separate development, a team of physicists at the National University of Singapore have come up with a plan for a device that could use phonons, vibrations in a crystal lattice that carry heat, as bits of information
Let’s see, we have a photo transistor for photon to electricity. And we have an LED for electricity to photons. So, yes, these transducers do exist. Oh, yeah, that’s right. LEDs can also be used as a photo detector (yes, they are sensitive to light). So, the LED fills the bill here.
Maybe I’m being dumb, but wouldn’t you want the best heat transfer path on ALL THE TIME?
Not if you are trying to get your car engine to an optimal running temperature on a cold morning.
I don’t think something transistor-like that relies on an electric field could be used in a car: you’d need a huge, expensive device and a huge electric field, and it would have to be developed for cars made in the ’70s – since modern cars (that still use combustion engines) already use electronics to adjust performance.
I had the same thought as [Heat Miser]: what the hell would you use this for? Maybe in laboratories where you need precise control for some experiment or for producing some rare compound/material.
Or maybe in a new version of a quantum computer in some way? Perhaps thermals can play a part somehow when viewed on an atomic level.
No, some devices have heaters to stabilize the circuit. Look at oscillators, for example. Also, thermal changes can cause components to drift around within their tolerance. All kinds of passive and active devices suffer from this. Keeping things just cool enough all the time would increase stability when it’s needed.
Not sure if this would be of any use for cooling other electronic devices. It sounds like it can modulate the thermal conductivity, but I’d be really surprised if it were able to drive any material to a point where it was better than existing (less expensive) materials.
If you combine thermal switches with a phase change (not necessarily solid/liquid/gas — magnetic ordering can work), you can do active refrigeration. Fast switches could allow the use of phase changes that would be uselessly weak with slow switching.
I don’t know if it would be useful to do local refrigeration with CPUs, since it adds to the total heat load that has to be removed from the system. But giving a small device with high power density, like an RF amplifier, a sink below ambient temperature could be worthwhile.
I agree with Dave Ladd, this has no value in cooling chips except where you want to thermostatically control temperature, e.g. crystals, but it would have great value for “lab on a chip” solutions.
I disagree. Imagine any one of the numerous deep space satellites that have little to no way to generate their own heat other than a local power source. These circuits are made to be as efficient as possible to ensure the longevity of the tool.
Now imagine that the ideal working conditions of those circuits could be met by using waste heat generated by other components that are less efficient at converting electrons to logic.
This is what it will be used on, but nobody wants to hear about that.
Except that’s not what the article espoused as a potential application. The article pretended that the concept would make cooling for mainstream terrestrial applications more efficient, and that isn’t true.
As for cooling electronics, why would you ever need to turn this transistor off? In which case, why would you need a transistor at all? I’ve never seen a case of an electronic device encountering trouble because it was cooling itself too well.
I think it’s a question of efficiency. Running a fan to cool a *something* consumes X watts. Using that transistor consumes X/10 watts (maybe). So if you can stop the fan earlier, you could save some energy.
A fan, even at minimum speed, consumes Z watts and can extract Y watts of heat. What if the *something* generates only Y/10 watts of heat?
Currently, you’d run the fan early to extract that energy, consuming Z/2 watts or so (the only alternative is not to run the fan and let the generated heat accumulate in the dissipation device, but once the dissipation device has reached its maximum temperature, the fan must run at full load to bring it back down).
Yet in Z/2-watt mode the fan can dissipate up to Y/4 watts of heat, so energy is wasted here.
So instead of “run the fan, extract Y/10 W of heat, consume Z/2 watts” with electrical energy wasted on the fan, you can block the propagation of heat to a smaller dissipator (which can be the *something* itself) until the temperature is high enough that there are Y/4 watts of heat to dissipate. Then you run the fan, gaining much better efficiency.
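The bookkeeping in that argument can be sketched with invented numbers (Z, Y, and the fan's half-speed capacity are all assumptions for illustration):

```python
# Back-of-envelope for the fan duty-cycling argument above.
# All numbers are invented for illustration.

Z = 2.0    # W, fan electrical power at full speed (assumed)
Y = 10.0   # W, heat the fan can extract at full speed (assumed)

heat_generated = Y / 10      # 1 W of waste heat from the device
fan_power_low = Z / 2        # fan electrical draw at low speed...
fan_capacity_low = Y / 4     # ...which can still extract 2.5 W

# Strategy 1: fan always on at low speed (oversized for the load).
energy_always_on = fan_power_low  # average electrical watts

# Strategy 2: a thermal switch blocks the path until heat accumulates,
# then the fan runs matched to its capacity. It only needs to be on
# for the fraction of time that balances the heat input.
duty_cycle = heat_generated / fan_capacity_low
energy_duty_cycled = fan_power_low * duty_cycle

print(f"always-on: {energy_always_on:.2f} W avg")
print(f"duty-cycled: {energy_duty_cycled:.2f} W avg "
      f"({100 * (1 - duty_cycle):.0f}% fan energy saved)")
```

With these numbers the fan only needs a 40% duty cycle, so the average fan power drops from Z/2 to 0.4 × Z/2 – the saving the commenter is gesturing at.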
Please note that this is completely moot with current technologies like the liquid/gas vapor chambers used nowadays, which do exactly this: accumulate latent heat until we reach a regime that fits the fan’s optimal working range.
Could be really interesting in battery applications in the real world – easy solid-state toggling of the heatsinking path keeps the battery cool as you’d normally require, but lets you disconnect the heatsink so the battery keeps itself warm more effectively in really cold environments.
Thermal management for electronics in general could be quite interesting with these concepts – a phone/laptop that knows where and when you are holding/touching it could toggle those zones off, for instance. That would probably net you the ability to run the CPU a little hotter while the surfaces you touch stay cool enough.
I kinda want to see what logical AND, OR, NAND, NOR, XOR, and NOT would look like with heat transistors…
Then of course we’ll need to build an adder with it. Backronym that puppy and call it the MAD HAdder (Molten Asymptotic Design Heat Adder)
Perhaps it could be used for solid state thermostatic control of heat output from an RTG heater on a spacecraft?
One step closer to Maxwell’s Demon. Free Energy!
Seriously, what’s an example of a practical use for this? Pretty sure cooling chips ain’t on the list, despite the UCLA PR fluff.
This would be great to manage unequal heat distribution. Imagine a large processor chip that has a few hot spots. Reduce the cooling above the cold areas so they maintain a similar temperature and minimize thermal stress across the large chip. Different compute loads heat up different areas of the chip at different times, requiring dynamic management of the heat extraction.
Current chip designs are driven by thermal considerations to try to heat the chip evenly, but that can only ever work for one benchmark load.
On a larger scale, managing heat flow in carbon fiber epoxy is a constraint on making aircraft parts with thick and thin sections.
No. That only works if lateral heat flow through a chip is efficient, and it isn’t. Trying to dynamically move heat from one region of the chip to another requires that the transfer path be more thermally conductive than the path to the header, and that will never be the case. A simple thermal model will show you your error.
You don’t need to heat one end of the chip from the other, just slightly impede the cooling above the small patches that are currently idle.
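That reply can be sketched as a two-patch steady-state model where each patch has its own vertical path to the heatsink, so no lateral transfer is needed. All numbers are invented for illustration:

```python
# Two-patch steady-state model of the hot-spot-evening idea above.
# Each patch conducts straight down to the heatsink; throttling the
# idle patch's conductance evens out the die temperature.
# Numbers are invented for illustration.

T_SINK = 40.0  # C, heatsink temperature (assumed)

def patch_temp(power_w, conductance_w_per_k):
    """Steady state: T = T_sink + P / G."""
    return T_SINK + power_w / conductance_w_per_k

# Hot patch: 10 W through 0.5 W/K -> sits at 60 C.
t_hot = patch_temp(10.0, 0.5)

# Idle patch at full conductance: 1 W through 0.5 W/K -> only 42 C,
# an 18 C gradient across the die.
t_idle_full = patch_temp(1.0, 0.5)

# Throttle the idle patch's conductance so it also sits at 60 C:
# G = P / (T - T_sink).
g_throttled = 1.0 / (t_hot - T_SINK)
t_idle_throttled = patch_temp(1.0, g_throttled)

print(f"hot: {t_hot} C, idle full: {t_idle_full} C, "
      f"idle throttled: {t_idle_throttled} C")
```

No heat moves sideways; the switch just raises the idle patch's temperature to match its hot neighbor, reducing thermal stress across the die.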
This tech could imply that Dyson sphere detection may be harder than people imagined. It would not necessarily radiate heat uniformly, but rather just away from the galactic plane, where there are far fewer observers.
More easily done with passive radiators. You sure don’t need parlor tricks like this to do that stunt.
Heck, even the International Space Station does that: it’s much brighter in the infrared in the directions at right angles to the sun. Its radiators are directed toward cold space, not facing the sun, so they preferentially radiate in those directions rather than collect solar radiation.
But a DS surrounds an entire star, you need something to move that radiation around to make the total stellar emission globe anisotropic.
Sometimes it’s useful (nay critical) that a compartment runs as close to a specific temperature as possible because of inherent thermoelectric properties.
For example, frequency stability of a temperature-compensated crystal oscillator (TCXO) could be improved if you maintained the crystal’s temperature more precisely. Of course the entire point of a TCXO is to compensate for temperature drift, but the better you can regulate the temperature, the less adjustment would be needed.
And then there are OCXOs…
https://medium.com/@xesey/difference-between-tcxo-and-ocxo-b45fdf5500d7
…and then CSACs!
If this transistor can rapidly throttle thermal conductivity, then it might lead to faster and smaller PCR machines.
I wonder what the applications for this are. Let’s take a CPU. Generally you want maximum heat conductivity to minimize temperature to reduce wear and keep the CPU stable.
But temperature changes also cause wear. So by over engineering the cooling solution you could use these thermal switches to keep the temperature stable without adding extra heat and at a faster rate than ramping fans up or down. But will that improve the life of the CPU or is a lower average temperature better?
Lots of questions here about how this would be useful.
If no satisfactory immediate answer is found, it’s still an interesting phenomenon. And it would hardly be the first time an interesting phenomenon stayed just that for a long period before suddenly a use was found and it became the basis of a major technological breakthrough.
I’m not saying every interesting phenomenon does that, or that this one necessarily will. But it’s good to note them and keep the knowledge of them around for when it does.
I guess if this did the job of switching conductivity a bajillion times better than any other solution, like turning on and off heatsink fans or pumps or mechanically moving heat conductors away from each other, then you could maybe make a better cup of coffee, by switching conductivity states once the liquid has cooled to the target temperature and now you want it to be very well insulated. Maybe. But you might also just constantly heat it to maintain the target temperature, which seems cheaper.