As electronics rely more and more on ICs, subtle details about discrete components get lost because we spend less time designing with them. For example, a relay seems like a simple component, but choosing the right contact material involves nuances that people often forget. Another case of this is the Miller effect, explained in a recent video by the aptly named [Old Hack EE].
Put simply, the Miller effect, first described in 1919 by [John Milton Miller], is the change in the input impedance of an inverting amplifier caused by the gain acting on the parasitic capacitance between the amplifier's input and output terminals. Seen from the input, that parasitic capacitance behaves as if an extra capacitor, equal to the parasitic capacitance multiplied by the gain, were connected in parallel with it. Since parallel capacitances add, the Miller capacitance works out to C − AC, where C is the parasitic capacitance and A is the voltage gain. Because the gain of an inverting amplifier is always negative, you may prefer to think of this as C + |A|C, or C(1 + |A|).
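If you like to see the arithmetic spelled out, here is a minimal Python sketch of that relationship. The gain of -60 matches the video's later example, but the 5 pF parasitic capacitance is an assumed value for illustration, not a figure from the video:

```python
def miller_capacitance(c_parasitic, voltage_gain):
    """Effective input capacitance C(1 - A) of an inverting stage."""
    return c_parasitic * (1 - voltage_gain)

# An assumed 5 pF input-to-output capacitance, with the gain of -60 used later in the video
c_stray = 5e-12   # 5 pF parasitic capacitance (assumed value)
gain = -60        # inverting voltage gain, so A is negative
print(miller_capacitance(c_stray, gain))  # 3.05e-10, i.e. about 305 pF seen at the input
```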
The example uses tubes, but you get the same effect in any inverting amplification device, even if it is solid state or an op amp circuit. He does make some assumptions about capacitance due to things like tube sockets and wiring.
The effect can be very pronounced. For example, a chart in the video shows that for a tube amplifier with a gain of -60, a 10 kΩ input impedance could support 2.5 MHz, in theory. But in practice, the Miller effect will reduce the usable frequency to only 81.5 kHz!
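For a rough sense of where numbers like that come from, the back-of-the-envelope Python sketch below models the input as a simple RC low-pass with a corner at 1/(2πRC). The two tube capacitances are assumed values chosen so the results land near the video's figures; they are not numbers taken from it.

```python
import math

def bandwidth_hz(r_ohms, c_farads):
    """-3 dB corner of a simple RC low-pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

r_in = 10e3      # 10 kOhm input impedance (from the video's chart)
gain = -60       # inverting voltage gain (from the video's chart)

# Assumed tube capacitances, picked only to illustrate the scaling:
c_gk = 6.4e-12   # grid-to-cathode plus stray capacitance
c_gp = 3.1e-12   # grid-to-plate capacitance (the one the Miller effect multiplies)

c_ideal  = c_gk                      # pretend the grid-to-plate capacitance doesn't matter
c_miller = c_gk + c_gp * (1 - gain)  # grid-to-plate capacitance multiplied by (1 + |A|)

print(f"Ignoring Miller: {bandwidth_hz(r_in, c_ideal) / 1e6:.2f} MHz")   # ~2.5 MHz
print(f"With Miller:     {bandwidth_hz(r_in, c_miller) / 1e3:.1f} kHz")  # ~81 kHz
```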
The last part of the video explains why older op amps needed external compensation, and why modern op amps have compensation capacitors built in. It also shows cases where designs depend on the Miller effect and how the cascode amplifier architecture can negate the effect entirely.
This isn’t our first look at Miller capacitance. If you look at what’s inside a tube, it is a wonder there isn’t more parasitic capacitance.
It's a very interesting effect, and relatively simple to identify and explore if you're doing any kind of high-frequency switching. It's pretty intuitive as well, even if you don't go into all the math, and experiencing it really opens your mind to the considerations of parasitic capacitance.
It's also the bane of anyone making high-speed FET drivers.
“…how the cascade amplifier architecture can negate the effect entirely…”
Autokorrupt? You mean cascode amplifier
I feel like more time could be spent on this idea because to an engineer, it’s good to know about the effect, but it’s more important to know how to combat it. But maybe that gets into amplifier design too much.
In a stroll through Wikipedia, I discovered an interesting historical linkage. The term "cascode" comes from Hunt and Hickman in 1939. Fast-forward to today, and there is a multigate transistor (or multigate device) that was invented, in part, to address the Miller effect, and the multigate device is integral to the FinFET, tri-gate, and GAA (gate-all-around) transistors in the newest sub-30 nm digital chips. This is just an observation about old ideas becoming new and about convergent evolution.