A very good question came up on the EEVBlog forum that I thought deserved an in-depth answer. The poster asked why amplifier companies in the heyday of tube technology would operate tubes in mass-produced circuits well in excess of the manufacturers' published recommended limits. The simple answer is: because they could get away with it. So the real question worth exploring is how they got away with operating outside of those published limitations. Let's jump in and take a look at the collection of reasons.
Materials-Based Tube Performance
First, a brief discussion of the limiting factors in tube performance. Heat and gas are the two foes of tube design. Both are dynamic: heat will liberate residual gas from internal tube components, and gas will interrupt proper operation and, in the worst case, ionize and cause a gas short. In some tubes this can trigger a runaway condition resulting in complete failure. Residual gas has everything to do with how the materials and parts used in the tube's construction were cleaned and processed, and how the evacuation was carried out, which in the case of most receiving tubes was marginal at best. It still is at most factories. Additionally, if materials in the tube are not held to a tight tolerance (particularly in cathodes and grids), you can have leakage paths form as metals evaporate from hot bodies and deposit on cooler ones. You don't want this. Magnesium and silicon, common cathode reducing agents particularly in cheaper tubes, were among the largest culprits. They were used to speed the age-in and stabilization of the cathode, getting it out the factory door faster at the cost of possible tube leakage early in life. Cathode base alloys with these reducing agents were what were called "active" cathodes. Other materials with greater safety margin came into use later on, but that is for another article.
The next consideration for heat is its removal, which has three facets. First, radiating or conducting heat from the anode to the container, in this case glass. Second, carrying that heat away from the glass surface via forced air, convection, radiation, or in some instances liquid cooling or heat-sink conduction. When many of these tubes were designed, blackened nickel anodes were the standard and the specifications were based on them. This material typically radiates at about 70-80% of black-body efficiency and has relatively low heat conduction, meaning you get concentrated hot spots in the anode radiating at counterpart hot spots in the glass; glass being a poor conductor of heat, you now have a situation where stress can build up from the heat differential in the bulb. Third, you have heat removal in a more direct manner via the lead pins, which, particularly in miniature tubes, conduct heat directly to the glass-to-metal seal (the weakest point in the tube) and on to the socket contacts themselves.
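To put that 70-80% emissivity figure in perspective, here is a minimal sketch using the Stefan-Boltzmann law. The anode area and temperatures are assumed example values of my own, not figures from any datasheet:

```python
# Illustrative sketch: net power radiated from a hot anode surface to cooler
# surroundings, P = eps * sigma * A * (T_hot^4 - T_amb^4).
# All dimensions and temperatures below are assumed, round-number examples.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power(emissivity, area_m2, t_hot_k, t_amb_k):
    """Net radiated power in watts from a surface at t_hot_k into t_amb_k surroundings."""
    return emissivity * SIGMA * area_m2 * (t_hot_k**4 - t_amb_k**4)

# Assumed 10 cm^2 anode at 600 K inside a 300 K envelope:
area = 10e-4  # 10 cm^2 expressed in m^2
p_black  = radiated_power(1.00, area, 600.0, 300.0)  # ideal black body
p_nickel = radiated_power(0.75, area, 600.0, 300.0)  # blackened nickel, ~75%
print(f"ideal black body: {p_black:.2f} W, blackened nickel: {p_nickel:.2f} W")
```

For the same anode temperature, the blackened nickel surface sheds only three-quarters of the heat an ideal radiator would, which is why the later high-emissivity coatings let the same bottle run harder.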
In the 1960s, better multi-layer materials became available that allowed greater heat conduction within the anode, spreading the heat more evenly before it was radiated. These materials typically had copper plated onto them at some point, which accounts for their better heat distribution.
Aluminized layers, taking advantage of special heat treatment, made for more efficient heat radiation. This is the grey material used in anodes for the latter half of the tube years. Many flavors of this kind of material were used, with the highest performers belonging to Texas Instruments' materials division and selling until the mid-90s under the name AlNiFer. It was a multi-layer steel, nickel, copper, and aluminum clad product made much the same way quarters are here in the US. Many tubes today use an anode of nickel-plated steel with an aluminum-based coating, forgoing the copper layer. These, along with other improvements in cleanliness and processing techniques, led to greater tube performance as the years went on. As these technologies and processes were applied to the older tube designs, their qualities improved as well.
Another major factor was the propensity of the tube companies to rate their tubes conservatively. Many engineers knew this and would use it to the advantage of their own designs. I have been told by former General Electric engineers that they looked at this as a selling advantage, inasmuch as they would boast about the extended capabilities of their products and would in fact distribute engineering-only data sheets and data sets telling design engineers how to tweak their designs for maximum performance.
Now picture yourself trying to make your product stand out in the market. As we all know, in the audio business power sells, and nothing is better than getting more power for the same price. Sure, you could redesign your product with more robust transformers and larger output tubes, but that would cost money. Let's not forget that specialty tubes such as the 7591, EL34, 6550, 5881, and 8417 were an order of magnitude more expensive than the pedestrian 6BQ5, 6V6, and 6AQ5 types that were made by the train-car load. You can take that 15 watt amp and have it crank out 20 watts by simply adding a few turns to the secondary of the power transformer, costing practically nothing, and when the tubes do fail those cheaper replacements will be only as far away as the nearest store. This was the game played.
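A quick back-of-envelope shows why a few extra secondary turns were enough. Assuming (my simplification, not a claim from the text) that maximum output power into a fixed load scales roughly with the square of the B+ supply voltage, the 15-to-20 watt jump needs surprisingly little extra voltage:

```python
# Back-of-envelope sketch under an assumed P ~ V^2 scaling into a fixed load:
# how much must the plate supply rise to take a 15 W amp to 20 W?

import math

def required_supply_increase(p_old_w, p_new_w):
    """Fractional B+ increase needed, assuming output power scales with V^2."""
    return math.sqrt(p_new_w / p_old_w) - 1.0

increase = required_supply_increase(15.0, 20.0)
print(f"B+ must rise by about {increase * 100:.0f}%")
```

Roughly a 15% bump in B+, which on a transformer secondary is indeed just a few percent more turns, while the output tubes quietly absorb the extra stress.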
Operating Outside the Curve
Those who I work with and talk to know I have never really bought into the myth and lore of the audiophile realm. I base what I do on measurable fact. That is not to say there is no room for the subjective viewpoint in my analysis, but everything must be taken in context. There are what some would call "sweet spots" in audio designs, and I have found that in certain designs these spots are often where certain parts and tubes are being used outside of their recommended ranges. Designs from Fender and Music Man come to mind. A Deluxe Reverb can typically place more than 400 plate volts on poor unsuspecting 6V6 tubes when the highest design maximum rating in any major datasheet (General Electric) is 350 volts. Empirical evidence from the thousands of these units out there (many with their original tubes still working) shows they could in fact handle this service. At a certain point we must conclude that the published maximum recommended specs are just that: recommendations.
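For a sense of what that over-voltage service looks like in watts, here is a hedged sketch of idle plate dissipation. The idle current and the 12 W dissipation figure are assumed, typical-ballpark numbers of my own, not measurements from any particular amplifier:

```python
# Hedged sketch with assumed example numbers: quiescent plate dissipation
# for one output tube, P = V_plate * I_plate, compared to a published limit.

def idle_dissipation_w(plate_v, plate_current_a):
    """Quiescent plate dissipation in watts."""
    return plate_v * plate_current_a

V_PLATE = 410.0    # assumed Deluxe Reverb-style plate voltage, volts
I_IDLE = 0.030     # assumed 30 mA idle plate current per tube
P_MAX = 12.0       # assumed typical published 6V6 plate dissipation limit, watts

p_idle = idle_dissipation_w(V_PLATE, I_IDLE)
print(f"idle dissipation: {p_idle:.1f} W against a {P_MAX:.0f} W limit")
```

Under these assumed numbers the tube idles right at (or slightly past) its dissipation limit, yet as the field record shows, well-made 6V6s shrugged this off for decades.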
This Goes Beyond Tubes
In any product, the surest way to become a failure is to have failures. MTTF, or "Mean Time To Failure," is a mantra to the Q.C. engineer, and it does not take a company long to realize it is much cheaper to fix problems before you build them rather than after the device is in the field. There is always a rainy day, though. Bogey and outlier parts must be accounted for, and assumptions must be made about how tolerant your product must be to external stimuli in the field.
This is where we encounter the philosophy of what is sometimes called "headroom" or "safety margin" in design. What is the best way to make sure a 1/4 watt resistor will always work and stay on spec? Ensure it can do the same at 1/2 watt. The idea is simple: put a little more effort into the robustness of the design and save yourself a significant number of early failures in the future. Engineers, being the inquisitive sorts, will often explore these extra safety margins and exploit them. This is usually dependent, however, on the market the device is being sold into.
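The derating rule above can be sketched as a one-line check. The 50% derating factor matches the 1/4-watt-at-1/2-watt example in the text; the resistor value and voltage drop are illustrative assumptions of mine:

```python
# Minimal sketch of the "headroom" rule: a part passes if its actual stress
# is no more than derating_factor times its rating. Component values below
# are illustrative assumptions, not from any real design.

def passes_derating(power_w, rated_w, derating_factor=0.5):
    """True if the part dissipates no more than derating_factor * its rating."""
    return power_w <= rated_w * derating_factor

# An assumed 10 kOhm resistor dropping 30 V dissipates P = V^2 / R:
p_actual = 30.0**2 / 10_000.0  # 0.09 W
ok = passes_derating(p_actual, rated_w=0.25)
print(f"{p_actual:.2f} W in a 1/4 W part at 50% derating: {'pass' if ok else 'fail'}")
```

Flip the derating factor toward 1.0 and you are doing exactly what the amp makers did: spending the safety margin the part vendor built in.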
You can literally bet your life that a medical device or airbag sensor has been qualified and everything in it is within those recommended specs, because when that lawsuit happens you can be assured any variance will be questioned again and again. Meanwhile, those 1000 hour life electrolytic caps in that computer power supply stay on 24 hours a day, 7 days a week for as long as it is plugged in and seem to do just fine. It is all a matter of how much liability can be assumed in your particular application.