If we cast our minds back to the early years of the transistor, the year that is always quoted is 1947, during which a Bell Labs team developed the first practical germanium point-contact transistor. They would go on to be granted the Nobel Prize for their work in 1956, but the universal adoption of their invention was not an instantaneous process. Instead there would be a gradual change from vacuum to solid state that would span the 1950s and the 1960s, and even in the 1970s you might still have found mainstream devices on sale containing vacuum tubes.
To speed up this process, Bell Labs made every effort to publicize their invention. Thus we come to our subject today: their 1953 publicity film The Transistor, which describes the electronics industry of the era and lays out how each part of it might be revolutionized by the transistor.
We start with a look at a selection of electronic components, among which are a few transistors. The point-contact device is already described as superseded by the junction transistor, but alongside those two we are shown a phototransistor and a junction tetrode, a now-obsolete design that had two base connections.
Unexpectedly we don’t dive straight into the world of transistors, but take a look back at the earlier years of the century and the development of vacuum electronics. We’re taken through the early development and operation of vacuum tubes, then their use in long-distance radio communications, through the advent of electronics in mass entertainment, and finally into the world of radar and microwave links. Only then do we return to the transistor, with a posed shot of [John Bardeen], [William Shockley], and [Walter Brattain] hard at work in a lab. The merits of the transistor as opposed to the tube are then set out, though we can’t help wondering whether they have confused a milliwatt and a microwatt when they describe the transistor as requiring only a millionth of a watt to operate.
Like any Moore’s Law-inspired race, the megapixel race in digital cameras in the late 1990s and into the 2000s was a harsh battleground for every manufacturer. With the development of the smartphone, it became a war on two fronts, with Samsung eventually cramming twenty megapixels into a handheld. Although no clear winner among consumer-grade cameras was ever declared (and Samsung ended up reducing their flagship phone’s camera to sixteen megapixels for reasons we’ll discuss), it seems as though this race is over, fizzling out into a void where even marketing and advertising groups don’t readily venture. What happened?
The Technology
A brief overview of Moore’s Law predicts that transistor density on a given computer chip should double about every two years. A digital camera’s image sensor is remarkably similar, using the same silicon to form charge-coupled devices or CMOS sensors (the same CMOS technology used in some RAM and other digital logic) to detect the photons that hit it. It’s not much of a leap to see how Moore’s Law would apply to the number of photodetectors on a digital camera’s image sensor. Like transistor density, however, there’s a limit to how many photodetectors will fit in a given area before undesirable effects start to appear.
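To put rough numbers on that doubling, here’s a quick back-of-the-envelope sketch; the one-megapixel starting point in 2000 is an assumption for illustration only, not historical data.

```python
# Illustrative only: project sensor resolution under a Moore's-Law-style
# doubling every two years, starting from an assumed 1 MP camera in 2000.
START_YEAR, START_MP = 2000, 1.0

for year in range(START_YEAR, 2013, 2):
    megapixels = START_MP * 2 ** ((year - START_YEAR) / 2)
    print(f"{year}: ~{megapixels:.0f} MP")

# 1, 2, 4, 8, 16, 32, 64 MP -- the curve blows past the practical
# 16-to-20 MP ceiling discussed below within about a decade.
```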
Image sensors have come a long way since video camera tubes. In the ’70s, the charge-coupled device (CCD) replaced the cathode ray tube as the dominant video-capture technology. A CCD works by arranging capacitors into an array and biasing them with a small voltage. When photons hit one of the capacitors, they generate an electrical charge that accumulates there; that charge is then shifted out, measured, and stored as digital information. While there are still specialty CCD sensors for some niche applications, most image sensors are now of the CMOS variety. CMOS sensors use photodiodes, rather than capacitors, along with a few other transistors at every pixel. CMOS sensors perform better than CCD sensors because each pixel has its own amplifier, which results in a more accurate readout. They are also faster, scale more readily, use fewer supporting components, and use less power than a comparably sized CCD. Despite all of these advantages, however, there are still limits on how many pixels can be packed onto a single piece of silicon.
While transistor density tends to be limited by quantum effects, image sensor density is limited by what is effectively a “noisy” picture. Noise can be introduced into an image by thermal fluctuations within the material, so if the voltage threshold for a single pixel is so low that it falsely registers a photon when it shouldn’t, image quality suffers greatly. This is more noticeable in CCD sensors (a related CCD artifact is “blooming”, where charge from a saturated pixel spills into its neighbors), but similar defects can happen in CMOS sensors as well. There are a few ways to address these problems, though.
First, the voltage threshold can be raised so that random thermal fluctuations don’t rise above it and trigger pixels. In a DSLR, this typically means changing the camera’s ISO setting: a lower ISO setting means more light is required to trigger a pixel, but random fluctuations are less likely to register. From a camera designer’s point of view, however, a higher threshold voltage generally implies greater power consumption and some speed considerations, so there are tradeoffs to make in this area.
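As a toy illustration of that tradeoff (not a model of any real sensor: the noise level, signal level, and threshold values below are made up), a quick Monte Carlo shows how raising the threshold suppresses false triggers from thermal noise at the cost of missing genuine, dim photon signals:

```python
import random

# Toy pixel model, arbitrary units: a dark pixel sees only Gaussian thermal
# noise, a lit pixel sees a small photon signal plus that same noise.
NOISE_SIGMA = 1.0
PHOTON_SIGNAL = 3.0
TRIALS = 100_000

def trigger_rate(threshold, signal):
    """Fraction of trials in which the pixel output exceeds the threshold."""
    hits = sum(random.gauss(signal, NOISE_SIGMA) > threshold
               for _ in range(TRIALS))
    return hits / TRIALS

for threshold in (1.0, 2.0, 3.0):
    false_triggers = trigger_rate(threshold, 0.0)           # dark pixel fires
    missed = 1.0 - trigger_rate(threshold, PHOTON_SIGNAL)   # real photon lost
    print(f"threshold {threshold}: {false_triggers:.1%} false, {missed:.1%} missed")
```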
Another reason that thermal fluctuations cause noise in image sensors is that the pixels themselves are so close together that they influence their neighbors. The answer here seems obvious: simply increase the area of the sensor, make the pixels of the sensor bigger, or both. This is a good solution if you have unlimited area, but in something like a cell phone this isn’t practical. This gets to the core of the reason that most modern cell phones seem to be practically limited somewhere in the sixteen-to-twenty megapixel range. If the pixels are made too small to increase megapixel count, the noise will start to ruin the images. If the pixels are too big, the picture will have a low resolution.
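A rough sanity check on that sixteen-to-twenty megapixel figure: divide a phone-sized sensor’s area by an assumed pixel pitch. The sensor dimensions and pitches below are generic ballpark values, not those of any particular camera.

```python
# A ~1/2.5" phone sensor is roughly 5.8 mm x 4.3 mm.
SENSOR_W_MM, SENSOR_H_MM = 5.8, 4.3

for pitch_um in (2.0, 1.4, 1.1, 0.9):
    pitch_mm = pitch_um / 1000.0
    pixels = (SENSOR_W_MM / pitch_mm) * (SENSOR_H_MM / pitch_mm)
    print(f"{pitch_um} um pixels -> ~{pixels / 1e6:.0f} MP")

# Roughly 6, 13, 21, and 31 MP: pixels around 1.1 um already put a sensor
# this size near 20 MP, and shrinking them further trades away noise margin.
```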
There are some non-technological ways of increasing the megapixel count of an image as well. For example, a stitched panorama will have a megapixel count much higher than that of the camera that took it, simply because each part of the panorama is captured at the camera’s full megapixel count. It’s also possible to reduce noise in a single frame by using a lens that collects more light (one with a lower f-number), which allows the photographer to use a lower ISO setting and reduce the camera’s sensitivity.
Gigapixels!
Of course, if you have unlimited area you can make image sensors of virtually any size. There are some extremely large, expensive cameras called gigapixel cameras that can take pictures of unimaginable detail. Their size and cost are limiting factors for consumer devices, though, so they are generally reserved for specialty purposes. The largest image sensor ever built has a surface of almost five square meters and is the size of a car. The camera will be put to use in 2019 in the Large Synoptic Survey Telescope in South America, where it will capture images of the night sky with the telescope’s 8.4 meter primary mirror. If this were part of the megapixel race in consumer goods, it would certainly be the winner.
With all of this said, it becomes obvious that there are many more considerations in a digital camera than just the megapixel count. With so many other facets to a camera (physical sensor size, lenses, camera settings, post-processing capabilities, filters, and so on), the megapixel number was essentially an easy way for marketers to advertise the claimed superiority of their products until the practical limits of image sensors were reached. Beyond a certain point, more megapixels don’t automatically translate into a better picture. The megapixel count can still matter, but there are plenty of ways to make up for a lower one if you have to. For example, images with high dynamic range are becoming the norm even on cell phones, which also helps eliminate the need for a flash. Whatever you decide, though, if you want to start taking great pictures don’t worry about specs; just go out and take some photographs!
(Title image: VISTA gigapixel mosaic of the central parts of the Milky Way, produced by European Southern Observatory (ESO) and released under Creative Commons Attribution 4.0 International License. This is a scaled version of the original 108,500 x 81,500, 9-gigapixel image.)
Every time we say “We’ve seen it all”, along comes a project that knocks us off our feet. 60-year-old [Mark Nesselhaus] likes to learn new things and has never worked with hardware at the gate level. So he’s building himself a 4-bit computer using only diode-transistor logic. He’s assembling the whole thing on cardboard “perf-board”, with brass tacks for pads. Why? Because he’s a thrifty guy who wants to use what he has lying around, and he obviously has an endless supply of cardboard, tacks, and patience. The story sounds familiar: it started out as a simple 4-bit full adder project and then things got out of hand. You know he’s old school when he calls his multimeter an “analog VOM”!
It’s still a work in progress, but he’s made a lot of headway over the past year. [Mark] started off by emulating the 4-bit full adder featured on Simon Inns’ Waiting for Friday blog. This is the ALU around which the rest of his project is built. With the ALU done, he decided to keep going and next built a 4-to-16 line decoder — check out the thumbnail image to see the rat’s nest of jumbled wires. Next on his list were several flip-flops (R-S, J-K and D types), which would be useful as program counters. This is when he ran into problems with signal levels, timing and triggering. He decided to allow himself the luxury of adding one IC to his build — a 555-based clock generator. But he still needed some pulse-shaping circuitry to make it work consistently.
[Mark] also built a finite-state-machine sequencer based on the work done in Rory Mangles’ TinyTim project. He finished building some multiplexers and demultiplexers, and it appears he may be using a whole bank of 14 wall switches for address, input and control functions. For the output display, he assembled a panel using LEDs recovered from a $1 Christmas light string. Something seems amiss with his LED driver, though — 2 mA with the LED on and more than 2.5 mA with it off. The LED appears to be connected across the collector and emitter of the PNP transistor. Chime in with your comments.
This build seems to be shaping up along the lines of the Megaprocessor that we’ve swooned over a couple of times in the past. Keep at it, [Mark]!
Since the first transistor was created in the 1940s, transistors have evolved from ornery blocks of germanium wrangled into basic amplifiers to thousands upon thousands of different devices, made of all kinds of materials, that make any number of electrical applications possible, cheap, and reliable. MOSFETs alone come in at least four types: P- or N-channel, and enhancement or depletion mode, along with widely differing power ratings. Some varieties are more loved than others; depletion-mode, N-channel power MOSFETs, for instance, are comparatively scarce. [DeepSOIC] was trying to find one before he decided to make his own by hacking a more readily available enhancement-mode transistor.
For those not intimately familiar with semiconductor physics, the difference between these two modes is essentially the difference between a relay that is normally closed and one that’s normally open. Enhancement-mode transistors are “normally off”, easy to obtain, and (for most of us) useful for almost all applications. On the other hand, if you need a “normally on” transistor, you will have to source a depletion-mode part. [DeepSOIC] was able to create a depletion-mode transistor by “torturing” an enhancement-mode device to effectively retrain the semiconductor junctions inside it.
If you’re interested in semiconductors and how transistors work on an atomic level, [DeepSOIC]’s project will keep you on the edge of your seat. On the other hand, if you’re new to the field and looking to get a more basic understanding, look no further than these DIY diodes.
There are a number of ways to measure the speed of light. If you’ve got an oscilloscope and a few spare parts, you can build your own apparatus for just a few bucks. Don’t believe the “lies” that “they” tell you: measure it yourself!
The apparatus starts off with a very quickly pulsed IR LED, a lens, and a beam-splitter. One half of the beam takes a shortcut, and the other bounces off a mirror that is farther away. A simple op-amp circuit amplifies the resulting pulses after they are detected by a photodiode. The delay is measured on an oscilloscope, and the path difference measured with a tape measure.
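The final arithmetic is simple: divide the extra path length by the extra delay. Here is a minimal sketch with made-up but plausible bench numbers rather than [Michael]’s actual readings:

```python
# Hypothetical measurements: the longer beam path adds 3.00 m of travel
# (tape measure) and its pulse arrives 10.0 ns later (oscilloscope cursors).
extra_path_m = 3.00
extra_delay_s = 10.0e-9

c_measured = extra_path_m / extra_delay_s
print(f"c ~= {c_measured:.3g} m/s")   # about 3e8 m/s, as expected
```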
If you happen to have a photomultiplier tube in your junk box, you can do away with the amplifier stage. Or if you have some really fast logic circuits, here’s another project that might interest you. But if you just want the most direct measurement we can think of that’s astoundingly accurate for something lashed up on breadboards, you can’t beat [Michael]’s lash-up.
I learned some basic electronics in high school physics class: resistors, capacitors, Kirchhoff’s laws and such, and added only what was required for projects as I did them. Then around 15 years ago I decided to read some books to flesh out what I knew and add to my body of knowledge. It turned out to be hard to find good ones.
The electronics section of my bookcase has a number of what I’d consider duds, but also some gems. Here are the gems. They may not be the electronics-Rosetta-Stone for every hacker, but they are the rock on which I built my church and well worth a spot in your own reading list.
Grob’s Basic Electronics
Grob’s Basic Electronics by Mitchel E. Schultz and Bernard Grob is a textbook, one that is easy to read yet very thorough. I bought mine from a used book store. The 1st edition was published in 1959 and it’s currently on the 12th edition, published in 2015. Clearly this one has staying power.
I refer back to it frequently, most often to the chapters on resonance, inductance and capacitance when working on LC circuits, like the ones in my crystal radios. There are also things in here that I couldn’t find anywhere else, even after thoroughly exhaustive online searches. One such example is the correct definitions and formulas for the various magnetic units: ampere-turns, field intensity, flux density…
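For quick reference, these are the standard SI relations between those quantities (the usual definitions, not quoted from the book):

\[
\mathcal{F} = N I \ \text{(mmf, in ampere-turns)}, \qquad
H = \frac{N I}{l} \ \text{(field intensity, in ampere-turns per meter)}, \qquad
B = \frac{\Phi}{A} = \mu H \ \text{(flux density, in Wb/m}^2\text{, i.e. tesla)}
\]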
I’d recommend it to a high school student or any adult who’s serious about knowing electronics well. I’d also recommend it to anyone who wants to reduce frustration when designing or debugging circuits.
(Images: series-resonance calculations and a series-resonance schematic.)
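Those series-resonance calculations boil down to a handful of relations worth keeping within reach:

\[
f_0 = \frac{1}{2\pi\sqrt{LC}}, \qquad
X_L = X_C \ \text{at} \ f_0, \qquad
Z_{\text{min}} = R, \qquad
Q = \frac{X_L}{R}, \qquad
\Delta f = \frac{f_0}{Q}
\]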
You can find the table of contents here, but briefly, it has all the necessary introductory material on Ohm’s and Kirchhoff’s laws, parallel and series circuits, and so on. To give you an idea of how deep it goes, it also has chapters on network theorems and on complex numbers for AC circuits. Interestingly, my 1977 4th edition has a chapter on vacuum tubes that’s gone in the current version; in its place is a plethora of new chapters devoted to diodes, BJTs, FETs, thyristors and op-amps.
You can also do the practice problems and self-examinations, just to make sure you understood the material correctly. (I sometimes do them!) Being a textbook, though, the newest edition is expensive. However, a search for older but still recent editions on Amazon turns up some affordable used copies. Most of basic electronics hasn’t changed, and my ancient edition is one of my more frequent go-to books. But it’s not the only gem I’ve found. Below are a few more.
Transistors have come a long way. Like everything else electronic, they’ve become both better and cheaper. According to a recent IEEE article, a transistor cost about $8 in today’s money back in the 1960s. Consider the Regency TR-1, the first transistor radio, from TI and IDEA. In late 1954, the four-transistor device went on sale for $49.95. That doesn’t sound like much until you realize that in 1954 this was equivalent to about $441 today (a new car cost about $1,700 and a copy of Life magazine cost 20 cents). Even at that price, they sold about 150,000 radios.
Part of the reason the transistors cost so much was that production costs were high. But another reason is that yields were poor: in some cases, four out of five devices were not usable. And the transistors were not that good even when they did work. The first transistors were germanium, which has higher leakage and worse thermal properties than silicon.
Early transistors were easily damaged by soldering, so it was common to clip an alligator clip or a dedicated heat-sink clip onto the leads to keep heat from reaching the transistor during construction. Some gear even used sockets, which also allowed the quick substitution of devices, just like the tubes they replaced.
When the economics of transistors changed, it made a lot of things practical. For example, a common piece of gear used to be a transistor tester, like the Heathkit IT-121 in the video below. If you pulled an $8 part out of a socket, you’d want to test it before you spent more money on a replacement. Of course, if you had a curve tracer, that was even better because you could measure the device parameters which were probably more subject to change than a modern device.
Of course, the move from germanium to silicon is only one improvement made over the years. The FET is a fundamentally different kind of transistor with many desirable properties, and, of course, integrating hundreds or even thousands of transistors on one integrated circuit revolutionized electronics of all types. Transistors got better: parameters became less variable, yields increased, maximum frequencies rose, and power-handling capacity grew. Devices just keep getting better. And cheaper.
A Brief History of Transistors
The path from vacuum tube to the Regency TR-1 was a twisted one. Everyone knew the disadvantages of tubes: fragile, power hungry, and physically large, although smaller and lower-power tubes would start to appear towards the end of their reign. In 1925 a physicist filed a Canadian patent for a FET but failed to publicize it, and in any case mass production of semiconductor material was unknown at the time. A German inventor patented a similar device in 1934 that didn’t take off, either.
Bell Labs researchers worked with germanium and by 1947 actually understood how to make both “point contact” transistors and FETs. However, Bell’s lawyers found the earlier patents and elected to pursue the conventional transistor patent, which would lead to the inventors (John Bardeen, Walter Brattain, and William Shockley) winning the Nobel Prize in 1956.
Two Germans working for a Westinghouse subsidiary in Paris independently developed a point-contact transistor in 1948. It would be 1954 before silicon transistors became practical, and the MOSFET didn’t appear until 1959.
Of course, even these major milestones have been subject to incremental improvements. The V-groove channel for MOSFETs, for example, opened the door for FETs to become true power devices, able to switch the currents required by motors and other high-current loads.