If you open up the perennial favourite electronics textbook The Art Of Electronics and turn to the section on transistors, you will see a little cartoon. A transistor is shown as a room in which “transistor man” stands watching a dial showing the base current, while adjusting a potentiometer that limits the collector current. If you apply a little more base current, he pushes the collector current up a bit. If you wind back the base current, he drops it back. It’s a simple but effective way of explaining the basic operation of a transistor, but it stops short of some of the nuances of how a transistor works.
Of course, the base-emitter junction is a diode, and there is no simple potentiometer sitting between collector and emitter. The “better” description of these aspects of the device fills the heads of first-year electronic engineering students until they never want to hear about an h-parameter or the Ebers-Moll model of transistor function again in their entire lives. Fortunately it is possible to work with transistors without such an in-depth understanding of their operation, but before selecting the components surrounding a device it is still necessary to go a little way beyond transistor man.
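To put a little math behind transistor man, here is a minimal sketch of the idealised Ebers-Moll relationship in the forward active region. The saturation current and current gain below are assumed example values, not figures from any particular device:

```python
# Minimal sketch of the idealised Ebers-Moll view of a BJT in the active region.
# The saturation current I_S and current gain BETA are assumed example values,
# not figures from any particular datasheet.
import math

I_S = 1e-14     # saturation current in amps (assumed)
V_T = 0.025     # thermal voltage at room temperature, roughly 25 mV
BETA = 100      # forward current gain (assumed)

def collector_current(v_be):
    """Collector current set by the base-emitter voltage (Ebers-Moll, active region)."""
    return I_S * (math.exp(v_be / V_T) - 1)

def base_current(v_be):
    """Transistor man's dial: base current is roughly the collector current divided by beta."""
    return collector_current(v_be) / BETA

for v_be in (0.60, 0.65, 0.70):
    print(f"Vbe = {v_be:.2f} V -> Ic = {collector_current(v_be) * 1e3:.2f} mA, "
          f"Ib = {base_current(v_be) * 1e6:.1f} uA")
```

The point the numbers make is the same one the cartoon hides: a tiny change in base-emitter voltage swings the collector current over decades, which is why the diode-like behaviour of the base matters once you start picking surrounding component values.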
A flip-flop is one of the most basic digital electronic circuits. It can most easily be built from just two transistors, although flip-flops can be and have been built out of vacuum tubes, NAND and NOR gates, and Minecraft redstone. Conventional wisdom says you can’t build a flip-flop with just one transistor, but here we are. [roelh] has built a flip-flop circuit using only one transistor and some bizarre logic that’s been slowly developing over on hackaday.io.
[roelh]’s single transistor flip-flop is heavily inspired by a few of the strange logic projects we’ve seen over the years. The weirdest, by far, is [Ted Yapo]’s Diode Clock, a digital clock made with diode-diode logic. This is the large-scale proof of concept for the unique family of logic circuits [Ted] came up with that only uses bog-standard diodes to construct arbitrary digital logic.
The single-transistor flip-flop works just like any other flip-flop — there are set and reset pulses, and a feedback loop to keep whatever state the output is in alive. The key difference here is the addition of a clock signal. This clock, along with a few capacitors and a pair of diodes, gives this single transistor the ability to store a single bit of information, just like any other flip-flop.
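As a purely behavioural sketch of what is described above (this models the set, reset, and hold behaviour, not [roelh]’s actual diode-and-capacitor circuit), the stored bit evolves on each clock something like this:

```python
# Behavioural sketch of a clocked set/reset flip-flop with a feedback "hold" path.
# This models the logic described in the article, not the single-transistor circuit itself.
class FlipFlop:
    def __init__(self):
        self.q = 0  # the stored bit

    def clock(self, set_pulse=False, reset_pulse=False):
        """On each clock, a set pulse forces Q high, a reset pulse forces it low,
        and with neither pulse the feedback loop keeps the previous state alive."""
        if set_pulse:
            self.q = 1
        elif reset_pulse:
            self.q = 0
        # no pulse: feedback holds the existing state
        return self.q

ff = FlipFlop()
print(ff.clock(set_pulse=True))    # 1
print(ff.clock())                  # 1 (held by feedback)
print(ff.clock(reset_pulse=True))  # 0
print(ff.clock())                  # 0 (held)
```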
This is, without a doubt, a really, really weird circuit, but it falls well into territory that is easily understood despite being completely unfamiliar. The key question here is, ‘why?’. [roelh] says this could be used for homebrew CPUs, although this circuit is trading two transistors for a single transistor, two diodes, and a few more support components. For vacuum tube-based computation, this could be a very interesting idea that someone at IBM in the 40s had, then forgot to write down. Either way, it’s a clever application of diodes and an amazing expression of the creativity that can be found on a breadboard.
One way to understand how the 555 timer works and how to use it is by learning what the pins mean and what to connect to them. A far more enjoyable, and arguably more useful, way to learn is by looking at what’s going on inside during each of its modes of operation. [Dejan Nedelkovski] has put together just such a video where he walks through how the 555 timer IC works from the inside.
We especially like how he immediately removes the fear factor by first showing a schematic with all the individual components but then grouping them into what they make up: two comparators, a voltage divider, a flip-flop, a discharge transistor, and an output stage. Having lifted the internals to a higher level, he then walks through examples, with external components attached, for each of the three operating modes: bistable, monostable and astable. If you’re already familiar with the 555 then you’ll enjoy the trip down memory lane. If you’re not familiar with it, then you soon will be. Check out his video below.
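If you want to pair the video with some numbers, the standard textbook timing relationships for the monostable and astable modes are easy to play with; the resistor and capacitor values below are arbitrary examples, not from the video:

```python
# Standard 555 timing equations; the component values used here are arbitrary examples.

def monostable_pulse_width(r, c):
    """Output pulse width in seconds for monostable mode: t = 1.1 * R * C."""
    return 1.1 * r * c

def astable_frequency(r1, r2, c):
    """Oscillation frequency in Hz for astable mode: f = 1.44 / ((R1 + 2*R2) * C)."""
    return 1.44 / ((r1 + 2 * r2) * c)

def astable_duty_cycle(r1, r2):
    """Fraction of the period the output is high in astable mode: (R1 + R2) / (R1 + 2*R2)."""
    return (r1 + r2) / (r1 + 2 * r2)

print(monostable_pulse_width(10e3, 100e-6))   # ~1.1 s one-shot with 10k and 100 uF
print(astable_frequency(1e3, 10e3, 10e-6))    # ~6.9 Hz oscillator with 1k, 10k, 10 uF
print(astable_duty_cycle(1e3, 10e3))          # ~0.52 duty cycle for the same values
```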
[Kevin Darrah] wanted to make a simple 3.3V regulator without using an integrated circuit. He wound up using two common NPN transistors and four 1K resistors. The circuit isn’t going to beat out a cheap linear regulator IC, but for the low component count, it is actually pretty good.
In all fairness, though, [Kevin] may have two transistors, but he’s only using one of them as a proper transistor. That one is a conventional pass regulator like you might find in any regulator circuit. The other transistor only has two connections. The design reverse-biases the base-emitter junction, which results in a breakdown voltage of roughly 8V. Essentially, this transistor is being used as a poor-quality Zener diode.
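We haven’t traced [Kevin]’s exact schematic, but a back-of-the-envelope sketch shows how a roughly 3.3 V output could fall out of an ~8 V pseudo-Zener, a resistive divider, and the pass transistor’s base-emitter drop. All of the values below are assumptions for illustration, not the verified circuit:

```python
# Rough, assumed arithmetic for a discrete series-pass regulator.
# This is a back-of-the-envelope sketch, not [Kevin]'s verified schematic:
# the reverse-biased B-E junction acts as a pseudo-Zener reference, a resistive
# divider scales it down, and the pass transistor drops one Vbe to the output.

V_ZENER = 8.0      # approximate reverse B-E breakdown of the second transistor
R_TOP = 1000.0     # divider resistors, assumed to be two of the four 1K parts
R_BOTTOM = 1000.0
V_BE = 0.7         # base-emitter drop of the pass transistor

v_base = V_ZENER * R_BOTTOM / (R_TOP + R_BOTTOM)  # ~4.0 V at the pass transistor's base
v_out = v_base - V_BE                             # ~3.3 V at the emitter
print(f"Estimated output: {v_out:.1f} V")
```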
One of the most versatile tools on anyone’s work bench, at least as far as electrical projects are concerned, is a power supply. Often we build our own, but after we’ve cobbled together some banana jacks with a computer’s PSU or dead-bug soldered an LM317 voltage regulator to a wall wart, how will that power supply perform? Since it’s not desirable to use a power supply that’ll let the smoke out of everything it powers (or itself, for that matter), a constant current sink, or load, can help determine the operating limits of the power supply.
[electrobob] built this particular current sink from parts he had lying around. The theory of a constant current sink is relatively straightforward, so it’s easily possible to build one from parts out of the junk drawer, provided you can find a few transistors, fuses, an op amp, and some heat sinks. The full set of schematics that [electrobob] designed can be found on his main project page. He’s gone a step further with this build as well, since he shorted out his first prototype and destroyed some of the transistors. But using a few extra transistors in his design also improves the safety and performance of the load, so it’s a win-win.
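The usual textbook topology, which is not necessarily [electrobob]’s exact schematic, is an op amp driving a pass transistor so that the voltage across a sense resistor matches a setpoint; the sunk current is then simply the setpoint divided by the sense resistance. A quick numerical sketch, with arbitrary example values:

```python
# Classic op-amp constant current sink, sketched numerically.
# The op amp drives the pass transistor until the voltage across the sense
# resistor equals the setpoint, so I_load = V_set / R_sense.
# Values below are arbitrary examples, not taken from [electrobob]'s schematic.

R_SENSE = 0.1   # ohms, sense resistor between the pass transistor and ground
V_SET = 0.25    # volts, setpoint from a pot or a waveform generator

def load_current(v_set, r_sense):
    """Current forced through the load once the op amp's feedback loop settles."""
    return v_set / r_sense

def power_in_pass_device(v_supply, v_set, i_load):
    """Rough dissipation in the pass transistor: the supply voltage left over
    after the sense resistor drop, multiplied by the load current."""
    return (v_supply - v_set) * i_load

i = load_current(V_SET, R_SENSE)
print(f"{i:.2f} A sunk, {power_in_pass_device(12.0, V_SET, i):.1f} W in the transistor at 12 V")
```

The dissipation figure is the reason for the heat sinks: at any useful current, the pass transistor eats most of the supply’s power as heat.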
This constant current load also has the added feature of being able to interface with a waveform generator (an Analog Discovery, specifically) and as a result can connect and disconnect the load quickly. If you aren’t in need of an industrial-grade constant current sink and you have some spare parts lying around, this would be a great one to have around the work bench.
If we cast our minds back to the early years of the transistor, the year that is always quoted is 1947, during which a Bell Labs team developed the first practical germanium point-contact transistor. They would go on to be granted the Nobel Prize for their work in 1956, but the universal adoption of their invention was not an instantaneous process. Instead there would be a gradual change from vacuum to solid state that would span the 1950s and the 1960s, and even in the 1970s you might still have found mainstream devices on sale containing vacuum tubes.
To speed up this process, Bell Labs made every effort to publicize their invention. Thus we come to our subject today, their 1953 publicity film The Transistor, in which the electronics industry of the era is described and how each part of it might be revolutionized by the transistor is laid out.
We start with a look at a selection of electronic components, among which are a few transistors. The point contact device is already described as superseded by the junction transistor, but as well as those two we are shown a phototransistor and a junction tetrode, a now-obsolete design that had two base connections.
Unexpectedly we don’t dive straight into the world of transistors, but take a look back at the earlier years of the century to the development of vacuum electronics. We’re taken through the early development and operation of vacuum tubes, then their use in long-distance radio communications, through the advent of electronics in mass entertainment, and finally into the world of radar and microwave links. Only then do we return to the transistor, with a posed shot of [John Bardeen], [William Shockley], and [Walter Brattain] hard at work in a lab. The merits of the transistor as opposed to the tube are then set out, though we can’t help wondering whether they have confused a milliwatt and a microwatt when they describe the transistor as requiring only a millionth of a watt to operate.
Like any Moore’s Law-inspired race, the megapixel race in digital cameras in the late 1990s and into the 2000s was a harsh battleground for every manufacturer. With the development of the smartphone, it became a war on two fronts, with Samsung eventually cramming twenty megapixels into a handheld. Although no clear winner among consumer-grade cameras was ever announced (and Samsung ended up reducing their flagship phone’s cameras to sixteen megapixels for reasons we’ll discuss), it seems as though this race is over, fizzling out into a void where even marketing and advertising groups don’t readily venture. What happened?
The Technology
Moore’s Law predicts that transistor density on a given computer chip should double about every two years. A digital camera’s sensor is remarkably similar, using the same silicon to form charge-coupled devices or CMOS sensors (the same CMOS technology used in some RAM and other digital logic) to detect the photons that hit it. It’s not too far of a leap to see how Moore’s Law would apply to the number of photo detectors on a digital camera’s image sensor. Like transistor density, however, there’s also a limit to how many photo detectors will fit in a given area before undesirable effects start to appear.
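As a toy illustration of that doubling, starting from an assumed one-megapixel baseline rather than any real sensor’s figures:

```python
# Toy illustration of Moore's-Law-style doubling applied to photo detector count.
# The starting count and time spans are assumed values for illustration only.

def detectors_after(years, starting_count, doubling_period=2.0):
    """Photo detector count after some years, doubling every doubling_period years."""
    return starting_count * 2 ** (years / doubling_period)

start = 1_000_000  # assume a 1 MP sensor as the baseline
for years in (2, 6, 10):
    print(f"After {years} years: about {detectors_after(years, start) / 1e6:.0f} MP")
```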
Image sensors have come a long way since video camera tubes. In the ’70s, the charge-coupled device (CCD) replaced the cathode ray tube as the dominant video capturing technology. A CCD works by arranging capacitors into an array and biasing them with a small voltage. When a photon hits one of the capacitors, it is converted into an electrical charge, which can then be stored as digital information. While there are still specialty CCD sensors for some niche applications, most image sensors are now of the CMOS variety. CMOS uses photodiodes, rather than capacitors, along with a few other transistors for every pixel. CMOS sensors perform better than CCD sensors because each pixel has its own amplifier, which results in more accurate readout of the data. They are also faster, scale more readily, use fewer components in general, and use less power than a comparably sized CCD. Despite all of these advantages, however, there are still many limitations to modern sensors when more and more of them get packed onto a single piece of silicon.
While transistor density tends to be limited by quantum effects, image sensor density is limited by what is effectively a “noisy” picture. Noise can be introduced in an image as a result of thermal fluctuations within the material, so if the voltage threshold for a single pixel is so low that it falsely registers a photon when it shouldn’t, the image quality will be greatly reduced. This is more noticeable in CCD sensors (one effect is called “blooming”) but similar defects can happen in CMOS sensors as well. There are a few ways to solve these problems, though.
First, the voltage threshold can be raised so that random thermal fluctuations don’t rise above the threshold and trigger the pixels. In a DSLR, this typically means changing the ISO setting of the camera, where a lower ISO setting means that more light is required to trigger a pixel, but also that random fluctuations are less likely to register. From a camera designer’s point of view, however, a higher voltage generally implies greater power consumption and some speed considerations, so there are tradeoffs to make in this area.
Another reason that thermal fluctuations cause noise in image sensors is that the pixels themselves are so close together that they influence their neighbors. The answer here seems obvious: simply increase the area of the sensor, make the pixels of the sensor bigger, or both. This is a good solution if you have unlimited area, but in something like a cell phone this isn’t practical. This gets to the core of the reason that most modern cell phones seem to be practically limited somewhere in the sixteen-to-twenty megapixel range. If the pixels are made too small to increase megapixel count, the noise will start to ruin the images. If the pixels are too big, the picture will have a low resolution.
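A rough way to see that tradeoff in numbers: the photon signal a pixel collects scales with its area, while shot noise grows only as the square root of the signal, so shrinking the pixels to cram in more megapixels eats directly into the signal-to-noise ratio. A simple sketch, with an assumed illumination figure, makes the point:

```python
# Rough shot-noise illustration: signal scales with pixel area, noise with sqrt(signal),
# so smaller pixels give a worse signal-to-noise ratio. All numbers are assumed.
import math

PHOTONS_PER_UM2 = 100  # assumed photons collected per square micron during the exposure

def snr_for_pixel(pitch_um):
    """Shot-noise-limited SNR for a square pixel of the given pitch in microns."""
    signal = PHOTONS_PER_UM2 * pitch_um ** 2  # collected photons scale with area
    noise = math.sqrt(signal)                 # shot noise
    return signal / noise

for pitch in (2.0, 1.4, 1.0, 0.8):
    print(f"{pitch:.1f} um pixel: SNR ~ {snr_for_pixel(pitch):.0f}")
```

Halving the pixel pitch quarters the collected light and halves the SNR, which is exactly the wall the megapixel race ran into on phone-sized sensors.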
There are some non-technological ways of increasing the megapixel count for an image as well. For example, a panoramic image will have a megapixel count much higher than that of the camera that took the picture, simply because each part of the panorama has the full megapixel count. It’s also possible to reduce noise in a single frame of any picture by using lenses that collect more light (lenses with a lower f-number), which allows the photographer to use a lower ISO setting to reduce the camera’s sensitivity.
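As a hypothetical example of how stitching inflates the pixel count: a single row of overlapping frames from a sixteen-megapixel camera quickly adds up to far more than sixteen megapixels in the final panorama. The frame size and overlap below are assumed values:

```python
# Hypothetical stitching arithmetic: a panorama's megapixel count grows with the
# number of frames, minus whatever is lost to overlap. All figures are assumed.

FRAME_WIDTH = 4920    # pixels, roughly a 16 MP frame (assumed)
FRAME_HEIGHT = 3264
OVERLAP = 0.30        # 30 % overlap between neighbouring frames (assumed)

def panorama_megapixels(frames):
    """Approximate megapixel count of a single-row panorama stitched from N frames."""
    stitched_width = FRAME_WIDTH * (1 + (frames - 1) * (1 - OVERLAP))
    return stitched_width * FRAME_HEIGHT / 1e6

for n in (1, 4, 8):
    print(f"{n} frames -> about {panorama_megapixels(n):.0f} MP")
```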
Gigapixels!
Of course, if you have unlimited area you can make image sensors of virtually any size. There are some extremely large, expensive cameras called gigapixel cameras that can take pictures of unimaginable detail. Their size and cost are limiting factors for consumer devices, though, and as such they are generally used for specialty purposes only. The largest image sensor ever built has a surface of almost five square meters and is the size of a car. The camera will be put to use in 2019 in the Large Synoptic Survey Telescope in South America, where it will capture images of the night sky with its 8.4 meter primary mirror. If this were part of the megapixel race in consumer goods, it would certainly be the winner.
With all of this being said, it becomes obvious that there are many more considerations in a digital camera than just the megapixel count. With so many other facets to a camera, such as physical sensor size, lenses, camera settings, post-processing capabilities, and filters, the megapixel number was essentially an easy way for marketers to advertise the claimed superiority of their products until the practical limits of image sensors were reached. Beyond a certain point, more megapixels doesn’t automatically translate into a better picture. The megapixel count can still matter, but there are plenty of ways to make up for a lower one if you have to. For example, images with high dynamic range are becoming the norm even in cell phones, which also helps eliminate the need for a flash. Whatever you decide, though, if you want to start taking great pictures don’t worry about specs; just go out and take some photographs!
(Title image: VISTA gigapixel mosaic of the central parts of the Milky Way, produced by European Southern Observatory (ESO) and released under Creative Commons Attribution 4.0 International License. This is a scaled version of the original 108,500 x 81,500, 9-gigapixel image.)