[JBumstead] didn’t want an ordinary microscope. He wanted one that would show the big picture, and not just in a euphemistic sense, either. The problem, though, is one of resolution. With the same optics, higher resolution typically means a narrower field of view, which makes sense, right? The more you zoom in, the less area you can see. His solution was to build a microscope from a conventional camera and a motion stage that captures multiple high-resolution photographs, which are then stitched together into a single image. This lets his microscope take a picture of a 90x60 mm area at a resolution of about 15 μm. In theory, the resolution might be as good as 2 μm, but it is hard to measure resolution accurately at that scale.
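For the curious, the stitching step is the easy part these days. Here’s a minimal sketch using OpenCV’s high-level Stitcher in scans mode; this isn’t [JBumstead]’s actual code, and the tile filenames are made up:

    # Minimal sketch: stitch a grid of overlapping macro shots into one image.
    # Filenames are hypothetical; assumes neighboring tiles overlap ~30% so
    # the feature matcher has something to lock onto.
    import cv2
    import glob

    tiles = [cv2.imread(path) for path in sorted(glob.glob("tiles/*.jpg"))]

    # SCANS mode skips the rotational model used for handheld panoramas and
    # assumes the camera translated over a flat subject, as on a motion stage.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, mosaic = stitcher.stitch(tiles)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("mosaic.png", mosaic)
    else:
        print(f"Stitching failed with status {status}")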
As an Arduino project, this isn’t that difficult. It’s akin to a plotter or the XY table of a 3D printer: just some stepper motors and linear motion hardware. The base, however, needs to be very stable. We learned a lot from the optics side of the project, though.
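The capture routine itself is simple to sketch, too: a serpentine raster over the subject, pausing to shoot at each grid position. In the rough Python below, the grid size and step distances are placeholders, and move_stage/trigger_shutter stand in for whatever your stage firmware actually exposes:

    # Serpentine (boustrophedon) scan path for an XY stage: raster across a
    # row, step down, raster back. Numbers are placeholders, not the real rig's.
    COLS, ROWS = 12, 8                 # tiles across and down the capture area
    STEP_X_MM, STEP_Y_MM = 8.0, 8.0    # stage travel between shots (with overlap)

    def scan_positions(cols, rows, dx, dy):
        for r in range(rows):
            xs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            for c in xs:
                yield (c * dx, r * dy)   # (x_mm, y_mm) to send to the steppers

    for x, y in scan_positions(COLS, ROWS, STEP_X_MM, STEP_Y_MM):
        # move_stage(x, y); trigger_shutter()   # hypothetical hardware calls
        print(f"shoot at x={x:.1f} mm, y={y:.1f} mm")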
Like any Moore’s Law-inspired race, the megapixel race in digital cameras of the late 1990s and 2000s was a harsh battleground for every manufacturer. With the development of the smartphone, it became a war on two fronts, with Samsung eventually cramming twenty megapixels into a handheld. Although no clear winner among consumer-grade cameras was ever declared (and Samsung ended up dropping its flagship phone’s cameras back to sixteen megapixels, for reasons we’ll discuss), the race seems to be over, fizzling out into a void where even marketing and advertising groups don’t readily venture. What happened?
The Technology
Moore’s Law, briefly stated, predicts that the transistor density of a computer chip will double about every two years. A digital camera’s sensor is remarkably similar, using the same silicon to form charge-coupled devices or CMOS sensors (the same CMOS technology used in some RAM and other digital logic) that detect the photons hitting them. It’s not much of a leap to see how Moore’s Law would apply to the number of photodetectors on a digital camera’s image sensor. As with transistor density, however, there’s a limit to how many photodetectors will fit in a given area before undesirable effects start to appear.
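To put numbers on that intuition: if pixel counts really had doubled every two years from a typical early-2000s compact camera, the arithmetic runs away quickly (the 1.3 MP starting point is just illustrative):

    # Density doubling every two years, from an illustrative 1.3 MP baseline.
    base_mp = 1.3
    for years in range(0, 13, 2):
        print(f"{2000 + years}: ~{base_mp * 2 ** (years / 2):.1f} MP")
    # By 2012 this predicts ~83 MP, far past where real phones topped out.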
Image sensors have come a long way since video camera tubes. In the ’70s, the charge-coupled device (CCD) replaced the camera tube as the dominant video capture technology. A CCD works by arranging capacitors into an array and biasing them with a small voltage. Photons striking a capacitor build up an electrical charge, which can then be read out and stored as digital information. While specialty CCD sensors survive in some niche applications, most image sensors are now of the CMOS variety. CMOS uses photodiodes, rather than capacitors, along with a few additional transistors for every pixel. CMOS sensors perform better than CCDs because each pixel has its own amplifier, which makes for more accurate readout. They are also faster, scale more readily, use fewer components overall, and draw less power than a comparably sized CCD. Despite all of these advantages, though, modern sensors still run into hard limits as more and more pixels get packed onto a single piece of silicon.
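The readout difference is easier to see in a toy model than in prose. The sketch below is illustrative only (the “photoelectron counts” are random numbers): the CCD shifts its charge out row by row, bucket-brigade style, while the CMOS side simply addresses a pixel directly:

    # Toy model of the two readout schemes, not a physical simulation.
    import numpy as np

    frame = np.random.poisson(lam=50, size=(4, 4))  # fake photoelectron counts

    # CCD-style readout: rows fall into a serial register one at a time,
    # then shift out pixel by pixel, so readout order is fixed.
    ccd_out = []
    charge = frame.copy()
    while charge.size:
        row, charge = charge[-1], charge[:-1]
        ccd_out.extend(row[::-1])

    # CMOS-style readout: every pixel converts locally and is addressable.
    cmos_pixel = frame[2, 1]

    print(ccd_out, cmos_pixel)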
While transistor density tends to be limited by quantum effects, image sensor density is limited by what is effectively a “noisy” picture. Noise can be introduced in an image as a result of thermal fluctuations within the material, so if the voltage threshold for a single pixel is so low that it falsely registers a photon when it shouldn’t, the image quality will be greatly reduced. This is more noticeable in CCD sensors (one effect is called “blooming”) but similar defects can happen in CMOS sensors as well. There are a few ways to solve these problems, though.
First, the voltage threshold can be raised so that random thermal fluctuations don’t rise above it and trigger pixels. In a DSLR, this typically means changing the camera’s ISO setting: a lower ISO setting means more light is required to trigger a pixel, but random fluctuations are less likely to register. From a camera designer’s point of view, however, a higher threshold voltage generally implies greater power consumption and some speed considerations, so there are tradeoffs to make in this area.
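You can see that tradeoff in numbers with a quick simulation. Treating a pixel’s dark thermal noise as Gaussian (a simplification) and sweeping the threshold shows how fast false hits fall off as the threshold rises, which is what a lower ISO setting buys you at the cost of sensitivity:

    # How often does pure dark noise exceed the trigger threshold?
    import numpy as np

    rng = np.random.default_rng(0)
    dark_noise = rng.normal(loc=0.0, scale=1.0, size=1_000_000)  # thermal noise, in sigmas

    for threshold in (1.0, 2.0, 3.0, 4.0):
        false_hits = np.mean(dark_noise > threshold)
        print(f"threshold {threshold:.0f} sigma -> {false_hits:.4%} of dark pixels fire")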
Another reason that thermal fluctuations cause noise in image sensors is that the pixels themselves are so close together that they influence their neighbors. The answer here seems obvious: increase the area of the sensor, make the pixels bigger, or both. This is a good solution if you have unlimited area, but in something like a cell phone it isn’t practical. This gets to the core of why most modern cell phones seem to top out somewhere in the sixteen-to-twenty megapixel range: make the pixels too small in pursuit of megapixel count and noise starts to ruin the images; make them too big and the picture has low resolution.
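The pixel-size half of the story is just photon counting: photons collected scale with pixel area, and Poisson statistics put the signal-to-noise ratio at the square root of the photon count. A few lines of arithmetic (with an illustrative photon flux) show why shrinking the pitch gets expensive fast:

    # Shot-noise arithmetic: halving the pixel pitch quarters the photons
    # collected and halves the SNR at the same exposure.
    import math

    photons_per_um2 = 100            # illustrative flux for one exposure
    for pitch_um in (2.0, 1.4, 1.0, 0.7):
        photons = photons_per_um2 * pitch_um ** 2
        snr = math.sqrt(photons)     # Poisson: SNR = N / sqrt(N) = sqrt(N)
        print(f"{pitch_um} um pixel: ~{photons:.0f} photons, SNR ~ {snr:.0f}")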
There are some non-technological ways of increasing the megapixel count of an image as well. For example, a panoramic image will have a megapixel count much higher than that of the camera that took it, simply because each frame of the panorama contributes the camera’s full megapixel count. It’s also possible to reduce noise in any single frame by using lenses that collect more light (lenses with a lower f-number), which lets the photographer drop to a lower ISO setting and reduce the camera’s sensitivity.
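The panorama arithmetic is simple enough to sketch. With some assumed numbers (ten frames from a sixteen-megapixel sensor with 30% overlap between neighbors), the effective count blows well past the sensor’s native resolution:

    # Rough panorama arithmetic: each frame adds its full resolution,
    # minus whatever is shared with the previous frame. Numbers assumed.
    frames = 10          # shots across the scene
    sensor_mp = 16       # megapixels per frame
    overlap = 0.30       # fraction of each frame shared with its neighbor

    effective_mp = sensor_mp * (1 + (frames - 1) * (1 - overlap))
    print(f"~{effective_mp:.0f} MP panorama from {frames} frames of {sensor_mp} MP")
    # -> ~117 MP, far beyond a single exposure from the same sensor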
Gigapixels!
Of course, if you have unlimited area you can make image sensors of virtually any size. There are some extremely large, expensive cameras called gigapixel cameras that can take pictures of unimaginable detail. Their size and cost keep them out of consumer devices, though, so they are generally reserved for specialty purposes. The largest digital camera ever built, with a 3.2-gigapixel focal plane, is the size of a car. It will be put to use in 2019 in the Large Synoptic Survey Telescope in South America, where it will capture images of the night sky gathered by the telescope’s 8.4 meter primary mirror. If this were part of the megapixel race in consumer goods, it would certainly be the winner.
With all of this said, it becomes obvious that there is much more to a digital camera than its megapixel count. With so many other facets in play (physical sensor size, lenses, camera settings, post-processing capabilities, filters, and so on), the megapixel number was essentially an easy way for marketers to advertise the claimed superiority of their products until the practical limits of image sensors were reached. Beyond a certain point, more megapixels doesn’t automatically translate into a better picture. The megapixel count can matter, as we’ve seen, but there are plenty of ways to make up for a lower one if you have to. For example, high-dynamic-range images are becoming the norm even on cell phones, which also helps eliminate the need for a flash. Whatever you decide, though, if you want to start taking great pictures, don’t worry about specs; just go out and take some photographs!
(Title image: VISTA gigapixel mosaic of the central parts of the Milky Way, produced by European Southern Observatory (ESO) and released under Creative Commons Attribution 4.0 International License. This is a scaled version of the original 108,500 x 81,500, 9-gigapixel image.)
The presence of a camera in this image may be a bit confusing, since we’re calling it a scanner. What’s actually going on is that macro images of this piece of art are being captured automatically. The multiple shots will later be assembled into one fascinatingly high-resolution image. The CNC scanner rig is [Charlie Romer’s] summer project.
Unfortunately [Charlie] hasn’t yet collected all the information on the project in one place. After the break you’ll find more images, as well as a few demo videos. The best place to start is probably his proof-of-concept from this spring, which shows a single-axis CNC mount for the camera that captures an entire row of images. The assembled photo from that test is shown below. We believe the faint yellow dots in the macro part of the example are fingerprints purposefully left by the printer, a technique called printer steganography, meant to help prevent forgery.
The larger rig uses movement on two axes. The idea is that the artwork will be positioned so precisely that manual focus set at one point will hold at every point in the capture routine. He’s using a lamp as a light source for now, but we’re sure he’ll upgrade to something like a ring light as the project continues.
[Daniel] wrote in to show us the project his group has been working on: a massive display wall consisting of 28 projectors and 30 computers. With a resolution of 7168×3072, viewing a 13.3 gigapixel image is a treat. That treat is made even sweeter by the fact that navigating the image is done multitouch-style, with a touchless system built from webcams. We’ve seen lots of projects come out of the NUI Group with similar interfaces, but none that used webcams like this. Usually, the webcam detects some kind of interaction between the person and an infrared light source. Maybe that is happening here and we just don’t see it.