Recording Video In The Era Of CRTs: The Video Camera Tube

We have all watched videos of concerts and events dating back to the 1950s, but probably never really wondered how this was done. After all, recording moving images on film had been done since the late 19th century. Surely that's how it continued to be done until CCD image sensors arrived in the 1980s? Nope.

Although film was still commonly used into the 1980s, with movies and even entire television series such as Star Trek: The Next Generation being shot on film, the main weakness of film is the need to move the physical medium around. Imagine the live video feed from the Moon in 1969 if film-based recorders had been the only option.

Let’s look at the video camera tube: the almost forgotten technology that enabled the broadcasting industry.

It All Starts With Photons

The principle behind recording on film isn’t that much different from that of photography. The light intensity is recorded in one or more layers, depending on the type of film. Chromogenic (color) film for photography generally has three layers, sensitive to red, green, and blue respectively. The more intense the light in that part of the spectrum, the more strongly the corresponding layer is affected, which shows up when the film is developed. A very familiar type of film which uses this principle is Kodachrome.

While film was excellent for still photography and movie theaters, it did not fit with the concept of television. Simply put, film doesn’t broadcast. Live broadcasts were very popular on radio, and television would need to be able to distribute its moving images faster than spools of film could be shipped around the country, or the world.

An image dissector tube

Considering the state of the art in electronics during the early decades of the 20th century, some form of cathode-ray tube was the obvious solution for converting photons into an electric current that could be interpreted, broadcast, and conceivably stored. This idea for a so-called video camera tube became the focus of much research during those decades, leading to the invention of the image dissector in the 1920s.

The image dissector used a lens to focus an image onto a layer of photosensitive material (e.g. caesium oxide) which emits photoelectrons in proportion to the intensity of the light striking it. The photoelectrons from one small area at a time are then steered into an electron multiplier to obtain a reading for that section of the image.

Cranking Up the Brightness

Iconoscope diagram, from Vladimir Zworykin’s 1931 US patent.

Although image dissectors basically worked as intended, the low light sensitivity of the device resulted in poor images. Only with extreme levels of illumination could one make out the scene, rendering the device unusable for most purposes. This issue would not be fixed until the invention of the iconoscope, which used the concept of a charge storage plate.

The iconoscope added a silver-based capacitor to the photosensitive layer, using mica as the insulating layer between small globules of silver covered with the photosensitive material and a layer of silver on the back of the mica plate. As a result, the silver globules would charge up with photoelectrons, after which each of these globule ‘pixels’ could be individually scanned by the cathode ray. By scanning these charged elements, the resulting output signal was much improved compared to the image dissector, making the iconoscope the first practical video camera tube upon its introduction in the early 1930s.

It still had a rather noisy output, however, with analysis by EMI showing that it had an efficiency of only around 5% because secondary electrons disrupted and neutralized the stored charges on the storage plate during scanning. The solution was to separate the charge storage from the photo-emission function, creating what is essentially a combination of an image dissector and iconoscope.


In this ‘image iconoscope’, or super-Emitron as it was called in the UK, a photocathode captures the photons from the image, with the resulting photoelectrons directed at a target that generates secondary electrons and amplifies the signal. The target plate in the UK’s super-Emitron is similar in construction to the charge storage plate of the iconoscope, with a low-velocity electron beam scanning the stored charges to prevent secondary electrons. The super-Emitron was first used by the BBC in 1937, for an outdoor broadcast of the King laying a wreath during the Armistice Day ceremony.

The image iconoscope’s target plate omits the granules of the super-Emitron, but is otherwise identical. It made its big debut during the 1936 Berlin Olympic Games, and subsequent commercialization by the German company Heimann made the image iconoscope (‘Super-Ikonoskop’ in German) the broadcast standard until the early 1960s. One challenge with commercializing the Super-Ikonoskop was tube life: during the 1936 Berlin Olympics, each tube would last only about a day before its cathode wore out.

Commercialization

Schematic diagram of an orthicon video camera tube.

American broadcasters would soon switch from the iconoscope to the image orthicon, which shared many properties with the image iconoscope and super-Emitron and would be used in American broadcasting from 1946 to 1968. Like the earlier orthicon and an intermediate version of the Emitron (akin to the iconoscope) called the Cathode Potential Stabilized (CPS) Emitron, it used a low-velocity scanning beam to prevent secondary electrons.

Between the image iconoscope, super-Emitron, and image orthicon, television broadcasting had reached a level of quality and reliability that enabled its skyrocketing popularity during the 1950s. More and more people bought a television set for the home, accompanied by an ever-increasing amount of content, ranging from news to various types of entertainment. This, along with new uses in science and research, would drive the development of a new type of video camera tube: the vidicon.

The vidicon was developed during the 1950s as an improvement on the image orthicon. It used a photoconductor as the target, often selenium for its photoconductivity, though Philips would use lead(II) oxide in its Plumbicon range of vidicon tubes. In this type of device, the charge induced by the photons in the semiconductor material transfers to the other side of the layer, where it is read out by a low-velocity scanning beam, not unlike in an image orthicon or image iconoscope.

Although cheaper to manufacture and more robust in use than earlier video camera tubes, vidicons do suffer from lag, due to the time required for the charge to make its way through the photoconductive layer. They make up for this with generally better image quality and the absence of the halo effect: the ‘splashing’ of secondary electrons around points of extreme brightness in a scene.

The video cameras that made it to the Moon during the US Apollo program were RCA-developed, vidicon-based units, using a custom encoding, and eventually a color video camera. Though many American households still had black-and-white television sets at the time, Mission Control got a live color view of what the astronauts were doing on the Moon. Eventually color cameras and color televisions would become commonplace back on Earth as well.

To Add Color

Video transmission from the Apollo 10 spacecraft on 18 May 1969.

Bringing color to both film and video cameras was an interesting challenge. After all, to record a black-and-white image, one only has to record the intensity of the photons at each point in time. To record the color information in the scene, one has to separately record the intensity of photons within particular wavelength ranges.

In Kodachrome film, this was solved by having three layers, one for each color. In terrestrial video cameras, a dichroic prism split the incoming light into these three ranges, and each was recorded separately by its own tube. For the Apollo missions, the color cameras used a mechanical field-sequential color system: a spinning color wheel in front of a single tube, capturing one color whenever the matching filter was in place.
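To make the field-sequential idea concrete, here is a minimal sketch (Python with NumPy, not from the actual Apollo ground equipment) of how three successive monochrome fields, each captured behind a red, green, or blue segment of the wheel, could be reassembled into a single color frame; the function name, field labels, and the simple per-channel averaging are assumptions for illustration.

```python
import numpy as np

def assemble_field_sequential(fields):
    """Combine successive monochrome fields into one RGB frame.

    `fields` is a list of (filter_label, 2D ndarray) tuples in capture
    order; the labels and frame layout are assumptions for this sketch,
    not the actual Apollo encoding."""
    height, width = fields[0][1].shape
    frame = np.zeros((height, width, 3), dtype=np.float32)
    counts = np.zeros(3, dtype=np.int32)
    channel = {"R": 0, "G": 1, "B": 2}
    for label, field in fields:
        ch = channel[label]
        frame[..., ch] += field.astype(np.float32)
        counts[ch] += 1
    # Average any channel that appears more than once in the sequence.
    for ch in range(3):
        if counts[ch]:
            frame[..., ch] /= counts[ch]
    return frame.clip(0, 255).astype(np.uint8)

# Example: three consecutive fields from a single monochrome tube.
fields = [(c, np.random.randint(0, 256, (480, 640), dtype=np.uint8))
          for c in ("R", "G", "B")]
print(assemble_field_sequential(fields).shape)  # (480, 640, 3)
```

Because the three fields are captured at slightly different moments, any motion between them smears into color fringes, a well-known artifact of field-sequential color.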

So Long and Thanks for All the Photons

Eventually a better technology comes along. In the case of the vidicon, this was the invention first of the charge-coupled device (CCD) sensor, and later of the CMOS image sensor. These eliminated the need for the cathode-ray tube, using silicon as the photosensitive layer.

But the CCD didn’t take over instantly. The early mass-produced CCD sensors of the early 1980s weren’t considered of sufficient quality to replace the tubes in TV studio cameras, and were relegated to camcorders, where compact size and lower cost were more important. During the 1980s, CCDs would improve massively, and with the advent of CMOS sensors in the 1990s, the era of the video camera tube would quickly draw to a close; today just one company still manufactures Plumbicon vidicon tubes.

Though now mostly forgotten, there is no denying that video camera tubes have left a lasting impression on today’s society and culture, enabling much of what we consider to be commonplace.

Tangential Oscillating Cutting Knife Makes Parts From The Ups And Downs

If you thought using a utility knife manually was a drag, you’re not alone. [luben111] took some initiative to take the wear and tear off your hands and put it into a custom machine tool they call TOCK, or Tangential Oscillating Cutting Knife. TOCK bolts onto your typical CNC router, giving it the ability to make short work of thin materials like cardboard. Rather than apply a constant downward pressure, however, TOCK oscillates vertically at high speed, perforating the material while cutting through it at a respectable clip.

TOCK’s oscillations are driven by a radially symmetric cam mechanism, allowing the blade to pivot a full circle while still oscillating. While traditional inexpensive methods for bolting a blade to a CNC machine passively swivel along the path they’re directed, [luben111] has taken the generous extra step of powering that axis, commanding the blade to actively rotate in the cutting direction with a custom script that converts PLT files to G-code. The net result is a tool that preserves a tremendous amount of detail in cumbersome thick materials, like cardboard. Best of all, the entire setup is documented on Thingiverse with CAD files and light instructions. A few folks have even gone so far as to reproduce their own!
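As a rough illustration of that idea (this is not [luben111]’s actual script), the sketch below walks a polyline, computes the tangent angle of each segment, and emits G-code that turns a hypothetical rotary A axis to keep the blade aligned with the cut before each move; the axis letters, depths, and feed rate are assumptions you would adapt to your own machine.

```python
import math

def polyline_to_gcode(points, feed=1500.0, safe_z=5.0, cut_z=-1.0):
    """Emit G-code for a tangentially steered knife following `points`,
    a list of (x, y) tuples in mm. The rotary A axis is assumed to carry
    the blade; adjust axis letters and depths for your own machine."""
    lines = ["G21", "G90", f"G0 Z{safe_z:.3f}"]
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f}")
    lines.append(f"G1 Z{cut_z:.3f} F{feed:.0f}")
    for (xa, ya), (xb, yb) in zip(points, points[1:]):
        # Point the blade along the upcoming segment before cutting it.
        angle = math.degrees(math.atan2(yb - ya, xb - xa))
        lines.append(f"G1 A{angle:.2f} F{feed:.0f}")
        lines.append(f"G1 X{xb:.3f} Y{yb:.3f} F{feed:.0f}")
    lines.append(f"G0 Z{safe_z:.3f}")
    return "\n".join(lines)

# Example: a 40 mm square cut out of cardboard.
square = [(0, 0), (40, 0), (40, 40), (0, 40), (0, 0)]
print(polyline_to_gcode(square))
```

A real post-processor would also handle angle wrap-around and curved segments, but the core job of steering the blade along the local cutting direction comes down to this tangent calculation.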

It’s great to see some dabbling in various disciplines to produce a working machine tool. As far as knives go, we’re starting to see a good spread of other utility knife augmentations and use cases, whether that’s a traditional CNC retrofit or a solid attempt at a homebrew ultrasonic mod.


Gripper Uses Belts To Pinch And Grasp

For all the work done since the dawn of robotics, there is still no match for the human hand in terms of its dexterity and adaptability. Researchers at the IRIM Lab at Koreatech are a step closer with their ingenious BLT gripper, which can pinch with precision or grasp a larger object with evenly distributed force. (Video embedded below.)

The three-fingered gripper is technically called a “belt and link actuated transformable adaptive gripper with active transition capability”. Each finger is an interesting combination of a rigid “fingertip” and actuation link, plus a belt as the grasping surface. The actuation link has a small gearbox at its base to open and close the hand, and the hinge with the “fingertip” is spring-loaded to the open position. A flexible belt stretches between the fingertip and the base of the gripper, and can be tensioned to actuate the fingertip for pinching, or to provide even force across the inside of the gripper for grasping. Two of the fingers can also rotate at the base to give various gripper configurations. This allows the gripper to be used in various ways, including smoothly shifting between pinching and grasping without dropping an object.

We love the relative simplicity of the mechanism, and can see it being used for general robotics and prosthetic hands, especially if force sensing is integrated. The mechanism should be fairly easy to replicate using 3D printed components, a piece of toothed belt, and two cheap servos, so get cracking!

Magnets Make Prototyping E-Textiles A Snap

How do you prototype e-textiles? Any way you can that doesn’t drive you insane or waste precious conductive thread. We can’t imagine an easier way to breadboard wearables than this appropriately-named ThreadBoard.

If you’ve never played around with e-textiles, they can be quite fiddly to prototype. Of course, copper wires are floppy too, but at least they will take a shape if you bend them. Conductive thread just wants to lie there, limp and unfurled, mocking your frazzled state with its frizzed ends. The magic of ThreadBoard is in the field of magnetic tie points that snap the threads into place wherever you drape them.

The board itself is made of stiff felt, and the holes can be laser-cut or punched to fit your disc magnets. These attractive tie-points are held in place with duct tape on the back side of the felt, though classic double-stick tape would work, too. We would love to see somebody make a much bigger board with power and ground rails, or even make a wearable ThreadBoard on a shirt.

Even though [chrishillcs] is demonstrating with a micro:bit, any big-holed board should work, and he plans to expand in the future. For now, bury the needle and power past the break to watch [chris] build a circuit and light an LED faster than you can say neodymium.

The fiddly fun of e-textiles doesn’t end with prototyping — implementing the final product is arguably much harder. If you need absolutely parallel lines without a lot of hassle, put a cording foot on your sewing machine.


Be Wary Of Radioactive Bracelets And Similar

Before you start cutting up that ‘negative ion’ health bracelet or personal massager, be aware that these are highly likely to contain thorium oxide or similar radioactive powder, as this research video by [Justin Atkin] (also embedded after the break) over at The Thought Emporium YouTube channel shows. Even ignoring the irony that thorium oxide is primarily an alpha emitter (helium nuclei) and thus not a ‘negative ion’ source (that would require beta decay, which emits electrons), thorium oxide isn’t something you want on your skin, or inside your lungs.

These bracelets and similar items appear to embed grains of thorium oxide into the usual silicone polymer bracelet material, without any measures to prevent grains from falling out over time. More dangerous are items such as the massage wand, which is essentially a metal tube filled with thorium oxide powder. This is not the kind of item you want to open on your kitchen table and have it spill everywhere.

Considering that these items are readily available for sale on Amazon, eBay, and elsewhere, giving items like these a quick check with the ol’ Geiger counter before ripping them open or cutting them up for a project seems like a healthy idea. Nobody wants to cause a radiological incident in their workshop, after all.


OpenChronograph Lets You Roll Your Own Smart Watch

At first, smartwatches were like tiny tablets or phones that you wore on your wrist. More recently, though, we have noticed more “hybrid” smartwatches that look like a regular watch but use their hands to communicate data. For example, you might hear a text message come in and then see the hand swing to 1, indicating it is from your significant other. Want to roll your own? The OpenChronograph project should be your first stop.

The boards are drop-in replacements for several Fossil and Skagen watch boards (keep in mind Fossil and Skagen are really the same company). There’s an Arduino-compatible ATmega328P, an ultra-low-power real-time clock, a magnetometer, a pressure sensor, a temperature sensor, and support for a total of three hands. You can even create PCB artwork that will act as the watch face using Python.
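As a rough idea of what scripted face artwork can involve (this is not the OpenChronograph toolchain itself, whose scripts live in the project repository), here is a small Python sketch that computes minute and hour tick marks and writes them out as an SVG, which could then be imported into a PCB tool as artwork; the dimensions and file name are assumptions.

```python
import math

def watch_face_svg(diameter_mm=36.0, minor_len=1.2, major_len=2.5):
    """Generate simple SVG artwork for a watch face: 60 minute ticks,
    with longer marks at the twelve hour positions. All dimensions are
    arbitrary placeholders."""
    radius = diameter_mm / 2.0
    cx = cy = radius
    ticks = []
    for i in range(60):
        angle = math.radians(i * 6)  # 6 degrees per minute tick
        length = major_len if i % 5 == 0 else minor_len
        x1 = cx + (radius - length) * math.sin(angle)
        y1 = cy - (radius - length) * math.cos(angle)
        x2 = cx + radius * math.sin(angle)
        y2 = cy - radius * math.cos(angle)
        ticks.append(f'<line x1="{x1:.2f}" y1="{y1:.2f}" x2="{x2:.2f}" '
                     f'y2="{y2:.2f}" stroke="black" stroke-width="0.3"/>')
    body = "\n  ".join(ticks)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{diameter_mm}mm" '
            f'height="{diameter_mm}mm" viewBox="0 0 {diameter_mm} {diameter_mm}">\n'
            f'  {body}\n</svg>')

with open("face_artwork.svg", "w") as f:
    f.write(watch_face_svg())
```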


LoRa Mesh Network With Off-the-Shelf Hardware

An ideal application for mesh networking is off-grid communication; when there’s no cellular reception and WiFi won’t reach, wide-area technologies like LoRa can be used to create ad hoc wireless networks. Whether you’re enjoying the outdoors with friends or conducting a rescue operation, a cheap and small gadget that will allow you to create such a network and communicate over it would be a very welcome addition to your pack.

That’s exactly the goal of the Meshtastic project, which aims to take off-the-shelf ESP32 LoRa development boards and turn them into affordable mesh network communicators. All you need to do is buy one of the supported boards, install the firmware, and start meshing. An Android application that lets you use the mesh network to send basic text messages is now available as an alpha release, and eventually you’ll be able to run Signal over the LoRa link.

Navigating to another node in the network.

Developer [Kevin Hester] tells us that these are still the very early days, and there’s plenty of work yet to be done. In fact, he’s actively looking to bring a few like-minded individuals onto the project. So if you have experience with the ESP32 or mobile application development, and conducting private communications over long-range wireless networks sounds like your kind of party, this might be your lucky day.

From a user’s perspective, this project is extremely approachable. You don’t need to put any custom hardware together, outside of perhaps 3D printing a case for your particular board. The first time around you’ll need to flash the firmware with esptool.py, but after that, [Kevin] says future updates can be handled by the smartphone application.

Incidentally, the primary difference between the two supported boards is that the larger and more expensive one includes GPS. The mesh networking side of things will work with either board, but if everyone in your group has the GPS-equipped version, each user will be able to see the position of everyone else in the network.

This isn’t the first time we’ve seen LoRa used to establish off-grid communications, and it surely won’t be the last. The technology is perfect for getting devices talking where there isn’t any existing infrastructure, and we’re excited to see more examples of how it can be used in this capacity.