The nuclear age changed steel, and for decades we had to pay the price for it. The first tests of the atomic bomb were a milestone in many ways, leaving a mark both on history and on the surface of the Earth. The level of background radiation in the air increased, and this affected steel production: steel made since 1945 has had elevated levels of radioactivity. This can be a problem for sensitive instruments, which created a demand for so-called low background steel, made before the Trinity test.
Steel has traditionally been produced with the Bessemer process, which takes molten pig iron and blasts air through it. As air is pumped through the melt, the oxygen reacts with impurities, which are drawn out either as gas or as slag that is then skimmed off. The problem is that atmospheric air carries radioactive contaminants of its own, which are deposited into the steel, yielding a slightly radioactive material. Since the late 1960s, steel production has used a modified technique called BOS, or Basic Oxygen Steelmaking, in which pure oxygen is blown through the iron. This is better, but radioactive material can still slip through. In particular, we're interested in cobalt, which dissolves very easily in steel, so it isn't removed by either the Bessemer or BOS methods. Sometimes cobalt is intentionally added to steel, though not the radioactive isotope, and only for very specialized purposes.
Recycling is another reason that modern steel stays radioactive. We’ve been great about recycling steel, but the downside is that some of those impurities stick around.
Why Do We Need Low Background Steel?
Imagine you have a sensor that needs to be extremely sensitive to low levels of radiation. This could be a Geiger counter, a medical device, or a vehicle destined for space exploration. If its container is slightly radioactive, it creates an unacceptable noise floor. That's where Low Background Steel comes in.
So where do you get steel, a man-made material, that was made before 1945? Primarily from the ocean, in ships sunk during WWII. They weren't exposed to atomic-age air when they were made, and they haven't been recycled and mixed with newer, radioactive steel. We literally cut the ships apart underwater, scrape off the barnacles, and reuse the steel.
Fortunately, this is a problem that's going away on its own, so the headline is really only appropriate as a great reference to a popular movie. After 1975, testing moved underground, reducing, but not eliminating, the amount of radiation pumped into the air. Thanks to the various treaties ending nuclear weapons testing, and to the short half-life of some of the radioactive isotopes, the background radiation in the air has been decreasing ever since. Cobalt-60 has a half-life of 5.26 years, which means that steel is getting less and less radioactive all on its own (Cobalt-60 from 1945 would now be at 0.008% of its original levels). The newer BOS technique exposes the steel to fewer impurities from the air, too. Eventually the need for special low background steel will be just a memory.
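That 0.008% figure falls straight out of the half-life formula. A minimal sketch, assuming the 5.26-year half-life above and roughly 72 years elapsed since 1945 (the elapsed time is an approximation, not a figure from the article):

```python
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioactive isotope left after t_years:
    N/N0 = (1/2) ** (t / half_life)."""
    return 0.5 ** (t_years / half_life_years)

# Cobalt-60: half-life of 5.26 years, roughly 72 years after the 1945 tests
frac = remaining_fraction(72, 5.26)
print(f"{frac:.4%} of the original Cobalt-60 remains")  # on the order of 0.008%
```

After one half-life the function returns exactly 0.5, and each additional 5.26 years halves the remainder again.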
Oddly enough, steel isn't the only thing that we've dragged from the bottom of the ocean. Ancient Roman lead has also played a part in modern sensing.
The start of World War II threw quantum theory research into disarray. Many European physicists left the continent altogether, and research moved across the ocean to the shores of the United States. The advent of the atomic bomb thrust American physicists into the spotlight, and they began to meet on Shelter Island to discuss the future of quantum theory. By this time one thing was certain: the Copenhagen interpretation of quantum theory had triumphed, and challenges to it had mostly died off.
This allowed physicists to focus on a different kind of problem. At that point, quantum theory could not deal with the transitional states of particles as they are created and destroyed. It was well known that when an electron came into contact with a positron, the two particles were annihilated, producing at least two photons of very high energy, known as gamma rays. On the flip side, gamma ray photons could spontaneously turn into electron-positron pairs.
No one could explain why this occurred. It had become obvious to the physicists of the day that a quantum version of Maxwell’s electromagnetic field theory was needed to explain the phenomenon. This would eventually give rise to QED, short for quantum electrodynamics. This is a severely condensed story of how that happened.
One of the keys to nuclear fission is sustaining a chain reaction. A slow chain reaction can provide clean power for a city, and a fast one can be used to create a weapon that will obliterate a city. These days, kids can learn about uranium and plutonium in high school. But just a few generations ago, the idea of splitting the atom was just a lofty goal for the brightest physicists and mathematicians who gathered at Los Alamos National Laboratory under the Manhattan Project.
Decoding the mysteries of nuclear fission required a great deal of experimentation and calculations. One bright physicist in particular made great strides on both fronts. That man was [Enrico Fermi], one of the fathers of the atomic bomb. Perhaps his greatest contribution to moving the research beyond the Manhattan Project was creating a handheld analog computer to do the math for him. This computational marvel is known as the FERMIAC.
What is Fission?
Nuclear fission occurs when a nucleus is split into fragments, a process that unleashes a great deal of energy. As a handful of neutrons travel through a reactor pile or other fissionable material, several outcomes are possible. Any one neutron collision might result in fission, which means there will be some number of new neutrons whose paths must be tracked. If fission does not occur, a neutron may simply scatter upon collision, which changes its speed and trajectory. Some neutrons might be absorbed by the material, and others will simply escape it. All of these possibilities depend on the makeup of the material being bombarded and the speed of the neutron.
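Those per-collision outcomes are exactly what the Monte Carlo method samples. A minimal sketch of following one neutron at a time, with entirely made-up outcome probabilities standing in for the real material data:

```python
import random

# Hypothetical per-collision outcome probabilities for some material;
# real values depend on the material and on the neutron's speed.
OUTCOMES = ["fission", "scatter", "absorb", "escape"]
WEIGHTS  = [0.15, 0.55, 0.20, 0.10]

def follow_neutron(rng: random.Random, max_collisions: int = 100) -> str:
    """Follow one neutron until it fissions, is absorbed, or escapes."""
    for _ in range(max_collisions):
        outcome = rng.choices(OUTCOMES, weights=WEIGHTS)[0]
        if outcome != "scatter":       # scattering just changes speed/direction
            return outcome
    return "absorb"                    # cap runaway histories

rng = random.Random(42)
histories = [follow_neutron(rng) for _ in range(10_000)]
print({o: histories.count(o) for o in OUTCOMES})
```

Tallying many such histories gives the statistical picture described below: the fraction of histories ending in fission hints at whether a chain reaction can be sustained.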
Every event that happens to a neutron becomes part of its genealogical history. If this history is recorded and analyzed, a statistical picture starts to emerge that accurately depicts how fissionable a given material is. [Fermi]'s computer facilitated the creation of such a picture by performing the mathematical grunt work of testing different materials, identifying which were most likely to sustain a reaction.
Before he left Italy and the looming threat of fascism, [Fermi] led a group of young scientists in Rome called the Via Panisperna boys. This group, which included future Los Alamos physicist [Emilio Segrè], ran many experiments in neutron transport. Their research proved that slow neutrons are much better candidates for fission than fast neutrons.
During these experiments, [Fermi] ran through the periodic table, determined to artificially irradiate every element until he got lucky. He never published anything regarding his methods for calculating the outcomes of neutron collisions. But when he got to Los Alamos, [Fermi] found that [Stanislaw Ulam] had also concluded that the same type of repeated random sampling was the key to building an atomic weapon.
The Monte Carlo Method: Shall We Play a Game?
[Ulam], a Polish-born mathematician who came to the US in 1935, developed his ideas about random sampling during an illness. While recuperating from encephalitis, he played game after game of solitaire. One day, he wondered about the probability of winning any one hand as laid out, and how best to calculate it. He believed that if he ran through enough games and kept track of the wins, the data would form a representative sample for modeling his chances of winning. Almost immediately, [Ulam] began mentally applying this method to problems in physics, and he proposed his ideas to physicist and fellow mathematician [John von Neumann].
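The core of [Ulam]'s insight fits in a few lines of code: play many random games, count the wins, and use the win rate as an estimate of the true probability. A toy stand-in for solitaire (the "game" here, drawing the top card of a shuffled deck and hoping for an ace, is purely illustrative):

```python
import random

def trial(rng: random.Random) -> bool:
    """One 'game': shuffle a 52-card deck, win if the top card is an ace."""
    deck = list(range(52))          # cards 0-3 stand in for the four aces
    rng.shuffle(deck)
    return deck[0] < 4

def monte_carlo_estimate(n_games: int, seed: int = 1) -> float:
    """Estimate the win probability from n_games random trials."""
    rng = random.Random(seed)
    wins = sum(trial(rng) for _ in range(n_games))
    return wins / n_games

est = monte_carlo_estimate(100_000)
print(est)   # lands near the exact answer, 4/52 ≈ 0.0769
```

The exact answer is easy here, but the method shines precisely when it isn't: swap `trial` for any experiment you can simulate but can't solve analytically, and the same loop still works.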
This top-secret method needed a code name. Another Los Alamos player, [Nick Metropolis], suggested 'Monte Carlo' in a nod to games of chance. He knew that [Ulam] had an uncle with a propensity for gambling who would often borrow money from relatives, saying that he just had to go to Monte Carlo. The game was on.
The Tricky Math of Fission
Determination of the elements most suitable for fission required a lot of calculations. Fission itself had already been achieved before the start of the Manhattan Project. But the goal at Los Alamos was a controlled, high-energy type of fission suitable for weaponization. The math of fission is complicated largely because of the sheer number of neutrons that must be tracked in order to determine the likelihood and speed of a chain reaction. There are so many variables involved that the task is monumental for a human mathematician.
After [Ulam] and [von Neumann] had verified the legitimacy of the Monte Carlo method with regard to the creation of nuclear weaponry, they decided that these types of calculations would be a great job for ENIAC — a very early general purpose computer. This was a more intensive task than the one it was made to do: compute artillery firing tables all day and night. One problem was that the huge, lumbering machine was scheduled to be moved from Philadelphia to the Ballistics Research Lab in Maryland, which meant a long period of downtime.
While the boys at Los Alamos waited for ENIAC to be operational again, [Enrico Fermi] had the idea to forgo ENIAC and create a small device that could run Monte Carlo simulations instead. He enlisted his colleague [Percy King] to build the machine. Their creation was built from joint Army-Navy cast-off components, and in a nod to that great computer, he dubbed it FERMIAC.
FERMIAC: Hacking Probabilities
FERMIAC was created to take over the tedious calculations required by the study of neutron transport, something of an end-run around brute force. It's made mostly of brass and resembles a trolley car. To use it, several adjustable drums are set using pseudorandom numbers. One of these numbers represents the material being traversed. A random choice is made between fast and slow neutrons. A second digit is chosen to represent the direction of neutron travel, and a third number indicates the distance traveled to the next collision.
Once these settings are dialed in, the device is physically driven across a 2-D scale drawing of the nuclear reactor or materials being tested. As it goes along, it plots the paths of neutrons through various materials by marking a line on the drawing. Whenever a material boundary is crossed, the appropriate drum is adjusted to represent a new pseudorandom digit.
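In software, the FERMIAC's procedure maps onto a simple random walk: sample a direction and a travel distance, step the neutron forward, and re-sample as it moves between materials. A minimal 2-D sketch, with hypothetical geometry and made-up mean free paths (and one simplification: the material is looked up at each collision point rather than adjusted mid-flight at the boundary, as the real device was):

```python
import math
import random

# Hypothetical 2-D layout: material A occupies x < 5.0, material B beyond,
# and anything past x = 10.0 or behind x = 0.0 has escaped the drawing.
MEAN_FREE_PATH = {"A": 1.0, "B": 0.5}   # made-up values, arbitrary units

def material_at(x: float) -> str:
    return "A" if x < 5.0 else "B"

def walk(rng: random.Random, max_steps: int = 200) -> list:
    """Trace one neutron's path, like the FERMIAC drawing a line."""
    x, y = 0.1, 0.0
    path = [(x, y)]
    for _ in range(max_steps):
        if not (0.0 <= x <= 10.0):
            break                                   # neutron escaped
        angle = rng.uniform(0, 2 * math.pi)         # drum: direction
        dist = rng.expovariate(1 / MEAN_FREE_PATH[material_at(x)])  # drum: distance
        x += dist * math.cos(angle)
        y += dist * math.sin(angle)
        path.append((x, y))                         # mark the next collision
    return path

rng = random.Random(7)
print(len(walk(rng)))   # number of points plotted for one neutron
```

Each point in `path` corresponds to a pencil mark the physical device would leave on the scale drawing; running many walks builds up the same statistical picture.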
FERMIAC was only used for about two years before it was completely supplanted by ENIAC. But it was an excellent stopgap that allowed the Manhattan Project to not only continue unabated, but with rapid progress. FERMIAC is currently on display at the Bradbury Science Museum in Los Alamos, New Mexico alongside replicas of Fat Man and Little Boy, the weapons it helped bring to fruition. [Fermi]’s legacy is cemented as one of the fathers of the atomic bomb. But creating FERMIAC cements his legacy as a hacker, too.
After Los Alamos, [Stanislaw Ulam] would continue to make history in the field of nuclear physics. [Enrico Fermi] was opposed to participating in the creation of the exponentially more powerful hydrogen bomb, but [Ulam] accepted the challenge. He proved that [Edward Teller]'s original design was unfeasible. The two men worked together, and by 1951 they had produced the Teller-Ulam design, which became the basis for modern thermonuclear weaponry.
Today, the Monte Carlo method is used across many fields to describe systems through randomness and statistics. Many applications for this type of statistical modeling present themselves in fields where probabilities are concerned, like finance, risk assessment, and modeling the universe. Wherever the calculation of all possibilities isn’t feasible, the Monte Carlo method can usually be found.
UPDATE: Commenter [lwatchdr] pointed out that the use of the FERMIAC began after the Manhattan Project had officially ended in 1946. Although many of the same people were involved, this analog computer wasn't put into use until about a year later.
We never really thought about it before, but this post about Rapatronic Shutters answers the question of how to photograph an atomic bomb detonation. The post includes an MIT video where [Charles Wyckoff] explains how he and [Harold Edgerton] developed the Rapatronic Camera. It is designed to snap a photograph based on zero time, marked by the X-ray transmission emanating from the bomb before it actually explodes. This pulse is picked up by a light sensor on a delay circuit, allowing for very precise exposure timing. Many of these cameras were used at the same time, all with slightly different delays so that the images could be viewed in order to show what happens during each stage of detonation.