One can be reasonably certain that when the title of an article includes the phrase “The Nature of Reality”, thought-provoking words must surely lie ahead. But when that same title seems to inquire about a gentleman’s socks, coupled with an image of said gentleman’s socks, which happen to be mismatched and very loudly colored, one might suspect the article is not of a serious nature. Perhaps even some sort of parody.
It is my hope that you will be pleasantly surprised by the subtle genius of Northern Irish physicist [John Bell] and his use of socks, washing machines, and a little math to show how we can test one of quantum physics’ most fundamental properties. A property that does indeed reside in the very nature of the reality we are a part of. Few people can say they understand the Bell Inequality down to its most fundamental level. Give me a little of your time, and you will be counted among those few.
The Greek philosopher [Zeno of Elea] proposed that an arrow in flight is in fact not in motion, and that its visible movement is only an illusion. A simple example: glance at an arrow in flight, and your mind stores a snapshot of a motionless arrow. [Zeno] further defended the argument by stating that if an object has to travel a finite distance to reach a destination, then that distance can be divided in half, and the object must first reach the halfway point before arriving at the destination. This process can be repeated an infinite number of times, creating an infinite number of points the object must occupy before reaching the destination, and thus it can never arrive.
Whoa, that’s a bit heavy. Let’s take a second here to think about this and never arrive at the conclusion, shall we?
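The standard mathematical answer to Zeno, for what it’s worth, is that infinitely many steps can still add up to a finite distance. A few lines of code make the point, summing the halved distances for a unit-length trip:

```python
# Zeno's dichotomy: the distances 1/2 + 1/4 + 1/8 + ... form a
# geometric series that converges to 1, so the arrow does arrive.
def partial_sum(n_terms):
    """Distance covered after n_terms halvings of a unit distance."""
    return sum(0.5 ** k for k in range(1, n_terms + 1))

for n in (1, 2, 10, 60):
    print(n, partial_sum(n))   # the partial sums approach 1.0
```

Infinitely many waypoints, but only one unit of distance and (for a constant-speed arrow) one unit of time to cross them all.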
So what does a fancy mathematical parlor trick have to do with the fact that we have all seen an arrow arrive at its destination? Recent experiments conducted at Cornell University have in fact verified the quantum Zeno effect. Researchers achieved this by suspending atoms between lasers at temperatures around a billionth of a degree above absolute zero, so that the atoms arranged themselves into a lattice formation. As usual in quantum mechanics, when observed, each atom had an equal probability of being anywhere within the space of the lattice. However, when the atoms were observed at a high enough frequency, they remained motionless, bringing their quantum evolution to a halt.
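The freezing effect can be illustrated with a toy model rather than the Cornell setup itself. Assume a two-level system that a drive would completely flip over some total time, interrupted by N ideal projective measurements; each measurement collapses the system back toward its starting state, and the probability of surviving unflipped is cos²ᴺ(θ/2N) for a total Rabi angle θ:

```python
import math

def survival(n_measurements, theta=math.pi):
    """Probability the system is still in its initial state after
    n ideal measurements, spread evenly over an evolution that
    would otherwise flip it completely (total Rabi angle pi)."""
    per_step = math.cos(theta / (2 * n_measurements)) ** 2
    return per_step ** n_measurements

for n in (1, 4, 16, 64):
    print(n, survival(n))
# survival climbs toward 1 as the measurements become more frequent
```

With a single end-of-run measurement the flip always completes; with 64 measurements the system stays put over 95% of the time. Watched closely enough, the quantum pot never boils.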
The philosopher in the street, who has not suffered a course in quantum mechanics, is quite unimpressed by the [Einstein-Podolsky-Rosen] correlations. He can point to many examples of similar correlations in everyday life. The case of Bertlmann’s socks is often cited. Dr. Bertlmann likes to wear two socks of different colours. Which colour he will have on a given foot on a given day is quite unpredictable. But when you see that the first sock is pink you can be already sure that the second sock will not be pink. Observation of the first, and experience with Bertlmann, gives the immediate information about the second. There is no accounting for tastes, but apart from that there is no mystery here. And is this [Einstein-Podolsky-Rosen] business just the same?
John Bell began his now famous paper with the above paragraph. The Bell Inequality started off like so many other great theories in science – as a simple thought experiment. Its conclusions were not so simple, however, and would lead the way to the end of Einstein’s idea of local hidden variables, and along with it his hopes for a deterministic universe. In this article, we’re going to look at the Bell Inequality in great detail. Our guide will be a chapter from Jim Baggott’s The Quantum Story, as it has one of the best descriptions of Bell’s theory I’ve ever read.
In the wee hours of the late 17th century, Isaac Newton could be found locked in his laboratory, prodding the secrets of nature. Giant plumes of green smoke poured from cauldrons of all shapes and sizes, while others hissed and spat new and mysterious chemical concoctions, like miniature volcanoes erupting with knowledge from the unknown. Under the eerie glow of twinkling candlelight, Newton would go on to write over a million words on the subject of alchemy. He had to do so in secret, because the practice was frowned upon at the time. In fact, it is now known that alchemy was the ‘science’ in which he was chiefly interested. His fascination with turning lead into gold via the elusive philosopher’s stone is now evident. He would eventually give up his professorship at Cambridge to become Warden, and later Master, of the Royal Mint, where he oversaw his nation’s coinage.
Not much was known about the fundamental structure of matter in Newton’s time. The first version of the periodic table would not come along until more than a hundred and forty years after his death, and the modern atomic structure would not surface until decades after that. Today, we know that we can’t turn lead into gold without setting the world on fire. Alchemy is recognized as a pseudoscience, and we opt for modern chemistry to describe the interactions between the elements. Everyone walking out of high school knows what atoms and the periodic table are, and what the sub-atomic particles and their associated electric charges are. In this article, we’re going to push beyond the basics. We’re going to look at atomic structure from a quantum mechanical view, which will give you a new understanding of why the periodic table looks the way it does. In fact, you can construct the entire periodic table using nothing but the quantum numbers.
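That last claim can be spot-checked with a few lines of code. Counting every allowed combination of the quantum numbers n (the shell), l (0 through n−1), m_l (−l through +l) and spin (±½) reproduces the familiar shell capacities:

```python
def shell_capacity(n):
    """Count the allowed (l, m_l, spin) combinations for shell n."""
    count = 0
    for l in range(n):                # l runs from 0 to n - 1
        for m_l in range(-l, l + 1):  # 2l + 1 orbitals per value of l
            count += 2                # two spin states per orbital
    return count

for n in range(1, 5):
    print(n, shell_capacity(n))   # 2, 8, 18, 32 -- i.e. 2n^2
```

One caveat worth noting: real atoms fill these states in the Madelung (4s-before-3d) order, which is what produces the table’s row lengths rather than the raw 2n² sequence alone.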
During the early 1900s, [Einstein] was virtually at war with quantum theory. Its unofficial leader, [Niels Bohr], was constantly rebutting Einstein’s elaborate thought experiments aimed at shooting down quantum theory as a description of reality. It is important to note that [Einstein] did not disagree with the theory entirely; rather, he was a realist, and he simply would not believe that reality is statistical in nature, as quantum theory states. He would not deny, for example, that quantum mechanics (QM) could be used to give the probable location of an electron. His beef was with the idea that the electron doesn’t actually have a location until you try to measure it. QM says the electron is in a sort of “superposition” of states, and that asking what this state is without measurement is a meaningless question.
So [Einstein] would dream up incredibly complex hypothetical thought experiments with the goal of showing that a superposition could not exist. Now, there is something to be said about [Einstein] and his thought experiments. Using them, he virtually dreamed up his relativity theory while working as a patent clerk at the ripe old age of 26. So when he had a “thought” about something, the whole of the scientific world stopped talking and listened. And such was the case on the 4th of May, 1935.
By the turn of the 20th century, most scientists were convinced that the natural world was composed of atoms. [Einstein’s] 1905 paper on Brownian motion, which links the behavior of tiny particles suspended in a liquid to the movement of atoms, put the nail in the coffin of the anti-atom crowd. No one could actually see atoms, however. The typical size of a single atom ranges from 30 to 300 picometers. With the wavelength of visible light coming in at a whopping 400 – 700 nanometers, it is simply not possible to “see” an atom. Not possible with visible light, that is. It was in the summer of 1982 that Gerd Binnig and Heinrich Rohrer, two researchers at IBM’s Zurich Research Laboratory, showed the world the first ever visual image of an atomic structure. They would be awarded the Nobel Prize in Physics for their invention in 1986.
The Scanning Tunneling Microscope
IBM’s Scanning Tunneling Microscope, or STM for short, uses an atomically sharp needle that passes over the surface of an (electrically conductive) object – the distance between the tip and object being just a few hundred picometers, or the diameter of a large atom.
A small voltage is applied between the needle and the object, and electrons ‘move’ from the object to the needle tip. The needle scans the object, much like the electron beam in a CRT scans its screen. The current from the object to the needle is measured, and the tip of the needle is moved up and down so that this current value does not change, allowing the needle to perfectly contour the object as it scans. If one makes a visual image of the tip height values after the scan is complete, individual atoms become recognizable. Some of this might sound familiar, as we’ve seen a handful of people make electron microscopes from scratch. What we’re going to focus on in this article is how these electrons ‘move’ from the object to the needle. Unless you’re well versed in quantum mechanics, the answer might just leave your jaw in the same position as this image from a home-built STM machine will.
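The constant-current trick described above can be sketched as a simple feedback loop. This is a toy model, not IBM’s actual controller: it assumes the tunneling current falls off exponentially with the tip-sample gap (I ∝ e^(−2κd), a standard approximation), and all the constants below are made up for illustration:

```python
import math

KAPPA = 10.0  # assumed decay constant, 1/nm (order of magnitude for vacuum gaps)
I_0 = 1.0     # assumed current at zero gap, arbitrary units

def tunnel_current(gap_nm):
    """Tunneling current falls off exponentially with the gap width."""
    return I_0 * math.exp(-2 * KAPPA * gap_nm)

def track_surface(surface_heights, setpoint=0.01, gain=1.0):
    """At each scan position, nudge the tip height until the measured
    current matches the setpoint; the recorded tip heights then trace
    the surface topography."""
    tip_z = 1.0                   # absolute tip height, nm
    image = []
    for h in surface_heights:     # surface height at each x position
        for _ in range(200):      # let the feedback loop settle
            error = tunnel_current(tip_z - h) - setpoint
            tip_z += gain * error # too much current -> lift the tip
        image.append(tip_z)
    return image

# a 0.2 nm atomic 'bump' shows up directly in the recorded tip heights
profile = track_surface([0.0, 0.0, 0.2, 0.0])
```

Because the current depends so violently on the gap, holding it constant pins the tip at a fixed height above the surface, which is why the height record is the image.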
While the official title of the 5th Solvay Conference was “on Electrons and Photons”, it was abundantly clear amongst the guests that the presentations would center on the new theory of quantum mechanics. [Planck], [Einstein], [Bohr], [de Broglie], [Schrödinger], [Heisenberg] and many other giants of the time would be in attendance. Just a month earlier, [Niels Bohr] had revealed his idea of complementarity to fellow physicists at the Istituto Carducci, which lies just off the shore of Lake Como in Italy.
The theory suggested that subatomic particles and waves are actually two sides of a single ‘quantum’ coin. Which property the object takes on, wave or particle, depends upon what the curious scientist is looking for, and asking what that “wave/particle” object is while not looking for it is meaningless. Not surprisingly, the theory met with a mixed reception from those who were there, but most were distracted by the bigwig who was not there – [Albert Einstein]. He couldn’t make it due to illness, but all were eager to hear his thoughts on [Bohr’s] somewhat radical theory. After all, it was he who introduced the particle nature of light in his 1905 paper on the photoelectric effect, revealing that light could be thought of as particles called photons. [Bohr’s] theory reconciled [Einstein’s] photoelectric effect with the classical understanding of the wave nature of light. One would think he would be thrilled with it. [Einstein], however, would have no part of [Bohr’s] theory, and would spend the rest of his life trying to disprove it.
Complementarity – Wave, Particle, or Both?
For more than a century, it was thought that light was a wave. In 1801, [Thomas Young] discovered interference patterns when shining a light through two very close slits, and interference is a well known property of waves. This, combined with [Maxwell’s] equations, which predicted the existence of electromagnetic radiation, left little doubt in anyone’s mind that light was nothing more, or less, than a wave. There was a very odd issue, however, that puzzled physicists during the late 19th century. When light shines upon a metallic surface, electrons are ejected from that surface. Increasing the intensity of the light did not translate into an increase in the speed of the expelled electrons, as classical mechanics says it should. Increasing the frequency of the light, however, did increase their speed. The explanation of this phenomenon could not be had until 1900, when [Max Planck] realized that physical action could not be continuous, but must come in multiples of some small quantity. This quantity would lead to the “quantum of action”, which is now called [Planck’s] constant, and birthed quantum physics. It would have been impossible for him to know that this simple idea, in less than two decades, would lead to a change in our understanding of the nature of reality. It took [Einstein] only a few years, however, to use [Planck’s] quantum of action to explain that mind-boggling issue of electrons being released from metal by light in defiance of classical law, with the incredibly complex equation:
E = hν
Where E is the energy of the light quanta, h is Planck’s constant and ν (the Greek letter nu) is the frequency of the light. The most important item to consider here is the light quanta, later to be called a photon. It is treated as a particle. Now, if you’re not scratching your head in confusion right about now, you haven’t been paying attention. How can light be a wave and a particle? Join me after the jump and we’ll travel further down this physics rabbit hole.
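Plugging numbers into E = hν makes the photoelectric puzzle concrete. A quick sketch, where the ~2.28 eV work function for sodium is an assumed textbook-style value used purely for illustration:

```python
PLANCK = 6.62607015e-34   # Planck's constant, J*s
EV = 1.602176634e-19      # joules per electronvolt

def photon_energy_ev(frequency_hz):
    """Energy of a single light quantum, E = h * nu, in electronvolts."""
    return PLANCK * frequency_hz / EV

# Einstein's photoelectric relation: K_max = h*nu - phi, where phi is
# the metal's work function (~2.28 eV assumed here for sodium).
PHI_SODIUM = 2.28

for name, freq in [("green", 5.4e14), ("violet", 7.5e14)]:
    energy = photon_energy_ev(freq)
    ejects = energy > PHI_SODIUM
    print(name, round(energy, 2), "eV ->",
          "ejects electrons" if ejects else "no electrons")
```

No matter how intense the green beam, each of its photons individually falls short of the work function, while every violet photon clears it with energy to spare. That is exactly the frequency-not-intensity behavior that classical waves could not explain.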