Richard Feynman noted more than once that complementarity is the central mystery that lies at the heart of quantum theory. Complementarity rules the world of the very small… the quantum world, and holds that particles and waves are indistinguishable from one another. That they are one and the same. That it is nonsensical to think of something, or even try to visualize that something, as an individual “particle” or a “wave.” That the particle/wave/whatever-you-want-to-call-it is in a sort of superposition, where it is neither particle nor wave. It is only the act of measurement that disengages the cloaking device and reveals the particle or wave nature. Look for a particle, and you’ll find a particle. Look for a wave instead, and instead you’ll find a wave.
Complementarity arises from the limits placed on measuring things in the quantum world with classical measuring devices. It turns out that when you try to measure things that are really really really small, some issues come up… some fundamental issues. For instance, you can’t really know exactly where a sub-atomic particle is located in space. You can only know where it is within a certain probability, and this probability is distributed through space in the form of a wave. Understanding uncertainty in measurement is key to avoiding the disbelief that hits you when thinking about complementarity.
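To make “probability distributed through space in the form of a wave” concrete, here is a minimal numerical sketch (my own illustration, not from the original article) using the textbook particle-in-a-box ground state, ψ(x) = √(2/L)·sin(πx/L). The chance of finding the particle in any region is the area under the probability density |ψ(x)|² over that region:

```python
import numpy as np

L = 1.0                                            # width of the box
x = np.linspace(0.0, L, 10001)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)     # ground-state wavefunction
density = psi ** 2                                 # probability density |psi|^2

def trapezoid(ys, xs):
    """Numerically integrate ys over xs with the trapezoid rule."""
    return float(np.sum((ys[:-1] + ys[1:]) / 2 * np.diff(xs)))

mid = len(x) // 2
total = trapezoid(density, x)                      # probability of finding it *somewhere*
left = trapezoid(density[:mid + 1], x[:mid + 1])   # probability it's in the left half

print(round(total, 6), round(left, 6))             # ≈ 1.0 and ≈ 0.5 — the particle
                                                   # has no definite position, only odds
```

The particle is certain to be *somewhere* (total probability 1), but any smaller region only gets a fraction of that certainty — exactly the measurement limitation the paragraph above describes.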
This article is a continuation of the one linked above. I shall pick up where I left off: everyone agrees that measurement on the quantum scale presents some big problems. However, not everyone agrees on what these problems mean. Some, such as Albert Einstein, say that just because something cannot be measured doesn’t mean it’s not there. Others, including most mainstream physicists, say the opposite — that if something cannot be measured, it for all practical purposes is not there. We shall continue on our journey by using modern technology to peer into the murky world of complementarity. But first, a quick review.
Evolution is one clever fellow. Next time you’re strolling about outdoors, pick up a pine cone and take a look at the layout of the bract scales. You’ll find an unmistakable geometric structure. In fact, this same structure can be seen in the petals of a rose, the seeds of a sunflower and even the cochlea of your inner ear. Look closely enough, and you’ll find this spiraling structure everywhere. It’s based on a series of integers called the Fibonacci sequence. Leonardo of Pisa – better known as Fibonacci – described the sequence while trying to figure out how many rabbits he could make starting with just two. It’s quite simple – add the rightmost integer to the previous one to get the next one in the sequence. Starting from zero, this gives you 0-1-1-2-3-5-8-13-21 and so on. If one were to look at this sequence in the form of geometric shapes, one could create square tiles whose sides are the lengths of the values in the sequence. If you connect the diagonal corners of these tiles with a continuous curve, you end up with the spiral that you saw in the pine cone and other natural objects.
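The “add the rightmost integer to the previous one” rule translates directly into a few lines of code — a minimal sketch:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting from 0."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # next term = sum of the last two
    return seq[:n]

print(fibonacci(9))  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```

Each value in the list is also the side length of one of the square tiles mentioned above.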
So how did Mother Nature discover this geometric structure? Surely she does not know math. How then can she come up with such intricate and sophisticated structures? It turns out that the Fibonacci spiral is the most efficient way of squeezing the most stuff into the least amount of space. And if one takes natural selection seriously, this makes perfect sense. Eons of trial and error in making the most copies of itself have stumbled upon a mathematical principle that permeates life on Earth.
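One standard way to see this packing efficiency (my own illustration — the article doesn’t name it) is Vogel’s model of sunflower seed placement: each new seed sits a little farther from the center and is rotated by the “golden angle,” which comes from the golden ratio — the number that ratios of consecutive Fibonacci terms converge to.

```python
import math

phi = (1 + math.sqrt(5)) / 2               # golden ratio, limit of Fibonacci ratios
golden_angle = 2 * math.pi * (1 - 1 / phi) # ≈ 2.39996 rad, about 137.5 degrees

def seed_positions(n):
    """Vogel's model: seed k sits at radius sqrt(k), rotated k golden angles."""
    pts = []
    for k in range(1, n + 1):
        r = math.sqrt(k)
        theta = k * golden_angle
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

print(round(math.degrees(golden_angle), 2))  # 137.51
```

Plot a few hundred of those points and the interlocking Fibonacci spirals of a sunflower head appear on their own — no seed ever lands directly on top of another.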
The Homo sapiens brain is the product of this same evolutionary process, and has been evolving for an estimated seven million years. It would be foolish to think that the type of efficiency natural selection has stumbled across elsewhere would not be present in the modern human brain. I want to impress upon you this idea of efficiency. Natural selection discovered the Fibonacci sequence solely because it is the most efficient way to do a particular task. If the brain has the task of storing information, it is perfectly reasonable that millions of years of evolution have honed it so that it does this in the most efficient way possible as well. In this article, we shall explore this idea of efficiency in data storage, and leave you to ponder its applications in the computer sciences.
Of all the things evolution has stumbled across, the eye is one of the most remarkable. Acting as a sort of ‘biological electromagnetic transducer’, the eye converts incoming photons into electrical and chemical spikes, known as action potentials. These spikes then drive the brain of the host life form. Hundreds of millions of years of natural selection have produced several types of eyes, some better than others. It would be an honest mistake to think that the human eye is at the top of the food chain, as this is not the case. Mammals underwent a long stint scurrying around in dark caves and crevasses, causing our eyes to take a back seat to other more important functions, such as the development of a cortex.
Eyes contain color-sensitive cells called cones. Human eyes have three types of cones, which are… wait for it… red, blue and green. Our red and green cones are relatively recent on the evolutionary timescale – appearing only around 30 million years ago.
The way these cones are distributed across our retinas is not perfect. They’re scattered about in lumpy, uneven patterns, which gives us an uneven light sampling of our world. Evolution simply has not had enough time to optimize our eyes.
There is another animal on this planet, however, that never went through “the dark ages” as mammals did. This animal has been soaring high above its predators for over 60 million years, allowing its eyes to approach the pinnacle of the natural selection process. A bald eagle can spot a mouse from over a mile away. Bird eyes have five types of light-sensitive cones – red, blue and green like our own, plus a violet cone and a ‘double cone’ thought to detect luminance. But it is the way these cones are distributed around the bird’s eye that is most fascinating, and the subject of today’s article.
Think not of what you see, but what it took to produce what you see
Randomness is all around you… or so you think. Consider the various shapes of the morning clouds, the jagged points of Colorado’s Rocky Mountains, the twists and turns of England’s coastline and the forks of a lightning bolt streaking through a dark, stormy sky. Such irregularity is commonplace throughout our natural world. One can also find similar irregular structures in biology. The branch-like structures in your lungs, called bronchi, fork out in irregular patterns that eerily mirror the way rivers bifurcate into smaller streams. It turns out that these irregular structures are not as irregular and random as one might think. They’re self-similar, meaning the overall structure remains the same as you zoom in or out.
The mathematics that describes these irregular shapes and patterns would not be fully understood until the 1970s, with the advent of the computer. In 1982, a renegade mathematician by the name of Benoit Mandelbrot published a book entitled “The Fractal Geometry of Nature”. It was a revision of his previous work, “Fractals: Form, Chance and Dimension”, published a few years earlier. Today, “The Fractal Geometry of Nature” is regarded as one of the ten most influential scientific essays of the 20th century.
Mandelbrot coined the term “Fractal,” which is derived from the Latin word fractus, which means irregular or broken. He called himself a “fractalist,” and often referred to his work as “the study of roughness.” In this article, we’re going to describe what fractals are and explore areas where fractals are used in modern technology, while saving the more technical aspects for a later article.
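As a small taste of the math before the technical article, here is the escape-time test behind Mandelbrot’s most famous fractal, the Mandelbrot set (a sketch of my own; the iteration cap is an arbitrary choice): a point c in the complex plane belongs to the set if repeatedly applying z → z² + c never blows up.

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to stay bounded under z -> z**2 + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| exceeds 2, escape is guaranteed
            return False
    return True

print(in_mandelbrot(0))       # True  — 0 stays at 0 forever
print(in_mandelbrot(-1))      # True  — bounces between -1 and 0
print(in_mandelbrot(1))       # False — 0, 1, 2, 5, 26… escapes quickly
```

Color each pixel of the plane by how fast it escapes and you get the endlessly self-similar boundary — “roughness” at every zoom level.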
All these fifty years of conscious brooding have brought me no nearer to the answer to the question, ‘What are light quanta?’ Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken.
Albert Einstein, 1954
As 1926 was coming to a close, the physics world lauded Erwin Schrödinger and his wave mechanics. Schrödinger’s purely mathematical tool was being used to probe the internal structure of the atom and to provide predictable experimental outcomes. However, some deep questions still remained – primarily concerning the idea of discontinuous movements of the electron within a hydrogen atom. Niels Bohr, champion of and chief spokesperson for quantum theory, had developed a model of the atom that explained spectral lines. This model required an electron to move to a higher energy level when absorbing a photon, and to release a photon when it moved to a lower energy level. The point of contention was how the electron moved between these levels. This quantum jumping, as Bohr called it, was said to be instantaneous. And this did not sit well with classically minded physicists, including Schrödinger.
How does one go about measuring the mass of an object? Mass is defined as the amount of matter an object contains. This is very different from weight, of course, as the mass of our object remains the same regardless of the presence or strength of a gravitational field. It is safe to say, however, that most laboratory measurement systems are here on Earth, and we can use the Earth’s gravity to aid in our mass measurement. One way is to use a balance and a known amount of mass. Simply place our object on one side of the balance, and keep adding known amounts of mass to the other side until the two sides balance.
But what if our object is very small… too small to see and too light to measure with gravity? How does one measure the mass of a single atom? Furthermore, how does one determine how much of an object consists of a particular type of atom? There are two commonly used tools just for this purpose. Chances are you’ve heard of one of these but not the other. The tools used to measure substances at the atomic level are the focus of today’s article.
Einstein referred to her as the most important woman in the history of mathematics. Her theorem has been recognized as “one of the most important mathematical theorems ever proved in guiding the development of modern physics.” Yet many people haven’t the slightest clue who this woman was, or what she did that was so significant to our understanding of how our world works. If you count yourself as one of those who have never heard of Emmy Noether and wish to enlighten yourself, please read on. I can only hope I do her memory justice. Not just by telling you who she was, but by also giving you an understanding of how her insight led to the coming together of symmetry and quantum theory, pointing academia’s arrow toward quantum electrodynamics.
Being a woman in Germany in the late 1800s was not easy. She wasn’t allowed to register for math classes. Fortunately, her father happened to be a math professor, which allowed her to sit in on many of his classes. She took one of his final exams in 1904 and did so well that she was granted a bachelor’s degree. This allowed her to “officially” register in a math graduate program. Three years later, she earned one of the first PhDs granted to a woman in Germany. She was just 25 years old.
1907 was a very exciting time in theoretical physics, as scientists were hot on the heels of figuring out how light and atoms interact with each other. Emmy wanted in on the fun, but being a woman made this difficult. She wasn’t allowed to hold a teaching position, so she worked as an unpaid assistant, surviving on a small inheritance and under-the-table money that she earned sitting in for male professors when they were unable to teach. She was still able to do what professors are supposed to do, however – write papers. In 1916, she would pen the theorem that would have her rubbing shoulders with the other physics and mathematical giants of the era.
Noether’s Theorem – The Basics
Emmy Noether’s Theorem seems simple at first glance, but it holds a fundamental truth that explains the fabric of our reality. It goes something like this:
For every continuous symmetry, there is a corresponding conservation law.
We have all heard of laws such as Newton’s first law of motion, which is about the conservation of momentum. And the first law of thermodynamics, which is about the conservation of energy. Noether’s theorem tells us that there must be some type of symmetry related to each of these conservation laws. Before we get into the meaning, we must first understand a little-known subject called the Principle of Least Action.
The Universe is Lazy
I would wager a few Raspberry Pi Zeros that many of you already have an intuitive grasp of this principle, even if you’ve never heard of it before now. The principle of least action basically says that the universe has figured out the easiest way possible to get something done. Mathematically, the action is the sum over time of kinetic energy minus potential energy along the path the motion takes. Let us imagine that you’re trying to program an STM32 Discovery eval board in GCC. After about the 6,000th try, you toss the POS across the room and grab your trusty Uno. The graph depicts the STM32 moving through time and space.
The green points represent how high the STM32 is at particular points in time. Note that there are no values for height and time – this example is meant to explain a principle. We can say that at these points (and all points along the curve), the STM32 has both kinetic and potential energies. Let us call the kinetic energy (kt) and the potential energy (pt). The ‘t‘ subscript is for time, as both energies are functions of time. The action for each point will be called s, and can be calculated as:

s = kt – pt
However, action is the total sum of the difference of energies at each point between t1 and t2. If you’ve read my integral post, you will know that we need to integrate in order to calculate the total action:

S = ∫t1t2 (kt – pt) dt
Now before you get your jumper wires in a bunch, all that is saying is that we’re taking the difference between the kinetic (k) and potential (p) energies at each point along the curve between t1 and t2, and we’re adding those differences together. The elongated S symbol means a continuous sum (an integral), and the (dt) means as it changes over time. The path that the STM32 will take will be the path where the action S is at its minimum value. Check out the video in the source section below if you’re confused. It’s only 10 minutes long and goes through this concept in easy-to-follow detail.
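Here is the same idea as a numerical sketch (my own toy example, with made-up units): compute the action for the true free-fall arc of the tossed board, and for a slightly distorted path with the same start and end points. The true path comes out with the smaller action.

```python
import numpy as np

g, m, T = 9.8, 1.0, 1.0                  # gravity, mass, flight time (made-up units)
t = np.linspace(0.0, T, 1001)
dt = t[1] - t[0]

def action(y):
    """S = sum over time of (kinetic - potential), i.e. the integral of (kt - pt) dt."""
    v = np.gradient(y, dt)               # velocity along the path
    lagrangian = 0.5 * m * v**2 - m * g * y
    return float(np.sum(lagrangian) * dt)

y_true = 0.5 * g * t * (T - t)           # the actual parabolic arc (starts and ends at 0)
y_wiggly = y_true + 0.2 * np.sin(2 * np.pi * t / T)   # same endpoints, distorted

print(action(y_true) < action(y_wiggly))  # True — nature picks the smaller action
```

Any other distortion you try (as long as the endpoints stay fixed) will also raise the action — the parabola really is the path of least action here.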
Noether’s Theorem – The Details
Noether’s theorem is based upon a mathematical proof. It’s not a theory. Her proof can be applied to physics to develop theories, however. Now that we know what the principle of least action is, we can do just this.
Any law of nature can be traced back to a symmetry and the least action principle. Let’s consider two very simple examples – Newton’s first law of motion and the first law of thermodynamics.
Conservation of Momentum
Space has what is known as translational symmetry. That’s just fancy-pants talk for saying that what you do at one point in space is the same as what you do at another point in space. It doesn’t matter in what hackerspace you throw your STM32, it will act the same at all hackerspaces on Earth. Space itself provides the symmetry. And because the principle of least action applies, you have a natural law – the conservation of momentum.
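A quick sanity check (my own toy example): in a head-on elastic collision, the outgoing velocities depend only on the masses and incoming velocities — position never enters the formulas — and total momentum before equals total momentum after.

```python
def elastic_collision(m1, v1, m2, v2):
    """1-D elastic collision: return the velocities after impact."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
u1, u2 = elastic_collision(m1, v1, m2, v2)

before = m1 * v1 + m2 * v2
after = m1 * u1 + m2 * u2
print(before, after)   # momentum before equals momentum after
```

Notice that no coordinate for *where* the collision happens appears anywhere. That indifference to location is the translational symmetry, and conservation of momentum is its Noether partner.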
Conservation of Energy
Time has the same translational symmetry as space does. If I toss the STM32 now, and toss it tomorrow, it will act the same. It doesn’t matter at what point in time I toss it, the results will always be the same. Thus energy is conserved between different points in time. Time is our symmetry, and the first law of thermodynamics is the result.
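The same check for time (again a toy example of mine, with made-up units): sample the tossed board’s total energy — kinetic plus potential — at several instants of its flight, and it comes out identical every time.

```python
g, v0 = 9.8, 10.0                     # gravity and launch speed (made-up units)

def height(t):
    return v0 * t - 0.5 * g * t * t   # standard free-fall trajectory

def velocity(t):
    return v0 - g * t

def total_energy(t, m=1.0):
    """Kinetic + potential energy at time t."""
    return 0.5 * m * velocity(t)**2 + m * g * height(t)

energies = [total_energy(k / 10) for k in range(11)]
print(energies[0], energies[5], energies[10])   # all 0.5 * v0**2, at every instant
```

The laws of motion don’t care *when* you run the experiment, and energy conservation is what that time symmetry buys you.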
Now, I realize these examples might seem a bit useless. But when you dig a bit deeper, things get interesting. Electrical charge is also conserved. Noether says there must then be some type of symmetry involved. What do you suppose that symmetry might be? Keep following that rabbit hole, and you’ll end up face to face with QED. We’ll get there in a future article, so for now just keep Noether’s Theorem in mind.
Physics Helps, “The Principle of Least Action,” video link.
Ransom Stephens, Ph.D., “Emmy Noether and the Fabric of Reality,” video link.