
Friday 6 May 2016

The Certainty of Uncertainty

One of the more difficult things to grasp in modern physics is the vast range of consequences of the central law of quantum mechanics, Heisenberg's Uncertainty Principle (HUP), some of which can be extremely subtle. This particular wander through thought-space is an attempt to clarify some things, not least what I see as a major misconception concerning a recent popular physics book, Lawrence Krauss' A Universe From Nothing.

This particular post has been pinging about the pinball machine that is my mind for a few years, pretty much since the release of said tome, motivated by several discussions I've had with various ISBNbots, creationists and even some with a degree of scientific literacy. It is, of course, not remotely beyond the realm of possibility that the misconception is mine, so I look forward to any corrections, as always. This is my own précis of the material, and my reading of what Krauss said in the book and in the lecture that preceded its publication, which I highly recommend, by the way.

I don't intend to treat Krauss' book in any detail on this outing, because the ideas themselves will form part of the Before the Big Bang series. Here, I simply want to look at the consequences of some of the things he talked about in that book, along with some of the things he didn't, and to cover some of the implications of HUP aside from how they apply to particles. I'll also be covering what the evidence is for the effects those implications predict. Of necessity, some of what follows is going to deal with topics I've touched on before, so apologies to regular readers.

Let's start with a little potted history; in this case, the history of our ideas about light:

In the 5th century BCE, Empedocles of Agrigentum postulated the cosmogonic theory of the four classical elements: earth, air, fire and water. He believed that the eye had been made from all four elements by Aphrodite, and that fire shone from within the eye in rays, making sight possible. This was not without obvious problems, such as why we could see better during the day than at night. Empedocles proposed an interaction between the rays from our eyes and the rays from other sources to account for this.

Hit the timewarp button briefly, and whisk yourself forward to about the late 4th or early 3rd century BCE, and pay a visit to somebody whose name is familiar to teenagers the world over: Euclid. Famous for his book Elements, still the foundational work on geometry today, he also wrote a book on optics, treating reflections geometrically, and challenging the view that light propagated from the eye on the basis that, on opening your eyes at night, it should take a finite amount of time for you to see the stars. The only known solution to this was to propose that light travelled infinitely fast.

For many centuries, the debate between light as particles and light as waves bounced back and forth, with the sway mostly going to the wave models. Descartes thought that light was a mechanical property of the light-emitting body and the transmitting medium. His ideas appeared in a work on refraction and, while some of his conclusions were wrong (he thought that light travelled faster in a denser medium, because sound does), his theory formed the foundation for mechanical optics. 

Christiaan Huygens proposed a mathematical wave theory of light, and postulated the 'luminiferous aether', as waves were thought to require a medium in which to travel. Newton, on the other hand, thought light was made of particles or, as he called them, 'corpuscles', on the basis that waves could turn corners, while light travelled in straight lines.

Newton's model became widely accepted and ruled the roost for over a century until, in around 1800, Thomas Young devised an experimentum crucis that seemed to nail the case in favour of waves. It was known that waves could interfere with each other, so a suitable experiment should show a characteristic pattern of interference. Where the peaks of two waves meet, the resulting peak is higher; where the troughs of two waves meet, the resulting trough is deeper; and where the peak of one wave meets the trough of another, they cancel each other out. Music production professionals use this fact in a test known as a 'null' test, in which a recorded signal is copied and inverted (thrown 180° out of phase) to ensure that nothing is being added to the signal during processing. Because the original signal is cancelled out, any sound that remains is being added during processing. As an aside, this is also how expletives are removed from recordings for radio broadcast, and how noise-cancelling headphones work.
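Here's a minimal sketch of that null test in Python; the 440 Hz signal and the 50 Hz hum are invented purely for illustration:

import numpy as np

# Build one second of a 440 Hz sine wave -- our stand-in for a recording.
rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)

# Sum the signal with an inverted copy: peaks meet troughs, total cancellation.
inverted = -signal
print(np.max(np.abs(signal + inverted)))      # 0.0 -- a perfect null

# Now pretend the processing chain sneaks in some 50 Hz mains hum...
processed = signal + 0.01 * np.sin(2 * np.pi * 50 * t)
print(np.max(np.abs(processed + inverted)))   # ~0.01 -- only the added hum survives

The original cancels exactly; whatever is left over was added somewhere along the chain.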

Young's experiment involved shining a beam of light through a board with two slits in it some way apart. If light was made of particles, one would expect to see two lines on the back screen. If it were made of waves, one should expect to see many lines, indicating interference, as in the figure below.


http://www.viewzone.com/light.xy.gif

What he saw was, of course, the interference pattern indicating waves. One would have thought that the case was closed.

This experiment has huge importance, so we'll be coming back to it.

Another important issue was polarisation of light. Anybody who has polarising sunglasses will be aware that the amount of light allowed through changes as the lens is rotated.

In 1816, André-Marie Ampère proposed that this could be explained if light 'waved' orthogonally (at right angles) to its direction of travel. Augustin-Jean Fresnel, building on this, worked out a wave theory of light, later added to by Siméon Poisson, which finally overturned Newton's view.

In 1845, Michael Faraday showed experimentally that the plane of polarisation could be rotated by applying a magnetic field along the light's direction of travel through a dielectric - an insulator that can be polarised by an applied field - an effect known as Faraday rotation. This was the beginning of the idea that light and electromagnetism were related in some way.

By 1850, the speed of light had been given a reasonably robust measure for the first time by Léon Foucault, famous for his pendulum experiment showing that the Earth rotates. He devised a beautiful rotating-mirror experiment along with Hippolyte Fizeau and, after some refinements, finally settled in 1862 on a speed for light of 298,000 km/s, precise enough that he could derive an accurate figure for the astronomical unit (AU), the mean distance between Earth and the Sun.

James Clerk Maxwell, inspired by Faraday's work, studied electromagnetism and light in some depth, and worked out that electromagnetic waves propagate at the same speed as light, which led him to infer that electromagnetism and light were the same thing. This was later confirmed by Heinrich Hertz, who showed that radio waves exhibited all the same behaviours, namely reflection, interference, refraction and diffraction. Further, and importantly for our purposes here, Maxwell's equations described fields. I should also note here that Maxwell's use of the speed of light in his field equations included no term for the motion of the source or the observer. Some sources suggest that the term for the speed of light was only used for mathematical consistency, but this is difficult to confirm, especially in light of the consilience between the value his equations yielded and Foucault's measurement.
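To see why the inference was so compelling, here's a quick check in Python using modern values of the electric and magnetic constants (Maxwell had cruder figures, but close enough to spot the match with Foucault):

import math

mu_0 = 4 * math.pi * 1e-7        # magnetic constant (vacuum permeability), N/A^2
epsilon_0 = 8.8541878128e-12     # electric constant (vacuum permittivity), F/m

# Maxwell's equations predict electromagnetic waves travelling at 1/sqrt(mu_0 * epsilon_0).
c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:,.0f} m/s")           # ~299,792,458 m/s -- the speed of light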

The one question that hadn't been answered with any rigour was how light waves propagated without a medium. The luminiferous aether had been around as a concept since Huygens, but nobody had been able to demonstrate its existence. Two main possibilities were considered: first, that the aether was stationary and only partially dragged by the Earth in its orbit; second, that the aether moved along with the Earth. Eventually, the latter hypothesis was discarded in favour of the former, a conclusion owing much to Fizeau.

Then, via some very clever reasoning, it was worked out that, given a stationary aether, the motion of the Earth through it should generate an 'aether wind', which should be detectable because, at different points in Earth's orbit, the planet moves in different directions relative to the aether. In 1887, Albert Michelson and Edward Morley of what is now Case Western Reserve University devised an experiment using an interferometer - a huge, laser-based descendant of which is making a big noise in the news today, in the form of LIGO, the gravitational-wave observatory - that should be able to detect differences in the measured speed of light toward, away from, and perpendicular to the motion of the planet, making the aether detectable. This is the now-famous Michelson-Morley experiment, and it was really elegant.

In essence, a single beam is split in two and sent along two arms at right angles to each other, then reflected back. Because the two arms are the same length, any difference in light's speed along the two paths should manifest at the detector as an interference pattern, because of the shift in phase, in exactly the same way as in Young's double-slit experiment. One arm was aligned with the direction of motion, with the other arm orthogonal to it.
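Here's a rough sketch of the signal they expected, with the historical figures as I understand them (an effective arm length of about 11 m and light of around 500 nm; treat the numbers as illustrative):

L = 11.0            # effective arm length, m
v = 3.0e4           # Earth's orbital speed, m/s
c = 3.0e8           # speed of light, m/s
wavelength = 500e-9 # m

# To first order, rotating the apparatus through 90 degrees should shift
# the pattern by about (2L / wavelength) * (v/c)^2 fringes if the aether existed.
fringe_shift = (2 * L / wavelength) * (v / c) ** 2
print(f"{fringe_shift:.2f} fringes")   # ~0.44 -- easily measurable; they saw essentially zero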



This has gone down as about the most famous null result in the history of physics. Everybody expected the result to be positive, and for some years, many people were trying to work out what had gone wrong with the experiment.

Then, in 1905, along came a man working as a patent clerk in Switzerland, and he changed everything. First, he published a paper entitled On a Heuristic Viewpoint Concerning the Production and Transformation of Light, a paper on the photoelectric effect. This paper comprehensively put the cat among the pigeons, because it showed that light came in discrete units or 'quanta' (what we now call 'photons'), opening up the wave/particle debate that everybody had thought closed since Young.

He followed that up with another paper entitled On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat, a paper on Brownian motion, which provided the first hard empirical evidence in support of atomic theory, and opened physics up properly to statistical methods. 

These two papers are widely regarded as foundational - the first to quantum theory (a theory whose orthodox interpretation Einstein himself never accepted), the second to statistical physics - and it was the photoelectric work in particular for which his 1921 Nobel prize in physics was awarded.

Finally, he released a third paper, On The Electrodynamics of Moving Bodies, which brought with it an entirely new way of dealing with space and time, and overthrew Newtonian mechanics, which had stood for about 250 years without serious challenge. We've come to know it as the special theory of relativity (special because it deals with a specific set of circumstances, one in which gravity plays no part). It's unclear whether the Michelson-Morley result entered Einstein's thinking in formulating this theory, though he was certainly aware of it. Einstein himself said that he merely ran with the appearance of c in Maxwell's equations with no mention of the motion of source or observer, and tried to work out what it would imply for it to have the same value for every observer, regardless of motion. In order to accommodate this, space and time had to stretch and squeeze, and in fact they had to be unified into a single entity, spacetime.

Clearly, there was a problem here. One validated theory tells us that light is made of particles, while another validated theory tells us that it's made of waves, so what gives? Is this a contradiction? To answer this question, we need to leave light for a bit and talk about heat. Specifically, black body radiation.

Some work had also been going on in a different field - beginning, in fact, a few years before Einstein's famous papers - with Max Planck, later a friend and champion of Einstein, working on problems with black body radiation. He'd been trying to work out the energy in an oven. He'd begun by adding up all the frequencies of energy that should be contributing and, to his surprise, discovered that the energy should be infinite. This was obviously nonsense, or a bit of melted chocolate would have been a forgotten footnote in the history of physics, rather than the basis for a new culinary technology. Clearly something was wrong, but what was it? After much mucking about with the equations, he realised something interesting, and it's all to do with how waves behave.
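This 'ultraviolet catastrophe' is easy to see numerically. Here's a minimal sketch comparing the classical (Rayleigh-Jeans) prediction with Planck's eventual law; the oven temperature and wavelengths are arbitrary choices for illustration:

import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K
T = 500.0       # oven temperature, K

def rayleigh_jeans(lam):
    # Classical prediction: radiance grows without bound as wavelength shrinks.
    return 2 * c * k * T / lam ** 4

def planck(lam):
    # Planck's law: the exponential tames the short-wavelength divergence.
    return (2 * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * k * T))

for lam in (1e-5, 1e-6, 1e-7):  # 10 um down to 100 nm
    print(f"{lam:.0e} m   classical: {rayleigh_jeans(lam):.3e}   Planck: {planck(lam):.3e}")

Summing the classical values over ever-shorter wavelengths gives an infinite total; Planck's version stays finite.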

Picture a periodic sine wave: it begins at the zero point - where the amplitude of the wave is zero - rises to a peak, falls through zero to a trough, and returns to zero to complete the cycle. What Planck realised was that, if he included in his calculations only those frequencies of energy whose wave returned to the zero point exactly at the wall of the oven, the calculations worked out and gave the correct energies. Note that this allows any frequency whose wave returns to the zero point at the wall, even if it gets there halfway through a cycle.
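A minimal sketch of which waves make the cut, for an arbitrarily chosen oven width:

# Standing waves must return to the zero point at the walls, which allows
# only wavelengths that fit a whole number of half-cycles across the oven:
# lambda_n = 2L / n.
L = 0.5  # oven width, m (arbitrary)

for n in range(1, 6):
    print(f"n={n}: allowed wavelength = {2 * L / n:.3f} m")

# Something like 0.3 m, by contrast, would arrive at the far wall mid-cycle,
# away from zero, and so is simply excluded.

Run it and you get 1.0, 0.5, 0.333 m and so on: a discrete menu rather than a continuum.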

He realised that this meant that energy was quantised - that it came in discrete units. If you couldn't get back to the zero line at the wall, you couldn't join the party. This meant that any of the following were perfectly acceptable:

 
While the following are not:

This was the birth of Quantum Mechanics, and it brings us to Heisenberg's Uncertainty Principle. 

So what is it?


HUP is the central law of quantum mechanics, named after Werner Heisenberg, who formulated it. It deals, in a nutshell, with how much information can be extracted from a system. In natural language, it says that, for any of several pairs of quantities known as 'conjugate variables', the more information we extract about one of the pair, the less information we can extract about the other.

Here's the critical equation, for the pair of variables 'momentum' and 'position':

\begin{equation} \Delta p \Delta x \geq \hbar/2 \end{equation}

Where Δ (delta) denotes uncertainty, p denotes momentum, x denotes position, and ħ (h-bar) is the reduced Planck constant. The Planck constant is given in joule-seconds and has the value 6.626×10⁻³⁴ Js. The reduced Planck constant (also known as Dirac's constant) is obtained by dividing this by 2π, giving 1.055×10⁻³⁴ Js.

What the equation tells us is that the uncertainty in momentum multiplied by the uncertainty in position can never be less than this tiny number, ħ/2. The pair of conjugate variables most discussed is the momentum and position of a particle, but there are many such pairs, such as angular momentum and orientation, energy and time, etc. 

Let's simplify by looking at a single particle, so that we can see close-up how this principle applies in the real world (for a given value of real; let's not get ontological). In this case, we want to put our particle in a box. Since we don't have electron microscopes for eyes, we won't actually be able to see the particle.

Here's our box, then:


Our particle is wandering around the box just as you'd expect (this is a simplification, for reasons that will become clear). Now let's shrink the box a little:

Not much has changed except, of course, that now we've pinned down the position of our particle more precisely, which means, as per the uncertainty principle, that we've lost some information about its momentum. The net effect tends to be an increase in the spread of possible momenta, so that the particle moves around more quickly, on average, than it did when the box was larger.

Now let's shrink the box even further: 

 
Now the particle is fairly whizzing about. This results in behaviour so extreme that, the more you shrink the box, the more probable it is that when you look in the box the particle won't even be there! It will have tunnelled through the wall of the box and will now appear on the outside, or even on the other side of the universe.

I should also note here that, even in the case of the larger box, there is a non-zero probability that the particle will be found on the other side of the universe, but this probability increases the more you constrain the position of the particle. Moreover, because of this relationship, and because you can't pin down the momentum of the particle once its position is constrained, you also can't tell where the particle will be a moment later.
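To put some (entirely illustrative) numbers on this, here's a minimal sketch using the equation above to get the minimum momentum uncertainty for an electron in ever-smaller boxes:

hbar = 1.055e-34   # reduced Planck constant, J s
m_e = 9.109e-31    # electron mass, kg

# Delta-p * Delta-x >= hbar/2, so the minimum momentum uncertainty grows
# as the box (our position uncertainty) shrinks.
for dx in (1e-6, 1e-9, 1e-12):        # micron, nanometre, picometre boxes
    dp = hbar / (2 * dx)              # minimum momentum uncertainty, kg m/s
    dv = dp / m_e                     # corresponding spread in velocity, m/s
    print(f"box {dx:.0e} m  ->  velocity spread >= {dv:.2e} m/s")

At the picometre scale the velocity spread comes out around 5.8×10⁷ m/s - a fifth of the speed of light - which is why the particle is fairly whizzing about.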

This is the uncertainty principle in a nutshell.  

Just as an aside, two extremely interesting things come about as a result of this process. The first is hugely important to us, because without it we couldn't live: this 'quantum tunnelling' process is critical to fusion in stars, because tunnelling is what allows hydrogen nuclei to overcome the Coulomb barrier, without which the probability of fusion would be massively reduced. The second is quite important to me today because, without it, I wouldn't be able to share these thoughts with you. Why? Because quantum tunnelling also underlies much of modern microelectronics. It first found practical application in a device known as the Esaki diode (or tunnel diode), after Leo Esaki, then a researcher at what would later become Sony, who shared the 1973 Nobel prize for experimental demonstrations of tunnelling in semiconductors. And as the transistors in your computer have shrunk, tunnelling has become ever more significant in how they're designed and how they behave.
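For a feel for the numbers, here's a minimal sketch of the standard textbook estimate for an electron tunnelling through a rectangular barrier; the energies and widths are invented for illustration:

import math

hbar = 1.055e-34   # reduced Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # one electron-volt, J

E = 0.5 * eV       # electron energy (assumed)
V = 1.0 * eV       # barrier height (assumed)

# Textbook estimate: T ~ exp(-2 * kappa * width), where kappa measures how
# quickly the wavefunction decays inside the classically forbidden region.
kappa = math.sqrt(2 * m_e * (V - E)) / hbar

for width in (5e-10, 1e-9, 2e-9):  # 0.5, 1 and 2 nm barriers
    print(f"barrier {width:.1e} m  ->  tunnelling probability ~ {math.exp(-2 * kappa * width):.1e}")

Thin barriers leak appreciably (a few percent at half a nanometre here), and doubling the width squares the suppression, which is why the effect only matters at very small scales.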

Let's go back to our double-slit experiment, because we really need to try to resolve the dispute between particle and wave for our photons. This time, though, we're going to look at it slightly differently.

One might think, given that we have good reason now to suppose that light is indeed particulate in nature, that the interference pattern arises because the photons from our beam are interacting with each other, given the huge number of photons being fired at the screen. So what happens when we fire one photon at a time? Surely, we should now see the particle pattern of two lines on the detector screen? Let's see. Here's a time-lapse of the experiment done for real:

https://giant.gfycat.com/BoilingEqualGroundhog.gif


As you can see, even firing individual photons results in the same interference pattern, which means that the photons must be interfering with themselves! That must mean that single photons travel through both slits at once! Can this shit get any weirder?

Actually, yes it can. What happens when we place photon detectors next to the slits?

http://www.viewzone.com/light.detectors.gif

The interference pattern vanishes, and we're left with the two bands we'd expect of particles. What? The photons 'know' when we're looking at them!

Many iterations of this experiment have been done, including 'quantum eraser' setups that tag the photons en route - destroying the interference - and then erase the tag, whereupon the interference returns. Any method we can devise to determine which slit individual photons went through results in the same thing. There are even iterations, known as 'delayed choice' experiments, in which we decide whether or not the detectors are on after the photons have left the source. No matter what we do or how sophisticated and Blackadderesque our cunning, if we extract 'which path' information from the system, the interference pattern disappears and we get a particulate result.

The only reasonable conclusion we can draw is that light behaves like both a particle and a wave, depending on how we interact with it. 

What about other particles? There's no solace there, either. This experiment has been conducted with all manner of particles, and even with larger objects, like buckyballs (buckminsterfullerenes; hat-tip to chemist Sir Harry Kroto, who co-discovered them, and who died a few days ago). The result is always the same, which tells us that all particles display particle-like and wave-like behaviour. This is the famous 'wave-particle duality'. Some suggestions have been made to explain this, such as 'wavicles', but the best answer comes, we think, from quantum field theory (QFT): namely, that there is no particle and no wave, but that both behaviours are manifestations of the behaviour of something else, and that something else is where we repair to next.

The interesting implication in our immediate context is what happens when we apply this principle to fields.

Spacetime is pervaded by many fields (some would argue that spacetime itself is a field). At its most basic, a field is simply a description of the changing values of some parameter from place to place and from time to time. Maxwell's equations for electromagnetism, for example, are field equations; that is, they describe electromagnetic radiation as it varies from place to place and time to time. In QFT, all our interactions are manifestations of the behaviours of fields. Where the parameters of a field have the values, for example, 0.511 MeV/c² mass (particle masses are given in electron-volts divided by c² - units of energy, because E=mc², meaning that m=E/c² - or as multiples of the proton mass), -1 charge and spin ½, the field manifests itself as an electron.

The take-home here is, of course, that when we interact with a field in a certain way, we see a particle. When we interact with it in another way, we see a wave. In short, neither manifestation actually exists in and of itself, only as a behaviour exhibited via our interactions. This is the famous and oft-misunderstood 'observer effect' (most often misunderstood as requiring the observer to be conscious; most interactions involve photons, which are not, as far as can be ascertained, conscious).

When we apply HUP to a field, we employ the same equation given above, but now our parameters 'position' and 'momentum' become the parameters 'value' and 'rate of change'. Thus, applying the uncertainty principle to a field tells us that the more closely we constrain the value, the more uncertain we are about the rate of change and, just as with our particle in the box, the more uncertain we are about what that value will be even a split-second later. In short, the value itself, and the rate of change, don't have any independent existence; they're simply the results of interactions.
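Schematically, writing φ for the field's value at a point and π for its rate of change (strictly, its conjugate momentum density), the relation takes the same form as before:

\begin{equation} \Delta \phi \Delta \pi \geq \hbar/2 \end{equation}

This is a sketch rather than a rigorous statement, but it carries the same message: pin down one member of the pair and the other runs wild.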


When we put all of the above together, an interesting consequence arises, and this is where we really begin to make contact with what Lawrence Krauss was talking about in his book and lecture. That consequence is, of course, that 'nothing' can't persist. In order for that to occur, we'd be looking at a field whose parameters could both be known with arbitrary precision and absolute certainty, because both parameters would be zero. There would be no value and no rate of change, which clearly violates the uncertainty principle. In short, there must be something. 

Is there experimental evidence in support of this conclusion? You betcha!

For that, we need to look at a thought-experiment by one of those crazy Dutchmen, in this instance, physicist Hendrik Casimir:

The uncertainty principle implies that a field cannot remain at a fixed value, including zero. This, in turn, implies that there must be activity in the field all the time, which manifests itself as energy. Touching on what Planck discovered about waves, as discussed above, Casimir reasoned that, if one were to place two uncharged conducting plates very close together, there should be wavelengths of energy that were disallowed between them, because they couldn't complete a whole number of half-cycles between the plates, while all wavelengths would be allowed outside the plates. The greater energy outside should generate a differential pressure, forcing the plates together.
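A minimal sketch of the size of the predicted effect, using the standard ideal-plate formula (the separations are arbitrary):

import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

# Ideal-plate Casimir pressure pushing the plates together: pi^2 * hbar * c / (240 * d^4)
for d in (1e-6, 1e-7):   # separations: 1 micron, 100 nm
    P = math.pi ** 2 * hbar * c / (240 * d ** 4)
    print(f"separation {d:.0e} m  ->  pressure ~ {P:.1e} Pa")

About 1.3 mPa at a micron but roughly 13 Pa at 100 nm: that d⁻⁴ scaling is why the plates have to be extraordinarily close before the effect shows up.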

 


This experiment has been conducted under lots of conditions, including in a vacuum bell jar, and the effect is measured exactly in line with predictions. These energy fluctuations, which have come to be called 'virtual particles' (virtual because they're short-lived, not because they aren't real), borrow energy from spacetime in the form of a differential, manifest as virtual particle pairs that move apart a little, then come back together and annihilate.
 
This is what Krauss means when he says that what we think of as 'nothing' isn't really nothing, because the vacuum of space is seething with activity in these fields. However, he isn't, as he's been accused of, redefining 'nothing' because, absent those interactions, there really is precisely nothing. Those particles arise from nothing, and then return to nothing.


What about the universe? What Krauss is proposing is that, given a sufficiently large fluctuation, the universe can literally arise from nothing, via the same mechanism, and with the same underlying principle: Heisenberg's Uncertainty Principle.

I look forward to any corrections or comments. 
