Feynman Simplified 1C: Special Relativity and the Physics of Light [2 ed.]



English, 180 pages, 2015


Feynman Simplified 1C: Special Relativity and the Physics of Light Everyone’s Guide to the Feynman Lectures on Physics by Robert L. Piccioni, Ph.D.

Copyright © 2014 by Robert L. Piccioni Published by Real Science Publishing 3949 Freshwind Circle Westlake Village, CA 91361, USA Edited by Joan Piccioni

All rights reserved, including the right of reproduction in whole or in part, in any form. Visit our web site www.guidetothecosmos.com

Everyone’s Guide to the Feynman Lectures on Physics Feynman Simplified gives mere mortals access to the fabled Feynman Lectures on Physics.

This Book Feynman Simplified: 1C covers about a quarter of Volume 1, the freshman course, of The Feynman Lectures on Physics. The topics we explore include:

Einstein’s Theory of Special Relativity
The Principle of Relativity
What Motivated Special Relativity?
The Lorentz Transformation
Length Contraction & Time Dilation
E=mc² & Simultaneity
Light Cones & Causality
Paradoxes, Puzzles, & Philosophy

The Physics of Light, including:

Geometric Optics
The Principle of Least Time
Interference & Diffraction
Feynman Sum Over Histories
Refraction & Polarization
Electromagnetic Radiation
Light Scattering
Relativistic Effects on Light

To find out about other eBooks in the Feynman Simplified series, click HERE. I welcome your comments and suggestions. Please contact me through my WEBSITE. If you enjoy this eBook please do me the great favor of rating it on Amazon.com or BN.com.

Table of Contents

Chapter 25: Development of Special Relativity
Chapter 26: Dynamics in Special Relativity
Chapter 27: Spacetime
Chapter 28: Relativity: Philosophy & Paradoxes
Chapter 29: Special Relativity Review
Chapter 30: Geometric Optics
Chapter 31: Wave Properties of Light
Chapter 32: Electromagnetic Radiation
Chapter 33: Diffraction
Chapter 34: The Refractive Index
Chapter 35: Radiation Damping & Scattering
Chapter 36: Polarization
Chapter 37: Relativistic Effects
Chapter 38: Physics of Light Review

Chapter 25 Development of Special Relativity In V1p15-1, Feynman says: “For over 200 years the equations of motion enunciated by Newton were believed to describe nature correctly, and the first time that an error in these laws was discovered, the way to correct it was also discovered. Both the error and its correction were discovered by Einstein in 1905.” Einstein’s Special Theory of Relativity is one of the most beautiful and profound theories in physics. It is also the theory that is most comprehensively and precisely confirmed by countless experiments. The two fundamental principles from which all of special relativity derives are: Absolute velocity has no physical significance.

The speed of light is the same for all observers. The first statement is the principle of relativity that we discussed earlier, but described differently. The concept of relativity originated with Galileo and was incorporated into Newton’s laws. The second statement is a shocking departure from Newtonian physics, and is due entirely to Albert Einstein.

The Principle of Relativity V1p15-1 A comprehensive statement of the principle of relativity is: The same laws of nature apply to all observers moving at constant velocity. Absolute velocities have no significance; only relative velocities are physically meaningful. Einstein’s general relativity removes the requirement for constant velocities, stating that nature’s laws apply universally. General relativity supersedes Newtonian gravity and correctly describes the laws of nature in accelerating reference frames. (Due to its complex mathematics, general relativity is beyond the scope of this course, but an entirely accessible introduction is available in General Relativity 1: Newton vs Einstein.) Let’s explore what the principle of relativity means with several examples.

Galileo observed that objects fall the same way on a moving ship as they do on land. The modern version of this might be: if you spill your coffee while “dining” in an airplane, it falls straight down onto your lap even though the plane is flying at 1000 km/hr. At first, it’s not surprising that coffee falls straight down; everything does. But consider how this looks from outside the airplane: the coffee isn’t just falling down, it’s also moving forward at 1000 km/hr along with everything else in the plane, even though nothing is pushing it forward as it falls. That would have surprised Aristotle. Consider another example: imagine two laboratories, each with a scientist and all the equipment they might desire. Stop! That’s impossible; make that “a lot of equipment.” Seal the labs so the scientists cannot see anything or measure anything outside their own lab. Then put each lab in its own airplane, with one flying west at 500 km/hr and the other flying east at 1000 km/hr, as shown in Figure 25-1 below.

Figure 25-1 Scientists Can’t Measure Their Velocities

The principle of relativity states there is no test or measurement the scientists can perform to determine their planes’ velocities. Every test they make gives exactly the same results in both planes, as long as they cannot detect anything outside their labs. They could find their planes’ speeds with GPS, but detecting an external signal is cheating. Relativity says all observers see the same physical phenomena, the same laws of nature, regardless of their own velocities, as long as their velocities are constant. Absolute velocity is not detectable.

But we can detect when an object’s velocity changes or when two objects have different velocities. If one of the above airplanes suddenly stops (let’s not ask why), we could measure the velocity change (deceleration) with a pendulum. The pendulum would not hang straight down; it would tilt toward the plane’s nose, just as airline passengers have the sensation of being pulled toward the front of the plane when the pilot hits the brakes after landing. Velocity changes are relative and they are physically meaningful and measurable. So are velocity differences; if we observe an airplane from the ground, we can measure its speed relative to us.

It’s a very good thing that the laws of nature don’t depend on our “true” or “absolute” velocity, because we don’t really know what that is. You might think that while you are sitting in a chair reading this eBook your velocity is zero, but is it? As Earth spins on its axis once daily, it carries us eastward at up to 1600 km/hr (1000 mph). As Earth orbits the Sun, we’re being carried along at 100,000 km/hr. As the Sun orbits the Milky Way galaxy, we are moving at 800,000 km/hr. And we’re moving at 1,400,000 km/hr through the cosmic microwave background (CMB) radiation, the first light of our universe. (Learn more about the CMB in Our Universe 3: CMB, Inflation, & Dark Matter.) So, how fast are you really moving?

An observer moving with constant velocity — constant speed and direction — is said to be in an inertial frame. Any change in speed or direction is a change of velocity, which implies acceleration. Newton’s laws of motion and special relativity are only valid in inertial frames. In inertial frames, physics is simpler, and who doesn’t want simpler physics? Special relativity is “special” in that it applies only to special circumstances: those without accelerations. General relativity applies generally: to any state of motion.

What Inspired Special Relativity? This section contains material not included in the Feynman Lectures. As we discussed earlier, before 1905, everyone believed that the mass of a body was constant, the same whether it was moving or not. This was consistent with all experiments of the day, and was assumed in Newton’s laws. Einstein’s theory made the astonishing claim that mass increases at high speeds. For “normal” speeds, the mass increase predicted by Einstein is extremely small, far below the weighing precision of the day. At 30,000 miles per hour, a body’s mass increases by only one part per billion. There was no observational evidence to motivate a theory of mass increase.

Einstein’s motivation for special relativity was to make relativity and electromagnetism compatible. His worries were more philosophical: Maxwell had two equations for one problem. Consider the magnet and the wire shown in Figure 25-2. If the wire moves up, through a stationary magnetic field, an electric current is induced in the wire, according to one of Maxwell’s equations. But, if the magnet moves down and the wire is stationary, the moving magnetic field creates an electric field that induces a current in the wire, according to a different one of Maxwell’s equations. Both equations calculated the same current, so no one cared that there were two equations. No one, that is, except Einstein.

Figure 25-2 Magnet And Wire

To Einstein, these were not two different problems; there was only one problem: a wire and a magnet move relative to one another. Einstein accepted the principle of relativity: only relative velocities have physical significance. Hence, it must make no difference whether the wire moved up or the magnet moved down. Since nature does not distinguish between the two, Einstein said neither should physicists. Einstein thought having two different equations for one problem was ugly, and he was sure nature was not ugly.

Others had noted that the equations of electromagnetism weren’t consistent with relativity. Recall the Galilean transformation, discussed in V1p15-2 and our Chapter 9, that allows us to translate coordinates measured in a stationary reference frame into those measured in a moving reference frame. Assume the two frames are perfectly aligned at time t=0, and the velocity of the moving frame is u in the +x-direction. If an object’s position has value x in the stationary frame and value X in a moving reference frame, these quantities are related by:

X = x – ut

Additionally, the measured speed of any object moving in the x-direction will be different in these two frames, as shown by taking the time derivative of the above coordinates:

dX/dt = dx/dt – u

Newton’s laws are equally valid in both the stationary and moving reference frames. Indeed they are valid in any frame compatible with the Galilean transformation. This is because Newton’s laws do not specify any absolute positions and velocities; they depend only on relative positions and velocities. For example, in F = GMm/r², r is the distance between M and m, not the absolute position of either. Additionally, the equation tells us how much the velocities of M and m will change, not what their absolute values will be; nor does it depend on what the absolute velocities were.
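The Galilean rule above can be sketched in a few lines of Python (the numbers are illustrative, not from the text):

```python
# Galilean transformation: X = x - u*t, and its time derivative
# dX/dt = dx/dt - u, for a frame moving at speed u along +x.

def galilean_position(x, t, u):
    """Position in the moving frame: X = x - u*t."""
    return x - u * t

def galilean_velocity(v, u):
    """Velocity in the moving frame: dX/dt = dx/dt - u."""
    return v - u

# An object at x = 100 m moving at 30 m/s, viewed at t = 2 s
# from a frame moving at u = 10 m/s:
X = galilean_position(100.0, 2.0, 10.0)   # 100 - 10*2 = 80 m
V = galilean_velocity(30.0, 10.0)         # 30 - 10 = 20 m/s
print(X, V)
```

Note that the transformed velocity depends on u: this is exactly why a fixed light speed in all frames is incompatible with the Galilean rule.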

We describe this by saying: Newton’s laws are invariant under any Galilean transformation, and are therefore consistent with the principle of relativity. This makes it impossible to determine which frame is “actually” moving by observing any phenomenon of mechanics — forces, work, kinetic energy, gravitational potential energy, and such. However, Maxwell’s equations, developed in the 1860’s, are not invariant under Galilean transformations. These transformations change the form of Maxwell’s equations. Maxwell’s equations are valid only in the frame in which light’s medium, the luminiferous ether, is stationary. It appeared that physicists would be forced to choose between two seemingly wonderful pillars of physics: the principle of relativity and Maxwell’s equations. Most physicists believed someone would one day detect the luminiferous ether, forcing everyone to accept that relativity is not a universal principle.

The Search For Ether V1p15-3 Most physicists believed light required a medium, the luminiferous ether, to travel through. For more than a century, it had been well established that light was a wave phenomenon. Light exhibits interference and diffraction behaviors that are definitive signatures of wave behavior, both of which we’ll explore later in this eBook. All other waves were clearly the organized motions of some medium: ocean waves are the motion of water molecules; and sound waves are the motion of air molecules. What was the medium of light? What moved as light passed by? No one knew. This undiscovered medium was named the luminiferous ether, or simply ether, and physicists launched major efforts to find it.

It was quite clear that this ether must fill the entire universe, because we can see the light of extremely distant galaxies. This is different from sound; since we can’t hear the sounds of stars and galaxies, no one claims the universe is filled with air (it isn’t). Hence, light and ether posed a new and unique situation.

It was also clear that if the universe is filled with ether, we must be moving through it. People once thought Earth was the center of all existence, with everything revolving around us. Now we know that even our own galaxy is but a small part of a much vaster universe. We also know Earth is moving relative to everything else in the cosmos; it would be absurd to claim that the Earth isn’t moving through the ether, that somehow the ether moves in lock-step with Earth.

As we move through light’s medium we should see light having different speeds in different directions. Why? Consider the following analogy. If a rock is thrown into a pond, ripples spread across the water moving out at the same speed in all directions. If a rock is thrown into a river, a person on the bank sees the ripples moving faster going downriver and slower going upriver, because the current carries everything downriver. If Earth moves through the ether we would see a “river” of ether flowing past us, and we would see light’s speed being higher going downriver than upriver.

But Maxwell’s equations of electromagnetism (discussed in Volume 2) say the speed of light is a fixed number and is the same in all directions. Hence, Maxwell’s equations would be correct only in the frame in which the ether is not moving, which is not Earth’s frame. We could then measure our absolute velocity, our velocity relative to the ether, contradicting the principle of relativity. Ether would make Maxwell’s equations and relativity incompatible.

Physicists set out to see if light’s speed really did vary. High precision, meticulous experiments measured light’s speed in different directions.
They all found the same speed in all directions, day or night, and regardless of Earth’s position orbiting the Sun.

Experiments generally cannot prove two quantities are exactly equal; there will always be limitations to the precision of our instruments. The most that experiments can say is that two quantities are equal within a specified level of precision. If we measure two meter-sticks, we might be able to say they are the same length to 1 mm. With better instruments, we might be able to say they are equal to 0.001 mm, but we can never say they are exactly the same length. While exactitude is impossible, experimental physicists can sometimes achieve amazing levels of precision.

Albert Michelson (1852–1931) and Edward Morley (1838–1923) performed the most famous of these experiments in 1887. Michelson developed the first interferometer, whose descendants are still the gold standard of precise optical measurements. Michelson’s interferometer, illustrated below, had two orthogonal arms, each with a mirror at its far end. He sent light back and forth along each arm and very precisely measured the difference in travel time along the two paths.
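To see the size of the effect Michelson was hunting, here is a rough numerical sketch of the classical ether prediction. The arm length, wavelength, and use of Earth's orbital speed are assumed representative values, not taken from the text:

```python
# Classical ether theory predicts different round-trip times along the
# two arms. For arm length L and ether speed v:
#   t_par  = (2L/c) / (1 - v^2/c^2)        (arm parallel to the ether flow)
#   t_perp = (2L/c) / sqrt(1 - v^2/c^2)    (arm perpendicular to it)
# Rotating the apparatus 90 degrees swaps the arms, so the expected
# fringe shift is about 2 * L * (v/c)^2 / wavelength.
import math

c = 299_792_458.0    # speed of light, m/s
L = 11.0             # effective arm length, m (assumed, roughly the 1887 value)
v = 30_000.0         # Earth's orbital speed, m/s
wavelength = 500e-9  # visible light, m (assumed)

beta2 = (v / c) ** 2
t_par = (2 * L / c) / (1 - beta2)
t_perp = (2 * L / c) / math.sqrt(1 - beta2)
fringe_shift = 2 * L * beta2 / wavelength

print(f"expected fringe shift on rotation: {fringe_shift:.2f}")
```

With these inputs the expected shift is a few tenths of a fringe; Michelson and Morley saw far less, consistent with zero.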

Figure 25-3 Michelson-Morley Interferometer

Next, Michelson rotated the entire interferometer, keeping the arms perpendicular. To minimize vibrations, the interferometer was placed on a large marble table that floated on a pool of mercury; fortunately scientists have since learned to avoid such toxic setups. He also watched his interferometer as Earth orbited the Sun, changing our motion relative to the purported ether river flowing by, as sketched in Figure 25-4.

Figure 25-4 Ether “River” Flowing By Earth

If ether did exist, the light travel times would have changed as the direction of the light beams rotated through the ether. However, Michelson and Morley found no changes in travel times to a precision of

1/40th of what was expected from Earth’s orbital speed — pretty impressive. Does this prove ether doesn’t exist?

After Michelson and Morley published their dramatic result in 1887, theorists George FitzGerald (in 1889) and Hendrik Lorentz (in 1892) suggested that material objects were compressed as they passed through the ether. This became known as the Lorentz-FitzGerald contraction, or simply Lorentz contraction. Specifically, objects were foreshortened in the direction of motion as follows:

L = L₀ √(1 – v²/c²)

Here L is the shortened length at velocity v, and L₀ is the length at zero velocity. While this equation gave the needed adjustment that “explained” Michelson-Morley’s null result (no light speed change moving through ether), it was widely seen as contrived, having no meaningful rationale, and invented solely to plug a hole. In 1905, Einstein’s Special Theory of Relativity provided a cogent and comprehensive understanding of why lengths appear to be contracted.

In 1907, Michelson became the first American to receive a Nobel Prize in Physics. Surprisingly, despite being acclaimed for proving Earth was not in a “river” of ether, Michelson continued to believe that ether really did exist. Long after Einstein’s rejection of ether became widely accepted, Michelson struggled to understand how his marvelous experiment had gone wrong. The truth is, the experiment wasn’t wrong; it has been repeated and confirmed many times, with ever-greater precision. The latest experiments find light’s speed is the same in all directions to a precision of 1 part in a billion billion. If ether does exist, why can’t we detect it?
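A quick numerical sketch of the contraction formula (the speed and length chosen are illustrative, not from the text):

```python
# Lorentz-FitzGerald contraction: L = L0 * sqrt(1 - v^2/c^2),
# where L0 is the rest length and L the length measured at speed v.
import math

C = 299_792_458.0  # speed of light, m/s

def contracted_length(L0, v):
    """Length measured at speed v, given rest length L0."""
    return L0 * math.sqrt(1.0 - (v / C) ** 2)

# A 1 m rod at 86.6% of light speed is measured at about half a meter:
print(contracted_length(1.0, 0.866 * C))
```

At everyday speeds the square root is indistinguishable from 1, which is why the effect escaped all pre-1887 measurements.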

Einstein Understands Light This section contains material not in the Feynman Lectures. Einstein believed that the principle of relativity and Maxwell’s theory of electromagnetism were both too elegant to be wrong. He realized that if both were valid, it must be our understanding of light that is wrong. He became convinced that there was no ether and no preferred reference frame in which ether is stationary. Ether simply had to go. Einstein radically changed our understanding of light by discovering:

Light is both a particle and a wave.

Light has no medium, unlike all other waves.

Light always travels at the same speed (through empty space).

Einstein proclaimed that light is both a particle and a wave, which led to the concept of particle-wave duality, the fundamental principle of quantum mechanics, which we will discuss later in this course. The particles of light are called photons. Einstein also said that, unlike all other waves, light needs no medium, such as the luminiferous ether, to travel through; it is an electromagnetic wave that travels through empty space. Light is composed of electric and magnetic fields that oscillate — no physical object moves as light passes — as we will discuss later.

For other types of waves, such as sound waves, the wave speed is determined by the properties of its medium. All sounds, whether from a human voice or an aircraft, move through air at the same speed, “the speed of sound”, about 1234 km/hr (768 mph) at sea level. The reason all sound waves travel at the same speed is that sound waves are the motion of air molecules, not the motion of the source: the person or aircraft. The wave speed is also fixed relative to its medium; sound waves move at 1234 km/hr in the reference frame in which the air is stationary.

In Figure 25-5 lightning is observed by a stationary Einstein and two jets flying 1000 km/hr, the higher jet flying toward and the lower jet flying away from the lightning.

Figure 25-5 Lightning & Three Observers

Einstein observes the lightning’s thunder pass him at 1234 km/hr, while the upper jet will observe the thunder pass it at 1234 + 1000 = 2234 km/hr, and the lower jet will observe it pass at 1234 – 1000 = 234 km/hr. Before 1905, physicists believed that our three observers would perceive the light flash from the

lightning pass by them at different speeds, just as the sound waves do. But, Einstein declared that light waves are uniquely different; they have no material medium. Equivalently, we can consider the medium of light to be the vacuum of empty space. Since vacuum is at rest in every reference frame, Einstein reasoned, the speed of light has the same value in all frames, unlike every other type of wave. Einstein and both jets in Figure 25-5 observe the lightning’s flash pass them at the same speed, c, despite their differing velocities. This is a remarkable and unique property of light.

With his theory of special relativity, Einstein proclaimed that the laws of nature were universal, the same for all observers in uniform motion, while measurements of certain quantities were relative, different for different observers. By “uniform” we mean non-accelerating. In his later theory of general relativity, Einstein was able to delete the words “in uniform motion,” extending the universality of nature’s laws to all observers regardless of their motion.

So far we’ve discussed the speed of light through empty space, a perfect vacuum. Light’s speed through glass, water, or even air, is apparently less than c. This is because photons are absorbed and re-emitted by atoms in these materials. Photons travel between atoms at speed c, but absorption and re-emission introduce delays and reduce their average effective speed. This is a bit like racecars, whose average speed for an entire race is reduced by pit stops. We thoroughly explore this in Chapter 34.

For convenience and to reduce clutter, throughout the rest of this course we will use natural units in which the speed of light in empty space c equals 1, and we will stop endlessly repeating “in empty space.” When presenting final results, we’ll restore the c’s. In this chapter and the next three, we’ll explore the remarkable consequences of Einstein’s postulates about light’s constant velocity and lack of medium.

The Lorentz Transformation V1p15-3 If we assume that the speed of light is the same in all reference frames (equal to 1), we must replace the Galilean transformation; we must derive a new rule. Let’s define two reference frames, now adding time as the fourth dimension: a stationary frame (t,x,y,z) and another frame (T,X,Y,Z) moving with relative velocity u in the +x-direction. Assume both frames match perfectly at (0,0,0,0). At some later time, the frames appear as shown in Figure 25-6.

Figure 25-6 Two Reference Frames with Relative Motion

Since space is homogeneous, the transformation must be linear in each coordinate; any terms like x² would result in different transformations in different locations. We also know that if the x-transform equation is modified at relativistic velocities, so must the t-transform equation, since dx/dt, the speed of a photon, must equal 1 in all frames. The most general linear transformation rule for motion in the x-direction is:

X = A(x + Bt)
T = D(t + Cx)
Y = y
Z = z

where A, B, C, and D are constants to be determined. I’ll explain at the end of this section why Y and Z must be unchanged when the frames’ relative velocity is entirely in the x-direction.

For sin θ > 1/n, light is completely contained within the glass. This is the physics enabling fiber optic communication. Commercial fibers optimize various performance factors with multiple concentric layers of different indices of refraction, but all rely on total internal reflection to minimize signal loss.
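Returning to the general linear transformation set up above: the constants are not determined in this excerpt, but the standard textbook result (assumed here, not taken from the text) is A = D = γ = 1/√(1 – u²) and B = C = –u in c = 1 units. A minimal sketch:

```python
# Lorentz transformation in natural units (c = 1), using the standard
# constants A = D = gamma, B = C = -u (assumed, not derived here):
#   X = gamma * (x - u*t),  T = gamma * (t - u*x)
import math

def lorentz(t, x, u):
    """Transform event (t, x) into a frame moving at speed u (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - u * u)
    T = gamma * (t - u * x)
    X = gamma * (x - u * t)
    return T, X

# A light pulse follows x = t in the stationary frame; it must also
# satisfy X = T in the moving frame, for any u:
T, X = lorentz(3.0, 3.0, 0.6)
print(T, X)  # equal: the photon moves at speed 1 in both frames
```

The check at the end is the whole point of the construction: the photon's speed, dX/dT, is 1 in every frame.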

Diffraction When a wave passes through an aperture, it diffracts. In Figure 31-9, a plane wave, one whose black troughs and white crests are straight lines, rises from the bottom of the figure and hits a barrier with a single large hole.

Figure 31-9 Wave Diffraction Due To Aperture

As the wave continues upward, wave interference produces a beautiful and complex distribution of

wave energy. Note the strong central lobe and weaker side lobes, each separated by minima, radiating lines of zero intensity. No self-respecting particle would behave in such a complex manner; particles passing through a hole simply continue moving in straight lines. Let’s understand the reason for this complex pattern.

The key physics is that we can consider each point along the aperture to be a new wave source: a new wave spreads out in all directions from each point. You can think of a large aperture as being the limiting case of many small slits. Recall the two-slit experiment; the intensity at the detector is determined by combining the waves coming from each slit with careful attention to how they interfere. If we had 17 slits, we would combine waves from each of the 17 slits, again noting their interference. One large aperture is the limiting case of a vast number of infinitesimal slits with infinitesimal separations.

Figure 31-10 is a magnified view of the aperture. Consider the portion of each wave that moves at angle θ toward the left of the vertical axis, the axis of symmetry and the original wave direction.

Figure 31-10 Magnified View of Diffraction Aperture

In the θ direction, the wave from the right side of the aperture travels a longer path by the amount d than the wave from the left side; for aperture width W, d = W sinθ. Let y be the horizontal axis with y=0 at the middle of the aperture; y ranges from –W/2 to +W/2. The extra path length for a wave starting at y is y sinθ, corresponding to a phase shift of δ = 2πy(sinθ)/λ radians, where λ is the wavelength. Define β = 2π(sinθ)/λ, so δ = βy.

Let’s calculate the sum of all waves arriving at some distant point in the θ direction. We need to normalize this sum by dividing it by ∫dy = W to keep the total wave energy constant. For the moment, we will ignore the time dependent oscillation, which is common to all these waves. The total amplitude at θ is the integral of the cosine of the varying phase from y = –W/2 to +W/2.

A(θ) = (1/W) ∫ cos(βy) dy
A(θ) = sin(βy)/(βW), evaluated from y = –W/2 to +W/2
A(θ) = {sin(βW/2) – sin(–βW/2)}/(βW)
A(θ) = 2 sin(βW/2)/(βW)
A(θ) = sin(x)/x, where x = π(sinθ)W/λ

Figure 31-9 shows that the central lobe carries the substantial majority of the total wave energy. The first minimum on each side of the central lobe occurs when the phase shift δ = ±π at y = ±W/2. Call the corresponding angle θMIN.

x = π = π sin(θMIN) W/λ
sin(θMIN) = λ/W, in 1-D

Our derivation is for a one-dimensional aperture. For two-dimensional apertures, the analysis requires Bessel functions and results in:

sin(θMIN) = 1.22 λ/W, in 2-D

Diffraction is the ultimate limiting factor in telescopes and microscopes, for which W is the size of the optical element limiting light acceptance, typically the primary mirror or lens. Even the most perfect optical systems cannot avoid diffraction since it is an inevitable consequence of light’s wave nature. The generally accepted Rayleigh criterion states two point sources can be resolved (distinguished from one another) in an optical image if the diffraction maximum of one source coincides with the first diffraction minimum of the other, assuming sources of equal intensity. This corresponds exactly to the θMIN we calculated above.

Figure 31-11 shows images of two sources whose separation is more than, equal to, and less than the Rayleigh limit.

Figure 31-11 Diffraction from Two Sources vs Separation. From Top to Bottom: Above, At, & Below Rayleigh Criterion

While this is subjective, and advanced techniques enable resolving somewhat closer sources, the Rayleigh criterion is a useful guideline. Better resolution requires imaging light of a shorter wavelength λ, or increasing the aperture W, which is often very expensive. The intensity I(θ) of the diffracted light in Figure 31-9 is proportional to A(θ) squared: I(θ) ~ (1/x) sin (x) 2

2

Expanding the sine in a power series yields for small θ: I(θ) ~ (1/x) { x – x /6 + …} I(θ) ~ (1/x) { x – x /3 + …} I(θ) ~ 1 – x /3 + … 2 2

3

2

2

4

2

where again x=π(sinθ)W/λ and θ =λ/W in 1-D. The entire diffraction pattern narrows as W increases, and widens as W decreases. As W goes to infinity, θ goes to zero, and the entire diffraction pattern narrows to a single line. If the aperture is infinitely wide, the barrier effectively does not exist. This confirms that light propagates in a straight line when unimpeded. MIN

MIN
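The amplitude formula and the location of the first minimum can be checked numerically (the slit width and wavelength are assumed illustrative values):

```python
# Single-slit diffraction amplitude A(theta) = sin(x)/x with
# x = pi * sin(theta) * W / lambda; first minimum at sin(theta) = lambda/W.
import math

def slit_amplitude(theta, W, wavelength):
    """Normalized diffraction amplitude at angle theta (1-D slit)."""
    x = math.pi * math.sin(theta) * W / wavelength
    if x == 0.0:
        return 1.0          # limit of sin(x)/x as x -> 0
    return math.sin(x) / x

W = 10e-6     # 10 micron slit (assumed)
lam = 500e-9  # green light (assumed)

theta_min = math.asin(lam / W)      # first minimum, 1-D
print(slit_amplitude(0.0, W, lam))  # 1.0: the central maximum
print(slit_amplitude(theta_min, W, lam))  # essentially zero
```

Doubling W halves theta_min, illustrating why larger telescope mirrors resolve finer detail.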

Indeed this is why light travels in a straight line (in a constant refractive index). This brings us to a bizarre solution to a simple problem that dramatically advanced science. How does light “know” how to go in “a straight line” from A to B? Despite appearances, this isn’t at

all simple; hold on tight! In V1p26-7, Feynman says: “With Snell’s theory we can ‘understand’ light. Light goes along, it sees a surface, it bends because it does something at the surface. The idea of causality, that it goes from one point to another, … is easy to understand. But the principle of least time is a completely different philosophical principle about how nature works. Instead of saying it is a causal thing, that when we do one thing, something else happens, … it says this: we set up the situation, and light decides which is the shortest time…and chooses that path. But what does it do, how does it find out? Does it smell the nearby paths, and check them against one another? The answer is yes, it does, in a way.”

Our most fundamental understanding is: A source at point A emits light waves in all directions. As that light spreads through space, every point in space becomes a new source, re-emitting light in all directions. Eventually light waves spread to all possible destinations. The intensity of light at any destination B is determined by the interference of waves reaching B along every possible path from A. Along one or possibly more paths, interference of nearby paths is constructive; wave energy flows along these paths. Along other paths, interference averages to zero; little if any energy flows along those paths.

The process just described is generally referred to as either the Feynman sum over histories or the Feynman path integral formulation. When Feynman first presented this concept in the 1940’s, most of the world’s physics elite didn’t understand him. It was just too bizarre. It wasn’t causal, in the accepted sense. It says quantum waves do many strange things in many different places simultaneously. Strange yes, but its predictions match nature superbly. We don’t really know whether or not this is the way nature actually works.
We also don’t really know whether or not neutrinos exist (they’re invisible), but assuming neutrinos do exist explains a lot of what we do see. One could replace the word “neutrino” in the last sentence with almost anything else and it would still be true. One might even argue that your fingers are an assumption your brain makes to make sense of its sensory inputs. Right answers vanquish doubters. The success of Feynman’s path integral formulation made it the backbone of all quantum field theories, of which quantum electrodynamics (QED) and the standard model of particle physics are the primary examples. It does take some getting used to. Let’s try an example. This process is illustrated in Figure 31-12, where each dot is a reemission point and the circles indicate waves emitted from those points. In reality, light is re-emitted at every point. The solid-line circles and dark dots indicate one possible path that reaches B.

Figure 31-12 Light Propagating from A to B

The shortest path from A to B is a straight line. Waves traversing nearby paths have travel distances only slightly longer than the straight-line path. These waves arrive at B with relative phase shifts that are nearly zero, so they interfere almost completely constructively, creating a high-intensity path. Waves taking more circuitous routes have large, rapidly varying phase shifts. Indeed, since the wavelength of most light is microscopic, most path length differences are much larger than λ, and the phase shifts of such waves tend to be uniformly distributed across the full range of 0 to 1 cycle, eliminating interference effects. This distinction has dramatic consequences. If one million almost-same-time paths interfere constructively, the resulting intensity is one trillion times that of other paths. The net effect of this intricate microscopic ballet is: light moves in a straight line from A to B. How does Feynman’s sum over histories mesh with the principle of least time? Feynman says sum the waves, with their proper phases, over all possible paths. The path(s) with the most constructive interference will be the path(s) light travels. Consider a situation in which we can parameterize all paths with one variable. We can plot travel time along all paths as a function of that variable, which might look like Figure 31-13. At the indicated points A, B, and C, the slope of the curve is zero: travel time is at an extremum, a local maximum or minimum.

Figure 31-13 Travel Time vs Path

Near A, B, and C, travel times and therefore phase shifts, are varying more slowly than anywhere else. These are the paths with the greatest constructive interference, and thus according to Feynman, the paths along which almost all light energy travels. Our understanding of light’s motion began with the principle of least distance, which works within a single medium, but fails to explain refraction. Our thinking evolved to the principle of least time, which can explain refraction, but not reflection within an ellipse. Feynman sums amount to a principle of extremal times, where interference is maximally constructive. To date, we know of no situation that is not explained by Feynman sum over histories.
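For readers who like to experiment, here is a short numerical sketch of the stationary-phase idea (my illustration in Python, not part of Feynman’s lectures; the geometry and band widths are arbitrary choices). It sums unit-amplitude phasors exp(ikL) over paths from A to B that detour through a point offset y from the straight line. Paths near the straight line, where travel time varies slowly, add up strongly; an equal-width band of far-off-axis paths largely cancels.

```python
import numpy as np

# Sum phasors exp(ikL) over detour paths from A to B via the point (d/2, y).
lam = 0.5e-6                       # 500 nm green light
k = 2 * np.pi / lam                # wavenumber
d = 1.0                            # A and B are 1 meter apart

y = np.arange(-0.02, 0.02, 1e-7)   # detour offsets at the midpoint, in meters
L = 2.0 * np.sqrt((d / 2) ** 2 + y ** 2)   # total path length via (d/2, y)
phasor = np.exp(1j * k * L)

near = abs(phasor[np.abs(y) < 1e-3].sum())           # paths near the straight line
far = abs(phasor[(y > 1e-2) & (y < 1.2e-2)].sum())   # equal-width off-axis band
print(near, far)   # near-axis paths dominate overwhelmingly
```

The near-axis sum dwarfs the off-axis sum, which is the numerical face of “light takes the extremal-time path.”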

Chapter 31 Review: Key Ideas

1. Waves are motion. Waves oscillate in identically repeating cycles. The spatial extent of one complete cycle is the wavelength λ. The number of complete cycles per second is the wave’s frequency f. Amplitude A is how far the wave rises above and falls below its average.

2. Light is an electromagnetic phenomenon consisting of oscillating electric and magnetic fields, giving light wave properties. The E and B fields crest and trough simultaneously. The directions of E, B, and light’s velocity are all mutually perpendicular. While different frequencies are assigned different names, the term “light” properly applies to the entire spectrum.

3. Einstein and de Broglie developed key relationships. Let E be the energy and p the momentum of a wave. Let h be Planck’s constant and ħ=h/2π. For light:

E=hf  E=ħω  E=pc  c=λf  λ=hc/E  λ=h/p  ω/c=2π/λ

4. When waves with the same frequency and a fixed phase shift combine, they interfere constructively if the phase shift is zero, and interfere destructively, totally cancelling one another, if the phase shift is half a cycle (a path difference of λ/2). Interference is a hallmark signature of wave behavior.

5. For reflection from a mirror, wave interference ensures the angle of reflection θ equals the incident angle ø. Incident light hits atoms across the entire surface of the mirror. These atoms reemit light in all directions. Slow variation of travel distance near the path with θ=ø ensures constructive interference of nearby rays and establishes the dominant light path.

6. When light of wavelength λ transits an aperture of width W, it diffracts, producing a strong central lobe and innumerable side lobes separated by zones of zero intensity. The intensity at angle θ, I(θ), and the angle of first minimum θ_MIN are:

I(θ) ~ (1/x²) sin²(x) with x = π (sinθ) W/λ

sin(θ_MIN) = λ/W, for a 1-D aperture

sin(θ_MIN) = 1.22 λ/W, in 2-D

7. The Feynman sum over histories, also called the Feynman path integral formulation, states: A source at point A emits light waves in all directions. As that light spreads through space, every point becomes a new source, re-emitting light in all directions. Eventually light waves spread to all possible destinations. The intensity of light at any destination B is determined by the interference of waves reaching B along every possible path from A. Along one or possibly more paths, interference of nearby paths is constructive; wave energy flows along these paths. Along other paths, interference averages to zero, and little if any energy flows along those paths. The paths where interference is constructive are those of extremal time, where nearby paths all have about the same travel time. The mathematical criterion is: d(travel time)/d(path)=0. This distinction has dramatic consequences. If one million almost-same-time paths interfere constructively, the resulting intensity is one trillion times that of other paths.

8. As the science of light advanced over many centuries, Feynman’s path integral formulation superseded Fermat’s principle of least time, which had superseded the principle of least distance.

Chapter 32 Electromagnetic Radiation

In V1p28-1, Feynman says: “The most dramatic moments in the development of physics are those in which great syntheses take place, where phenomena which previously had appeared to be different are suddenly discovered to be but different aspects of the same thing. The history of physics is the history of such syntheses, and the basis of the success of physical science is mainly that we are able to synthesize.”

Feynman continues saying the most dramatic synthesis of the 19th century may have been Maxwell’s combining of the laws of electricity, magnetism, and light. All of Volume 2 of the Feynman Lectures (two-thirds of the second year of his course) is devoted to electricity and magnetism. In Volume 1, he presents only enough electromagnetism to understand light. Feynman begins with a quick summary of the topic’s historical highlights.

The gradually revealed properties of electricity and magnetism showed these were complex forces that, like gravity, diminished with distance squared: the famous inverse-square law, 1/r². Remote systems of charged bodies would, therefore, exert virtually no influence on one another. But, as Maxwell tried to unify the equations for these forces, he realized they were mutually inconsistent. Rectifying this inconsistency required the addition of a new term to his equations. This new term led to the amazing prediction of an electromagnetic effect that diminished only with the first power of distance: 1/r.

For enormous distances, there is a vast difference between 1/r² and 1/r. That difference enables radar, radio, television, and satellite communication — many of the technologies that provide us a quality of life unimaginable 150 years ago. Feynman says: “It seems like a miracle that someone talking in Europe can, with mere electrical influences, be heard thousands of miles away in Los Angeles.” How much more miraculous would Feynman consider someone in Pasadena listening to Voyager 1, 12 billion miles away, now outside our Solar System.

“And thus the universe is knit together.” Feynman said. “The atomic motions of a distant star still have sufficient influence at this great distance to set the electrons in our eye in motion, and so we know about the stars. If this law did not exist, we would all be literally in the dark about the exterior world!” Feynman then continues the historical summary, enumerating the laws of physics as they were understood in the 19th century: Newton’s laws of mechanics, Newton’s law of gravity, the laws of thermodynamics, and finally the laws of electromagnetism. I shall not repeat all the equations of the

last 31 chapters, but list only those for electromagnetism. Coulomb’s law for the force F on an electric charge q, moving with velocity v, due to the electric E and magnetic B fields at its location is:

F = q(E + v×B)

The electric and magnetic fields are both linear, meaning that we can calculate the fields due to each charged particle individually and then simply add them to obtain the total fields. The electric and magnetic fields at spacetime coordinates O=(0,0,0,0) due to a single charge q at the apparent location r are:

E = –q/(4πε₀) { r/r³ + (r/c) d(r/r³)/dt + d²(r/r)/dt² /c² }

B = –r×E/rc

No worries: we defer most of this to Volume 2. Here r² = r•r, and r/r is a convenient notation for a vector in the r-direction with length 1. By “apparent location” we mean the location of the charge observed at O, accounting for delays due to the finite speed of light.

Let me describe the significance of each piece of this complex equation. After the initial constant, the first term containing r/r³ leads to the 1/r² electrostatic force. Feynman describes the next term “very crudely”, saying the d(r/r³)/dt term “corrects”, in part, for the light-speed delay; it is the rate of change of the first term multiplied by the time delay, r/c. Finally, the third term, with d²(r/r)/dt², enables electromagnetic radiation.

Radiation V1p28-4

The radiation piece of our complex equation is:

E = –q/(4πε₀c²) {d²(r/r)/dt²}

This says, when an electric charge accelerates, it creates an electric field that diminishes as 1/r. The acceleration generally has components along all three axes of space. We can make this a bit simpler, because we only need account for the acceleration perpendicular to our line-of-sight. For specificity, let z be the axis along the line-of-sight, and let x and y be the transverse axes. Let’s see why we are only concerned with xy-acceleration. Figure 32-1 shows a unit vector, one whose tip is always on a sphere of radius 1, which has rotated from the dashed line to the more vertical solid line. The observer O is very far away to the right.

Figure 32-1 Projected Charge Accelerations

The unit vector’s change is decomposed into two solid-line vectors, a radial component (horizontal) and a transverse component (vertical). We now project the solid-line vectors onto an xy-plane near the observer. Radial changes are greatly reduced in projection since they are always nearly parallel to the line-of-sight. These diminish as 1/r², rapidly becoming negligible at large r. Conversely, transverse accelerations a_xy of the charge are reduced in projection only by the factor 1/r; these are the changes responsible for long-range radiation.

If the charge moves slowly (v<<c), this radiation field dominates at distances r>>λ, which is called the wave zone.

Vector Fields

The electric field is one of many important vector fields in physics. Vector fields were mentioned in Chapter 10, but their description probably bears repeating. In physics, a field is any function of the spacetime coordinates (such as t, x, y, and z). Temperature is an example of a scalar field; it has a single value at each location and each moment in time. Wind is an example of a vector field. At each location and moment in time, the wind blows with a certain velocity (speed and direction); a wind vector field describes the wind by specifying its velocity vector at each spacetime coordinate. Clearly vector fields are of interest not just in physics, but also in meteorology and many other disciplines.

Wave Character of EM Radiation

In V1p29-1, Feynman probes deeper into the wave character of electromagnetic radiation, beginning with the concept of retarded time. The upper image in Figure 32-2 plots an arbitrary acceleration curve for a radiating charge as a function of t, time measured at the charge’s location.

Figure 32-2 Acceleration vs Time & Electric Field vs Distance

The lower image in Figure 32-2 plots the electric field at one moment in time as a function of distance from the charge. To equalize the scale of the horizontal axes, we divided distance by c.

Now consider two dipole sources, S₁ and S₂, separated by a distance D, observed at a remote point P with r>>D. Let’s assume the dipole charges are sinusoidally accelerated, and define A to make the E fields simple:

S₁: a_y = –(4πε₀c²/qω²) A cos(ωt)
S₂: a_y = –(4πε₀c²/qω²) A cos(ωt+ø)

Here ø is the phase shift between the dipole accelerations. For simplicity, we define time t so that the phase angle of S₁ is zero. The radiation field at P from each dipole is:

S₁: E = (A/r) sinθ cos(ωt*)
S₂: E = (A/r) sinθ cos(ω[t*–dt]+ø)

where t* is the retarded time t–r/c, and dt is the extra travel time of the wave from S₂. As the figure shows, S₂ is farther from P than S₁ by the amount Dcosθ. That extra path length corresponds to an extra travel time of dt=(D/c)cosθ. The electric field at P at time t arises from actions that occur at S₁ at time t* and at S₂ at the earlier time t*–dt.

We can rewrite the phase shift of S₂ as Φ=–ωdt+ø. The E field from S₂ is then:

S₂: E = (A/r) sinθ cos(ωt*+Φ)

For ø=0 and θ=π/2, P is on the z-axis, equidistant from S₁ and S₂, their waves have zero phase shift (Φ=–ωdt+ø=0+0) and interfere constructively. Alternatively, if we pick ø=ωD/(c√2), the phase shift between waves becomes:

Φ = –ωdt+ø = –ω(D/c)cosθ + ωD/(c√2)

The phase shift Φ is zero at θ=π/4, and the waves interfere constructively along the 45-degree line. Thus, we can steer the direction of maximum radiation by adjusting ø.

Mathematical Methods for Interference

Define the sum of the electric fields from S₁ and S₂ to be E. The intensity of radiation at any remote location is E². For the two dipoles above:

E = (A/r) sinθ {cos(ωt*) + cos(ωt*+Φ)}

In V1p29-6, Feynman says there are three mathematical approaches to solving such interference problems. The first he calls the trigonometric method — just grind through the trig functions. Feynman recalls a useful trig equation:

cosA + cosB = 2cos[(A+B)/2] cos[(A–B)/2]

I don’t remember learning that in high school trig, but I’ll add it to my toolbox — I’ll at least remember that an equation does exist for cosA+cosB. Employing that, E becomes:

E = (2A/r) sinθ cos[ωt*+Φ/2] cos[Φ/2]

E has a phase shift halfway between those of S₁ and S₂, and its amplitude is reduced by the cosine of that half phase shift.

The intensity of radiation, averaged over a full oscillation cycle, is the time-average value of E².

⟨E²⟩ = 4(A/r)² sin²θ cos²[Φ/2] ⟨cos²[ωt*+Φ/2]⟩

Recall that the average value of cos² is 1/2, and that:

2cos²[Φ/2] = cos²[Φ/2] + 1 – sin²[Φ/2]
2cos²[Φ/2] = cos[Φ/2+Φ/2] + 1
2cos²[Φ/2] = cos[Φ] + 1

This makes the average intensity I from the two dipoles:

I = (A/r)² sin²θ {1+cosΦ}

where Φ=–ω(D/c)cosθ+ø, D is the dipole separation, and ø is the electron acceleration phase shift between the two dipoles. The average intensity from S₁ alone is:

S₁: I₁ = (1/2) (A/r)² sin²θ

S₂ has the same average intensity. Hence the average intensity of the interfering sources varies between zero (Φ=π) and four times the intensity of one lone source. This is a general property of interference effects of two equal sources.
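Here is a quick numerical check of that result (a Python sketch of my own, with A/r and sinθ set to 1 so the closed form is simply 1+cosΦ):

```python
import numpy as np

# Time-average E^2 = [cos(ωt*) + cos(ωt* + Φ)]^2 over one full cycle, and
# compare with the closed form 1 + cosΦ (zero at Φ=π, four times one
# source's average of 1/2 at Φ=0).
t = np.linspace(0.0, 2 * np.pi, 100000, endpoint=False)   # one cycle of ωt*
for phi in (0.0, np.pi / 3, np.pi / 2, 2 * np.pi / 3, np.pi):
    E = np.cos(t) + np.cos(t + phi)
    avg = np.mean(E ** 2)
    assert abs(avg - (1.0 + np.cos(phi))) < 1e-9
```

The assertions pass for every phase shift, confirming the trigonometric grind above.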

Feynman’s second approach is a geometric method. Figure 32-7 shows the electric vector fields from sources S₁ and S₂ at time t=0, and adds these graphically to produce vector E.

Figure 32-7 Adding Dipole Fields Graphically

As time advances, all three vectors, S₁, S₂, and E, rotate counterclockwise in unison at frequency ω. By symmetry, the phase shift of E is half that of S₂. From the equation for the diagonal of a rhombus:

E² = (A/r)² sin²θ cos²[ωt*+Φ/2] {2+2cosΦ}

Averaged over a full cycle, the intensity is just as above:

I = (A/r)² sin²θ {1+cosΦ}

Feynman calls the third approach the analytical method; it employs complex numbers. He and I agree this is the best of the three. Recall how effective complex numbers were in analyzing harmonic oscillators in Chapters 12 through 14. With complex numbers, the graphical representation is the same as Figure 32-7, except y becomes the imaginary axis. We write E and the electric fields from S₁ and S₂ as complex numbers.

E = (A/r) sinθ {exp(iωt*) + exp(iωt*+iΦ)}
E = (A/r) sinθ exp(iωt*) {1 + exp(iΦ)}

The exp(iωt*) term is the oscillating factor. To get the amplitude of E, we calculate EE*, E times its complex conjugate, which is simply E with i replaced by –i.

E² = EE* = (A/r)² sin²θ {1+exp(iΦ)} {1+exp(–iΦ)}
= (A/r)² sin²θ {1 + exp(iΦ) + exp(–iΦ) + 1}
= (A/r)² sin²θ {2 + cosΦ + isinΦ + cosΦ – isinΦ}
E² = 2(A/r)² sin²θ {1+cosΦ}

The full-cycle average radiation intensity equals E²/2, since on average half of E² is along the real axis and half along the imaginary axis. Hence, all three methods yield the same result, as logic demands:

I = (A/r)² sin²θ {1+cosΦ}
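The heart of the analytical method is the one-line identity |1+exp(iΦ)|² = 2(1+cosΦ). A tiny Python check (my addition, not from the lectures):

```python
import numpy as np

# Verify |1 + exp(iΦ)|^2 = 2(1 + cosΦ) across a range of phase shifts.
for phi in np.linspace(0.0, 2 * np.pi, 13):
    E2 = abs(1.0 + np.exp(1j * phi)) ** 2
    assert abs(E2 - 2.0 * (1.0 + np.cos(phi))) < 1e-12
```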

Multiple Dipole Interference

Consider a line of J identical dipole radiators, called S₁ through S_J, with adjacent dipoles separated by distance D, and with all accelerating their electrons vertically up and down. Figure 32-8 shows the dipoles as viewed from above, with each dot representing one dipole, and all their electrons moving in and out of the screen. North and east are indicated by N and E.

Figure 32-8 Line of Dipole Radiators

Note that we’ve rotated our viewing orientation by 90 degrees from the previous analyses. The polar angle previously called θ is now fixed at π/2, since we assume P and the dipoles are all at the same elevation.

Consider radiation reaching a remote location P, a distance r from the dipoles. From the figure, we see that the path to P from dipole S_J is longer than the path from dipole S₁ by an amount x = (J–1)Dcosα. As we derived for the case of two dipoles, this extra path length corresponds to a time delay of (J–1)(D/c)cosα. Similarly, for any other dipole, say dipole K, the time delay is dt_K = (K–1)(D/c)cosα.

Let the acceleration of electrons at dipole K be:

S_K: a_vert = –(4πε₀c²/qω²) A cos(ωt₀+[K–1]ø)

Here, t₀ is the time at the dipole array. Note that the phase at the source shifts by +ø for each successively higher numbered dipole relative to its predecessor. The total phase shift at point P, of dipole K relative to dipole 1, is the sum of: (the source phase shift) minus (ω dt_K for the extra path length delay).

Φ_K = (K–1)ø – ω(K–1)(D/c)cosα

We can rewrite Φ_K in terms of λ rather than ω.

ωD/c = 2πDf/c = 2πD/λ

Φ_K = (K–1) (ø – 2π[D/λ]cosα)

The electric field at P due to dipole K is:

S_K: E_K = (A/r) cos(ωt+Φ_K)

Here, t=t₀–r/c is the retarded time. (That gets rid of the asterisk on t.)

Finally, then, using complex numbers, we are ready to calculate the total radiation E at point P from all dipoles by summing the contributions of each dipole from K=1 to K=J:

E = Σ_K { (A/r) exp[i(ωt+Φ_K)] }
E = (A/r) e^{iωt} Σ_K { exp[iΦ_K] }

If all Φ_K=0, the sum becomes Σ_K {1} = J. This means all dipoles interfere constructively, and the maximum time-averaged radiation intensity increases with the square of the number of dipoles:

I_MAX = EE*/2 = (A/r)² (J²/2)

For convenience, define I₁ to be the radiation intensity at any angle α from a single source.

I₁ = (A/r)²/2

The peak intensity from J sources is now:

I_MAX = I₁ J²

In V1p29-4, Feynman discusses some interesting cases in which all Φ_K=0, for which:

0 = Φ_K = (K–1)(ø – 2π[D/λ]cosα), for all K
ø = 2π [D/λ] cosα

If this dipole array is near Pasadena, California, where does its peak radiation go? A simple case is ø=0 and α=±π/2 (cosα=0). Peak radiation intensity is emitted westward and eastward, perpendicular to the north-south line of dipoles. This is optimal for transmitting radio signals from Pasadena to Hawaii or Charlotte, North Carolina. Alternatively, for ø=2πD/λ, peak radiation is emitted at cosα=±1, which means α=0 and α=π. This is optimal for transmitting to Alberta, Canada and Easter Island. By selecting the right ø, this so-called phased array transmitter can aim its beam in any desired direction. This selection can be done electronically without needing to move any dipoles. Feynman says we can even arrange to transmit to Alberta while keeping Easter Island in the dark, if the dipoles are separated by quarter-wavelengths. For D=λ/4:

Φ_K = (K–1)(ø – [π/2]cosα)

For ø=π/2, all Φ_K=0 for cosα=+1, which means maximum power is transmitted north towards Alberta. At the same time, all Φ_K=(K–1)π for cosα=–1, which means each dipole interferes destructively with its neighbor. For an even number of dipoles, no power is transmitted south towards Easter Island. Choosing ø=–π/2 reverses all that, transmitting to the moai but not to the Canucks.

Feynman says peak intensity is radiated at intermediate angles if D>λ. Let’s see why. If Φ_K=2π(K–1)n, for some integer n, the waves from all dipoles have phase shifts that are integer numbers of full cycles, which is equivalent to no phase shifts at all and which thus results in peak intensity. For ø=0:

2πn(K–1) = Φ_K = (K–1)(0 – 2π[D/λ]cosα)
n = –[D/λ]cosα
cosα = –nλ/D

For D<λ, the magnitude of nλ/D exceeds 1 for any nonzero n, so only n=0 works. For D>λ, additional solutions exist for n≠0. Feynman mentions the case of D=2λ.

For n=–1, cosα = +0.5, and α=π/3
For n=+1, cosα = –0.5, and α=2π/3

We have examined the directions of peak radiation for various cases. Lesser amounts of radiation will also be emitted at other angles. Let’s go back to our multi-dipole equation and explore all the angles.

E = (A/r) e^{iωt} Σ_K { exp[iΦ_K] }

Let’s define some variables to reduce the clutter:

u = ø – 2π[D/λ]cosα
Φ_K = (K–1) u
z = exp{iu}

And now examine the sum in the above equation for E.

Σ_K { exp[iΦ_K] } = Σ_K { z^(K–1) } = 1 + z + z² + … + z^(J–1)

This is the difference of two infinite sums.

= (1 + z + z² + …) – z^J (1 + z + z² + …)
= 1/(1–z) – z^J/(1–z)

Σ_K { exp[iΦ_K] } = (1–z^J)/(1–z)
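If you don’t trust the geometric-series shortcut, here is a quick check in Python (my addition; J and the u values are arbitrary, avoiding u=0 where the closed form is undefined):

```python
import numpy as np

# Compare the direct sum 1 + z + ... + z^(J-1) with (1 - z^J)/(1 - z).
J = 7
for u in (0.3, 1.1, 2.5, 5.0):
    z = np.exp(1j * u)
    direct = sum(z ** (K - 1) for K in range(1, J + 1))
    closed = (1.0 - z ** J) / (1.0 - z)
    assert abs(direct - closed) < 1e-12
```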

The above is valid for u≠0. Next we need this sum times its complex conjugate. Let’s do the numerator first.

(1–z^J)(1–z^J)* = (1–e^{iJu})(1–e^{–iJu})
= 1 + 1 – e^{iJu} – e^{–iJu}
= 2 – cos(Ju) – isin(Ju) – cos(Ju) + isin(Ju)
(1–z^J)(1–z^J)* = 2 – 2cos(Ju)

The denominator is the same as above with J=1:

(1–z)(1–z)* = 2 – 2cos(u)

The radiation intensity is:

I = EE*/2
I = I₁ {1–cos(Ju)}/{1–cos(u)}

We can rewrite this using the trig relation (1–cosθ) = 2sin²(θ/2).

I = I₁ sin²(Ju/2)/sin²(u/2)

Let’s try an example. For D=λ/2, ø=0, and angle α very close to 90 degrees: α=(π/2)+ε, with ε<<1, we have u = –2π(D/λ)cosα = π sin(ε) ≈ πε. For small u, the intensity can be written:

I = 4 I₁ sin²(Ju/2)/u²
I = I₁ J² sin²(x)/x² with x = Ju/2

The ratio (sinθ)/θ, a common expression in signal theory, is called sinc(θ). It is plotted in Figure 33-2. (Sinc is not to be confused with sinh, the hyperbolic sine function.)

Figure 33-2 Sinc²(θ) = (sinθ)²/θ²

According to math tables, integrating from u=–∞ to u=+∞:

∫ sin²(θ)/θ² dθ = π
∫ sin²(Ju/2)/(Ju/2)² d(Ju/2) = π
∫ sin²(Ju/2)/u² du = πJ/2

From this, the integral of radiation intensity surrounding a major peak, including all minor peaks, is:

∫ I du = I₁ (2πJ)
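Rather than trusting the math tables, we can check these integrals numerically (a Python sketch of my own; the grid and cutoff are arbitrary, and the rescaled integral evaluates to πJ/2, consistent with ∫I du = I₁·2πJ for I ≈ 4I₁sin²(Ju/2)/u²):

```python
import numpy as np

# Midpoint-rule check of  ∫ sin²θ/θ² dθ = π  and  ∫ sin²(Ju/2)/u² du = πJ/2,
# using even symmetry to integrate over (0, 400) and doubling.
du = 2e-4
u = (np.arange(2_000_000) + 0.5) * du
J = 10

base = 2.0 * np.sum(np.sin(u) ** 2 / u ** 2) * du
scaled = 2.0 * np.sum(np.sin(J * u / 2.0) ** 2 / u ** 2) * du

assert abs(base - np.pi) < 0.01
assert abs(scaled - np.pi * J / 2.0) < 0.05
```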

In V1p30-3, Feynman notes that while the radiation intensity peaks at u=0, it has the same peaks at u=2πm for any integer m. We plug that into the equation for u, and set ø=0 for simplicity:

2πm = –2π[D/λ]cosα
cosα = –mλ/D

For λ>D, the magnitude of mλ/D is greater than 1 for any nonzero integer m. This results in only two complete radiation patterns (each with one major peak and many nulls and minor peaks) centered at α=±π/2, the directions perpendicular to the line of dipoles. For D>λ, the magnitude of mλ/D is less than 1 for at least three values of m (–1, 0, +1). This means the radiation pattern repeats in its entirety at least four times: twice when cosα=±λ/D, and twice when cosα=0. In this case, m is called the beam order. For nonzero ø and λ>D, radiation peaks at:

0 = u = ø – 2π[D/λ]cosα
cosα = (ø/2π)(λ/D)

Adjusting ø steers the two radiation patterns in any desired anti-parallel directions: at angles α and α+π.
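The array intensity I/I₁ = sin²(Ju/2)/sin²(u/2) is easy to explore numerically. Here is a Python sketch of my own (J=8 dipoles; the broadside and steered cases mirror the Pasadena examples above):

```python
import numpy as np

def intensity_ratio(alpha, J, D_over_lam, phase):
    """I/I1 = sin^2(Ju/2) / sin^2(u/2) for J equally spaced dipoles."""
    u = phase - 2.0 * np.pi * D_over_lam * np.cos(alpha)
    den = np.sin(u / 2.0) ** 2
    num = np.sin(J * u / 2.0) ** 2
    # The u -> 0 (or u -> 2πm) limit of the ratio is J^2
    return np.where(den < 1e-12, float(J) ** 2, num / np.maximum(den, 1e-12))

alpha = np.linspace(0.0, np.pi, 100001)

# ø = 0, D = λ/2: peak of J² broadside, at α = π/2
I = intensity_ratio(alpha, J=8, D_over_lam=0.5, phase=0.0)
assert abs(I.max() - 64.0) < 1e-6
assert abs(alpha[np.argmax(I)] - np.pi / 2) < 1e-3

# ø = π/2, D = λ/4: beam steered to cosα = +1 (α = 0), with a null at α = π
I2 = intensity_ratio(alpha, J=8, D_over_lam=0.25, phase=np.pi / 2)
assert abs(alpha[np.argmax(I2)]) < 1e-3
assert I2[-1] < 1e-6
```

Changing `phase` swings the beam electronically, exactly the phased-array steering described above.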

Diffraction Gratings

A linear array of dipole radiators works well at radio and television frequencies. For much higher frequency light, diffraction gratings produce similar results. In V1p30-3, Feynman describes one type of diffraction grating, an array of tiny, parallel, equally spaced grooves on the surface of a glass plate. The grooves scatter light differently than the virgin glass. When illuminated by a light beam, each groove becomes a new light source, and the light waves from these sources interfere in precisely the same manner as those from a dipole array, but at a vastly different wavelength. As a side note, it’s amazing what people can achieve when sufficiently motivated. In 1899, Henry Grayson made a mechanical ruling machine to create diffraction gratings with 120,000 lines per inch (212 nanometers/groove); that groove spacing is about half the wavelength of blue light. Consider a narrow beam of light reflecting from a flat diffraction grating, whose line spacing is D. Let the angles between the surface and the incident and the reflected beams be α* and β*, respectively, as shown in Figure 33-3, in which dots represent diffracting grooves running in and out of the screen.

Figure 33-3 Scattering from Diffraction Grating

The figure shows two rays hitting adjacent diffraction grooves. We assume the light source and the final observer are so far away that incident rays I₁ and I₂ are parallel, as are reflected rays R₁ and R₂.

In the figure, I₂ has a longer path to reach the surface than I₁ by the amount y=Dcosα*. Also, the reflected ray R₁ has a longer return path than R₂ by the amount x=Dcosβ*. Angles α* and β* are defined in the same manner we used for dipoles: relative to the line of radiators. With diffraction gratings, incident and diffracted angles are by convention measured relative to the normal to the surface, which are the complementary angles: α=π/2–α*, and β=π/2–β*. Using normal angles, y=Dsinα and x=Dsinβ.

Rays R₁ and R₂ interfere constructively if their relative phase shift is zero, which requires:

Dsinβ = Dsinα, or: β = α

Thus the rays interfere constructively, producing peak intensity reflection, when incident and reflected angles are equal, confirming the result of Chapter 30. As discussed earlier, this provides a deeper understanding of reflection. Light doesn’t “know” at what angle it is “supposed” to reflect. Instead, atoms in the reflecting surface absorb the incident light, and re-emit light in all directions. Waves leaving the grating interfere constructively when the reflected angle equals the incident angle. The constructive interference defines light’s path. When individual atoms form a “diffraction grating”, their spacing D is much less than the wavelength of all light except extremely high-energy gamma rays. For L>>z, tapering by the factor exp{–(r–z)/L} becomes appreciable only for ρ values that contribute little to the total integral, due to rapidly varying phase shifts. The integral from r=z to r=∞, still dropping the projection factor (z/r), would be:

∫ exp{–iωr/c – r/L + z/L} dr = –1/(iω/c + 1/L) exp{–iωr/c – r/L + z/L}, evaluated from r=z to r=∞
= +1/(iω/c + 1/L) exp{–iωz/c}

This is at least well defined, and if L>>c/ω, the 1/L term above is negligible. None of this addresses the missing (z/r) projection term. Oh well. Feynman says the end result, which matches our exponential tapering, is:

∫ = ω²Aqµ/(2ε₀c²) exp{iωt} [c/(–iω)] (–exp{–iωz/c})
= –qµ/(2ε₀c) iωA exp{iω(t–z/c)}

Field at P = –qµ/(2ε₀c) v(t–z/c)

Here, v(t–z/c) is the velocity of the radiating charges retarded by the distance between P and the plane, which is the only distance scale in this exercise. Feynman says this result is valid everywhere, even very close to the charge plane, where his approximations are clearly invalid. Not very satisfying.
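The tapered integral itself is easy to verify by brute force. Here is a Python sketch of my own (the values of ω, L, and z are arbitrary, chosen so L >> c/ω, meaning the taper is much slower than the oscillation):

```python
import numpy as np

# Numerically evaluate  ∫_z^∞ exp(-iωr/c - (r-z)/L) dr  on a midpoint grid,
# truncated at r = z + 10L (the taper makes the remainder ~e^-10), and
# compare with the closed form  exp(-iωz/c) / (iω/c + 1/L).
w = 2.0 * np.pi * 1.0e6          # ω, rad/s  (c/ω ≈ 48 m)
c = 3.0e8                        # m/s
L = 1.0e4                        # taper length, m
z = 10.0                         # m

dr = 0.5
r = z + (np.arange(int(10 * L / dr)) + 0.5) * dr
numeric = np.sum(np.exp(-1j * w * r / c - (r - z) / L)) * dr
analytic = np.exp(-1j * w * z / c) / (1j * w / c + 1.0 / L)

assert abs(numeric - analytic) / abs(analytic) < 1e-3
```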

Chapter 33 Review: Key Ideas

1. Diffraction and interference are really the same physics with two different names. In both phenomena, waves combining along different paths interfere constructively or destructively as determined by their relative phase shifts.

2. For a linear array of J emitting sources, perhaps a diffraction grating, with source spacing D, wavelength λ, and successive phase shift between sources ø, the radiation intensity at point P is:

I = I₁ sin²(Ju/2) / sin²(u/2) with u = ø – 2π[D/λ] cosα

Here, I₁ is the peak intensity of a single dipole, and α is the angle between the dipole array and the line-of-sight to P. When u=0, the intensity is at its absolute maximum, with the value:

I_MAX = I₁ J²

I is zero at Ju = ±2π, ±4π, ±6π, …
I has minor peaks near Ju = ±3π, ±5π, …

Adjusting ø steers the radiation pattern in any desired direction. For D<λ, only the m=0 peaks exist, perpendicular to the array. For D>λ, the entire radiation pattern is repeated wherever cosα=mλ/D, for m a nonzero integer; m is called the beam order.

3. Thin films often produce reflections from their upper and lower surfaces that interfere constructively or destructively depending on phase shifts, which vary with changing film thickness, refractive index, and light wavelength and angle. This gives such films dynamic, colorful appearances.

4. The electric field at P generated by a plane of accelerating charges, each with charge q, and with µ charges per unit area, is:

E = –qµ/(2ε₀c) v(t–z/c)

Here, v(t–z/c) is the charge velocity retarded by the distance between P and the plane.

Chapter 34 The Index of Refraction

In Chapter 30, we discussed refraction and said it was due to light moving at different speeds in different media. Each type of medium has a refractive index n, and the speed of light in that medium is c/n. Empty space has n=1, so light’s speed in vacuum is c. Except for extremely exotic circumstances, n is greater than 1 in all objects made of normal matter. In air, n=1.0003 and light’s speed is 0.9997c. In water, n=1.333 and light’s speed is 0.75c. The largest refractive index of any common substance is n=2.42 in diamond, in which light’s speed is 0.41c. This is why a girl’s best friend dazzles.

In Chapter 30 (see Figure 30-3 in particular), we saw that as light passes through a boundary between media of different refractive indices, light bends toward the medium with the higher refractive index and lower light speed. All the above adequately explains most macroscopic refractive phenomena. But in truth, things are more complicated than that. In the microscopic world, electromagnetic fields and individual photons absolutely always travel at speed c — in vacuum, in air, in water, and even in diamond. However, macroscopically, if one measures how long it takes a light beam to move through water, its measured speed is 0.75c. This chapter explains how a light beam can move slower than the individual photons of which it is composed.

Here is a brief summary of the detailed discussion to follow. In a material substance, light is absorbed and re-emitted by the atoms of that medium. Between emission and absorption, all photons move at speed c. However, the re-emission process introduces delays in the form of negative phase shifts. By analogy, the peak speed of an Indy 500 racecar can exceed 237 mph (380 km/hr), but the highest average speed for the entire race is 187 mph (300 km/hr). There’s a big difference between peak speed and average speed.
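The c/n relationship is simple enough to tabulate directly; here is a two-line Python sketch (my addition) reproducing the speeds quoted above:

```python
# Macroscopic light speed in a medium of refractive index n is c/n.
c = 299_792_458.0   # m/s, speed of light in vacuum

media = {"vacuum": 1.0, "air": 1.0003, "water": 1.333, "diamond": 2.42}
for name, n in media.items():
    print(f"{name:8s} n = {n:<6}  v = c/n = {c / n:.3e} m/s = {1 / n:.2f} c")
```

Water gives 0.75c and diamond 0.41c, matching the figures above.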
In V1p31-3, Feynman says analyzing the origin of the refractive index is the most complex topic in his first-year course — not because the principles are opaque but due to the mathematical challenge of properly combining many different simultaneous phenomena. Let me explain his comment with another analogy. The principles of Newtonian gravity are simple. There are just two short equations: F=ma, and F=GMm/r². After all we’ve been through since Chapter 8, those two equations must seem trivial. Yet in our Solar System, the Sun pulls on each planet, each planet pulls back on the Sun, each planet pulls on every other planet and pulls on all its moons, which pull… The complete problem, including all possible combinations of pulling, is both mind-boggling

and mathematically insoluble. If physicists had started with the complete problem, we would still not have a theory of gravity. You will recall we didn’t tackle the complete gravitational problem of our Solar System. We tackled the dominant phenomena, which best illustrated the key principles of physics that are most important to learn. Here again, we will address the dominant phenomena, and explore the most important principles. We leave the complete problem to Feynman Volume 2.

The Refractive Index

The key principles are:

1. The total electric field is the vector sum of the electric fields generated by every electric charge, everywhere.

2. At time t and distance r far away from charge q, its electric field is dominated by the radiation component, which is proportional to three factors: q; 1/r; and q's acceleration at retarded time t–r/c.

3. Retarded time is based on c, the speed of light in vacuum, regardless of any media the field passes through.

Why isn't retarded time based on c/n, the speed of light in whatever medium it is traversing? As we shall see, this is because microscopically, electromagnetic fields and light always travel at speed c.

We start with the simplest case: a flat, thin plate of refractive material, as shown in Figure 34-1. Light source S is far away on the left side of the plate, and far away on the right side is point P, where we wish to know the field.

Figure 34-1 Field at P is Sum of Opposing Fields from Source and Plate

Electromagnetic radiation (light) from source S accelerates electrons in the plate, which become new radiation sources. The field at P is the vector sum of E_S, the field from S, plus the field E_E from all the accelerating electrons in the plate. Mathematically:

E at P = E_S at P + E_E at P

where each E is appropriately time-retarded. As this equation demonstrates, the field at P is altered by the presence of the plate, and the plate is altered by source S. Without S, the plate would have no electrical influence on P. Without the plate, the field at P from S would be much simpler.

There are additional complications, all of which Feynman defers to Volume 2. One such complication is that any particular electron in the plate, call it X, is affected by the fields produced by the motions of all other electrons in the plate, each of which depends on the field produced by electron X. One must, therefore, calculate the mutually self-consistent motions of all plate electrons. Another complication is that the motions of plate electrons also affect the source.

Thankfully, we will now address only a simplified case: we will assume the accelerating electrons in the plate are so few that their effects on the source and on one another are negligible. This assumption is approximately valid if the plate is a low-density medium with refractive index just slightly greater than 1. With these simplifying conditions, we will calculate the total electric field at P and illuminate the origin of the refractive index.

In V1p31-3, Feynman does this calculation in an unusual manner. He starts with the answer and works backward about halfway. He then switches to the initial equations and works forward. The forward and backward results meet in the middle. Being less "creative", I will start at the beginning, and move steadily forward to the final answer.
We define these quantities:

z = horizontal axis normal to the plate
z = 0 at plate center
dz = plate thickness
N = number of plate charges per unit volume
µ = number of plate charges per unit area
q and m = charge and mass of one electron
ω = angular frequency of source S
Ω = natural frequency of plate electrons
β = oscillating electrons' damping coefficient
E_S = electric field from source S
E_E = electric field from all plate electrons

If the source is far enough away from the plate, its electric field is approximately uniform throughout the plate. Choose the electric fields at S, the plate, and P to be entirely vertical, and define t such that the source field E_S is:

E_S at Plate = E e^{iωt}
E_S at P = E e^{iω(t–z/c)}

In the absence of an external force, electrons in the plate are positioned around nuclei, occupying the states with the least total energy that quantum rules allow. The source's oscillating electric field displaces the plate's electrons from their equilibrium positions, making the electrons act like harmonic oscillators. As we discovered in Chapter 12, the simplest equation for a driven harmonic oscillator is:

m d²x/dt² + mΩ²x = F = qE e^{iωt}

We previously found the solution to this equation:

x(t) = qE e^{iωt} / {m(Ω²–ω²)}
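As a sanity check, this trial solution can be substituted back into the driven-oscillator equation numerically; the values of Ω, ω, and t below are arbitrary sample numbers, not taken from the text:

```python
import cmath

# Check that x(t) = qE e^{iwt} / (m(W^2 - w^2)) satisfies
# m x'' + m W^2 x = qE e^{iwt}   (W = natural frequency, w = driving frequency)
q, m, E = 1.6e-19, 9.1e-31, 1.0   # electron charge/mass; unit field amplitude
W, w = 5.0e15, 3.0e15             # sample angular frequencies, rad/s

def x(t):
    return q * E * cmath.exp(1j * w * t) / (m * (W**2 - w**2))

t = 1.0e-16
accel = -(w**2) * x(t)            # d^2/dt^2 of e^{iwt} brings down (iw)^2 = -w^2
lhs = m * accel + m * W**2 * x(t)
rhs = q * E * cmath.exp(1j * w * t)
assert abs(lhs - rhs) < 1e-12 * abs(rhs)   # the trial solution balances the equation
```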

Each electron in the plate oscillates about a different position, but their velocities and accelerations are identical. In the last chapter, we derived the field produced by a plane of accelerating charges and found:

Field at P = –qµ/(2ε₀c) v(t–z/c)

where v(t–z/c) is the charge velocity, time-retarded for distance z. In the current case, the charge velocities are:

v(t) = iωqE e^{iωt} / {m(Ω²–ω²)}

The number of charges per unit area equals the number of charges per unit volume times the plate thickness: µ = N dz. Combining the last three equations, we get the field at P due to all plate electrons:

E_E = –(qN dz)(iωqE) e^{iω(t–z/c)} / {2ε₀mc(Ω²–ω²)}

With all the terms in this equation, it is easy to lose track of its most important part: the leading minus sign. The plate field opposes the source field at P, which we recall is:

E_S = E e^{iω(t–z/c)}

This phenomenon of opposition is an essential property of nature. It is a consequence of the principle of action-begets-reaction and of nature always seeking the lowest energy state. Opposing fields reduce the total field energy: (E_S–E_E)² < (E_S+E_E)². Recall, for example, Boltzmann's equation of statistical mechanics from Chapter 16: the population of states is proportional to exp{–energy/kT}. Mathematically, the opposition arises from the minus sign in the equation: acceleration = –ω² e^{iωt}. Note this opposition does not depend on the polarity of the electron charge. On an anti-matter planet, the plate's positively charged electrons accelerate in the direction opposite to our electrons, but the field they generate is identical, because their radiation field is proportional to qa, charge times acceleration, both of which flip polarity.

Define:

α = q²N / {2ε₀m(Ω²–ω²)}

The field at P is the sum of the source and plate fields:

E = E_S + E_E
E = E e^{iω(t–z/c)} {1 – iωα dz/c}

For small dz, we use the approximation: 1–x ≈ e^{–x}.

E = E exp{iω(t–z/c)} exp{–iωα dz/c}
E = E exp{iω(t–[z+α dz]/c)}

The plate shifts the phase of the total field by the angle –ωα dz/c. A negative phase shift corresponds to delaying the wave: it takes an extra time dt = +α dz/c for the wave to achieve the phase it would have without the plate. This means the plate's field changes the effective retarded time to point P. Without the plate, the source field is retarded by z/c. With the plate, the total field is retarded by [z+α dz]/c. The meaning of this becomes clear if we define n = 1+α. The retarding factor is then:

[z+α dz]/c = [z+(n–1)dz]/c
[z+α dz]/c = [z–dz]/c + n dz/c
[z+α dz]/c = [z–dz]/c + dz/(c/n)

Eureka! The effect of adding the plate of thickness dz is equivalent to light moving at speed c for distance [z–dz] outside the plate, and moving at speed c/n for distance dz inside the plate. The key concept is: all electric fields propagate at speed c, but the plate shifts the phase, which is equivalent to delaying the wave. The apparently slower speed, c/n, is called the phase velocity. The index of refraction n is given by:

n = 1 + α = 1 + q²N / {2ε₀m(Ω²–ω²)}
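The two approximations at the heart of this argument can be checked numerically; the values of ω, α, and dz below are hypothetical numbers for a weak, thin plate, chosen only for illustration:

```python
import cmath

# Phase-delay equivalence: (1 - i*w*a*dz/c) ~ exp(-i*w*a*dz/c), so a thin
# plate delays the wave by a*dz/c, as if light crossed dz at speed c/n, n = 1+a.
c = 2.998e8              # speed of light in vacuum, m/s
w = 3.0e15               # source angular frequency, rad/s (sample value)
a, dz = 3.0e-4, 1.0e-6   # alpha and plate thickness (hypothetical weak thin plate)

phase = w * a * dz / c                 # magnitude of the phase shift
exact = cmath.exp(-1j * phase)
approx = 1 - 1j * phase
assert abs(exact - approx) < phase**2              # 1-x = e^{-x} to first order in x
assert abs(cmath.phase(exact) + phase) < 1e-12     # shift is -w*(n-1)*dz/c, a pure delay
```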

Dispersion

The most interesting part of the equation for n is the factor (Ω²–ω²) in the denominator. In V1p31-6, Feynman notes that we have derived not only an equation "for the index of refraction but we have also learned how the index of refraction should vary with the frequency ω of the light. This is something we would never understand from the simple statement that 'light travels slower in a transparent material.'"

Calculating Ω for any given substance requires quantum mechanics. But we can describe general characteristics for various circumstances, which we divide into three categories: ω<<Ω, ω≈Ω, and ω>>Ω.
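The sign of the denominator drives the qualitative behavior, which is easy to tabulate numerically; the charge density N and natural frequency Ω below are hypothetical sample values, not data from the text:

```python
# n(w) = 1 + q^2 N / (2 eps0 m (W^2 - w^2)): n > 1 below resonance, n < 1 above
q, m, eps0 = 1.602e-19, 9.109e-31, 8.854e-12  # electron charge, mass; vacuum permittivity
N = 1.0e25        # plate charges per m^3 (sample low-density value)
W = 1.0e16        # natural frequency of plate electrons, rad/s (hypothetical)

def n(w):
    return 1 + q**2 * N / (2 * eps0 * m * (W**2 - w**2))

assert n(0.5 * W) > 1    # w << W: ordinary refraction, light appears slowed
assert n(2.0 * W) < 1    # w >> W (e.g. x-rays): phase velocity exceeds c
```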

When ω>>Ω, the denominator of our equation is negative, which makes n less than 1.

Energy conservation determines the constant σ that relates field strength to wave intensity (intensity = σE•E). The power the source's field expends accelerating the plate's electrons must be removed from the wave. The wave energy beyond the plate is proportional to (E_S+E_E)•(E_S+E_E), and E_S>>E_E for small dz and for n≈1. This means we can neglect E_E•E_E compared with E_S•E_E. This leaves:

–2σ E_S•E_E = Power

The power delivered to the plate's electrons equals the force F applied to each electron by the source's electric field × the electrons' velocity v × the number of electrons per unit area. Note that since F and v are collinear, the dot product is simply their algebraic product, with appropriate signs. From above, we have:

Power = Fv N dz = qE e^{iωt} v(t) N dz

Combining this with the equation for E_E in terms of velocity:

–2σ E e^{iωt} E_E = qE e^{iωt} v(t) N dz
–2σ {–qµ/(2ε₀c) v(t)} = q v(t) N dz
σ {N dz/(ε₀c)} = N dz
σ = ε₀c

This means the intensity of an electromagnetic wave, its energy per unit area per unit time, equals ε₀cE•E.
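As a quick numeric illustration, we can invert this relation for bright sunlight, whose intensity of roughly 1000 W/m² is a standard reference value rather than a number from the text (interpreting E•E as the mean-square field):

```python
# Invert intensity = eps0 * c * <E^2> to find the field strength of bright sunlight.
eps0 = 8.854e-12      # vacuum permittivity, F/m
c = 2.998e8           # speed of light, m/s
intensity = 1000.0    # W/m^2, typical bright sunlight at Earth's surface

E_rms = (intensity / (eps0 * c)) ** 0.5
assert 600 < E_rms < 630   # roughly 614 V/m
```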

Diffraction From Holes

We have several times made the strange claim that when light passes through an aperture, it diffracts as if the light were absorbed and re-emitted by every point in the empty space within the aperture. Let's now understand why this is true (within a minus sign).

As discussed, light from a distant source S is attenuated exponentially as it traverses a plate of refractive material. The real negative exponent is proportional to both the plate thickness dz and the absorption index n_I. If the plate is thick enough, virtually no light penetrates the plate. This means the total electric field at point P, far from the plate on the side opposite S, is zero. This is not surprising; most familiar objects are opaque. But what may be surprising is why exactly the field at P is zero.
One of this chapter's key principles is that the field at P is the sum of the fields produced by every accelerating charged particle everywhere. Even if the plate is completely opaque, the electric field produced by source S must reach P. The answer is: the electric field from the plate exactly cancels the electric field from S, and does so at every point P, at every frequency at which the plate is opaque. In the upper image of Figure 34-3, the radiation fields E_S from source S and E_E from the opaque plate exactly cancel one another at point P: E_S = –E_E.

Figure 34-3 Radiation Field at P Upper: Source+Plate=Zero Lower: Source+Plate=–Hole

In the lower image, a portion of the plate has been removed, making a hole. Let's define E_H to be the field that was produced by the portion of the plate that is now removed. Making the hole reduces the plate's field from E_E to E_E–E_H. The total field at P is then:

E = E_S + (E_E–E_H) = –E_H

The polarity of E_H is rarely significant. If our final objective is light intensity, and if no waves interfere except those passing through apertures, this minus sign is irrelevant.
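The bookkeeping above can be sketched with phasors; the 30% share assigned to the removed region is an arbitrary illustrative choice:

```python
import random

# Babinet-style bookkeeping: for an opaque plate, E_E = -E_S at every point P.
# Removing a hole removes its contribution E_H, leaving a total field of -E_H.
random.seed(0)
E_S = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]  # source phasors at 5 points P
E_E = [-e for e in E_S]        # full opaque plate exactly cancels the source
E_H = [0.3 * e for e in E_E]   # hypothetical share from the removed region
total = [s + (e - h) for s, e, h in zip(E_S, E_E, E_H)]
assert all(abs(t + h) < 1e-12 for t, h in zip(total, E_H))   # total = -E_H
```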

Therefore, the correct statement of diffraction from a barrier with an aperture is:

1. No emission comes from the aperture itself.
2. Every atom in the barrier is an emitting source.
3. The total field after the barrier equals minus what the aperture would radiate if it were a barrier instead of a hole.
4. All this assumes the aperture is much larger than the wavelength of light.

Thus, we can calculate fields as if the holes are radiating rather than the barriers, and add a minus sign if necessary. Figure 34-4 illustrates this concept. In each image, plane waves move upward from the bottom. White denotes wave crests (most positive field), black denotes wave troughs (most negative field), and grey denotes zero field.

Figure 34-4 Diffraction of Upward-Moving Waves Left: Field From “Hole” Middle: Field From Barrier Minus Hole Right: Solid Barrier, Zero Field

In the image on the right, everything above the solid black barrier is uniformly gray, indicating the electric field is zero everywhere; the barrier’s field cancels the incident wave. The left image shows the field due to a narrow array of emitters — the “hole.” The center image shows the field due to an opaque black barrier with a narrow aperture — barrier minus hole. Above the barrier, the left and center images are exactly the same but with reversed polarity — their sum equals the right image, zero everywhere. To make the comparison more vivid, the left and center images each show half of the complete picture.

Diffraction & Shadows

With the last section in mind, we will now calculate shadowing by a partial barrier. Before addressing this topic in V1p30-8, Feynman cautions that it is mathematically complex even though all the same physical principles apply. He doesn't solve the equations, and neither will we.

When parallel rays of light are partially blocked by an opaque object, interesting diffraction patterns result. In Figure 34-5, x is the vertical axis, and light rays enter horizontally from the left. An opaque barrier extends downward from point B, which is at height x=0.

Figure 34-5 Shadows Near Barrier

Displaced horizontally from the barrier by distance s is a vertical screen, on which we wish to know the light intensity as a function of x. The radiation field on the screen at a point P is the sum of diffracted waves from every point above the barrier, of which the figure shows three: B, D, and G. Define the x coordinate of P to be p. Point D is also at x=p, so the path length DP equals s. Compare that to the path length to P from point G at x=p+h, assuming h<<s.

For a source moving toward us, the Doppler effect increases light's frequency (ω>Ω for β>0). This is called blueshift, since light is shifted toward blue, the higher frequency end of the visible spectrum. For a source moving away from us, the Doppler effect decreases light's frequency (ω