Feynman Simplified 1A: Basics of Physics & Newton’s Laws [2 ed.]



Feynman Simplified 1A: Basics of Physics & Newton’s Laws Everyone’s Guide to the Feynman Lectures on Physics by Robert L. Piccioni, Ph.D. Second Edition

Copyright © 2015 by Robert L. Piccioni Real Science Publishing 3949 Freshwind Circle Westlake Village, CA 91361, USA Edited by Joan Piccioni

All rights reserved, including the right of reproduction in whole or in part, in any form. Visit our web site www.guidetothecosmos.com

Everyone’s Guide to the Feynman Lectures on Physics

Feynman Simplified gives mere mortals access to the fabled Feynman Lectures on Physics. Caltech Professor and Nobel Laureate Richard Feynman was the greatest scientist since Einstein. I had the amazing opportunity to learn physics directly from the world’s best physicist. He had an uncanny ability to unravel the most complex mysteries, reveal underlying principles, and profoundly understand nature. No one ever presented introductory physics with greater insight than did Richard Feynman. He taught us more than physics — he taught us how to think like a physicist.

But the Feynman Lectures are like “sipping from a fire hose.” His mantra seemed to be: No Einstein Left Behind. He sought to inspire “the more advanced and excited student”, and ensure “even the most intelligent student was unable to completely encompass everything.” My goal is to reach as many eager students as possible, and bring Feynman’s genius to a wider audience. For those who have struggled with the Big Red Books, and for those who were reluctant to take that plunge, Feynman Simplified is for you. I make Feynman’s lectures easier to understand without watering down his brilliant insights.

Feynman Simplified is self-contained; you do not need to go back and forth between this book and the Lectures. But for those who wish to read both, I provide references to Feynman’s books thusly: V1p12-9 denotes Volume 1, chapter 12, page 9. So, if you have trouble with Feynman’s description of reversible machines in Volume 1 page 4-2, simply search this eBook for V1p4-2.

Rather than track his lectures line-for-line, some material is presented in a different sequence. The best way to divide material into one-hour lectures is not necessarily the best way to present it in a book. Many major discoveries have been made in the last 50 years; Feynman Simplified updates these lectures, informing readers of the latest developments.
Links to additional information on many topics are provided in the text. Physics is one of the greatest adventures of the human mind, but with adventure comes challenge. Even “simplified” Feynman physics will be one of the most intellectually challenging courses you will ever take. Give it your very best and you will get the most out of it. Enjoy Exploring.

THIS BOOK

Feynman Simplified: Physics 1A covers about the first quarter of Volume 1, the freshman course, of The Feynman Lectures on Physics. This Second Edition of 1A contains many improvements, thanks in large part to feedback from readers like you. The topics we explore include:

What is Science? What is Physics?
Nature’s Limitless Diversity & Underlying Unity
Nature’s Smallest & Largest Parts
Matter is Made of Atoms
Atoms are Made of Elementary Particles
Measurement, Units & Dimensional Analysis
Time, Distance & Motion
Energy and its many forms:
    Kinetic – Energy due to Motion
    Work – Force × Distance
    Heat – Atomic-scale Motion
    Potential – Energy due to Location
    Electromagnetic
    Mass – Condensed Energy
Momentum — Mass in Motion
Conservation Laws: Total Amounts that Never Change:
    Energy, the Sum of all its Forms
    Momentum, in any direction
    Electric Charge
Newton’s Laws of Motion
Universal Gravity: Orbits & Tides
The Character of Force

Essential Math for Physicists:
    Zeno’s Paradox & Infinite Series
    Vectors – 3-D Simplified
    Derivatives — Rate of Change
    Integrals — Sum of Small Changes

To find out about other eBooks in the Feynman Simplified series, click here. For a free downloadable index to the entire Feynman Simplified series, click here. I welcome your comments and suggestions: click here. If you enjoy this eBook please do me the great favor of rating it on Amazon.com or BN.com.

Table of Contents

Chapter 1: What is Science?
Chapter 2: Basic Physics
Chapter 3: Physics: Mother of All Sciences
Chapter 4: Conservation of Energy
Chapter 5: Time and Distance
Chapter 6: Motion
Chapter 7: Newton’s Laws of Motion
Chapter 8: Newton’s Law of Universal Gravity
Chapter 9: The Character of Force
Chapter 10: Work & Potential Energy
Chapter 11: Review

Chapter 1
What is Science?

Nobel Laureate Richard P. Feynman (1918-1988) begins his legendary Feynman Lectures on Physics by examining the meaning, methodology, capability, and purpose of science. In V1p2-1, Feynman writes:

“The things with which we concern ourselves in science appear in myriad forms, and with a multitude of attributes. For example, if we stand on the shore and look at the sea, we see the water, the waves breaking, the foam, the sloshing motion of the water, the sound, the air, the winds and the clouds, the sun and the blue sky, and light; there is sand and there are rocks of various hardness and permanence, color and texture. There are animals and seaweed, hunger and disease, and the observer on the beach; there may be even happiness and thought. Any other spot in nature has a similar variety of things and influences. It is always as complicated as that, no matter where it is. Curiosity demands that we ask questions, that we try to put things together and try to understand the multitude of aspects as perhaps resulting from the action of a relatively small number of elemental things and forces acting in an infinite variety of combinations.

“For example: Is the sand other than the rocks? That is, is the sand perhaps nothing but a great number of very tiny stones? Is the moon a great rock? If we understood rocks, would we also understand the sand and the moon? Is the wind a sloshing of air analogous to the sloshing motion of water in the sea? What common features do different movements have? What is common to different kinds of sound? How many different colors are there? And so on.
In this way we try gradually to analyze all things, to put together things which at first sight look different, with the hope that we may be able to reduce the number of different things and thereby understand them better.”

Over several centuries, people devised a systematic method to attempt to answer such questions: the scientific method, comprising observation, reason, imagination, and experiment. Science is society’s organized effort to understand nature. Science does not attempt to answer all questions, perhaps not even the most important questions. Rather, science focuses on observable physical reality. By understanding the physics of our world, we hope to better appreciate its magnificence, and better help society take advantage of opportunities and avoid hazards.

The key principle of science that distinguishes it from other human endeavors is its emphasis on testing. In V1p1-1, Feynman says: “Experiment is the sole judge of scientific ‘truth’.” Science has no Pope, no Supreme Court, and no Parliament empowered to enact its laws. Even the most esteemed scientists are not infallible. Einstein was wrong about half the time — the rest of us aren’t nearly that good. Science advances by insisting that truth be determined objectively, by observations of nature.

Prime scientific evidence is not locked in some vault; rather, it is reproducible by any competent person, anywhere, anytime. For example, the generally accepted atomic mass of oxygen is 15.999±0.004 AMU because that is the consensus of many independent measurements. But no one should carve that number in stone, because this is only the currently accepted value. One day more precise measurements might yield 15.991±0.001. If confirmed, this new result will become the new generally accepted value. If all this sounds a bit messy, that’s because it is — scientific knowledge advances but is rarely absolute. Quantifying uncertainty and intelligently dealing with it are essential in science. No one said science was easy.

Science progresses when its two primary disciplines, experiment and theory, work well together. Feynman says experiments give us hints from which we hope human imagination will guess nature’s “wonderful, simple, but very strange patterns” so we can create mental models of how nature works. Humans excel at creating mental models of reality; it is one of our most distinguishing attributes. These models are sometimes called theories, hypotheses, or even laws of nature. But model seems less pretentious and that term is becoming more common.

After creating a model, the next critical steps are prediction and testing. Feynman said: “The basis of science is its ability to predict [correctly].” (I follow the standard convention when quoting others: any changes I make are enclosed in [ ]’s.) People create an endless bounty of ideas about how nature works. Many of these ideas are wonderful and innovative, but almost all are incorrect — nature holds its secrets close. Science succeeds by demanding that our models make unique and definitive predictions, all of which must be validated by experiment. As Einstein said: “a thousand experiments cannot prove me right, but one experiment can prove me wrong.”

Brownian Motion

An example should help clarify the process of science. For 25 centuries, scientists and natural philosophers debated in vain whether matter was continuous or discrete. Here is the difference: if a drop of water is cut in half, and then halved again and again, ad infinitum, will we ever reach an end — a discrete, irreducible, smallest piece of water? Or will we always have a continuous drop that can be halved yet again? Greek philosopher Democritus believed that matter was indeed made of tiny irreducible parts that he called atomos, a Greek word meaning uncuttable. Without modern technologies, no one could devise a means of definitively settling the atomic debate. In fact, by 1900, many leading scientists rejected the atomic model of matter.

In 1827, English botanist Robert Brown made a seemingly unrelated discovery: when viewed under a powerful microscope, tiny pollen grains suspended in liquid move constantly and chaotically, without any apparent cause. For 78 years, no one could explain this mysterious Brownian motion. Then in 1905, Einstein postulated that this erratic motion was due to atoms, too small to be seen in microscopes of the day, which continually collided with and jostled the much larger pollen grains. Einstein derived a diffusion equation that precisely predicted how Brownian motion varies with temperature, pollen size, liquid viscosity, and atomic mass. Meticulous experiments by French physicist Jean Baptiste Perrin confirmed Einstein’s predictions. Einstein’s model and equation, validated by experiment, finally established that atoms really do exist and enabled the first measurements of their masses.

Successive Approximation

In V1p1-1, Feynman stresses that our knowledge will always be “merely an approximation to the complete truth.” This is in part because we do not yet know, and may never know, all the laws of nature. In addition, our instruments will improve, but will never achieve perfect precision. While perfection is likely impossible, continual refinement is expected. What may be surprising, as Feynman emphasizes in V1p1-2, is that improved precision sometimes reveals tiny discrepancies, which can require dramatic changes in our models.

Consider an example in the theory of mechanics developed by Sir Isaac Newton (1642-1727). Before 1905, everyone believed that the mass of a body was the same whether it was moving or not. As Feynman said: people believed “a spinning top has the same weight as a still one.” The law was: mass is constant. Yet, Albert Einstein (1879-1955) said that law was wrong, mass increases with increasing speed — a claim confirmed by subsequent experiments. For “normal” speeds, the mass increase is extremely small, far less than the weighing precision of the day. For example, at 30,000 miles per hour, a body’s mass increases by only one part per billion. One might amend the old law to state: mass is constant to one part per billion for all speeds below 30,000 miles per hour. The amended law is numerically correct, and after all, who moves that fast? But as Feynman says, that amended law is “philosophically completely wrong.” The correct understanding is espoused by Einstein’s theory of special relativity, which revolutionizes our entire worldview.

Was Newton wrong? Perhaps it is better to say that Newton’s laws have a limited scope. For everyday activities, Newton’s laws are adequate for almost any purpose. But for more extreme conditions, we need the greater precision, and more importantly, the deeper understanding of Einstein’s relativity.

William Stoeger, a Ph.D. physicist and Jesuit priest at the Vatican Observatory, gave an intriguing lecture that I attended. The gist of his message was: science and faith are each never-ending quests for the least inadequate description of Truth. (How proud would you be taking home a report card saying you were the least inadequate student?) I cannot address how Stoeger’s statement relates to faith, but his description of science is correct. Science will always have unanswered questions, yet scientists have faith that we are inching ever closer to nature’s ultimate truth.
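The one-part-per-billion figure quoted earlier for a body moving at 30,000 miles per hour is easy to verify. The fractional mass increase is γ − 1, where γ = 1/√(1 − v²/c²). The following back-of-the-envelope check is my own sketch, not from the text:

```python
import math

C = 299_792_458.0        # speed of light in m/s (exact, by definition)
MPH_TO_MPS = 0.44704     # miles per hour to meters per second

v = 30_000 * MPH_TO_MPS  # 30,000 mph expressed in m/s
beta = v / C             # speed as a fraction of light speed
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)

# Fractional mass increase: (moving mass - rest mass) / rest mass = gamma - 1
print(gamma - 1)         # about 1e-9: one part per billion
```

At everyday speeds β is tiny, so γ − 1 ≈ β²/2, which is why this effect was far below the weighing precision of Newton’s day.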

Science Does Not Prove Theorems

Mathematical proofs are as close to absolute truth as humans can ever hope to achieve. In Euclidean geometry, the sum of the interior angles of any triangle equals 180 degrees. This theorem was first proven 23 centuries ago. It will be true forever and never needs to be proven again. Note, however, that this geometric theorem relates only to ideal triangles in an ideal planar geometry. There may not be even one real triangle in the entire universe whose angles sum to exactly 180 degrees.

By comparison, science is neither that clean nor that simple; absolute proof is impossible in science. Scientific theories cannot be proven mathematically; they can only be validated by experiment. But experiments have limited precision and scope. No measurement of the mass of oxygen atoms will ever produce the correct value to an infinite number of decimal digits, and no one is going to measure the mass of every oxygen atom in the universe. Experiments can falsify wrong theories, but they can only validate models to a specified level of precision within a specified scope of conditions.

How Best To Teach Science?

Since science is a progression of better ideas replacing less adequate ones, Feynman ponders the best way to teach physics. In V1p1-2 he says: “Should we teach the correct but unfamiliar law with its strange and difficult conceptual ideas, for example, the theory of relativity, four-dimensional spacetime, and so on? Or should we first teach the simple ‘constant-mass’ law, which is only approximate, but does not involve such difficult ideas? The first is more exciting, more wonderful, and more fun, but the second is easier to get at first, and is a first step to a real understanding of the second.”

You might think that it is a waste of time to learn a theory that has been superseded. But it really isn’t. Understanding each model that moved science forward reveals the progression of human thought and enhances your own ability to take the next step. The newer theories are usually harder to understand than the ones they replace; it is not an accident that scientists took much longer to discover them. As we explore superseded ideas, we will highlight their limitations and point to their resolutions.

Atoms

Einstein said: “The most incomprehensible thing about the universe is that it is comprehensible.” How can a 3-pound human brain perceive even a glimmer of so vast a cosmos? The answer lies in atoms and the particles that comprise them.

In the macro-world, which includes all we can see by eye or by telescope, everything is a unique entity. No two trees are exactly identical, no two stars are identical, and every person is a unique individual. Even every humble snowflake is in some tiny way distinct from every other. In the macro-world there are trillions of trillions of different things. (Much more than that actually, but that is enough to make my point.) If the story ended here, there would be no science, no engineering, and no advanced human society. Our brains could never cope with trillions of trillions of different things.

But, if we take those macroscopic things apart, we find they are all made of molecules. We then discover that our universe contains “only” millions of millions of different types of molecules. I say “only” because while millions of millions is a very large number, it is vastly smaller than trillions of trillions. By peeling off the outer layer of the cosmic onion, we find an underlying reality of vastly reduced diversity. Indeed, there are only some 10,000 different molecules produced by inorganic processes; it is living organisms that produce the millions of millions. The human body alone produces about one million different proteins.

Peeling off the next layer, we find that molecules are all made of atoms. Even counting all the varieties of different elements, there are only hundreds of different types of atoms. We find here an even deeper reality of even less diversity.

Peeling off that layer, we finally arrive, we think, at the bottom: elementary particles that are not made of anything smaller. We have discovered only 17 different types of elementary particles, plus 12 corresponding antiparticles. Atoms contain only 3 of those 17 types. That is almost no diversity at all. As we probe ever deeper, nature gets simpler and simpler, even as it becomes less and less like what we are used to. This is summarized below.

What science has discovered is that everything we see in the universe, even the most distant galaxies, is made from only three different parts, which combine in a dazzling array of diverse combinations. We can comprehend the universe, and science and engineering can succeed, because we can hope to understand just three tiny parts and how they fit together. In V1p1-2, Feynman says the atomic hypothesis — that everything is made of atoms — is the most important idea in science.

I pause for two disclaimers. Firstly, I follow a common practice in physics of using the word “molecule” to describe any bound group of multiple atoms. In other circumstances, it is important to distinguish how the atoms are bound, but those distinctions need not concern us at this stage. Secondly, by saying “everything we can see”, I finesse the issue of dark matter, which may comprise over 80% of the matter in our universe, but which we believe is not made of atoms. I explore all we know about dark matter in Our Universe 3. While dark matter is cosmologically important, it seems incapable of forming structures like atoms and molecules. Normal matter seems far more interesting. Planets, stars, trees, and we are made of normal matter that is made of atoms, which gives us much to explore.

Atoms are very small, from 1 to 4 angstroms across. (10 billion angstroms = 1 meter; 250 million angstroms is about 1 inch.) Very roughly, a person is 6 billion atoms tall, 1 billion atoms wide, and 1 billion atoms thick.
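The “person measured in atoms” estimate is simple to reproduce. Here is a quick sketch of the arithmetic; the atom spacing of 3 angstroms and the 1.8-meter height are my own assumed round numbers:

```python
ANGSTROM = 1e-10              # meters per angstrom

atom_size = 3 * ANGSTROM      # assumed typical atom diameter, ~3 angstroms
height = 1.8                  # assumed person height in meters, ~6 feet

atoms_tall = height / atom_size
print(f"{atoms_tall:.1e}")    # about 6e9: roughly 6 billion atoms tall
```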

Figure 1-1 Atoms Have Two Primary Parts

As sketched in Figure 1-1, atoms have two principal parts: a tiny nucleus that is surrounded by a much larger cloud of electrons. The electron cloud is typically 100,000 times larger than the nucleus. We believe electrons have no internal parts and are not made of anything smaller. But nuclei do have internal parts: protons and neutrons, as sketched in Figure 1-2. (Protons and neutrons also have internal parts called quarks, but we will get to that later.)

Figure 1-2 Inside a Nucleus

So altogether, atoms are composed of three different types of subatomic particles: protons, neutrons, and electrons. Protons and neutrons have about the same mass, both about 1836 times heavier than electrons. In the units of particle physics, electrons have an electric charge of –1, protons have an electric charge of +1, and neutrons have zero charge. The oppositely charged electrons and protons attract one another, which is what holds atoms together. An atom that has the same number of protons and electrons has total charge zero and is electrically neutral. Atoms with excess protons or excess electrons have a nonzero charge and are called ions. The nuclear particles, protons and neutrons, all attract one another with the strong force, which holds nuclei together.

The number of protons in an atom’s nucleus equals its element number, its number on the Periodic Table of Elements. Atoms that have the same number of protons but a different number of neutrons are said to be different isotopes of the same element. Below are some common elements with their numbers of protons, and abundance (by mass) in our universe and in people.

Notice that the two smallest and simplest elements, hydrogen and helium, account for 98% of all the atomic mass in the universe. Note also that the composition of people (and all living things) is very different from that of the universe — we are made of rare stuff. Finally, if you think a few tiny particles cannot make much difference, note that only three protons separate lead from gold, which is 12,000 times more expensive. Feynman adds that atoms are continuously in motion, that they attract one another when they are a short distance apart, but repel one another if squeezed too close. He then demonstrates the power of the atomic hypothesis.

Atoms in Liquid Phase

Imagine looking at a drop of water magnified one billion times.

Figure 1-3 Water Molecules

In Figure 1-3, the white circles represent hydrogen atoms and the larger black circles represent oxygen atoms, with the two combining to form water molecules. This image is merely schematic; it is intended to provide insight, but not realism. The image is two-dimensional, whereas the structure of water is of course three-dimensional. Also, the atoms are shown as sharp-edged black and white circles. Real atoms are none of those things. Despite these imperfections, such illustrations help us better conceive nature at the atomic scale. Physicists frequently employ such imperfect devices to spark ideas. Yet, we must always remember that the words, pictures, and concepts that work so well on a human scale cannot accurately describe nature on the atomic scale, or on the cosmic scale. We must build our understanding of those alien realms bit by bit.

Note that the atoms form triplets, water molecules, each with two hydrogens and one oxygen, hence the familiar moniker H₂O. Isolated hydrogen atoms each have one proton and one electron; oxygen normally has eight protons, eight electrons, and eight neutrons. In a water molecule, the eight oxygen protons pull more forcefully on the hydrogen electrons than do the two hydrogen protons. The electrons respond by moving toward the oxygen nucleus. This gives the oxygen atom a somewhat negative electric charge, while the hydrogen atoms become somewhat positively charged.

Water and other atomic structures have a natural size. Molecules attract one another through electric forces. These attractions prevent matter from disintegrating into piles of molecules. In the case of water, the negatively charged side of each molecule (the oxygen side) attracts the positively charged side of neighboring molecules (the hydrogen side). But atoms also resist being squeezed together too tightly. This means the density (mass divided by volume) of most objects is nearly constant, and is determined by its atomic composition.

Densities generally change somewhat when temperature changes. Let’s see why. Molecules are constantly moving and jostling one another. In solids and liquids, molecules are so tightly packed that none can move far before hitting a neighbor. Collisions are frequent, and the molecules’ directions of motion are random and constantly changing. We say their motion is chaotic. The higher the temperature, the greater is their energy of motion, which is called heat. The molecules in a large object will not all have exactly the same energy of motion.

For example, to make a cup of tea, we need very hot water. Heating water adds energy and increases its temperature. The molecules move faster, bang into one another more often and more forcefully, and push one another farther apart. The molecules then occupy a larger volume, so their density decreases. Eventually, water molecules are energetic enough to break their intermolecular bonds. Higher energy molecules can then escape from the group. This is how steam is made — with sufficient heat, liquid water turns into water vapor. This is an example of a phase transition — the transition from a liquid phase to a gas phase.

Gas Phase

V1p1-3 Mastering steam was pivotal to the Industrial Revolution, and the need to master gas dynamics drove important developments in science, including thermodynamics and statistical mechanics. Let’s see what we can learn about gases from the atomic hypothesis. Figure 1-4 shows a gas-containing vessel, with a cylinder and a piston. We show just a few of the billions of trillions of gas molecules typically in such a cylinder. Because the molecules are constantly moving, they bang into and push against the cylinder and the piston, applying a force.

Figure 1-4 Water in Gas Phase

It is reasonable to assume that the same force is applied to every square centimeter of the surfaces exposed to the gas. The force per unit area is called pressure. To keep the piston stationary, to stop it from being pushed to the left, we must apply an opposing force (toward the right) equal to the pressure times the area of the piston’s face. The underlying equation is:

force = pressure × area

If we heat a gas, its molecules move faster, hit the walls harder and more frequently, and the pressure increases. If we cool a gas, its pressure decreases. If we double the number of gas molecules in the vessel, and keep the temperature constant, the number of wall collisions doubles but the impact of each collision remains the same. This means the pressure doubles, and so does the force we must apply to keep the piston stationary.

Now let’s try something different. Let’s push the piston to the right, compressing the gas. As molecules hit the moving piston, their speed increases. We will be able to prove this after studying Newtonian mechanics, but most of us already have a sense that this is true. A good whack by a baseball batter (or cricket batsman) greatly increases a ball’s speed. They can hit a ball much farther than any pitcher (or bowler) can throw it. Feynman points out the special case of a molecule that happens to be stationary before being hit by the moving piston; its speed increases as it starts moving. So, moving the piston inward increases the molecules’ average speed and the temperature of the gas. Conversely, moving the piston outward to the left decreases the molecules’ average speed and the temperature of the gas. (Think of a bunt in baseball.) When we study thermodynamics, we will learn that all this is correct, provided the piston moves slowly enough for the gas to equilibrate along the way. Rapid expansion or compression is more complex.
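The doubling argument above can be illustrated numerically with the ideal gas law, P = NkT/V, which quantifies the molecular picture just described. This sketch and its example numbers are mine, not from the text:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure(n_molecules, temperature_k, volume_m3):
    """Ideal-gas pressure: P = N * k * T / V."""
    return n_molecules * K_B * temperature_k / volume_m3

def piston_force(p, area_m2):
    """Force needed to hold the piston stationary: F = P * A."""
    return p * area_m2

# Roughly 10^22 molecules in a 1-liter cylinder at room temperature
p1 = pressure(1e22, 300.0, 1e-3)
f1 = piston_force(p1, 0.01)   # piston face of 100 cm^2 = 0.01 m^2

# Double the molecules at the same temperature: pressure (and force) double
p2 = pressure(2e22, 300.0, 1e-3)
print(p2 / p1)                # 2.0
```

Doubling N while holding T and V fixed doubles P exactly, matching the collision-counting argument in the text.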

Solid Phase

V1p1-4 Now, let’s go back to our drop of liquid water, and reduce its temperature. The average speed of its molecules decreases as the temperature drops. Eventually, when the temperature is low enough, the intermolecular attraction overcomes the atoms’ jostling and confines them to fixed positions. Water then becomes ice, making a phase transition from liquid to solid. Whereas liquids are loose, chaotically changing conglomerations of molecules, solids are crystalline arrays, with fixed atomic patterns that repeat countless millions of times. Since each atom is held in a fixed position relative to its neighbors, solids are rigid structures. One can move an entire icicle simply by pulling on one end.

Water ice can form in many distinct molecular patterns, depending on temperature and pressure. But, in Earth’s biosphere almost all ice is of a single type called ice Ih. Figure 1-5 illustrates two key characteristics of common ice: low density, and hexagonal symmetry.

Figure 1-5 Water Ice

In this form, ice has an open structure with large voids between atoms. There are fewer atoms per cubic centimeter in ice than in liquid water. Hence, ice floats — think of icebergs and ice cubes in tea. This is so familiar that we take this remarkable property of water for granted. Almost every other material increases in density as it transitions from liquid to solid. As temperature decreases, atomic motion slows, atoms push one another less forcefully, and their intermolecular attraction squeezes them closer together. The low density, open structure of ice is unique in nature.

If water acted like other materials, ice would sink, with potentially devastating consequences. Floating ice in lakes and rivers shields the underlying water from winter’s cold. If ice sank, the exposed surface water would continue freezing and sinking, and entire lakes and rivers would freeze solid, killing much of the life within.

The other key property of ice is hexagonal symmetry. This is why snowflakes, which are ice crystals, are six-sided. Symmetry is an important consideration in physics that has many stunning consequences. If you imagine rotating Figure 1-5 by 60 degrees, you will notice that the rotated image is equivalent to the original. All the properties of the original structure must be properties of the rotated structure. Hence these properties must repeat every 60 degrees of rotation.

Lastly, while atomic motions in solids are less than in liquids or gases, the atoms of a solid still vibrate about fixed locations. If a solid’s temperature increases, its atomic vibrations increase. If the temperature is high enough, the atoms will knock themselves out of position and the solid will melt, making a phase transition from solid to liquid. If a solid’s temperature decreases, its atomic vibrations decrease. At the lowest possible temperature, absolute zero (0 Kelvin, –273.15°C, or –459.67°F), atomic motion reaches an absolute minimum, but does not stop entirely.
The Uncertainty Principle of quantum mechanics explains why atoms cannot be completely still, as we will discuss later.
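The three values quoted for absolute zero are related by simple linear conversions between temperature scales. A quick check (my own snippet, not from the text):

```python
def kelvin_to_celsius(t_k):
    """Celsius degrees are the same size as Kelvin, offset by 273.15."""
    return t_k - 273.15

def celsius_to_fahrenheit(t_c):
    """Fahrenheit degrees are 5/9 the size of Celsius degrees, offset by 32."""
    return t_c * 9.0 / 5.0 + 32.0

absolute_zero_c = kelvin_to_celsius(0.0)                  # -273.15 C
absolute_zero_f = celsius_to_fahrenheit(absolute_zero_c)  # about -459.67 F
print(absolute_zero_c, absolute_zero_f)
```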

Atomic Dynamic Processes

V1p1-5 In addition to describing the static state of gases, liquids, and solids, the atomic hypothesis also describes dynamic processes. Consider the surface region between liquid water and air, as illustrated in Figure 1-6.

Figure 1-6 Water Below Air

In the air above the liquid, we see several types of molecules: a water molecule H₂O; an oxygen molecule O₂, represented by two connected black circles; and nitrogen molecules N₂, represented by two connected gray circles. On average, water vapor comprises only 1/4 of 1% of Earth’s atmosphere, but its abundance varies greatly with location and over time. Nitrogen and oxygen account for 78% and 21% respectively (by volume) of our atmosphere.

Even at temperatures below water’s boiling point, some water molecules will be energetic enough to escape the liquid phase and enter the air as water vapor — they evaporate. At the same time, some water molecules in the air will hit the liquid surface and be captured — this is condensation. In a stable environment, the system eventually reaches equilibrium, a state in which equal numbers of water molecules exit and enter the liquid each second. The equilibrium level of water vapor increases as temperature rises and decreases as temperature drops.

Other molecules are also mobile. Air molecules are occasionally absorbed into the liquid; Figure 1-6 shows two absorbed nitrogen molecules. Some absorbed air molecules will, through chance collisions, gain enough energy to evaporate. Again, the absorption rate and evaporation rate equilibrate if conditions do not change — at a certain concentration of absorbed nitrogen molecules, equal numbers of molecules will exit and enter the liquid per second. The concentrations of various molecules at equilibrium are strongly dependent on temperature and gas pressure.

An example of unchanging conditions is a sealed vessel containing water and air, held at constant temperature. Such a system will eventually reach equilibrium, with the proportions of water vapor and absorbed air molecules being determined by molecule type, temperature, and pressure. What if the conditions suddenly change? Imagine opening the gas vessel and replacing moist air (air containing water vapor) with dry air (air with no water vapor). The condensation rate immediately drops to zero, since there are no water vapor molecules to condense. But, the evaporation rate remains the same.
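The approach to equilibrium described above can be sketched with a toy rate model. The rates here are my own illustrative choices, not physical values: evaporation adds molecules to the vapor at a roughly constant rate, while condensation removes a fixed fraction of the vapor each time step, so the vapor population settles where the two rates balance.

```python
# Toy rate model of evaporation/condensation equilibrium (illustrative rates).
# Evaporation adds molecules at a constant rate; condensation removes a fixed
# fraction of the current vapor. The population settles at evap_rate/cond_frac.
evap_rate = 100      # molecules evaporating per time step (constant)
cond_frac = 0.01     # fraction of vapor molecules condensing per time step

vapor = 0            # start with perfectly dry air
for _ in range(2000):
    vapor += evap_rate - cond_frac * vapor

print(round(vapor))  # settles at evap_rate / cond_frac = 10000
```

At equilibrium the two rates are equal: cond_frac × vapor = evap_rate, which is exactly the “equal numbers exit and enter per second” condition in the text.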

This means the air in the vessel must be repopulated with water vapor. More liquid evaporates, the amount of liquid decreases, and the liquid cools. This is because the water molecules that do evaporate are “hotter”, with more energy and more heat than average. As “hotter” molecules leave, the remaining liquid’s average temperature drops. This is why we perspire on hot days: evaporating sweat cools our skin. And this is why a fan, continually blowing away moist air, cools us even faster. This is also why blowing over a bowl of hot soup expedites cooling.

A solid dissolving in liquid is another physical reaction illuminated by the atomic hypothesis. For example, salt is a solid crystal composed of equal parts sodium (Na) and chlorine (Cl). Chlorine has very high electronegativity: it forcefully attracts an extra electron, making it negatively charged. Sodium easily loses one electron, making it positively charged. In salt, the oppositely charged ions attract one another, creating a solid, three-dimensional array of alternating Na and Cl. (Think of a three-dimensional chess board with alternating colored squares.) Figure 1-7 illustrates salt dissolving in liquid water.

Figure 1-7 Salt Dissolving in Water

Note that the negatively charged chlorine ions are attracted to the positive side of water molecules, and the positively charged sodium ions are attracted to the negative side of water molecules. Chlorine and sodium ions separate from a salt crystal and dissolve into water at some rate. And those ions will exit the liquid and reattach themselves to the salt crystal at some other rate. As before, if conditions are stable, the system will reach equilibrium when just as many ions are dissolving as are recrystallizing each second. See how much we can understand with the atomic hypothesis.

Chapter 1 Review: Key Ideas

1. Science strives to understand nature through reason, imagination, and quantitative observation and testing. Ours is a never-ending quest for ever-closer approximations to nature’s true laws.

2. Experiment is the only judge of scientific truth.

3. Macroscopically, on a human scale and larger, everything we see is a unique entity: the bewildering diversity of the cosmos seems incomprehensible. Fortunately, all these different entities are comprised of a small number of different atoms, which are in turn comprised of only three types of particles: electrons, protons, and neutrons. That uniformity makes science, technology, and an advanced society possible.

Chapter 2
Basic Physics

Physicists’ Game Plan

In V1p2-1, Feynman compares our quest to understand nature to observing a chess match between gods. As all chess players know, it is far easier to understand the rules of chess than to select the best move in tournament play. Nature’s “game” is much more complex than chess. Perhaps we will never understand precisely why everything happens, but a good first step is to try to understand the rules of nature’s game: nature’s fundamental laws.

Physics is the quest to discover all the pieces with which nature plays and all the rules of nature’s game. We are getting ever closer, but we certainly are not there yet.

Unification is a powerful means of advancing our understanding. (Feynman sometimes called it amalgamation, but unification is the standard terminology today.) At first, natural phenomena seem to fall into many different classes: mechanics, heat, electricity, magnetism, light, gravity, chemistry, and many more. The goal of unification is to discover the previously unseen commonality among the different classes. We seek the common phenomenon of which different classes are merely different aspects. We strive to discover the octopus where we previously saw only disparate parts: beak, eyes, arms, and suction cups.

An example is the union of heat and mechanics. Statistical mechanics reveals that heat is the energy of motion, the kinetic energy of atoms and molecules, which for any specific substance is simply proportional to temperature. Another grand advance is the unification of electricity, magnetism, and light, which we now understand to be different aspects of the electromagnetic field.

Einstein excelled at unification. He united: particles and waves; space and time; mass, energy, and momentum; and spacetime with mass-energy. Grand unifications have punctuated the history of physics, greatly simplifying our models and broadening their scope.
But we also keep discovering new phenomena that create new classes as yet not unified. The ultimate goal, unifying all natural laws into one comprehensive model, seems as elusive as ever. What follows is an overview of our current state of knowledge, and how we got here. Feynman feels you deserve a roadmap showing where this two-year course is headed. You DO NOT need to immediately learn all this material.

All these topics will be thoroughly and carefully explored in the Feynman Simplified series, but at a humane pace.

Pre-Modern Physics

By “pre-modern” or “classical”, we mean the state of physics before the theories of special relativity and quantum mechanics revolutionized science. The pre-modern worldview was that everything fell into one of these separate and entirely distinct categories:

Material objects
Waves
Forces
Space
Time

Each material body was made of atoms, had a fixed mass m greater than zero, and a fixed electric charge q, which could be positive, negative, or zero. Objects with mass had inertia: an object’s motion did not change until acted upon by a force.

Waves were not things in and of themselves, but were rather the organized motion of something else. Several types were known: sound, which could travel through gases, liquids, or solids; ocean waves of moving water; and light, which was thought to move through the long-sought, but undiscovered, luminiferous ether.

Forces were the means through which material objects acted upon one another. Two forces were known: gravity, which seemed simple; and electromagnetism, which seemed much more complex.

The concepts of space and time were Newtonian. Space was a three-dimensional, eternally unchanging reference grid with a Euclidean geometry. Time was universal and steadily advancing; it flowed at the same rate everywhere and always.

In V1p2-3, Feynman emphasizes that no one knew why objects had inertia, or why gravity existed. In fact, physics still does not, and may never, answer what truly are mass, energy, charge, space, time, and other fundamental entities. What we can hope to answer is how such entities relate to one another. Some believe such relationships are the only true reality.

Gravity was a force between all massive bodies; gravity was always attractive and its strength decreased with the square of the distance to the gravitational source — it had an inverse-square law.
The electric force, one aspect of electromagnetism, also was understood to decrease with the square of the bodies’ separation, but it was sometimes attractive, sometimes repulsive, and sometimes nonexistent. Bodies with positive electric charge attracted bodies with negative electric charge, and vice versa. Positive bodies repelled other positive bodies. Negative bodies repelled other negative bodies. And bodies with zero charge did not exert any electric force nor did they respond to the electric charges of other bodies. (We assume here that the size of the bodies is much smaller than their separation.) This is summarized in the following table:

              B positive    B negative    B neutral
A positive     repel         attract      no force
A negative     attract       repel        no force
A neutral      no force      no force     no force

We can encapsulate all this mathematically: the electric force F between object A with charge Q and object B with charge q is:

F = –k Q q / r²

Here, k is a constant, and r is the separation between A and B. The force is attractive if F is greater than zero. To emphasize how much stronger the electric force is than gravity, Feynman says that if all electric forces were attractive, two grains of sand held 30 meters apart would attract one another with a force of 3 million tons, while the gravitational attraction would be immeasurably small.
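To make the sign convention concrete, here is a small numeric sketch of the formula above, F = –k Q q / r², using the book’s convention that F > 0 means attraction. The value of k is Coulomb’s constant in SI units; the example charges and separation are my own illustrative choices.

```python
# Sketch of the electric-force formula F = -k*Q*q / r**2, with the text's
# sign convention: F > 0 means attraction, F < 0 means repulsion.
k = 8.99e9   # Coulomb's constant, N*m^2/C^2

def electric_force(Q, q, r):
    """Force between point charges Q and q (coulombs) separated by r (meters)."""
    return -k * Q * q / r**2

F_opposite = electric_force(+1e-6, -1e-6, 0.1)   # opposite charges, 10 cm apart
F_like     = electric_force(+1e-6, +1e-6, 0.1)   # like charges, 10 cm apart
print(F_opposite > 0, F_like < 0)   # attraction, repulsion: True True
```

With opposite signs, Q q is negative, so –k Q q / r² comes out positive, matching the rule that opposite charges attract.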

Atoms & Electromagnetism

Our understanding of the structure of atoms and the role of electricity evolved slowly. Scientists began seriously studying electricity 300 years ago. Electrons were first observed in 1869, although it was many decades before physicists realized that they were essential components of all atoms.

In 1911, Ernest Rutherford showed that the bulk of an atom’s mass was in its tiny central core. In 1913, Niels Bohr proposed the first version of our modern model of atoms, which was confirmed by experiments in 1914. This is the model we discussed in Chapter 1: a tiny, positively charged nucleus, with almost all the atom’s mass, surrounded by a much larger cloud of negatively charged electrons in specific orbits. We will learn why the orbits are specific when we discuss quantum mechanics.

Some atoms are electrically neutral. From far away, they seem to be chargeless, and neither attract nor repel one another (ignoring the insignificant effects of gravity). But up close, the positive and negative parts of an atom can be distinguished. Atom A can orient itself so that its positive parts are closer to atom B’s negative parts than to B’s positive parts, and vice versa. Even though their net charges are zero, atoms and molecules can strongly attract one another through the electric force. Indeed, this is what holds our bodies together. Every one of our tiny pieces is held in place by strong electric forces, with exquisite precision.

Sometimes, we rub off a few electrons by walking across a carpet. Our unbalanced charge is promptly neutralized with startling effect when we next grab a doorknob.

The idea that electron A directly repels electron B works for static situations, when neither charge moves, but that simple model fails for moving charges. If A oscillates, such as moving up and down, one would expect the force on B to change correspondingly. However, physicists discovered two surprises.

Firstly, the force on B does not change instantaneously to match the motion of A. The change in B’s force comes after the change in A’s motion. That delay increases as the separation between A and B increases. Secondly, the force on B does not decrease with the square of the separation; the force remains much stronger at much greater separations than the inverse-square law predicts.

The solution to this new twist requires a new concept: the electromagnetic field. The electromagnetic force now has more rules:

Electric charges create electric fields.
Moving charges also create magnetic fields.
Changing magnetic fields create electric fields.
Changing electric fields create magnetic fields.
Fields cause forces on other electric charges.
For the static case, we get the same result as before — electron A creates an electric field that acts on electron B and repels it. So why bother with this new, more complex field notion? The answer is that only fields properly explain how nature works when charges move.

In V1p2-4, Feynman makes this analogy: imagine two corks floating on a pond. If we jiggle cork A, moving it rapidly up and down, we disturb the water near A. Those disturbed water molecules then disturb adjacent water molecules, and so on. The disturbance propagates outward across the pond in waves that eventually make cork B bob up and down.

The inverse-square law limits the effect cork A can have on cork B directly. But through waves, A’s effects extend much farther. Similarly, an oscillating electric charge can affect other charges at surprisingly great distances through the medium of the electromagnetic field. The electromagnetic field corresponds to the water in the cork analogy.

Light

V1p2-5 Electromagnetic field waves are called electromagnetic radiation or simply light. Some people reserve the word “light” for the tiny range of electromagnetic radiation that is visible to human eyes. Since all electromagnetic radiation is intrinsically identical, I call all of it “light” to avoid the misconception that the light we see is somehow special and to avoid constantly writing “electromagnetic radiation.” (Of course, visible light is special to us, but not to nature.)

Early on, scientists assigned labels to different “types” of light, thinking that they were distinct phenomena. However the only difference is their frequency, how many times per second the field “waves” up and down. Starting with the highest frequencies and working down to the lowest, the labels are: gamma rays, x-rays, ultraviolet, visible, infrared, microwaves, TV and radio waves.

Frequency is a continuous variable; it can have any value from zero to infinity cycles per second. Hence there is no sharp cutoff between one “type” of light and another. Frequency is also not an intrinsic property of light; it depends on the observer as well as the source. The star Betelgeuse looks red to us on Earth, but it could look blue to the crew of a rocket ship speeding toward it.

Light & Relativity

Albert Einstein revolutionized physics with five spectacular papers published in 1905, which is appropriately called his “Miracle Year.” He demystified the photoelectric effect by postulating that light was both a particle and a wave, thereby unifying two previously seemingly distinct and unrelated phenomena. Louis de Broglie expanded this notion into particle-wave duality, the principle that everything has both particle and wave properties, which became the foundation of quantum mechanics. The particle aspect of light is called the photon.

Einstein believed that nature was harmonious and beautiful. He was disturbed by an “ugly” inconsistency between James Clerk Maxwell’s equations of electromagnetism and the principle of relativity. Relativity was first espoused by Galileo Galilei, and later incorporated into Newton’s laws of motion. Einstein was sure nature was not ugly. In his theory of special relativity, he hypothesized that light had unique properties never before imagined: it moved like a wave but without a medium (no luminiferous ether), and it always traveled at the same speed, c, relative to any observer.

The worldview before Einstein was that measurements were universal but some laws of nature were relative. Scientists thought every competent observer would measure the same value for the length of a meter stick, the duration of an hour, and the mass of a kilogram. They would, however, need to adjust Maxwell’s equations for their own motion through the (universally assumed but undetected) luminiferous ether.

Einstein flipped that around — he said the laws of nature were universal, the same for any observer, but certain measurements were relative. Observers moving at different speeds would measure different values of distance, time, and mass (and quantities that depend on those, such as energy).

Time and space can no longer be considered distinct, absolute, and universal, as Newton believed. Time and space are definable only relative to observers. And both are actually two aspects of one unified, four-dimensional spacetime. Special relativity superseded Newton’s laws of motion.

In 1915, Einstein presented his theory of general relativity, which superseded Newton’s law of gravity. Einstein said gravity was not a force between distant objects, but rather the effect of curved spacetime. Absent other forces, objects move through four-dimensional spacetime along the straightest and shortest possible paths: geodesics. Einstein said if we could see the cosmos in all four dimensions, the motion of Earth around the Sun would seem so natural that no one would ever have invented the word “gravity.”

For Newton, the drama of the cosmos was like a Shakespearean play, acted out by matter and forces on the unchanging stage of space and time. For Einstein, spacetime is dynamic and plays a central role in the cosmic drama, like a stage of Cirque du Soleil.

Quantum Mechanics

We now know that, in the micro-world of atoms and subatomic particles, the old notions of inertia and forces do not explain how nature works. In V1p2-6, Feynman says: “…things on a small scale behave nothing like things on a large scale. That is what makes physics difficult—and very interesting. It is hard because [it] is so ‘unnatural’; we have no direct experience with it…it takes a lot of imagination.”

Perhaps the most famous idea in quantum mechanics is Werner Heisenberg’s Uncertainty Principle. As we will learn later, it derives directly from particle-wave duality. Since every particle has wave properties, and in particular a wavelength, its exact location and its exact momentum cannot be known simultaneously. (In Newtonian physics, momentum is mass multiplied by velocity.)

This uncertainty is not because humans lack the necessary wisdom or technology, but because nature is somewhat fuzzy. Nature does not exactly determine those quantities simultaneously. But the fuzziness is not capricious; nature is consistently fuzzy in a definitive manner and to a precise degree.

Mathematically, the Uncertainty Principle says that along any axis x, the uncertainty in a particle’s position, Δx, multiplied by the uncertainty in its momentum, Δpₓ, must be at least h/4π, where h is Planck’s constant. Note that Δx is a single symbol and is not Δ times x; similarly Δpₓ is a single symbol.

Niels Bohr used quantum mechanics to explain why atoms do not collapse — why negative electrons are not simply pulled into the positive nucleus. If electrons were confined within a tiny nucleus, the Uncertainty Principle shows that their momenta would far exceed the electric force’s ability to contain them. Quantum mechanics determines how far electrons must be from a nucleus, thus setting the size of atoms. Protons and neutrons are confined within the much smaller nucleus due to their much greater masses and the much greater strength of the strong nuclear force, which does not affect electrons.

The Uncertainty Principle also explains why atomic motions cannot completely stop, even at absolute zero temperature: if they did, both Δx and Δpₓ would be zero, violating the rule that their product must be ≥ h/4π.
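The confinement argument can be sketched numerically. Taking the minimum allowed momentum spread Δp = h/(4π·Δx), we can estimate the kinetic energy of an electron confined to a region of size Δx. The region sizes below are my own rough choices, and the non-relativistic formula overstates the energy at nuclear scales (where the electron would be highly relativistic), but the comparison of scales is the point.

```python
# Rough estimate of the minimum kinetic energy of an electron confined to a
# region of size dx, using the minimum momentum spread dp = h/(4*pi*dx).
# Non-relativistic KE = p^2/(2m); at nuclear scales this overstates the true
# energy, but either way the result dwarfs atomic binding energies.
import math

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # joules per electron-volt

def min_kinetic_energy_eV(dx):
    dp = h / (4 * math.pi * dx)       # minimum delta-p from dx*dp >= h/(4*pi)
    return dp**2 / (2 * m_e) / eV     # kinetic energy estimate, in eV

# Electron squeezed into a nucleus-sized region (~1 femtometer):
print(min_kinetic_energy_eV(1e-15))   # billions of eV
# Electron in an atom-sized region (~0.5 angstrom):
print(min_kinetic_energy_eV(5e-11))   # a few eV, the energy scale of chemistry
```

The electric attraction of a nucleus can bind electrons only at eV-to-keV energies, so an electron forced into a femtometer-sized box would simply escape; atoms must be roughly angstrom-sized.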

Quantum mechanics radically changed the philosophy of science. Because of the Uncertainty Principle, it is impossible to predict exactly what will happen in any natural phenomenon. For example, consider a collection of radioactive carbon-14 atoms. We can know the exact rate at which carbon-14 atoms decay to nitrogen-14, but we can never predict exactly when any specific atom will decay.

This is somewhat analogous to flipping an ideal coin: we may know that, on average, it comes up heads half the time, but we cannot know the outcome of the next flip. Like most analogies, this one is imperfect. With a coin, we might believe that by precisely measuring the flipping motion we could predict the outcome. With radioactive atoms, quantum mechanics tells us that exact predictions are impossible regardless of how much is known.

Uncertainty shocked physicists to their core. Newton and Einstein both believed in a clockwork universe, in which everything evolved mechanistically according to immutable laws toward a precisely predetermined future. Einstein believed the future, the present, and the past all exist right now on an equal basis; we just have not yet reached the future.

Some, including Einstein, thought the uncertainty of quantum mechanics might be only apparent, not fundamental — that internal hidden variables might exist, which determine all outcomes with no uncertainty whatsoever. It is only our inability to perceive these hidden variables that leads us to false conclusions. The hidden variable debate raged for decades, but was finally and definitively resolved. Quantum mechanics is the correct model of nature; no classical deterministic theory can possibly explain the observations. Uncertainty is a fundamental property of nature.

Uncertainty may be uncomfortable, but scientists must accept nature as it is. We cannot pretend nature is the way we wish it to be.
In V1p2-7, Feynman says: “We just have to take what we see, and then formulate all the rest of our ideas in terms of our actual experience.”

With Julian Schwinger and Sin-Itiro Tomonaga, Richard Feynman was awarded the 1965 Nobel Prize in Physics for developing the theory of Quantum Electrodynamics (QED). QED is our most advanced model of electromagnetism, incorporating all the discoveries of special relativity and quantum mechanics. QED provides a completely new concept of the electric force, a concept that became the fundamental principle underlying modern models of all other forces. QED says that the force between two charged particles is caused by their exchange of photons, as sketched below:

Figure 2-1 Electrons Exchanging a Photon

Figure 2-1 is a schematic representation of two electrons repelling one another through the electromagnetic force. The time axis is vertical: time increases from the bottom to the top. The horizontal axis represents the distance between the electrons. Neither axis is intended to be quantitative; only the sequence of events is important. As the electrons (e⁻) approach one another, the left electron emits a photon (a particle of light) that is subsequently absorbed by the right electron. QED posits that the photon transfers energy and momentum from one particle to the other, which results, in this case, in a repulsive force.

Not surprisingly, Figure 2-1 is called a Feynman diagram. In V1p2-7, Feynman describes QED: “This fundamental theory of the interaction of light and matter, or electric field and charge, is our greatest success so far in physics. In this one theory we have the basic rules for all ordinary phenomena except for gravitation and nuclear processes. For example, out of [QED] come all known electrical, mechanical, and chemical laws: the laws for the collision of billiard balls, the motions of wires in magnetic fields, the specific heat of carbon monoxide, the color of neon signs, the density of salt, and the reactions of hydrogen and oxygen to make water are all consequences of this one law.” After thousands of exhaustive tests, no prediction of QED has ever been found to be wrong. The most precise experimental tests confirm QED to 12 decimal digits — one part in a trillion.

Nuclei & Particles

V1p2-8 Every atomic nucleus contains at least one proton, providing the positive charge required to attract electrons. The simplest nucleus (that of normal hydrogen) has nothing else. The nuclei of all heavier elements contain multiple protons and one or more neutrons. If electromagnetism were the only force, hydrogen would be the only element in the universe; all multiple-proton nuclei would immediately disintegrate since protons repel one another.

What holds nuclei together is the strong force. True to its name, it is the strongest of nature’s forces. Consequently, nuclear reactions occur on an energy scale that dwarfs chemical reactions mediated by the electromagnetic force. Per pound of fuel, nuclear fusion, the merging of four hydrogen nuclei to produce one helium nucleus, releases 40 million times more energy than burning coal. Nuclear fusion is what powers the stars. Fusion releases this tremendous amount of energy without producing any pollution or greenhouse gases. Mastering nuclear fusion is the holy grail of energy generation; it has long been, and still seems to be, a distant goal.

Rounding out the list of nature’s forces is the weak force. Comparing force strengths is a bit like comparing apples and oranges, because they depend differently on particle type and separation. But roughly speaking, the strong force is 100 times stronger than the electric force, which is 100 times stronger than the weak force.

The weak force is unique in that it allows particles to change identities. Through the weak force, neutrons can transform into protons, and protons can transform into neutrons (within some nuclei) thusly:

n⁰ becomes p⁺ + e⁻ + anti-electron-neutrino
p⁺ becomes n⁰ + e⁺ + electron-neutrino

These reactions allow one element to transform into another, which enables radioactive decays and nuclear fusion. Without the weak force, the only elements in our universe would be hydrogen and helium.

Much has been discovered since Feynman’s Lectures. Following the approach forged by QED, the strong and weak forces are now understood as the exchange of force-carrying particles between particles of matter, similar to the process in Figure 2-1. We also know that some particles are made of even smaller parts, while others are elementary, not made of anything else.

Elementary particles are divided into two classes:

1. fermions, the particles of matter
2. bosons, the force exchange particles

Fermions are named in honor of my father’s mentor Enrico Fermi. Bosons are named in honor of Satyendra Nath Bose. Fermions are further divided into two groups: quarks and leptons. Each fermion has a corresponding but distinct antiparticle, whereas most bosons are their own antiparticles (the only exception being the W⁺ and W⁻, which are each other’s antiparticles).

Leptons do not participate in the strong force, and therefore do not combine to form composite particles. The most notable lepton is the electron.

All the particles that participate in the strong force are comprised of quarks. The strong force arises from quarks exchanging bosons called gluons. The strong force binds quarks together so forcefully that, even with the highest energies scientists can produce, we cannot pull apart quark pairs or triplets. As a result, isolated quarks have never been observed.

Quarks come in six types: up, down, charm, strange, top, and bottom, with the corresponding symbols u, d, c, s, t, and b. The names are completely fanciful; none is more charming or less strange than the others. Two up quarks and a down quark combine to make a proton, while two downs and an up make a neutron. Physicists have made hundreds of other combinations.

The u, c, and t quarks have electric charge +2/3, while the d, s, and b quarks have charge –1/3. Whenever quarks combine to make a composite particle, the result always has integral charge; no fractional charges have ever been detected, despite heroic efforts.

The weak force arises from the exchange of Z and W bosons, which are both very massive, 97 and 86 times the proton’s mass respectively. Exchanging such massive particles is much more difficult than exchanging a massless photon, which is why the weak force is so feeble.

The elementary particles, those believed not to be made of anything smaller, are grouped to form the following array.
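The charge arithmetic above is easy to check: quark charges come in thirds, yet the observed combinations always total an integral charge (in units of the proton charge). A short sketch using exact fractions:

```python
# Check of the quark-charge arithmetic: u-type quarks carry +2/3, d-type
# quarks carry -1/3 (in proton-charge units). Exact fractions avoid
# floating-point round-off.
from fractions import Fraction

CHARGE = {
    'u': Fraction(2, 3), 'c': Fraction(2, 3), 't': Fraction(2, 3),
    'd': Fraction(-1, 3), 's': Fraction(-1, 3), 'b': Fraction(-1, 3),
}

def total_charge(quarks):
    """Total electric charge of a quark combination, e.g. 'uud'."""
    return sum(CHARGE[q] for q in quarks)

print(total_charge('uud'))   # proton:  2/3 + 2/3 - 1/3 = 1
print(total_charge('udd'))   # neutron: 2/3 - 1/3 - 1/3 = 0
```

Any triplet of quarks, or any quark-antiquark pair, works out to a whole number the same way, consistent with the statement that no fractional charges have ever been detected.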

Figure 2-2 Elementary Particles without Higgs

Each box of Figure 2-2 provides data for one particle, starting with a large one-letter symbol above the particle’s name. The three numbers on the left side of each box are: the particle’s mass, electric charge, and spin.

Particle masses are measured in eV, which stands for electron-Volt, the energy an electron gains traversing a one-volt potential. MeV stands for million eV, and GeV for billion eV. A proton’s mass is 0.938 GeV. Electric charge is given in multiples of the proton charge. Spin is a particle’s intrinsic angular momentum (somewhat like a spinning top).

The mass, charge and spin numbers in the figure may be hard to read. All spins and charges are discussed below. I will give you the mass numbers here. From left to right in each row, the particles and their masses are:

row 1: u = 2.4 MeV; c = 1.27 GeV; t = 171.2 GeV; and γ = 0
row 2: d = 4.8 MeV; s = 104 MeV; b = 4.2 GeV; and g = 0
row 3: for neutrino masses see below; and Z = 91.2 GeV
row 4: e = 0.511 MeV; µ = 105.7 MeV; τ = 1.777 GeV; and W± = 80.4 GeV

Figure 2-2 is a pretty, pleasingly symmetric, 4-by-4 array; but that is a bit misleading. The left three columns are in fact disjoint from the right column. The right hand column contains four bosons: the photon, gluon, Z, and W. However, the boson on each row is not uniquely associated with the other particles on the same row. This is quite different from the other three columns, where all particles on the same row are intimately connected.

The six quarks occupy the upper two rows of the left three columns. The top row contains quarks with electric charge +2/3, while the second row contains quarks with charge –1/3.

The six leptons occupy the lower two rows of the left three columns. The bottom row contains leptons with charge –1: the electron, muon, and tau. The next row up contains the neutrinos, leptons with zero charge: the electron neutrino, muon neutrino, and tau neutrino.

The most essential difference between bosons and fermions is their spin. The four bosons all have spin 1, while the quarks and leptons all have spin 1/2.
Bosons are force exchange particles: photons for electromagnetism; gluons for the strong force; and Z and W for the weak force.

Setting the neutrinos aside for the moment, the three generations of fermions are distinguished by their masses. In each row, the third generation particles are substantially more massive than the second generation, which in turn are substantially more massive than the first generation (leftmost column). Since they are more massive, each third generation particle can transform (we say decay in this case) into the second generation particle on the same row, which can decay to the first generation particle on the same row. These decays happen rapidly, typically in millionths of a second to trillionths of a trillionth of a second. First generation particles are stable (they may last forever), because they have nothing less massive into which they can decay.

The masses of the three types of neutrinos are very poorly known. We know that their masses are very small, and that the three masses are all different. The values shown in Figure 2-2 are measured in particle reactions. Surprisingly, cosmological measurements provide a much more stringent limit: the sum of the three masses is less than 0.1 eV. In some sense, it is easier to measure the mass of the universe and subtract everything else than to measure the neutrino masses directly.

Not shown in Figure 2-2 is the Higgs boson, discovered in 2012. It should go with the other four bosons, but that would spoil the nice 4-by-4 array. The Higgs boson was widely hyped as the “God particle”, but it is no more or less divine than any other particle. The Higgs boson is credited with being the source of the masses of the other elementary particles. Particles that interact strongly with Higgs are very massive. Particles that interact weakly with Higgs have small masses. Particles that do not interact at all with Higgs, such as photons and gluons, have zero mass.

Since the masses of the up quark, down quark, and electron are 0.26%, 0.51%, and 0.05% of the proton mass, the Higgs boson accounts for only about 1% of all atomic mass. That isn’t exactly Godly on a biblical scale.

Chapter 2 Review: Key Ideas

1. Pre-modern physics espoused a catalog of separate and distinct phenomena: material objects; waves; forces; space; and time.

2. We are shrinking the catalog through unification, while also adding new discoveries not yet unified.

3. Quantum mechanics and Einstein’s theories of relativity revolutionized our worldview, launching the era of modern physics.

Chapter 3
Physics: The Mother of All Sciences

Physics is the science of our physical world. It strives to understand what everything in our universe is made of, how it functions, and how it interacts with everything else.

In V1p3-1, Feynman says: “Physics is the most fundamental and all-inclusive of the sciences, and has had a profound effect on all scientific development.” Feynman goes on to explain in extensive detail the importance of physics to other sciences, showing that physics provides the foundation of nearly every other science, including astronomy, chemistry, biology, geology, and psychology. He says he would expand that list except for “lack of space.”

All this is of little help in learning physics. And if you want to learn psychology, I suggest there are better sources than The Feynman Lectures on Physics. Indeed, the segregation of the sciences into distinct disciplines is “unnatural.” Nature does not recognize the turf boundaries that university departments feel compelled to defend. These divides are diminishing somewhat with growing interest in physical chemistry, biochemistry, biophysics, astrophysics, mathematical physics, astrobiology, particle cosmology, etc.

Since our primary goal here is to master beginning physics at the Feynman level, I will briefly summarize his third lecture.

Chemistry V1p3-1

Chemistry may be more closely related to physics than any other science. Early chemists focused on discovering the elements, their properties, and how they react in various combinations. Their observations and the rules they developed inspired physicists to develop the modern, quantum mechanical model of atoms. We now know that chemical reactions involve atoms’ outer electrons, the valence electrons. We know such reactions are governed by the electron orbits prescribed by quantum mechanics. We are confident that all chemical processes can be explained by quantum mechanics in principle, although the equations are often too difficult to solve analytically.

Statistical mechanics is an important branch of both physics and chemistry that was mutually developed. Physicists were interested in the behavior of gases and chemists were interested in reactions in solutions. Both involve vast numbers of atoms. Statistical mechanics provides the means to analyze the global properties of such complex systems, without needing to track each individual atomic event.

Biology V1p3-2

Since quantum mechanics provides the rules for chemical reactions, and since much of biology is driven by chemistry, physics is important to biology. Physics is also needed to understand blood flow and pressure, nerve firing and signal transmission, color vision, and many electrical and mechanical functions of life.

One remarkable consequence of quantum mechanics, unknown in Feynman’s day, is tautomeric mutations of DNA. The DNA of every life form studied to date is composed of four nucleotide bases denoted A, C, G, and T. The sequence of these bases along strands of DNA defines the genetic code of all life on Earth. It might spell elephant in one case and amoeba in another. DNA is composed of two strands, coiling around one another in a double helix. The strands bind together because each base on one strand pairs with a base on the other strand. This pairing occurs in only two ways: A with T, and C with G. Each strand is thus the pair-opposite of the other. When DNA replicates, the strands peel apart. Along each original strand, a new complementary strand forms by assembling bases that pair properly with the original, as illustrated schematically in Figure 3-1.

Figure 3-1 Normal DNA Replication

Nucleotide bases are substantial molecules, 111 to 151 times more massive than a single proton. But it turns out that a single proton shifting to a different bond site can change the DNA code, as shown in Figure 3-2.

Figure 3-2 DNA Mutation with A to g

Here, a single proton shift changes base A into its tautomer g. We do not know when this change occurs, but for convenience we show it occurring as the original strands separate. The tautomer g bonds like base G and pairs with C rather than T. Note that the final lower strand is ACCG in Figure 3-2 instead of the original ATCG.

Single proton shifts are an inevitable consequence of the wave property of protons. Quantum mechanics explains that the location of a proton (and any other particle) is not precisely determined. It has a probability of being here and a different probability of being there, at the same time. The probability of the proton disappearing here and appearing there, at a neighboring bond site, is small but not zero. As a consequence, tautomers arise about once in 10,000 bases. Fortunately, we have DNA error-correcting mechanisms that eliminate the vast majority of these unintended slips.

Astronomy

Astronomy may be the oldest science. Recognizing regularities in the motions of celestial bodies enabled mankind to plant and harvest crops at the best times of year. Understanding gravity and the orbits of the planets launched physics. For some time, however, astronomy was considered distinct from physics. It was considered “merely” observational and insufficiently analytical to be “real” physics. One result was that astronomers were ineligible for Nobel Prizes in Physics, or any other category. That prejudice eventually waned as “physics” became more prominent in astronomy and “astronomy” became more prominent in physics. In the last few decades, several outstanding astronomers have been awarded Nobel Prizes in Physics. Now, no one would bother arguing whether the search for dark matter is more important to physicists or to astronomers.

Math & Physics

Several times in his book, and particularly in V1p1-2, Feynman addresses the close relationship and key differences between physics and mathematics. Mathematics courses begin with precisely defined terms and axioms from which theorems are logically proven. When properly derived, these theorems are absolutely true, now and forevermore. One example is the theorem that the interior angles of any triangle sum to 180 degrees. This is as true today as it was 23 centuries ago.

But that theorem is about ideal triangles in an ideal Euclidean geometry: triangles whose sides have zero width and whose corners are points of zero size. None of us has ever seen one of those, except in our mind’s eye. Does that theorem have any bearing on real triangles in the real world? No mathematician can answer that question; no one can prove it by logic alone. We must actually measure real triangles to determine to what level of precision this theorem applies to our world. The answer may be that no real triangle has exactly 180 degrees.

The latest cosmological observations (see Our Universe 3) indicate that on a scale of tens of billions of light-years, our universe is Euclidean to within a measurement uncertainty of 0.3%. However, on the scales of galaxies, stars, and planets, we know that space is not absolutely Euclidean. Euclid wasn’t wrong, but he was not describing the real, imperfect world. While much of mathematics is a good approximation to many everyday situations, clever people, including many theoretical physicists, often construct beautiful mathematical systems that have no relation whatsoever to reality. For physicists, mathematics is an indispensable tool that, like any fine tool, must be used wisely.

The real world is messy: lines have width, space is not flat, nothing is perfect, and everything changes. We all think we know what a chair is. But what is a chair really? At the atomic level, no two chairs are exactly identical, and none remains exactly the same forever. Yet, “chair” is a useful concept. We can all imagine idealized chairs, discuss their attributes, and determine what problems they can solve. We can also deal rationally with the differences between our idealized chair and a real chair. Physics advances by making useful idealizations that are valid to a good approximation, and understanding their limitations.

Cosmic Wine

Feynman ends this lecture quoting poetry: “The whole universe is in a glass of wine.” If for convenience, we “divide this glass of wine, the universe, into parts — physics, biology, geology, astronomy… remember that nature does not know it!” Feynman recommended you drink up and enjoy.

Chapter 3 Review: Key Ideas

1. Physics is the Mother of All Sciences, providing a foundational understanding of nature.

2. Every other science reveals remarkable discoveries that no physicist could have imagined.

Chapter 4
Conservation of Energy

Prior chapters have been appetizers. We now begin the main course: discovering physics.

One of the most basic and most important laws of nature is the conservation of energy. When physicists say X is conserved, we mean that in any closed system, the total amount of X never changes. A closed system is any region of interest enclosed by a boundary through which nothing exits or enters. The boundary could be an impenetrable wall, or it could be an imaginary all-enclosing surface through which nothing happens to pass. Since energy is conserved, its total amount in any closed system will always be equal to what it is now and always has been. Energy can move from place to place, it can change from one form to another, but none of it ever disappears and none is ever created.

Einstein proved, in his theory of general relativity, that energy (and also momentum) is conserved because of the geometry of our universe — because it has no holes. Mathematically, the geometry of our universe is a compact manifold. For any two points A and B in our universe, every point on the line connecting A to B is also in our universe (no points are missing). Thus energy (and also momentum) conservation is a fundamental part of our existence, as fundamental as space itself. In general relativity, we imagine a river of energy-momentum flowing through 4-dimensional spacetime.

What is Energy? V1p4-1

Energy is an abstraction. In most cases, you cannot see it or touch it. But after you learn the equations, you can compute how much energy is where. If you properly compute the energy in each form within whatever closed system you choose, the total amount will always be the same. If you find a discrepancy, it means your system was not closed after all; something has exited or entered, smuggling energy across the border.

Everything that has physical existence has energy. The more energy something has, the more of it exists. If a new particle pops into existence, it must acquire energy from something that simultaneously disappears. I think of energy as the currency of existence; but unlike human currencies, no one can print more energy.

Since energy is abstract, Feynman clarifies energy conservation with a tangible analogy: children’s blocks. Joan gives her son Luca 28 blocks. Since Luca is an extremely “exuberant” three-year-old, she picks expensive blocks that cannot be broken into pieces. And because the blocks are expensive, Joan often counts them to make sure Luca still has 28. For several days, no matter what imaginative games Luca plays, 28 blocks remain at bedtime. But one day, there were only 27. After searching everywhere, Joan found one block under the rug. The next night, Joan counted 30 blocks; Hunter had come to play, brought his identical blocks, and left three behind. After returning those extra blocks, several uneventful (block-wise) days passed. Then one night, Joan counted 24 blocks. She eventually recalled seeing Luca playing with a new toy box, but he refused to let her open his box, saying: “That’s Mine!” Joan went to the Internet and found that an empty toy box weighs 20 ounces and that each block weighs 3 ounces. She weighed Luca’s toy box and found it weighed 32 ounces. Being a math major, Joan devised a formula:

number of blocks seen
+ number of blocks under the rug
– number of blocks left by Hunter
+ (toy box weight in ounces – 20) / 3
= 28, a number that never changes

Feynman notes that in the increasing complexity of Joan’s world, she found an equation that allowed her to count blocks, even when she could not see them all. Comparing block counting to computing energy, Feynman writes that the most striking difference with energy is that we cannot see any of the blocks. Two other salient points emerge from this analogy. Firstly, one must ensure the system is closed; if Hunter can take blocks in or out of Luca’s house, this needs to be accounted for. Secondly, there are many forms of energy.
The principal forms are:

Mass
Potential: gravitational, electrical, and nuclear
Motion: kinetic, heat, and work
Electrical: chemical, elastic, radiation
Fields: electric and magnetic

Some of these categories overlap: the binding energy of two opposite charges can be called electrical potential energy, but if those charged bodies are atoms, chemical energy is the more common term. It is not important that you use exactly the right label, but you must properly compute and include all types of energy in calculating the total.

Feynman stresses that we do not know what energy really is, and we may never know. But we do understand the relationships between different forms of energy, and the relationship of energy to other entities such as momentum. That is enough for us to accomplish a great deal. We know the equation for each form of energy, and we know that the sum of all forms never changes. The law of conservation of energy is an immensely powerful tool, as we shall see.
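Joan’s bookkeeping formula is simple enough to sketch in code. Here is a minimal illustration (the function name and the sample numbers are ours, not Feynman’s); like energy accounting, it totals the visible and the hidden, and subtracts what crossed the system boundary:

```python
def total_blocks(seen, under_rug, left_by_hunter, box_weight_oz,
                 empty_box_oz=20, block_oz=3):
    """Joan's bookkeeping: visible blocks, plus hidden blocks inferred
    from the toy box's weight, minus blocks that entered from outside."""
    hidden_in_box = (box_weight_oz - empty_box_oz) // block_oz
    return seen + under_rug - left_by_hunter + hidden_in_box

# 24 blocks visible, toy box weighs 32 oz: 4 blocks hidden inside
print(total_blocks(24, 0, 0, 32))  # 28
```

If the total ever differs from 28, the system was not closed after all, which is exactly how physicists audit energy.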

Gravitational Potential Energy V1p4-2

Forms of energy that are due to the position of objects relative to one another are called potential energy. Gravitational potential energy is the energy that two bodies have due to their mutual gravity. Every object on Earth has gravitational potential energy due to Earth’s gravity. Lifting a massive object upward against the force of gravity requires energy; the energy expended is called work. The act of lifting converts work energy into gravitational potential energy. The work energy has effectively been stored — the object has more energy because it is higher, and that added energy has the potential to be released and converted into another form of energy. If this object later falls downward, pulled by gravity, it speeds up as its potential energy is converted into kinetic energy, the energy of motion.

What is the equation for gravitational potential energy? I will give you my explanation first, and then Feynman’s.

First, a sidebar: I am about to introduce metric units: kilograms and meters. You might be more comfortable with pounds and feet, but scientists almost always use the metric system. You simply must get used to it. In the most commonly used mks units, mass is measured in kilograms, distance in meters, and time in seconds. Also, I am about to use the symbol m for mass and also for meter. Such ambiguities arise often. Sorry, but this is unavoidable. Science has many more than 26 concepts. Context should clarify to what m refers. A mathematician once said: “Life is short, but the alphabet is even shorter.”

Lifting a one-kilogram mass (1kg) upward a distance of one meter (1m) requires a certain amount of work energy; call that W. Now if we lift a second 1kg mass upward 1m, that will require the same amount of energy, W, because the world has not changed — same kilogram, same meter, same gravity. If we lift six 1kg masses upward 1m, one after another, we must expend a total amount of work energy of 6W.
What if we lift all six masses upward 1m all at once? Should not that also require 6W of work energy? We get the same end result whether we lift the masses one at a time or all at once. Gravitational potential energy is the energy that a massive body has due to its height (its elevation in a gravitational field). Since the six masses are all 1m high at the end, the total gravitational potential must be 6W, regardless of how we lifted them.

What if we lift one body with a mass of 6kg upward 1m? What is the difference between six 1kg blocks and one 6kg block? If I tape six 1kg blocks together with magical zero-mass tape, they become one block. But massless tape should not change their gravitational potential energy. So a 6kg mass lifted 1m must also have a gravitational potential energy of 6W. By now, it is clear: an object’s gravitational potential energy must be proportional to its mass.

What about height? If the gravitational force does not change appreciably with elevation (as it does not near Earth’s surface), lifting a 1kg mass from a height of 1m to a height of 2m should take work W, the same W as lifting it from a height of 4m to a height of 5m. As with mass, changes in gravitational potential energy must be proportional to height changes, if gravity is constant.

Finally, if we double the force of gravity, if Earth’s density doubled, lifting a 1kg mass upward 1m would take twice as much work energy, 2W, and the gravitational potential energy of every object near Earth would also double. So, gravitational potential must be proportional to the strength of gravity, as well as an object’s mass and its height.

Near Earth’s surface, the force of gravity produces a nearly constant acceleration called g. Here, the equation for the gravitational potential energy of an object of mass m at a height h is:

Gravitational potential energy = m g h

Three notes. Firstly, we showed above that gravitational potential is proportional to mgh; we did not prove it was equal to mgh. As we will see later, in a consistent system of units, such as the mks system, the proportionality factor is 1. Secondly, from where do we measure h, where is h=0? The answer is that it really does not matter; any convenient place will do, because all that matters in the end are height differences. A 1kg mass lifted upward 1m increases the object’s gravitational potential energy by W whether you start at sea level, beside the Dead Sea, or at the top of Mt. Everest, assuming gravity is constant. Thirdly, we can write an equation for work energy:

Work energy = force × (distance the force acts through)

Now, let’s look at this Feynman’s way. He chooses a line of reasoning patterned after Nicolas Carnot’s argument on steam engines.
Using gravitational potential energy as an example, Feynman wishes to demonstrate the power of analytic reasoning and show how theoretical physicists work. I simplified this somewhat, but it is more challenging than the explanation I gave above. You may enjoy the mental calisthenics and a peek into the brain of a genius.

In V1p4-2, Feynman proposes we accept the impossibility of perpetual motion machines [of the first kind], namely devices that can deliver energy perpetually without receiving energy from an outside source. Perpetual motion machines are the delight of con artists preying on those ignorant of energy conservation. Since energy cannot be created, any machine that delivers energy must eventually exhaust its original supply.

Conservation of energy requires that any machine that goes through a sequence of actions and returns to its original state cannot have delivered energy. This statement in math is:

Energy at start = Energy at end (because energy is conserved)
Machine’s energy at start = Machine’s energy at end + energy delivered

If the first two terms in the last equation are equal (the machine returns to its original state), the energy delivered must be zero.

Consider the device shown in Figure 4-1, which lifts weights on one side by lowering weights on the other side. Note there are two blocks on the left and one block on the right. We assume all blocks are identical.

Figure 4-1 Weight Lifting Device

The machine is balanced so that an infinitesimal force could raise or lower either side. Such a device is called reversible; it can proceed equally well in opposite directions (left side up or down). Let’s assume each part of our reversible device has no weight and operates with zero friction. While unrealistic, such idealizations will help us understand fundamental principles.

Feynman next proves that no machine can outperform a reversible machine: that in this example, no lifting device can do better than a reversible one. His proof is an example of reductio ad absurdum — proving a statement is true by proving that its opposite is absurd. We shall assume some machine is better than a reversible machine, and show that this leads to absurd consequences. Here we go.

Assume we have an ideal reversible lifting device as above; call that Machine R. Also assume we have another similar device that may or may not be reversible; call that Machine X. Our assumption is: X outperforms R; X lifts weights with less energy. We will compare their performance in lifting two balls while lowering a single ball by 1m, in the manner of Figure 4-1. If the reversible Machine R lifts two balls up a height r and Machine X lifts two balls up a height x, our assumption is x > r.

On the left side of Figure 4-2 is Machine X, with one ball up and two down. This is our original state, Step (a). Let’s not worry about the rest of the figure just yet. We use Machine X to lower the single ball by 1m and raise the other two balls by x, bringing us to Step (b).

Figure 4-2 Steps (a) & (b)

The next element of the figure is a ball rack mounted on a piston. Lowering the ball rack pumps water, shown in gray, into the reservoir on the right. The ball rack is at exactly the right height so that both balls can slide onto it from Machine X with no height changes, and hence no change in gravitational potential energy. We assumed Machine X outperforms Machine R, so we show the pair of balls being quite high. After the balls slide onto the rack, we open a valve (not shown) pumping water into the reservoir as the ball pair drops a distance x–r (we assumed x>r). The ball pair then precisely aligns with Machine R, which is the last element in the figure. That takes us to Step (c) shown in Figure 4-3.

Figure 4-3 Steps (c) & (d)

Lastly, we use the reversible Machine R to raise one ball 1m, while lowering two balls a distance r, resulting in Step (d). We know R can do this because it is reversible: this last action is the reverse of lowering one ball 1m while raising two balls up a distance r. This sequence of steps ends with the same number of balls at the same original heights, hence the total gravitational potential energy of all balls is unchanged. But by assuming x>r, we have allowed water to be pumped upward into the reservoir, increasing its gravitational potential energy. We thus created energy in violation of the law of energy conservation, which Feynman assumes is a universal law. This is the absurd conclusion of our assumption. Therefore, x cannot be greater than r. No machine can outperform a reversible machine. Feynman next demonstrates that all reversible machines perform identically. We showed above that no machine can outperform the reversible machine R, so no other reversible machine can be better than R. If Y is another reversible machine, then no machine can outperform Y, by the above logic. R cannot outperform Y, and Y cannot outperform R. Hence their performances must be equal. Thus all reversible machines must perform equally well. Now, what is the value of r? How well can a reversible machine perform? Figure 4-4 shows a reversible lifting device. Step (a) is the starting position, with 2 balls on a stationary platform on the right and one ball on a platform on the left. In Step (b), we slide the balls onto the lifting machine. Let’s imagine this requires zero energy in an ideal, frictionless world. In Step (c), our ideal reversible machine lowers the left ball by 1m and lifts the two right balls a height change r.

Figure 4-4 Reversible Machine Sequence

Knowing what r was going to be, we prearranged the vertical spaces on the right to equal r. This ensures that the machine rack and the platform align perfectly in Step (d), when we slide the balls off the machine onto platforms with no height changes. In Step (e), we move the lowest ball to the right and the highest ball to the left. Again, we prearranged the apparatus so that the lowest ball does not change height in moving to the right. This leaves only one remaining question: in Step (e), is the change in height of the highest ball positive, negative, or zero? Comparing the right sides of Step (d) and (a), we can say each of the two balls moved up a height r, or we can equivalently say one ball did not change position while the other moved up a height 2r. Thus, the question is: how does a 2r rise compare with the left ball’s 1m drop? If 2r > 1m, the highest ball would move to a lower elevation in Step (e), which would release energy and yet restore the system to its original state. This would violate the conservation of energy. But if 2r < 1m, we could reverse the sequence of steps, starting with Step (f) and going backward to Step (a). The highest ball would then start at height 2r and end higher at 1m, while restoring the system to the equivalent of its original state. That would also violate the conservation of energy. Hence, 2r cannot be less than nor greater than 1m; it must be that 2r = 1m. This means the sum of the heights of all balls must not change throughout a complete sequence of steps. Thus, gravitational potential energy must be proportional to height, which is what we derived earlier.
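The conclusion that gravitational potential energy is proportional to mass, strength of gravity, and height can be put into numbers. A minimal sketch in Python (our own illustration, assuming mks units and g ≈ 9.8 m/s²):

```python
G = 9.8  # m/s^2: gravitational acceleration near Earth's surface

def gravitational_pe(mass_kg, height_m, g=G):
    """Gravitational potential energy m*g*h, in joules (mks units)."""
    return mass_kg * g * height_m

# Six 1 kg masses lifted 1 m store the same energy as one 6 kg mass lifted 1 m
assert abs(6 * gravitational_pe(1, 1) - gravitational_pe(6, 1)) < 1e-9

# Only height differences matter: lifting from 1 m to 2 m costs the
# same work as lifting from 4 m to 5 m
rise_low = gravitational_pe(1, 2) - gravitational_pe(1, 1)
rise_high = gravitational_pe(1, 5) - gravitational_pe(1, 4)
assert abs(rise_low - rise_high) < 1e-9
```

In mks units the proportionality factor is 1, so this function returns joules directly.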

Other Gravitational Examples V1p4-4

Employing the principle of conservation of energy often simplifies complex problems. Figure 4-5 shows weight X on an inclined plane tied to a second weight Y hanging straight down from a pulley. The inclined plane is the hypotenuse of a right triangle, whose side lengths are 3m, 4m, and 5m.

Figure 4-5 Inclined Planes

Assuming zero friction: what weight X will exactly balance weight Y so that the two weights can effortlessly slide up and down, between the two positions shown? One could solve this problem with trigonometry, but let’s do it with conservation of energy. Going from the left position to the right one, mass X rises 3m (the length of the vertical side) while mass Y drops 5m (the length of the hypotenuse). Conservation of energy requires 3X = 5Y.

Figure 4-6 shows a more complex situation. Weight Z hangs from a rope that goes over a pulley and supports the left end of a bar. A fulcrum supports the right end of the bar. Weights Y and X are placed on the bar, dividing its length into equal thirds.

Figure 4-6 Weighted Bar

Question: what mass Z exactly balances masses Y and X? If the bar is exactly balanced, an infinitesimal force could move Z upward 3cm. That would drop the left end of the bar 3cm. Since Y and X are respectively 2/3rds and 1/3rd of the way from the fulcrum to the hanging end of the bar, Y would drop 2cm and X would drop 1cm. To conserve gravitational potential energy, the total of all energy changes must be zero:

3Z – 2Y – X = 0
Z = (2Y + X) / 3
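Both answers can be checked with the same energy bookkeeping. A short sketch (the function names are our own, assuming the geometry described above):

```python
def incline_balance_weight(y):
    """Weight X that balances Y on the 3-4-5 incline: 3X = 5Y."""
    return 5.0 * y / 3.0

def bar_balance_weight(y, x):
    """Mass Z that balances Y and X on the bar: 3Z = 2Y + X."""
    return (2.0 * y + x) / 3.0

# Verify via virtual displacements: the total change in m*g*h is zero.
y, x = 2.0, 4.0
z = bar_balance_weight(y, x)
delta_pe = 3 * z - 2 * y - x   # Z up 3 cm, Y down 2 cm, X down 1 cm
assert abs(delta_pe) < 1e-9
```

The trick in both cases is the same: write down how far each weight moves, demand that the total change in potential energy be zero, and solve.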

Kinetic Energy V1p4-5

Now let’s discuss kinetic energy and how it can convert back and forth to gravitational potential energy. Kinetic energy is the energy associated with moving bodies. Consider the pendulum in Figure 4-7. When a pendulum is pulled to one side and released, it begins to swing back and forth. As it swings, its gravitational potential energy is constantly changing. It is maximum at the top of the swing and minimum at the bottom, when the pendulum is at the center.

Figure 4-7 Pendulum

Question: when the gravitational potential energy decreases, where does the energy go? The answer is: as the pendulum swings, energy is continually being converted back and forth between two forms: gravitational potential energy and kinetic energy. At the top of the swing, the pendulum stops moving for an instant; its kinetic energy is then zero (no motion), and its gravitational potential is at its maximum. At the bottom of the swing, the gravitational potential energy is at its minimum, and the pendulum’s velocity, and hence its kinetic energy, reach their maxima. Energy conservation ensures that the sum of these two forms of energy is constant.

The Newtonian equation for kinetic energy T for a body of mass m and velocity v is: T = mv²/2. We can now write an equation for the total energy of the pendulum:

Total E = m v(t)²/2 + m g h(t)

I write v(t) and h(t) because velocity and height change over time — v and h are functions of time. We know the total energy E at the top of the swing, where v=0. Call that height H. When h(t) = H, v(t) = 0 and E = mgH. Plugging that into the prior equation we get:

m g H = m v(t)²/2 + m g h(t)
v(t)² = 2 g {H – h(t)}

Note that there is no mass m in the last equation — the m’s cancelled. The pendulum’s motion does not depend on its mass.
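The last equation lets us compute the pendulum’s speed at any height without knowing its mass. A quick sketch (assuming g = 9.8 m/s² and heights measured in meters from the bottom of the swing; the function name is ours):

```python
import math

G = 9.8  # m/s^2

def pendulum_speed(H, h, g=G):
    """Speed at height h for a pendulum released from rest at height H.
    From v(t)^2 = 2 g (H - h); note that the mass has cancelled out."""
    return math.sqrt(2.0 * g * (H - h))

# Released from 0.2 m above the bottom of the swing:
assert pendulum_speed(0.2, 0.2) == 0.0   # momentarily at rest at the top
v_bottom = pendulum_speed(0.2, 0.0)      # maximum speed, at the bottom
```

A heavy bob and a light bob released from the same height reach the bottom at the same speed, which is exactly what the cancelled m tells us.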

Other Forms of Energy V1p4-6

We now briefly mention other forms of energy. We will examine all of these more closely later.

Pulling on a spring requires energy; a force must be exerted through a distance to stretch a spring. The expended energy is stored in the spring in what we call elastic energy. That elastic energy can be later released if we use the spring to lift something. Elastic energy is a form of electrical energy; it is the deformation of electrical bonds between atoms. When we pull atoms away from one another, or compress them together, we stress the material and move atoms away from their optimal, lowest energy, separations. At other than their natural separations, the atoms in a material object have more energy than their most stable, lowest energy state. The energy expended in stretching a spring increases the electrical energy of the spring’s atoms. Releasing the spring allows atoms to return to their natural separations, thereby releasing their stored energy.

Chemical energy is another form of electrical potential energy. When isolated atoms come together, they may bind to one another and release chemical energy. This will happen if their electrons move closer on average to positively charged nuclei.

Another form of energy is often associated with friction. We often ignore friction to simplify our discussions, but friction is a fact of life. When two macroscopic objects rub against one another, atoms in both objects are squeezed and jostled. This increases their kinetic energy and potential energy, and increases the macroscopic objects’ temperatures. This is the heat energy we discussed in Chapter 1.

Earlier in this chapter we said radiation was a form of energy. Light is electromagnetic radiation, waves in the electromagnetic field. Other forms of radiation are individual particles, elementary or composite, with high kinetic energy.

Electrical and nuclear potential energies are associated with the separations of interacting particles.
The electrical potential energy of two opposite electric charges decreases, becoming more negative as the charges move closer to one another. Protons and neutrons have virtually zero nuclear potential energy if their separation is more than a few of their own diameters. But, at a separation of about one diameter, the nuclear potential energy greatly decreases, becoming more negative, and releasing vastly more energy than any non-nuclear process. In 1905, Einstein proclaimed that mass was also a form of energy. Indeed, a stupendous amount of energy is condensed into every material object. All other forms of energy are abstract, as we have learned in this chapter. But, mass is the epitome of being tangible.
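Two of these forms have simple formulas we can already sketch. The constants and function names below are our own illustration; the Coulomb form of electrical potential energy and Einstein’s E = mc² are standard physics:

```python
K = 8.99e9   # N*m^2/C^2, Coulomb constant (approximate)
C = 3.0e8    # m/s, speed of light (approximate)

def coulomb_pe(q1, q2, r):
    """Electrical potential energy of two point charges a distance r apart.
    Negative for opposite charges, and more negative as r shrinks."""
    return K * q1 * q2 / r

def mass_energy(m_kg):
    """E = m c^2: the energy (in joules) condensed into a mass m."""
    return m_kg * C ** 2

# Opposite charges: moving closer makes the potential energy more negative
assert coulomb_pe(1e-6, -1e-6, 0.1) < coulomb_pe(1e-6, -1e-6, 0.2)
# One gram of mass corresponds to roughly 9 x 10^13 joules
assert abs(mass_energy(0.001) - 9.0e13) < 1e11
```

The second assertion hints at why nuclear processes, which tap a sliver of this mass energy, dwarf chemical ones.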

Other Conservation Laws

This chapter has focused on the conservation of energy. We will thoroughly discuss several other conservation laws later, but they are so important that I list here key quantities that are conserved:

energy
linear momentum
angular momentum
electric charge
baryons
leptons

No violation of any of the above conservation laws has ever been observed in any circumstance. We believe these laws are absolute. Anyone who discovers a confirmed violation would receive a Nobel Prize. I am not holding my breath.

Scientists once thought mass was separately conserved; that the total mass in a closed system never changed. Einstein showed that this is not true; mass can be converted into other forms of energy, and vice versa.

Momentum p equals mass times velocity in Newtonian mechanics. Since space is three-dimensional, so are velocity and momentum. The conservation of linear momentum states that for each direction in space, the total sum of all momenta is constant. You are free to pick any three orthogonal directions you please.

Angular momentum L is a property of rotating objects, such as a spinning top or an orbiting planet.

L = mass × rotational velocity × radius

Angular momentum is conserved about any axis oriented in any direction.

Conservation of electric charge states that the total net sum of electric charge never changes, and we strongly believe that the total net electric charge throughout the universe is zero. In these sums, positive charges add and negative charges subtract. It is possible to create (or destroy) an electric charge, such as an electron’s negative charge, but only if an equal amount of the opposite charge is simultaneously created (or destroyed), such as an anti-electron’s positive charge.

Particles of matter that participate in the strong nuclear force are called baryons, from a Greek word meaning heavy. Protons and neutrons have baryon number +1; quarks have baryon number +1/3; anti-protons, anti-neutrons, and anti-quarks have baryon numbers –1, –1, and –1/3 respectively. Conservation of baryons means that in every interaction the total number of baryons, adding the positives and subtracting the negatives, never changes. 
Similarly, particles of matter that do not participate in the strong force are called leptons, from a Greek word meaning light. Electrons have lepton number +1 and anti-electrons have lepton number –1. Conservation of leptons means that in every interaction the total number of leptons, adding the positives and subtracting the negatives, never changes.

I said earlier that general relativity proves that energy and momentum conservation result from our universe being a compact manifold (it has no holes). Newton’s laws also show that energy and momentum are conserved, as does quantum field theory (QFT). In QFT, energy conservation results from the laws of nature depending on time differences but not on absolute time — nature’s laws do not change over time; for example, no equation contains the year 1066. Also, natural laws depend on position differences but not on absolute position. This ensures conservation of linear momentum. Finally, natural laws depend on angle differences but not on absolute orientation, which ensures conservation of angular momentum.

Since all these approaches lead to the same conservation laws, choosing which approach you prefer is a matter of taste. I vote for Einstein.
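These conservation laws are stated here without equations, but conservation of linear momentum is easy to illustrate numerically. Below is a minimal sketch of a one-dimensional elastic collision in Newtonian mechanics; the masses and velocities are made-up illustrative values, and the collision formulas are the standard 1-D elastic-collision results.

```python
# Illustrative 1-D elastic collision: total momentum (and, for an elastic
# collision, total kinetic energy) must be the same before and after.
# Masses in kg, velocities in m/sec; the numbers are invented for this sketch.
m1, v1 = 2.0, 3.0   # projectile
m2, v2 = 1.0, 0.0   # target, initially at rest

# Standard elastic-collision formulas for final velocities (1-D, Newtonian).
u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)   # works out to 1 m/sec
u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)   # works out to 4 m/sec

p_before = m1 * v1 + m2 * v2
p_after = m1 * u1 + m2 * u2
ke_before = 0.5 * (m1 * v1**2 + m2 * v2**2)
ke_after = 0.5 * (m1 * u1**2 + m2 * u2**2)
```

However the masses and initial velocities are chosen, p_before equals p_after; that is the content of the conservation law.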

Chapter 4 Review: Key Ideas

1. Energy comes in many forms. The principal forms are:
mass
potential: gravitational, electrical & nuclear
motion: kinetic, heat, and work
electromagnetic: chemical, elastic, radiation, and fields

2. Energy in one form can convert into energy in another form.

3. The total energy in all forms within any closed system never changes.

Chapter 5: Time and Distance

This chapter explores the concepts of time and distance. Over the last 50 years, we have witnessed tremendous discoveries pushing the limits of time and distance. While preserving Feynman’s theme and spirit, the bulk of this chapter is new.

Galileo The Experimenter

We said earlier that experiment is the sole judge of truth in science. In V1p5-1, Feynman emphasizes the importance of quantitative measurements: “Only with quantitative observations can one arrive at quantitative relationships, which are the heart of physics.”

Many consider Galileo Galilei (1564-1642) to be the first modern physicist, the father of experimental science. Before Galileo, “natural philosophers” followed the model of the ancient Greeks, settling scientific questions by debate and force of authority. They sought to divine through discourse the ideals to which nature “should” conform. The ideas of the most articulate and most esteemed, particularly Aristotle, were deemed true.

Galileo chose a very different road to truth. He decided to test long-believed notions. One of his famous experiments was observing a ball rolling down an inclined plane, as illustrated in Figure 5-1. He did more than simply watch; he measured how far the ball traveled in what time interval.

Figure 5-1 Ball Rolling on Inclined Plane

Without the benefit of today’s precision timepieces, Galileo used his own pulse to count equal increments of time, and he recorded the ball’s positions at those times. Galileo discovered the following results for the distance D that balls travel after elapsed time T:

T: D
1: 1
2: 4
3: 9
4: 16

He discovered that the distance traveled was proportional to the square of the elapsed time. [An aside. Feynman suggests you try measuring time with your pulse or by counting silently or by watching numerals roll over in your mind’s eye. Feynman told me once that he diligently practiced counting time just to see how well he could do. He was surprised at his success, even while running up stairs or reading a newspaper. His internal timer was so good that he could beat 11 beats on one bongo drum while beating 13 on another. All this just for fun.]
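The pattern in Galileo’s table — distance proportional to the square of the elapsed time — can be confirmed in a trivial sketch, using the numbers from the table above:

```python
# Galileo's inclined-plane data: distance D (in equal units) after elapsed
# time T (in pulse beats), taken from the table above.
inclined_plane_data = {1: 1, 2: 4, 3: 9, 4: 16}

# Check that every measured distance is exactly the square of its time.
fits_square_law = all(D == T ** 2 for T, D in inclined_plane_data.items())
```

With these units the proportionality constant is 1, so D = T² holds exactly.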

Time

What is time? In his excellent, but very challenging book, Quantum Gravity, Carlo Rovelli lists various human notions of time:

We have memories and expectations.
The Now is a unique moment.
The past and future are different.
Time is unrelated to anything else.
Time is immune to external influence.
Universality, the same time everywhere.
Each moment occurs only once.
Time advances at a constant rate.
Time is one-dimensional.

Some physicists now believe that most of these human notions of time are wrong. Indeed, Rovelli and others suggest physicists entirely abandon the notion of time — except we all want to be paid on time.

As with energy, space, and other foundational concepts, we do not know what time really is. Physicists are practical people; we want to deal with things we can measure and compute. The ultimate meaning of time is less important to science than whether it is measurable and useful. Einstein said it well: “Time is what a clock measures.”

In physics, time is how we keep track of change, and the best way to measure time is with regularly repeating natural phenomena. A good start is measuring time in days, the time between consecutive high noons (when the Sun is at its peak). We subdivide days into hours, minutes, and seconds with other repeating phenomena, such as swinging pendulums and escapements in mechanical watches. We have reasonable confidence in these measurements because of their consistent number of repetitions per day.

To measure smaller time intervals, we need phenomena that repeat faster, such as the oscillations of quartz crystals, whose cycles are counted electronically. Quartz crystals in wristwatches typically oscillate every 30 microseconds, and are precise to parts per million. The world’s most precise clock, announced in April 2014, “counts” the oscillations of strontium atoms held at a temperature of 3 millionths of one degree above absolute zero. This atomic clock, developed and operated by the U.S. National Institute of Standards and Technology (NIST), is precise to 1 second in 5 billion years, about 16 parts in a million, trillion. NIST’s atomic clock is hardly a wristwatch — it fills an entire lab — but NIST transmits atomic time via radio. My wristwatch detects that signal and synchronizes to NIST time. I may be slow, but I am never late.

Atoms make great clocks because they never age; they have no internal parts that wear or fatigue. Individual atoms are also much less sensitive than macroscopic matter to temperature, humidity, and other environmental factors. Macroscopic objects are composed of trillions of parts that can easily shift slightly in response to external forces. By contrast, individual atoms have very few allowed states, and they cannot change states easily.

To measure long time periods, a different type of atomic clock is appropriate. While most common atoms are stable (they will exist for eternity as far as we can tell), some nuclei are radioactive — they transform into other nuclei, often releasing high-energy particles we can detect. As quantum mechanics explains, we can precisely determine the rate at which radioactive nuclei decay, but we can never know when any particular nucleus will decay. But, knowing the rate is good enough for radioactive dating. 
(I am referring to using radioactive nuclei to measure age, not to the socializing of hazmat teams.)

Perhaps the most famous example of radioactive dating employs carbon-14. Carbon has 3 isotopes; each isotope has 6 protons (or it would not be carbon) plus either 6, 7, or 8 neutrons. C-14, with 8 neutrons, is radioactive and extremely rare, amounting to only 1 trillionth of the carbon in Earth’s biosphere. C-12 and C-13 are both stable, with C-12 accounting for 99% of our carbon and the remainder being C-13.

Cosmic ray neutrons striking Earth’s atmosphere occasionally transform N-14 into C-14; a neutron knocks out one of nitrogen’s 7 protons and takes its place. You may have seen similar action in billiards. Carbon-14 is produced at a relatively stable rate and decays back to nitrogen-14 with a half-life of 5730 years. At 1 part per trillion, production and decay are in equilibrium; the same number of C-14 nuclei are being produced and are decaying in each instant of time.

“Half-life” is a term used to characterize radioactive decay (and some other processes): during 1 half-life, half of the nuclei present at the start will decay by the end of this interval. If we start with 8000 carbon-14 atoms, 4000 will remain after 1 half-life (5730 years), 2000 will remain after 2 half-lives (11,460 years), 1000 will remain after 3 half-lives (17,190 years), and 8 will remain after 10 half-lives (57,300 years).
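The halving arithmetic is easy to reproduce; here is a small sketch using the numbers from the text (8000 starting atoms, 5730-year half-life):

```python
HALF_LIFE_YEARS = 5730.0  # C-14 half-life

def survivors(initial_count, elapsed_years):
    """Expected number of undecayed nuclei: halve once per half-life."""
    return initial_count * 0.5 ** (elapsed_years / HALF_LIFE_YEARS)

after_1 = survivors(8000, 5730)    # 1 half-life  -> 4000
after_3 = survivors(8000, 17190)   # 3 half-lives -> 1000
after_10 = survivors(8000, 57300)  # 10 half-lives -> 7.8, the text's "8"
```

The 10-half-life value is really 8000/1024 = 7.8125; the text rounds it to 8.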

Half of the survivors will decay during the 79th half-life, just as half did during the first. That dependence is a decreasing exponential, e^(–at). It arises because each C-14 nucleus has the same probability of decaying every second. Atoms are too small to have a clock to tell them when it is time to decay. They have the same percentage decay rate per second, every second, so there’s no need for them to keep track of time. It’s a quantum thing. We will thoroughly explore quantum mechanics in the Feynman Simplified 3 series.

Back to C-14 dating. Plants take in carbon dioxide, a trillionth of which contains C-14, and incorporate all three types of carbon into their cells. Every creature that eats plants, or that eats animals that eat plants, also incorporates radioactive carbon into their bodies. This means we are all slightly radioactive. About 3000 carbon and 4000 potassium atoms decay each second in every one of us — more if you enjoy bananas that are high in potassium. That might sound like a lot of radioactivity, but it is not enough for us to glow in the dark. The only way to become less radioactive is to stop eating.

So how does carbon-14 dating work? In all living creatures, about 1 trillionth of their carbon atoms are radioactive C-14. When a living creature dies, it stops taking in more C-14. The C-14 atoms that subsequently decay are not replaced through ingestion. The amount of C-14 in the creature’s remains slowly diminishes over time. A wooly mammoth that died 34,380 years ago will contain (1/2)^6 = 1/64th of its original C-14, since 6 half-lives have passed since its death. Hence if the fraction of C-14 in organic remains is 1/64th of a trillionth, we know that creature died 34,380 years ago. 
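Running that logic backwards gives the dating formula: the age is the half-life times the number of halvings, log2(living fraction / measured fraction). A sketch using the mammoth example from above:

```python
import math

HALF_LIFE_YEARS = 5730.0   # C-14 half-life
LIVING_FRACTION = 1e-12    # C-14 fraction of carbon in living tissue

def c14_age(measured_fraction):
    """Years since death, inferred from the C-14 fraction in the remains."""
    n_half_lives = math.log2(LIVING_FRACTION / measured_fraction)
    return n_half_lives * HALF_LIFE_YEARS

# The wooly mammoth example: 1/64th of a trillionth remains.
mammoth_age = c14_age(LIVING_FRACTION / 64.0)  # 6 half-lives -> 34,380 years
```

This idealized formula ignores the cosmic-ray-flux corrections mentioned next; it is the textbook arithmetic, not a field-ready dating procedure.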

The above would be exactly true in an ideal world. In practice, the flux of cosmic rays does vary somewhat, so corrections are required. Also, great care must be taken to prevent contaminating ancient remains with “new” carbon containing a full shot of C-14. C-14 dating becomes less precise for older remains, and is rarely useful beyond 60,000 years.

To date older materials, such as meteorites, Moon rocks, and Earth rocks, scientists use radioactive atoms with longer half-lives. Uranium and thorium have half-lives of 4.47 billion years and 14 billion years respectively. The oldest meteorites have been found to have formed 4.569±0.002 billion years ago. That date is assumed to be when our Solar System, and the Sun and the Earth, began forming. A complete discussion of atomic clocks and radioactive dating is in Timeless Atoms.

By entirely different techniques, astrophysicists have dated our universe at 13.82±0.05 billion years old. See Our Universe 3 for a full explanation.

Figure 5-2 presents a complete time scale of our existence that contains 62 digits. As usual, each digit represents a factor of ten. For clarity, each entry is rounded off to its most significant digit.

Figure 5-2 Universal Time Scale

The longest possible time period, the age of our universe, is the biggest number: 4×10^17 seconds. A human lifespan is about 3×10^9 seconds. One year is nearly π×10^7 seconds, one day is 9×10^4 seconds, and one cycle of NIST’s strontium clock lasts 2×10^–15 seconds. The lifetime of a W boson is indirectly determined to be about 3×10^–25 seconds.

The shortest possible time, physicists believe, is the time required to travel the shortest possible distance at the highest possible speed: the Planck time, equal to the Planck length divided by the speed of light, is 5×10^–44 seconds.
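The Planck time can be reproduced from the fundamental constants, since it equals sqrt(ħG/c⁵); a sketch using rounded CODATA-style values:

```python
# Planck time = sqrt(hbar * G / c^5); the Planck length is that time times c.
hbar = 1.054571817e-34  # reduced Planck constant, J*sec
G = 6.67430e-11         # Newton's gravitational constant, m^3/(kg sec^2)
c = 2.99792458e8        # speed of light, m/sec

planck_time = (hbar * G / c ** 5) ** 0.5    # about 5 x 10^-44 sec
planck_length = planck_time * c             # about 1.6 x 10^-35 m
```

Both numbers match the values quoted in the text to the precision shown.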

Distance

V1p5-5 Let’s avoid doing the “what is space really?” thing again. Whether or not Einstein actually said this, for physicists, distance is what a ruler measures. Instead, let’s explore how we measure some immense and some infinitesimal distances.

Rulers suffice for many common distances, but we need non-contact technologies for things that are far apart. Radar works well over large distances. As an example, almost every airport uses radar to track approaching aircraft. Radar systems emit radio waves, electromagnetic radiation with wavelengths in the range of 1m. The time, t, required for radio waves to reach a destination and return determines the destination’s distance, d: d = ct/2, where c is the speed of light. Astronomers have even used radar to measure the distance between Earth and Venus to a precision of ±30m, about 1 part in 10 billion. Repeated measurements over the course of many months determine Venus’ entire orbit. Since the Sun is at the center of Venus’ nearly circular orbit, this determines Earth’s distance from the Sun, the astronomical unit (AU).

For longer distances, we can employ parallax, which is how our binocular vision works. Our eyes view the world from two slightly different perspectives, because our eyes are typically 6 cm apart. Our brain uses the difference between the two images to estimate distances. That is how quarterbacks know how far to throw a pass, and how drivers know whether it is safe to overtake slower traffic.

Try it. Stretch out one hand, place your index finger in front of a distant object, and alternate closing one eye and then the other. You will see your finger jump back and forth relative to everything farther away. That is parallax. This shows how differently your two eyes see the same reality. If your arm were shorter or your eyes farther apart, the parallax difference would be more.

Figure 5-3 shows how astronomers use parallax to measure the distance to nearby stars. As Earth orbits the Sun, our observing position changes. Our location on January 1st is 300 million km (2 AU) away from our location on July 1st. Images taken on two days that are six months apart will show small differences as parallax shifts the images of nearer stars relative to farther ones. By measuring these shifts, and knowing the AU, astronomers have computed the distance to thousands of stars within the nearest few hundred light-years. (One light-year is the distance light travels in one year, about 6 trillion miles or 10 trillion km.)

Figure 5-3 Measuring Distance to Stars
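The geometry of Figure 5-3 reduces, for the tiny angles involved, to the small-angle relation distance = baseline / angle (in radians). A sketch; the check below is that a parallax angle of 1 arcsecond over a 1 AU baseline gives the astronomers’ standard unit, one parsec, about 3.26 light-years:

```python
import math

AU = 1.495978707e11      # astronomical unit, m
LIGHT_YEAR = 9.4607e15   # m

def parallax_distance(baseline_m, angle_degrees):
    """Small-angle parallax: distance = baseline / angle(in radians)."""
    return baseline_m / math.radians(angle_degrees)

# A shift of 1 arcsecond (1/3600 of a degree) seen over a 1 AU baseline
# defines one parsec.
one_parsec = parallax_distance(AU, 1.0 / 3600.0)  # about 3.09 x 10^16 m
```

Smaller measurable angles mean larger measurable distances, which is why Hubble’s billionths-of-a-degree resolution reaches thousands of light-years.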

In April 2014, the Hubble Space Telescope made a record-setting parallax measurement. With an effective angular resolution of 5 billionths of one degree, Hubble measured the distance to a Cepheid Variable star 7500 light-years away to a precision of ±3%. With more sophisticated techniques, astronomers have been able to measure distances almost all the way to the edge of our visible universe, 45 billion light-years away. See Our Universe 5 for the entire amazing story. After centuries of wondering in vain, we now have a complete distance scale of our existence, shown in Figure 5-4. Since space and time are connected, this scale also spans 62 digits from the largest possible to the smallest possible things. Again, each entry is rounded off to its most significant digit.

Figure 5-4 Universal Distance Scale

Our entire visible universe is 9×10^26 m across; our galaxy is 1×10^21 m (100,000 light-years) across; and Sirius, our brightest star, is 1×10^17 m away. Now that Pluto has been demoted, the edge of the planetary zone of our Solar System is defined by Neptune’s orbit, which is 1×10^13 m across.

The Sun’s diameter is roughly 1×10^9 m; and Earth is somewhat more than 1×10^7 m across. People are about 2m tall (at least NBA stars are). A single human blood cell is 2×10^–5 m across; and an atom is 2×10^–10 m across. The smallest well-measured size, as of 2015, is the charge radius of a proton at 0.841×10^–15 m.

And, the smallest possible size that physicists believe can exist in our universe is the Planck length at 1.616×10^–35 m.

There are several reasons to think that a smallest possible size may exist, although we do not yet have a comprehensive theory of such tiny dimensions. Quantum mechanics tells us that probing ever-smaller distances requires an ever-higher energy. General relativity tells us that if a high enough energy is confined in a small enough volume it creates a black hole of a certain size. More energy produces a larger black hole. These opposing effects result in a minimum distinguishable size: the Planck length. If you try to probe smaller with more energy, you will simply make a larger black hole.

How do physicists measure something as small as a proton? One must use another subatomic particle; everything else is too big. Recall that a hydrogen atom consists of only one proton and one electron. In its liquid state, hydrogen atoms are packed together, but because protons are 100,000 times smaller than atoms, a vast “open” space exists around each proton.

Imagine that we shoot a beam of high-energy particles from a particle accelerator at a liquid hydrogen target. Since protons are nearly 2000 times heavier than electrons, beam particles that hit protons can be deflected through much larger angles than those hitting electrons. The fraction of beam particles that collide with target protons is simply the ratio of the area covered by target protons to the area exposed to the beam. If the target were one atom thick, that fraction would be (1/100,000)^2 = 10^–10, the square of (radius of a proton / radius of a hydrogen atom).

Of course, real targets are much more than one atom thick. At 20 Kelvin, a 1cm-thick liquid hydrogen target is 20 million atoms thick. The fraction of beam particles that collide with target protons is then 10^–10 times 2×10^7 = 2×10^–3. At that small fraction, we do not need to correct for target protons shadowing one another.
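The collision-fraction arithmetic can be sketched directly, using the round numbers from the text:

```python
# Ratio of proton radius to hydrogen-atom radius, roughly 1/100,000.
size_ratio = 1.0 / 100_000

# Fraction of beam particles hitting a proton in a one-atom-thick target:
# the ratio of covered area to exposed area, i.e. the size ratio squared.
one_layer_fraction = size_ratio ** 2           # 10^-10

# A 1 cm liquid-hydrogen target is about 20 million atoms thick.
atoms_thick = 2e7
full_target_fraction = one_layer_fraction * atoms_thick  # 2 x 10^-3
```

Counting the deflected beam particles then yields the proton’s area, and hence its radius.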

The latest experiments measure the proton radius to be 0.8409±0.0004 fermi. A fermi, named in honor of Enrico Fermi, equals 1×10^–15 m, a trillionth of a millimeter.

Recognizing that subatomic particles are fuzzy, with no sharp edges, physicists often speak of cross-sections; the proton cross-section equals π times the square of the radius found above. A cross-section of one barn is an area of 10 fermis by 10 fermis, immense by particle physics standards. For comparison, the production cross-section for Higgs bosons is 2×10^–11 barns.

I suppose there’s a joke about the physicist whose aim was so bad that he couldn’t hit the broad side of 1 barn.

Final Notes

Feynman ends this lecture emphasizing the impact of special relativity and quantum mechanics on measurements. We discuss all this more thoroughly in later chapters.

Einstein postulated that light was unique — that its speed is the same for all observers regardless of their motion. From that assumption, he showed that all observers will see the same laws of nature but will measure different values for length, time, mass, and related quantities. The crew of a passing rocket ship will measure different values than we do for the length of a meter stick, the duration of an hour, and the mass of a kilogram. But when those different values are plugged into the laws of nature, the differences will all cancel, leaving the laws equally valid for all observers. In later chapters, we will explain why all this makes perfect sense.

Due to the wave nature of particles, quantum mechanics states that nature does not simultaneously determine certain quantities with unlimited precision. This results in four uncertainty equations: one for each of the three dimensions of space and one for time. Along the x-axis, the uncertainty in position, Δx, and the uncertainty in x momentum, Δp_x, cannot both be arbitrarily small at the same time; their product can never be less than a specified constant. The x equation is:

Δx × Δp_x ≥ h/4π, where h is Planck’s constant

This means Δx can become smaller only by making Δp_x larger, and vice versa. Two other equations establish the same requirement in the y and z directions. The fourth equation relates uncertainty in time, Δt, with uncertainty in energy, ΔE:

Δt × ΔE ≥ h/4π

As Feynman says: “If we wish to know more precisely when something happened we must know less about what happened, because our knowledge of the energy involved will be less.”

These uncertainties are not due to any inadequacy of our instruments or knowledge. Uncertainty is inherent in nature, and we know precisely how uncertain nature is. Again, thorough explanations will come later. We cannot explain and you cannot learn everything all at once. Even though we cannot explain it all now, we want to reveal what is coming, and convince you that we are not hiding the real story.
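The position-momentum relation is easy to evaluate numerically. A sketch: the Δx below (atomic size, 2×10⁻¹⁰ m) is an illustrative choice, and h is Planck’s constant.

```python
import math

h = 6.62607015e-34  # Planck's constant, J*sec

def min_momentum_uncertainty(delta_x):
    """Smallest momentum uncertainty allowed by dx * dp >= h / (4 pi)."""
    return h / (4 * math.pi * delta_x)

# Confining a particle to atomic size (about 2 x 10^-10 m) forces a
# momentum uncertainty of roughly 2.6 x 10^-25 kg*m/sec.
dp_atom = min_momentum_uncertainty(2e-10)
```

Halving Δx doubles the minimum Δp_x, which is the trade-off described in the text.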

Chapter 5 Review: Key Ideas

1. Time is what clocks measure; we use time to monitor change.

2. The best clocks employ atomic processes, measuring time intervals from 10^–15 seconds to 10^+10 years.

3. The distance scale and time scale of our universe each span 62 digits.

Chapter 6: Motion

Having discussed time and distance, we next turn to motion, the change in distance over time. We begin with the motion of a single point, from which we will later build up to more interesting motions of more complex objects. Eventually, we will extend these ideas to celestial dynamics and relativistic subatomic particles. But let’s start at the beginning.

Simple Motions

V1p8-1 Let’s consider a car moving in only one direction. We choose our single point of interest to be the center of mass of the car, and define the car’s position to be s=0 at time t=0. In the table below are some arbitrarily chosen values for the car’s motion, with t in minutes and s in meters. The story might be: we started slowly, hit a traffic light about 3 minutes later, then picked up speed, until a policeman pulled us over. Between t=6 minutes and t=8 minutes, we drove 3000 meters averaging 90 km per hour, or about 56 miles per hour.

t: s
0: 0
1: 350
2: 1100
3: 3000
4: 3200
5: 3300
6: 4000
7: 5500
8: 7000
9: 7200

This data is plotted in Figure 6-1, with the vertical scale being distance in meters and the horizontal scale being time in minutes. It might be nice to draw a curve connecting the points, but to be accurate we would need to know the intermediate values.

Figure 6-1 Plot of Distance versus Time
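The 90 km/hour figure quoted above comes straight from the table; a sketch of the arithmetic:

```python
# Car positions from the table: time in minutes, position in meters.
positions = {0: 0, 1: 350, 2: 1100, 3: 3000, 4: 3200,
             5: 3300, 6: 4000, 7: 5500, 8: 7000, 9: 7200}

def average_speed_kmh(t1_min, t2_min):
    """Average speed between two table times, converted to km/hour."""
    meters = positions[t2_min] - positions[t1_min]
    return (meters / 1000.0) * (60.0 / (t2_min - t1_min))

# Between t=6 and t=8 the car covers 3000 m in 2 minutes: 90 km/hour.
speed_6_to_8 = average_speed_kmh(6, 8)
```

Dividing by 1.609 km per mile gives about 56 miles per hour, as stated.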

Next, let’s consider a different motion, that of a falling ball. For simplicity, we will ignore air resistance throughout this discussion. When Galileo dropped balls of different weights from the Leaning Tower of Pisa, he discovered that they all fell at the same rate, contradicting what Aristotle and other Greek natural philosophers and their followers had believed for two millennia.

We will explore gravity in a chapter coming soon; for now please accept the following statements about gravity. Near Earth’s surface, the force of gravity is nearly the same everywhere, and the acceleration of a falling body is called 1g, which equals 9.8 m/sec^2 or 32 ft/sec^2. A body that is initially stationary and is released at t=0 will drop a distance s in time t according to:

s = g t^2 / 2
s = (4.9 m/sec^2) t^2

This is a good time to talk about units. Checking an equation’s units is a great way to gain insight and minimize mistakes. This is generally called dimensional analysis. In every valid equation, both sides of the equal sign must have the same units. The left side of the prior equation is a snap; we know that s is a distance, so measuring it in meters makes sense. On the right side, the units are (m/sec^2) × (time)^2. If we measure time in seconds, the units on the right side will be meters, just like the left side. If the right side had t instead of t^2, the units on the right would be (m/sec^2) × (sec) = m/sec, which does not match the left side and is therefore wrong.

There is no partial credit for units that are only slightly wrong; only a perfect match is acceptable. Note that we could choose to measure time in minutes. But, that would give the wrong answer: 1g = 9.8 m/sec^2 = 35,280 m/min^2, which is a much different number. The units must match exactly to get the right answer.

Units also tell us something about the physics: m/sec^2 means meters are changing with the square of time — that is fast, as we will see next.
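The minutes-versus-seconds trap is worth one numeric check; a small sketch of the conversion:

```python
g_m_per_sec2 = 9.8  # acceleration of gravity in m/sec^2

# One minute is 60 seconds, and acceleration carries time *squared*,
# so converting to m/min^2 multiplies by 60^2 = 3600, not by 60.
g_m_per_min2 = g_m_per_sec2 * 60 ** 2   # 35,280 m/min^2
```

Plugging t in minutes into s = 4.9 t² (which assumes seconds) would therefore be off by that same factor of 3600.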

Back to our experiment. Let’s drop a ball from a great height and track the motion of the point at its center. The table below lists s, the distance in meters that the ball has fallen at various times t, measured in seconds. Notice how rapidly s increases for large times, due to the t^2 dependence.

t: s
0: 00.0
1: 04.9
2: 19.6
3: 44.1
4: 78.4
5: 123
6: 177
7: 240
8: 314
9: 398

These data are plotted in Figure 6-2.

Figure 6-2 Distance Ball Drops with Time
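The tabulated distances are just s = 4.9 t², rounded to three figures; a sketch that regenerates them:

```python
g = 9.8  # m/sec^2

def fall_distance(t):
    """Distance fallen from rest after t seconds, ignoring air resistance."""
    return 0.5 * g * t ** 2

# fall_distance(3) = 44.1 m and fall_distance(4) = 78.4 m, matching the
# table; fall_distance(5) = 122.5 m, which the table rounds up.
d3 = fall_distance(3)
d5 = fall_distance(5)
```

Doubling the time quadruples the distance, which is the parabola discussed next.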

The shape of this curve is a parabola; all falling bodies follow parabolas. If we change the scale of time by a factor of 10 and change the scale of distance by a factor of 100, the curve does not change at all. The time scale would read 0, 20, 40, …100 seconds, and the distance scale would read 0, 10000, 20000, …40000 meters, but the dots on the graph would not move.

Speed & Infinitesimals

V1p8-2 In V1p8-2, Feynman says: “Even though we know roughly what ‘speed’ means, there are still some deep subtleties; consider that the learned Greeks were never able to adequately describe problems involving velocity.”

A complete understanding of speed required the development of a whole new branch of mathematics beyond geometry and algebra. To illustrate the challenge, Feynman suggests you try to solve the following problem using only algebra: The volume of an inflating balloon is increasing at 100 cc/sec (100 cubic centimeters per second); when its volume is 1000 cc, at what speed is its radius increasing? You can check your answer with mine at the end of this chapter.

Next consider a famous paradox conceived by Zeno 25 centuries ago. Without algebra, the ancient Greeks were never able to properly explain the paradox. I have changed the numbers and the characters for better graphic effect.

Let’s assume a poodle can run 8 m/sec, while a dachshund (“doxie”) can only run 4 m/sec. Zeno claimed that if the doxie had a 32m head start, the poodle would never catch him. Clearly that is ridiculous, but Zeno had a seemingly reasonable argument. He said it takes the poodle 4 seconds to run 32m to reach the doxie’s starting point, during which time the doxie has moved 16m ahead. It takes the poodle 2 seconds to run those 16m, during which time the doxie has moved another 8m. It takes the poodle 1 second to run those 8m, during which time the doxie has moved 4m ahead, and so on ad infinitum. Hence, Zeno claimed, the dachshund would always be ahead of the poodle, however so slightly. Figure 6-3 plots the poodle gradually catching up to the dachshund. I think everyone can see what is coming.

Figure 6-3 Zeno’s Paradox Will Poodle Ever Catch Doxie?

The key to this paradox, which the ancient Greeks failed to understand, is that an infinite series of terms can have a finite sum. In particular, the total distance the poodle runs to catch the doxie is 64m, which is the sum of the infinite series:

32 + 16 + 8 + 4 + 2 + … = 64

Similarly, the total time of this race is 8 seconds, which is the sum of the infinite series:

4 + 2 + 1 + 0.5 + 0.25 + … = 8
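The finite sums claimed here are easy to verify by simply adding terms; a sketch:

```python
def geometric_sum(first, ratio, n_terms):
    """Partial sum of a geometric series by direct addition."""
    total, term = 0.0, first
    for _ in range(n_terms):
        total += term
        term *= ratio
    return total

poodle_distance = geometric_sum(32.0, 0.5, 60)  # approaches 64 m
race_time = geometric_sum(4.0, 0.5, 60)         # approaches 8 sec
unit_sum = geometric_sum(1.0, 0.5, 60)          # approaches 1/(1 - 1/2) = 2
```

Each added term halves, so after a few dozen terms the partial sums are indistinguishable from 64, 8, and 2: Zeno’s infinitely many steps take only a finite time.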

Thus, 8 seconds after the race begins, the doxie has run 32m at 4m/sec, and the poodle has run 64m at 8m/sec, covering the doxie’s 32m run plus its 32m head start. The general equation for such infinite sums is:

1 + x + x^2 + x^3 + … = 1/(1–x)

A proof of this equation is at the end of this chapter. In the infinite series of Zeno’s paradox, we have B (1 + x + x^2 + x^3 + …) with x = 1/2, and some constant B.

Let’s try another conundrum: While driving on a minor road in the U.K., you are stopped by a bobby for speeding. (Hope you were driving on the left side of the road; in the U.K., the right side is the wrong side.) The bobby says you were going 60 km/hr in a 50 km/hr zone. Your feeble defense is that you could not have been going 60 km per hour because you were driving for only 10 minutes, not a whole hour, and in that time you have only traveled 10 km. He replies that at the rate you were going you would have gone 60 km in one hour. But that is impossible, you say, because the road ends in 10 km. He retorts that your speed was 1 km per minute, which is the same as 60 km/hr. You ask if there is a law in the U.K. that prohibits driving 1 km in one minute? You are not talking your way out of a citation, but it is interesting to ponder: what exactly does “a speed of 60 km/hr” really mean?

Speed is distance traveled divided by travel time. It does seem more sensible to average a car’s speed over one minute than over an entire hour. To describe your speed at one instant in time, one should average over an even shorter time interval, such as one second. Averaging over one second might be good enough for a car, but what about a ball falling from a great height? It turns out, at 9 seconds after being dropped, the ball’s speed is 88 m/sec, which increases to 98 m/sec one second later. Since that is a substantial change, we should probably compute its speed over even less than one second. The ultimate answer is to define speed in terms of the infinitesimal distance traveled during an infinitesimal time interval. 
This concept, developed independently by Isaac Newton and Gottfried Leibniz, is the basis of differential calculus, the first “new math.” Calculus is a rich and beautiful branch of mathematics that was essential to the development of physics. It allows us to deal sensibly with infinitesimal quantities and also to properly accumulate them to describe global results. You really must learn calculus to be able to properly understand physics.

Here, we can only introduce you to the limited amount of calculus needed for this course. By convention, we use the symbol ds for an infinitesimal change in distance and dt for an infinitesimal change in time. Calculus provides a precise definition of speed that we denote with the letter v:

v = limit of ds/dt, as dt → 0

This equation states: v equals the limit of the ratio ds/dt as dt goes to zero. Well-behaved ratios come closer and closer to the final value of v as we compute the ratio for smaller and smaller time intervals dt. The value v is the asymptotic limit of the ratio ds/dt.

Not all ratios are well behaved. The ratio 1/x is not well behaved as x approaches 0, because it gets ever-larger and is infinite at x=0. But the ratio (sinx)/x is well behaved as x approaches 0, because the numerator and denominator both approach 0 at the same rate; (sinx)/x equals 1 for any infinitesimal but non-zero value of x, hence the limit equals 1. (As a side note: throughout this course, and in almost all of physics, angles in equations are measured in radians, not in degrees.)

Let’s consider another example of this concept of limits: what is the speed of a falling ball 8 seconds after its release? We use the above equation relating s and t and write s(t) to denote the distance s at time t, emphasizing that s is a function of t. We compute the distance at time t and also at the infinitesimally later time t+dt.

s(t) = g t² / 2
s(t+dt) = g (t+dt)² / 2

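The contrast between a well-behaved and an ill-behaved ratio can be seen numerically. A quick sketch (Python, my illustration), with x in radians as noted above:

```python
import math

# As x approaches 0, the ratio 1/x blows up, while sin(x)/x
# settles toward a finite limit of 1 (x in radians).
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(f"x = {x:8.4f}   1/x = {1/x:10.1f}   sin(x)/x = {math.sin(x) / x:.8f}")
```

The first column of ratios grows without bound, while the second creeps steadily toward 1, its asymptotic limit.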
The infinitesimal distance traveled, ds, is the change in distance.

ds = s(t+dt) – s(t)
ds = g [ (t+dt)² – t² ] / 2
ds = g [ t² + 2t dt + dt² – t² ] / 2
ds = g [ 2t dt + dt² ] / 2
ds/dt = g [ 2t + dt ] / 2

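This algebra can be spot-checked by evaluating the ratio for shrinking dt at t = 8 seconds. A sketch (Python, my illustration, with g = 9.8 m/sec²):

```python
g = 9.8  # m/s^2

def s(t):
    """Distance fallen t seconds after release: s = g t^2 / 2."""
    return 0.5 * g * t * t

t = 8.0
for dt in [0.1, 0.001, 1e-6]:
    ratio = (s(t + dt) - s(t)) / dt   # ds/dt computed from distances
    formula = g * (2 * t + dt) / 2    # the derived g [2t + dt] / 2
    print(f"dt = {dt:8.6f}: ds/dt = {ratio:.6f}, g[2t+dt]/2 = {formula:.6f}")
# As dt shrinks, both columns approach g*t = 78.4 m/s.
```

The directly computed ratio and the derived formula agree, and both settle toward g t as dt goes to zero.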
Now, we take advantage of dt being extremely small, dt → 0. In this limit, ds/dt = g [2t + 0]/2 = g t. Eight seconds after its release, the falling ball’s speed is v = 9.8 × 8 = 78.4 m/sec.

As the ball descends on the right side, its downward velocity is largely parallel to gravity, so the power F•v > 0, and the ball accelerates as gravity does work on it. But, as the ball ascends on the left side, its upward velocity will be largely anti-parallel to gravity, so the power F•v < 0, and the ball decelerates as the work done by gravity is negative. Since each point on the descent has a corresponding point on the ascent, the total work done over one complete cycle is zero. The ball’s velocity and kinetic energy cycle up and down, while its gravitational potential energy cycles down and up, in precisely repeating cycles. This is consistent with the fact that the stationary pivot neither provides nor absorbs energy.

Some of the mathematics in the remaining sections of this chapter is more challenging than what we have done before. For those who only wish to learn the most important ideas of physics, I will place those at the start of each section and tell you how to skip the harder math. But aspiring physicists should carefully study everything that follows. Everything in this chapter is material that will help you. You will often face problems, including on exams, that require the mathematical techniques presented here. To help you better master this material, I show and explain all the intermediate steps in each derivation. No one claims this is easy, but as Navy SEALs say: “No pain, no gain.”
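The zero-net-work claim for the swinging ball can be checked numerically. In this sketch (Python; the mass, string length, and amplitude are illustrative values of my own, not the text’s), we sum F•ds over many small steps along the path:

```python
import math

# Net work done by gravity on a ball swinging from a stationary pivot.
# Pivot at the origin, string of length L, ball at (L sin th, -L cos th),
# gravity F = (0, -m g). Only vertical displacements contribute to F . ds.
# (m, L, and the amplitude theta0 are illustrative, not from the text.)
m, g, L, theta0 = 2.0, 9.8, 1.5, 0.6

def height(theta):
    return -L * math.cos(theta)  # vertical coordinate of the ball

def work_by_gravity(thetas):
    """Sum F . ds along a path given as a list of angles."""
    return sum(-m * g * (height(b) - height(a))
               for a, b in zip(thetas, thetas[1:]))

N = 100000
descent = [theta0 * (1 - k / N) for k in range(N + 1)]               # theta0 down to 0
full_cycle = [theta0 * math.cos(2 * math.pi * k / N) for k in range(N + 1)]

print(work_by_gravity(descent))     # positive: gravity speeds the ball up
print(work_by_gravity(full_cycle))  # ~0: no net work over one complete cycle
```

The descent alone gains energy m g L (1 – cos θ0) from gravity, the ascent gives it all back, and the full cycle sums to zero, consistent with the stationary pivot neither providing nor absorbing energy.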

Gravitational Potential Energy

Broadening our horizons, in V1p9-6 Feynman turns our attention to gravitation on a large scale: satellites orbiting Earth and planets orbiting stars. The first problem concerns a small body falling toward a much larger body. This might be a comet from the outer reaches of our Solar System falling toward the Sun, or an asteroid heading Earthward. Let the small body have mass m and initial velocity v=0, and assume it is located at a great distance r from the center of the much larger body of mass M. At t=0, m starts falling straight toward M, crashing into anything in its path. This situation is more complex than previous problems because the acceleration of gravity varies during the fall. Using the equation for work, Newton’s law of gravity, and knowing that force and velocity will be parallel throughout this descent, we have:

∫ dT = ∫ F • ds
∫ dT = – ∫ G M m dr / r²

Integrating from point A to point B yields:

T(B) – T(A) = + GMm (1/r_B – 1/r_A)
T(B) – GMm/r_B = T(A) – GMm/r_A

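The constancy of T – GMm/r can be verified by numerically integrating a radial fall. A sketch in Python (the Earth-like numbers and the leapfrog integration scheme are illustrative choices of mine, not from the text):

```python
# Numerical check that T - G M m / r stays constant as a small body
# falls radially toward a large mass M. SI units; values illustrative.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24     # large mass (Earth-like), kg
m = 1000.0      # small falling body, kg
r, v = 4.0e8, 0.0   # starts at rest, far away
dt = 1.0            # time step, s

def accel(r):
    return -G * M / (r * r)   # acceleration, directed inward (negative)

def invariant(r, v):
    """T - GMm/r, which the derivation says is the same at all points."""
    return 0.5 * m * v * v - G * M * m / r

E0 = invariant(r, v)
for _ in range(200000):       # kick-drift-kick leapfrog steps
    v += 0.5 * dt * accel(r)
    r += dt * v
    v += 0.5 * dt * accel(r)
print(r, v, invariant(r, v) / E0)   # ratio stays essentially 1
```

After 200,000 seconds of infall the body has picked up considerable speed, yet T – GMm/r matches its initial value to high precision, just as the boxed equality requires for arbitrary points A and B.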
Since A and B are arbitrary, this equation demonstrates that (T – GMm/r) has the same value at all points along the descent. This shows that –GMm/r is the proper general equation for gravitational potential energy.

One might question whether this result applies to any path, since our derivation assumed F and ds are parallel. Actually it does. We can subdivide any path into a series of infinitesimal vertical and horizontal steps. (By vertical I mean in the direction of F.) Our procedure is correct for every vertical step, and the horizontal steps make no contribution since on these F•ds = 0.

Let’s check this against the equation we used earlier for gravitational potential energy near Earth’s surface. Let M = Earth’s mass, r_A = R (Earth’s radius) and r_B = R + h. For h