Models of Time and Space from Astrophysics and World Cultures: The Foundations of Astrophysical Reality from Across the Centuries 9783031278907, 3031278909


English, 310 pages, 2023


Table of contents:
Introduction
Acknowledgments
Contents
1: Exploring the Earth and Sky
1.1 Polynesian Navigation and the Star Compass
1.2 Wave Maps of the Ocean
1.3 Navigating Beyond the Horizon
1.4 Navigation and Maps in the Islamic World
1.5 Islamic Astronomy and Instruments
1.6 Modern Celestial Navigation
References
2: The Geography of Earth Beyond the Horizon
2.1 Maps of the World from Across Cultures
2.2 Models for the Shape of the Earth
2.3 Edmond Halley’s Voyage and Magnetic Maps
2.4 Mapping Terra Incognita
References
3: The Geography of the Skies
3.1 What Is Space?
3.2 Egyptian Star Maps
3.3 Chinese and Asian Star Maps
3.4 Cosmic Cartography from the Islamic World
3.5 European Maps of the Stars
References
4: Early Telescopes and Models of the Universe
4.1 Johann Hevelius and His Telescopes
4.2 Christiaan Huygens and Saturn
4.3 Giovanni Cassini’s Planetary and Comet Observations
4.4 The Discovery of Parallax
4.5 James Bradley and Stellar Aberration
4.6 Descartes and Vortex Theory
4.7 Isaac Newton and the Principia
4.8 Thomas Wright and His New Hypothesis
4.9 Kant and Island Universe Theory
References
5: Early Maps and Models of Galaxies
5.1 William Herschel and His Maps of the Galaxy
5.2 Lord Rosse and Other Nineteenth-Century Reflecting Telescopes and Their Results
5.3 The Discovery of Neptune and Sirius B
5.4 The Rise of Giant Refracting Telescopes
5.5 The Lick Mount Hamilton Observatory
5.6 The Yerkes Giant 40-Inch Refractor
5.7 Percival Lowell’s Observatory
5.8 Models and Maps of Galaxies
References
6: The Discovery of the Big Bang
6.1 Building the Mount Wilson Observatory
6.2 Harlow Shapley’s Studies of the Milky Way
6.3 The Shapley-Curtis Debate
6.4 Edwin Hubble’s Discovery of the Expanding Universe
6.5 Hubble Expansion and the Hubble Constant
6.6 Models of Cosmology in Early Twentieth Century
References
7: The Nature of Light
7.1 Ideas About Light Over the Centuries
7.2 Roemer’s Speed of Light Measurement
7.3 The Transit of Venus and the Astronomical Unit
7.4 Measurement of the Speed of Light in the Nineteenth Century
7.5 The Physics of Light
7.6 Albert Michelson and the Michelson-Morley Experiment
References
8: Spacetime and Curved Space
8.1 Einstein’s Special Theory of Relativity
8.2 Worldlines and the Light Cone
8.3 Curved Space and General Relativity
8.4 The Perihelion of Mercury and Einstein’s Theory
8.5 The Solar Eclipse of 1919
8.6 The Schwarzschild Solution and Black Holes
8.7 Time Dilation in Curved Space
8.8 General Relativity and Models of the Universe
References
9: Mapping Space to the Edge of the Observable Universe
9.1 The Mount Palomar 200″ Telescope
9.2 Telescopes on Earth and in Space
9.3 GAIA Maps of the Milky Way and Astrometry
9.4 The James Webb Space Telescope
References
10: Our Light Sphere and Horizon
10.1 GPS Satellites and Modern Navigation
10.2 The Light Sphere of Earth
10.3 Planets That Can Detect Earth
10.4 JWST Imaging of Other Planets
10.5 The Horizon of the Observable Universe
10.6 Maps of the Distant Universe and Large-Scale Structure
10.7 The Cosmic Microwave Background Radiation
10.8 Early Universe and the Limits of Our Light Cone
10.9 The View of the Early Universe
References
11: The Quantum World – Physics of the Very Small
11.1 Discovery of the Fundamental Particles of Physics
11.2 Marie Curie and Her Discoveries
11.3 De Broglie’s Matter Waves
11.4 The Heisenberg Uncertainty Principle
11.5 The Schrödinger Equation
References
12: Accelerators and Mapping the Particle Universe
12.1 Fundamental Particles and Families
12.2 Accelerators as Microscopes for the Particle Universe
12.3 Peering Inside the Fundamental Particles
12.4 The Particle Zoo
12.5 The Next Generation of Particle Accelerators – Tevatron and LHC
12.6 How CERN Works
12.7 The Discovery of the W and Z Bosons
12.8 Development of the Standard Model of Particle Physics
References
13: The Higgs Boson and Models of the Early Universe
13.1 The Higgs Boson Discovery and Mass
13.2 Particle Physics of the Early Universe
13.3 Supersymmetry and Symmetry Breaking
References
14: Exploring the Invisible Universe
14.1 Discovery of Invisible Objects in Astronomy – Pluto, Sirius B, Neutron Stars, and Black Holes, and Modern Searches for Planet X
14.2 Discovery of Dark Matter
14.3 Gravitational Microlensing and 3D Dark Matter Maps
14.4 Laboratory Detection of Dark Matter
14.5 The Discovery of Dark Energy and the Physics of the Vacuum
14.6 Neutrino Detection and Experiments
14.7 Gravitational Waves and Gravitational Wave Astrophysics
References
15: Physics of the Vacuum and Multiverses
15.1 Multiverse Theory from Astrophysics – Fine Tuning and Anthropic Principle
15.2 The Ultimate Future of the Universe from Astrophysics
References
Index

Bryan E. Penprase

Models of Time and Space from Astrophysics and World Cultures The Foundations of Astrophysical Reality from Across the Centuries

Astronomers' Universe

Series Editor Martin Beech, Campion College, The University of Regina, Regina, SK, Canada

The Astronomers’ Universe series attracts scientifically curious readers with a passion for astronomy and its related fields. In this series, you will venture beyond the basics to gain a deeper understanding of the cosmos—all from the comfort of your chair. Our books cover any and all topics related to the scientific study of the Universe and our place in it, exploring discoveries and theories in areas ranging from cosmology and astrophysics to planetary science and astrobiology. This series bridges the gap between very basic popular science books and higher-level textbooks, providing rigorous, yet digestible forays for the intrepid lay reader. It goes beyond a beginner’s level, introducing you to more complex concepts that will expand your knowledge of the cosmos. The books are written in a didactic and descriptive style, including basic mathematics where necessary.

Bryan E. Penprase Soka University of America Aliso Viejo, CA, USA

ISSN 1614-659X     ISSN 2197-6651 (electronic)
Astronomers’ Universe
ISBN 978-3-031-27889-1    ISBN 978-3-031-27890-7 (eBook)
https://doi.org/10.1007/978-3-031-27890-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Introduction

He, who through vast immensity can pierce,
See worlds on worlds compose one universe,
Observe how system into system runs,
What other planets circle other suns,
What varied Being peoples every star,
May tell why Heaven has made us as we are.

Alexander Pope, An Essay on Man (1733), Epistle I, l. 23–28

Suspended as we are between the fundamental limits of knowledge of the physical universe on small and large scales, humanity is poised to have a nearly complete picture of the observable universe. Models of Time and Space from Astrophysics and World Cultures was written to explore the nature of this reality revealed by modern astrophysics and physics, as well as the evolution of conceptions of the universe across cultures and centuries. The book provides a survey of how our concepts of space and time arise from the latest discoveries of astronomy and physics and how historical perspectives on the roots of our current picture of astrophysical reality can provide valuable insights.

We explore how our physical reality, the foundation for modern astrophysics, is based upon knowledge limited by the messengers we can receive from the universe around us. This includes the views of stars, galaxies, and clusters of galaxies, which have been mapped and studied for centuries using electromagnetic waves or light, and the unseen universe of neutrinos, gravitational waves, and particles that make up the quantum universe at the smallest scales. We describe how these views have been assembled from data taken with telescopes and particle accelerators, culminating in the stunning images of the universe taken with telescopes like the JWST and the discovery of the Higgs
Boson from the LHC at CERN. This nearly complete description of the universe offers a profound shift in our view of our place in the expanding universe. It raises unanswered questions that arise from the fundamental limitations in our knowledge of the physical universe. We examine these limitations, set by relativity, quantum mechanics, and the unknown origins of the early universe, and how we can begin to answer these questions through new experiments and instruments.

Our knowledge, even in these “modern” times, is necessarily incomplete. Yet our contemplation of the unknown and unknowable can also be informed by historical and ancient conceptions of the physical universe, which worked within greater limitations of knowledge. The book has been developed from the works of leading physicists, astrophysicists, and scholars from a range of disciplines to provide an interdisciplinary foundation for a detailed exploration of the roots of modern physics and astrophysics, while highlighting the limits of knowledge that arise even with these incredible advances of science. The story of physics and astrophysics is also a human tale of courage, ambition, and genius. We share the stories of the many colorful characters that populated the centuries when the foundations of astrophysics were built. What lies beyond those limits, previously the domain of religious and spiritual traditions, is now becoming a frontier area of astrophysics and physics research, and we also discuss the long-term fate of the universe, the possibility of multiverses, and other topics that benefit from a historical and cross-disciplinary perspective. As astrophysics and physics advance, humanity is systematically mapping space out to the limits of the observable universe, from the largest scales to the smallest subatomic levels.
The fantastic progress of astrophysics includes the discovery of galaxies near the beginning of the universe, evidence of early epochs of the Big Bang from the cosmic microwave background radiation, tantalizing hints of the mysterious dark energy that pervades the universe, and the discovery of thousands of new solar systems. As our knowledge of the distant universe expands, we also approach the limits of knowledge set by the fixed speed of light, effectively constraining our view to a sphere of 13.7 billion light-years, based on the universe’s age. With its powerful infrared light-gathering capability, the new JWST can now routinely view the very first galaxies formed after the Big Bang and can explore the universe back to its first billion years. In the microscopic domain, particle physics has probed ever smaller dimensions and recreated the dense and hot conditions of the Big Bang, revealing the presence of mysterious particles such as the Higgs Boson and constraining the nature of matter, antimatter, and sub-nuclear matter such as quarks and gluons. Within the quest for an ever-deeper knowledge of the subatomic
realm comes a similar limitation to our understanding due to the quantum nature of reality, which blurs our ability to resolve the smallest scales of matter from fundamental measurement limits. Despite heroic efforts, physics has not yet detected evidence of the particles responsible for dark matter, which could arise from new particle families, but the quest continues in laboratories and particle accelerators. Recent discoveries from CERN have provided reliable experimental data to build models of space and time within the quantum world, including the detection of the particles that mediate the fundamental forces of the Standard Model.

Students and the public often perceive physics and astrophysics as simply about quantitative measurement and precision calculation of quantities irrelevant to their lives. The deeper cultural roots of astrophysical reality, and the ways in which space and time craft objective reality and our subjective experience, are typically not part of the discussion in university classes. For over 20 years, I have offered a course entitled “Archaeoastronomy and World Cosmology” at Pomona College that explored how civilization has been shaped by celestial observations and how ancient and modern cultures embody their values in cosmological models and in constellations and calendars. This inquiry was developed into a book entitled The Power of Stars (Springer, 2011; 2nd ed. 2017); both the book and the course leveraged students’ diverse cultural heritages in an introductory science curriculum to obtain a deeper understanding of astronomy.

The Roots of Astrophysical Reality is based on a new course developed at Soka University of America, entitled “Earth’s Cosmic Context,” offered to students since 2018.
The course enables students to comprehend how physics and astrophysics shape our observable universe and how the process of building a cosmic perspective creates a deeper understanding of the human condition that transcends cultures and makes us all “planetary citizens.” By combining insights from modern science with experiential and historical perspectives that include the contributions of diverse world cultures, The Roots of Astrophysical Reality offers a deeper inquiry into the nature of our physical and objective reality and how our experiences and cultural influences shape this reality. The Roots of Astrophysical Reality picks up where The Power of Stars left off by examining in deeper detail the nature of space, time, matter, and energy from a cross-cultural and scientific perspective.

Acknowledgments

This book would not have been possible without the many interactions and helpful conversations with countless students, collaborators, and colleagues over the years, who have shaped my own model of the universe by sharing their ideas in both research and teaching. The liberal arts colleges where I have worked, Pomona College, Yale-NUS College, and Soka University of America, have provided an ideal environment for articulating the broader context of our universe and the process of understanding it through articulate conversation and dialog. I am grateful to these institutions, with their curious students and adventurous faculty colleagues, for their support and help in the wide-ranging inquiry that is neither purely teaching nor purely research, but which enhances both.

Many individuals were particularly helpful in providing discussions and in checking sections of the book within their specialized expertise. I particularly would like to thank my former Pomona College colleague Tom Moore, who has written extensively about relativity at both a popular and technical level and was kind enough to carefully review those sections of the book for accuracy, providing many helpful comments and corrections. I am also grateful to Dan Marlow from Princeton University for reading the sections on quantum mechanics and particle physics and lending his expertise from decades of experimental work in particle physics in checking that part of the story. I would like to thank Steven Angle from Wesleyan University for sharing some of his pieces on Neo-Confucian thought and for helpful discussions about Taoist and Buddhist ideas on emptiness. And even though Freeman Dyson has left our world, I am grateful to him for his encouragement and delightful conversations in an earlier part of my career. His statement that “it is better to be too bold than too timid in extrapolating our
knowledge from the known to the unknown,” written within his 1979 article on the future of the universe, is helpful advice for any researcher contemplating areas outside of their immediate areas of expertise. I would also like to thank my family for their patience and support during the long hours of developing this book. This includes my two daughters, Asha and Shanti, who received many impromptu lectures on these topics growing up and inspired my work over the years, and my wife Bidushi, who read through several drafts and has reshaped my models of space and time and life itself from our many years together and with our long conversations. I would also like to thank my mother, Catherine, who as a former librarian was able to read the entire draft and provide helpful comments. November 15, 2022, Aliso Viejo, CA


1 Exploring the Earth and Sky

Even before writing existed, navigators created maps of space and time in their minds and their stories. These maps were conveyed to future generations in myth and star tales and provided a binding between generations and between the people and their environment. In star tales, the values, the environment, and ancestors came to life among the constellations, and their journeys from Earth to the stars included local landmarks as well as a model of space and time for the culture.

1.1 Polynesian Navigation and the Star Compass

In some cultures which depend on navigation for survival, such as those of Oceania – Polynesia, Hawaii, and New Zealand – navigation by stars was a critical part of their conceptions of time and space. An early explorer would face a vast ocean and see only an endless expanse of water, reaching out as far as they could see. To bring order and help survive the journey across the ocean, our ancestors developed tools, maps, and, most importantly, models of the universe that allowed them to use their imaginations to provide structures that placed them in a larger context. These ideas were conveyed in training sessions where children were regaled with tales of the ocean, the stars, the winds, and the waves, to help them find their way across the sea. The astronomical navigation techniques relied on two main methods: memorizing and locating a set of “guide stars” in the sky with known rising and setting points on the horizon, and measuring the heights of key stars above the horizon with one’s hands to get latitude. The stars used for
this technique would also include polar constellations used to locate the north or south celestial pole, such as the stars of the Southern Cross or the pole star Polaris. Both techniques can be explained with a more modern construct inherited from the Greek and Chinese cultures: the celestial sphere. The celestial sphere is a construct that places the stars on a sphere rotating above the Earth, with the poles of the celestial sphere above the Earth’s north and south poles. If one believes the Earth is stationary (as did nearly all of our ancestors, as it accords with intuition and sensory experience), the stars appear to rotate above us with the poles as their axis. As one moves about on the Earth, the celestial poles appear above the cardinal north and south directions, and the angle between one’s horizon and the north celestial pole is equal to the observer’s latitude. This principle is extremely helpful for navigation: it provides a reliable indicator of north and south headings and can also be used to measure progress in latitude during an ocean voyage (Fig. 1.1).

Fig. 1.1  Horizon coordinate system showing the locations of the north celestial pole relative to the horizon. Measuring the altitude of the pole star directly indicates one’s latitude in the Northern hemisphere. For tropical navigators, the north and south celestial poles appear close to the horizon, and their altitude can be measured more easily without a sextant. (Figure from https://commons.wikimedia.org/wiki/File:Meridian_on_celestial_sphere.png)
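The latitude principle above lends itself to a short sketch: the altitude of the celestial pole, measured in stacked hand spans, directly gives the observer's latitude. A minimal Python sketch; the degrees-per-hand-unit calibration values are commonly quoted rules of thumb assumed here for illustration, not figures from the book:

```python
# Converting hand-span sightings into latitude. The angular sizes per
# hand unit below are common rules of thumb for a hand held at arm's
# length; they are illustrative assumptions, not values from the text.
HAND_UNITS_DEG = {
    "finger": 1.0,        # width of the little finger
    "three_fingers": 5.0,
    "fist": 10.0,
    "spread_hand": 20.0,  # thumb tip to little-finger tip
}

def altitude_from_hands(units):
    """Stack hand units from the horizon up to a star and sum the angles."""
    return sum(HAND_UNITS_DEG[u] for u in units)

def latitude_from_pole_altitude(altitude_deg):
    """The altitude of the celestial pole equals the observer's latitude."""
    return altitude_deg

# Two fists stacked from the horizon up to the pole star:
alt = altitude_from_hands(["fist", "fist"])
print(latitude_from_pole_altitude(alt))  # 20.0
```

In practice a navigator's hand would be calibrated against known stars, since hand sizes and arm lengths vary from person to person.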

To measure this angle and other angles on the sky and horizon, the Polynesian navigators would use their hands. By placing their hands on the horizon and locating the star above the horizon in alignment with one of their fingers, they could measure their latitude. Using one’s hands as a sextant and navigation tool provided a robust and reliable mechanism for traveling thousands of miles across the oceans, between Tahiti and Hawaii, and across the many islands in the Pacific.

The second principle is that near the equator, the stars and the Sun and Moon rise and set nearly vertically from the horizon, providing reliable headings based on their azimuths. By memorizing a set of star-rising points, the navigator can set a course in precise directions and hold that course as long as some of the stars are visible during the evening, or as long as the Sun or Moon is relatively close to the horizon. This principle gives rise to what is sometimes called the “star compass,” which is not a physical instrument but a system of knowledge carried in the ship with the navigator and transferred through the generations by oral tradition (Figs. 1.2 and 1.3).

Fig. 1.2  Example of using hand positions to measure angles in the sky – as such, the hand becomes a powerful navigational tool for measuring altitudes of stars and was a part of the traditional Hawaiian and Polynesian navigation system. (Adapted from Low, 2020, p. 141)

Fig. 1.3  Micronesian Star Compass – the positions of stars on the horizon were used to set courses for navigators from Micronesia. This knowledge was passed on through oral tradition across hundreds of generations. (Adapted from Low, 2020, p. 192)

Each of the Oceanic cultures had variations on the star compass particular to their language and tradition. In addition to the Micronesian star compass shown above, other cultures had their own versions. Howe (2006) describes several variations: the system of Satawal Atoll in the Caroline Islands uses the rising and setting points of the Southern Cross in addition to a set of 16 stars to indicate 32 compass headings; the Tahitian system uses the rising and setting points of 11 stars, giving 23 compass headings; the Bugis of Indonesia used a star compass that indicated 22 rising and setting locations to help navigate between 16 key directions; and the Hawaiian system included both a star compass and a “wind gourd” used to locate the directions of winds and to help travelers know more about the will of Raka, the god of winds (Polynesian Voyaging Society, 2022).

Additional precision for navigation can come from knowledge of pairs of stars, which provide an additional angle of reference that changes as one moves across the Earth. The pair of stars in Gemini, Castor and Pollux, for example, will rise and set simultaneously on the horizon only at a particular latitude of 18 degrees (Low, 2020). Other pairs of stars can be seen together on the horizon at different latitudes. This detailed star knowledge gave the navigator tools for precise wayfinding. The advanced navigator would immediately recognize where multiple bright stars rise and set as a function of latitude, providing useful waypoints marking progress in voyages across the ocean. Navigators would also use constellations to provide a scale of angular sizes, such as the height of the stars in the Southern Cross (6 degrees), to help provide more accurate estimates of star positions relative to the horizon.

Often the stars would be incorporated in tales that celebrated ancestors and helped solidify both the memory of those ancestors and the constellations. For example, the Hawaiians recognize the stars within the Great Square of Pegasus as Ka Lupe o Kawelo, which celebrates the ancestor Kawelo, the Ali’i or ruler of the island of Kauai. The four stars within the square represent both Kawelo’s four greatest ancestors and the four main Hawaiian islands and were recounted in poetry and chants (Polynesian Voyaging Society, 2022).

In addition to the horizon positions of stars, islands throughout the Pacific have different stars which appear directly overhead at each location. Using the celestial sphere construction described above, a star will pass directly over a given place on Earth if the star’s declination (something like a celestial latitude) is equal to the latitude of that location. This fact is a valuable tool for determining latitudes by way of a group of zenith stars, which map to the latitudes of key islands in the Pacific.
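Both geometric facts, that rising points supply compass headings and that a star culminates at the zenith when its declination matches the observer's latitude, can be sketched with standard spherical-astronomy relations. This is an illustrative sketch, not the navigators' method: the rising-azimuth formula cos A = sin(dec)/cos(lat) assumes an ideal sea horizon and ignores atmospheric refraction, and the declination used for Arcturus is an approximate modern (J2000) value.

```python
import math

def rising_azimuth(dec_deg, lat_deg):
    """Azimuth of a star's rising point, in degrees east of true north,
    for an ideal horizon: cos(A) = sin(dec) / cos(lat)."""
    cos_a = math.sin(math.radians(dec_deg)) / math.cos(math.radians(lat_deg))
    if abs(cos_a) > 1.0:
        return None  # the star never crosses the horizon at this latitude
    return math.degrees(math.acos(cos_a))

def passes_overhead(dec_deg, lat_deg, tolerance_deg=1.0):
    """A star culminates at the zenith when its declination matches
    the observer's latitude."""
    return abs(dec_deg - lat_deg) <= tolerance_deg

# A star on the celestial equator (dec = 0) rises due east at any latitude
print(rising_azimuth(0.0, 10.0))  # 90.0

# Arcturus (dec ~ +19.2 deg, approximate J2000 value) passes overhead
# near the latitude of the Hawaiian islands (~19-20 deg N)
print(passes_overhead(19.2, 19.5))  # True
```

In the tropics the rising azimuth changes only slowly with latitude, which is part of why memorized rising points make such a stable compass for equatorial voyagers.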
Examples include the star Arcturus (Hōkūle'a, or “star of joy” in Hawaiian), which appears directly over the Hawaiian islands, and Antares, which appears directly above the Tahitian islands. These zenith stars were encoded in myth and song, with one Tahititi song celebrating the “star pillars” that hold up the dome of the sky. One navigator from Tonga describes these stars as a fanakenga or “overhead stars,” which may have played a role in early chapters of navigation before the star compass became the dominant mechanism for navigation (Huth, 2015a, p. 158). All of the star indicators above can be combined with island star pairings to acquire headings near land and are aided by careful attention to the speed of a boat based on sound and wind. These indicators together give rise to a navigation technology that provides constant clues to one’s location and speed, which is compared with a map of time and space in the navigator’s mind. For the Caroline Islanders, this system of sighting and adjusting to


B. E. Penprase

Fig. 1.4  Wave diffraction and refraction patterns were part of the repertoire of ancient oceanic navigators from Polynesia. By detecting shifts in the shapes and frequency of waves, the navigator could sense the presence of islands far beyond visual range. This image shows the diffraction of waves around Año Nuevo Island in California. (Figure from https://en.wikipedia.org/wiki/A%C3%B1o_Nuevo_Island#/media/File:A%C3%B1o_Nuevo_Island_off_A%C3%B1o_Nuevo_South_Point.jpg)

compensate for the speed of the boat and wind was known as the etak system (Howe, 2014a, p. 168). Still more subtle cues of wind and wave were part of the navigators' toolkit. From long practice, the presence of land near or beyond the horizon was known to distort clouds and wave patterns. Some of the cues were meteorological and arose from interactions of land and sky. Others were the result of physics, which causes a set of waves to diffract around land masses, creating interference patterns far from the land mass that the well-attuned navigator could discern (Fig. 1.4). One example is the technique used by Marshall Islanders, who could sense the presence of islands based on a “certain joining of the waves” from this interference. They supplemented their star compass with a mattang chart to help teach the principles of wave refraction and interference (Howe, 2014b, p. 176). They used a technique of “wave piloting,” which noted the patterns of waves that connect islands. The feel of waves arriving at the boat gives clues to the presence of distant islands,

1  Exploring the Earth and Sky 


which cause the waves to interfere in ways that produce recognizable patterns. The Marshallese used the word booj, or “knot,” to denote the distortions of waves diffracted around pairs of islands and followed these patterns to reach islands beyond the horizon (Huth, 2015b, p. 312).

1.2 Wave Maps of the Ocean

Our ancestors constructed detailed pictures of the open ocean and the geography of distant islands using not only the stars but also keen observations of waves and their patterns. Even though the theory of diffraction would be devised many centuries later and thousands of miles away, Polynesian navigators used the diffraction of waves from distant islands as part of their repertoire of navigational aids. Waves on the open ocean appear to most observers as periodic swells or random chop. In the patterns between those extremes, the navigators could read the presence of distant islands that cause waves to bend and interfere with one another. A small island, when impacted by waves, will cause the waves to bend around it, and these bent waves interfere with the rest of the waves and create telltale signals of the island's presence – often hundreds or thousands of miles away. Often an island will bend – or refract – the wave swell, due to the differing travel speeds of waves at different depths. In other cases, in the deeper ocean, the waves will produce a complex pattern of peaks and valleys, producing what later physicists would describe as a “single source diffraction field,” and the navigators could discern these peaks (Fig. 1.4). Another sign for navigators is the change in wave heights and shapes as waves approach the shore. The decreasing depth of the water causes the waves to pile up and sharpen, and the keen navigator can discern from the shape of the waves the hidden contours of the bottom below – even in deep fog. This phenomenon is known as “wave shoaling.” In locations close to an island, the phenomenon of “wave shadowing” will also reveal a hidden island ahead. Navigators from the Marshall Islands have been able to use “wave piloting” by describing the shapes of waves with terms like jur in okme – which represents a V-shaped tool that the Marshallese would use to gather food.
Different patterns of waves were described with names like dilep, meaning “backbone” or “spine,” which denoted the telltale diffraction peaks that connect islands (Huth, 2015b, p. 312) (Fig. 1.5).
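The refraction described above has a simple physical basis that can be quantified: in water much shallower than a wavelength, wave speed depends only on depth, c = √(g·d), so waves slow over shoals and the wavefronts bend toward shallow water. A minimal sketch with illustrative depths (not measured data):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def shallow_water_speed(depth_m):
    """Phase speed of a long ocean wave in water of the given depth."""
    return math.sqrt(G * depth_m)

# Waves slow as the bottom shoals, so wavefronts bend toward shallow water --
# the refraction around islands and reefs that wave pilots learned to read.
for d in (100, 30, 10, 3):
    print(f"depth {d:>4} m -> speed {shallow_water_speed(d):5.1f} m/s")
```

The speed gradient is what makes shoaling waves pile up and sharpen: the front of a wave group runs into slower water first, compressing the wave.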


Fig. 1.5  Another example of wave navigation can be found in the changes in wave shape and height due to a nearby coast. The presence of shallow water would change the face of the waves, and the navigator could use this to detect the approach of land – even in fog and bad weather. (Image from NASA, https://eoimages.gsfc.nasa.gov/images/imagerecords/149000/149818/maalaeasurf_oli_2018276_lrg.jpg)

1.3 Navigating Beyond the Horizon

The horizon, for both ancient navigators and modern astrophysicists, is the limit to which we can see – and throughout the ages, what lies beyond the horizon has been a source of intense interest and curiosity. Within East and West Polynesia, the notion of “Hawaiki” describes the mix of religious awe and geographic discovery that accompanied finding a new island beyond the horizon. Newly discovered lands were first given names of known places, and these were temporary labels “for the spiritual threshold between creation and reality.” The idea was that geographic and spiritual origins were coincident. The Tahitians gave the similar-sounding name Havai'i to the original homeland of the ancestors, which was also the destination of the deceased, beyond the horizon (Howe, 2014c, p. 49). Crucial to the model of space and time that Polynesians constructed was the active agency of wind and cloud. As we know, “trade winds” move across the Earth in predictable directions and are formed from large convection cells dividing the Earth into latitude zones bounded at roughly 30 and 60 degrees of latitude. The boundaries of these convective cells are places where the air typically rises or falls, such as at the equator (which often has rising air masses that convert water vapor into massive


thunderstorms) and at 30 degrees latitude (where drier air falls back to Earth, which is why this band coincides with many of the deserts on Earth). In both regions – at the equator and near 30 degrees latitude – sailors confront “doldrums,” where little horizontal air motion makes sailing difficult. Moving north or south of these areas brings the navigator into zones of trade winds, which blow either east or west depending on which cell they are associated with and which side of the equator they arise in. There are exceptions to these patterns, and the presence of a landmass amid an ocean will interrupt the usual progress of winds and provide telltale cloud patterns and disruptions to the normal patterns of trade winds and doldrums. The savvy navigators of Oceania were aware of these patterns and created “wind compasses,” which matched the prevailing patterns at their home islands. One example is the Anutan wind compass, which incorporates the trade winds that the Anutans call tonga. The Anutans know these winds as blowing from the southeast, from the direction of the island of Tonga, which lies to the southeast of Anuta (Huth, 2015c, p. 287). These techniques, which combined observations of stars, Sun, wind, and currents with a mental map of time and space in the Pacific, successfully guided ships across thousands of miles over many centuries.

1.4 Navigation and Maps in the Islamic World

Across the Earth, Arabian and Persian navigators and astronomers worked to develop techniques for navigating the vast oceans of desert land and for regulating activities on Earth with advanced knowledge of the stars. Both Persian and Arabic astronomers enthusiastically translated Greek astronomical works and extended them with tables of latitudes and of planetary motions. The timing of prayers and the beginning of Ramadan and other lunar months was also tied to Moon watching and stellar observations. Because all Muslims had to determine the qibla, the direction of Mecca, for their prayers, astronomical tables and instruments rapidly advanced and proliferated. Starting in the eighth century, tables known as zij were produced, often blending observations and ideas from Indian astronomy with new data from local astronomers. Ptolemy's work the Syntaxis, which incorporated the most advanced concepts of Greek astronomy, was translated into Arabic by the ninth century and named al-majisti (“the majestic”), which subsequently became known in Europe as the Almagest (N. Campion, personal communication, 2009).
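Determining the qibla is at heart a spherical-trigonometry problem: the initial bearing of the great circle from the observer to Mecca. A minimal modern sketch, using present-day coordinates for Mecca (the medieval zij solved the equivalent problem with tables rather than this closed form):

```python
import math

MECCA_LAT, MECCA_LON = 21.42, 39.83  # degrees (modern coordinates)

def qibla_bearing(lat_deg, lon_deg):
    """Initial great-circle bearing from (lat, lon) toward Mecca,
    in degrees clockwise from true north."""
    p1 = math.radians(lat_deg)
    p2 = math.radians(MECCA_LAT)
    dlon = math.radians(MECCA_LON - lon_deg)
    # Standard forward-azimuth formula, with cos(p2) divided out.
    x = math.cos(p1) * math.tan(p2) - math.sin(p1) * math.cos(dlon)
    bearing = math.degrees(math.atan2(math.sin(dlon), x))
    return bearing % 360

# From Baghdad (~33.3 N, 44.4 E) the qibla points roughly south-southwest.
print(round(qibla_bearing(33.31, 44.36)))
```

The same computation, tabulated for many cities, is what made the qibla tables in the zij possible.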


Ptolemy's astronomical ideas included the well-known geocentric model of the universe, with Earth at the center of a series of nested spheres for each of the seven luminaries, arranged in order of distance as Moon, Mercury, Venus, Sun, Mars, Jupiter, and Saturn. Beyond all of these lay the sphere of the stars, which represented an unchanging region of purely circular motion. The data that Ptolemy and his colleagues compiled accounted for the decidedly non-circular apparent motions of the planets, which included both retrograde loops (resulting from the Earth's movement past the outer planets) and variations in speed (resulting from the eccentric, elliptical orbits of the planets about the Sun). By combining the uniform circular motion of each planet around the Earth (the deferent) with smaller reverse circular motions (the “epicycles”), and by placing some of these circular motions off-center from the Earth, Ptolemy was able to duplicate the observed apparent motions of the planets to within the precision of the observations of his day. However, the refinements needed to keep the geocentric model aligned with improved data were vexing and required multiple nested spheres moving in different directions for each planet. By the time of Ptolemy, the number of these spheres had increased from the 15 to 20 of Aristotle's time to a total of 53 nested spheres. Ptolemy dutifully cataloged the combinations of deferents, eccentrics, and offsets needed to recreate the solar system's observed motions, and the Arabic and Persian astronomers gratefully adopted the model and added new and more precise astronomical observations to refine it. Ptolemy's works also included a set of the best maps of the ancient world, and his Geography was likewise translated into Arabic in the ninth century. Tables of geographic coordinates, as well as Ptolemy's maps, were adopted into Arabic mapmaking.
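The deferent-and-epicycle construction described above can be sketched numerically. The radii and angular speeds below are illustrative values chosen to produce retrograde loops, not Ptolemy's actual parameters for any planet:

```python
import math

def apparent_longitude(t, R=1.0, r=0.5, w_def=1.0, w_epi=5.0):
    """Geocentric longitude of a planet riding an epicycle of radius r whose
    center moves around the deferent (radius R) at angular speed w_def."""
    x = R * math.cos(w_def * t) + r * math.cos(w_epi * t)
    y = R * math.sin(w_def * t) + r * math.sin(w_epi * t)
    return math.atan2(y, x)

def retrograde_times(n_steps=2000, dt=0.01):
    """Times at which the apparent longitude decreases, i.e. retrograde motion."""
    ts = [i * dt for i in range(n_steps)]
    lons = [apparent_longitude(t) for t in ts]
    retro = []
    for prev, cur, t in zip(lons, lons[1:], ts[1:]):
        # Wrap the step to (-pi, pi) so crossings of +/-pi are handled.
        delta = (cur - prev + math.pi) % (2 * math.pi) - math.pi
        if delta < 0:
            retro.append(t)
    return retro
```

Running `retrograde_times()` returns a non-empty list: when the planet is on the inner part of its epicycle, the epicyclic motion outruns the deferent and the apparent longitude temporarily runs backward, which is exactly the retrograde loop the model was built to reproduce.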
The surviving Arabic text is known as Kitāb ṣūrat al-arḍ (literally “Picture of the Earth”), the Arabic counterpart of the Geography, and it includes many of the names and places mentioned in Ptolemy, even though there is some debate about whether the entire text was used for translation (Ducene, 2011). In astronomy and cartography, Arabic and Persian astronomers pushed Ptolemy's ideas further with extensive and refined data. The Islamic mathematician Abu Rayhan Muhammad ibn Ahmad al-Biruni (c. AD 973–1048) used a novel technique to measure the size of the Earth by viewing mountains in the Punjab region from a distance at which they appeared slightly below the horizon. By combining a measurement of the height of a mountain, determined through triangulation on a known baseline, with the apparent “dip angle” between the top of the mountain and the horizon, al-Biruni was able to measure the Earth's radius with great precision. With some assumptions about the


units of al-Biruni's data, the radius measurement was 6339.6 km (within 1% of the modern value), a precision not matched in the West until the sixteenth century. Further data on the Earth's geography was provided by the Arab scientist al-Zarqali, who produced the Toledan Tables, which included precise measurements of the latitude of over 50 locations across the known world. These measurements were obtained from precise shadow measurements combined with knowledge of the Sun's place on the celestial sphere, and they allowed maps of the world accurate to within 1.4 degrees, with the location of the equator fixed to within 0.25 degrees (Huth, 2015d, p. 473).
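The dip-angle method reduces to one line of geometry: from a summit of height h, the line of sight to the horizon is tangent to the sphere, so cos(dip) = R/(R + h), giving R = h·cos(dip)/(1 − cos(dip)). A quick check with illustrative numbers (not al-Biruni's own measurements, whose units are uncertain, as the text notes):

```python
import math

def earth_radius_from_dip(height_m, dip_deg):
    """al-Biruni's construction: from a peak of height h, the true horizon
    lies a small angle (the 'dip') below the horizontal, and geometry gives
    cos(dip) = R / (R + h)."""
    c = math.cos(math.radians(dip_deg))
    return height_m * c / (1 - c)

# Illustrative: a ~305 m summit with a dip of about 0.56 degrees
# implies a radius close to the modern value of 6371 km.
print(earth_radius_from_dip(305, 0.56) / 1000)  # roughly 6385 km
```

The formula shows why the method is so sensitive: the dip is small, so 1 − cos(dip) is tiny, and an accurate angle measurement was essential to al-Biruni's success.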

1.5 Islamic Astronomy and Instruments

Islam places a high priority on observations of the stars, and the mandate for astronomical work can be found in the Koran, which states in one passage, “It is He who has appointed for you the stars, that you might be guided by them in the darkness of the land and sea” (Savage-Smith, 1992). Since the Hijrah of Muhammad in AD 622, Islamic astronomers charted, observed, and transmitted the positions of stars and planets to aid others in navigation and in orientation toward Mecca for their prayers. The Islamic astronomers were especially inventive in developing portable guides to the skies in the form of the astrolabe, which became a high art in the Islamic world. The astrolabe was typically made of the finest brass metalwork and carried elaborate inscriptions. A backing plate was inscribed with key celestial markers and features of the celestial sphere and was customized for the location at which the instrument was to be used. The front of the device is a movable set of pointers to star positions, known by the Latin word rete (“net”), which can be dialed to show the parts of the skies visible at any given time or date. The astrolabe can even be used as an observing tool to measure the altitudes of stars for determining latitude in navigation. Combinations of celestial and ecliptic coordinates can be measured with a set of offset circles that inscribe the lines of declination and the location of the Zodiac and its constellations (Figs. 1.6 and 1.7). Islamic astronomers of the Ottoman empire used other instruments, such as quadrants and cross-staffs, as well as complete spherical models of the celestial sphere. Such instruments were necessary for measuring the skies accurately and providing data useful for the timing of prayers and the determination of the qibla that orients worshippers toward Mecca.
By the fourteenth century, more sophisticated astrolabes allowed the user to view star positions at any latitude by use of a “universal astrolabe” that


Fig. 1.6  The design of an astrolabe includes a backing plate customized for the latitude of observation, which can be used to locate stars based on the time of the observations. On the backing plate, the astronomer can find the horizon, with a dense curved grid showing lines of equal altitude and azimuth for star observations and a set of radial lines marking the hours. (Image from https://upload.wikimedia.org/wikipedia/commons/f/f7/One_plate_of_an_astrolabe._Wellcome_M0017327EB.jpg)

included a grid of star positions that can be dialed in against a movable horizon, adjusted to the observer's latitude (Fig. 1.8).
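The curved grids on an astrolabe plate follow from the instrument's underlying mapping: a stereographic projection of the celestial sphere from the south celestial pole onto the plane of the equator (the usual Northern-hemisphere convention). A minimal sketch of the projection radius as a function of declination, normalized to the radius of the equator circle on the plate:

```python
import math

def plate_radius(declination_deg, r_equator=1.0):
    """Stereographic projection used on astrolabe plates: a point on the
    celestial sphere at the given declination, projected from the south
    celestial pole onto the plane of the equator, lands at this radius."""
    return r_equator * math.tan(math.radians(90 - declination_deg) / 2)

# The three classic circles engraved on most plates:
for name, dec in [("Tropic of Cancer", 23.44), ("Equator", 0.0),
                  ("Tropic of Capricorn", -23.44)]:
    print(f"{name:20s} r = {plate_radius(dec):.3f} x equator radius")
```

Because stereographic projection maps circles on the sphere to circles on the plane, every altitude and azimuth line on the plate could be drawn with a compass, which is what made the astrolabe practical to engrave.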

1.6 Modern Celestial Navigation

As Europeans began long journeys to Africa, the Americas, and India, they developed several new technologies to help them navigate. These include the sextant and nautical clocks, which could locate a ship on the Earth with increasing precision. Many of these technologies also made their way into European constellation maps of the skies visible from the Southern hemisphere, which soon were cluttered with navigational and astronomical instruments. Examples include the constellations of Sextans (“sextant”), Antlia (“the pump”), Telescopium, Reticulum (“reticle”), Horologium (“clock”), Vela (“sails”), Norma (“level” or carpenter's square), Pyxis (“compass”), Octans (“octant”), and Microscopium (“microscope”). These navigational instruments made their way into the constellations newly viewed by European


Fig. 1.7  Astronomers from the Islamic world made the astrolabe a high art, and the device could be used to sight stars for navigation and measure time during voyages using the movable rete, which rotates on top of the backing plate and includes pointed features that indicate key stars in the sky. (https://commons.wikimedia.org/wiki/File:MHS_51182_Astrolabe.jpg)

navigators and persist in modern astronomical practice, despite the rich astronomical traditions of Southern hemisphere cultures in Australia, Africa, and South America (Fig. 1.9). The basics of celestial navigation are easy enough to understand and are available to anyone with access to the night sky, a star chart, and a clock. As mentioned earlier, the altitude of the celestial pole above the horizon equals the observer's latitude. By using a sextant to sight the north star Polaris, or to measure the “altitude” (angular height above the horizon) of known stars, the navigator can determine latitude precisely. The Sun can often be used for sightings as well, and the Sun's elevation can be compared with the ship's chronometer to determine both latitude and longitude. The sextant provides a “split screen” optic that allows the observer to view the horizon and


Fig. 1.8  Arabic astronomers made use of the cross-staff and quadrant to help with their observations. This illustration, from Andreas Ferrara's 1493 Latin translation of al-Farghani's ninth-century Elements of Astronomy, shows the astronomer at work, as well as the vital contribution of the Islamic world to advancing the mapping of the Earth and skies. (From https://www.loc.gov/resource/rbc0001.2013vollb99507/?sp=7, https://www.loc.gov/resource/rbc0001.2013vollb99507/?sp=8)

the star simultaneously while adjusting a dial to bring the two views of the horizon and celestial object into convergence. When this is done, it is possible to precisely measure the star’s height (or the height of the Sun or Moon) above the horizon, which is compared with star tables that give predicted heights as a function of time for a given latitude (Figs. 1.10 and 1.11). More precise navigational positions can be determined by recording the altitudes of multiple celestial objects and then combining the observations on a chart that will place each sighting in the form of a line, which can be drawn with a compass or ruler. Each sighting produces a locus of points on the map, but the intersection of the sighting lines can locate the ship precisely in both latitude and longitude. A sextant sighting of the Sun, using a shade and viewing the Sun and horizon, constrains the location to a circle on the map, referred to as the equal-altitude line of position. The second observation of a


Fig. 1.9  Star chart centered on the South Celestial Pole, using European constellation labels, which feature navigational instruments such as the sextant (near the pole at center), triangle, and telescope. From Burritt, Atlas Designed to Illustrate Burritt's Geography of the Heavens (1856). (Image from https://www.loc.gov/resource/g3180m.gct00292/?sp=5)

star or planet allows the navigator to find the intersection of the two circles on the map, which locates the position to within just a few miles. This technique was improved by Captain Marcq St. Hilaire in 1875, who recognized that assuming an approximate location and comparing the observed altitude with the altitude calculated for that location would determine the offset from the initial guess. By using multiple calculated and measured altitudes, the position can be determined by the intersection of small arcs on a map (Davidson, 2022). All modern navies use high-technology satellite and electronic navigation but can still fall back on the classic tradition of sighting stars, the Sun, and planets with a sextant and drawing lines on charts. Celestial navigation needs no electricity and is also a wonderful connection with the ancient navigators of the past, who similarly steered by the Sun, stars, and planets. And as the need for celestial navigation increased, star maps incorporating the latest knowledge of the constellations and the universe were developed, which we discuss in the next chapter.
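The core calculation behind these sights, for both altitude tables and the St. Hilaire intercept method, is the standard sight-reduction formula relating the observer's latitude, the body's declination, and its local hour angle to the body's altitude. A minimal sketch (function and variable names are mine, not from the text):

```python
import math

def calculated_altitude(lat_deg, dec_deg, lha_deg):
    """Standard sight-reduction formula: the altitude of a body of the given
    declination, seen from the given latitude, at local hour angle lha."""
    lat, dec, lha = (math.radians(v) for v in (lat_deg, dec_deg, lha_deg))
    sin_hc = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(lha))
    return math.degrees(math.asin(sin_hc))

def intercept_nm(observed_alt_deg, lat_deg, dec_deg, lha_deg):
    """St. Hilaire intercept: observed minus calculated altitude, in arcminutes.
    One arcminute of altitude corresponds to one nautical mile on the chart;
    positive means the true position lies toward the body's azimuth."""
    hc = calculated_altitude(lat_deg, dec_deg, lha_deg)
    return 60 * (observed_alt_deg - hc)
```

Each sight yields one intercept along the body's azimuth from the assumed position; crossing two or three such position lines fixes the ship, which is the "intersection of small arcs" described above.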


Fig. 1.10  A modern sextant provides a split view of the horizon and an astronomical object and includes a dial for measuring the precise altitude of the object in the sky. (From https://en.wikipedia.org/wiki/Celestial_navigation#/media/File:Marine_sextant.svg)


Fig. 1.11  An example of a naval sextant from 1871 showing the compact arrangement of filters that make Sun sightings possible and a range of eyepieces to alter the magnification of the telescope for sightings of stars and planets. (From https://www.history.navy.mil/content/history/nhhc/our-collections/artifacts/Navigation/Sextants/nhhc-1967-415-aa-sextant.html)

References

Campion, N. (2009). Persia: Time as deity (Course notes for history of Western astrology).

Davidson, R. (2022). Understanding celestial navigation. https://pbps.org/celesnav.html

Ducene, J.-C. (2011). Ptolemy's geography in the Arabic-Islamic context. In Cartography between Christian Europe and the Arabic-Islamic World, 1100–1500. Brill.

Howe, K. R. (2014a). Vaka moana: Voyages of the ancestors: The discovery and settlement of the Pacific (p. 168). University of Hawai'i Press.

Howe, K. R. (2014b). Vaka moana: Voyages of the ancestors: The discovery and settlement of the Pacific (p. 176). University of Hawai'i Press.

Howe, K. R. (2014c). Vaka moana: Voyages of the ancestors: The discovery and settlement of the Pacific (p. 49). University of Hawai'i Press.

Huth, J. E. (2015a). The lost art of finding our way (p. 158). The Belknap Press of Harvard University Press.

Huth, J. E. (2015b). The lost art of finding our way (p. 312). The Belknap Press of Harvard University Press.


Huth, J. E. (2015c). The lost art of finding our way (p. 287). The Belknap Press of Harvard University Press.

Huth, J. E. (2015d). The lost art of finding our way (p. 473). The Belknap Press of Harvard University Press.

Low, S. (2020). Hawaiki rising: Hōkūleʻa, Nainoa Thompson, and the Hawaiian renaissance. University of Hawaiʻi Press.

Polynesian Voyaging Society. (2022). Hawaiian star lines. https://www.hokulea.com/education-at-sea/polynesian-navigation/polynesian-non-instrument-wayfinding/hawaiian-star-lines/

Savage-Smith, E. (1992). Celestial mapping. In J. B. Harley & D. Woodward (Eds.), The history of cartography. University of Chicago Press.

2 The Geography of Earth Beyond the Horizon

Our first models of the universe centered on the Earth, the realm of all human exploration, and informed the layout of detailed maps surrounding the locales of the explorers and routes of interest, with the periphery only sketched in. In many cases, the “edge” of the world was shown surrounded by forbidding sea monsters and other creatures. Just as modern astrophysics maps the edges of the known universe with JWST, ancient cartographers used the best knowledge of their time to represent their “universe,” which was generally centered on the home country of the cartographer.

2.1 Maps of the World from Across Cultures

Ancient Chinese explorers conducted extensive journeys to the surrounding countries that provided information for their world maps. These journeys include the Hsi Yu Lu (Record of a Journey to the West) by the Chinese astronomer and explorer Yehlu Chhu-Tshai, who traveled with the Mongol emperor Genghis Khan on an expedition to Persia between 1219 and 1224 AD. The famous mariner Zheng He traveled on multiple expeditions between 1405 and 1433 along trade routes that crossed to the west coast of India, to the Arabian Peninsula, and along the Horn of Africa to Zanzibar (Fig. 2.1). Additional journeys to the north enabled mapping of the extensive Chinese river system, which had been accurately surveyed since the second century BC and provided data for accurate maps of China and beyond. The primary interest of Chinese mapmakers was contained within the borders of China, however, and their maps include only limited detail of the larger world.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_2

The great Yu Ji Tu


Fig. 2.1  (Left) The Yu Ji Tu map of the Chinese river system in Western China, dating from 1137 AD. (Right) A map of China from 1743 shows more precision and detail, labeling the various cities, villages, and mountains, with the Great Wall of China visible at the top of the map. (https://commons.wikimedia.org/wiki/Category:Yu_Ji_Tu#/media/File:Yu-ji-tu-map.jpg and https://digital.bodleian.ox.ac.uk/objects/fe07c78b-4d60-490d-84e9-201c996a5f7d. Photo: © Bodleian Libraries, University of Oxford. Terms of use: CC-BY-NC 4.0)

map was carved into a large stone, 0.9 meters square, in 1137 AD and accurately depicted Chinese rivers (Ronan, 1985a, p. 266). The earliest European maps of the world employed a stylized “T” design that was less about conveying geography and more about providing an inventory of the known continents and oceans, with Asia at the top and Jerusalem at the center. These maps were usually drawn with the East at the top and often located Eden within Asia in the top half of the map. The “T” shape located the continents of Europe, Africa, and Asia, with Asia at the top and Europe at the bottom left. The Mediterranean Sea provided the vertical stroke of the “T,” and the Nile River and the Don River provided the horizontal stroke. The edge of the map often depicted an ocean encircling the Earth toward the horizon. The remaining region of the Earth, not depicted, was a vast uninhabited region known as the “antipodes.” These maps are often called “Isidoran” maps in honor of the seventh-century scholar Isidore of Seville. The design of cosmological charts often included this “T” geography, with the round Earth placed at the center of the geocentric universe, surrounded by the spheres of the various planets, Sun, and Moon (Fig. 2.2).


Fig. 2.2  (Left) Eleventh-century map of the world based on a copy of Isidore of Seville, showing the standard “T” shape used in medieval European maps. (Right) Map of the universe from a 1482 edition of Sacrobosco's Sphaera associated with Johannes Regiomontanus. The cosmology prevalent in the medieval world placed the T-map of the Earth at the center, surrounded by the “sublunary” sphere (with fire, air, and water) and the unchanging spheres of the planets, Sun, and Moon centered on the Earth. (Images from Wikimedia Commons and Claremont Colleges Special Collections, respectively. The image source on the left is https://commons.wikimedia.org/wiki/File:Western_Manuscript_372_Etymologies._Wellcome_L0024508.jpg)

Islamic cartographers also placed their civilization at the center of the world and included an encircling sea and many details of their known world, which spanned from the equator to the Baltic Sea and included the Atlantic Ocean and more of the Eurasian continent. The first Islamic world maps had a layout similar to the European “T” maps, but with India and China placed in the East and Northern Europe (including much of Siberia) spanning the top of the map. One example is from the 1283 AD work by Zakariya' al-Qazwini, with a title that translates as “wonders of creation,” which was copied by various later cultures. Another example is from al-Idrisi's 1152 work, with a title that translates to “book of pleasant journeys into faraway lands” (Fig. 2.3). Korean world maps included some of the same stylings as the Chinese and Islamic maps, with a circular border and a sea encompassing the known world. These Korean maps are known as Cheonhado, which translates to “all under heaven.” The map is centered on Mt. Meru and represents Korea and China as part of the central land mass. Some of the maps, such as the figure below, include details like the Yellow River and the Great Wall of China (Fig. 2.4).


Fig. 2.3  (Left) Sixteenth-century manuscript that features an Islamic version of the world map, with the Arabian and Persian regions detailed in the center and lower left and Eurasia spanning the top of the map. (Right) Detail from a 1553 manuscript entitled Tabula Rogeriana, which includes a map of the world from al-Idrisi and more information on the world compiled by Arab cartographers, showing the Islamic world in the center and lower left and the European and East Asian continents spanning the top. (From https://commons.wikimedia.org/wiki/Category:Bestiary_by_Muhammad_ibn_Mahmud_Tusi_%28Walters_MS_593%29; https://digital.bodleian.ox.ac.uk/objects/ced0d8bd-1019-4af2-9086-e411115f1507/)

The tendency to place one's locale at or near the center of the world is universal, as demonstrated by Chinese, Korean, Arabic, and European maps, which also typically include a surrounding ocean marking the boundaries of the world. This watery horizon represented the limited extent of exploration of early civilizations, yet also played the role of the frontier still to be mapped – much as the few unmapped corners of the universe provide a frontier for us today in the twenty-first century. Built into the maps are also assumptions about the structure of the world that embody beliefs about the hierarchy of places on Earth as influenced by religion and nationalism. With improved technology, the accuracy of the maps improved, along with the completeness of their coverage (Fig. 2.5).

2.2 Models for the Shape of the Earth

Chinese astronomers and cartographers measured the Earth's curvature precisely but were not willing to state that the Earth was spherical. Instead, they believed the Earth was a dome that rested on a square base to provide it with stability and was surrounded by water. This gai tian “hemispherical dome” cosmology prevailed in ancient China for nearly a millennium and envisioned a dome of the sky above China, rotating about a celestial pole


Fig. 2.4  An example of a Cheonhado map from Korea. This example was produced around 1800, showing the region around China and Korea encompassed by a surrounding sea and an outer region of foreign lands. (From https://www.bl.uk/collection-items/cheonhado-world-map)

centered on an axis aligned with the circular sea that encircled the Earth. Measurements of the Earth and numerological considerations gave precise dimensions: the distance between the Earth and the dome of heaven was set at 80,000 li (about 45,000 km), and China was placed at approximately 100,000 li from the “prime vertical” of heavenly rotation (about 56,000 km) (Ronan, 1985b, p. 84). While some of these overall dimensions of the universe lack accuracy, the Chinese measurements of the Earth's curvature and of the dimensions of Chinese territory were highly accurate, as they made use of observation stations throughout China that measured the Sun's height and used the change in solar elevation to locate the latitude of each observatory precisely. A common misconception is that the “discovery” of the round Earth developed from journeys across the sea which avoided falling off an imagined


Fig. 2.5  The world map from China shows “the great Qing Dynasty's complete map of all under heaven” from 1811. (The image is taken from https://www.loc.gov/resource/g3200.ct003403/)

“edge” to a flat Earth. While there were some proponents of a flat Earth in medieval Europe, the Greeks and Romans many centuries earlier had shown the Earth to be round and had even measured the sizes of the Earth and the Moon. The ancient literature records multiple accounts of Eratosthenes of Cyrene measuring the size of the Earth in the third century BC by using shadow lengths of the Sun in Alexandria and Syene (two cities separated by a known distance). Much like the Chinese astronomers, Eratosthenes used solar elevation angles to determine the Earth's curvature with great accuracy; by the most cited account, from Cleomedes, the circumference of the Earth was computed to be 250,000 stades, with some reports suggesting 252,000 stades. Like the li, the stade is an ancient unit of length that is not precisely known, but a range of estimates places the stade between 150 and 159 meters. Depending on the value adopted for the stade, Eratosthenes' estimate for the Earth's circumference is between 37,500 and 40,068 km. This range comes very close to the Earth's actual circumference of 40,075 km (Diller, 1949). Many ancient astronomers were also aware of the Earth's spherical shape from observations of lunar eclipses, in which the Earth casts its shadow on the Moon. Further observations by Aristarchus of Samos in the third century BC used the geometry and timing of lunar phases and careful observations of the lunar eclipse and its duration to determine the relative sizes of the Earth

2  The Geography of Earth Beyond the Horizon 


and Moon. The duration of the lunar eclipse, together with the movement of the Earth’s shadow across the Moon’s disk, allowed an estimate of the relative sizes of the two bodies. Aristarchus concluded that the Moon was about three times smaller than the Earth, placed at a distance of about 70 Earth radii. Combining this ratio with Eratosthenes’ measurement of the Earth’s size, Aristarchus would have determined the Moon to be between 12,500 and 13,356 km in circumference, or between 1989 and 2125 km in radius, roughly 15 to 20 percent larger than the actual value of about 1737 km. Aristarchus also placed limits on the distances to the Sun and the Moon, which were underestimated because of the limits of his ability to measure small angles, which would have required instruments of greater precision. While his limit for the Moon’s distance was off by a factor of about two, and his distance estimate for the Sun even farther off (by some calculations by a factor of 20), his geometric arguments were sound and lacked only the precise measurement data needed for more accurate estimates (Heath, 1913) (Fig. 2.6). Even though European astronomers were well aware of the Earth’s round shape and in many cases fully acknowledged its spherical nature (rather than a disk geometry), the myth of the “flat Earth” as the prevailing medieval model persists. Many medieval maps show the boundaries of the known world bounded by an ocean inhabited by sea monsters and other creatures, and perhaps from these maps later observers concluded that the prevailing model was one of a literally flat world (Fig. 2.7).
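The ranges quoted above follow directly from the unit conversion and the size ratio. A quick sketch of the arithmetic, taking the stade as 150 to 159 m and the Moon as one-third the Earth’s size, as in the text (the modern values in the comments are for comparison only):

```python
import math

# Eratosthenes: circumference = number of stades x length of a stade.
# The stade is uncertain; 150-159 m spans the usual estimates.
low = 250_000 * 150 / 1000    # 37,500 km
high = 252_000 * 159 / 1000   # 40,068 km
print(f"Earth circumference: {low:,.0f} to {high:,.0f} km (actual ~40,075 km)")

# Aristarchus: the Moon is about 1/3 the Earth's size, inferred from the
# motion of the Earth's shadow across the Moon during a lunar eclipse.
moon_low, moon_high = low / 3, high / 3          # circumference, km
radius_low = moon_low / (2 * math.pi)            # ~1,989 km
radius_high = moon_high / (2 * math.pi)          # ~2,126 km
print(f"Moon radius: {radius_low:,.0f} to {radius_high:,.0f} km (actual ~1,737 km)")
```

The chain of reasoning is the notable point: a shadow-stick measurement on land fixes the Earth, and an eclipse timing then fixes the Moon, with no instrument ever leaving the ground.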

Fig. 2.6  Lunar eclipse image from a 1543 edition of Johannes de Sacrobosco’s Sphaera, showing the alignment of Sun and Earth and the Earth’s shadow in relation to the Moon, which can be used to measure the relative sizes of Earth and Moon. (Image courtesy of Claremont Colleges Special Collections, and from https://moon.nasa.gov/moon-in-motion/eclipses/)


Fig. 2.7  Most navigators would have recognized the Earth’s curvature from viewing distant ships and islands; this easily observed fact established that the Earth was at least convex, but the exact shape beyond the horizon relied on maps and astronomical arguments. (From Lockyer’s Astronomy, 1875; image taken from https://www.loc.gov/resource/gdcmassbookdig.elementsofastron00lock/?sp=90&r=-0.225,0.452,1.437,0.786,0)

As maps of the Earth and navigation improved, it became clear that the horizon we see is just an illusion, and that more detailed maps can extend our view beyond what is possible to see from any one location on Earth. World maps improved dramatically, aided by the voyages of well-known explorers such as Christopher Columbus, Ferdinand Magellan, and James Cook, as well as the Chinese admiral Zheng He and the Portuguese navigators Pedro Cabral and Vasco da Gama. These journeys charted more of the Earth and gathered data that enabled maps depicting a view of the Earth that would be impossible to see except from above, as if one were in space. The invention of the Mercator projection in 1569 enabled maps of the Earth to be presented on a cylindrical projection that preserved angles for compass headings, making them much more useful for navigation. The improved geographic data, combined with more advanced printing technologies, produced atlases in the sixteenth and seventeenth centuries that combined art and science in fantastic ways. A few examples below give a sampling of the refinement of maps of the Earth from this period (Figs. 2.8, 2.9, 2.10, 2.11 and 2.12).
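The Mercator projection’s value to navigators follows from its defining property: a course of constant compass bearing (a rhumb line) plots as a straight line. A minimal sketch of the standard spherical Mercator formulas in modern notation (not taken from Mercator’s own construction):

```python
import math

def mercator(lat_deg, lon_deg, radius=1.0):
    """Project latitude/longitude onto the Mercator plane. Meridians become
    evenly spaced vertical lines, and the north-south stretching at high
    latitudes is exactly what preserves angles (compass bearings)."""
    x = radius * math.radians(lon_deg)
    y = radius * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

print(mercator(0, 0))   # the equator maps to y = 0
print(mercator(60, 0))  # high latitudes are increasingly stretched
```

The stretching is also the projection’s famous flaw: areas near the poles are wildly exaggerated, a trade navigators were happy to accept for straight-line courses.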


Fig. 2.8  Image from the Atlas of Battista Agnese (1544), showing the beginnings of knowledge of the West Coast of North and South America and hints of the contours of Asia’s eastern regions. (Image from Library of Congress – https://hdl.loc.gov/loc.wdl/wdl.7336)

2.3 Edmond Halley’s Voyage and Magnetic Maps

From the many European journeys of exploration in the sixteenth and seventeenth centuries came a profusion of scientific research, sometimes planned as a primary goal of the expedition. One example is the work of the British astronomer Edmond Halley, who not only played a crucial role in publishing Newton’s Principia and guiding the Royal Society, but also had a chapter in his life as a sea captain, on a quest to map the Earth’s invisible magnetic forces to aid future scientists and navigators. Despite Halley’s wide range of contributions to science, he is most celebrated for his work on the comet that bears his name. In 1705, Halley made the calculations that predicted the 1758 return of what would be called Halley’s comet, based on observations of its appearance in 1682 and on the new physics of Isaac Newton. This publication, the Synopsis Astronomiae Cometicae, enabled astronomers to predict the reappearance of comets.


Fig. 2.9  Image from the Atlas of Joan Martines (1587), cosmographer to King Philip II of Spain. The use of a spherical projection and the boundaries of the known world are distinctive, as is the presence of “Terra Incognita” to the South, the regions remaining to be explored and mapped. (Image from Library of Congress – https://hdl.loc.gov/loc.wdl/wdl.10091)

Halley’s work showed that the comet of 1682 was the same one that had been observed in 1066, 1305, and 1380, and predicted its return in 1758 after completing another highly elongated elliptical 76-year orbit. And while Halley did not live to see the return of his comet in 1758 (he died in 1742), his life was filled with contributions to our ability to map the Earth and skies, beginning at an early age. Halley’s astronomical work began with a paper he submitted in 1676 as an undergraduate at Oxford, in which he extended Kepler’s laws to enable more accurate determination of planetary orbits. In the major observatories of Europe, astronomers were just beginning to map the heavens systematically: John Flamsteed, the English Astronomer Royal, worked at Greenwich on his tabulation of stars, while Hevelius worked in Danzig (Gdansk) and Cassini in Paris. While European astronomers were making significant progress charting the Northern skies, the far Southern region of the sky remained unexplored and needed more data. To help fill in this unmapped part of the sky, Halley applied for funds to enable the creation of one of the first European maps of the


Fig. 2.10  Map of the world from the Flemish geographer Ortelius, who published his Theatrum Orbis Terrarum (Theater of the World) in 1570, including new and more accurate projections of the geography of Asia, with hints of the Australian continent visible in the lower right corner. (Image from Library of Congress – https://hdl.loc.gov/loc.wdl/wdl.18901)

Southern Skies. King Charles II approved his application and issued orders to place Halley as a passenger on one of the East India Company ships, which would transport him to the remote South Atlantic island of St. Helena, where he would be stationed for a year. Halley’s first voyage occurred in 1677, when he was just 20 years old, and his atlas was published a year later. During his year on St. Helena, Halley compiled a catalog of Southern Hemisphere constellations and several star charts, and even had the good luck to observe a transit of the planet Mercury across the disk of the Sun. While Halley had dropped out of his undergraduate program to complete this adventure, the King was so impressed that he awarded Halley a Master of Arts degree in 1678 in honor of this work, and Halley was promptly named a Fellow of the Royal Society in the same year, at the age of only 22. Two years later, Halley took the traditional “grand tour” of Europe and managed to visit the leading Continental astronomers on his journey. Halley visited both


Fig. 2.11  Map of the world published in 1690 by Nicolaus Visscher, a Dutch cartographer who employed the latest geographic projections and included dramatic artistic flourishes. The “island” of California is visible along with an incomplete map of Australia and the Northwest coast of North America. (Image from Library of Congress – http://hdl.loc.gov/loc.gmd/g3200.ct001473)

Hevelius and Cassini to learn more about their astronomical work and to tour their observatories (which were much more advanced than England’s at the time). Halley returned to England to pursue his passions for astronomy and geomagnetism, and also somehow found the time to marry in 1682 and begin raising a family. In 1684 he met with Isaac Newton and convinced him to publish his Principia, and he also covered the expenses of its publication to ensure it reached a worldwide audience (Thrower, 1969). In addition to developing the Royal Society, cultivating relationships with scientists and supporters of science, and continuing his work in astronomy, Halley also published critical papers on geomagnetism, a subject of fascination since his boyhood. Halley’s first scientific research project, undertaken while a 16-year-old student at St. Paul’s School, was to measure the magnetic variation near London. Halley continued to dabble in geomagnetism, which he was


Fig. 2.12  Map of the world from British cartographers Robert Morden and William Berry from 1690 using the newly developed Mercator projection. The map shows increasing details of the “new world” and Australia but also includes regions clearly needing more exploration. California is again represented as an island, and fanciful sea monsters cavort in the less explored parts of the Pacific Ocean. (Image from Library of Congress – http://hdl.loc.gov/loc.gmd/g3200.ct007058)

convinced would help navigators determine their headings and precise locations on Earth through accurate charts of “magnetic declination,” the deviation between magnetic north and true north, and he made plans to return to sea to create such maps himself. His connections and powers of persuasion paid off when he convinced the Admiralty to place him in command of his own ship, the Paramore, a role quite unusual for a scientist with no previous experience at sea except as a passenger (Wakefield, 2005). Although delayed several years by a war with France, Halley’s voyage began in 1698, when he set sail to map the geomagnetic lines of force around the North and South Atlantic. The voyage was inspired both by intellectual curiosity about magnetism (then an unexplained force, much like our era’s “dark energy”) and by more practical interests in improving British navigation. Halley’s maps introduced a new kind of projection known as an isogonic chart, which combined the cylindrical Mercator projection with an overlay of lines of equal magnetic declination, the correction to a compass heading for true North. Halley’s isogonic charts were intended to revolutionize navigation at sea, since the variations of magnetic declination might provide a means for determining one’s longitude on Earth (Fig. 2.13).
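The correction Halley’s charts supplied can be stated in one line: add the local declination to the compass reading to recover the true heading. A minimal sketch, using the modern sign convention of declination positive toward the east (the function name is illustrative):

```python
def true_heading(compass_deg, declination_deg):
    """Correct a magnetic compass reading for local declination.
    Declination is positive when magnetic north lies east of true north,
    so true heading = compass heading + declination, wrapped to 0-360."""
    return (compass_deg + declination_deg) % 360

# A ship steering due "north" by compass, in waters where declination is
# 10 degrees west (-10), is actually heading 350 degrees true:
print(true_heading(0, -10))
```

The hard part was never the arithmetic but the chart itself: declination varies from place to place (and, as Halley knew, over time), which is exactly what his isogonic lines recorded.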


Fig. 2.13  Edmond Halley’s map of the world features an isogonic projection, which provides lines of magnetic declination (the angle between true north and compass headings). The maps were hoped to help determine longitude at sea but were of more interest for determining compass headings and to geophysicists trying to learn the nature of Earth’s magnetic field. (Image from New York Public Library Digital Collections – https://digitalcollections.nypl.org/items/510d47da-ef24-a3d9-e040-e00a18064a99). Lionel Pincus and Princess Firyal Map Division, The New York Public Library. “A new and correct chart showing the variations of the compass in the western & southern oceans as observed in the year 1700.” New York Public Library Digital Collections. Accessed August 19, 2022.

Determining longitude by celestial means requires precise timekeeping and astronomical observations, unlike latitude, which can be determined relatively easily with a sextant. The British Crown urgently needed more reliable ways of determining longitude at sea, having suffered several shipwrecks from navigational errors. Halley’s isogonic magnetic field maps were published in 1701 and provided the most accurate chart of magnetic declination in the world at the time. While Halley’s charts were indeed of great interest to


geophysicists and improved our knowledge of the Earth’s magnetism, they ultimately were not accurate enough to provide precise longitude at sea. This led the British Crown to pass the Longitude Act in 1714, which promised a reward of 10,000 pounds for a method that could determine longitude at sea to within 1 degree. The prize winner, John Harrison, spent a lifetime building a series of exquisitely designed timekeepers and, after some contention with the Longitude Board, ultimately received the award in 1765.
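The arithmetic behind the longitude problem is simple: the Earth rotates 15 degrees per hour, so the difference between local solar noon and noon on a Greenwich-set clock gives longitude directly. A short illustrative sketch (what Harrison supplied was not this calculation but a clock accurate enough at sea to feed it):

```python
DEG_PER_HOUR = 360 / 24  # the Earth turns 15 degrees of longitude per hour

def longitude_west_deg(local_noon_gmt_hours):
    """Degrees of longitude west of Greenwich, from the Greenwich time
    (in decimal hours) at which local solar noon is observed."""
    return (local_noon_gmt_hours - 12.0) * DEG_PER_HOUR

# Local solar noon observed at 16:00 on the chronometer -> 60 degrees west.
print(longitude_west_deg(16.0))
# A clock error of only 4 minutes shifts the fix by about a full degree:
print(round(longitude_west_deg(16.0 + 4 / 60) - longitude_west_deg(16.0), 6))
```

The second line shows why the Longitude Act’s 1-degree tolerance was so demanding: it translates to keeping time to within minutes over a voyage of months.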

2.4 Mapping Terra Incognita

With the age of exploration in Europe, new maps of the Earth extended the horizons in all directions to encompass North and South America and even to represent the unexplored “terra incognita” in the far south. These regions included the lands of Australia and Tasmania, and navigators had seen hints of a mysterious continent yet to be discovered: Antarctica. Like all boundaries to our knowledge, this terra incognita was ultimately explored, as better technology and navigation allowed the maps to extend into previously unknown parts of the globe (Fig. 2.14).

Fig. 2.14  Many maps would include mention of the “terra incognita,” a vast unexplored Southern region; in this detail from the atlas of Joan Martines (1587), it begins just south of New Guinea and below the Tropic of Capricorn. (Image from Library of Congress – https://hdl.loc.gov/loc.wdl/wdl.10091)


Fig. 2.15  Closeup views of sea creatures found in Morden and Berry’s 1690 atlas of the world; placed in the farthest reaches of the Pacific Ocean, they appear to be frolicking in the uncharted waters. (http://hdl.loc.gov/loc.gmd/g3200.ct007058)

Fig. 2.16  Closeup views of sea creatures found in Peter Apian’s Cosmographia (1575 edition); the map includes a walrus-like creature off the coast of Java and other creatures cavorting in the South Pacific. (Image from Claremont Colleges Special Collections)

Before the Earth was fully mapped, European maps of the sixteenth and seventeenth centuries often included fanciful sea monsters in the unexplored regions, reflecting lingering fears among mariners of being lost or destroyed at sea by unknown forces or creatures, and giving mapmakers a chance to paint a fanciful portrait of the unknown. Below are a few examples of the sea creatures common in European atlases of the time, often appearing as mixtures of elephants, hippopotami, and whales, and sometimes including humanoid riders. These fanciful images perhaps also encapsulate how imagination takes hold at the boundaries of knowledge (Figs. 2.15 and 2.16). Peter Apian’s Cosmographia, first published in 1524, was a compendium of astronomical and geographic knowledge. Many editions depicted the Earth suspended inside a realm of clouds and mythological creatures who provided the winds from the corners of the Earth (Fig. 2.17).


Fig. 2.17  Figure from Peter Apian’s Cosmographia (1575 edition) shows the boundaries of the known universe populated by fearsome creatures producing winds and several monsters in the less-charted waters. (Image courtesy Claremont Special Collections)

References

Diller, A. (1949). The ancient measurements of the earth. Isis, 40(1), 6–9. https://doi.org/10.1086/348986

Heath, T. (1913). Aristarchus of Samos: The ancient Copernicus. Clarendon Press.

Ronan, C. (1985a). The shorter science and civilization in China: Volume 2 (p. 266). Cambridge University Press.

Ronan, C. (1985b). The shorter science and civilization in China: Volume 2 (p. 84). Cambridge University Press.

Thrower, N. J. W. (1969). Edmond Halley as a thematic geo-cartographer. Annals of the Association of American Geographers, 59(4), 652–676. https://doi.org/10.1111/j.1467-8306.1969.tb01805.x

Wakefield, J. (2005). Halley’s quest. Joseph Henry Press.

3 The Geography of the Skies

Many cultures imagined the sky as the home of gods or mythological creatures, or as the place where the soul’s journey in the afterlife takes place as one is released to travel across the stars. Examples include the Aboriginal Australian idea of the Milky Way as embodying a cosmic emu, or of campfires of sky people lighting up the path of the Milky Way. Many Native American tribes viewed the skies as the domain of “sky people,” which could include divinities, animal spirits, and departed souls living in an afterlife. The Chumash tribe of Southern California viewed the Milky Way as the path of the pinyon bearers, departed souls making their way to Shimilaqsha and encountering many obstacles. Chinese culture also placed mythological characters in the skies; examples include the separated lovers Tchi-Niu, the weaving princess, and Kien-Niou, the cowherd boy, associated with the stars Vega and Altair, who gaze at each other across the celestial river of the Milky Way and are reunited each year in the July rainy season. This story is also told in Japan and is the basis for the annual Tanabata festival on the 7th day of the 7th month. The Inca saw a large and a small llama in the dark clouds of the Milky Way, told of other creatures in its patches of dark nebulosity, and could read forecasts of future weather from these views. Many traditional cultures use the sky to teach morality and life lessons, and as technology advances, societies begin to map, measure, and chart the skies more systematically (Penprase, 2019).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-27890-7_3


3.1 What Is Space?

Nearly all ancient cultures imagined a sphere of stars extending above the Earth but could not account for the vast distances between the stars or the extent of space beyond the Earth and into the Milky Way. Ptolemy and other Greek thinkers inherited notions of the heavens from Aristotle, which placed the stars on a fixed sphere, all at the same distance from the Earth and revolving around it, with the Earth occupying the central location in the universe. European maps of the skies reproduced this “geocentric” picture, with the Earth at the center and concentric spheres surrounding it out to the outermost sphere of the stars. What lay beyond the realm of the stars? Theologians of medieval times placed “heaven” or God beyond all the spheres. Still, the possibility of traveling across what we would call “space,” the region beyond Earth, was thought to be impossible. Despite these barriers, some European thinkers imagined travel in space and the view of Earth from space. One example is the Somnium by Johannes Kepler, published posthumously in 1634, which envisioned a journey to the Moon. Kepler’s imaginary journey is made possible by witchcraft, and only when the shadow of the Earth touches the Moon, which happens during a lunar eclipse. Kepler’s journey also becomes a vehicle for teaching those of us on Earth about astronomy: it accurately depicts the odd appearance of the sky for a viewer on the Moon, the apparent motion of the Sun (which would circle the Moon in 29.5 days), and the fact that the Earth hangs fixed in the same place in the sky for a viewer on the Moon. This creative act of leaping out from the planet into space can also bring great insights, a form of “interplanetary empathy” (Higgins, 2015). For Medieval Europeans, a sharp division was drawn between the Earth, its atmosphere, and everything else below the Moon on one hand, and the heavens beyond on the other; this “sublunary realm” was subject to change.
This division was apparent in medieval maps, which often showed a realm of fire above the Earth, a region of clouds and vapors, and a clear boundary between the sublunary realm and the unchanging “crystalline” spheres of the sky. Between the spheres was the “aether,” an unchanging substance that helped perpetuate the physical motions of the skies and which some authors believed was filled with the “music of the spheres.” Kepler himself was a great believer in the music of the spheres, like the Pythagorean thinkers of ancient Greece, who explained that we cannot hear the notes of the planets because the sound has been present for all our lives (Fig. 3.1).


Fig. 3.1  Musical scales and sounds ascribed by Kepler to Saturn, Jupiter, Mars, Earth, Venus, and Mercury, from Johannes Kepler’s Harmonices Mundi (1619). (Image from Library of Congress, https://loc.getarchive.net/media/musical-scales-or-range-of-sound-ascribed-by-kepler-to-saturn-jupiter-mars)

In some cultures, such as the Chinese, all of space was filled with ch’i or qi, “vital energy.” Qi can condense or rarefy, and in its thinnest forms floats up to comprise the regions of space, or tian (the heavens). For Chinese astronomers, divining the messages of the skies was part and parcel of the work of the empire, and the “mandate of heaven” given to the emperor could be discerned from the sky. These astronomers divided the skies into sections and mapped the sky centered on the North Celestial Pole, with lines radiating outward from this source of cosmic power. The center of the sky for the Chinese coincided with the center of influence of the emperor, and the imperial city was designed to mirror the skies, with the northernmost portions of the compound reserved for the emperor and the highest officials. Some ancient societies provided detailed star maps to help their ancestors travel to the skies in the afterlife, or to guide earthbound seers in their interpretations of the stars. This information was crucial for imagined celestial navigators to avoid the shoals and monsters of the ocean of space. Such star maps are visible in Native American star ceilings and petroglyphs and in the many elaborate tombs of ancient Egypt.


3.2 Egyptian Star Maps

The Egyptians left many elaborate sky maps within the tombs of departed rulers, presumed to help the ruler navigate through the skies on the voyage after death. These maps included representations of the principal deities that ruled the skies and of constellations important to the Egyptians. From around 2000 BCE, coffin lids and royal tombs depicted the stars and gods, who often appeared in boats as they traversed the skies. One of the most spectacular of these ceilings is found in the Tomb of Senenmut and dates from between 1479 and 1458 BC. One can see a procession of personifications of key sky divinities associated with days and months along the bottom, with the central figure, a hippopotamus with an alligator on its back, representing the stars in the region of Ursa Major in the circumpolar sky. The top panel represents the “decan stars” and shows the planets in boats (Quack, 2019). Some of the Egyptian sky panels prominently feature Osiris, who ruled the skies and was associated with the constellation Orion, and his consort Isis, who appears near him in the skies in our modern constellation of Canis Major, marked by the star Sirius. Their son, Horus, is a hawk-headed deity often represented in the figures of the planets (Fig. 3.2). As another example, the Temple of Hathor at Dendera shows an array of gods lining the skies and holding them up, with the central constellations visible in the center. In a panel along the wall, the sky goddess Nut is depicted giving birth to the Sun. Within the temple of Dendera are depictions of the planets Saturn, Jupiter, and Mars as anthropomorphic figures with different animal heads reminiscent of Isis’ son Horus: Saturn is depicted with the head of a bull (“Horus, bull of the sky”), Jupiter as a falcon with two cow’s horns (“Horus who bounds the two lands”), and Mars with the head of a falcon.
Venus, also associated with the god Osiris, is referred to as “the phoenix bird” or “the heron bird” and, like many planets, is alternately represented as a human figure or as a bird in Egyptian sky panels (Quack, 2019) (Fig. 3.3).

3.3 Chinese and Asian Star Maps

Some of the earliest star maps in the world come from ancient China and the Islamic world. Beginning with the “oracle bones,” the most ancient form of astronomical writing, from the late Shang dynasty of 1200–1050 BC, Chinese astronomical records include extensive information about the appearance of comets, supernovae, and other transient events that is still useful today. Chinese astronomical charts became more elaborate as their astronomical


Fig. 3.2  Star ceiling from the Tomb of Senenmut, from approximately 1460 BCE, represents the Egyptian pantheon of gods in the sky and provides a star map of the northern celestial pole region to aid in navigating the sky in the afterlife. (Image from https://upload.wikimedia.org/wikipedia/commons/0/0f/Astronomical_Ceiling%2C_Tomb_of_Senenmut_MET_48.105.52_EGDP012289.jpg)

bureaus systematically recorded events in the sky to help discern the “mandate of heaven” for the emperor. Centered on the North Circumpolar region, the Chinese star charts offered a detailed mapping of the sky, with the celestial pole as the central hub, associated with the emperor. Other Asian cultures adopted some Chinese astronomical practices, including the circumpolar centering of the star chart. An example is a chart from the Korean Choson dynasty, initially engraved on a stone stele in 1395 AD, which depicts the circumpolar stars in the center and shows the locations of the Milky Way and the ecliptic (Figs. 3.4, 3.5 and 3.6).


Fig. 3.3  Star ceiling from the Temple of Hathor in Dendera, which shows the sky being held up by deities, with other deities in boats navigating the skies. (Image from https://commons.wikimedia.org/wiki/File:%C3%89gypte,_Denderah,_Temple_d%27Hathor,_Chapelle_d%27Osiris,_Zodiaque_(49942715636).jpg)

3.4 Cosmic Cartography from the Islamic World

Some of the most accurate star charts of the medieval era came from the Islamic world. Maps of the skies from the Islamic world adopted many ideas from the geocentric Greek cosmologies of Ptolemy, Aristotle, and earlier Greek thinkers, representing the universe as a series of concentric spheres with labels for the major constellations in the outer ring. One can also see a continuous advancement in the mapping of the skies from the eighth century onwards, with examples such as a painted celestial ceiling from Syria, influenced by the many regional cultures. One surviving example is a fifteenth-century Byzantine chart of the skies, which clearly shows an accurate representation of a circumpolar region and a complete set of constellation figures patterned after Greco-Roman mythology. Closer inspection of the chart reveals that the circumpolar region is tilted 35 degrees away from the celestial north pole and is centered on the ecliptic north pole and the Sun’s path through the sky rather than the celestial north pole. This model of the sky is therefore a hybrid of the Greek celestial sphere and earlier Middle Eastern


Fig. 3.4  Korean star chart copied to paper from an earlier stele dated 1395 AD shows features standard to Chinese star charts of the era. The Milky Way can be seen arcing through the chart, as can the zodiac, a circle offset from the concentric rings centered on the north celestial pole. (Image from https://loc.getarchive.net/media/kujang-chnsang-ylcha-punya-chido)

cosmological models that place greater primacy on the zodiacal constellations (Savage-Smith, 1992) (Fig. 3.7). One star map that shows the typical layout of an Islamic star chart comes from the Kitab al-Bulhan or “Book of Wonders.” The manuscript dates from the late fourteenth century AD and presents a compendium of astrological and astronomical works from the Islamic world. At the edge of the map of the skies is a circle labeled in the manuscript as “the largest sphere,” or al-falak al-a’zam (Fig. 3.8). The astronomical research of Islamic astronomers helped improve maps of the stars and was advanced further by spherical globes or


Fig. 3.5  Chinese star chart from 1647, centered on the circumpolar region, features clusters of stars identified with particular portions of the empire and its rule. (Image from https://loc.getarchive.net/media/xia-lan-zhi-zhang-7)

“armillary spheres” that enabled calculations of the positions of planets and stars from movable rings. Some of these instruments took the form of “spherical astrolabes,” which superimposed movable rings on an accurate depiction of the celestial sphere in the form of a metal globe. It is also interesting to note that in addition to the Greco-Roman constellation figures, Islamic and Arabic astronomers included local constellation figures in some of their work. The Arab world was naturally populated with a different range of animal species, and many of these animals found their way into its constellations. Examples of Arabic constellations that pre-date Islam include a giant spanning the stars of both Gemini and Orion, and a funeral procession with three girls that takes the place of the “great bear” or Big Dipper. Elsewhere, gazelles and sheep made their way into constellation figures, along with a camel that takes the role of Cassiopeia in the northern constellations.
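The kind of calculation an armillary sphere performed mechanically, by tilting one graduated ring against another, amounts to a rotation between the equatorial and ecliptic coordinate systems. A modern sketch of that conversion (the function and obliquity value are illustrative, not taken from any historical instrument):

```python
import math

OBLIQUITY = math.radians(23.44)  # tilt of the ecliptic to the celestial equator

def equatorial_to_ecliptic(ra_deg, dec_deg, eps=OBLIQUITY):
    """Rotate equatorial coordinates (right ascension, declination) into
    ecliptic coordinates (longitude, latitude), the conversion an armillary
    sphere's tilted rings carry out mechanically."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    beta = math.asin(math.sin(dec) * math.cos(eps)
                     - math.cos(dec) * math.sin(eps) * math.sin(ra))
    lam = math.atan2(math.sin(ra) * math.cos(eps)
                     + math.tan(dec) * math.sin(eps),
                     math.cos(ra))
    return math.degrees(lam) % 360, math.degrees(beta)

# The Sun at the June solstice (RA 90, Dec +23.44) sits on the ecliptic
# (latitude ~0) at ecliptic longitude 90 degrees:
print(equatorial_to_ecliptic(90, 23.44))
```

An astronomer with an armillary sphere read this answer off the ecliptic ring directly; the trigonometry above is simply the same geometry written out symbolically.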


Fig. 3.6  Section of a Chinese star chart from 1722, which shows a diagram of both the North and South hemispheres, charts of the eight phenomena, maps of the planets and their orbits, and some excerpts of classic Chinese literature. (Image from https://loc.getarchive.net/media/san-cai-yi-guan-tu)

3.5 European Maps of the Stars

The earliest surviving European star charts come from the period of Charlemagne (768–814) and include a remarkable work known as the Aratea, a ninth-century book based on the Greek author Aratus, who prepared an astronomical and meteorological work known as the Phaenomena in the period between 315 and 240 BC. As presented by the authors of The Leiden Aratea, Charlemagne consciously cultivated the legacy of the earlier Roman civilization he hoped to revive in Medieval Europe, especially after his coronation on Christmas day in the year 800. As such, the


Fig. 3.7  Star map from a painted celestial ceiling in Syria, showing the constellations adopted by Islamic astronomers, with the center of the chart shifted toward the ecliptic pole. (Image at https://digi.vatlib.it/view/MSS_Vat.gr.1087, reproduced with permission)

Aratea is an aspirational document that reconstructed some of the trappings of ancient Greek and Roman scholarship while working toward consolidating the power needed to reclaim and extend that intellectual tradition (Katzenstein & Savage-Smith, 1988a, p. 52). The Aratea includes illustrated constellation figures and representations of the planets carefully painted on parchment. While it dates from over 1000 years ago, it represents views of the skies and models of the universe prevalent in Europe centuries earlier; indeed, one of its illustrations depicts the locations of the planets on the date of March 28, 579, centuries before the surviving book was made. It is also interesting to note that the


Fig. 3.8  Star map from the Kitāb al-Bulhān (‘Book of Wonders’), a typical depiction of the heavens from the Islamic world, from a manuscript dating to the fourteenth century AD. (Image from Oxford, Bodleian Library MS. Bodl. Or. 133, p. 117b/118a: https://digital.bodleian.ox.ac.uk/objects/5c9da286-6a02-406c-b990-0896b8ddbbb0/)

configuration of the heavens in this book places the Earth at the center but sets the planets Mercury and Venus in orbit around the Sun. This arrangement is often attributed to the much later astronomer Tycho Brahe, who blended the cosmologies of Copernicus and Ptolemy into a hybrid system in which some planets orbit the Sun. In fact, the innovation was suggested as early as the fourth century BC by the philosopher Heraclides and was referred to in the ancient world as the “Egyptian System” of cosmology (Katzenstein & Savage-Smith, 1988b, p. 16) (Fig. 3.9). By the 1500s, European and Islamic sky charts were influencing each other, with works such as Apian’s and the design of astrolabes revealing clues of this mutual exchange. The rise of European astronomy produced ever more elaborate depictions of the skies in books, and with the advent of the printing press these beautiful star charts came into wide distribution. Astronomers used them as they extended their observations to note the presence of comets and other variable sources in the skies (Figs. 3.10 and 3.11).


Fig. 3.9  A plate from the Aratea, the earliest surviving European star atlas; the planetary configuration shown corresponds to March 28, 579 AD, centuries before the ninth-century manuscript was made. The cosmology depicted resembles the later “Tychonic” world system, with the Earth at the center and the planets Mercury and Venus orbiting the Sun. The twelve Zodiacal constellations are commemorated around the outer ring, along with symbols for each of the 12 months. (Image from https://commons.wikimedia.org/wiki/Category:Leiden_Aratea#/media/File:Aratea_93v.jpg)

Several influential astronomical atlases in the eighteenth century provided elaborately decorated maps of the heavens, which, in addition to increasingly ornate artwork, included more detailed and accurate locations of the stars, projected onto charts that conformed to views of the celestial sphere from equatorial or polar perspectives. One of the most influential sets of charts was produced by John Flamsteed, the Astronomer Royal of England, and these charts made their way to observers in England and across continental Europe. Just as with Isaac Newton and the Principia, Edmond Halley played a crucial role in bringing Flamsteed’s masterwork, his star atlas, to print. Newton and Halley worked together to create a “pirate” version of Flamsteed’s


Fig. 3.10  Image from Peter Apian’s 1524 work Cosmographia – showing the Ptolemaic cosmology, with the sub-lunar realm at the center, where earth, air, fire, and water mix and enable changes in the universe. (Image courtesy of Claremont College’s special collections)

catalog in 1712, before Flamsteed had released all of the information for his chart or authorized its distribution. Halley’s unauthorized version of Flamsteed’s catalog played a vital role in astronomical work and included 2866 stars, ordered by constellation. Halley printed 100 copies, yet Flamsteed was so angry about the premature release that he later obtained 300 copies of the chart and “burned the parts he did not like, in front of the observatory.” Despite the controversial early release, Flamsteed’s catalog pioneered the practice of designating each star with a number and its constellation, the numbers assigned in order of position within the constellation, a designation we now call “Flamsteed numbers.” These charts also had very accurate projections of the sky, making it possible for astronomers to measure positions in the sky based on the chart and to locate newly discovered objects. William Herschel and other astronomers used these Flamsteed charts in the eighteenth century to discover and locate new objects in the sky (Steinicke, 2014) (Fig. 3.12).


Fig. 3.11  Albrecht Dürer’s star chart of 1515, the earliest known European printed star chart. (Image from http://hdl.loc.gov/loc.gmd/g3190.ct006836)


Fig. 3.12  Details from Flamsteed’s charts of the sky. Like maps of the Earth, later star charts employed more accurate projections of the sky. Also, they featured multiple coordinate systems to aid in locating newly discovered objects in the sky. These charts show details from the region near the galactic center in Sagittarius (top) and the region near the Andromeda constellation (bottom). (From https://loc.getarchive.net/media/star-map-with-constellations-of-andromeda-perseus-and-triangulum)


References

Higgins, B. (2015, May 16). Kepler and the Dream: Science and the storytelling urge. Vatican Observatory. https://www.vaticanobservatory.org/sacred-space-astronomy/kepler-and-the-dream-science-and-the-storytelling-urge/
Katzenstein, R., & Savage-Smith, E. (1988a). The Leiden Aratea: Ancient constellations in a medieval manuscript (p. 52). J. Paul Getty Museum.
Katzenstein, R., & Savage-Smith, E. (1988b). The Leiden Aratea: Ancient constellations in a medieval manuscript (p. 16). J. Paul Getty Museum.
Penprase, B. E. (2019). Power of stars. Springer.
Quack, J. F. (2019). The planets in ancient Egypt. Oxford Research Encyclopedia of Planetary Science. https://doi.org/10.1093/acrefore/9780190647926.013.61
Savage-Smith, E. (1992). Celestial mapping. In J. B. Harley & D. Woodward (Eds.), The history of cartography. University of Chicago Press.
Steinicke, W. (2014). William Herschel, Flamsteed numbers and Harris’s star maps. Journal for the History of Astronomy, 45(3), 287–303. https://doi.org/10.1177/0021828614534811

4 Early Telescopes and Models of the Universe

The invention of the telescope made it possible to see beyond the horizon and to reveal new details of the planets and stars beyond the Earth. The instrument had an immediate practical impact for naval forces on Earth, enabling them to view ships and headlands far beyond where their eyes could see. The advance was also celebrated by astronomers, as it provided compelling evidence that Earth was indeed orbiting the Sun. This evidence came from Galileo’s sketches of the phases of Venus and of Jupiter’s moons, which strongly supported the heliocentric conception of the universe. Soon after Galileo, the telescope was extended to ever larger forms, pushing the limits of magnification possible with a single lens. The telescopic explorers in the decades after Galileo (Cassini, Hevelius, Huygens, and Bradley) extended the lens telescope to unprecedented length and magnification. These early telescopes revealed stunning detail on the Moon’s surface, made precise measurements of the planets and their motions, discovered many new moons around Saturn, and obtained the first direct measurements of the Earth’s motion by detecting stellar aberration. These achievements were made soon after Galileo’s work and used very long single-lens telescopes. Any telescope provides a combination of magnification and light-gathering power, enabling the observer to see smaller and fainter objects, respectively. Increasing the diameter of the telescope lens (or mirror) allows the viewing of fainter objects by intercepting a larger beam of incoming cosmic light: the light-gathering power depends on the cross-sectional area, which scales with the square of the telescope’s diameter. Early telescope makers attempted to build larger lenses to increase light-gathering power, but were limited by the ability to construct large lenses with high precision (Fig. 4.1).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-27890-7_4
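The square-law scaling of light-gathering power can be checked in a few lines of Python, comparing an 8-inch objective (like Hevelius’ largest lens, described below) with a 1.5-inch objective (like his first refractor):

```python
def light_grasp_ratio(d1: float, d2: float) -> float:
    """Ratio of the light-gathering power of two telescopes.

    Light grasp scales with collecting area, i.e. with the square of
    the aperture diameter (the units cancel, so any unit works).
    """
    return (d1 / d2) ** 2

# An 8-inch objective versus a 1.5-inch objective:
# roughly 28 times more light collected.
print(round(light_grasp_ratio(8.0, 1.5), 1))
```

Because only the ratio of diameters matters, the same function works whether apertures are given in inches or meters.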



Fig. 4.1  Figure showing the relationship between focal length f and the size of an image d on the focal plane. Long focal length telescopes can create images of larger size by magnifying the separation of sources in the sky on the focal plane. In the figure, two sources are shown in the image formed in the focal plane, which would have a separation given by d; with longer focal lengths, this separation increases, and therefore the magnification and angular resolution of the telescope increase. (Image by the author)
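The relationship in Fig. 4.1 follows from the small-angle approximation d ≈ f·θ. A minimal sketch, using the 12-foot (~3.7 m) and 150-foot (~46 m) focal lengths of Hevelius’ telescopes described below as illustrative values:

```python
import math

def image_separation(focal_length_m: float, angle_arcsec: float) -> float:
    """Linear separation d on the focal plane of two sources separated
    by a small angle theta: d = f * theta (theta in radians)."""
    theta_rad = math.radians(angle_arcsec / 3600.0)
    return focal_length_m * theta_rad

# A 1-arcsecond pair of sources on the focal plane of a 12-foot (~3.7 m)
# refractor versus a 150-foot (~46 m) refractor: the longer telescope
# spreads the same angle over a proportionally larger image.
print(image_separation(3.7, 1.0) * 1e6)   # separation in microns
print(image_separation(46.0, 1.0) * 1e6)  # separation in microns
```

The linear scaling with f is why the seventeenth-century race for magnification became a race for focal length.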

Many astronomers in the seventeenth century pushed the limits for the longest possible focal length to develop telescopes capable of the highest possible magnification. Longer focal length telescopes require longer tubes or other constructions like aerial cranes to align the lens and eyepiece. Some astronomers even built vertical telescopes that protruded through the floors, ceilings, and roofs of multistory buildings to obtain the highest possible magnification and stability. The most exceptional results from these high-magnification observations came from astronomers like Johann Hevelius, Giovanni Cassini, and Christiaan Huygens, who focused on observing the Moon and planets. Long refracting telescopes were also used to view seasonal shifts in star positions, which enabled astronomers like James Bradley and Friedrich Wilhelm Bessel to make the first observations of Earth’s motion through space and to determine the shift of the positions of stars from the effects known as aberration and parallax.


4.1 Johann Hevelius and His Telescopes

Within Europe, instrument makers raced to build more powerful telescopes after Galileo’s work was published. One of the leaders in this effort, who eventually constructed a 150-foot-long lens telescope, was Johann Hevelius. Hevelius was born in 1611 in Danzig (Gdańsk), Poland, and received instruction in astronomy from an early age. By 1639, he used some of his wealthy merchant family’s resources to develop his observatory. Hevelius ground his own lenses and built telescopes with ever longer focal lengths. His first refractor was 1.5 inches in diameter but had a focal length of 12 feet. With this instrument, Hevelius compiled observations of the Moon for his lunar atlas known as Selenographia, which was published in 1647 and included 43 large maps of the Moon. His observatory was completed by 1657, but after his wife died in 1663, he faced a crisis in taking care of his family, his business, and his observatory. After he remarried, he restarted his observations, and his second wife, Catherina, served as a full partner in all his efforts, including his astronomy research (English, 2018a, p. 39) (Fig. 4.2). Hevelius continued to upgrade his observatory and soon began to build the largest telescopes in the world at the time. Telescopic contraptions with focal lengths of 30, 40, 50, and eventually even 150 feet were constructed. To work with such long focal lengths, he housed the lenses on long beams suspended by cables, much like a construction crane. The longer focal length telescopes also included larger objectives, with one of his telescopes featuring an 8-inch diameter objective lens. New observations of Mercury’s phases, sunspots, and incredibly detailed sketches of the surface of Saturn and other planets were soon produced and published in his later books.
The 150-foot telescope was a monstrosity and must have been incredibly challenging to use, as its enormous length would no doubt cause severe issues with vibration and flexure. But its use was of immense interest to the public and fueled the growing interest in astronomy within Europe (Fig. 4.3).

4.2 Christiaan Huygens and Saturn

Another pioneer of the “era of long telescopes” was Christiaan Huygens. Aided by his older brother Constantijn, Huygens worked at his observatory in The Hague, creating ever more powerful telescopes for studying Saturn and other solar system objects. Working closely with glassmakers in their region and master opticians from Germany, Huygens learned the craft of building the

56 

B. E. Penprase

Fig. 4.2  Johannes Hevelius and his assistant at work measuring the angular separation from pairs of stars using a six-foot sextant, as depicted in the work Machinae Coelestis (1673). (Image from Library of Congress  – https://loc.getarchive.net/media/ johannes-­hevelius-­and-­assistant-­using-­six-­foot-­sextant-­to-­measure-­angular-­distances)

highest quality objective lenses, and by 1655 he had constructed a telescope comparable to Hevelius’ early model, a refractor with an 11-foot focal length and a magnification of over 40 times. Soon after beginning his observations of Saturn, Huygens noticed a bright star near the planet, which he concluded was a moon of Saturn. This discovery was obliquely mentioned in a letter to a leading professor of astronomy at Oxford in 1655 but not fully revealed until the publication of Huygens’ masterwork Systema Saturnium in 1659. Huygens’ work included sketches of his observations of Saturn’s moon Titan, detailed maps of Saturn’s rings, and an explanation for the varying appearance of Saturn’s rings due to the changing relative positions of the Earth and Saturn in their orbits. These careful observations of Saturn’s moons and their orbits, combined with Galileo’s observation of Jupiter’s moons, gave additional weight of evidence supporting the heliocentric theory (English, 2018b, p. 42) (Fig. 4.4).
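For a simple refractor of this kind, the angular magnification is the ratio of the objective focal length to the eyepiece focal length. A quick sketch (the 3-inch eyepiece below is an assumed value chosen to reproduce roughly the reported power, not a figure from the book):

```python
def magnification(objective_focal: float, eyepiece_focal: float) -> float:
    """Angular magnification of a simple refractor: the ratio of the
    objective focal length to the eyepiece focal length (same units)."""
    return objective_focal / eyepiece_focal

# Huygens' refractor had an 11-foot (132-inch) objective; an eyepiece of
# about 3 inches focal length (assumed) would give the reported
# magnification of over 40 times.
print(round(magnification(132.0, 3.0)))  # prints 44
```

This relation explains the era’s strategy: with eyepiece focal lengths limited by glassmaking, higher magnification demanded ever longer objectives.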


Fig. 4.3  Hevelius’ 150-foot telescope, as depicted in 1673. The enormous crane-like structure was used to lift and point the lens, which offered high magnification from its long focal length but was obviously very difficult to use and operate. (Image from https://en.wikipedia.org/wiki/Johannes_Hevelius#/media/File:Houghton_Typ_620.73.451_-_Johannes_Hevelius,_Machinae_coelestis,_1673.jpg)

4.3 Giovanni Cassini’s Planetary and Comet Observations

Another key figure in the early days of telescopic astronomy was Giovanni Cassini. He began his work at Bologna, fueled by an early childhood interest in astronomy and a growing expertise in astrology. Cassini’s first astronomy job came at the invitation of the Marquis Cornelio Malvesia, who was impressed by the young budding astrologer and placed him at the helm of the newly constructed Panzano Observatory in 1648, at the age of 23. By 1650, he was named chair of astronomy at the University of Bologna and carried out exacting studies of planetary rotation with the suite of telescopes at his new observatory. Cassini’s notable observations included measuring the length of the Martian day as 24 h and 40 min, only 1 min different from modern measurements (Fig. 4.5).


Fig. 4.4  Images of Saturn from Christiaan Huygens’ masterful work Systema Saturnium (1659), which provided the most detailed sketches of Saturn available (top), and also elucidated the changes in Saturn’s appearance during its 29-year orbit around the Sun (bottom). (Images from https://www.loc.gov/resource/gdcwdl.wdl_04302/?st=gallery&c=160)

Cassini also studied Jupiter with his telescope, charting the orbits of its moons with unprecedented precision and viewing the planet’s disk with new clarity. Cassini resolved the disk into the bands known as belts and zones, which arise from storms at different altitudes in Jupiter’s atmosphere. He used spots on the disk of Jupiter that regularly rotated in and out of view to determine the length of Jupiter’s day, measuring the Jovian day at 9 h and 56 min, and charted the orbital periods of Jupiter’s moons with similar accuracy (English, 2018c, p. 50). Cassini enjoyed the resources of the new observatory, known as the Panzano Castle, which his patron, the Marquis Malvesia, had made possible. The facilities of the observatory were so well suited to Cassini’s work


Fig. 4.5  (Top) Map of the Moon, from Giovanni Cassini’s exquisite sketch based on observations with high-resolution telescopes; (bottom) drawings of Jupiter from Cassini, showing the motion of the red spot and other features on Jupiter’s disk. Cassini was able to measure the rate of Jupiter’s rotation, and modern astronomers still examine his detailed sketches to study the long-term evolution of Jupiter’s atmosphere. (Images in the public domain)

that he called it the Italian Uraniborg, named after the castle that Tycho Brahe had used for his observations decades earlier, and it was widely considered the most lavishly appointed observatory in the world at the time (Bernardi, 2017). The timing of Cassini’s appointment also allowed him to play a vital role in recording the details of the appearance of a bright comet in 1652. This comet spent nearly a year moving through the skies, allowing Cassini to measure its positions carefully against the background stars. Cassini’s talents were so impressive that he attracted the interest of the King of France, who invited him to Paris to establish a new observatory in 1669. Cassini accepted the offer and moved to France to become the first Director of the Paris Observatory. At Paris, Cassini worked to extend Huygens’ observations of Saturn to new levels of precision and detected additional moons

60 

B. E. Penprase

around the planet, including the moons Iapetus and Rhea, in 1671 and 1672. Cassini worked with a 17-foot focal-length refracting telescope for these observations, and his observations were so precise that he could determine the distance to Mars using parallax observations. His observations were also used to build an accurate ephemeris for the positions of Jupiter’s moons. The moons of Jupiter show very subtle shifts in the timing of their appearance and disappearance behind Jupiter due to the finite speed of light and the varying distance between Earth and Jupiter. This “light-time effect,” observed in Cassini’s data, was used by his colleague in Paris, Ole Rømer, to calculate the speed of light in 1676. Cassini constructed even more powerful telescopes with heroically long focal lengths of 100 and 136 feet, which enabled discoveries of additional moons of Saturn, Tethys and Dione. Even longer focal length systems were proposed, and Cassini developed optics for telescopes with focal lengths of 210, 300, and 600 feet. However, such long telescopes, which needed to be suspended far above the observer with various cranes and cables, were nearly impossible to construct and use. Late seventeenth-century astronomers like Cassini pushed the long telescope designs to their limits and beyond, eventually devising aerial telescopes suspended atop high towers under which the valiant astronomer would stand looking for the stars they hoped to observe. In what might be the most extreme example, the English astronomer James Pound used a lens built by Huygens with a 123-foot focal length. To view the planet Jupiter, the lens was mounted on top of a maypole in a public park. An observer noted that they were indeed able to “perceive very distinctly” the planet Jupiter but that “the motion of the air, the shaking of the pole” made it difficult to observe and advised that “not many good observations could be made with a glass of 123 feet long in the open air” (English, 2018d, p. 55) (Fig. 4.6). Toward the end of Cassini’s career, in 1682, he observed another bright comet and measured its path through the skies with high precision. These observations would prove crucial for another astronomer, Edmond Halley, who combined Cassini’s astronomical observations with Newton’s new theory of gravitation to predict the return of the comet in 1759.
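The size of the light-time effect that Rømer exploited can be estimated with modern values: the Earth-Jupiter distance varies by about 2 AU (the diameter of Earth’s orbit) between opposition and conjunction, so eclipse timings of Jupiter’s moons drift by the corresponding extra light-travel time (a back-of-the-envelope sketch, not figures from the book):

```python
AU_KM = 1.496e8     # astronomical unit in km (modern value)
C_KM_S = 2.998e5    # speed of light in km/s (modern value)

# Extra light-travel time across the diameter of Earth's orbit (2 AU),
# which shifts the observed eclipse times of Jupiter's moons.
delay_s = 2 * AU_KM / C_KM_S
print(round(delay_s / 60, 1))  # extra delay in minutes, about 16.6
```

A cumulative timing drift of this size, roughly a quarter of an hour, was well within reach of seventeenth-century clocks and ephemerides, which is what made the first estimate of the speed of light possible.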

4.4 The Discovery of Parallax

One of the outcomes of the experiments with extremely long, high-magnification telescopes was to resolve multiple stars into binary systems and to begin to detect the apparent motions of the stars themselves. The stars appear “fixed” in our experience, since their distances are so vast that the


Fig. 4.6  The Paris Observatory at the beginning of the eighteenth century showing the “aerial telescopes” that Giovanni Cassini used to view Jupiter and Saturn. The long focal lengths, while challenging to use, provided extremely high magnification of the surfaces of the planets. (Figure from https://commons.wikimedia.org/wiki/File:Paris_ Observatory_XVIII_century.png)

motions of the Earth in its orbit produce only subtle shifts in positions, far smaller than can be seen without elaborately constructed telescopes dedicated to the highest possible angular resolution. Such an effort was attempted in 1669 by the British scientist Robert Hooke, famous for his long feud with Isaac Newton and for the ingenious “demonstrations” he created for the Royal Society in London. Hooke was an eccentric and unusual man prone to flights of nervous energy, who served the Royal Society and helped London recover from the Great Fire of 1666. He became internationally famous with the publication of his work Micrographia, which provided a richly illustrated atlas of the microscopic world, based on Hooke’s careful observations with a newly developed microscope. The book also includes some additional astronomical images, including engravings of the Moon’s surface and the Pleiades star cluster, illustrating Hooke’s growing interest in turning his gaze toward the vastness of the sky. Hooke began his pursuit of parallax with a telescope he modestly named the “Archimedean Engine.” The term refers to Archimedes’


famous claim that given a lever long enough and a place to stand, he could move the Earth. Hooke was a master experimentalist and recognized the possibility of measuring a star’s wobble, or parallax, using a vertical telescope. A vertical telescope could be aligned precisely with the local vertical, allowing repeated observations over many months to detect any change in the position of a star passing nearly overhead. By chance, a relatively bright star named γ Draconis passes overhead at the latitude of London. Hooke resolved to view the star and promptly cut a hole in the roof of his apartment to begin his experiment with a ten-foot-long vertical telescope. Hooke obtained a few observations over several months but lost interest and could not demonstrate that his parallax measurement was successful, despite claiming to have measured the parallax to the star γ Draconis as between 27 and 30 arcseconds (Fernie, 1975). We now know that Hooke was wide of the mark, as the star’s actual parallax has been measured to be about 0.02 arcseconds! (Fig. 4.7).
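The scale of Hooke’s error follows from the parallax-distance relation, d (parsecs) = 1 / p (arcseconds). A minimal sketch using the values quoted above:

```python
def distance_parsecs(parallax_arcsec: float) -> float:
    """Distance in parsecs from the annual parallax: d = 1 / p."""
    return 1.0 / parallax_arcsec

# Hooke's claimed ~30 arcsec parallax would place gamma Draconis at a
# tiny fraction of a parsec, absurdly close; the modern value of about
# 0.02 arcsec corresponds to a distance of roughly 50 parsecs.
print(distance_parsecs(30.0))   # distance in pc for Hooke's claim
print(distance_parsecs(0.02))   # distance in pc for the modern parallax
```

The reciprocal relation is simply the small-angle geometry of Fig. 4.7: a star at 1 parsec shows a 1-arcsecond shift for a 1 AU baseline.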

Fig. 4.7  Diagram of parallax, which results from Earth’s motion around the Sun. The slight shifts of the apparent star positions occur annually as the Earth orbits the Sun, and amount to less than 1 arcsecond for stars at distances beyond 1 parsec. The first attempt to measure parallax, by Robert Hooke in 1669, was unsuccessful, and the first detection of this effect did not come until the nineteenth century. (Image by the author)


4.5 James Bradley and Stellar Aberration

English astronomer James Bradley revisited the motions of γ Draconis in 1728 in what became a triumph of observational technique for the “era of long telescopes.” These observations also proved that the Earth was moving through space by directly detecting the lateral shift of light due to the Earth’s motion. This phenomenon is known as stellar aberration and is based on the displacement of the apparent positions of stars as the Earth’s direction of travel in its orbit changes through the seasons. The idea, in a nutshell, is that light traveling down the length of a very long focal length telescope, such as those constructed in the late seventeenth and early eighteenth centuries, will be shifted sideways by the Earth’s motion. That shift corresponds to a very tiny but measurable angle, which is sensitive to the ratio of the speed of the Earth’s motion to the speed of light. Mathematically, the aberration angle α is given by α = arctan(v_Earth/c), where v_Earth is the speed of the Earth and c is the speed of light (Fig. 4.8). James Bradley, like Hooke, worked with an extremely long vertical telescope to obtain precise observations of the position of the star γ Draconis as it appeared near the vertical. Bradley used a specially constructed 24.5-foot-long vertical telescope, which viewed the sky through an opening aligned with a chimney in his house, giving him a clear view day and night. With this device, Bradley could measure the positions of stars to within a fraction of an arcsecond, and he precisely measured the shift in the position of this star along with over a hundred other stars. Bradley observed that the change in star positions correlated with the star’s position in the sky relative to the Earth’s motion, with the largest shifts occurring for stars that appear transverse to the direction of the Earth’s motion.
Bradley measured the shift for stars at right angles to Earth’s motion as 20.25 arcseconds, very close to the modern value of 20.47 arcseconds. Even better, Bradley was able to calculate the ratio of the Earth’s speed to the speed of light, which from his observations came out to be 1/10,200, very close to the actual value of about 1/10,000. Bradley thus not only confirmed the motion of the Earth but also helped to measure the speed of light, using a technique more accurate than Cassini and Rømer’s earlier measurement based on the moons of Jupiter.
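The aberration formula above reproduces Bradley’s measured angle when evaluated with modern values for the Earth’s orbital speed and the speed of light (a quick numerical check, not figures from the book):

```python
import math

V_EARTH_KM_S = 29.8   # Earth's mean orbital speed in km/s (modern value)
C_KM_S = 2.998e5      # speed of light in km/s (modern value)

# Aberration angle alpha = arctan(v/c), converted to arcseconds.
alpha_rad = math.atan(V_EARTH_KM_S / C_KM_S)
alpha_arcsec = math.degrees(alpha_rad) * 3600.0
print(round(alpha_arcsec, 2))  # close to Bradley's 20.25 and the modern 20.47
```

Because v/c is so small, arctan(v/c) is essentially v/c itself, which is why Bradley’s measured angle directly yields the speed ratio of about 1/10,000.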


Fig. 4.8  Illustration showing the aberration angle α, which arises from the Earth’s motion and causes the apparent positions of stars to shift slightly. The angle α is given by the ratio of the Earth’s velocity to the speed of light. (Figure by the author)

4.6 Descartes and Vortex Theory

While the observers were pursuing the limits of visual observation with long refracting telescopes, philosophers of the time were musing about the implications of Newton’s theory, the Copernican heliocentric model, and the larger structure of the Milky Way and universe. One influential philosophical model for explaining these astronomical matters was the vortex theory of René Descartes. Descartes was a product of his time and made significant contributions to mathematics, philosophy, physics, and “metaphysics,” which included some topics that bordered on theology. His work included treatises on optics (including explanations of refraction and how rainbows form) and the development of techniques that made analytic and algebraic geometry possible (including the well-known “Cartesian” graph for visualizing functions). Descartes dabbled in theorizing about how rings of ice had created circles of rainbows around the Sun and worked to bridge from these theories toward thinking about the nature of God and the soul. Descartes found that these investigations were complementary and that he could

4  Early Telescopes and Models of the Universe 

65

“discover the foundations of physics” from such metaphysical considerations (Hatfield, 2014). Descartes made great strides in pushing forward physics and the atomic theory of matter, describing how the motions of objects and even the movements of animals result from natural processes. His work was inspired by a dream he had in 1619, which set his mission in life as the “reform of all knowledge.” His technique systematized how to harness “cognitive faculties” to approach difficult questions in what we commonly term the “scientific method.” Descartes’ scientific method approached complex problems by moving methodically from general and simpler concepts toward new ideas. His 1637 work, the Discourse on the Method for Rightly Directing One’s Reason and Searching for Truth in the Sciences, provided the blueprint for applying known ideas to build an edifice of knowledge leading to answers to unknown problems (Wilson, 2022). Descartes naturally contemplated the larger structure of the universe, and he described the universe as filled with “vortices” in his 1644 work Principia Philosophiae. Descartes blended components of Aristotelian theory with newly emerging data on the solar system. His system explained the universe as transpositions between three fundamental types of matter: luminous, transparent, and opaque. These forms of matter mixed and coalesced throughout the universe as they were transported by vortex motion to form the Sun, stars, planets, and comets. Stars provided the luminous component of the universe, while planets like the Earth comprised the opaque portions. Transparent regions between these two types of objects transmitted motions through the vortices set up by the stars, which sat at the centers of these whirlpools (Library of Congress, 2022). The vortex theory was consistent with observations of the heliocentric universe, where planets are confined to a disk and orbit with the same sense of motion.

Descartes’ vortex theory even suggested the possibility of other solar systems, since he recognized that the stars were much more distant than the Sun but were similar in nature and part of a more extensive system of interlocking vortices. In this picture, other stars could share the Sun’s properties and might also have planetary systems of their own. Descartes described how “the Sun is surrounded by a vast space with no fixed stars in it, and the same must be true of each fixed star” (Bennett, 2017) (Fig. 4.9). Descartes also promoted a heliocentric model for the solar system framed to provide technical agreement with theologians, so as not to provoke religious authorities who objected to the concept that the Earth moved. In Descartes’ system the Earth was considered to be at rest within a vortex that circled the Sun. In Descartes’ words, “The Earth, properly speaking, is not moved, nor


Fig. 4.9  The  Descartes Universe, as depicted by Coronelli’s Epitome Cosmographica (1700), courtesy of Claremont Special Collections. A hybrid geocentric model, promoted by Tycho Brahe, is also depicted in the same volume. This model was in use by earlier European astronomers and arose from the work of earlier Greek philosophers.

are any of the Planets; although they are carried along by the heaven” (Slowik, 2021). The vortex theory provided a mechanism to connect the Sun with the motions of comets and planets. The theory posited the phenomenon of vortex collapse, in which matter flowing between vortices can be seen condensing on the Sun in the form of sunspots, with such material later converted into planets and comets. While these details were quite fanciful, Descartes could explain how the Sun and planets naturally moved in the same general direction. Writing in 1644, well before Newton’s picture of gravity and the conservation of angular momentum, Descartes invoked the notion of “heaviness” and the idea of vortex bands moving elements through the universe. His system also posited that the stars were suns like our own, helping to extend the conception of the universe beyond our solar system to encompass the larger spaces outside it (Fig. 4.10).


Fig. 4.10  Image showing the Descartes vortex theory, from the work Systema Solare et Planetarium (1742). The presence of other stars and planetary systems is illustrated, which Descartes believed could collapse from cosmic “vortices.” (Image from https://www.loc.gov/collections/finding-our-place-in-the-cosmos-with-carl-sagan/articles-and-essays/modeling-the-cosmos/stars-as-suns-and-the-plurality-of-worlds/)

4.7 Isaac Newton and the Principia

Newton’s Principia has already been the subject of many books, and it is difficult to overstate the impact this work had on the trajectory of science after its publication in 1687. As mentioned earlier, the publication itself was only possible through an intervention by Edmond Halley, who persuaded a reluctant Newton to share his work with the world. Newton may have been persuaded less by an altruistic concern for enlightening the world than by a concern that some of the key ideas within the Principia might be credited to the many others working on the topics at the time. Newton’s main rival within England, Robert Hooke, had already laid claim to some of Newton’s ideas about gravitation. On the continent, René Descartes had made significant progress in developing some of the topics discussed within the Principia. In addition, Gottfried Wilhelm Leibniz had already developed an independent version of the calculus with a different notation than Newton’s.


Newton’s work offered a definitive method for proving and calculating the forces of gravitation and the motions of objects that could be extended and applied almost without limit. His well-known laws of motion, along with their corollaries, provided a structure for explaining the forces and motions of the planets and other celestial objects that has stood the test of experimental verification ever since. His laws state that objects remain at rest or in uniform motion unless acted upon by an outside force (Law I), that the change of motion is proportional to the motive force impressed and is made in the direction of the right line in which that force is impressed (Law II), and that to every action there is always opposed an equal reaction (Law III). In addition to these laws, Newton provided extensive proofs and developed a new mathematical toolkit that included the calculus, which remains a core part of all science and engineering curricula to this day.

Newton’s notion of absolute space and time, and the deterministic universe it implied, shaped the science and philosophy of the centuries that followed. Newton’s work provided the basis for physics for over two centuries, until the arrival of Relativity and Quantum theory. These modern theories in no way “disproved” Newton’s laws but instead provided crucial refinements of Newton’s physics at the extremes of space and time, where the variability of time for different observers or the wave nature of matter becomes observable. Rather than repeat the many fascinating details of Newton’s work, it is important to note how Newton presaged some of the controversies and mysteries within space and time that were only solved in recent decades. Extensive “digressions” within the Principia discuss optics and light and provide methods for computing paths of waves of both sound and light.
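As a modern illustration of the kind of calculation Newton’s laws enable (a sketch added here, not from the original text), one can equate the gravitational force with the centripetal force required for a circular orbit and recover the Earth’s orbital speed around the Sun:

```python
import math

# Illustrative sketch: equating the gravitational force G*M*m/r^2 with the
# centripetal force m*v^2/r for a circular orbit gives v = sqrt(G*M/r).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # mass of the Sun, kg
r = 1.496e11           # mean Earth-Sun distance, m

v = math.sqrt(G * M_sun / r)   # orbital speed in m/s
print(f"Earth's orbital speed: {v/1000:.1f} km/s")  # ~29.8 km/s
```

This one-line result, matching the observed 29.8 km/s, is a small example of the “exhilarating power” of Newtonian mechanics described below.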
Newton’s conception of light described light as a “corpuscle” or particle, which conflicted with Christiaan Huygens’s theory of light as a wave. This ambiguity was reprised in the twentieth century, when matter was found to have a similar particle-wave duality. It is also important to note that Newton himself, not generally known to his contemporaries for humility, stated the limitations of his work in the opening of his Principia. He stated that “I here design only to give a mathematical notion of those forces without considering their physical causes and seats” and that “the reader is not to imagine that … I anywhere take upon me to define the kind, or the manner of any action, the causes or the physical reason thereof.” Within the Preface, Newton clearly states the limits of his knowledge with the simple quote, “I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses” (Chandrasekhar, 1995, p. 22). Newton’s natural philosophy provided the mathematical tools for predicting planetary orbits and for explaining the nature of forces on Earth and in space.


That these forces operate in the same way everywhere gives us the exhilarating power to precisely calculate the masses and forces operating in our solar system and galaxy. And yet, Newton did not explain the nature of mass, how the force of gravity is transmitted through space, or other mysteries that have been the subject of twentieth- and twenty-first-century physics and astrophysics. These answers came with the discovery of the Higgs boson, which provided the mechanism for mass to exist, and the General Theory of Relativity, which gives a mechanism for gravity to transmit through space, as discussed later in this book.

4.8 Thomas Wright and His New Hypothesis

One philosopher after Newton who shaped our models of the galaxy and the universe was Thomas Wright. Wright’s 1750 work, An Original Theory or New Hypothesis of the Universe, provided a rhapsodic overview of the universe and the Milky Way that blended science and theology. Wright provided a summary in words (with poetry interspersed) and presented original conceptions of the universe through beautiful images based on some of the prevailing ideas of physics and theology of the time. Wright believed the Milky Way was a vast shell of stars in which we are embedded, or, in the lines of Milton’s Paradise Lost that he quoted, the Milky Way was:

A broad and ample road, whose dust is gold,
And pavement Stars, as Stars to thee appear,
Seen in the Galaxie, that milky way,
Which nightly as a circling zone thou seest
Powder’d with Stars.

Wright described how the ring of stars of the Milky Way was arrayed much like Saturn’s rings, with the distances of the stars too great to resolve individually. He speculated that Saturn’s rings could be “an infinite number of lesser planets” or “a collection of small bodies,” which proved to be a good guess, confirmed only with planetary exploration missions centuries later. Wright was also aware of the possibility of other solar systems and depicted multiple planetary systems orbiting nearby stars such as Sirius and Rigel (Plate XVII, shown in Fig. 4.11). His vision also included the possibility of an infinite number of universes, which he called “a finite view of infinity” (Plate XXXI in Fig. 4.11), and a construction of the Milky Way as a series of concentric shells of stars (Fig. 4.12).


Fig. 4.11  Wright’s vision of the universe included other solar systems orbiting nearby stars (left panel), which he showed including our Sun (a) and the nearby stars Sirius and Rigel (b and c, respectively). The middle panel shows Wright’s view of the Milky Way as a ring of stars, which presents different observed densities of stars in different directions. The right panel shows Wright’s vision of an infinite number of universes that might exist, a premonition of the concept we now call the “multiverse.” Wright called this view “a finite view of infinity.” (Images from https://www.loc.gov/resource/cph.3b18348/?st=image and https://www.loc.gov/item/2003655477/)

4.9 Kant and Island Universe Theory

Another very influential thinker who proposed models for the Milky Way and the larger “realm of the nebulae” was Immanuel Kant. Kant was influenced by Wright’s account of the Milky Way and, like the majority of eighteenth-century thinkers, needed to connect his model of the Milky Way and the larger universe with theological considerations that required the Milky Way to display the perfection of a Creator. Kant summarized his cosmological thought in his 1755 work, Universal Natural History and Theory of the Heavens. Kant posited that the patches of light, or nebulae, were “island universes” that followed a larger arrangement similar to the solar system, where the motions of stars within a disk resulted from forces similar to those experienced by planets. Kant, like Wright and Descartes, speculated about the formation of our solar system and suggested that initially static matter collapsed from perturbations to form the planets and stars in our Milky Way. In Kant’s picture, the Milky Way was a lens-shaped distribution of stars rotating about its center, in which the Sun had no special place. Kant described his view of the universe:


Fig. 4.12  View of the Milky Way as a shell of stars from Wright’s 1750 work. (Image from https://digitalcollections.nypl.org/items/8927c324-7920-93fa-e040-e00a18063ca9)

we can picture the system of fixed stars to a certain extent by means of the planetary system, if we magnify the latter infinitely. For if instead of six planets with their ten satellites we assume many thousands of similar bodies, and instead of the twenty-eight or thirty comets which we have observed, we assume a hundred or a thousand times more of them.

Kant suggested that the Sun’s location would be “in exactly the same plane an area made up of countless stars densely lit, in the shape of a very large circle,” and that “in their densest accumulation on this same plane they project that band of light called the Milky Way.” In Kant’s own words, the system of stars has “the same systematic arrangement on a grand scale as the cosmic structure of the planetary system on a small scale.” Kant’s model predated twentieth-century cosmology by more than a century and a half yet anticipated many of the features of the galaxy measured by modern astronomers. However, these discoveries would need to await larger telescopes to gather the precise observational data required to select between theoretical arguments and models.


References

Bennett, J. (2017). Principles of philosophy – Rene Descartes. https://www.earlymoderntexts.com/assets/pdfs/descartes1644part3.pdf
Bernardi, G. (2017). Giovanni Domenico Cassini – A modern astronomer in the 17th century. Springer.
Chandrasekhar, S. (1995). Newton’s Principia for the common reader (p. 22). Clarendon Press.
English, N. (2018a). Chronicling the golden age of astronomy (p. 39). Springer.
English, N. (2018b). Chronicling the golden age of astronomy (p. 42). Springer.
English, N. (2018c). Chronicling the golden age of astronomy (p. 50). Springer.
English, N. (2018d). Chronicling the golden age of astronomy (p. 55). Springer.
Fernie, J. D. (1975). The historical search for stellar parallax. Journal of the Royal Astronomical Society of Canada, 69, 222. https://ui.adsabs.harvard.edu/abs/1975JRASC..69..222F/abstract
Hatfield, G. (2014). René Descartes (Stanford Encyclopedia of Philosophy). Stanford University. https://plato.stanford.edu/entries/descartes/
Library of Congress. (2022). Physical astronomy for the mechanistic universe. In Finding our place in the cosmos: From Galileo to Sagan and beyond (digital collections). https://www.loc.gov/collections/finding-our-place-in-the-cosmos-with-carl-sagan/articles-and-essays/modeling-the-cosmos/physical-astronomy-for-the-mechanistic-universe
Slowik, E. (2021). Descartes’ physics. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/descartes-physics/#CartCosmAstr
Wilson, F. (2022). Descartes, Rene: Scientific method. Internet Encyclopedia of Philosophy. https://iep.utm.edu/rene-descartes-scientific-method/

5 Early Maps and Models of Galaxies

Speculation about the nature of our Milky Way had been part of the discourse of European astronomers for centuries and took on new urgency with the invention of the telescope. Galileo looked upon the Milky Way with the telescope and noted that the Milky Way’s light came from stars, declaring that “the Galaxy is nothing else than a congeries of innumerable stars distributed in clusters” (Galileo Galilei & Stillman Drake, 1990). The extremely high-magnification telescopes after Galileo enabled giant strides in planetary astronomy, but with their relatively small apertures and tiny fields of view, they were far less helpful in studying nebulae and in mapping larger objects like galaxies and regions within our own Milky Way galaxy. To better resolve nebulae and to map our Milky Way, new kinds of reflecting telescopes were needed, with larger apertures to gather the faint light and to better observe the ‘congeries of innumerable stars’ within the Milky Way and the nebulae. The greater mechanical precision made possible by the Industrial Revolution gave eighteenth- and nineteenth-century astronomers better tools for mounting larger-diameter refracting and reflecting telescopes. These capabilities all came together in successively larger and more powerful telescopes, such as the Yerkes 40″ refracting telescope and the giant reflecting telescopes of Mount Wilson. From these telescopes finally came the answer to the question of how we fit within the “realm of the nebulae” and whether the Milky Way galaxy was the center of this realm, or just another galaxy. This quest, which William Herschel had termed “a knowledge of the construction of the heavens,” began in earnest with Herschel’s first systematic mapping of the Milky Way with some of the first large reflecting telescopes in the world.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-27890-7_5



5.1 William Herschel and His Maps of the Galaxy

British astronomer William Herschel began his work mapping out the “construction of the heavens” by developing a catalog that included the positions of thousands of nebulae and faint stars using telescopes of his own design. His astronomical work was at first a hobby that he pursued after long hours teaching music, but his talents were so immense that he soon could devote himself full-time to astronomy. After experimenting to find the best mixture of copper and tin for the speculum metal mirror, Herschel constructed a 7-foot-long reflecting telescope, which he used to survey the skies. On March 13, 1781, his telescope revealed a new star that was conspicuously absent from the charts he had been using, and which also appeared to move, providing proof of its planetary nature (English, 2018c, p. 112). Herschel initially named the planet after King George, calling it Georgium Sidus; it was later renamed Uranus. This early success earned Herschel the patronage of King George III (who no doubt appreciated the name of the new planet), and the royal support allowed Herschel to pursue his astronomical work as a full-time job, with the funding to construct even larger telescopes. King George III agreed to fully fund Herschel’s astronomical work, with the condition that Herschel would be available to occasionally provide views of astronomical objects to the royal family and visitors to the court. William was aided by his sister Caroline, who received a royal allowance to work on astronomy after previously supporting William’s musical efforts as a soloist in his choir while working part-time on astronomical observations. With the new funding, Herschel began to construct a much larger 20-foot-long reflecting telescope with an 18″ diameter mirror, whose primary mirror had roughly an order of magnitude more light-gathering capacity than the 6.2-inch telescope used to discover the planet Uranus.
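The light-gathering comparison above follows from the fact that a telescope’s collecting area scales as the square of its aperture diameter; a quick check (an illustrative sketch added here, not from the original text):

```python
# Light-gathering power scales with collecting area, i.e. with the
# square of the aperture diameter. Comparing Herschel's 18-inch mirror
# with the 6.2-inch telescope that discovered Uranus:
d_large = 18.0   # aperture of the 20-foot telescope's mirror, inches
d_small = 6.2    # aperture of the Uranus-discovery telescope, inches

ratio = (d_large / d_small) ** 2
print(f"Light-gathering ratio: {ratio:.1f}x")  # ~8.4x, roughly an order of magnitude
```

The same D² scaling explains the relentless push toward larger mirrors throughout this chapter, from Herschel’s 48″ to Lord Rosse’s 72″.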
Herschel faced the challenge of designing and building a giant mirror of sufficient quality and reflectivity for his new reflecting telescope. Newton’s first reflector, which contained a much smaller mirror, had benefitted from Newton’s enthusiasm for alchemy. After experimenting with different mixtures of copper and tin, Newton created a substance known as “speculum,” and his reflecting telescope was demonstrated at a meeting of the Royal Society in 1672. Newton pioneered the design of the reflecting telescope, which, unlike a lens telescope, could focus red and blue light to the same location, avoiding the chromatic aberration that degraded image quality in single-lens telescopes.


Herschel simplified Newton’s reflecting telescope design by tilting the primary mirror to provide a focus at an “off-axis” angle. This design, known as a Herschelian telescope, had the benefit of needing only a single mirror and placed the focus of the primary mirror near the edge of the tube, at a convenient location for the observer. Herschel also designed his telescope with a raised platform where he could lie for many hours with a view of the skies. He would also cover his head with a cloak to prevent any extraneous light from interfering with his views and compromising his night vision. Herschel experimented with mixtures of metals for the primary mirror to try to increase reflectivity and reduce the amount of polishing needed to maintain the mirror. By the end of the eighteenth century, Herschel and his contemporaries had discovered that a mixture of 2 parts copper, 1 part tin, and trace amounts of brass, silver, and arsenic produced the best reflection (English, 2018a, p. 81). Herschel’s new telescope was ready in 1783, and he worked tirelessly at observing the stars and nebulae with the new 20-foot telescope to create the first scientific map of our galaxy. Herschel structured his observations as “sweeps” of the sky to document all the stars he could see (while discovering many new comets and galaxies), aided by his sister Caroline. Caroline recorded and organized the data from William’s visual observations and, after reviewing the results, played an executive role in planning the observations to provide efficient and repeated observations of as many visible fields as possible. Herschel’s systematic sweeps provided the data for a catalog of thousands of nebulae, and he also mapped and recorded clumps of stars and gaps within the Milky Way itself. By 1802, the Herschels had identified no fewer than 2500 nebulae, including a mix of galaxies, reflection nebulae, and star clusters.
This catalog provided a more than tenfold increase over previous catalogs, such as Charles Messier’s catalog of nebulae and star clusters, which included 80 objects by 1780 and was later extended to 103 objects. Herschel’s gigantic telescope was suspended by ropes and pulleys and resembled a hybrid between an enormous artillery piece and a sailing ship. Observing with the telescope required ingenious techniques to center the view on the sequence of sweep regions while recording observations (which required light for writing). A typical evening would have Herschel suspended outside on top of the enormous telescope, with his sister Caroline seated inside at a desk near a window, ready to record the discoveries and measurements that Herschel would shout out (Fig. 5.1). Herschel could sit for hours at the telescope, and as his telescopes became larger (the largest was suspended in a frame larger than his house), he would arrange for the telescope to view a fixed altitude and record the


Fig. 5.1  William Herschel’s 48-inch diameter reflecting telescope was used to provide the first systematic survey of the Milky Way galaxy. The telescope had to be suspended within a scaffolding with ship’s rigging, which required a team to maneuver it into new positions. (Image from https://upload.wikimedia.org/wikipedia/commons/6/6e/Astronomy%3B_a_40-foot_telescope_constructed_by_W._Herschel_Wellcome_V0024766.jpg)

objects that would drift through his field of view centered on the meridian. Caroline and William eventually upgraded their communication to include a contraption that transmitted the exact telescope position to the room where Caroline was sitting, which she recorded along with the precise time taken from the clock in the same room. As described in a letter from a visitor to the Herschels in 1785, the positions were shown “by a piece of Machinery, which continually shows the degree and minute, and is worked by a string actuated by the telescope” and were transmitted by “stretching of the Cord of Communication” between the telescope and the room. Herschel began to chart the observations into a first map of the Milky Way, which documented the lens shape of the distribution of stars, as well as a fork in the distribution that arises from dark nebulae which absorb starlight (Hoskin, 1987).


In addition to mapping the Milky Way and cataloging thousands of nebulae, William and Caroline Herschel had other questions they were trying to answer about the nature of the many nebulae they were observing. Were these nebulae part of our galaxy, or beyond it? And were they nearby, or so far away that stars could not be discerned within them? Herschel attempted to answer some of these questions by looking carefully at the number of stars per angular area and comparing the stellar density in different directions to determine the Sun’s location within the Milky Way. Herschel began planning an even larger telescope, and with the ample resources from King George, he was eventually able to design and build a 40-foot-long telescope with a 48″ diameter mirror. His observations were used to prepare a map of the Milky Way, which includes the shapes revealed from visual observations, which are heavily affected by the absorption of starlight from interstellar gas and dust within the Milky Way. Herschel’s observations led him to several conclusions on the origins of what we now call globular clusters, framed in the context of the newly developed Newtonian physics. In his 1785 work entitled On the Construction of the Heavens, Herschel commented that these clusters were:

Condensed about a center … of almost a globular figure, more or less regularly so, according to the original distance of the surrounding stars. The perturbations of these mutual attractions must undoubtedly be very intricate, as we may easily comprehend by considering what Sir Isaac Newton says in the first book of his Principia.

Herschel describes the Milky Way as “the bright path which surrounds the sphere may be owing to stars.” An observer, according to Herschel, “perceives a few clusters of them (stars) in various parts of the heavens and finds also that there are a kind of nebulous patches.” By way of the telescope the observer: increases his power of vision and applying himself to a close observation, finds that the Milky Way is indeed no other than a collection of very small stars. He perceives that those objects which had been called nebulae are evidently nothing but clusters of stars. He finds their number increase upon him, and when he resolves one nebula into stars he discovers ten new ones which he cannot resolve.

Herschel concluded that the Milky Way was constructed of “the most considerable of those numberless clusters that enter into the construction of the heavens” and took the form of a “crookedly branching nebula” (Herschel, 1785). From his tireless effort mapping the content of the skies, Herschel


Fig. 5.2  Figure prepared by William Herschel based on star density observations to reveal the structure of our Milky Way galaxy. The image is entitled “Section of our sidereal system,” from Herschel’s 1785 work On the Construction of the Heavens, originally published in the journal Philosophical Transactions of the Royal Society of London. (Image from https://upload.wikimedia.org/wikipedia/commons/4/49/Herschel-galaxy.jpg)

discovered that our solar system was part of a much larger nebula (the Milky Way) that only appeared to be a nebula from the immense distance and faintness of the many stars within the glowing regions of the Milky Way. This was a shift in our place in the universe no less profound than the shift that placed the Earth as just one of the planets of our solar system since it meant that the Sun was just one of the vast congeries of stars in our galaxy (Fig. 5.2).

5.2 Lord Rosse and Other Nineteenth-Century Reflecting Telescopes and Their Results

While Herschel’s 48″ diameter reflecting telescope presented challenges in pointing, and its mirror was subject to warping and tarnishing, it provided a proof of concept for giant telescopes in viewing faint star clusters and resolving the nature of the Milky Way galaxy. A successor to this telescope was the colossal 72″ diameter reflector built by the third Earl of Rosse in Ireland, intended to help resolve the nature of some of the fainter “spiral nebulae,” which we now know as galaxies that surround the Milky Way. William Parsons, the third Earl of Rosse, graduated with a first-class degree in mathematics in 1822 from Trinity College in Dublin. After a short career in public life, he focused his activities on his ancestral Birr Castle to build the


world’s largest telescope. Parsons was convinced that by developing an even larger primary mirror than Herschel had used, he would be able to reveal new details in the nebulae. He worked toward that end by creating new steam-powered metal polishing contraptions to help him maintain the figure and polish on mirrors even larger than Herschel’s primary mirrors. He began with prototypes of 15-inch and 24-inch mirrors, which delivered high-quality images at very high magnification. The 15-inch telescope was of sufficient quality that it produced good images of the Moon at a magnification of 600 times. He also pioneered the use of segmented primary mirrors for these precursor telescopes. These prototype telescopes, in some respects, gave a preview of future segmented-mirror telescopes such as the Keck telescopes and JWST. However, the speculum metal material (copper and tin mixed in an approximate 2:1 ratio) was a major drawback, as it required constant maintenance to retain its reflectivity. Parsons eventually was able to cast a 1.2-ton, 36-inch diameter speculum metal mirror, and this telescope began operations in 1840 and was very successful at revealing previously hidden details of nebulae. The telescope was used by various world-class observers, including William Herschel’s son John. The 36-inch was smaller than William’s largest telescope, but with the improved reflectivity of the primary and a new system for polishing and mounting the mirror, it was declared by some to give better image quality than William’s 48″ telescope (Fig. 5.3). The Earl of Rosse’s masterwork was still to come: a “leviathan” of a telescope with a primary mirror 72 inches in diameter, which required a specially constructed furnace to melt over 4 tons of metal into a special mold designed to allow gasses to escape while the molten metal was cooling.
Parsons began work on the telescope in 1842, and initial tests were conducted in 1844, with science operations possible by 1845. The resulting telescope was mounted in a 58-foot-long tube, which was 7 ft in diameter at the ends and 8 ft in diameter in the middle. The tube was held between two large brick walls 23 ft apart and 50 ft tall and could be raised and lowered near the meridian over a range of 110 degrees and moved side to side over a range of about 10 degrees. The “leviathan” weighed over 150 tons when completed (English, p. 151). To prevent the enormous mirror from sagging, it was mounted on a set of 81 “equilibrated levers” to distribute the weight of the mirror (Elliott, 2014). No photographs were taken from observations with the giant telescope, since that technology was still in its infancy. Our best evidence of the power of the world’s largest telescope (a title it held until 1917, when the 100-inch Hooker telescope at Mt. Wilson was completed) comes from drawings of nearby galaxies such as M33, M31, M77, and M95, which reveal detailed structures not visible with smaller telescopes (Fig. 5.4).


Fig. 5.3  Sketch of the “Whirlpool Galaxy” (M51) made by Lord Rosse in 1845 using his large 72″ diameter reflecting telescope, nicknamed the “Leviathan.” His views of the galaxies, known as “spiral nebulae” at the time, were the most detailed of the nineteenth century. (Image from https://commons.wikimedia.org/wiki/File:Whirlpool_by_lord_rosse.jpg)

Fig. 5.4  The largest telescope in the world in the nineteenth century, the 72-inch telescope built by the Earl of Rosse at Parsonstown. The telescope was so large it was sometimes referred to as the “Leviathan.” (Image from https://en.wikipedia.org/wiki/William_Parsons,_3rd_Earl_of_Rosse#/media/File:WilliamParsonsBigTelescope.jpg)


Many observers of the time noted the high quality of the image from the telescope, especially when viewing objects close to the zenith. However, the short lifetime of the mirror’s polish proved a heavy burden, as it required over 30 workers to remove the mirror for re-polishing. The Irish potato famine, beginning in 1845, disrupted work with the telescope. As the Earl of Rosse shifted his attention to helping alleviate some of the impacts of the disaster, the mirror began to tarnish. Several years later, Parsons began observations again in earnest, but the tarnish on the mirror had taken its toll, and the telescope was extremely difficult to maintain (English, 2018b, p. 156). Despite these challenges, the Leviathan continued to be used by the fourth Earl of Rosse and a variety of visiting observers to study galaxies over the subsequent decades, providing for the discovery of 35 new NGC objects and several galaxy clusters, as well as observations of Mars’ two moons after their discovery in 1877 and studies of the Moon’s heat.

5.3 The Discovery of Neptune and Sirius B

One example of the growing prowess of nineteenth-century astronomers was the discovery of Neptune. The French astronomer Urbain Le Verrier predicted the planet after studying the tiny unexplained motions of the planet Uranus, beginning in 1845, and applying Newton’s laws to determine the source of the forces producing those motions. Uranus itself had been discovered by William Herschel some 65 years earlier, and with the new observatories of Europe, the motions of the planet could be charted with high precision. Le Verrier predicted that a giant planet outside Uranus’ orbit was providing the additional forces responsible for the anomalous motions, and astronomers in Berlin were able to quickly observe the new planet right where Le Verrier predicted it would be, in September 1846. Le Verrier became director of the Paris Observatory in 1854, and the observatory was capable of systematically measuring tiny motions of stars and planets with large refracting (lens) telescopes. Le Verrier noted tiny departures from the predicted orbit of the planet Mercury in 1859, which caused the location of its closest approach to the Sun (its perihelion) to advance by about 43 arc seconds per century, after accounting for the regular motion caused by the precession of the equinoxes and the perturbations of the other planets. This tiny motion at first appeared to indicate the presence of another planet in the inner solar system. Le Verrier named this predicted planet Vulcan, and astronomers throughout Europe looked for it for decades in the latter part of the nineteenth century, without success. The full explanation of Mercury’s anomalous motion instead had to await the


birth and calculations of Albert Einstein. His theory of general relativity was able to explain Mercury’s unusual motions as part of the warping of space and time due to the Sun’s gravity.

Another detective story involved the brightest star in the sky, Sirius, which appeared to be wobbling in a way that might indicate the presence of an unseen companion star. Both Sirius and Procyon were observed to have wobbling motions by Friedrich Wilhelm Bessel in 1844. These tantalizing hints of invisible companions became a pattern repeated many times over in the history of astronomy. Le Verrier and Bessel both suspected the presence of an invisible companion to Sirius, and yet the companion star, Sirius B, would only be found by surprise on January 31, 1862, by Alvan Graham Clark, the youngest son of Alvan Clark, famous for his large refractor lenses (Holberg, 2008). Clark discovered Sirius B while testing the primary lens of what at the time was to be the largest refractor, an 18.5-inch telescope. The discovery was confirmed with the Harvard 15-inch “Great Refractor” by George Phillips Bond. As would later be shown, Sirius B is about ten thousand times fainter than its companion Sirius A due to its tiny size, roughly the size of the Earth. Sirius B was a “white dwarf,” the remnant core of what used to be a full-sized star that had exhausted its fuel and shed its outer layers, leaving behind an inert core of carbon and oxygen. Just as with Mercury’s motions, the full explanation for Sirius B awaited another brilliant scientist, Subrahmanyan Chandrasekhar, who provided the full description of how Sirius B resisted further collapse due to quantum mechanical degeneracy pressure inside the very dense stellar core (Soter & de Grasse Tyson, 2020).

As telescopes continued to improve, the new technology of photographic plates enabled high-precision measurements to be conducted over many months or even years, with an accuracy far exceeding visual observations.
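The 43 arc seconds per century noted above can be checked against the general-relativistic formula for perihelion advance per orbit, 6πGM/(c²a(1−e²)). The following sketch (added here for illustration, with standard values for Mercury’s orbit) recovers the famous figure:

```python
import math

# Illustrative check: general-relativistic perihelion advance per orbit is
# 6*pi*G*M / (c^2 * a * (1 - e^2)). Summed over a century of Mercury's
# orbits, this yields the anomalous ~43 arcsec noted by Le Verrier.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # mass of the Sun, kg
c = 2.998e8          # speed of light, m/s
a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.97  # Mercury's orbital period, days

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 100 * 365.25 / period_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"Perihelion advance: {arcsec:.1f} arcsec/century")  # ~43
```

That a few lines of arithmetic reproduce the residual that defeated a half-century of Vulcan hunts illustrates why the Mercury anomaly became the first great test of general relativity.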
The positions of stars on photographic plates could be measured with traveling microscopes, enabling the detection of parallax and other motions that helped measure the stars' distances and the Sun's position in the Milky Way galaxy. The plates could also be studied statistically by counting stars within samples of the Milky Way in different directions. Hugo von Seeliger of the Munich Observatory and Jacobus Kapteyn of the University of Groningen incorporated these star counts into a model for the distribution of stars within the Milky Way. This model, known as the "Kapteyn Universe," assumed that all the stars were of the same luminosity and therefore appeared fainter and closer together at greater distances. The model placed the Sun at the center of a vast distribution of stars extending 30,000 light-years in diameter. Kapteyn's model was the dominant picture of the Milky Way at the beginning of the twentieth century, but it had no way to constrain the distances to the spiral nebulae, where star counts were not possible. The star count calculations also did not include variations in luminosity due to different populations of stars, and did not account for the powerful effect of extinction, which reduces star counts in regions with large amounts of absorbing dust and gas (Fig. 5.5).

Fig. 5.5  The Kapteyn Universe, a map of the Milky Way galaxy based on photographic star counts compiled by J.C. Kapteyn. The contours show the shape of the Milky Way disk, with the Sun's location near the first arrow. His map provided an excellent estimate of the scale of the galaxy but was unable to clarify the relationship between our galaxy and the spiral nebulae, and erroneously placed the Sun's location too close to the center of the Milky Way due to incomplete knowledge of the effects of extinction by interstellar dust. (From Kapteyn, 1922)
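Under Kapteyn's equal-luminosity assumption, a star's apparent magnitude converts directly into a distance through the distance modulus, m − M = 5 log10(d / 10 pc). A sketch of that inference, and of the dust bias noted above (the assumed absolute magnitude of 5 is illustrative, not a value from the text):

```python
def kapteyn_distance_pc(m_apparent, M_assumed=5.0):
    """Distance in parsecs if every star shared absolute magnitude M_assumed.

    Inverts the distance modulus: m - M = 5 * log10(d / 10 pc).
    """
    return 10 ** ((m_apparent - M_assumed + 5) / 5)

# Under the assumption, a magnitude-10 star lies at 100 parsecs:
print(kapteyn_distance_pc(10.0))  # 100.0

# One magnitude of unrecognized dust extinction makes the same star
# appear to sit about 58% farther away, the bias that skewed the model:
print(kapteyn_distance_pc(11.0) / kapteyn_distance_pc(10.0))  # ~1.58
```

Because interstellar dust dims stars without moving them, every extincted star is assigned too large a distance, which is one reason Kapteyn's map misplaced the Sun relative to the galactic center.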

5.4 The Rise of Giant Refracting Telescopes

Advances in astronomy and improved models of the universe required ever larger telescopes, with larger lenses capable of providing high magnification, great light-gathering capacity, and steady tracking across the skies. Great light-gathering power, pursued with the large reflectors of Lord Rosse and William Herschel, allows the astronomer to view fainter objects. Long focal lengths, on the other hand, enable high magnification and the measurement of small angles in the sky. These were primarily the province of the aerial telescopes of Hevelius and Cassini and the refractor used by Bradley to discover aberration, and they were vital to the discovery of the motions of the moons of Saturn, the rotations of planets, and the tiny offsets of stars due to aberration and parallax. Despite steady advances in both reflecting and refracting telescopes, combining high magnification, light-gathering power, and the ability to point and track objects across the sky for long periods required new technologies.

By the nineteenth century, telescope makers had mastered the art of making matched pairs of giant "achromatic" lenses that could provide much better results than their seventeenth-century predecessors. Long-focus lenses like those used by Huygens, Cassini, and Hevelius provided high magnification but were hard to keep focused and presented additional problems in suspending and aiming the very long focal length lenses stably. The blurring of the images was compounded by the focal points for red and blue light falling at different locations. This comes from the fact that a single glass lens not only focuses but also disperses the light into colors – just as an ordinary prism would. Isaac Newton famously demonstrated this effect with two prisms, breaking light into a rainbow of colors with one and bringing it back together with the other. This demonstration showed that the "refrangibility" into colors was a property of the light itself: the colors were not given to the light by the glass but merely separated by its bending of the light. The nineteenth-century telescope makers took a cue from this demonstration and combined the focusing of one lens (made of an ordinary kind of glass) with a second lens of slightly different glass (with different refraction properties). If the second lens was carefully chosen and designed to bend the light differently, it could remove the "chromatic aberration" and bring red and blue light to the same focal point. This invention, known as the achromat, revolutionized astronomy and ushered in the era of giant refracting telescopes (Fig. 5.6).

Fig. 5.6  The invention of the achromatic doublet, a pair of lenses with opposing dispersion properties, allowed the ordinary dispersing behavior of glass to be corrected so that both red and blue light come to a focus at the same location. This innovation significantly improved the performance of large refracting telescopes and enabled much higher resolution and magnification. (Image from https://upload.wikimedia.org/wikipedia/commons/1/15/Lens6b-en.svg)

With the advent of the industrial revolution, steelmaking and mechanical design reached an apex in the late nineteenth century. The burgeoning industrial might of many European countries and the US made possible much larger telescopes than had ever been contemplated, funded by newly wealthy industrialists building the rapidly expanding infrastructure of the US and large European cities. These capacities came together in the large refracting telescopes of the nineteenth century, held aloft on massive steel piers and balanced precisely so that exquisitely accurate mechanical drives could track the stars as they moved through the skies. Even better, the invention of photography allowed long exposures that enabled faint targets to be precisely imaged, and the photographic plates could be compared years later to discern tiny motions of the stars – both from their orbits in binary systems and from the apparent wobbling known as parallax caused by the Earth's orbit. During this time, giant refracting telescopes were built in Paris, in Potsdam, at Harvard, and on mountaintops, like the Lick Observatory in California (Fig. 5.7).

Fig. 5.7  Example of one of the great nineteenth-century refracting (lens) telescopes: the Harvard Observatory's 15″ telescope, built in 1847, which featured an achromatic doublet to improve the focus across all colors and a more advanced and stable mechanical mount, made possible by the growing manufacturing prowess of the nineteenth-century economy. (From https://www.loc.gov/resource/g3180m.gct00292/?sp=7&r=-0.136,0.018,1.397,1.06,0)
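The achromatic doublet described above follows a standard thin-lens design rule: the two elements' optical powers must add up to the desired total while their dispersions, measured by Abbe numbers, cancel. A sketch of that calculation, using typical crown- and flint-glass Abbe numbers (illustrative catalog values, not figures from the text):

```python
def achromat_powers(f_total, V_crown=60.0, V_flint=36.0):
    """Split a target focal length between crown and flint elements.

    Thin-lens achromat condition: P1/V1 + P2/V2 = 0, with P1 + P2 = 1/f_total.
    A higher Abbe number V means lower dispersion.
    """
    P = 1.0 / f_total
    P1 = P * V_crown / (V_crown - V_flint)   # converging crown element
    P2 = -P * V_flint / (V_crown - V_flint)  # diverging flint element
    return 1.0 / P1, 1.0 / P2

# A 1000 mm doublet pairs a 400 mm crown lens with a -667 mm flint lens:
f_crown, f_flint = achromat_powers(1000.0)
print(round(f_crown), round(f_flint))  # 400 -667
```

The stronger converging crown element is partly undone by the weaker diverging flint element, but because the flint glass disperses more strongly, the color spreads of the two lenses cancel at the shared focus.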


5.5 The Lick Mount Hamilton Observatory

The first mountaintop observatory to feature a large Alvan Clark refracting telescope was the Lick Observatory, built on top of Mount Hamilton, which rises 4300 ft above San Jose, California. James Lick, its namesake, was a colorful millionaire whose career was shaped by the unprecedented boom in prosperity that the Gold Rush brought to San Francisco. As part of his $750,000 donation, Lick included the requirement that he be buried in the base of the telescope, a source of some consternation to observers working late at night with the instrument. The Mount Hamilton site established the principle that mountaintop telescopes near the ocean provide unmatched "seeing," which was confirmed by the early advance teams of astronomers who took test observations from a variety of mountain locations near San Francisco. Mt. Hamilton was included within an 1876 congressional grant of nearly 2000 acres, and the state of California added an additional 511 acres. A road was constructed by 1876, and the lens was built by Alvan Clark, who received the contract in 1880, beating out the leading European glassmakers for the job. After 7 years of figuring and polishing, the lens was delivered to the site, and for 9 years, from 1888 to 1897, the 36″ Lick Observatory telescope was the world's largest telescope. The observatory featured a 75-foot dome and a movable elevating floor – both unprecedented at the time. By 1895, the refractor was joined by a 36-inch reflecting telescope on the summit, and the telescope began its campaign of observations with unprecedented quality of both optics and sky. The first galaxy images from Lick Observatory showed remarkable clarity; the Lick refractor was as superior to the other telescopes of its time as the Hubble Space Telescope was to ground-based telescopes a century later.
A 1938 article on the history of Lick Observatory cites among its major contributions in its first 50 years the discovery of four additional moons of Jupiter, the discovery of 4000 visual binary stars and 400 spectroscopic binary stars, observations of the tiny "white dwarf" companion to the star Procyon, 33 comets, and a variety of eclipse expeditions (Smiley, 1938). This inventory reflects the nature of late nineteenth-century and early twentieth-century astronomy and shows how the Lick 36″ refractor was optimized for high angular resolution, which enables viewing of objects like close binary stars that are often lost in the blur of atmospheric seeing. Lick's clear, relatively high-altitude site and the refractor's long focal length and high magnification made these discoveries possible.


With the benefit of an additional century of hindsight, one of the most significant impacts of Lick Observatory was as a proof of concept that inspired astronomers to build other mountaintop observatories, such as Mt. Wilson and Palomar. Lick Observatory was also the home of Heber Curtis, who would play a prominent role in the history of astronomy through his debate with Harlow Shapley about the nature of the nebulae, a turning point in our understanding of the Milky Way and the larger universe of galaxies. Curtis felt his most significant contribution to astronomy was his 1918 paper "Descriptions of 762 Nebulae and Clusters," which provided examples of Lick Observatory's stunning image quality from both of its 36″ telescopes. Lick Observatory has since been expanded to include the 120″ Shane reflecting telescope, constructed in 1960, and a suite of robotic reflecting telescopes, indicative of astronomy's evolution toward extragalactic studies and surveys of extrasolar planets (many of which were discovered at Lick Observatory). Lick Observatory also made lasting contributions to astronomical spectroscopy with the Keeler spectroscope, which helped clarify the nature of stellar and galaxy spectra and provided some of the first spectra of galaxies that would later help establish the expansion of the universe. George Ellery Hale, founder of the Yerkes and Mount Wilson observatories, considered astronomical spectroscopy the future of astronomy and the dividing line between astronomy and astrophysics, and he made it a central feature of his new Mount Wilson Observatory, built with the Lick Observatory as inspiration (Fig. 5.8).

Fig. 5.8  The evolution of astronomical spectroscopy at Lick Observatory mirrors the evolution of astronomy over the past century. Left: the Brashear-Keeler astronomical spectrograph attached to the Lick Observatory 36″ refractor. Right: the Shane Adaptive Optics and ShARCS instrument on the 120″ Shane reflecting telescope. (Images from https://www.lickobservatory.org/explore/research-telescopes/shane-telescope/ and https://digitalcollections-image.library.ucsc.edu/iiif/2/g7_32_dd_34_x/full/1000,/0/default.jpg)


5.6 The Yerkes Giant 40-Inch Refractor

The culmination of the era of the giant refracting telescope was the gigantic 40″ telescope at Yerkes Observatory, still the world's largest refracting telescope. The Yerkes refractor was funded by generous donations from Charles Yerkes and was housed in an ornately decorated observatory at Williams Bay, Wisconsin. The observatory was the brainchild of George Ellery Hale, who had been recruited at the age of 24 to head the astronomy program at the University of Chicago, a new university founded in 1890 with funds from the newly wealthy John D. Rockefeller. The University of Chicago itself was only a few years old when it began the project of building the world's largest telescope under Hale's direction.

Glass disks were purchased from the Mantois company in Paris, and Alvan Clark, the master optician, was contracted to figure the lenses into a doublet corrected for chromatic aberration. Alvan Clark and Sons were acknowledged as the world's best opticians for telescope lenses. The Clarks had successfully made the 36-inch Lick Observatory telescope, then the largest in the world, which was providing amazing results from atop Mount Hamilton in California after its opening in 1888. Earlier Clark telescopes were in use at the best observatories in the world, including the 24″ telescope at Lowell Observatory in Arizona, the 24″ at Harvard's observatory, and smaller telescopes at leading universities and colleges in the US, including a 23″ Clark telescope at Princeton University, built in 1882. From a quick start in 1890, the Clarks completed the 40″ lens and sent it to Yerkes for use in 1897. Once the lens was delivered, it was mounted in a massive riveted 63-foot-long steel tube with a clock drive – a structure that had been completed earlier and displayed at the 1892 Columbian Exposition.
The instrument towered four stories high, included a 50-ton pier, and was housed in a 90-foot-wide dome at Yerkes Observatory in Williams Bay, Wisconsin (Fig. 5.9). The Yerkes telescope weighs 38 tons and features a movable 75-foot-diameter floor that can be raised to bring observers to the height of the eyepiece (Fentress, 2019). The location of the Yerkes Observatory in rural Wisconsin was a compromise that allowed convenient access from the University of Chicago by rail while keeping a sufficient distance from Chicago and its soot and light. After its dedication in 1897, it was ready to become the premier facility in the world for discovery. Yerkes Observatory was home to the Astrophysical Journal – itself founded in 1895 by Hale – and hosted the inaugural meeting of the American Astronomical Society, founded by Hale in 1899.

Fig. 5.9  The Yerkes Observatory, in Williams Bay, Wisconsin, is home to George Ellery Hale's 40″ refractor, which was built in 1897 and still reigns as the world's largest refracting telescope. (Image from https://loc.getarchive.net/media/birthplace-of-astrophysics-the-university-of-chicagos-yerkes-observatory-williams)

Observers at the telescope also played a crucial role in unlocking the distances to stars. The extreme stability of the 40-inch lenses allowed the positions of stars to be measured on long photographic exposures that could be analyzed later to determine stellar positions to a small fraction of an arc second. The long focal length of the telescope provided the high magnification needed to measure parallaxes and thus determine the distances and masses of stars and star clusters. With the Yerkes telescope, Hale and his team of astronomers used the enormous magnification of the 40-inch refractor to detect the parallax of stars at ever greater distances, helping them map the locations of stars in three dimensions and determine the basic structure of the Milky Way. The Yerkes astronomers measured the parallaxes and distances to hundreds of stars and then, by comparing their apparent brightness with their distance, were able to derive their absolute luminosities (Kron, 2018). With this new technique, dozens of binary stars were observed for their distances and luminosities, and the orbits of those binary stars could also be measured. These observations extended the accuracy of our models of stellar structure and evolution and provided accurate measurements of the masses of stars (Struve, 1947).
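The parallax technique described above reduces to two simple relations: distance in parsecs is the reciprocal of the annual parallax in arc seconds, and absolute luminosity follows from apparent brightness via the distance modulus. A minimal sketch (the 0.1-arcsecond example is illustrative, not a measurement from the text):

```python
import math

def distance_parsecs(parallax_arcsec):
    """Distance from annual parallax: d [pc] = 1 / p [arcsec]."""
    return 1.0 / parallax_arcsec

def absolute_magnitude(m_apparent, d_pc):
    """Absolute magnitude from the distance modulus: M = m - 5*log10(d / 10 pc)."""
    return m_apparent - 5.0 * math.log10(d_pc / 10.0)

# A star showing a 0.1-arcsecond parallax lies at 10 pc, where apparent
# and absolute magnitudes coincide by definition:
d = distance_parsecs(0.1)
print(d)                           # 10.0
print(absolute_magnitude(6.0, d))  # 6.0
```

Measuring plate positions to "a small fraction of an arc second" is exactly what makes this chain work: a tenth-of-an-arcsecond error in parallax is the difference between a star at 10 parsecs and one at twice that distance.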


Knowledge of the distances, luminosities, masses, and even sizes of the stars unlocked many of their mysteries. Combined with stellar spectra, new techniques were developed for measuring the temperatures and sizes of stars directly from a low-dispersion spectrum. The resulting "spectral types" could be calibrated for luminosity and provided a basis for measuring distances to even more distant stars, in a technique known as "spectroscopic parallax."

5.7 Percival Lowell's Observatory

Another famous refracting telescope was the Lowell Observatory 24-inch refractor, built in 1894 in a specially equipped observatory on what Lowell named Mars Hill in Flagstaff, Arizona. Percival Lowell made his fortune in Boston and moved to Arizona to fuel his obsession with the planet Mars. His observatory was built to map the surface of Mars (which he suspected harbored an advanced civilization) and with the hope of discovering a new planet in the outer solar system. In the end, Lowell's aspirations were realized with important caveats. His maps of the surface of Mars were indeed the most detailed prepared to date – albeit with some additional features, such as the famous "canals," that appear to have arisen from the active imaginations of Lowell and his team, who were convinced that Mars was inhabited (Fig. 5.10).

As is so often the case in research, one of the most lasting contributions of Lowell Observatory came from an unexpected source. In 1901 Lowell hired a young observer named Vesto Slipher, a fresh graduate of Indiana University. Slipher began as a temporary observer, worked through his graduate studies, received a Ph.D. from Indiana University by 1909, and became acting director and then director after Lowell's sudden death in 1916. Lowell had equipped his observatory with the best instruments of the day, including spectroscopic instruments to further his interests in planetary astronomy. Slipher went to work acquiring photographs and spectra of planets and pioneered advanced approaches to measuring spectra that recovered the rotational velocities of the planets Mars, Jupiter, and Saturn (Omeka, 2022). As a side project, Vesto Slipher trained the Lowell 24″ telescope and spectrograph on some of the "spiral nebulae" and noticed that their spectra appeared to contain absorption lines from the familiar elements seen in stars.
Fig. 5.10  Map of Mars made with the Lowell Observatory 24-inch telescope, commissioned by Percival Lowell and prepared from visual and photographic observations. The image shows "canals," which observers at the time insisted were real features of Mars and possible indications of a civilization there. Map prepared from drawings and photographs of Percival Lowell and E.C. Slipher. (Image from https://tile.loc.gov/image-services/iiif/service:gmd:gmd3:g3182:g3182m:ct003805/full/pct:12.5/0/default.jpg)

Slipher also observed that within the nebular spectra were telltale signatures of motion. His first paper on galaxies reported spectra of the Andromeda galaxy, and he correctly noted that the Andromeda galaxy is moving toward us. As Slipher studied more galaxies, he noticed that their spectra resembled those of stars, but with evidence of rotation within the galaxy (which he reported in 1914), and with the wavelengths of the known spectral lines offset systematically redward for most of the galaxies. This redshift was consistent with the interpretation that most galaxies are moving away from our location. By 1917 Slipher had acquired a catalog of spectra for 25 spiral galaxies and found that 21 of the 25 showed redshifts. These observations provided the first evidence of large-scale, systematic Doppler redshifts of galaxies, with speeds as large as 1100 km/s, and they make Slipher an under-celebrated hero in the history of astronomy. Slipher's spectra were included in Hubble's later work, which combined new measurements of distances to galaxies with spectra to discover the apparent expansion of the universe. Slipher himself interpreted these apparent motions as evidence of the Earth's movement through the realm of galaxies and advanced the idea that our galaxy was like the other "island universe" nebulae he was observing (Ostriker & Mitton, 2015).
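The redshifts Slipher measured translate into recession velocities through the classical Doppler relation, v ≈ c · Δλ/λ, valid for speeds much less than light. A sketch (the calcium K rest wavelength is a standard value; the shifted wavelength is illustrative, chosen to land near the speeds the text quotes):

```python
C_KM_S = 299792.458  # speed of light, km/s

def doppler_velocity(lambda_observed, lambda_rest):
    """Radial velocity from a spectral-line shift: v = c * (obs - rest) / rest.

    Positive means redshifted (receding); negative means approaching.
    """
    return C_KM_S * (lambda_observed - lambda_rest) / lambda_rest

# Calcium K line at rest: 3933.7 Angstroms. A shift to 3948.1 Angstroms
# corresponds to roughly the ~1100 km/s speeds Slipher reported:
print(f"{doppler_velocity(3948.1, 3933.7):.0f} km/s")
```

The shifts involved are only a few angstroms out of several thousand, which is why a long-focus spectrograph and patient photographic exposures were needed to detect them.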

5.8 Models and Maps of Galaxies

With superb new refracting telescopes, nineteenth-century and early twentieth-century astronomers could follow the apparent motions of stars, which included both parallax and the "proper motion" of stars as they drifted across the sky over many years. The monumental patience of the astronomers extended across decades – and often exceeded the lifespans of the observers, who carefully tended their plates of observations and passed them on to posterity for future astronomers to measure. The parallax and proper motion catalogs of the twentieth century were made possible by the many stars studied with the Yerkes refractor and its nineteenth-century cousins, along with the newer telescopes at Lick Observatory and across Europe.

The picture of the Milky Way that emerged placed the Sun and our Earth within a vast system of stars that was considered our own "star cluster." Photographic studies, based on star counts, suggested our location to be near the center of this cluster. Additional observations revealed hundreds of new nebulae – some appearing within the disk of our own "cluster" and others apparently separate, especially the "spiral nebulae," which were primarily seen in the regions outside the disk of our Milky Way. The nature of the spiral nebulae – whether they were in orbit around our own star cluster or independent and separate objects – was unknown. To add to the picture, our galaxy is orbited by a swarm of star clusters known as "globular clusters," which also trace the center of the Milky Way Galaxy (Figs. 5.11 and 5.12).


Fig. 5.11  Plates from Burritt's atlas of 1856, "Burritt's Geography of the Heavens," illustrating the shapes of observed nebulae. Of particular note are the images of "our cluster," the Milky Way, which suggest our location in the middle of a blob of stars, and the "supposed structure of the universe," which shows clumps of stars and nebulae extending outward from our location. (Images from Library of Congress at https://www.loc.gov/resource/g3180m.gct00292/?sp=6&r=0.207,0.084,0.663,0.366,0)


Fig. 5.12  The rapid advances in galaxy imaging in the nineteenth century are shown in these two images. Left: a representation of Lord Rosse's view of the M51 "Whirlpool" galaxy, hand-drawn using the 72-inch telescope in the middle of the nineteenth century. Right: the first photograph of a galaxy, a 4-hour exposure made on December 29, 1888, by amateur astronomer Isaac Roberts with his own 20-inch reflector at prime focus. The image below shows the prevailing model of the Milky Way in the nineteenth century, with the "galaxy region of stars" surrounded by the "region of the nebulae." (Images from Library of Congress)


References

Elliott, I. (2014). Parsons, William. Biographical Encyclopedia of Astronomers, 1653–1655. https://doi.org/10.1007/978-1-4419-9917-7_1057
English, N. (2018a). Chronicling the golden age of astronomy (p. 81). Springer.
English, N. (2018b). Chronicling the golden age of astronomy (p. 156). Springer.
English, N. (2018c). Chronicling the golden age of astronomy (p. 112). Springer.
Fentress, S. (2019). Yerkes Observatory: Home of largest refracting telescope. Space.com. https://www.space.com/26858-yerkes-observatory.html
Galilei, G., & Drake, S. (1990). Discoveries and opinions of Galileo: Including The starry messenger (1610), Letter to the Grand Duchess Christina (1615), and excerpts from Letters on sunspots (1613), The assayer (1623). Anchor Books.
Herschel, W. (1785). On the construction of the heavens. Read at the Royal Society, London.
Holberg, J. (2008). Le Verrier and the discovery of Sirius B. Sky and Telescope, February 2008.
Hoskin, M. (1987). John Herschel's cosmology. Journal for the History of Astronomy, 18(1).
Kapteyn, J. C. (1922). First attempt at a theory of the arrangement and motion of the sidereal system. The Astrophysical Journal, 55, 302. https://doi.org/10.1086/142670
Kron, R. (2018). The scientific legacy of Yerkes Observatory. Physics Today, 6 July 2018. https://doi.org/10.1063/pt.6.4.20180706a
Omeka. (2022). Vesto Melvin Slipher · The Slipher brothers · Lowell Observatory archives. Collectionslowellobservatory.omeka.net. https://collectionslowellobservatory.omeka.net/exhibits/show/slipherbrothers/biography%2D%2Dvm-slipher-
Ostriker, J. P., & Mitton, S. (2015). Heart of darkness: Unraveling the mysteries of the invisible universe. Princeton University Press.
Smiley, C. H. (1938). The history of the Lick Observatory. The Scientific Monthly, 47(2), 128–135. https://www.jstor.org/stable/16834
Soter, S., & de Grasse Tyson, N. (2020). Friedrich Bessel: Discoverer of white dwarf Sirius B. American Museum of Natural History. https://www.amnh.org/learn-teach/curriculum-collections/cosmic-horizons-book/friedrich-bessel-sirius-b
Struve, O. (1947). The Yerkes Observatory: Past, present, and future. Science, 106(2749), 217–220. https://doi.org/10.1126/science.106.2749.217

6 The Discovery of the Big Bang

While the Yerkes, Lick, and Lowell refractors made great strides in unlocking the secrets of the cosmos with their exquisite angular resolution, the possibility of ever more powerful reflecting telescopes drew George Ellery Hale out to California. He had seen the benefits of moving a telescope to a mountaintop, as the Lick Observatory with its 36″ refractor had produced unprecedented image quality in the thin California mountain air. The Lowell Observatory in Flagstaff, founded at about the same time as Yerkes Observatory, was already producing much clearer images of planets and enjoyed much better weather than the murky skies above rural Wisconsin. Advances in telescope technology, photography, and stellar astronomy enabled techniques for deriving accurate distances and temperatures for the stars. Yet most of the mysterious "spiral nebulae" were still beyond the reach of the refracting telescopes of the era. A new and transformative tool was needed to extend our reach into the "realm of the nebulae" (the title of Edwin Hubble's 1936 book).

6.1 Building the Mount Wilson Observatory

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_6

The convergence of technologies and the proof of concept of a mountaintop telescope offered the opportunity to revolutionize astronomy and astrophysics with a large mountain observatory. Not one to let a great opportunity pass, George Ellery Hale began actively planning this observatory in 1903, just a few years after the Yerkes Observatory was completed. Hale visited California that same year and met up with W.W. Campbell, the Director of Lick Observatory. Hale and Campbell explored the region around Mount Wilson above Pasadena, riding on burros and discussing the possibilities for the future of astronomy. Campbell vouched for the benefits of locating telescopes on a mountain so close to the ocean, where warmer air aloft provides a "lid" on the cooler marine air near the surface. This "inversion layer," which torments cities like Los Angeles and Santiago, Chile, traps air pollution near the surface but also creates a layer of smooth, stable air above the mountaintops, providing superior seeing and improving the clarity and sharpness of images taken with the telescope.

Hale recognized the need for a generation of much larger reflecting telescopes that could provide the massive increase in light-gathering power necessary for obtaining photographs and spectra of galaxies. Hale was aware of the technique of silvering glass mirrors demonstrated by Léon Foucault, working under Le Verrier in Paris, decades earlier. By depositing a thin layer of silver on the surface of a glass mirror, the French opticians could obtain much higher reflectivity than polished metal, and these silver-on-glass mirrors could be easily resurfaced and offered greater thermal stability than the quirky solid metal mirrors of Lord Rosse and Herschel. By 1862, Foucault had completed a 31-inch silvered-glass reflecting telescope for the Marseille Observatory, pioneering the techniques for building the large reflecting telescopes that would enable a revolution in astronomy in the twentieth century. It was left to Hale to construct the first of these giant telescopes on Mount Wilson in California. Hale already owned a giant 60-inch diameter glass blank, cast by the French firm St. Gobain, that his father had purchased for him many years earlier, and now he was ready to use it for his new telescope. Hale left Chicago in 1904, bringing his 60″ mirror blank with him, to begin building the new largest telescope in the world.
The Yerkes 40″ telescope had pushed the limits of size for refracting telescope, as it required an enormous 63-foot long and 23.5-ton tube to hold the lens steady. The large and thin lenses of the Yerkes refractor faced enormous stresses as they were only supported on their edges in the refracting telescope. Hale’s new reflecting primary telescope mirror could be supported from below, enabling much heavier mirrors with diameters far greater than the 40″ Yerkes telescope. Unlike the Yerkes refracting telescope, Hale’s new reflecting telescope would use a much more compact Cassegrain configuration. Using a hyperbolic secondary lens, the Cassegrain telescope provided the same focal length in a structure that could be one-fifth the length of a corresponding refracting (lens) telescope tube (Fig. 6.1). Mount Wilson’s first reflecting stellar telescope was the 60″ telescope, completed in 1908, with a diameter sufficient to allow for spectrographs to

6  The Discovery of the Big Bang 

99

Fig. 6.1  The giant Mount Wilson 100″ Hooker  telescope, the world’s largest  at the time, was built in 1917 and used by Edwin Hubble to discover the linear correlation between distance and velocity for galaxies, now known as “Hubble’s Law,” which gives rise to our modern picture of the expanding universe. (Image from https://www.digitalcommonwealth.org/search/commonwealth:dz011372h)

analyze the starlight and measure velocities of motion of star clusters and galaxies. By 1917, the 100″ Mount Wilson Hooker telescope provided even more powerful capabilities, with nearly three times the light-gathering power of the 60″ telescope. By 1923, Edwin Hubble was able to use the 100″ telescope to measure individual Cepheid variable stars in the Andromeda galaxy and began using these stars to measure distances to dozens of other major galaxies. Hale’s new Mount Wilson Observatory also included massive solar observatories, which used towers and mirrors to direct the Sun’s light to huge underground spectrographs. The solar observatory arose from Hale’s original passion for solar astronomy, which he had pursued since childhood when his father allowed him to equip a research-grade Kenwood Astrophysical Observatory in the backyard of their Chicago home in Hyde Park. High atop Mount Wilson, Hale enjoyed much clearer skies and could map the details of solar prominences and sunspots with unprecedented clarity. Hale built a horizontal solar telescope at Mount Wilson in 1904 and then a 60-foot tower telescope in 1907, followed by an even larger 150-foot vertical solar tower in 1912. Hale’s enthusiasm for solar astronomy was so great that he also built a


B. E. Penprase

solar telescope in his home in San Marino, California, where he tested some of his instruments, such as the spectrohelioscope he had initially developed at his earlier Kenwood Observatory in Chicago. Hale's solar telescopes enabled the sunlight to be resolved by spectrographs with unprecedented precision, revealing fine details of the Sun's light that gave clues to surface motions, the dynamics of sunspots, and the variation in the Sun's magnetic fields, which were discovered to be the source of sunspots (McKee, 2003). With the development of Mount Wilson, astronomers were on the brink of a second paradigm shift about our place in the universe, as profound as the earlier shift provided by Copernicus. The new Mount Wilson telescopes allowed for unprecedented detail in images of galaxies, star clusters, and nebulae, combined with more advanced instruments for spectroscopy and photographic analysis. These techniques were beginning to challenge earlier notions of our place in the Milky Way and the nature of the Milky Way itself. Was the Milky Way just one of many nebulae? How was our Sun situated within the Milky Way? How could we know the nature of distant nebulae and the larger structure of the universe? Answers to these questions began to become possible through the patient work of astronomers using the giant new mountaintop telescopes in California. Harlow Shapley, one of the observers at Mount Wilson carefully working through his data, played a crucial role in answering them.

6.2 Harlow Shapley's Studies of the Milky Way

If Edmond Halley represented many of the emerging qualities of British science in the late seventeenth century, Harlow Shapley was an exemplar of the emerging American science of the early twentieth century. Born on a small farm in rural Missouri, Shapley worked his way through school as a newspaper reporter and, after attending the Presbyterian Carthage College, transferred to the University of Missouri to study journalism. When he discovered that the journalism school would not open for another year, Shapley shifted toward astronomy. Shapley's work at the local Laws Observatory in Columbia, Missouri, earned him a recommendation that garnered a fellowship at Princeton to work with one of the world's top astronomers, Henry Norris Russell. Shapley made the best of the opportunity and did heroic work measuring variable stars, eventually cataloging the orbits of 90 eclipsing binary stars and working to clarify the nature of the enigmatic Cepheid variable stars. Henrietta Leavitt at Harvard had discovered that the periods of these Cepheid stars were tightly related to their luminosities, making them an invaluable tool for measuring


distances by comparing their apparent brightness with their luminosity. Leavitt's work developing this Period-Luminosity Relationship enabled astronomers like Shapley to begin unlocking distances to these stars across the Milky Way galaxy and beyond. After graduating from Princeton, Shapley moved to Mount Wilson in 1913 to work with George Ellery Hale in a new program using the new 60″ telescope to study Cepheid variable stars in the distant objects known as globular clusters. Shapley's work produced dozens of papers that confirmed Leavitt's period-luminosity relationship and extended the known data to include Cepheid variable stars in dozens of globular star clusters. These data allowed Shapley to measure the distances to these faraway globular star clusters for the first time. To Shapley's surprise, the clusters lay at enormous distances, which by his estimate exceeded 60,000 light years. The clusters were also bunched together on one side of the sky, which produced an additional surprise – the Sun and solar system appeared to be at the edge of the system of orbiting globular clusters and presumably also at the edge of the Milky Way (Fig. 6.2). Shapley's observations also pointed to a much larger size for our Milky Way galaxy – perhaps as large as 300,000 light years (Hoskin, 2016). This shift in our location to the edge of the Milky Way was at odds with the views of many leading astronomers, including William Herschel, who had placed our Sun at the center of the Milky Way system. John Herschel, William's son, had noticed in his work in the 1830s that the globular clusters were clumped on one side of the sky near Sagittarius. Using data on the motions and counts of stars, Jacobus Kapteyn had estimated that the Milky Way was a flattened disk and placed the Sun slightly off-center.
However, Shapley's model was completely different, placing the Sun at the edge of a vast disk of stars far greater in extent than had previously been imagined. Shapley's proposed dimensions of the Milky Way galaxy also raised questions about the nature of the other "spiral nebulae" seen in the sky. Whether these spiral nebulae, like the globular clusters, were in orbit around our Milky Way galaxy or were separate galaxies was a mystery at the beginning of the twentieth century.
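The Cepheid distance technique behind these measurements can be sketched numerically. The short Python function below is an illustrative reconstruction, not Shapley's actual calibration: it assumes a modern V-band period-luminosity relation (the coefficients are assumptions for this sketch) and then inverts the standard distance modulus m − M = 5 log₁₀(d / 10 pc).

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    """Distance to a Cepheid from its pulsation period and apparent magnitude.

    Illustrative only: uses an assumed modern V-band period-luminosity
    relation, M = -2.43 * (log10(P) - 1) - 4.05 (not Shapley's calibration),
    then inverts the distance modulus m - M = 5 * log10(d / 10 pc).
    """
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    return 10.0 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# A 10-day Cepheid observed at apparent magnitude 15 lies at roughly
# 65,000 parsecs (about 210,000 light years) under this assumed calibration.
print(f"{cepheid_distance_pc(10.0, 15.0):,.0f} parsecs")
```

Because brightness falls off with the square of distance, even a modest error in the assumed luminosity scale propagates directly into the distances, which is exactly what made the later recalibration of the Cepheids so consequential.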

6.3 The Shapley-Curtis Debate

To resolve the question of the nature of the "spiral nebulae" and showcase some of the new results from the spectacular American observatories, the US National Academy of Sciences convened a debate on April 26, 1920, in

Fig. 6.2  The Milky Way galaxy, showing the locations of measured globular clusters (dots) and the location of the Sun (red X), based on the observations of Harlow Shapley. The large circles have radii increasing by 10,000 parsecs, and the diagram shows the radius of the Milky Way as over 30,000 parsecs or over 90,000 light years, far above the modern value. (Image from https://history.aip.org/exhibits/cosmology/ideas/larger-image-pages/pic-island-1919.htm)

Washington, DC, to discuss "the distance scale of the universe." Shapley represented the Mount Wilson Observatory and presented his work measuring the locations of the globular clusters and the scale of the Milky Way. His opponent was Heber Curtis, a former Latin and Greek professor and an observer at the Lick Observatory whose photographs of galaxies had convinced him that the "spiral nebulae" were full-fledged galaxies like the Milky Way. Their debate focused on two questions: whether the nebulae were additional galaxies, fully the size of our Milky Way, and where the Sun was located within the Milky Way. Many accounts of the "great debate" noted that Shapley's delivery was halting and ineffective and that Shapley had advanced the mistaken idea that spiral nebulae were gas clouds within our Milky Way (Hoskin, 1976). Shapley was influenced both by his measured size of the Milky Way and by some recent data from Mount Wilson, which seemed to indicate that the spiral nebulae were visibly rotating between observations separated by just a few years, suggesting they must be much smaller systems. These data, taken by Adriaan van Maanen with the 100″ Mount Wilson telescope, turned out to be in error; reexamination of the plates showed that there was no visible rotation of the spiral nebulae (Fernie, 1970). Shapley's careful studies of globular clusters did, however, give the correct information about the relative location of the Sun within the Milky Way.


Shapley was correct in shifting our location away from the center of our galaxy and, much like Copernicus, challenged our conception of the Sun as occupying a privileged and central position. Curtis was correct in noting that the spiral nebulae were indeed Milky Way-sized systems but incorrect in arguing for the Sun's location at the center of a much smaller galaxy. For his part, Curtis had observed several nebulae with novae – stellar explosions of the kind seen earlier in our Milky Way. These novae were much fainter than corresponding explosions in our galaxy – some ten times fainter – suggesting that the galaxies must lie at distances far greater than the size of the Milky Way. Harlow Shapley returned to Mount Wilson and discovered dozens of new Cepheid variable stars in distant globular clusters, in the Milky Way disk, and in the nearby Andromeda "nebula." Working closely with Milton Humason, Mount Wilson's former mule driver and handyman, Shapley focused on the Andromeda galaxy, resulting in a 1920 paper entitled "A Seventeenth Nova in the Andromeda Nebula." At this time, the nature of the Andromeda "nebula" was still under debate, so Shapley and Humason adopted the term "nebula" rather than considering it a galaxy like ours. The remaining steps needed to confirm the distances and motions of the spiral nebulae would be taken by another colorful figure in the history of astronomy, Edwin Hubble. Hubble's work would prove beyond a doubt the immense distances and sizes of these spiral nebulae, which were often the size of our galaxy or even larger, as in the case of the Andromeda galaxy.

6.4 Edwin Hubble's Discovery of the Expanding Universe

Edwin Hubble was a fascinating combination of ideal astronomer and dashing adventurer. He did his undergraduate work at the University of Chicago (where he also played basketball, football, and baseball and ran track), held a Rhodes scholarship at Oxford from 1910 to 1912 (where he acquired a British accent and other Anglophile affectations), and rapidly completed his Ph.D. thesis at Chicago before shipping off to World War I in 1917. After the war, Hubble briefly entertained the idea of becoming a professional boxer (apparently knocking out the German heavyweight champion in one bout) and traveled through Europe contemplating other careers, such as law, before deciding to return to his interest in astronomy (Myint, 2015). Hale invited Edwin Hubble to join the staff at Mount Wilson in 1919, and Hubble's return to the US and astronomy was fortuitous. Hale and Hubble


shared a talent for matching opportunities with preparation. In Hubble's case, he arrived just as the Mount Wilson telescopes were operating at full capacity, with a talented team in place that included Harlow Shapley and Milton Humason, who by this point was a master observer, photographic technician, and mechanical wizard. Vesto Slipher had already reported the motions of galaxies from his pioneering work at Lowell Observatory. Hubble was able to incorporate and extend Slipher's spectroscopy to include even more galaxies. Hubble also benefitted from Shapley's work and applied Shapley's carefully observed calibrations of Cepheid luminosities to the Cepheids in nearby galaxies. From these, Hubble derived galaxy distances of many millions of light years – far beyond the edge of even Shapley's overly large estimate of the Milky Way's diameter. Hubble's contribution was to verify the immense distances of the galaxies using both Cepheid variable stars and novae, and then to combine those distances with an extended catalog of galaxy spectra that included results from both Mount Wilson and the Lowell Observatory. Hubble's breakthrough was to show that the apparent velocities of galaxies increased proportionally with their distances. This discovery, the first evidence of the Big Bang, extended our models of the universe far beyond the Milky Way into a vast expanse of expanding space filled with billions of galaxies. The constant of proportionality between expansion velocity and distance is now known as the Hubble constant in his honor. Hubble's discovery was made possible only by building on the earlier work of Slipher and Shapley, and especially by his partnership with former mule driver Milton Humason. Together, Hubble and Humason pushed the new 100″ Mount Wilson telescope to its limits to unlock the secret of our universe's origin.
Humason's tireless observations included guiding the 100″ telescope through the cold California nights in complete darkness, with exposures lasting many hours, and then developing the small photographic plates to recover the spectra of distant galaxies. From careful observations of the large galaxies M33 and M31 (the Triangulum and Andromeda galaxies), Hubble and Humason detected new Cepheid variable stars, enabling an accurate determination of the distances to these galaxies by 1924. With the new 100″ telescope, Hubble and Humason could also measure the spectra of these and other galaxies with much more precision than was previously possible. The results from Hubble and Humason's new spectra from Mount Wilson were combined with Vesto Slipher's spectroscopy of galaxies from Lowell Observatory to provide a complete picture of how the motions of spiral nebulae (or galaxies) correlated with distance. The galaxies, nearly all apparently flying away from us, at first seemed to reinstate our privileged location at the center of the universe. However, Hubble's data showed that the expansion


velocities continued to increase linearly over vast distances, and this linearity proved that the expansion was a property of space itself – and not just the motions of individual galaxies through space. From this insight, it was clear that our privileged position is just an illusion – observers at any other location in an expanding universe would see the rest of space moving away in all directions in the same way. Hubble's discovery of cosmic expansion revolutionized cosmology and our knowledge of the universe (Fig. 6.3). One of Hubble's essential discoveries was finding new Cepheid variable stars in the galaxies M33, NGC 6822, and the Andromeda galaxy. These variable stars had been calibrated previously by Harlow Shapley, and when Hubble applied Shapley's calibration to calculate their distances, he recognized that the galaxies were millions of light years away – which placed them well beyond the reach of our own Milky Way. This revelation showed galaxies to be separate "island universes" and not satellites of our galaxy. Hubble's results were first reported in the New York Times in 1924 and later presented to the American Astronomical Society in 1925. The results made Hubble an international celebrity and demolished Shapley's previous conception of a much smaller universe centered on the Milky Way. Indeed, when Shapley

Fig. 6.3  Edwin Hubble's original plot of the velocity of galaxies as a function of galaxy distance, which formed the basis of the Hubble velocity-distance relation. The slope of the line, in units of kilometers per second per megaparsec, became known as the Hubble Constant or Ho and is one of the critical measurements describing the rate of expansion of our universe. This image is from the 1929 paper by Hubble presented at the National Academy of Sciences. (Image from https://imagine.gsfc.nasa.gov/features/yba/M31_velocity/hubble_law/more.html)


received the letter from Hubble about this discovery, Shapley responded that it was “the letter that has destroyed my universe” (Voller, 2021a, p. 122).

6.5 Hubble Expansion and the Hubble Constant

Once the distances and velocities of the galaxies were put on a graph, the linear relationship between them was clear. This kind of graph became known as a "Hubble Plot," and the slope of the relationship became known as the Hubble Constant or Ho, the most important cosmological parameter for decades afterward. The Hubble Constant is expressed in units of km s⁻¹ Mpc⁻¹, and in Hubble's original plot it had a value of 500 km s⁻¹ Mpc⁻¹, well off the actual value due to an error in the calibration of Cepheid star distances. Hubble had assumed the Cepheid stars in Andromeda and other galaxies were like those in our galaxy, before it was discovered that the effects of heavy elements in stellar atmospheres systematically decrease stellar luminosity. With the new calibration of the stars, accounting for these different stellar "populations," the value of the Hubble Constant was corrected and has converged in recent years to a value of Ho = 70 km s⁻¹ Mpc⁻¹. The Hubble constant can be used to measure distances to galaxies with just a measurement of the "redshift" velocity. The redshift z is determined from the shift in wavelengths δλ in a galaxy's spectrum, using z = δλ/λ = v/c, and the distance d of the galaxy in Mpc is then simply d = cz/Ho. The simplicity of this technique enabled the measurement of distances to thousands of galaxies based on the derived value of the Hubble constant Ho (Fig. 6.4).
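The redshift-to-distance recipe can be written out directly. The following is a minimal Python sketch of the formulas quoted above, using the Ho = 70 km s⁻¹ Mpc⁻¹ value cited in the text; the example wavelength shift is hypothetical:

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per Mpc (value quoted in the text)

def redshift(lab_wavelength, observed_wavelength):
    """Redshift from the fractional wavelength shift: z = dl/l."""
    return (observed_wavelength - lab_wavelength) / lab_wavelength

def hubble_distance_mpc(z):
    """Low-redshift Hubble-law distance d = cz/Ho (valid only for z << 1)."""
    return C_KM_S * z / H0

# Hypothetical example: the H-alpha line (rest wavelength 656.3 nm)
# observed at 663.0 nm in a galaxy's spectrum.
z = redshift(656.3, 663.0)
v = C_KM_S * z                  # apparent recession velocity, km/s
d = hubble_distance_mpc(z)      # distance, Mpc
print(f"z = {z:.4f}, v = {v:.0f} km/s, d = {d:.0f} Mpc")
```

Note that the linear form d = cz/Ho holds only for small redshifts; at larger z, the full relativistic and cosmological corrections must be applied.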

6.6 Models of Cosmology in the Early Twentieth Century

The new picture of the universe arising at the time of Hubble's discovery emerged amid massive upheaval – both from the revolution in physics sparked by Einstein's work and from the traumatic events of World War I. Springing from the ashes of World War I was an outpouring of scientific discovery that rewrote our notions of time and space and set the stage for modern cosmology. One of the pioneers of this revolution came from Russia, in the person of Alexander


Fig. 6.4  The effect of cosmic expansion on galaxy spectra. The absorption lines from elements can be measured in a laboratory and observed in the atmospheres of stars in distant galaxies. As those stars move away from us, these wavelengths get shifted toward longer, redward wavelengths by the Doppler effect. The shift in wavelength divided by the laboratory wavelength is known as the redshift and can be used with the Hubble Constant to measure distances to galaxies. (Image courtesy of ESO at https://supernova.eso.org/exhibition/images/1111_DUM_2/)

Friedman, who was working on theoretical models of the universe using the newly developed ideas of Einstein's theory of general relativity. Alexander Friedman began working in 1917 on equations to describe the universe with Einstein's theory. In 1922, he published a model for the large-scale structure of space that included an adjustable constant allowing either a static or an expanding universe – an innovation that predated Hubble's discovery by several years. Another key character in the early days of astrophysical cosmology was the Belgian scientist and ordained priest Georges Lemaitre, who worked closely with Harlow Shapley at Harvard and with Arthur Eddington at Cambridge. Lemaitre's education began at a Jesuit school in Belgium, followed by university study in engineering beginning in 1911 and a jarring stint at the front lines of World War I with the Belgian army, where his courage in surviving German gas attacks earned him a military medal. After the war, Lemaitre received a doctorate in physics in 1920 and then enrolled at a seminary, where he was ordained in 1923. Lemaitre's quest was a relentless campaign to find truth, no doubt made stronger by his experiences in the war. When asked whether his religious practice conflicted with his scientific investigations, he quipped that "there were two ways of arriving at the truth. I decided to follow them both" (Voller, 2021b, p. 124). Lemaitre combined careful study of Einstein's new theory with the new observations of the expansion of space-time from Mount Wilson. In 1925 Lemaitre met with Hubble at Caltech and with Slipher in Flagstaff (where he carefully examined Slipher's latest spectra of galaxies) and then combined the best available observational data with a physical model incorporating general relativity, which he had worked on in his Ph.D. studies years earlier. Lemaitre's 1927 paper describes a universe created from the explosion of a primordial "atom," and his model tracked the expansion of the universe to the present moment and into future eras, accounting for the effects of gravity, which would slow the expansion with time (Ostriker & Mitton, 2015). Einstein regarded Friedman's and Lemaitre's models with strong reservations, as he was uncomfortable with a non-static, evolving universe. Einstein's response was to choose the value of the constant in Friedman's equations that kept the universe from expanding – a choice he later regarded as "his greatest blunder." Einstein was also slow to accept Lemaitre's model and commented to him at the 1927 Solvay Conference in Brussels, "Your calculations are correct, but your physics is abominable" (Voller, 2021b, p. 124).
As telescopes extended our maps of the sky and began to measure the structure of the universe, our conceptions of time and space were rewritten. The profound insights into the interconnections between space and time from special relativity and general relativity enabled the development of new models of our physical universe. A key element in developing these models was a deeper understanding of the nature of light itself, which we will explore further in the next chapter.

References

Fernie, J. D. (1970). The historical quest for the nature of the spiral nebulae. Publications of the Astronomical Society of the Pacific, 82, 1189. https://doi.org/10.1086/129028


Hoskin, M. A. (1976). The "great debate": What really happened. Journal for the History of Astronomy, 7(3), 169–182. https://doi.org/10.1177/002182867600700302
Hoskin, M. (2016). Harlow Shapley: The making of an observatory director. Journal for the History of Astronomy, 47(3), 317–331. https://doi.org/10.1177/0021828616660046
McKee, M. (2003). A timeline of Mount Wilson Observatory. Astronomy.com. https://astronomy.com/magazine/2003/07/a-timeline-of-mount-wilson-observatory
Myint, B. (2015, December 29). Edwin Hubble: 7 facts about the man who changed the universe. Biography. https://www.biography.com/news/edwin-hubble-biography-facts
Ostriker, J. P., & Mitton, S. (2015). Heart of darkness: Unraveling the mysteries of the invisible universe. Princeton University Press.
Voller, R. (2021a). Hubble, Humason and the Big Bang (p. 122). Springer.
Voller, R. (2021b). Hubble, Humason and the Big Bang (p. 124). Springer.

7 The Nature of Light

Modern telescopes allow us to capture the mix of light accessible to us, which combines light from nearby stars in relatively recent times with light from more distant galaxies from millions or even billions of years in the past. This light travels through space at a finite speed and samples the space it crosses: it bends in the presence of mass or differences in density, and it is sometimes interrupted in its journey by bouncing off atoms or dust particles or by being absorbed by electrons within atoms through quantum transitions. Our telescopes give us a tool to explore all of the observable universe around us, including views of the distant past in our observations of faraway galaxies. While we have derived the properties of light from centuries of physics experiments, the nature of light and its origins were a mystery to past generations of scientists, who interpreted the origins of light and vision in the context of their time and culture.

7.1 Ideas About Light Over the Centuries

It is helpful to examine how our conceptions of what light is have evolved over the centuries. Many of the Western ideas about the nature of light, as well as early models of the universe, were inherited from thinkers in ancient Greece. One of the common historic conceptions of light is that, like sound, it requires a medium to be transmitted. This medium, popularly known as aether, originates in very early Greek thought. The aether had its place within the cosmology of ancient Greece as the medium outside the sublunary sphere that would not only transmit light but also help with the motions of the spheres.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_7



The aether as a medium would be expected to make its presence known in some way, such as by providing a "drag" on light moving along with the Earth's motion. However, since the speed of light was so fast, early scientists lacked any possibility of testing their ideas about light and the aether. Another common notion in earlier centuries involved the origin of light rays. Since our vision is so integrally connected with our mind and eyes, many ancient thinkers believed that rays of light emanated from our eyes – something like a radar system – and then bounced off the objects we could see. Euclid of Alexandria wrote in his work on optics that vision arose from rays issuing from our eyes. This idea of "rays," which later plays a central part in optics, correctly notes that light travels in straight lines through space. The practical problems of what happens when our eyes close, or why there is darkness, apparently did not bother Euclid. Another interesting consequence of this theory involves our ability to view stars – which, in this conception, would require a round trip from our eyes to the stars and back. This would require the speed of light to be nearly infinite for it not to create easily observed problems. Other thinkers suggested that the light we see emanates from luminous objects, with examples including Empedocles of Acragas (c. 490–435 BC), who spoke of light traveling from the stars, and other scholars who described the nature of light from the Sun and stars through an analogy with fire, which clearly arises as a property of the source itself. The actual speed of light, if finite, should enable a clever experiment to measure its delay. Early thinkers from Greece and the Islamic world recognized that the speed of light could be finite. Some European philosophers were concerned that this travel time would impact the widely believed astrological forecasting.
Two leading Islamic scientists, Ibn Sina (980–1037) and al-Hasan Ibn al-Haytham, both believed that light travelled at a finite speed. Galileo proposed an experiment for measuring the speed of light in 1638: a pair of experimenters with lanterns, separated by a large distance, in which one would uncover his lantern and measure the delay before the distant observer, on seeing the light, immediately uncovered the second lantern. This experiment was apparently tried in 1667 by the Florentine Academy with observers separated by a distance of 1 mile, but the team did not observe any delay. The experiment also needed to account for the reaction time of the experimenters – which would be far greater than the travel time of the light beam over 1 mile: a 2-mile round trip takes light only about 11 microseconds, while human reaction times are on the order of a fifth of a second! After Galileo, the best minds in the world debated the nature of light. Newton and Descartes believed that light was a particle – which Newton called "corpuscles" – and that it would be expected to travel faster in dense media. This prediction was based on observations of how sound waves propagate in materials, showing an apparent increase in speed within denser materials. The


early physicists expected that the speed of the wave would be proportional to the square root of the elastic (Young's) modulus divided by the density. This created a debate about the density of an aether that could enable the rapid travel of light without providing friction for celestial objects. Other physicists, like Huygens, Fresnel, and Young, believed that light was a wave that traveled slower in water or glass. The division of thought between early physicists – whether light is a particle or a wave – bears some superficial resemblance to the controversy that arose centuries later, when quantum mechanics presented evidence that matter (and light) could behave both as a particle and as a wave. Newton's thoughts on the particle nature of light and the aether are expressed in his 1704 work Opticks. Newton suggested that "Æther (like our Air) may contain Particles which endeavour to recede from one another (for I do not know what this Æther is) and that its Particles are exceedingly smaller than those of Air, or even than those of Light." Newton did, however, make estimates for the density and elasticity of the aether in his treatise, suggesting that the aether would be very elastic and rarefied, in fact, "700,000 times more elastick than our Air, and above 700,000 times more rare." With this prescription, Newton was comfortable that the very thin aether would not get in the way of planetary motions, since "its resistance would be above 600,000,000 times less than that of Water," which would "scarce make any sensible alteration in the Motions of the Planets in ten thousand Years." Speculation about the mysterious aether continued throughout the nineteenth century, and Lord Kelvin, in his Baltimore Lectures of 1884, estimated that the density of the aether was about 10⁻²⁰ pounds per cubic foot and commented that the aether was "the only form of matter about which we know nothing at all." Aether as a medium for carrying light was included in all physical models prevailing in the nineteenth century, partly from a philosophical preference for consistency with models of how sound propagated and partly from the lack of any other known mechanism for propagating light. The lack of a medium for propagating gravity also presented problems, and Lord Kelvin, in an 1893 book, noted that the gravitational force from the Sun on the planets "seemed to violate the supposed philosophical principle that matter cannot act where it is not" (Spence, 2019a).
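The elastic-medium formula the early physicists invoked, v = √(modulus/density), can be checked against sound in air and water. The comparison shows why denser water nonetheless carries sound faster (its far larger modulus wins), the observation that shaped the corpuscular expectation for light. A brief sketch using standard handbook values (illustrative figures, not measurements from the text):

```python
import math

def wave_speed(modulus_pa, density_kg_m3):
    """Elastic-medium wave speed: v = sqrt(modulus / density)."""
    return math.sqrt(modulus_pa / density_kg_m3)

# Sound in air: adiabatic bulk modulus (gamma * p) ~ 1.4 * 101,325 Pa,
# density ~ 1.2 kg/m^3
v_air = wave_speed(1.4 * 101_325, 1.2)      # about 344 m/s

# Sound in water: bulk modulus ~ 2.2 GPa, density ~ 1000 kg/m^3
v_water = wave_speed(2.2e9, 1000.0)         # about 1480 m/s

print(f"air: {v_air:.0f} m/s, water: {v_water:.0f} m/s")
```

Applying the same logic to the aether produced the puzzle described above: a medium stiff enough to carry light at its enormous speed, yet thin enough not to slow the planets.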

7.2 Roemer's Speed of Light Measurement

While little progress was made in measuring the density or other properties of the aether, excellent progress in measuring the speed of light came soon after Galileo's failed lantern experiment. Ole Roemer, in 1676, developed an ingenious method for measuring the speed of light, which made use of


Galileo's discovery of the moons of Jupiter. Astronomers had noticed that Jupiter's moons have frequent and regular transits – with Io transiting every 42.5 h and Europa every 85 h. This regularity could form the basis for a sort of astronomical clock that mariners could use, since Jupiter's moons could be checked from any location on Earth. Combining the observations from a given location with a tabulation of the timing of Jupiter's moons could provide a method to find longitude – an urgent problem of the late seventeenth century. Giovanni Cassini led the effort to use this new astronomical clock with his staff at the Paris Observatory, which included the young Danish astronomer Ole Roemer. After the team compiled a table of transit timings in 1670, Cassini sent some of his astronomers to a remote location to test the technique for longitude determination. The group, led by Jean Picard and accompanied by the young Ole Roemer, set off for the site of Tycho Brahe's old observatory, known as Uraniborg, over 1000 km away in Denmark. By this time, Uraniborg was already a historic observatory, famous for its role in providing the data for Kepler's determination of planetary orbits nearly a century earlier. Those observations had concluded around the time of Brahe's death in 1601, and the observatory was largely abandoned when Picard and Roemer conducted their new experiments. After compiling a set of measurements from Uraniborg, Roemer went back to Paris to analyze his data and found seasonal variations in the timing of the transits. Mysteriously, the transits appeared to be a bit early when Jupiter was moving toward opposition (its closest approach) and a bit late as Jupiter slid past opposition into the earlier evening skies. In September 1676, Roemer presented a paper on the measurement of these delays and suggested that they were due to the finite speed of light.
Roemer explained the delays as the simple result of Earth's moving closer to and farther from Jupiter, causing a transit to be viewed earlier or later depending on whether the distance between Earth and Jupiter was decreasing or increasing (Spence, 2019b). Calculating the speed of light required the help of another great seventeenth-century astronomer, Christiaan Huygens. Huygens combined Roemer's measured delays of the transits of Jupiter's moons with the known distance between the Earth and the Sun and the distance to Jupiter (calculated using Kepler's laws). Putting the data together in 1690, and using the best available value for Earth's orbital radius (the astronomical unit), Huygens estimated the speed of light at 214,000 km/s: far from the modern value of 300,000 km/s, but a fantastic feat of astronomy for the time.
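The arithmetic behind a Huygens-style estimate can be sketched in a few lines. The delay and the value of the astronomical unit below are illustrative round numbers of the kind available in the seventeenth century, not Huygens' exact figures: the accumulated delay in the transit timings corresponds to light crossing the full diameter of Earth's orbit.

```python
# Huygens-style estimate of the speed of light from Roemer's delays.
# Illustrative values, not Huygens' exact figures:
delay_s = 22 * 60     # ~22 min accumulated delay across Earth's orbit
au_km = 1.4e8         # a seventeenth-century-style estimate of the AU, in km

orbit_diameter_km = 2 * au_km   # light must cross the full orbital diameter
c_est = orbit_diameter_km / delay_s

print(f"Estimated speed of light: {c_est:,.0f} km/s")
# With these inputs the estimate lands near 212,000 km/s,
# in the neighborhood of Huygens' published 214,000 km/s.
```

The estimate is crude mainly because both inputs were poorly known: the delay was hard to time precisely, and the astronomical unit itself was uncertain, which is the subject of the next section.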

7  The Nature of Light 


7.3 The Transit of Venus and the Astronomical Unit

One of the vital missing ingredients for determining the speed of light using Roemer's method was a more accurate measurement of the astronomical unit, the average distance between the Earth and the Sun. Kepler's laws allowed the relative spacing of the planets to be calculated in terms of the Earth's distance, but the actual value of the Earth's distance from the Sun was not accurately known. The irrepressible Edmond Halley played a role in helping solve the puzzle. Halley noticed that the 1761 and 1769 transits of Venus would provide a perfect opportunity for measuring the parallax of Venus from two widely separated locations on Earth – a chance to measure the distance between Earth and Venus directly, thereby revealing the precise size of the solar system. Halley's earlier stay on Saint Helena as a young man mapping the southern stars had coincided with a transit of Mercury in 1677, which convinced him of the possibility of using planetary transits to measure the distance scale of the solar system.

While Halley did not live to see the Venus transits, he set plans in motion to equip expeditions to use these rare events to measure the astronomical unit. Well before the 1761 transit, ships from England and France were equipped and ready to traverse the Earth to conduct the observations. One British crew was dispatched to Saint Helena, Halley's earlier home, where the astronomer Nevil Maskelyne set up a new observatory. A second British crew was dispatched to Sumatra in present-day Indonesia but was attacked by a French frigate and had to turn back; fortunately, the crew reached Cape Town in time to observe the 1761 transit from there. These observations were ultimately unable to provide an accurate result, partly due to weather and partly due to uncertainties in the longitudes of the various sites (Spence, 2019c, p. 41).
As unfortunate as the British expeditions were, the misfortunes of the French astronomer Le Gentil were even more dramatic: he spent nearly a decade attempting to observe both Venus transits and failed both times. Le Gentil set off for India for the 1761 event but stopped at Mauritius on the way and suffered from dysentery. His destination in India, Pondicherry, was under siege from British forces, so he decided instead to observe from the tiny island of Rodrigues, 560 km east of Mauritius. His ship to Rodrigues was re-routed back to Mauritius, however, and without passage back to Rodrigues he attempted again to reach Pondicherry. His ship was becalmed, and with Pondicherry now under complete British control, he had to turn back toward Mauritius and make his observations from the ship. Needless to say, the rolling deck could not provide quality observations of the 1761 transit.


To prevent any further problems, Le Gentil moved to India to prepare for the 1769 transit. With ample time, he built an observatory and awaited the day of the transit while in residence in India for nearly 8 years. Finally, when the day of the Venus transit came in 1769, cloudy weather prevented Le Gentil from getting any data. Dejected, he returned to France in 1771, only to find that his long absence had convinced his supervisor that he was dead; his position had been given to someone else, and his belongings had been distributed to his family. In the end, Le Gentil recovered his position and estate and wrote a book about his tragic misadventures. His book, A Voyage in the Indian Ocean, was published in two volumes in 1779 and 1791, and he lived until 1792. As additional consolation, the crater Le Gentil on the Moon was named for him in 1961.

Fortunately for science, many other astronomers from around the world successfully observed the 1769 Venus transit. The 1769 campaign benefited from the end of hostilities in the Seven Years' War, and a global effort was made to capture the event, building on the experience gained from the unsuccessful measurements of 1761. Even better, Venus's predicted path across the Sun's disk in 1769 was more favorable for accurate measurement. French astronomers led by Jean-Baptiste Chappe were based in Baja California, joined by Russian astronomers, funded by Catherine the Great, who brought additional equipment and telescopes. The French team was caught in a storm and barely reached Baja California. Chappe did succeed in getting good observations, but he and many of his expedition died of typhus soon afterward at the Mission San Jose del Cabo (Nunis et al., 1982). The survivors did, however, return to Paris with the notebook of data. British observations of the Venus transit were taken by the explorer James Cook from Tahiti.
Cook and his crew on the ship Endeavour were on the island of Tahiti in 1769, and the observations were helped by the fact that Tahiti's longitude and latitude were well known. Cook and his team set up a portable observatory on Tahiti that included a telescope and a pendulum clock to observe the timing of the entry and exit of Venus onto the disk of the Sun (Fig. 7.1). As Cook wrote in his diary:

This day prov'd as favourable to our purpose as we could wish, not a Clowd was to be seen the whole day and the Air was perfectly clear, so that we had every advantage we could desire in Observing the whole of the passage of the Planet Venus over the Sun's disk (Phillips, 2022).


Fig. 7.1  Cook's portable observatory used in Tahiti to measure the 1769 Venus transit. The pendulum clock provided precise timing of the duration of the transit, which was used to measure the apparent position of Venus on the Sun's disk. Over 300 astronomers from around the world measured the event, and it provided the first accurate measurement of the astronomical unit and helped improve our estimate of the speed of light. (Image from https://en.wikipedia.org/wiki/1769_transit_of_Venus_observed_from_Tahiti#/media/File:Observtory.jpg)

In total, on June 3, 1769, about 250 astronomers worldwide observed the event from about 130 different locations. The resulting publication by Thomas Hornsby in 1771 derived an estimate of 93,726,900 English miles, within 1% of the modern value of the astronomical unit (Spence, 2019d, p. 45). The observations of the Venus transit by Cook and dozens of other scientists around the world provided the first accurate measurement of the astronomical unit, which improved all astronomical calculations and greatly improved the estimates of the speed of light using Roemer's method. By the end of the eighteenth century, the astronomical unit was known to within 1%, which, when combined with the observations of Jupiter's moons, enabled the speed of light to be determined much more accurately (Fig. 7.2).
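Hornsby's figure can be checked against the modern value directly; this is a sketch, with the modern astronomical unit expressed in statute miles taken as 92,955,807.

```python
# Comparing Hornsby's 1771 result with the modern astronomical unit.
hornsby_miles = 93_726_900     # Hornsby's published value (English miles)
modern_au_miles = 92_955_807   # modern AU expressed in statute miles

error = abs(hornsby_miles - modern_au_miles) / modern_au_miles
print(f"Relative error: {error:.2%}")   # under 1%, as the text notes

# The transit method itself rests on simple parallax geometry: two observers
# separated by a baseline b see Venus shifted by a small angle theta = b / d,
# where d is the Earth-Venus distance (about 0.28 AU during a transit).
# Kepler's laws then scale that one measured distance to the whole solar system.
```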


Fig. 7.2  This figure shows the paths across the Sun's disk of four Venus transits. The 1761 and 1769 events triggered a global response from the world's astronomers and provided the first accurate measurements of the astronomical unit. Venus transits occur in pairs 8 years apart, with successive pairs separated by more than a century. British, German, and American scientists used the subsequent 1874 and 1882 transits to improve the measurements to 0.007% of the accepted value; the most recent Venus transits were in 2004 and 2012. (Image from Proctor, 1874)

7.4 Measurement of the Speed of Light in the Nineteenth Century

By the nineteenth century, new ideas about light – the phenomena of refraction and diffraction, and geometric optics – had been developed by Young, Fresnel, and others, building on Snell's much earlier law of refraction. Laboratory measurements enabled physicists to characterize these wave properties of light in detail. Yet even as optics and the wave theory of light matured, the nature of light itself remained unknown. The properties of refraction, which earlier physicists called "refrangibility," were well known and could be used to develop lenses


and prisms, and to disperse the colors of light. However, physicists of the time still invoked the aether to explain why the speed of light was slower within media like glass or water. Fresnel thought that "glass carries some of the Aether along with it, leading to a higher Aether density within the glass," which leads to the slower observed speed in glass.

Improved techniques in the nineteenth century allowed scientists to make rapid progress in measuring the speed of light, however. In 1834 the British scientist Charles Wheatstone used a rotating mirror to view the optical light from a spark and compare it with the spark from electrical signals sent on a half-mile detour. Wheatstone's apparatus included a mirror rotating at 800 revolutions per second, and he measured a propagation speed of about 250,000 miles per second. The French physicist Francois Arago took up Wheatstone's idea and in 1838 proposed a similar technique to measure light beams directly. Arago was aided by a young assistant named Hippolyte Fizeau, who had been studying light and in 1848 published his own account of the "Doppler effect" based on his measurements of light and sound. In 1849 Fizeau reported a speed of light measurement, the first ever based solely on a terrestrial light source. The technique used a rotating wheel with 720 teeth that would intermittently interrupt a beam of light on its way toward a distant mirror. By adjusting the rotation speed, the teeth could be timed to admit the outgoing beam and block the returning one – the short interval during which a tooth moves across the beam gives the round-trip travel time. Fizeau's compatriot Leon Foucault improved the technique by reflecting the light from a rotating mirror whose speed could be adjusted so that the light returned to the source; offsets in the returning beam are sensitive to the travel time across the apparatus (Spence, 2019e) (Fig. 7.3).
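Fizeau's toothed-wheel logic can be sketched numerically. The distance, tooth count, and rotation rate below are the commonly quoted figures for his 1849 experiment between Suresnes and Montmartre; treat them as illustrative rather than exact.

```python
# Sketch of Fizeau's 1849 toothed-wheel estimate (commonly quoted figures:
# an ~8.6 km one-way path, a 720-tooth wheel, and a first "eclipse" of the
# returning beam near 12.6 revolutions per second).
distance_m = 8633   # one-way distance to the distant mirror
n_teeth = 720
rev_per_s = 12.6    # rotation rate at which a tooth first blocks the return

# During the round trip, the wheel must rotate by half a tooth-gap period,
# i.e. 1/(2 * n_teeth) of a full turn, for the returning light to hit a tooth.
round_trip_s = 1.0 / (2 * n_teeth * rev_per_s)
c_est = 2 * distance_m / round_trip_s

print(f"Estimated c: {c_est/1000:,.0f} km/s")  # roughly 313,000 km/s
```

The result overshoots the modern value by about 5%, which is close to Fizeau's published figure; Foucault's rotating-mirror refinement brought the number much nearer the truth.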
The French scientist Marie Alfred Cornu continued the work by extending the beam's path to longer distances, thereby improving accuracy. This refinement meant moving the beams outside the laboratory, where they traveled over the rooftops of many houses in Paris (with smoke from cooking chimneys providing a big source of difficulty!) (Spence, 2019f) (Fig. 7.4).

Fig. 7.3  Foucault's rotating mirror system for measuring the speed of light. (Image from https://en.wikipedia.org/wiki/Fizeau%E2%80%93Foucault_apparatus#/media/File:Michelson's_1879_Refinement_of_Foucault.png)


Fig. 7.4  Evolution of the measurements for the speed of light, beginning with the measurements by Roemer and calculations from Huygens in 1690, including more modern efforts by Fizeau and Foucault, as well as by Michelson. The value obtained by Michelson in 1935, 299,792 km/s, is very close to the contemporary value. The speed of light is known within one part in a billion (Verberck, 2018). (Figure by the author with modern points based on data from Morgan and Keith, 2008)

7.5 The Physics of Light

Michael Faraday performed a wide range of experiments on electricity and magnetism, as well as on the nature of light, from his base at the Royal Institution in London. His experiments included demonstrating the rotation of the polarization of light in a magnetic field in 1845 (confirming a relationship between light and magnetism) and mapping the magnetic field lines in his laboratory using iron filings. Faraday suggested that if the magnetic fields were vibrated, they could radiate energy. He used graphical maps to represent invisible lines of force and suggested that field lines could exist independently of the charges and propagate through space. He also conclusively demonstrated the interconversion of electricity and magnetism and built various moving-coil devices that were among the first electrical generators. Faraday experimentally established many of the concepts of induction, showing that a changing magnetic field can induce a voltage.

While Faraday was making great strides in experiment, it was up to James Clerk Maxwell to fully develop the theory of electricity and magnetism. Maxwell started with the idea of an elastic substance, the aether, filling the universe and conveying the field lines measured by Faraday. Faraday's suggestion that field lines could vibrate inspired Maxwell, and after corresponding with Faraday, Maxwell focused on bringing magnetic and electric force fields together in a unified theory. In 1855 he wrote that the


aether could be likened to tubes of massless fluid, which allowed Maxwell to use some of the mathematical ideas of fluid mechanics, by this time well developed. In a second paper, in 1861, Maxwell added Newton's laws to his theory and included a mechanical model for electromagnetism, complete with invisible vortices, wheels, and gears in the vacuum, somewhat reminiscent of the ancient cosmic machinery of Aristotle and Ptolemy. By 1865, Maxwell had kept the mathematical machinery but removed the mechanisms, focusing instead on the transport of energy within the fields. In this paper Maxwell's equations first appear, written out in coordinate components rather than in the vector notation taught in every undergraduate physics course today (Fig. 7.5). While Maxwell's reliance on the aether blocked a complete understanding of electromagnetism, his equations provided a brilliant, crisp, and completely accurate formulation of the interconnections between electric and magnetic fields and how they are created from stationary and moving electric charges. Within his equations was also a prediction of the speed of light,

Fig. 7.5  (Left) An illustration of Maxwell's model of the elastic aether with spinning vortices; vortex cells are shown surrounded by wheels which, when they move sideways, provide electrical current in response to stretching of the assembly. The vortices correspond to what modern physicists call the "curl" of the field, and the translation of stretches in the assemblage into current is interpreted mathematically in Maxwell's equations as the conversion of a changing magnetic field into an electric field. (Right) Maxwell's formulation expressed in the more modern vector notation developed by Heaviside. The first equation relates the electric field E to the charge density ρ, which provides the theory of electrostatics. The second equation states the absence of a corresponding magnetic charge. The third and fourth equations express how electric and magnetic fields E and B arise from a changing magnetic field (dB/dt) and from currents and changing electric fields (J + dE/dt), respectively. (Images from Sarkar & Salazar-Palma, 2016, and https://en.wikipedia.org/wiki/Maxwell%27s_equations)


Fig. 7.6  Image showing the nature of light as a self-propagating electromagnetic wave. An initial disturbance of electric or magnetic fields creates changing electric and magnetic fields that propagate away from the source of the disturbance according to Maxwell's laws. (From https://commons.wikimedia.org/wiki/File:Electromagnetic_waves.png)

written as the fundamental constant c, along with a mechanism for generating light. Light, as we now know, is an electromagnetic wave in which changes in the magnetic field generate electric fields, which in turn generate magnetic fields. This self-perpetuating interplay of magnetic and electric fields moves forward at the speed c that Maxwell predicted. This is why physicists so revere Maxwell for his accomplishments, and often in jest speak of Maxwell's equations with the Biblical language of "let there be light." As Maxwell himself put it in his 1864 paper, "The agreement obtained seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electrodynamic laws" (Fig. 7.6).
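Maxwell's predicted wave speed falls directly out of the two vacuum constants appearing in his equations, c = 1/√(μ₀ε₀). A quick numerical check, using modern SI values for the constants (which were not, of course, known to this precision in Maxwell's day):

```python
import math

# The wave speed predicted by Maxwell's equations:
# c = 1 / sqrt(mu_0 * epsilon_0), using modern SI values of the constants.
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu_0 = 1.25663706212e-6        # vacuum permeability, H/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:,.0f} m/s")   # approximately 299,792,458 m/s
```

That this combination of purely electrical and magnetic quantities reproduced the measured speed of light was the decisive clue that light is an electromagnetic wave.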

7.6 Albert Michelson and the Michelson-Morley Experiment

Putting Maxwell's ideas to the test, and confirming or denying the existence of the aether, was the job of master experimenter Albert Michelson. Michelson was born to Jewish parents in Poland in 1852; the family moved to California in 1855, when he was just 2 years old. He grew up in mining camps


in California and Nevada, where his father was a merchant. Michelson was a very bright student, but his parents were of minimal means, so his path to college was unclear. Taking matters into his own hands (and showing the resourcefulness that would make him one of the greatest experimenters of all time), Michelson took a train to Washington, DC, as an eager 16-year-old to appeal to President Ulysses Grant for admission to the US Naval Academy. Only ten spots for this kind of "appointment at large" were available, and while Michelson was initially rejected, he managed to get an appointment a year later, in 1869, and graduated in 1873. Afterward, Michelson served on ships for 2 years and became a physics instructor at the Academy.

As part of his physics instruction at Annapolis, Michelson devised an experiment for measuring the speed of light as a classroom demonstration in 1877. This experiment was later refined and provided an estimate of the speed of light of 299,940 kilometers per second, within 0.05% of the modern value (Livingston, 1979). Michelson's effort was inspired by a lecture by the visiting British physicist John Tyndall and by reading the new book by James Clerk Maxwell. His experimental design was based on the techniques invented by Foucault, but with improvements based on his genius for mechanical and optical instrumentation.

Michelson's talents caught the attention of European scientists, and he left the US in 1880 to study and conduct research in Berlin, Heidelberg, and Paris, then returned to the US to establish the physics program at the Case School of Applied Science in Cleveland, Ohio. Michelson focused his work on establishing the nature of the aether that had been discussed by Maxwell but which resisted experimental verification. Like the "dark matter" of its time, the aether was widely assumed by the physics and astronomy community to exist, with little or no solid experimental evidence of its existence or properties.
Maxwell had suggested that the motion of the Earth and Sun through the aether could leave detectable imprints on electromagnetic waves. Michelson was determined to test this idea with a device of his own that could measure minute differences between waves traveling along the direction of the Earth's motion and those moving perpendicular to it. Working with his colleague Edward Morley, Michelson invented an apparatus with a set of mirrors precisely configured to detect interference between these two beams, which became the basis of the famous "Michelson-Morley experiment." In an 1878 paper, Michelson mused that "the hypothesis of an aether has been maintained by different speculators for very different reasons," which included "philosophical principles" requiring the existence of a "plenum" that could remove nature's "abhorrence of a vacuum." Michelson also noted


that authors had invented many uses for the aether, including as a substance "for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, till all space had been filled three or four times over with Aethers." While Michelson acknowledged that some of these notions were absurd, the existence of a "luminiferous aether" seemed necessary to explain the electromagnetic phenomena described so elegantly by Maxwell (Spence, 2019g, p. 115).

Maxwell had initially proposed to measure the speed of light using Roemer's method at two points in the Earth's orbit on opposite sides relative to Jupiter, providing the maximum signal from the Earth's motion; he believed the different relative velocities of light would yield two different answers for the speed of light and so allow detection of the aether. Michelson instead constructed his interferometer to send two beams in perpendicular directions – one along the direction of the Earth's motion and one perpendicular to it. This device, now commonly referred to as a "Michelson interferometer," became the basis for the twentieth-century detector of gravitational radiation known as LIGO – a topic discussed in Chap. 14 (Fig. 7.7).

Fig. 7.7  Michelson's interferometer experiment set up to measure the effects of the Earth's motion on the speed of light as it passed through the aether. Michelson sent two beams of light at right angles, one parallel to Earth's motion and one perpendicular to it. The device is sensitive to microscopic timing differences between the beams, which appear as interference in the light waves. Michelson found no effects from the aether; this negative result was a great challenge for physics and prompted Einstein's Special Theory of Relativity. (Images from https://www.loc.gov/pictures/resource/cph.3b41499/ and https://commons.wikimedia.org/wiki/Category:Michelson_interferometer#/media/File:Michelson-Morley_experiment_conducted_with_white_light.png)


Michelson estimated the time interval T2 for a beam traveling over a fixed distance L based on the apparent speed of light while traveling "upstream" (c − v) and then "downstream" (c + v), with the Earth's velocity v parallel to the light beam. This timing would be T2 = L/(c + v) + L/(c − v). His calculation assumed that the timing for the beam moving perpendicular to the Earth's velocity would simply be T1 = 2L/c. The slight difference between T1 and T2 would cause the two beams to arrive at slightly different times, causing a detectable shift in the phase of the light. Michelson's calculation of the time difference used his best estimate for the speed of light c and the Earth's orbital speed of 30 km/s. To maximize the sensitivity, the beam was folded so that it traveled a total of 11 meters before the two paths were brought together. Using these values in the equations above, T2 is expected to be longer than T1 by just under a thousandth of a trillionth of a second! This tiny shift in arrival times could be detected through the exquisite sensitivity of the interfering light beams, which would be shifted by about 0.4 wavelengths, providing confirmation of the aether if the effect were present. After years of effort improving the device, by 1887 Michelson and Morley could definitively show that there were no differences in the beams, regardless of their direction of travel. Michelson concluded:

… the interpretation of these results is that there is no displacement of the interference bands. The result of the hypothesis of a stationary Aether is thus shown to be incorrect (Spence, 2019h).
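The expected signal can be checked numerically from the values quoted above. This is a sketch; the 550 nm wavelength used for the fringe estimate is an assumed typical value for visible light, not a figure from the original experiment.

```python
# Expected aether-drift signal in the Michelson-Morley apparatus,
# using the values quoted in the text. The 550 nm wavelength is an
# assumed typical value for visible light.
c = 3.0e8    # speed of light, m/s
v = 30.0e3   # Earth's orbital speed, m/s
L = 11.0     # folded path length in each arm, m

T2 = L / (c + v) + L / (c - v)   # round trip along the direction of motion
T1 = 2 * L / c                   # Michelson's assumed perpendicular round trip

dT = T2 - T1
path_difference = c * dT                    # extra optical path, in meters
fringe_shift = path_difference / 550e-9     # in wavelengths of visible light

print(f"dT = {dT:.2e} s")                    # about 7e-16 s
print(f"fringe shift = {fringe_shift:.2f}")  # about 0.4 wavelengths
```

Both the sub-femtosecond timing difference and the 0.4-wavelength fringe shift quoted in the text drop out of these few lines, showing just how small a signal the interferometer was built to resolve.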

After many more years of refinements to Michelson's experimental design, the negative result persisted. It was perhaps the most impactful null result in the history of physics, and theories abounded as to why the aether could not be detected. Some, including Michelson, proposed that the aether was being "dragged" along by the Earth, and other mechanisms were proposed as well. A mathematical argument provided by Hendrik Lorentz in his 1895 paper suggested that the lengths of the two paths in Michelson's device are actually not the same. By having the two arms contract or expand depending on their orientation, the timing of the beams (with the corrections for the "luminiferous aether") would preserve Michelson's null result. Lorentz explained "that the motion of a rigid body…would have an influence on the dimensions" throughout the aether. He acknowledged that "as strange as this hypothesis would appear… electrical and magnetic forces are transmitted through the aether" and that these forces would act on a molecular level, so that "a change of the dimensions is inevitable." The argument of Lorentz is


often referred to as the "Lorentz contraction," and it turned out to be mathematically correct but conceptually wrong, for reasons that Einstein clarified in the 1905 paper that introduced his theory of Special Relativity to the world (Brown, 2001). Since the aberration of starlight had been demonstrated by Bradley more than a century earlier, it was known that the Earth's motion could be detected with light beams. It was left to Einstein to fully explain these apparent contradictions, and Einstein mentions the "failed attempts to detect the motion of the Earth relative to the 'light-medium' (Aether)" in his 1905 paper. This failure to detect the motion of the Earth, Einstein would show, comes from the inextricable linking of time and space, which reduces the length of the path of light traveling in the direction of motion but also changes the nature of time, preserving the constant speed of light regardless of the observer's motion.
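The Lorentz contraction can be checked numerically. If the arm along the motion shrinks by the factor √(1 − v²/c²), and the perpendicular round trip is computed correctly as 2L/c · 1/√(1 − v²/c²) (accounting for the diagonal path the beam must follow, a step missing from Michelson's simple T1 = 2L/c), the two travel times come out exactly equal and the null result follows. A sketch:

```python
import math

# How the Lorentz contraction restores the Michelson-Morley null result.
c = 3.0e8
v = 30.0e3
L = 11.0
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Correct perpendicular round-trip time (the beam travels a diagonal path):
T_perp = (2 * L / c) * gamma

# Parallel round trip with the arm contracted to L / gamma:
L_contracted = L / gamma
T_par = L_contracted / (c + v) + L_contracted / (c - v)   # = (2 L / c) * gamma

print(f"difference: {abs(T_par - T_perp):.3e} s")   # effectively zero
```

Algebraically, T_par = (L/γ) · 2c/(c² − v²) = (2L/c)·γ, identical to T_perp, which is why the contraction hypothesis was "mathematically correct" even though Einstein later showed the conceptual picture behind it was wrong.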

References

Brown, H. R. (2001). The origins of length contraction: I. The FitzGerald–Lorentz deformation hypothesis. American Journal of Physics, 69(10), 1044–1054. https://doi.org/10.1119/1.1379733
Livingston, D. (1979). The master of light: A biography of Albert A. Michelson. University of Chicago Press.
Morgan, M. G., & Keith, D. W. (2008). Improving the way we think about projecting future energy use and emissions of carbon dioxide. Climatic Change, 90(3), 189–215. https://doi.org/10.1007/s10584-008-9458-1
Nunis, D., Geiger, M. J., Engstrand, I., Geiger, M., & Donahue, J. (1982). The 1769 transit of Venus: The Baja California observations of Jean-Baptiste Chappe d'Auteroche, Vicente de Doz, and Joaquín Velázquez Cárdenas de León. Natural History Museum of Los Angeles County.
Phillips, T. (2022). James Cook and the transit of Venus. Science.nasa.gov; NASA. https://science.nasa.gov/science-news/science-at-nasa/2012/02jun_jamescook
Proctor, R. (1874). Transits of Venus: A popular account of past and coming transits. Longmans Green.
Sarkar, T. K., & Salazar-Palma, M. (2016). J. C. Maxwell's original presentation of electromagnetic theory and its evolution. In Z. Chen, D. Liu, H. Nakano, X. Qing, & T. Zwick (Eds.), Handbook of antenna technologies. Springer, Singapore. https://doi.org/10.1007/978-981-4560-44-3_12
Spence, J. (2019a). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 15). Oxford Academic.


Spence, J. (2019b). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 21). Oxford Academic.
Spence, J. (2019c). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 41). Oxford Academic.
Spence, J. (2019d). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 45). Oxford Academic.
Spence, J. (2019e). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 79). Oxford Academic.
Spence, J. (2019f). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 85). Oxford Academic.
Spence, J. (2019g). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 115). Oxford Academic.
Spence, J. (2019h). Lightspeed: The ghostly Aether and the race to measure the speed of light (p. 123). Oxford Academic.
Verberck, B. (2018). As fast as it gets. Nature Physics, 14(12), 1232. https://doi.org/10.1038/s41567-018-0374-7

8 Spacetime and Curved Space

As has been described in numerous other books and accounts, the dawn of the twentieth century presented a nearly complete edifice of physics. The universe seemed well described by the combination of Newton's laws, which could predict the motions of objects and the force of gravity with exquisite precision, and Maxwell's laws of electromagnetism, which unlocked the nature of light and unified the electric and magnetic fields. Maxwell's equations reconciled the formerly separate forces of electricity and magnetism into a single electromagnetic field, elegantly expressed mathematically.

And yet, several troubling loose ends appeared just as the edifice of physics seemed complete. One concerned the strange behaviors of atoms and light that eventually gave rise to the new quantum theory; the other concerned Michelson's discovery of the strange absence of evidence for the aether, which suggested that the understanding of the nature of light (as transmitted through the aether) was seriously flawed. It was the work of Albert Einstein that tackled many of these unsolved problems. In the single year of 1905, he rewrote our models of space and time with his theory of special relativity. He also reformulated our ideas of how light works on a quantum level with his work on the photoelectric effect, which earned him his Nobel Prize in 1921.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_8

8.1 Einstein's Special Theory of Relativity

Albert Einstein was born in Ulm, Germany, in 1879 and showed an interest from a young age in the problems of light and magnetism. In 1894, at age 15, he wrote a paper on "The State of the Aether in a Magnetic Field." He was famously
a rebellious student and yet managed to gain a place at the Zurich Polytechnic. He obtained a teaching diploma in 1900 and began working at the Swiss patent office in 1902 while building a family with his wife, Mileva Maric, who had been one of his classmates. His three children were born in 1902, 1904, and 1910, and amid this period Einstein had his famous annus mirabilis, or miracle year, in 1905, in which he published four papers that changed all of physics. His Nobel Prize was awarded for the first paper of that year, on the photoelectric effect, which explained the observed quantization of light and provided some of the foundational ideas for quantum mechanics. The second paper discussed Brownian motion, the jostling of small particles due to collisions with atoms and molecules. In his fourth paper that year, he published the famous relationship describing the equivalence of mass and energy, E = mc². And yet it was his third paper, entitled "On the Electrodynamics of Moving Bodies," that set the stage for the most substantive adjustment of our understanding of time and space, through what is now commonly known as the special theory of relativity.

Einstein responded to the challenge of the undetected motion of the Earth through the aether with a completely original take on reality. His theory inverted the notion of a uniform, steady forward march of time and replaced it with a new conception in which the constant speed of light was primary, and time itself secondary and variable, depending on the observer and their motion through space. The theory of special relativity was developed to resolve how light would be observed by observers moving with different velocities. It led to a new formulation of space and time in which time itself was relative to one's choice of reference frame and was therefore affected by an observer's motion through space.
The reference frame referred to the status of the observer, whether in motion or at rest, and provided a basis for evaluating the equivalence of the laws of physics in these different cases. Einstein’s explanation was independent of the notion of ether as the medium for light’s travel. Einstein instead used the invariance of Maxwell’s laws (and therefore the behavior of light) as a central consideration and adopted the assumption that, above all else, the speed of light would remain constant for all observers, regardless of motion. This paradigm shift centered on the fundamental reality of light itself and replaced the assumptions embedded within Newtonian physics of the absolute nature of time. For Newton, time was assumed to be uniform and linear and independent of one’s speed, while for Einstein, the time interval between two events depends on one’s choice of reference frame, but the physical laws describing light and electromagnetism would be the same for all observers.

8  Spacetime and Curved Space 


Einstein's work on the theory known as special relativity was not wholly his own invention – his insights were inspired and aided by others who were making partial progress on solving some of the mysteries that the Michelson-Morley experiment had revealed. The partial solution proposed by Lorentz suggested the possibility of frame-dependence of time and a correct mathematical expression for the contraction of moving objects. The German mathematician Hermann Minkowski was also crucial in reformulating some of Einstein's ideas by combining space and time into a mathematically unified whole called spacetime. And some commentators have noted that Einstein's wife, Mileva Maric, undoubtedly played a crucial role in inspiring and refining Einstein's thought, as she was a physics student along with Einstein, and all too often, men have not given full credit to the women who worked alongside them. Since Galileo's time, it was known that physical laws should apply equally to all observers, and in the motions of everyday life, this can be observed quite easily. The idea was described by Galileo himself – hence the term "Galilean relativity," which describes the equivalence of physical laws for an observer moving at a uniform velocity. Galileo drew on the metaphor of a steadily moving ship in a famous thought experiment (Galileo, 1970):

Shut yourself up with some friend below decks on some large ship, and have with you some flies, butterflies, and other flying animals. Have a large bowl of water with some fish in it... With the ship standing still, observe carefully how the little animals fly with equal speeds to all sides of the cabin. The fish swim indifferently in all directions... have the ship proceed with any speed you like, so long as the motion is uniform and not fluctuating this way and that. You will discover not the least change in all the effects named, nor could you tell from any of them whether the ship was moving or standing still.

The Galilean conception of relativity, combined with Newton’s physics, made a strong  statement about the universality of physical laws for all observers. Einstein took the next step and considered how the newly formulated Maxwell equations would transform for moving observers – even if they were moving at speeds close to that of light. For Maxwell’s equations to hold for all observers, the same speed of light would need to be measured since this speed c is predicted to be a fundamental constant within Maxwell’s equations. Einstein’s theory of special relativity is most easily explained with a more modern thought experiment using clocks that measure time by using flashes of light bouncing between two mirrors – much as was the case in Michelson’s experimental apparatus. As Lorentz suggested, a pulse of light bouncing up and down between two mirrors will experience different travel distances if

Fig. 8.1  A light clock observed in a frame where it is at rest, in which the timing of a round trip by a beam of light bouncing between two mirrors provides a basis for measuring time in that frame. (Figure by the author)

the light clock is moving instead of being at rest. But what Einstein added to the picture was the new idea that the time elapsed for an observer would depend on whether the observer was moving or stationary. While the rate of time's passage as measured by such a clock would depend on the clock's motion, the speed of light would always be absolutely the same. This idea that time is "relative" to the observer's situation or "reference frame" made the theory of relativity revolutionary. It replaced time as an absolute, objective, and unchanging quantity for all observers with the speed of light as absolute for all reference frames. In Fig. 8.1, an observer seeing the light clock at rest would observe light bouncing up and down and covering the distance 2L₀ in one round trip, at the speed of light c and with the travel time Δt. Put into an equation, 2L₀ = cΔt, where Δt is the time of one round trip, or Δt = 2L₀/c. This time interval defines the rate at which this clock ticks. When this clock is viewed in a reference frame in which it moves at speed v to the right, as shown in Fig. 8.2, the distance traveled by the light would increase to 2√(L₀² + (vΔt′/2)²). This distance traveled will be equal to cΔt′, or cΔt′ = 2√(L₀² + (vΔt′/2)²). Without working through much of the math, it is clear that the time intervals Δt and Δt′ are different. Since the speed of light c is identical in both frames, we can note that Δt′ is greater than Δt, since 2√(L₀² + (vΔt′/2)²) > 2L₀. Since the "ticks" of our light clock take longer in one frame, time appears to advance more slowly in the moving reference frame, where the time interval is Δt′. This phenomenon is referred to as "time dilation," and it causes the time in a moving frame to appear slower compared to an observer in a stationary reference frame. It is possible to solve for the factor of time

Fig. 8.2  In a frame where the light clock moves with speed v to the right, the light flash travels a larger distance between ticks. Because the speed of light is the same in both frames and the flash must cover a greater distance in this frame, the observer in this frame observes the clock to tick more slowly. This is one of the key consequences of special relativity. (Figure by the author)

dilation between the two reference frames, which results in the factor γ, which depends on the velocity of the frame and the speed of light. A clever bit of algebra shows that by squaring both sides of the equation and solving for Δt′:

(cΔt′)² = (2L₀)² + (vΔt′)², or Δt′² = 4L₀²/c² + (v²/c²)Δt′²

Then grouping all the terms in Δt′ on one side gives us the factor γ that relates Δt and Δt′:

4L₀²/c² = Δt′²(1 − v²/c²)

And since Δt = 2L₀/c, we can see that Δt² = Δt′²(1 − v²/c²), and so Δt′ = Δt/√(1 − v²/c²) = γΔt, where γ = 1/√(1 − v²/c²) is known as the Lorentz factor, which determines the amount by which time appears to slow down for a moving object relative to a frame at rest. Einstein's theory requires the speed of light c to be measured as the same for all observers, independent of their motion. This, however, means that the time between clock ticks would be different, such that the rest-frame time between ticks would be Δt = 2L₀/c, while the observer in whose frame the clock is moving would measure Δt′ = γΔt. Both observers are correct, and for this to be possible, Einstein realized that the durations of time perceived by observers at rest and in motion are different and that Δt ≠ Δt′. More precisely, the observer in a frame where the clock is moving at speed v observes the time between

clock ticks to be longer by a factor of γ than the observer for whom the clock is at rest. Here the Lorentz factor γ = 1/√(1 − v²/c²) is the same expression that Hendrik Lorentz had proposed as a physical contraction of the apparatus to explain Michelson's negative result for detecting the aether. The idea that objects "contract" while moving, commonly known as the "Lorentz contraction," arises entirely from the effects on time in moving frames that were discussed earlier. If we consider the same light clock rotated and moving sideways, we can show that an observer viewing the clock will see a contracted clock with a reduced separation L₀′ between the mirrors. Referring to the rotated version of the light clock shown below in Fig. 8.3, we can imagine that the clock "ticks" every time that the flash bounces off the left (rear) mirror. The distance that the flash must travel to get to the right (front) mirror is L₀′ + vΔtR′, where ΔtR′ is the time required to reach that

Fig. 8.3  A light clock for a moving observer, in which the timing of a round trip by a pulse of light along the direction of motion provides a basis for measuring time in the moving reference frame. The top panel shows the pulse of light traveling a slightly longer distance as the right mirror moves away, while the bottom panel shows a shorter travel time as the left mirror moves toward the pulse of light. The sum of these two times gives the duration of a "tick" of the light clock, and the dilation of time observed within the moving frame is accompanied by a shortening of the distance between the mirrors to L₀′. This shortening along the direction of motion, known as the Lorentz contraction, is another of the key consequences of special relativity. (Figure by the author)

mirror. Note that c = (L₀′ + vΔtR′)/ΔtR′, which simplifies to c = L₀′/ΔtR′ + v. We can then solve for the time for light to reach the right mirror and discover that ΔtR′ = L₀′/(c − v). The total time for one "tick" of the light clock is the sum of the time to travel to the right mirror, ΔtR′, and the time to return to the left mirror, ΔtL′. After bouncing off the right mirror, the distance required to return to the left mirror is L₀′ − vΔtL′, so by similar math, ΔtL′ = L₀′/(c + v). The total time between ticks in the primed frame is therefore Δt′ = ΔtR′ + ΔtL′ = L₀′/(c − v) + L₀′/(c + v). With a little bit more math, this simplifies to Δt′ = (2L₀′/c)/(1 − v²/c²). This algebra has revealed a factor of γ², as Δt′ = (2L₀′/c)γ². Since the clock should tick at the same rate whether it is vertical or horizontal, and since we know from the vertical clock that Δt = 2L₀/c and Δt′ = γΔt, we can write Δt′ = γ(2L₀/c) = γ²(2L₀′/c). Comparing L₀ and L₀′, we conclude that L₀′, the length in the moving frame, is given by L₀′ = L₀/γ. Therefore, we see that to ensure that the moving clock ticks at the same rate irrespective of its orientation, the horizontal distance between the mirrors must be "Lorentz contracted" by a factor of 1/γ = √(1 − v²/c²). Another result of special relativity is that Einstein showed that making the laws of conservation of energy and momentum consistent with relativity requires that an object's mass is a form of energy, which is what his famous equation Erest = mc² implies: the energy of an object at rest is equal to its mass times the speed of light squared, or c². He also showed that an object's energy in motion must be E = γmc² = mc²/√(1 − v²/c²), which means that the energy required to increase an object's speed grows without limit as the object's speed approaches that of light.
This formula has been abundantly verified by experiments in which we accelerate particles to speeds close to that of light, as we will see in Chap. 12 when we consider the discoveries made by the Large Hadron Collider (LHC) in Geneva: the LHC accelerates protons to 99.9999991% of the speed of light (CERN, 2008). The infinite amount of energy required to accelerate anything to the speed of light provides a practical "cosmic speed limit" that prevents objects with nonzero mass from ever reaching the speed of light. The profound impacts of Einstein's special theory rippled through the global physics community shortly after its publication in 1905. The German mathematician Hermann Minkowski took Einstein's theory further with his mathematical formulation of "spacetime," which placed time as an equivalent dimension to the other three spatial dimensions. As Minkowski noted in his work Space and Time, "from now onwards space by itself and time by itself will recede completely to become mere shadows, and only a type of

union of the two will still stand independently on its own” (Minkowski, 1910). With time as an equivalent dimension to up and down and left and right, the past and future are simply locations that exist along with spatial positions all around us. As Einstein noted, “the distinction between past, present, and future is only an illusion, however persistent.” While Einstein’s special theory of relativity appears to create a disorienting new reality, it also includes a solution that preserves the laws of physics (and the speed of light) for all observers, regardless of their locations and speed of travel.
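The special-relativity algebra of the preceding section can be checked numerically: the vertical light clock's tick dilates by γ, the horizontal clock matches it only if its length contracts by 1/γ, and the same factor γ gives the energy of the LHC's protons quoted above. A minimal sketch in Python (the variable and function names are my own, and the proton rest energy is the standard value, not a figure from the text):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def gamma(v, c=C):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# --- Time dilation: vertical light clock with mirror separation L0 ---
L0 = 1.0                          # rest-frame mirror separation, m
v = 0.6 * C                       # clock speed in the observer's frame
g = gamma(v)                      # exactly 1.25 for v = 0.6c
dt = 2.0 * L0 / C                 # tick in the clock's rest frame
dt_vertical = g * dt              # dilated tick seen from the other frame
# The dilated tick satisfies the geometric relation c*dt' = 2*sqrt(L0^2 + (v*dt'/2)^2)
lhs = C * dt_vertical
rhs = 2.0 * math.sqrt(L0**2 + (v * dt_vertical / 2.0) ** 2)

# --- Lorentz contraction: the horizontal clock must tick at the same rate ---
L0_prime = L0 / g                 # contracted separation along the motion
dt_horizontal = L0_prime / (C - v) + L0_prime / (C + v)

# --- Relativistic energy: LHC protons at 99.9999991% of c (CERN, 2008) ---
g_lhc = gamma(0.999999991 * C)
proton_rest_energy_GeV = 0.938272
print(g_lhc * proton_rest_energy_GeV)   # on the order of 7000 GeV, i.e. 7 TeV
```

Printing dt_vertical and dt_horizontal shows that they agree to machine precision, which is exactly the consistency condition that forces L₀′ = L₀/γ.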

8.2 Worldlines and the Light Cone

As mentioned, a key takeaway from Einstein's theory is that the three dimensions of space, together with time, form the basis for a four-dimensional spacetime. The locus of points in space and time that an object, a person, or a particle traces out is known as a worldline. Motion through space can then be represented in a diagram as a line that moves through space and through time, with the slope of the line depending on the velocity. For light – the fastest thing in the universe – the slope of the worldline is set by the speed of light c and forms an uncrossable boundary in spacetime. For a stationary particle, the worldline occupies a single spatial coordinate and marches onwards through time, or vertically in a plot that presents time as the vertical dimension. A particle moving at a constant velocity moves diagonally in this plot – showing its changing position in space with time as a line whose slope dt/dx is inversely proportional to its velocity (Fig. 8.4). The limiting case of the worldline is set by the speed of light, which forms the basis of our "light cone" when the possible worldlines of light are rotated about the time axis. All of the objects which we can see and interact with at the place and time we call "here and now" lie in the bottom half of our light cone, which we call our "past light cone." All objects that we might affect in the future lie in the top half of our light cone, which we call our "future light cone." Objects and events that occur outside of this light cone can neither be observed nor influenced by us "here and now": these events and objects are outside our observable universe. The past light cone is the region of space and time in which all potentially observable phenomena are embedded and from which we determine all our knowledge of the physical universe.
This reality arises from a fundamental law of physics that prevents us from knowing what is happening in a distant location at this exact moment; instead, the best we can hope for is a delayed measurement offset by the light travel time. This means that our knowledge of the state of things across the room is

dated by a few billionths of a second (which we tend not to notice), our knowledge of the state of the Moon is delayed by just over a second, the Sun by about 8 min, nearby stars by years and centuries, and for galaxies, our views come from millions of years in the past. The locus of points in space and time traversed by light particles arriving at "here and now" from all directions forms the boundary of our "observable universe." The diagram in Fig. 8.4 imagines space to be one-dimensional. If we imagine space to be two-dimensional, so that the xy plane is horizontal, then the worldlines of light particles arriving "here and now" form a cone facing downward: this is our past light cone. The worldlines of light particles leaving "here and now" form a cone facing upward: this is our future light cone. This representation is shown in Fig. 8.5. It is easy to visualize the boundaries of our universe as being cones if we represent space with two dimensions.
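The delays listed above are simply distance divided by the speed of light. A quick sketch in Python (the distances are rounded values I have supplied for illustration):

```python
C = 299_792_458.0          # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

# Approximate distances in meters (rounded, for illustration only)
distances = {
    "across a room": 5.0,
    "Moon": 3.844e8,
    "Sun": 1.496e11,
    "Alpha Centauri": 4.13e16,   # roughly 4.4 light years
}

for name, d in distances.items():
    delay_s = d / C   # light travel time = distance / speed of light
    print(f"{name}: light delay = {delay_s:.3g} s "
          f"(= {delay_s / SECONDS_PER_YEAR:.3g} yr)")
```

The output recovers the figures in the text: roughly a nanosecond-scale delay across a room, just over a second to the Moon, about 8 minutes to the Sun, and years for the nearest stars.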

Fig. 8.4  Figure showing the time and space diagram representing the “worldlines” of objects within our universe. Any object exists inside the boundaries of the light cone, and this boundary effectively becomes a barrier that defines the edges of our observable universe. An object at rest would move vertically in the spacetime diagram, and objects moving through space would follow diagonal or curved lines such as those shown in the figure. (Figure by the author)

Fig. 8.5  Representation of the “light cone,” which can be constructed by rotating two spatial dimensions around the axis of time, which is vertical in the diagram. The light cone defines the boundaries of the observable past, as well as our future. The diagram also shows the location of Alpha Centauri on the surface of the light cone – at a distance in space and in the past. (Figure by the author)

However, it is much harder to visualize the four-dimensional shape of the light "cone" when we consider that space is actually three-dimensional! The light cone diagram represents each moment of time as a location on the vertical axis. If we consider all the space in that single moment of time, we are considering a horizontal plane in the diagram. Each slice could be considered a "hypersurface," as it intercepts a spherical set of points in space, with each slice at a different time coordinate. Light rays arriving at our location from farther in the past come from larger distances, corresponding to the larger diameter of the circle intercepted on the light cone (Fig. 8.6).

Fig. 8.6  Diagram of the Light cone showing “slices” of the cone at various distances in the past, representing different epochs of the history of the universe. Within the light cone are nearby galaxies at a short distance in the past, more distant galaxies farther away, and the earliest galaxies in formation along with the Cosmic Microwave Background Radiation (or CMBR) at the larger distances in the past of the light cone. (Figure by the author)

In reality, the "light cone" is a sphere – one which moves through time and increases its radius. This four-dimensional sphere, or hypersphere, is hard to visualize, so it is more common to draw the light cone with the z-axis representing time. The "hypersphere" can be visualized with a set of nested spheres, each tagged with a different time. Figure 8.7 shows this representation of a light hypersphere, in which each sphere represents a different time corresponding to points farther down the light cone in Fig. 8.6. Once we have established the centrality of the speed of light in physics, it also becomes clear that the limits to our knowledge of the universe are entrained within the light beams of our light cone (or light hypersphere). The observable universe for astronomers exists in that thin membrane of the light cone extending back into the past, which is all that we can learn about the distant universe with telescopes that gather electromagnetic light waves. Since other forms of radiation, such as gravitational waves, are also known to travel at the speed of light, we realize that our knowledge of the universe is fundamentally shaped and limited by the extent of our past light cone. The "forbidden" zone outside the past and future light cones is as inaccessible to us as the space outside of a black hole is to any unfortunate observers trapped within it.
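The light-cone bookkeeping described above amounts to comparing a spatial separation with the distance light can travel in the elapsed time. A minimal sketch in Python (the helper classify_event is my own illustration, not from the text):

```python
C = 299_792_458.0  # speed of light, m/s

def classify_event(dt, dx, c=C):
    """Classify an event at time offset dt (seconds, negative = past)
    and distance dx (meters) relative to an observer at 'here and now'."""
    if abs(dx) <= c * abs(dt):
        # Inside (or on) the light cone: causal contact is possible
        return "past light cone" if dt < 0 else "future light cone"
    # Outside the cone: can neither be observed nor influenced from here
    return "elsewhere (unobservable from here and now)"

# The Sun, seen as it was about 500 s ago, lies within our past light cone
print(classify_event(dt=-500.0, dx=1.496e11))
# An event 1 m away happening "right now" lies outside the cone entirely
print(classify_event(dt=0.0, dx=1.0))
```

The second call illustrates the surprising point made in the text: even nearby events occurring "at this exact moment" sit outside the light cone, so we can only ever learn of them after a light-travel delay.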

Fig. 8.7  An alternative representation of the light cone as a 4-dimensional “hypersphere,” which includes nested spheres that would be the light from various distances – 2, 3, and 4 years ago. The light from Alpha Centauri would be present at 4.2 light years, as shown in the diagram. (Figure by the author)

Since Einstein's theory has shown that no massive particles can attain speeds beyond the speed of light, the set of points in the light cone represents the limit of our knowledge – a combination of spatial locations and times resulting from light's journey through space. For any beam of light reaching us as observers, we are necessarily viewing the end of this journey – and therefore, the origin of the beam lies at a time in the past. For galaxies and the distant universe, these beams of light can originate millions or billions of years in the past. Hence, telescopes serve as virtual time machines, allowing us to see the past directly. Einstein's lessons about light, time, and space have been part of the daily reality of twentieth and twenty-first-century physics, and even have practical implications as we adapt to the meaning of a finite speed for transmitting information of all sorts. The delay applies not only to light beams but also to the forces of Newtonian gravity, radio waves from moving charges, and the timing of beams used for cell phone communications, radio transponders, and images from spacecraft as they explore our universe. Einstein's fundamental realization, which forms the underpinnings of the models of modern physics, is that space and time are inseparable. In fact, the consideration of time as an additional dimension of space can explain the curvature of light in the presence of stars and black holes, as well as some unexpected effects that

occur when objects begin moving close to the speed of light or approach regions of highly curved space such as near black holes and neutron stars.

8.3 Curved Space and General Relativity

For our physical laws to work, forces must be able to transmit "action at a distance" to produce the observable effects of those forces. Newton's physics, electromagnetism, and early twentieth-century modern physics all advanced with the notion of action at a distance without fully working out the mechanisms by which these forces could be transmitted through empty space. Some of these deeper questions about "action at a distance" began almost as soon as Newton had published his epic work, the Principia. Leibniz, a contemporary of Newton's who is generally credited with a simultaneous discovery (with Newton) of calculus, was one of the most enthusiastic questioners. Leibniz questioned Newton's formulation by asking how masses can transmit forces through space. Newton himself had admitted this mechanism was unknown. In Newton's own words, "I have not been able to discover the cause of those properties of gravity from phenomena, and I feign no hypotheses." Leibniz and other philosophers objected to how the force of gravity operated across empty space and suggested that this kind of unexplained force at a distance had a "scholastic occult quality" (Eamon, 2010). The answer required Einstein's theory of general relativity, which he developed soon after his miracle year of 1905 and released in 1915. This "general" theory of relativity went further to describe the nature of spacetime and its interactions with mass and energy, and how both could impart "curvature" to spacetime. With Einstein's theory of spacetime, a natural geometric mechanism for transmitting forces was provided in the form of curvature and vibrations of spacetime. Masses bend spacetime, and objects respond to this curvature by following the straightest possible paths within that curved spacetime.
Einstein predicted that flexures of this spacetime due to rapid changes in mass distributions move through space at the speed of light, forming gravitational waves (Fig. 8.8). While the theory of special relativity concerns objects at rest or in constant motion, Einstein continued his journey to consider objects accelerating or in the presence of large masses that might produce significant forces and accelerations. These "non-inertial" or accelerated reference frames are the subject of Einstein's theory of general relativity, which was developed from 1907 to 1915 and was first described in his 1915 paper "The Field Equations of Gravitation" (Einstein, 1915a).

Fig. 8.8  Graphical description of gravitational radiation as arising from two colliding black holes. Einstein predicted the propagation of the disturbances in spacetime in 1915, and they were first directly detected a century later in 2015. (Image from R. Hurt, Caltech-JPL – https://www.ligo.caltech.edu/video/gravitational-­waves)

Einstein's theory of general relativity incorporated mass and energy equivalence by introducing the concept of spacetime curvature resulting from both mass and energy density. Energy (as might be found in dense accumulations of electromagnetic fields) was shown in general relativity to warp space and cause gravity just like mass, according to the equation E = mc². The central equation of general relativity (the Einstein Equation) links a mathematical expression of the local curvature of spacetime (the Einstein tensor) to the local density of energy and momentum, called the stress-energy tensor. An equation called the geodesic equation then describes how to calculate the straightest possible worldline for a particle moving through that curved spacetime. As the American physicist John Wheeler put it succinctly, "Spacetime tells matter how to move; matter tells spacetime how to curve." Einstein's general relativity theory put space, time, and matter on an equal footing, with all three interacting dynamically to create the forces of gravity and to provide the invisible contours of our universe that determine the motions of stars, galaxies, and the larger expansion of space itself. To return to Galileo's parable of the ship – we can imagine that same ship, with its bowl of goldfish and friendly passengers, in the case where a huge mass is just outside the ship or where the ship is accelerating. Einstein realized that the laws of physics needed to be the same for this accelerating ship, so that if the passengers did not have windows, they would not be able to differentiate between the ship accelerating in one direction and the ship suddenly encountering a new source

of gravity outside of the ship (Cowen, 2019, p. 27). This principle is known as the Equivalence Principle, an extension of Galileo's relativity that explains how the laws of physics are the same for accelerating observers and for those in a curved spacetime. To accomplish this extension of his theory, Einstein could draw from recent work in mathematics that described shapes and paths on curved surfaces. Using his notion of time as a fourth dimension, Einstein used the idea of a four-dimensional curved spacetime as the canvas on which all the forces and motions of the universe exist. Einstein's general theory of relativity is elegantly expressed in the form of "tensors" (like higher-dimensional vectors) that describe the curvature of space as determined by the presence of matter and energy. Einstein's general theory of relativity can be concisely expressed with a single equation, known as the Einstein Equation. This equation links the Einstein tensor G describing the local curvature of space and time with the stress-energy tensor T describing the local density of energy and momentum:

G = κT,  where κ = 8πG/c⁴

The Einstein Equation provides a unified and elegantly simple picture of how gravitation and all forms of energy interact with space and time. Some of the solutions to the Einstein Equation were derived decades or even a century before the corresponding phenomena were observed (as in the case of gravitational waves and black holes). The Einstein Equation is used routinely to interpret the observations that have detected the mysterious dark matter and dark energy in more recent times. As was the case for special relativity, beams of light provide the essential information for both the experimental verification of the theory and the conceptual basis for its origin. Within spacetime curved by mass and energy, light moves along "geodesics," which are the straightest paths through this warped spacetime (Oyvind and Sigbjorn, 2007). Just as a great-circle path is the shortest travel distance on the curved Earth, the geodesic and its curved path represent the shortest distance within curved space. The theory predicts that beams of light will bend near stars and black holes, resulting in gravitational "lenses" and other observable evidence of mass that may not otherwise be detectable. General relativity also explains the curved trajectories of objects responding to gravity as the shortest paths along a curved spacetime and provides something like a medium for transmitting gravitational and electromagnetic forces. While spacetime itself does not have a corresponding substance – like the aether – it can transmit energy in the form of ripples of spacetime (gravitational waves) and ripples of the electromagnetic field (light waves) (Figs. 8.9 and 8.10).
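Plugging SI values into the coupling constant κ = 8πG/c⁴ from the Einstein Equation shows why enormous concentrations of mass and energy are needed to curve spacetime appreciably. A quick sketch (the constant values are standard rounded figures, not from the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0    # speed of light, m/s

# Coupling constant in the Einstein Equation G = kappa * T
kappa = 8.0 * math.pi * G / c**4
print(kappa)   # about 2.08e-43 in SI units: spacetime is extraordinarily stiff
```

Because κ is around 10⁻⁴³ in SI units, only astronomical amounts of mass-energy on the right-hand side produce curvature large enough to notice on the left.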

Fig. 8.9  A beam of light in curved space will travel along a "geodesic," which is the straightest possible path in a curved spacetime. Within the curved space, this path plays the role of a straight line.

Fig. 8.10  Within our three-dimensional space, the motion of an object or a beam of light around a star or black hole follows a curved path that is the projection onto three spatial dimensions of the object’s or light flash’s worldline in the curved spacetime. For light, this bending results in the effect of gravitational lensing.

8.4 The Perihelion of Mercury and Einstein's Theory

One of the first verifications of Einstein's theory of relativity came soon after its publication, by comparing Mercury's orbit calculated in the new general relativity with the orbit computed with Newtonian mechanics. It had long been known that the orientation of Mercury's elliptical orbit would "precess" (that is, slowly rotate) relative to the Sun due to the forces from the other planets. This was first observed by the French astronomer Jean Baptiste Chevalier, and calculations by Urbain Le Verrier in 1859 suggested that the amount of the planet's precession, approximately 575 arcseconds per century after accounting for Earth's celestial precession, was too large based on the known forces from the planets. This discrepancy was an object of great concern for the European astronomical community, who suggested that there could be another undiscovered planet in the solar system, which they named Vulcan. Approximately 532 arcseconds per century could be accounted for using Newtonian mechanics, but the remaining 43 arcseconds per century were completely unexplained (Cyril, 2020). After many decades of searching, no planet had been discovered, and it remained a mystery why Newtonian physics could not explain Mercury's motion (Janssen and Renn, 2022). Einstein focused on Mercury's orbit as one of the test cases for his new theory, and Einstein's first calculations on Mercury came well before he published his general relativity theory in 1915. An early version of Einstein's theory, entitled the "Draft of a Generalized Theory of Relativity and of a Theory of Gravitation," was published by Einstein and Grossman in 1913. This led scientists to try to apply this revolutionary theory to Mercury's orbit.
Dutch astronomer Willem de Sitter calculated the precession using this preliminary version of the theory and derived an additional advance of only 18 arcseconds per century, which failed to account for the total precession. Einstein worked with his friend Michele Besso to fully account for Mercury's motion, and the exercise of calculating Mercury's orbit enabled a breakthrough. On the eve of a series of four Thursday presentations to the Prussian Academy in November 1915, Einstein revisited his calculations and worked feverishly to get his equations to fully account for all of Mercury's precession (Einstein, 1915b). Finally, a week before the third of his famous four November lectures, he derived a result that included the entire 43 arcseconds of extra motion. Einstein presented these results on November 18, 1915 and


B. E. Penprase

Fig. 8.11  The highly eccentric orbit of Mercury precesses and changes its orientation with respect to the Sun by 570 arcseconds per century. Still, only 532 arcseconds per century could be explained using the forces from the known planets using Newtonian mechanics. The discrepancy prompted the search for a new planet, provisionally named Vulcan, and was ultimately explained by Einstein’s theory of general relativity. (Figure by the author)

was so excited about the result that he reported to his friends that his heart shuddered in his chest and he was “beside himself with joy” (Baum and Sheehan, 2013, p. 173) (Fig. 8.11).
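Einstein’s result can be checked numerically with the standard general-relativistic formula for the perihelion advance per orbit, Δφ = 6πGM/[a(1 − e²)c²]. A minimal sketch in Python, using standard reference values for the physical constants and Mercury’s orbital elements (these numbers are reference values, not taken from the text):

```python
import math

# Standard reference values (not from the text)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
a = 5.791e10           # Mercury's semi-major axis, m
e = 0.2056             # Mercury's orbital eccentricity
T_orbit = 0.2408       # Mercury's orbital period, years

# General-relativistic perihelion advance per orbit, in radians
dphi = 6 * math.pi * G * M_sun / (a * (1 - e**2) * c**2)

# Accumulate over a century and convert radians to arcseconds
orbits_per_century = 100 / T_orbit
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600

print(f"GR perihelion advance: {arcsec:.1f} arcsec/century")  # ~43
```

The result reproduces the 43 arcseconds per century that Newtonian perturbation theory could not explain.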

8.5 The Solar Eclipse of 1919 Another pivotal moment for validating Einstein’s theory of relativity came soon after its publication, with the solar eclipse of 1919. The 1919 solar eclipse not only made Einstein and the British scientist Arthur Eddington scientific celebrities but also confirmed that massive objects like the Sun curve space and time and can therefore deflect light. Einstein had written about the possibility of the deflection of light passing near the Sun as early as 1911 and derived an initial estimate of the shifted positions of about 0.87 arcseconds. These tiny shifts of light arise from the spacetime curvature due to the

8  Spacetime and Curved Space 


Sun’s mass. They could be detected by carefully measuring the positions of background stars made visible by a solar eclipse, and then comparing those positions with the same star field observed without the presence of the Sun. Einstein updated his calculation with a full accounting of the curvature of space in 1916, after he had published his complete theory of general relativity. The revised shifts predicted by general relativity amounted to 1.75″ at the Sun’s limb, well within reach of most telescopes of the time (Fig. 8.12). Einstein realized the powerful confirmation that solar eclipse observations would provide for general relativity. He wrote to many of the world’s leading scientists as early as 1913, urging them to fund and undertake observations (Kennefick, 2009). In one letter to George Ellery Hale from October 14, 1913, Einstein included a diagram of the deflection of starlight and asked for help observing this deflection. One eclipse expedition took up Einstein’s call. Led by the German astronomer Erwin Freundlich, the team attempted to observe the solar eclipse in the Crimea on August 21, 1914. Unfortunately, the astronomers were crossing Russia when World War I broke out, and the group was arrested by the Tsar’s police and all their equipment confiscated (Landau, 2019). Einstein persisted in his campaign for eclipse observations. Another letter, dated December 15, 1915, was addressed to Otto Naumann, who managed the Royal Prussian Observatory in Neubabelsberg. Einstein urged Naumann to observe the upcoming 1919 eclipse, which would show that “a light ray passing by a celestial body is deflected by it,” a consequence Einstein described as “the most interesting and astonishing” of his theory. Einstein urged German and other astronomers to take on the task during upcoming solar eclipses and acknowledged that the impact of World War I had so far prevented much progress. 
As Einstein noted in his letter, “war (and weather) deprived this expedition of success” (Einstein, 1915c).
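The 1.75″ prediction follows from the general-relativistic deflection formula α = 4GM/(c²b) for a light ray grazing the Sun at impact parameter b equal to the solar radius; Einstein’s earlier 1911 estimate was roughly half of this. A quick numerical check (the solar mass and radius are standard reference values, not from the text):

```python
import math

# Standard reference values (not from the text)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m

# Full general-relativistic deflection at the solar limb
alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = alpha_rad * (180 / math.pi) * 3600

print(f"Deflection at solar limb: {alpha_arcsec:.2f} arcsec")  # ~1.75
# The 1911 calculation gave half this value, about 0.87-0.88 arcsec
print(f"Half-deflection: {alpha_arcsec / 2:.2f} arcsec")
```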

Fig. 8.12  Einstein included diagrams of light deflection like the one above in his letters to the leading astronomers of the time, urging them to observe the stars surrounding the Sun during a solar eclipse. While the 1914 eclipse observations were made impossible by the disruption of World War I, the observations Einstein called for were finally completed during the solar eclipse of May 29, 1919. (Figure by the author)


Better days were ahead for the eclipse viewers as World War I neared its end. During the solar eclipse of May 29, 1919, the Sun would be in eclipse for a full six minutes and would be located in a region of the sky near the bright star cluster known as the Hyades, making it much easier to locate and measure the stars near the Sun. Arthur Eddington, director of the Cambridge University Observatory, had convinced Frank Watson Dyson, the British Astronomer Royal and director of the Royal Greenwich Observatory, to organize and fund an expedition to capture images of the event. Eddington’s work at the observatory during the war was in lieu of military service, as Eddington was a devout Quaker and a conscientious objector. Dyson agreed to sponsor Eddington for the eclipse expedition just at the moment when a military tribunal was reviewing whether Eddington should be conscripted to serve in July 1918. Thankfully for science and for Eddington, the tribunal was convinced to release Eddington for the expedition. Eddington began preparations in earnest as the final stage of World War I was concluding. Eddington set up telescopes on the island of Principe, off the coast of West Africa, and a second station was established at Sobral in northern Brazil. The images taken by their makeshift observatories would be compared with images of the same star field taken at night on different dates, without the Sun’s bending of spacetime. Eddington’s observations from Principe were almost clouded out, but in the last moments of totality, Eddington got some images (Fig. 8.13). The images from Sobral were also successful, but problems with the larger of the two telescopes produced images that were out of focus. Eddington could compare the best images from Principe with plates taken in England before the eclipse. 
As Eddington described it: Arrangements had been made to measure the plates on the spot, not entirely from impatience, but as a precaution against mishap on the way home, so one of the successful plates was examined immediately… Three days after the eclipse, as the last lines of the calculations were reached, I knew that Einstein’s theory had stood the test and the new outlook of scientific thought must prevail.

Eddington returned to England, performed additional analysis of the plates from both sites, and was then ready to announce his results on November 6, 1919, at a joint meeting of the Royal Society and the Royal Astronomical Society in London. Ernest Rutherford, the famous physicist, was present at the event and recalled the nature of Eddington’s 1919 announcement and the impact it had on a world at that point eager for some good news after being shaken by a cataclysmic war. In Rutherford’s words:


Fig. 8.13  The actual image taken of the 1919 solar eclipse from the site in Sobral, Brazil, digitized and enhanced using modern techniques by astronomers in Germany. An enormous solar prominence is seen at the top of the Sun, and the stars of the Hyades star cluster are labeled and indicated with circles. (Image Credit: ESO/ Landessternwarte Heidelberg-Königstuhl/F. W. Dyson, A. S. Eddington, & C. Davidson; available at https://www.eso.org/public/usa/images/potw1926a/)

The war had just ended; and the complacency of the Victorian and the Edwardian times had been shattered. The people felt that all their values and all their ideals had lost their bearings. Now, suddenly, they learnt that an astronomical prediction by a German scientist had been confirmed by expeditions to Brazil and West Africa and, indeed, prepared for already during the war, by British astronomers. Astronomy had always appealed to public imagination; and an astronomical discovery, transcending worldly strife, struck a responsive chord.

While the precise division of credit between Eddington, Dyson, and Einstein could be debated, it was unquestionably a monumental advance for science. The British physicist J. J. Thomson declared that the eclipse provided “the most important result obtained in connection with the theory of gravitation since Newton’s day.” The measurement was made possible by the convergence of astronomical techniques – photography and the precision measurement of star positions – with astrophysical theory in the form of Einstein’s general relativity. The measured deflection from the 1919 eclipse was about 1.6 arcseconds, as precise a result as was possible using photographic plates from a portable telescope in the early part of the century. As we discuss later in Chap. 9, the current


spacecraft like JWST and GAIA can measure angular locations over 1000 times more accurately – enabling astronomers to see tiny deflections of stars and even extrasolar planets from more modest gravitational sources. The gravitational lensing technique is also routinely used with the HST, and now the JWST, to detect large masses of dark matter that bend the light of distant galaxies through their imprint on spacetime. These gravitational lensing observations have enabled large-scale maps of dark matter to be created, which we will discuss in Chap. 13.

8.6 The Schwarzschild Solution and Black Holes The proof of general relativity from the observations of Mercury’s orbit and the solar eclipse of 1919 was crucial for confirming Einstein’s theory with multiple sources of precise experimental data. General relativity also stimulated thoughts of how light could bend around even more intense sources of gravitation, such as the objects we now know as “black holes.” British and French scientists, over a century earlier, had contemplated the limits of gravity from the Sun or a planet and approached the idea of a black hole using classical physics. The British scientist John Michell considered the effects of a huge star with a gravitational field so large that it would capture its own light. In a 1784 paper, he wrote (Montgomery et al., 2009): … if the semi-diameter of a sphere of the same density with the Sun were to exceed that of the Sun in the proportion of 500 to 1, a body falling from an infinite height towards it, would have acquired at its surface a greater velocity than that of light, and consequently, supposing light to be attracted by the same force in proportion to its vis inertiae, with other bodies, all light emitted from such a body would be made to return towards it, by its own proper gravity.

French philosopher Pierre-Simon Laplace had also independently conducted a thought experiment on the extreme case of “escape velocity” using Newton’s mechanics. If one considers an object launched from the surface of any planet, such as Earth, the planet’s gravity will pull it back unless it exceeds a limiting velocity known as the escape velocity. Using simple math and Newton’s law of gravitation, one can show that this escape velocity Vesc is given by

Vesc = √(2GM / R)




For the Earth, the escape velocity is 11.2 km/s; for the larger planets Jupiter and Saturn, the escape velocities are 60.1 km/s and 36.9 km/s, respectively. Laplace considered the case of an object so dense that even light would not be able to escape. In Laplace’s 1796 work on the nature of the solar system he wrote: The gravitational attraction of a star with a diameter 250 times that of the Sun and comparable in density to the Earth would be so great that no light could escape from its surface. The largest bodies in the universe may thus be invisible by reason of their magnitude.
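These escape velocities can be verified directly from the formula above. A minimal sketch, using standard reference masses and mean radii (not from the text; slightly different radius conventions account for small differences from the values quoted above):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def v_escape(mass_kg, radius_m):
    """Newtonian escape velocity, v = sqrt(2GM/R), returned in km/s."""
    return math.sqrt(2 * G * mass_kg / radius_m) / 1000

# Reference masses (kg) and mean radii (m)
bodies = {
    "Earth":   (5.972e24, 6.371e6),
    "Jupiter": (1.898e27, 6.9911e7),
    "Saturn":  (5.683e26, 5.8232e7),
}
for name, (m, r) in bodies.items():
    # Earth ~11.2 km/s; Jupiter ~60 km/s; Saturn ~36 km/s
    print(f"{name}: {v_escape(m, r):.1f} km/s")
```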

This idea of such a fantastically dense star remained one of speculative imagination for over a century, until Einstein’s theory of general relativity provided a firm foundation for extrapolating the limits of time and space in the presence of extremely dense matter. The idea was given mathematical shape by the German scientist Karl Schwarzschild in 1916. Schwarzschild had left his position as Director of the Astrophysical Observatory in Potsdam to serve in the trenches of World War I as a volunteer in the German army in 1914, just as the war broke out. While based at a weather station in Belgium, he put his technical skills to work calculating shell trajectories for the grueling artillery battles that characterized World War I, and he was eventually moved to the front lines. To pass the time at the front, Schwarzschild studied an early copy of Einstein’s theory and worked on solutions to observational problems, such as the discrepancy in Mercury’s orbit. Schwarzschild’s experience on the front lines made him philosophical, and he wrote, Often I have been unfaithful to the heavens. My interest has never been limited to things situated in space, beyond the moon, but has rather followed those threads woven between them and the darkest zones of the human soul, as it is there that the new light of science must be shone.

To live up to his goal of providing the “new light of science” in a dark time, Schwarzschild worked on problems in general relativity and quantum mechanics while on the Russian front. Schwarzschild worked on solving the field equations of general relativity for Mercury’s orbit. In the course of this work, he derived an exact solution for the spacetime around a “singularity” – an ultra-dense source of gravity much like the superdense stars contemplated by Laplace and Michell over a century earlier. He immediately wrote to


Einstein, and his letter dated 22 December 1915 was delivered to Einstein several weeks later, rumpled from its long journey. Schwarzschild described his location as “at the Russian Front” and opened with a cheery greeting: Dear Mr. Einstein! In order to become familiar with your theory of gravitation, I have dealt more closely with the problem you posed in your work on Mercury’s perihelion and solved it in the 1st approximation.

In trying to solve the problem of Mercury’s orbit, Schwarzschild noticed that he could obtain a solution to Einstein’s equations and noted as follows: there is only one line element that satisfies your conditions… and is singular at the origin and only at the origin.

Schwarzschild concluded his letter with the following quote (Merriam, 2022): It is a wonderful thing that the explanation for the Mercury anomaly emerges so convincingly from such an abstract idea. As you see, the war is kindly disposed toward me, allowing me, despite fierce gunfire at a decidedly terrestrial distance, to take this walk into this your land of ideas.

Schwarzschild’s results provided a solution to general relativity at a “singularity” – a location we would now call a black hole. The solution came unexpectedly during his calculations for Mercury’s precession and is now recognized as the first mathematical description of how space and time near a black hole behave in general relativity. This solution is now known as the “Schwarzschild metric” – the solution to the Einstein equation in the vacuum surrounding any non-spinning, spherical object, including a black hole (Einstein, 1915d). Einstein was amazed that Schwarzschild could determine an exact solution to his equations just a month after his theory was published. The formulation was later extended to charged black holes by Hans Reissner and Gunnar Nordström and, decades later, to rotating black holes by the mathematician Roy Kerr. Within Schwarzschild’s solution is the boundary between the interior of the black hole and the rest of the universe, known as the event horizon. This boundary designates the point at which light can no longer escape the singularity’s gravity. Einstein’s theory describes how time and space change approaching the event horizon, with the increasingly curved spacetime causing an effect known as gravitational time dilation. As an object crosses the event horizon, time would appear to stop to an outside observer, and the object passes through the


horizon to leave our observable universe and join the intense gravitational singularity – leaving only an imprint of light on the event horizon and a slight boost in mass for the black hole. Any light emitted by an object falling through the event horizon decays exponentially in brightness, with a decay constant roughly equal to the light travel time across the diameter of the black hole’s event horizon. So, for a solar-mass black hole, one receives the last photon from the falling object within microseconds of its passing through the event horizon. The radius of the event horizon of a spherically symmetric, non-rotating black hole is called the Schwarzschild radius, and it is determined entirely by the mass of the black hole. It can be calculated directly from the equation for escape velocity, with the value of the escape velocity set to the speed of light. The mathematical expression in terms of the gravitational constant G and the speed of light c is below and works out to about 3 km for a black hole with mass M equal to one solar mass. The radius varies directly with the mass M, so larger or smaller black holes have proportionally larger or smaller values of RS. This radius can be expressed very simply as:

RS = 2GM / c²
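A short numerical check of the formula, using standard reference values for the constants (with these values the formula gives about 2.95 km per solar mass, which rounds to the 3 km quoted above):

```python
# Standard reference values (not from the text)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(mass_kg):
    """Event horizon radius R_S = 2GM/c^2 of a non-rotating
    black hole, returned in kilometers."""
    return 2 * G * mass_kg / c**2 / 1000

print(schwarzschild_radius_km(M_sun))       # ~2.95 km for one solar mass
print(schwarzschild_radius_km(62 * M_sun))  # ~183 km for a 62 solar-mass hole
```

Because RS is linear in M, a 62 solar-mass black hole (discussed later in this chapter) simply has 62 times the solar value.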

8.7 Time Dilation in Curved Space General relativity predicts how the behavior of time and space is altered in curved spacetime, resulting in the effect known as gravitational time dilation, which in some ways is analogous to the time dilation of special relativity. This is the basis for the prediction (since confirmed observationally) that time slows down in the vicinity of dense objects like white dwarf stars, neutron stars, and black holes. The observational evidence comes from the measurement of spectral lines from atoms near the surfaces of these dense stars. Since time in these regions advances more slowly, the atoms and their electromagnetic light appear to oscillate more slowly – resulting in the observed effect of “gravitational redshift.” To quantify this effect, we can derive an expression for gravitational time dilation – the factor by which time slows near a massive object. This expression is:



T = t (1 − 2GM/Rc²)^(1/2)   or   T = t (1 − RS/R)^(1/2)




Here T would be the time measured by someone at rest a distance R from an object of mass M, t is the time measured by an observer at rest very far from the gravitating object, G is the gravitational constant, and c is the speed of light. By bundling the values into the Schwarzschild radius RS, the equation simplifies considerably, making calculations of time dilation effects near black holes simple. To give one example close to Earth, an atomic clock will run about 1 part in 10¹⁰ slower on the surface of Earth compared to a clock at an altitude of 30,000 feet. This has been confirmed by flying atomic clocks in airplanes – in one experiment conducted in 1971, a pair of scientists, Joseph Hafele and Richard Keating, took four cesium-beam atomic clocks aboard commercial airliners. The clocks flew twice around the world, once in the eastward direction and once in the westward direction, and were compared to an identical “stationary” clock based at the US Naval Observatory. The eastward-moving clock was moving fastest, since the plane’s motion combined with the Earth’s rotation. Special relativity predicted that the fastest-moving clock would show the most time dilation, amounting to a maximum value of 184 billionths of a second during the experiment. Both airborne clocks were also in slightly weaker gravity than the one at the surface and would therefore run faster, resulting in a shift of about 150 billionths of a second from general relativity compared to the clock at the surface. These corrections, while small, were confirmed and have measurable effects in our daily lives due to time shifts between the Earth’s surface and orbiting GPS satellites (Hafele and Keating, 1972). Much more extreme time dilation near a black hole is predicted by general relativity. Georges Lemaître was the first to observe, in 1933, that if one were to enter a black hole, the time dilation would be so extreme that to an outside observer, time would stop when crossing through the event horizon. 
This conclusion can be seen directly from the extreme value of the time dilation equation. If one is at rest at the Schwarzschild radius – so that R = RS = 2GM/c² – and substitutes this into the time dilation equation, the ratio t/T goes to 1/0, or infinity. This is a simple mathematical confirmation that, to a distant observer, time stops at the edge of the event horizon. The extreme time dilation near a black hole is now part of our popular culture and a mainstay of science fiction movies. For example, the movie Interstellar uses time dilation to explain how the main characters age more slowly near the black hole named Gargantua. Kip Thorne, a Nobel prize winner and leading theorist in general relativity, consulted for the movie and detailed his calculations in his book The Science of Interstellar. To accomplish the needed time dilation, Thorne had to imagine that the Gargantua black hole was 100 million solar masses, which creates enough of a warp in


spacetime to cause the required time dilation without destroying the planet used by the heroes of the movie (known as Miller’s planet). For a black hole of this size, the event horizon would be about 300 million km in radius, using our earlier equation for RS. Thorne worked backward from the director’s requirement that 1 hour on the planet correspond to seven years on Earth. To make this possible, the Gargantua black hole was required to spin at a rate barely allowed by theory, to prevent the planet from falling into the black hole. The spin rate was set at only one part in 100 trillion below the maximum permitted by theory, which allowed the planet to get close enough to the black hole that time ran 60,000 times slower there than on Earth. Such an extreme time dilation required the planet to hover just kilometers from the edge of the event horizon (Thorne and Nolan, 2014). To provide a more realistic example, we can consider the time dilation for an observer approaching a more reasonably sized 62 M⊙ black hole, where 1 M⊙ represents one solar mass. The first detected gravitational wave event, in 2015, which led to Kip Thorne’s share of the 2017 Nobel Prize, came from the formation of such a black hole from the merger of two smaller black holes. A 62 M⊙ black hole will have an event horizon radius of about 186 km (assuming it is not rotating very fast). An observer at rest about 205 km from the black hole (just 10% outside the event horizon) will appear to have clocks that run about three times slower than a distant observer’s clocks. For an observer at rest just 1% outside the event horizon, the time dilation factor increases further, and time appears to run ten times slower. As the observer crosses through the event horizon, distant observers would see the blinking rate of lights on the traveler’s spacecraft decrease asymptotically toward zero. 
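The factors quoted above follow directly from the time dilation formula T = t(1 − RS/R)^(1/2): as seen by a distant observer, a clock at radius R runs slow by the factor t/T = 1/√(1 − RS/R). A minimal sketch (the function name and its argument convention, R expressed as a multiple of RS, are illustrative):

```python
import math

def dilation_factor(r_over_rs):
    """Factor by which a distant observer sees a clock at radius R
    run slow: t/T = 1/sqrt(1 - R_S/R), with R given as R/R_S."""
    return 1 / math.sqrt(1 - 1 / r_over_rs)

# At 10% outside the event horizon, clocks run ~3.3x slower...
print(f"{dilation_factor(1.10):.1f}")
# ...and at 1% outside, ~10x slower; the factor diverges as R -> R_S
print(f"{dilation_factor(1.01):.1f}")
```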
The observer crossing the horizon (if still alive) would see the rate of distant events rapidly increase until that observer is crushed into the singularity a short time later. Physicists like Stephen Hawking and Roger Penrose created very effective graphical representations of the physics near black holes and in the early universe. Their diagrams show how the curved spacetime in both cases effectively rotates an object’s light cone, changing which regions of spacetime remain visible and reachable. In the limiting case in which an object or a person enters the event horizon of a black hole, the light cone becomes tangent to the event horizon, and eventually the visibility of the object to outside observers disappears entirely (Fig. 8.14).


Fig. 8.14  A spacetime diagram of the light cones as they approach a black hole, representing two spatial dimensions and the time dimension on the vertical axis. The matter as it converges on the black hole is incorporated into the singularity inside the event horizon. Light cones of observers are also rotated as they approach the singularity so that their edges become tangent to the event horizon and then wholly contained within the event horizon. (Figure by the author, based on R. Penrose, SpaceTime and Cosmology)

8.8 General Relativity and Models of the Universe The effects of general relativity can also be seen across the entire volume of spacetime that constitutes our expanding universe. The bundle of worldlines of all objects in the universe can be represented as an envelope of points in an expanding spacetime. A visualization of the model can be made with two spatial dimensions and a third axis representing time, just as we did with the idea of the light cone. In such a model of the universe, we can cut a horizontal cross-section across the model that represents an instant of time in the universe. As presented in Figs. 8.15 and 8.16, models of the universe can be


Fig. 8.15  Diagram of a “closed” universe – which emerges from the early universe, then decelerates, and eventually collapses back to a singularity in a “Big Crunch.” The curvature of space in this model is quite high and is set by the high density of matter, whose gravitational deceleration reverses the initial expansion. (Figure by the author)

represented to show the evolution of spacetime expansion, along with a slice of time that represents the three-dimensional volume of space at that instant. Within our universe, we have the local illusion of flat Euclidean space, just as any point on the surface of our curved Earth gives the illusion of a flat plane. The expansion of space shows a linear increase of velocity with distance for nearby objects. However, at greater distances we can sample changes in the rate of expansion – acceleration or deceleration – that would indicate curvature. Our universe, like our Earth, reveals its curvature from the perspective of greater distances. Just as ancient navigators on the Earth saw the world curving away, astronomers detect the universe’s curvature in the form of a departure from Hubble’s linear relationship between velocity and distance


Fig. 8.16  Model of an ever-expanding universe, in which a lower matter density allows the universe to continue expanding indefinitely. Current observations of the matter density of the universe have ruled out the closed universe, and the low curvature detected from observations of distant galaxies suggests a universe that will continue to expand forever. (Figure by the author)

for the most distant galaxies. Such observations required giant telescopes like the Palomar 200″ telescope to be constructed, but cosmological models were derived much earlier, soon after the theory of general relativity was published. The earliest cosmological models were proposed by Friedmann in 1922 and included a complete set of equations to describe the expansion of the universe, but they escaped the notice of much of the scientific community (Belenkiy, 2012). Lemaître published his own cosmological models in 1927, independently of Friedmann, and consulted with astronomers to integrate the latest observations, which suggested that recession velocity was proportional to distance. When Hubble’s results were published in 1929, Lemaître’s model was taken more seriously. By 1931, Lemaître began to recognize that “the present state of quantum theory suggests a beginning of the world very different from the present order of Nature” and suggested that the universe started in “a primeval atom.” Lemaître presented this idea at a 1931 conference sponsored by the British Science Association and introduced the notion of a


“fireworks” theory that gave rise to the description of the expansion of the universe that became fondly known in later years as the Big Bang (Soter and deGrasse Tyson, 2019). Lemaître included several adjustable parameters in his model, among them the overall mass of the universe (which would produce a “closed” universe if there was sufficient mass to reverse the expansion) and a cosmological constant that provided a source of repulsive force countering the attractive force of gravity. When both parameters are included, with a mass density below the critical density for closing the universe, Lemaître’s model reproduces our currently observed universe quite well. Lemaître presciently interpreted the cosmological constant as a real vacuum energy, unlike Einstein, who had included the constant in his equations to prevent cosmic expansion and famously came to consider it his “greatest blunder” (Borissov, 2018, p. 20). These new models of the universe were made possible by general relativity. They gave a mathematical description of the beginnings of the Big Bang and predictions for cosmic expansion that could produce dramatically different outcomes depending on the overall mass of the universe. These potential outcomes included a closed universe ending in a cataclysmic “Big Crunch” as gravity pulled the universe back on itself. Another possibility was expansion forever, an “open” universe, in the case where the overall amount of matter was not sufficient to reverse the Big Bang. Another famous twentieth-century model was the “steady state” cosmology, which bypassed the difficulties of the universe’s origin from a “primeval atom.” With the proliferation of models for the universe, it was time to develop a new generation of telescopes powerful enough to view the distant horizons of spacetime, where the curvature of spacetime would be observable. 
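The closed-versus-open distinction can be illustrated with a toy matter-only Friedmann model. In units where the present scale factor and Hubble constant are both 1, the expansion rate obeys (da/dt)² = Ωm/a + (1 − Ωm); for Ωm > 1 the right-hand side vanishes at a finite scale factor, so the universe turns around and recollapses, while for Ωm ≤ 1 it never does. A sketch (this simplified model, the function name, and the parameter values are illustrative, not from the text):

```python
def turnaround_radius(omega_m):
    """Maximum scale factor of a matter-only universe with density
    parameter omega_m, from setting (da/dt)^2 = omega_m/a + (1 - omega_m)
    to zero. Returns None when the universe expands forever."""
    if omega_m <= 1:
        return None  # "open" or flat: da/dt never vanishes
    return omega_m / (omega_m - 1)  # "closed": expansion halts here

print(turnaround_radius(2.0))  # closed universe: halts after a doubles
print(turnaround_radius(0.3))  # open universe: None, eternal expansion
```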
These powerful new telescopes would be able to provide evidence of the universe’s ultimate fate by looking back in time to the distant horizons of spacetime. These telescopes would eventually reach the most distant limits of our past light cone and detect entirely new and unexpected forms of invisible matter, which we will explore further in the next chapter.

References Baum, R. P., & Sheehan, W. (2013). In search of planet Vulcan (p. 173). Springer. Belenkiy, A. (2012). Alexander Friedmann and the origins of modern cosmology. Physics Today, 65(10), 38–43. https://doi.org/10.1063/PT.3.1750 Borissov, G. (2018). The story of antimatter: Matter’s vanished twin (p. 20). World Scientific Publishing.


CERN. (2008). CERN – LHC: Facts and figures. Cern.ch. https://public-archive.web.cern.ch/en/LHC/Facts-en.html Cowen, R. (2019). Gravity’s century: From Einstein’s eclipse to images of black holes (p. 27). Harvard University Press. Cyril. (2020). Advance of the perihelion of Mercury. Einsteinrelativelyeasy.com. http://einsteinrelativelyeasy.com/index.php/general-relativity/174-advance-of-the-perihelion-of-mercury Eamon, W. (2010, December 6). Gravity: Manifest or mechanical? Revisiting the Leibniz-Clarke correspondence | The official website of author William Eamon. William Eamon. https://williameamon.com/?p=382 Einstein, A. (1915a). Volume 6: The Berlin Years: Writings, 1914–1917 (English translation supplement), page 117. Einsteinpapers.press.princeton.edu. https://einsteinpapers.press.princeton.edu/vol6-trans/129 Einstein, A. (1915b). Volume 6: The Berlin Years: Writings, 1914–1917, page 242. Einsteinpapers.press.princeton.edu. https://einsteinpapers.press.princeton.edu/vol6-doc/270 Einstein, A. (1915c). Volume 8: The Berlin Years: Correspondence, 1914–1918 (English translation supplement), page 157. Einsteinpapers.press.princeton.edu. https://einsteinpapers.press.princeton.edu/vol8-trans/185 Einstein, A. (1915d). Volume 8: The Berlin Years: Correspondence, 1914–1918 (English translation supplement), page 163. Einsteinpapers.press.princeton.edu. https://einsteinpapers.press.princeton.edu/vol8-trans/191 Galileo, G. (1970). Dialogue concerning the two chief world systems, Ptolemaic and Copernican. University of California Press. Hafele, J. C., & Keating, R. E. (1972). Around-the-world atomic clocks: Predicted relativistic time gains. Science, 177(4044), 166–168. https://www.jstor.org/stable/1734833 Janssen, M., & Renn, J. (2022). How Einstein found his field equations. Springer. Kennefick, D. (2009). Testing relativity from the 1919 eclipse—A question of bias. Physics Today, 62(3), 37–42. 
https://doi.org/10.1063/1.3099578 Landau, E. (2019, May 24). A Total solar eclipse 100 years ago proved Einstein’s general relativity. Smithsonian; Smithsonian.com. https://www.smithsonianmag.com/ science-­nature/total-­solar-­eclipse-­100-­years-­ago-­proved-­einsteins-­general-­relativity-­ 180972278/ Merriam, A. (2022, April 19). Karl Schwarzschild’s Letter to Albert Einstein. . https:// www.cantorsparadise.com/karl-­s chwarzschilds-­l etter-­t o-­a lbert-­e instein­6661734dd3e Minkowski, R. (1910). Space and time Minkowski’s papers on relativity free version (p.  43). https://www.minkowskiinstitute.org/mip/MinkowskiFreemium MIP2012.pdf

8  Spacetime and Curved Space 

161

Montgomery, C., Orchiston, W., & Whittingham, I. (2009). Michell, Laplace, and the origin of the black hole concept. Journal of Astronomical History and Heritage, 12(2), 90–96. Oyvind, G., & Sigbjorn, H. (2007). Einstein’s general theory of relativity: With modern applications in cosmology. Springer. Soter, S., & deGrasse Tyson, N. (2019). Georges Lemaitre: Father of the big bang. American Museum of Natural History. https://www.amnh.org/learn-­teach/ curriculum-­collections/cosmic-­horizons-­book/georges-­lemaitre-­big-­bang Thorne, K.  S., & Nolan, C. (2014). The science of interstellar. W.W.  Norton & Company.

9 Mapping Space to the Edge of the Observable Universe

With the sensational discovery of the expansion of the universe at Mount Wilson and regular discoveries coming from large mountaintop telescopes, public interest and excitement in astronomy were at a peak in the early twentieth century. The development of modern physics, the first models of the expanding universe using general relativity, the first maps of the Local Group of galaxies, and the discovery of the overall expansion of space offered the promise of understanding the nature of space and time through astronomy. The growth of private foundations and federal funding in the US, along with the rise of observatory consortia in the US and Europe, also ushered in a new era of "big science," which was applied to building monumental telescopes on Earth and in space for mapping the universe. This challenge has been met through a succession of space telescopes that have broadened our knowledge across the entire electromagnetic spectrum, from radio waves to gamma rays, and through telescopes on remote mountaintops in Chile, Hawaii, and around the globe. Beginning with the Mount Palomar 200″ telescope and culminating in the recently launched James Webb Space Telescope and the GAIA astrometry mission, our maps of the universe have reached a point where we can chart space to the limits of the observable universe.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_9

9.1 The Mount Palomar 200″ Telescope

The discovery of the Big Bang from observations with Mt. Wilson's 100-inch telescope validated Hale's vision for a mountain observatory that could become an astrophysical laboratory and revolutionize astronomy. One of Hale's mottos, which drove him to build the world's largest telescope several times, was "Make no small plans. Dream no small dreams." Hale lived by this credo, and in 1926 he set his sights on building an even larger observatory that would house the world's largest telescope, his fourth such quest. The powerful new telescope would have twice the aperture of the Mt. Wilson 100″ telescope and would be located on a new site over 100 miles away, in a remote part of the California wilderness untouched by the encroaching lights of Los Angeles. The new observatory's telescope, on Palomar Mountain, was later named the George Ellery Hale Telescope in his honor, and it pushed the engineering of its time to its limits in both sheer scale and precision. Hale consulted with his Mount Wilson staff on the best approaches for the new telescope. One staff member, Francis Pease, had experimented with a "Michelson interferometer" fully 20 feet in diameter, whose encouraging measurements of stellar diameters suggested the new telescope could be as large as 300 inches. However, since Hale had already built the world's largest telescope three times, he remembered all too well the many challenges and setbacks of the 60″ and 100″ telescopes, and he urged a more moderate approach using a 200″ diameter mirror. Hale took his case for the giant 200″ telescope directly to the public in a popular 1928 article in Harper's Magazine entitled "The Possibilities of Large Telescopes" (Hale, 1928). The article exemplifies Hale's lofty vision, making the case for a new telescope in poetic and nationalistic terms calculated to appeal to the public and potential donors. Hale's article drew on the urge for exploration that runs through all of science.
It stated that "like buried treasures, the outposts of the universe have beckoned to the adventurous from immemorial times" to explore the "lure of the uncharted seas of space." Hale then shifted focus to the rising prowess of US industry, noting that after the major European powers had their chance to build larger and larger observatories, "the initiative seems to have passed chiefly to American leaders of industry." Hale's article celebrated his earlier benefactors, Charles T. Yerkes, John D. Hooker, and the Carnegie Institution, who funded the 40″ Yerkes telescope, the 100″ telescope mirror, and the completion of the 100″ telescope, respectively. He then noted that "the opportunity remains" for "some other donor" to satisfy his own curiosity regarding the "nature of the universe and the problems of its unexplored depths." He even suggested that "a far-sighted industrial leader, whose success may depend in the long run on a complete knowledge of the nature of matter and its transformations, would hardly be willing to be limited by the feeble range of
terrestrial furnaces," perhaps suggesting a possible practical utility of astronomy research. The "far-sighted industrial leader" who took up this opportunity was not an individual but the Rockefeller-funded International Education Board, which agreed in 1928 to pay $6 million for the construction costs, with additional funding to enable the new facility to be operated by Caltech and the Mount Wilson Observatory. The 200-inch mirror blank was to be cast by the Corning Glass Works, using a revolutionary new low-expansion glass we now know as Pyrex. Corning began work on the mirror in 1934 and cast the huge 20-ton blank in 1935, followed by a full year of annealing and cooling. The first blank failed due to bubbles within the glass, and the second was endangered when flood waters reached the Corning plant during the annealing process. Despite the challenges, the completed blank was transported across the country in a highly publicized tour and was greeted in small towns by officials and marching bands. After it arrived in Pasadena, the mirror required seven more years of careful grinding and polishing, work that was interrupted by World War II in 1942. In total, the 200″ mirror required over 180,000 person-hours of grinding and polishing (Voller, 2021, p. 169). Hale himself had died in 1938 and did not live to see his vision realized. Finally, in November 1947, the mirror was ready for transport to Palomar Mountain, and on June 3, 1948, the Palomar Observatory was dedicated. Edwin Hubble was among the first observers to use the 200″ telescope, and regularly scheduled observations began on November 12, 1949 (Leverington, 2017a, p. 9). The combination of titanic scale and delicate precision in the Hale telescope was unprecedented and remains inspiring today, even though telescopes with larger apertures now exist. The Hale telescope towers 55 feet above the observing floor, the height of a five-story building.
The enormous steel horseshoe-shaped yoke weighs over 500 tons but is balanced so precisely it can be moved with the force of one finger. This precise balance is accompanied by minimal friction, since the entire horseshoe assembly floats on a thin film of oil, allowing a tiny 1/12-horsepower motor to move the Palomar Hale telescope. Its pointing is so accurate that it can place a star precisely on its detectors by aiming the back of the telescope to within a tenth of a millimeter. The huge mirror was polished to near perfection and provided light-gathering power eight times that of the 100″ telescope, able to detect the light of a candle at 10,000 miles (Bowen, 1951). Hale's telescope was located 5,600 feet above sea level on Palomar Mountain, a site selected after considering competing sites in Arizona, Texas, Hawaii, and South America. The entire effort of designing, building, polishing, and assembling the telescope spanned over 20 years and a cataclysmic world war (Fig. 9.1).

Fig. 9.1  The Hale 200″ Palomar telescope – a marvel of science and one of the most beautifully designed instruments of all time. Its 500-ton mass towers over five stories above the observing floor but is so finely balanced that it can be moved with one hand. (Image from https://www.jpl.nasa.gov/images/pia13033-hale-telescope-palomar-observatory)

The dedication of the telescope in 1948 brought over 1000 people to Mount Palomar, including Nobel laureates, scientists, celebrities, and a swarm of photographers and reporters, who all marveled at the spectacle of Hale's vision come to life (Rucker, 1948). The audience gasped as the 1000-ton dome rotated into position and the telescope moved for the crowd (Voller, 2021, p. 169). The legacy of the Hale Palomar 200″ telescope includes mapping galaxies to much greater distances and extending Hubble's velocity-distance relationship out to billions of light years. At such distances, the large-scale "curvature" of the universe becomes measurable, enabling the Hale telescope to help constrain models of the universe by measuring the rate of gravitational deceleration. The Hale telescope also improved our knowledge of stellar evolution through its discoveries of distinct populations of stars with different compositions of elements, arising from the gradual accumulation of new elements over eons of galactic evolution. The locations of these distinct stellar populations also trace the formation of our Milky Way as it collapsed from an enormous cloud of gas billions of years ago and settled into its contemporary
disk shape. The high-resolution spectra of stars made possible by the Palomar telescope allowed Walter Baade and Jesse Greenstein to measure variations in elemental abundances among stars and correlate those abundances with the nuclear physics of stars and supernovae. Such observations from Palomar were crucial for the calculations in the famous "B²FH" paper by Burbidge, Burbidge, Fowler, and Hoyle that helped clarify the nature of nucleosynthesis in stars. The Palomar telescope also provided much more accurate distances to nearby galaxies, such as M31, M33, and NGC 6822, with new corrections for extinction and adjustments for stellar properties due to their heavy element abundances. These corrections significantly improved our estimates of the size and age of the universe, fixing Hubble's systematic errors, which had overestimated the rate of expansion (and hence underestimated the size and age of the universe) by nearly a factor of 10. Using these results, Allan Sandage in 1961 measured the Hubble constant (H0) at 71 ± 7 km/s/Mpc, very close to the modern value determined by the Hubble Space Telescope. The Palomar telescope also expanded the toolkit for astronomers measuring distances to galaxies, giving the "cosmological distance ladder" multiple checks and a greater range. Palomar could detect both Cepheid variables and the much fainter RR Lyrae variable stars in other galaxies. Since RR Lyrae stars could be seen throughout the galaxy, as well as in globular clusters and small galaxies, the Palomar telescope could accurately measure distances to all these objects and thereby provide multiple calibrations of the cosmic distance scale. With the work of Walter Baade and Allan Sandage, Palomar's 200″ telescope provided new observations of Cepheid stars in a variety of stellar regions and populations.
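The velocity-distance relation that Palomar helped extend can be evaluated with a one-line calculation. A minimal sketch using Sandage's value of H0 = 71 km/s/Mpc quoted above (the recession velocity below is an illustrative number, not a measurement from the text):

```python
# Hubble's law: v = H0 * d, so a galaxy's distance is d = v / H0.
H0 = 71.0  # Hubble constant in km/s/Mpc (Sandage's value quoted above)

def hubble_distance_mpc(velocity_km_s: float) -> float:
    """Distance in megaparsecs implied by a recession velocity."""
    return velocity_km_s / H0

# Illustrative example: a galaxy receding at 7,100 km/s
d_mpc = hubble_distance_mpc(7100.0)
print(f"{d_mpc:.0f} Mpc, about {d_mpc * 3.26:.0f} million light years")
```

Since any systematic error in H0 propagates directly into every such distance, Hubble's factor-of-ten error rescaled the entire inferred size and age of the universe.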
The 200″ telescope not only provided redshifts of much fainter galaxies than other telescopes could reach, but also expanded our knowledge of galactic nebulae with more sensitive spectra that were used to constrain the astrophysical properties of star-forming regions, such as their sizes, distances, masses, and temperatures. The Palomar telescope had a transformative impact on our astrophysical knowledge, similar in magnitude to the impact of the Mount Wilson telescopes in the early twentieth century. In addition to groundbreaking new measurements of the universe, Palomar Observatory provided a wonderful harvest of serendipitous discoveries. One of the most consequential was the optical observation of quasars, which began the whole new field of quasar absorption lines and active galactic nuclei. The quasars were initially discovered at radio wavelengths, and Palomar provided the first optical images and spectra, which revealed them to be supermassive black holes near the edge of the observable universe.
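Redshifts like those measured for faint galaxies and quasars follow from a simple comparison of observed and rest wavelengths, z = (λobs − λrest)/λrest. A minimal sketch (486.1 nm is the standard laboratory wavelength of the hydrogen Hβ line; the observed wavelength is an illustrative value corresponding to 3C 273's redshift of about 0.158):

```python
def redshift(lambda_obs_nm: float, lambda_rest_nm: float) -> float:
    """Redshift z = (observed - rest) / rest wavelength."""
    return (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Hydrogen H-beta: 486.1 nm at rest, observed near 563 nm in 3C 273
z = redshift(562.9, 486.1)
print(f"z = {z:.3f}")  # z = 0.158
```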

The name quasar derives from "quasi-stellar radio sources," and their optical counterparts were observed extensively with the Palomar telescope. A brilliant set of observations of quasars during lunar occultations pinpointed their positions and proved their coincidence with the lower-angular-resolution radio sources. Observations of the bright quasar 3C 273 at Palomar included spectroscopy by Maarten Schmidt, who revealed that quasars have extremely high redshifts and therefore lie at vast distances. The relatively bright optical images and very high redshifts implied that quasars had titanic luminosities, fully exceeding the output of our Milky Way yet concentrated into a source about the size of a solar system. This combination of energy density and luminosity, we now know, is only possible from supermassive black holes that generate billions of solar luminosities by absorbing vast amounts of matter. Since quasars provide brilliant beacons of light at the edges of our universe, they also provide a basis for mapping the structure of much of the unseen universe through the absorption of their light by millions of clouds of hydrogen gas between us and the distant quasars (Fig. 9.2). This field of quasar absorption line spectroscopy, pioneered at Palomar Observatory, has enabled astronomers to map the distribution of hydrogen across the universe and chronicle the buildup of the first heavy elements from the formation and death of the first stars in the universe (Sandage, 1999).

Fig. 9.2  Example of a quasar absorption line spectrum. The light from a background quasar intercepts thousands of clouds of gas in its billions of years of travel to our location, and each cloud holds a record of the history of the universe's element formation. The quasars were first observed optically at Palomar Observatory, and Caltech and Palomar astronomers pioneered the techniques of quasar absorption line astrophysics. (Image from Pettini, 2011)

Another serendipitous discovery from Palomar came from mapping the skies with the much smaller 48-inch Schmidt telescope, which produced the Palomar Sky Survey. The Palomar Sky Survey was used worldwide as a baseline image of the sky for finding faint objects, identifying supernovae, and finding new variable sources. The Schmidt telescope helped discover many thousands of supernovae and asteroids and, in more recent years, has expanded our solar system to include new Kuiper Belt objects such as Quaoar and Sedna, the "plutino" Orcus, and the dwarf planet Eris (Leverington, 2017b, p. 14). In recent years, an upgrade of this 48-inch Palomar Schmidt telescope added a vast array of electronic CCD detectors to create the Zwicky Transient Facility (ZTF). The greater efficiency of the CCD cameras enables a complete Palomar sky survey to be conducted weekly, thereby discovering hundreds of variable stars, supernovae, asteroids, comets, and gamma-ray bursts every week. The Zwicky Transient Facility was named after Fritz Zwicky, who was born in Bulgaria to Swiss parents and worked with Robert Millikan at Caltech in the 1930s. Zwicky was eccentric and energetic, and he was prescient in imagining many of the critical features of twenty-first-century astrophysics, such as dark matter, which he proposed in 1933, and supernovae, which he described in 1934 with Mt. Wilson astronomer Walter Baade. Zwicky connected supernovae with the transition of massive stars into neutron stars and with the production of cosmic rays.
The new ZTF, in addition to discovering new asteroids and comets, has been instrumental in finding new supernovae, including over 3000 supernovae since 2013, and has even detected optical flashes from neutron star mergers first identified through gravitational radiation (California Institute of Technology, 2021).

9.2 Telescopes on Earth and in Space

Since the construction of the Palomar telescope, a profusion of telescopes on Earth and in space has extended our view to encompass nearly the entire electromagnetic spectrum, from gamma rays to radio waves. Each wavelength band reveals different kinds of objects: very high-temperature objects appear at the shortest wavelengths, in gamma rays and x-rays; hot stars are visible in the ultraviolet and optical; and cooler stars and the cold clouds of the Milky Way emit at longer infrared and radio
wavelengths. In the infrared, telescopes can detect low-temperature asteroids, comets, and minor planets at the edge of the solar system, cold star-forming clouds in the Milky Way, and the most distant galaxies, whose light has shifted beyond the far red of the optical spectrum. Radio wavelengths provide a view of the coldest objects, such as dark interstellar clouds emitting low-energy photons produced by molecules and atomic hydrogen. Infrared and radio wavelengths can also peer through vast distances without being scattered by dust and gas, giving an unimpeded view of the center of the Milky Way and the central regions of large galaxies. Radio wavelengths can also map the enormous expanses of interstellar and intergalactic gas surrounding galaxies in vast halos. The fundamental physical law governing which wavelengths are emitted by an object as a function of temperature is known as the Wien displacement law, which provides a simple relationship between the wavelength of peak emission λmax and the temperature T of the object. Put simply, hot objects emit at short wavelengths and colder ones at longer wavelengths, with the peak wavelength inversely proportional to temperature. This relationship, shown below, helps determine the best wavelength for observations based on the temperature of the cosmic object under study.


λmax = b / T

where b ≈ 2.898 × 10⁻³ m·K is Wien's displacement constant.

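As a numerical illustration of the displacement law (a minimal sketch; b = 2.898 × 10⁻³ m·K is the standard value of Wien's constant):

```python
# Wien displacement law: lambda_max = b / T
B_WIEN = 2.898e-3  # Wien's displacement constant, in m*K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Wavelength of peak blackbody emission, in nanometers."""
    return B_WIEN / temperature_k * 1e9

for T in (3000, 4000, 5000):
    print(f"T = {T} K -> peak near {peak_wavelength_nm(T):.0f} nm")
# A 3000 K star peaks in the infrared (~966 nm); a 5000 K star near 580 nm
```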
The emission of light from dense, hot sources produces a continuous spectrum with a peak wavelength that shifts toward shorter, bluer wavelengths with increasing temperature. The plots in Fig. 9.3 show the light emitted by objects at 3000K, 4000K, and 5000K, temperatures typical for stars, and how that light falls within the UV, visible, and infrared spectrum. This distribution is often known as a "black body spectrum." It is a result of quantum mechanics; classical theory instead predicts a much higher amount of light at blue and UV wavelengths.

Fig. 9.3  The emission of light from dense, hot sources produces a continuous spectrum whose peak shifts toward shorter, bluer wavelengths with increasing temperature. These plots show the light emitted by objects at 3000K, 4000K, and 5000K, temperatures typical for stars, and how the emitted light falls within the UV, visible, and infrared spectrum. This distribution is often known as a "black body spectrum." (Figure from https://en.wikipedia.org/wiki/File:Black_body.svg)

The other critical physical mechanism for producing astrophysical light is the emission or absorption of light by atoms within stars and gas clouds. Atoms in hot clouds emit light at discrete wavelengths that are a property of their atomic composition, temperature, and density. Each atom or molecule has a distinct set of spectral lines, fingerprints of energy levels set by quantum mechanics. Spectrographs allow these elements and molecules to be identified, and the shifts of spectral lines from their "rest" wavelengths give a measurement of the source's velocity. Absorption lines are
also formed when cooler clouds surround hot sources. These absorption lines provide valuable probes of the dark clouds of hydrogen gas and other star-forming material that are less visible through their own emission. The Earth's atmosphere absorbs and scatters light through its water vapor, dust particles, and molecules of nitrogen and oxygen. The scattering of light by nitrogen and oxygen molecules gives us our blue sky, while absorption by all the atmosphere's constituents prevents ultraviolet and shorter wavelengths, and much of the infrared spectrum, from reaching the ground. To observe those wavelengths, space telescopes such as the Hubble Space Telescope (HST) have been placed above the atmosphere. Three other "Great Observatory" space telescopes were launched along with HST between 1991 and 2003: the Compton gamma-ray telescope, the Chandra X-ray telescope, and the Spitzer Space Telescope for infrared wavelengths (Fig. 9.4). Other telescopes sensitive to even longer wavelengths in the far infrared include the Herschel and Planck telescopes. At microwave wavelengths, space telescopes have been able to view the light from the earliest epochs of the Big Bang; this light, known as the Cosmic Microwave Background Radiation (CMBR), provides vital clues about the early universe and represents the highest redshift visible, arising from light emitted about 380,000 years after the Big Bang. Since the cosmological redshift shifts light farther to the red in proportion to an object's distance, the most distant objects are brighter in infrared than in optical light. Infrared astronomy is therefore able to see the most distant galaxies, as well as undiscovered objects in the cold and dark reaches of the outer solar system. Several new telescopes have been optimized for these wavelengths, including the James Webb Space Telescope, discussed later.

Fig. 9.4  NASA's "Great Observatories" included space telescopes for gamma rays (the Compton Gamma Ray Observatory, or CGRO), x-rays (the Chandra X-ray Observatory), the UV/optical (the Hubble Space Telescope, or HST), and the infrared (the Spitzer Space Telescope, originally SIRTF). (Image from https://upload.wikimedia.org/wikipedia/commons/6/65/Great_Observatories.jpg)

By placing telescopes in space above the Earth's atmosphere, the astronomical seeing and distortions from the air are removed, and the only limit to angular precision comes from the telescope's optics. The theory of diffraction, developed by nineteenth-century physicists and astronomers, showed that the limiting angle of resolution could be found by dividing the wavelength of light by the telescope diameter. For a 1-meter telescope, this "diffraction limit" in optical light gives a resolution of about 0.12 arcsecond. The Hubble Space Telescope's 2.4-meter diameter mirror gives a limiting angular resolution in blue light of about 1/20 of an arcsecond, allowing much more precise observations than would be possible on Earth. This astounding resolution corresponds to viewing an object the size of a softball from 1000 km. Along with the suite of multi-spectral space telescopes, new ground-based telescopes of ever-greater diameter and sophistication have been built in the past decades. Several 30–40-meter telescopes are under construction, fully three times larger than the current world's largest ground-based telescopes. The largest optical telescopes on Earth have already far surpassed Palomar in light-gathering capacity; the current champions include the two Keck 10-meter telescopes at Mauna Kea in Hawaii, the four 8.2-meter telescopes of the Very Large Telescope at Cerro Paranal in northern Chile, the Gemini and Subaru 8-meter telescopes in Hawaii, the dual 8.4-meter mirrors of the Large Binocular Telescope in Arizona, and the pair of 6.5-meter Magellan telescopes at Las Campanas in Chile. The variety of telescopes and their modern instruments enables astronomers to peer into galaxies, nebulae, star clusters, and our own Milky Way, gathering such vast amounts of data that "data-mining" in astronomy has become a research field in its own right. New and even larger telescopes are now being built that extend primary mirror diameters to 30 meters and beyond. These include the Giant Magellan Telescope (with seven 8.4-meter mirrors), the Thirty Meter Telescope (with a segmented 30-meter mirror), and the European Southern Observatory's Extremely Large Telescope in Chile (with a 39-meter segmented mirror), while new survey telescopes like the Vera C. Rubin Observatory in Chile will conduct all-sky surveys far more sensitive than Palomar or ZTF on a daily cadence (Fig. 9.5).
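The diffraction limit discussed above can be computed directly. A sketch using the Rayleigh criterion θ = 1.22 λ/D, a slightly more conservative form than the simple λ/D estimate in the text:

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # about 206,265 arcseconds per radian

def rayleigh_limit_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in arcseconds."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD

print(f"1 m telescope, 550 nm: {rayleigh_limit_arcsec(550e-9, 1.0):.2f} arcsec")
print(f"HST (2.4 m), 450 nm:   {rayleigh_limit_arcsec(450e-9, 2.4):.3f} arcsec")
```

Since the limit scales as 1/D, each step up in aperture, from Palomar's 5 meters to the coming 30-meter class, buys a proportional gain in sharpness (for space telescopes, or ground telescopes with adaptive optics).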

Fig. 9.5  Montage of the world's largest ground-based telescopes, including current telescopes such as the Keck telescopes and the upcoming thirty-meter-class telescopes such as the ELT, TMT, and GMT, which are being built in Chile and Hawaii. (Image from https://upload.wikimedia.org/wikipedia/commons/5/59/Size_comparison_between_the_E-ELT_and_other_telescope_domes.jpg)


Rather than documenting each of these telescopes and their discoveries, we can take just two current telescopes, the GAIA and JWST space telescopes, as helpful case studies of the transformative advances of new technology in astronomy and their impact on our models of the Milky Way and our perspective on space and time. The two most important properties of telescopes, angular resolution and light-gathering capacity, are well illustrated by these two space telescopes. GAIA is designed to optimize angular resolution, while JWST is designed to maximize light gathering with its 6.5-meter segmented primary mirror. Both telescopes are just beginning to profoundly impact our models of space and time.
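The light-gathering comparison is straightforward, since collecting power scales with mirror area. A simple sketch treating both primaries as filled circular apertures (ignoring JWST's hexagonal segmentation and both telescopes' central obstructions):

```python
import math

def collecting_area_m2(diameter_m: float) -> float:
    """Area of a circular aperture with the given diameter."""
    return math.pi * (diameter_m / 2.0) ** 2

# JWST's 6.5 m segmented primary vs. HST's 2.4 m mirror
ratio = collecting_area_m2(6.5) / collecting_area_m2(2.4)
print(f"JWST collects roughly {ratio:.1f} times the light of HST")
```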

9.3 GAIA Maps of the Milky Way and Astrometry

Angular resolution was optimized by the refracting telescopes of the nineteenth century to measure parallax and stellar aberration, enabling astronomers like Bessel and Fraunhofer to measure star positions to within a fraction of an arcsecond. The limit of visual observations at the best observing sites in the world, as measured by "seeing," is typically in the range of 0.4–0.7 arcseconds. The first measurements of stellar parallax, demonstrated by Struve in 1837 for Vega and by Bessel in 1838 for 61 Cygni, used exacting repeated micrometer measurements to discern parallax wobbles of 0.125 and 0.31 arcseconds, respectively. Later techniques using photographic plates allowed hundreds of stars on each plate to be measured with a microscope to derive parallax values. With this photographic technique, astronomers at Yerkes and Lick Observatories could measure parallax for stars at distances of up to 100 parsecs, pushing the precision of angular measurement close to 0.01 arcseconds. This allowed astronomers in the twentieth century to derive accurate distances to nearby stars, the fundamental basis of our cosmic distance ladder and a necessary step for calibrating the physical properties of stars. This technique of measuring star positions is known as "astrometry" and has been an essential part of astronomy for over a century. Astrometry using photographic plates was very time-consuming and inefficient, however, and was limited in accuracy by blurring from Earth's atmosphere. To provide a larger dataset and greatly improve the efficiency of the process, space-based astrometry was developed, which allowed the positions of
thousands of stars to be determined without any interference from Earth's atmosphere. Two spacecraft specifically optimized for measuring parallax and stellar motions were launched, with the goal of extending parallax measurements across the galaxy and enabling three-dimensional maps of the Milky Way. The first of these satellites, launched in 1989, was named Hipparcos in honor of the Greek astronomer Hipparchus, who constructed the first star catalog. Hipparcos measured the parallax of 118,200 stars to a precision of 0.001 arcseconds and extended our maps of the galaxy out beyond 1000 pc, to our next spiral arm (Howell, 2022). The successor spacecraft, GAIA, was launched in December 2013 to achieve 200 times more accurate resolution than Hipparcos and to provide measurements of more than two billion stars. The GAIA spacecraft can measure the positions of stars with an astounding angular precision of 24 micro-arcseconds, comparable to measuring the diameter of a human hair at 1000 km. This precision enables nearby star distances to be determined to within 0.1% and provides 10% accuracy for distances of stars at the edge of our Milky Way galaxy, 30,000 light years away, extending our three-dimensional map of the galaxy to the galactic center and beyond. In addition, two other instruments on GAIA measure spectra of the stars, giving access to their radial velocities, while also measuring their precise brightnesses. GAIA provides repeat measurements of its targets more than a dozen times a year, and its target list includes billions of nearby stars and 500,000 distant quasars. The GAIA spacecraft does not even look like a conventional space telescope: it contains a ring of sensors that gain incredible precision by repeatedly scanning a ring of sky and averaging the measured positions over many repeat observations (Fig. 9.6).
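The parallax-distance relation behind these missions is simply d(parsecs) = 1/p(arcseconds), and the fractional distance error is roughly the fractional parallax error. A sketch with illustrative numbers, using GAIA's quoted 24 micro-arcsecond precision:

```python
def distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from a parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

# Bessel's 61 Cygni, p = 0.31 arcsec:
print(f"61 Cygni: {distance_pc(0.31):.1f} pc")

# A star at 100 pc has p = 0.01 arcsec; with a 24 microarcsecond
# measurement uncertainty, the fractional distance error is ~sigma_p / p:
sigma_p, p = 24e-6, 0.01
print(f"Distance error at 100 pc: ~{100 * sigma_p / p:.2f}%")
```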
The accuracy of GAIA’s measured star distances has enabled the discovery of distinct star streams moving through the galaxy at various rates, giving clues to the early history of the Milky Way galaxy, a study some have labeled as “galactic archaeology.” GAIA has discovered streams of stars that are leftovers from collisions with other galaxies in the early days of the Milky Way. One discovery shows that the Milky Way collided with another smaller galaxy, about one-fourth the Milky Way’s size, about 10 billion years ago. This proto-galaxy is known as GAIA Enceladus, and many of its stars can be found in the Milky Way Halo – a group of stars orbiting our galaxy in a roughly spherical pattern. Additional discoveries of GAIA include detecting extremely fast-moving stars that may be scattered away from black holes in other galaxies (European Space Agency, 2022). Another achievement of GAIA is to map the disk of our galaxy in unprecedented detail, locating the exact positions of the different clumps of


B. E. Penprase

Fig. 9.6  The GAIA spacecraft, which has mapped the positions and distances of over 1 billion stars to unprecedented precision and enabled new models of the space within our Milky Way galaxy with accurate distances and velocities of the stars. (Image courtesy of ESO)

gas and mapping hundreds of star-forming regions that appear in this part of our cosmic ocean. Maps of star-forming clouds have revealed their filamentary structure, and their locations can be used to better understand the dynamics that trigger star formation in our galaxy. GAIA has also revolutionized our knowledge of star clusters, groups of stars that formed together in earlier epochs of star formation. By locating and tracking the motions of star clusters more precisely, GAIA can constrain the history of star formation in the galaxy and, in the process, pin down the ages of stars and their evolution. Some of the GAIA observations have also shown that young star clusters are expanding, which helps constrain the process by which stars are "released" into the galaxy as they drift away from the clusters where they formed. GAIA can also revisit the distances and evolution of the globular star clusters orbiting our Milky Way, with an angular precision that has provided results that Harlow Shapley and his fellow early twentieth-century astronomers could only dream of. For example, the distances to more than 20 globular clusters, ranging from 2400 pc to 24,000 pc, have recently been recomputed using a combination of GAIA and HST data, yielding measured distances with an accuracy of 1% or better (Baumgardt and Vasiliev, 2021).

9  Mapping Space to the Edge of the Observable Universe 


GAIA's parallax accuracy is astounding, and yet even higher precision is possible using radio telescopes linked together. Radio telescopes connected in this way have an effective diameter as large as the Earth itself, giving incredibly accurate measurements of position and parallax. Using the Very Long Baseline Array, or VLBA, astronomers have been measuring objects across the Milky Way on the "dark side" past the galactic center. In one recent report, astronomers provided a parallax for a star-forming region known as G007.47+00.05, which is 66,000 light years away. This radio-derived parallax has an accuracy of 50 micro-arcseconds and is unaffected by the large amount of interstellar dust and "extinction" between us and the other side of the galaxy. Similar feats of high angular resolution have been performed using an Earth-sized radio telescope to take images of the central black hole in the galaxy M87, providing the first direct image of a black hole in the center of a galaxy 55 million light years away. The M87 black hole is over 6 billion solar masses and extends 38 billion km across, about the size of our solar system (250 Astronomical Units) (Figs. 9.7, 9.8, and 9.9).
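The resolving power of an Earth-sized array follows from the textbook diffraction limit, θ ≈ 1.22 λ/D. A rough sketch (the wavelength and aperture below are round numbers of our choosing, in the spirit of millimeter-wave interferometry, not values from the observations themselves):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206,265 arcseconds per radian

def diffraction_limit_uas(wavelength_m, aperture_m):
    """Rayleigh criterion theta ~ 1.22 * lambda / D, in microarcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD * 1e6

# An Earth-sized aperture observing at a 1.3 mm radio wavelength:
print(diffraction_limit_uas(1.3e-3, 1.274e7))   # ~26 microarcseconds
```

This is why a planet-wide baseline, despite its sparse collecting area, can resolve features tens of microarcseconds across that no single dish could ever separate.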

Fig. 9.7  Image of the M87 black hole, taken with the Event Horizon Telescope, a planet-wide network of radio telescopes that can be combined to produce the highest angular resolution to date, resolving features within the galaxy as small as 50 microarcseconds. (Image from https://en.wikipedia.org/wiki/File:Black_hole_-_Messier_87_crop_max_res.jpg)


Fig. 9.8  Image of the Milky Way galaxy based on star densities within the immense GAIA stellar database. The image is constructed from the locations of measured stars and is not a direct image of the sky. Still, the completeness of the database reconstructs the appearance of the Milky Way and Magellanic Clouds precisely. (Image from ESA/GAIA/DPAC)

9.4 The James Webb Space Telescope

The ultimate combination of light-gathering power and high angular resolution is embodied in the newest space telescope, the James Webb Space Telescope, or JWST. JWST combines many of the best features of the Spitzer Space Telescope, which was NASA's Great Observatory for infrared observations; the Hubble Space Telescope, which rewrote astronomy textbooks with its stunning high-resolution images of the cosmos; and the Keck telescope, which pioneered a new segmented mirror technology to achieve the largest light-gathering capability of any ground-based telescope when it began operations in 1993. JWST is the culmination of decades of experience and brings together the talents of NASA, the European Space Agency (ESA), and the Canadian Space Agency, with collaborations spanning over 300 universities, 29 US states, and 14 countries (Howell, 2018). JWST is optimized for the infrared, which allows it to see high-redshift galaxies, cooler planets, and star-forming regions, and also to be free of most of the interstellar scattering that blocks the view of the galaxy at optical wavelengths. To detect faint infrared light, the telescope must be extremely


Fig. 9.9  GAIA's three-dimensional dataset has measured the distances to the stars, allowing us to get an "aerial view" of the Milky Way galaxy. This image is a map of the galaxy from GAIA, taken as if we were in a spacecraft hovering several thousand light years above the Sun's location. The locations of nearby stars and star-forming regions are indicated, as is the position of the local spiral arm of our galaxy. (Image courtesy of ESA from https://www.cosmos.esa.int/web/gaia/iow_20180614)

cold, operating at just 40 degrees above absolute zero. Its instruments were specially designed to detect infrared light, based on decades of experience building and using such instruments on HST and the Spitzer Space Telescope. JWST's location at the L2 Lagrange point, a point of gravitational stability 1.5 million kilometers from Earth, enables it to view the skies free from Earth's interference. JWST is free from the Earth's background radiation, which allows it to be cooler and darker than HST. To keep JWST's instruments dark and cold, it is protected by a heat shield consisting of a stack of five thin aluminum-coated Kapton sheets the size of a tennis court that reflect any incident sunlight. The sunshield is a technological marvel: each layer is only 0.05 millimeters thick and coated


Fig. 9.10  The James Webb Space Telescope provides an effective 6.5-meter mirror diameter from a composite of 18 gold-coated segments. The mirror is protected from stray light and heat by the five layers of its heat shield, which span an area comparable in size to a tennis court. (Image from https://esahubble.org/images/jwst_in_space-cc/)

with aluminum metal and silicon materials to create the highest possible reflectivity and insulation (Goddard Space Flight Center, 2022) (Fig. 9.10). The JWST mirror provides an effective 6.5-meter diameter, dwarfing HST's 2.4-meter mirror. The JWST mirror is built from 18 gold-coated hexagonal segments, a technology pioneered with the Keck telescope. Unlike Keck, the JWST mirror had to be folded into a compact bundle for launch and then flawlessly unfolded 1.5 million kilometers away, all under its own control. Its suite of instruments includes cameras and spectrographs that operate from the red part of the optical spectrum (at about 600 nm, or 0.6-micron wavelength) all the way to the middle of the infrared band of the spectrum (28 microns). Since its launch on December 25, 2021, JWST has parked itself at the Sun-Earth L2 Lagrange point and perfectly unfolded its optics. Its first images were released soon afterward, on July 12, 2022. These images


surpassed all previous telescopes in their stunning detail and have amazed astronomers with their razor-sharp views of planets, stars, galaxies, nebulae, and the larger universe. Just as Galileo turned his new telescope on the planet Jupiter, JWST viewed Jupiter early in its operation, in August 2022. The views of Jupiter through JWST were as stunning to astronomers in 2022 as they were to Galileo. The JWST images, unlike images from Galileo's telescope or HST, are taken in the infrared, at wavelengths longer than we can see with our eyes. The infrared light arises deep within Jupiter's atmosphere, allowing JWST to peer deeper into Jupiter than an optical telescope. The view from JWST's near-infrared camera (NIRCam) picks up deeper and finer detail in the many bands of clouds on the planet, as well as hot spots from Jupiter's aurorae and locations in the disk of Jupiter glowing in the infrared from its many violent storms. The Great Red Spot of Jupiter in Fig. 9.11 appears white, and the mix of colors from the many other storms in Jupiter's cloud decks reflects differing amounts of haze, reflected light, and energy. The zoomed-out JWST image in Fig. 9.11 also shows the faint ring around Jupiter and two of its lesser-known moons, Amalthea (discovered by Barnard in 1892 at Lick Observatory) and Adrastea (discovered by the Voyager spacecraft in 1979).
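Why the infrared reveals this thermal detail can be sketched with Wien's displacement law, λ_peak = b/T with b ≈ 2898 µm·K: cool objects radiate most strongly at long wavelengths. The temperatures below are illustrative round numbers of ours, not values from the text:

```python
WIEN_B_UM_K = 2897.8   # Wien displacement constant, micrometer-kelvins

def peak_wavelength_um(temperature_k):
    """Blackbody peak emission wavelength from Wien's displacement law."""
    return WIEN_B_UM_K / temperature_k

print(peak_wavelength_um(5778))   # the Sun: ~0.5 micron, visible light
print(peak_wavelength_um(130))    # ~130 K cloud tops: ~22 microns, inside JWST's band
print(peak_wavelength_um(40))     # JWST's own 40 K optics: ~72 microns, past its 28-micron cutoff
```

The last line also hints at why 40 K suffices for the telescope itself: its own thermal glow peaks well beyond the longest wavelength its instruments record.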

Fig. 9.11  JWST infrared images of Jupiter. Like Galileo's view of the planet with his new telescope, the JWST team selected Jupiter as one of the first targets for viewing. The infrared view provides stunning detail on Jupiter's many storm systems and intense glowing from lightning and aurorae on the planet. The left panel shows the disk of Jupiter, while the right panel shows Jupiter along with two of its fainter moons and its ring system. (Images from https://blogs.nasa.gov/webb/2022/08/22/webbs-jupiter-images-showcase-auroras-hazes/ and https://blogs.nasa.gov/webb/wp-content/uploads/sites/326/2022/08/JWST_2022-07-27_Jupiter.png)


The JWST image also shows a pair of aurorae at Jupiter's poles. Jupiter's aurorae, like Earth's northern lights, come from atmospheric ions excited by charged particles from the Sun, driven to the polar regions by magnetic fields. Jupiter has the strongest magnetic field of any planet in our solar system, some 20,000 times stronger than Earth's magnetic field. These magnetic fields also bring in ions from Jupiter's moon Io, forming a visible "footprint" in the image. As Io itself was discovered in 1610 by Galileo, these JWST images bring us full circle in the history of telescopic astronomy! One of the spectacular first images from JWST was of a group of relatively local galaxies known as Stephan's Quintet, shown below in Fig. 9.12. Stephan's Quintet is a group of four interacting galaxies (NGC 7319, 7317, 7318a, and 7318b) and one foreground galaxy (NGC 7320). The image shows all five together in the sky, and with JWST's infrared cameras, glowing regions of red indicate the heat from star-forming clouds of interstellar gas. Galaxies often "interact," meaning that their mutual gravity can cause them not only to orbit each other but also to disturb the gas in their disks and trigger star formation during close approaches. One orbit of interacting galaxies can take hundreds of millions of years. In many cases, the

Fig. 9.12  JWST image of Stephan's Quintet, a relatively nearby compact group of interacting galaxies approximately 290 million light years away. (Image from https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-sheds-light-on-galaxy-evolution-black-holes)


interactions end when the galaxies spiral together and form a composite or merged galaxy. The two merging galaxies in the Stephan's Quintet image, NGC 7318a and NGC 7318b, are visible in the middle of the image, and their pair of nuclei look something like a pair of egg yolks in a pan. These two galaxies are well on their way to merging, and if we could return in a billion years, we would see Stephan's Quintet as a quartet, as at least two of the galaxies would have merged by then. While these galaxies may seem impossibly far away, they are quite close by cosmic standards. The nearest galaxy, NGC 7320, is 40 million light-years away; it is the foreground galaxy that only appears to be part of the group. This galaxy appears in the middle left of the image, at about the 10 o'clock position. The galaxy at the top of the image is NGC 7319, which, like the other interacting Stephan's Quintet galaxies, is 290 million light years away. NGC 7319 contains a massive black hole of 24 million solar masses, which is not too unusual for disk galaxies (our own Milky Way has a black hole in its center of about 4 million solar masses). At the bottom of the image is the elliptical galaxy NGC 7317, which lacks the red glowing star-forming gas and, like most elliptical galaxies, is not actively forming stars. Some recent studies have suggested NGC 7317 contributed stars to the merging system and may itself have a black hole. The two interacting galaxies in the center are in the later stages of merging. The glowing red material surrounding these galaxies in the image is probably interstellar gas that was "stripped" from the galaxies by their intense gravitational interaction.
As another illustration of the capabilities of this amazing new telescope, consider one of the first images taken with JWST: the deep image of the galaxy cluster SMACS 0723, now known as JWST's "First Deep Field." It is the deepest infrared image yet taken by any telescope. It can peer through the entire expanse of the visible universe to reveal galaxies over 13 billion light years away. The image was so impressive that it was the subject of a press conference event on July 11, 2022, at the US White House hosted by US President Joseph Biden (Fig. 9.13). Within the JWST deep field image are thousands of galaxies, all in an area of sky less than the size of a grain of sand held at arm's length. Like the Hubble Deep Field image, JWST's deep field reveals thousands of galaxies in what were previously considered blank regions of the sky. Within the image are traces of "gravitational lenses" that trace the gravitational bending of spacetime by unseen dark matter and supermassive black holes. These arcs of light arise from even more distant galaxies that are "lensed" by the foreground cluster of galaxies. Unlike the HST deep field images, which took over a week of exposure, the JWST image took just 12.3 hours and revealed every bit as much detail. More


Fig. 9.13  The SMACS 0723 galaxy cluster as viewed by the JWST. This long exposure is known as the JWST's "First Deep Field" and reveals thousands of galaxies, all within a region of the sky the size of a grain of sand held at arm's length. (https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-delivers-deepest-infrared-image-of-universe-yet)

amazing still is that JWST can simultaneously take this image and spectra of several of the objects within it. Since JWST views the galaxies in the infrared, its sensitivity is optimized for the extremely redshifted light from the most distant galaxies. The JWST deep field provides a wealth of information that could happily occupy an astronomer for an entire career to interpret fully. Teams of astronomers around the world are only just beginning to analyze the data, studying the thousands of galaxies in the image and interpreting the sources of the gravitational lenses, which arise from visible matter in the form of stars, vast amounts of dark matter, and hidden black holes. The JWST image includes infant galaxies 12–13 billion light years away at the extreme range of its vision, as well as nearer foreground galaxies in their mid-teenage years, and even a smattering of nearby dwarf galaxies and many foreground stars from the Milky Way, which happen to "photo bomb" this image and show up in dazzling detail. Since JWST samples space from across the entire visible universe, the JWST deep field image contains within it much of the history of the universe. In its foreground, the image can reveal thousands


of stars in the distant halo of our galaxy, many of them arising from earlier epochs of the galaxy's history, when it collided and merged with other galaxies in the early universe. The galaxy cluster at the center of the JWST deep field image, SMACS 0723, is typical of a large galaxy cluster. Galaxies, like stars, are grouped into larger assemblages held together by their gravity. Our Milky Way galaxy is part of the "Local Group" of galaxies, a local cluster of about 50 galaxies, among which the Milky Way, the Andromeda galaxy, M33, and the Magellanic Clouds are the largest. The nearest large cluster of galaxies is the Virgo cluster, which is about 50 million light-years away and features nearly 2000 galaxies. Virgo cluster galaxies have velocities ranging from about 700 to 2500 km s−1, which places them at redshifts between z = 0.002 and z = 0.008. For small redshifts, z is simply the fraction of the speed of light at which a galaxy appears to be receding. For example, a median Virgo cluster galaxy with a recession velocity of about 1000 km s−1 would be at a redshift of 0.003, meaning it is moving away from us at about 0.3% of the speed of light (NASA Extragalactic Database, 2022). The SMACS 0723 cluster is considerably farther away than the Virgo cluster and yet lies at only an intermediate distance within the vast range of distances in the JWST deep field. The SMACS 0723 cluster redshift is z = 0.39, which places it at about 3.8 billion light years away, based on the light travel time. The light from this cluster is redshifted enough that the cluster appears to recede at roughly 40 percent of the speed of light. Since the light was released billions of years ago, the galaxy light is a time capsule that provides a picture of the cluster from 3.8 billion years ago. At that time, our Sun and Earth had only recently formed, and the universe was about 10 billion years old.
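The small-redshift conversion used above takes only a couple of lines; note that z ≈ v/c is the non-relativistic approximation, valid only when z ≪ 1 (the Virgo velocities are the ones quoted in the text):

```python
C_KM_S = 299_792.458   # speed of light, km/s

def redshift_from_velocity(v_km_s):
    """Non-relativistic approximation z ~ v/c, valid only for z << 1."""
    return v_km_s / C_KM_S

print(redshift_from_velocity(700))    # ~0.0023, inner edge of the Virgo range
print(redshift_from_velocity(2500))   # ~0.0083, outer edge of the Virgo range
print(redshift_from_velocity(1000))   # ~0.0033, a median Virgo galaxy
```

At redshifts like SMACS 0723's z = 0.39, this linear rule is already breaking down, which is why "about 40 percent of the speed of light" should be read as an apparent, not a literal, velocity.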
An initial study of the SMACS 0723 cluster in the JWST deep field identified 130 galaxies as "members" of the cluster. These 130 galaxies are bound together by their mutual gravitational attraction, which also involves a vast amount of dark matter that can be measured from the galaxies' motions and the resulting gravitational lenses. The combined mass of the SMACS 0723 cluster, based on numerical models, appears to be in the range of 400–800 trillion solar masses, equivalent to roughly 400–800 galaxies of the Milky Way's mass. This estimate takes the mass of the Milky Way galaxy, including dark matter, to be close to 1 trillion solar masses (even though the Milky Way is only thought to contain about 200 billion stars) (Yirka and Phys.org, 2019). The discrepancy between the measured mass of the SMACS 0723 cluster and the stellar mass of its 130 galaxies suggests vast amounts of dark matter within the cluster (Golubchik et al., 2022).
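The mass bookkeeping in this paragraph can be checked directly with the numbers given; the equal-mass, one-solar-mass-per-star treatment of the member galaxies below is our deliberate oversimplification, purely for illustration:

```python
# All figures taken from the text:
MILKY_WAY_TOTAL_MASS = 1.0e12   # solar masses, including dark matter
MILKY_WAY_STAR_COUNT = 2.0e11   # ~200 billion stars

cluster_mass_low, cluster_mass_high = 4.0e14, 8.0e14   # SMACS 0723 model range

# The cluster mass expressed in "Milky Way equivalents":
print(cluster_mass_low / MILKY_WAY_TOTAL_MASS,
      cluster_mass_high / MILKY_WAY_TOTAL_MASS)   # 400.0 800.0

# If the 130 member galaxies held only their stars (~1 solar mass each),
# the stellar budget would cover well under a tenth of the total mass:
stellar_budget = 130 * MILKY_WAY_STAR_COUNT
print(stellar_budget / cluster_mass_low)          # 0.065
```

The shortfall between the stellar budget and the lensing-derived mass is exactly the discrepancy the text attributes to dark matter.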


The immense mass of the SMACS 0723 cluster creates a vast distortion of space and time, providing gravitational "lenses" that boost the light of galaxies much farther away than the cluster itself. Within a few weeks of the release of the JWST deep field image, scientists worldwide were studying the arcs of light within it for clues about the nature of the lensed background galaxies. The image contains several galaxies at higher redshifts of about 1.38, which also appear lumpy in a way that might reveal substructures, perhaps globular clusters inside the galaxy. One of the lensed background galaxies emitted its light about 9 billion years ago, well before the Sun existed and while our own Milky Way galaxy was still forming, in the early universe just 4.6 billion years after the Big Bang (Pascale et al., 2022). The astounding fact about the JWST deep field image is how it captures light from galaxies even farther away, at the farthest visible parts of our universe. Within days of the release, several teams of astronomers searched the images for telltale signs of "dropout" galaxies, whose light is shifted so far into the red that they disappear at the shorter wavelengths of the JWST cameras. The image contains at least four galaxies in the redshift range 6.3–8.5, and additional galaxies at even higher redshifts, with four at redshifts z > 9 and one at a redshift of z = 11.5 (Trussler et al., 2022). If those galaxies are confirmed, they would be the highest redshift galaxies ever observed, and the z = 11.5 galaxy would be viewed at 13.32 billion light years away, meaning its light has been traveling for 97% of the age of the universe, since the universe was just 400 million years old! (Pascale et al., 2022) (Fig. 9.14).
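The ages and lookback times quoted here come from integrating a cosmological model. A minimal sketch of that calculation for a flat ΛCDM universe, using the H0 = 67.8 km/s/Mpc, Ωm = 0.308 parameters that appear elsewhere in this chapter (the numerical scheme and function names are ours):

```python
import math

def age_at_redshift_gyr(z, h0=67.8, omega_m=0.308, omega_l=0.692, steps=20000):
    """Cosmic age at redshift z (Gyr) in a flat LambdaCDM model.

    Integrates t = Integral[0, a(z)] da / (a * H(a)) with a midpoint rule,
    where H(a) = H0 * sqrt(omega_m / a**3 + omega_l).
    """
    hubble_time_gyr = 977.8 / h0      # 1/H0 in Gyr when H0 is in km/s/Mpc
    a_end = 1.0 / (1.0 + z)           # scale factor at redshift z
    da = a_end / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * da
        total += da / (a * math.sqrt(omega_m / a**3 + omega_l))
    return hubble_time_gyr * total

t_now = age_at_redshift_gyr(0.0)      # ~13.8 Gyr: the present age of the universe
t_emit = age_at_redshift_gyr(11.5)    # ~0.4 Gyr: the age when a z = 11.5 galaxy shone
print(t_now, t_emit, t_now - t_emit)  # the last number is the lookback time
```

Running this recovers the text's headline numbers: a z = 11.5 galaxy is seen when the universe was roughly 400 million years old, with its light in transit for nearly the whole cosmic age since.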
The combination of the ultra-precise angular measurements possible with missions such as GAIA and the extreme light-gathering power of telescopes like JWST gives us the capacity to map any part of the sky with exquisite angular resolution and sensitivity. As these telescopes continue to refine our maps of the sky and our measurements of the structure of the universe, our models of time and space will continue to improve, based on data acquired from across the entire visible horizon of the universe.


Fig. 9.14  Detail within the JWST SMACS 0723 "First Deep Field" image showing a high redshift galaxy at a distance of 13.1 billion light years. The light from this galaxy is being viewed as it was emitted 13.1 billion years ago, when the universe was only 600 million years old. The red arcs of light in the lower panel result from gravitational lensing, which brings faint light from behind the galaxy cluster to a focus due to distortions in spacetime. (Image from https://webbtelescope.org/contents/media/images/2022/035/01G7HRYVGM1TKW556NVJ1BHPDZ?Category=02-cosmology&Collection=First%20Images)


References

Baumgardt, H., & Vasiliev, E. (2021). Accurate distances to Galactic globular clusters through a combination of Gaia EDR3, HST, and literature data. Monthly Notices of the Royal Astronomical Society, 505(4), 5957–5977. https://doi.org/10.1093/mnras/stab1474
Bowen, I. S. (1951). The Palomar Observatory. The Scientific Monthly, 73(3), 141–149. https://www.jstor.org/stable/20548
California Institute of Technology. (2021). The 200-inch Hale Telescope. https://sites.astro.caltech.edu/palomar/about/telescopes/hale.html
European Space Agency. (2022). Gaia overview. https://www.esa.int/Science_Exploration/Space_Science/Gaia_overview
Goddard Space Flight Center. (2022). The Sunshield – Webb/NASA. https://webb.nasa.gov/content/observatory/sunshield.html
Golubchik, M., Furtak, L., Meena, A., & Zitrin, A. (2022). HST strong-lensing model for the first JWST galaxy cluster SMACS J0723.3-7327. https://arxiv.org/pdf/2207.05007.pdf
Hale, G. E. (1928). The possibilities of large telescopes. Harper's Magazine, April 1928.
Howell, E. (2018, July 17). NASA's James Webb Space Telescope: Hubble's cosmic successor. Space.com. https://www.space.com/21925-james-webb-space-telescope-jwst.html
Howell, E. (2022, June 13). Gaia: Mapping a billion stars. Space.com. https://www.space.com/41312-gaia-mission.html
Leverington, D. (2017a). Observatories and telescopes of modern times: Ground-based optical and radio astronomy facilities since 1945 (p. 9). Cambridge University Press.
Leverington, D. (2017b). Observatories and telescopes of modern times: Ground-based optical and radio astronomy facilities since 1945 (p. 14). Cambridge University Press.
NASA Extragalactic Database. (2022). By name | NASA/IPAC Extragalactic Database. https://ned.ipac.caltech.edu/byname?objname=Stephan%27s+Quintet&hconst=67.8&omegam=0.308&omegav=0.692&wmap=4&corr_z=1
Pascale, M., et al. (2022). Unscrambling the lensed galaxies in JWST images behind SMACS0723. https://arxiv.org/pdf/2207.07102.pdf
Pettini, M. (2011). The first stars: Clues from quasar absorption systems. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 467(2134), 2735–2751. https://doi.org/10.1098/rspa.2011.0117
Rucker, E. (1948, June 4). Huge telescope dedicated at Palomar rites. San Diego Union-Tribune. https://www.sandiegouniontribune.com/news/150-years/sd-me-150-years-june-4-htmlstory.html
Sandage, A. (1999). The first 50 years at Palomar: 1949–1999. The early years of stellar evolution, cosmology, and high-energy astrophysics. Annual Review of Astronomy and Astrophysics, 37(1), 445–486. https://doi.org/10.1146/annurev.astro.37.1.445
Trussler, J., et al. (2022). Seeing sharper and deeper: JWST's first glimpse of the photometric and spectroscopic properties of galaxies in the epoch of reionisation. https://arxiv.org/pdf/2207.14265.pdf
Voller, R. (2021). Hubble, Humason and the Big Bang (p. 169). Springer.
Yirka, B., & Phys.org. (2019, December 13). Researchers estimate the mass of the Milky Way to be 890 billion times that of our sun. Phys.org. https://phys.org/news/2019-12-mass-milky-billion-sun.html

10 Our Light Sphere and Horizon

The outgoing light cone departing from Earth into the future is sometimes called our "light sphere," representing the symphony of electromagnetic radiation and signals we send to the surrounding stars. One of the most local parts of our light sphere is the ring of orbiting satellites around our planet that provides nearly instant communications and positioning through the GPS satellite network. Rushing into deep space are the electromagnetic waves of human technology, radio, television, and cell phone signals, filling our light sphere. The vast array of telescopes built to explore the universe is bound in its explorations by the beams of light that impinge on Earth from the outer universe, giving us a view of distant space and the ancient past. This past light cone intercepts a narrow subset of the universe, and beyond it may lie an infinity of space and time, or perhaps even alternate universes or "multiverses." The "edge" of the observable universe is an object of fascination for our time, no less than the edges of the explored parts of the Earth were for explorers in centuries past. This limit to our view is known as our "horizon," just as in the time of ancient navigators.

10.1 GPS Satellites and Modern Navigation

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_10

Our light sphere begins with the satellites that orbit our planet, which numbered 4852 active satellites by one recent count (Salas, 2019). Signals from these satellites wash over our world and provide entertainment, including our voice conversations and broadcast television, as well as billions of transmissions


helping guide our global economy. These signals also spill out into outer space and form the most recent part of the electromagnetic radiation in our light sphere. The practical implications of the finite speed of light and general relativity abound in the world of GPS satellites and space communications. Every time you check your phone for GPS, it receives a set of signals from space and uses the arrival times of the signals from a network of satellites to triangulate your position. GPS technology relies on a constellation of at least 24 satellites in a specially configured set of orbits that allows for simultaneous viewing of at least four satellites to compute your position. The satellites are in medium Earth orbit, at about 20,000 km, which also means that the effects of general relativity must be included to provide the precise signals needed for navigation. Each GPS satellite is equipped with multiple atomic clocks that measure time to within 100 billionths of a second. By comparing the precise timing of signals received from multiple satellites, your cellphone or GPS receiver can locate your position on the Earth to within a few meters. With more advanced receivers, one can determine a location on the Earth to within centimeters or even millimeters (GPS.gov, 2019). GPS's precision allows scientists to measure subtle geologic changes, such as shifts in the Earth from plate motions, volcanoes, and earthquakes. To accomplish this precision, the GPS receiver needs to measure the timing of signals traveling between the satellites and your location. These signals take less than 0.1 s to make the trip from the satellite to your location on Earth. Given that the speed of light is 3 × 10⁸ meters per second, to achieve a precision of millimeters, the GPS system needs to measure signal timing to about three trillionths of a second.
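The timing requirement above is just the speed of light doing the unit conversion: a clock error Δt corresponds to a ranging error c·Δt. A quick sketch (function names are ours, for illustration):

```python
C_M_PER_S = 299_792_458.0   # speed of light, m/s

def position_error_m(timing_error_s):
    """A clock error maps directly onto a ranging error: dx = c * dt."""
    return C_M_PER_S * timing_error_s

def timing_needed_s(target_error_m):
    """Timing precision required to reach a target position precision."""
    return target_error_m / C_M_PER_S

print(position_error_m(1e-9))    # one nanosecond of clock error -> ~0.3 m
print(timing_needed_s(0.001))    # millimeter positioning -> ~3.3e-12 s
```

The second result is the "three trillionths of a second" figure: millimeter-level geodesy demands picosecond-level timing.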
Because time runs more slowly on the Earth's surface than in space, GPS satellite clocks must be tuned to run at a slightly lower frequency to match the signals on the ground: a frequency of 10.22999999543 MHz is set on Earth so that the clock runs at 10.23 MHz in orbit. The resulting shift in time between Earth and the GPS satellites amounts to about 38 microseconds per day (Fig. 10.1). So if you are on Earth, you age more slowly than you would in space, thanks to general relativity! This GPS navigation system works when you are on Earth, but how can spacecraft know where they are? What could they use for GPS? Believe it or not, NASA is working on it. Scientists have proposed using pulsars as an extraterrestrial GPS network. In one example, scientists at the US Naval Research Laboratory and the Goddard Space Flight Center are developing a prototype X-ray pulsar "GPS" system for interplanetary spacecraft. Analogous to early seafarers using the stars for navigating across the ocean


Fig. 10.1  Time corrections for satellites as a function of height above Earth, from general relativity ("gravity speedup") and from special relativity, in units of picoseconds, or one trillionth of a second. Time in space elapses slightly faster due to the combined effects of general and special relativity. (Image from https://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System#/media/File:Orbit_times.svg)

when terrestrial landmarks were no longer available, scientists are considering pulsars as alternative beacons that can serve as GPS satellites for spacecraft that leave the "shore" of near-Earth space. In the absence of GPS satellites, pulsars make a good substitute: their radio and X-ray pulses are predictable to within nanoseconds. Any shift in the arrival times of these pulses signals that the spacecraft has moved toward or away from the pulsar. Just as Roemer noted that the timing of the eclipses of Jupiter's moons shifted as the Earth moved around the Sun, these scientists realized that shifts in the timing of pulsars could be used to great effect to navigate between the planets. The spacecraft would need a small array of X-ray concentrators to pick up the X-ray pulses from these pulsars. One system, named SEXTANT, has been built and placed within an X-ray telescope known as NICER to test the concept of X-ray pulsar navigation (Winternitz et al., 2016).
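Both GPS and pulsar navigation ultimately rest on timing, and for GPS the relativistic correction shows up as the deliberate frequency offset quoted earlier. Multiplying the fractional offset by the seconds in a day recovers the daily clock drift (a sketch using the frequency figures from the text; the function name is ours):

```python
SECONDS_PER_DAY = 86_400

def daily_clock_offset_us(f_space_mhz, f_ground_mhz):
    """Daily accumulated clock offset (microseconds) implied by the
    deliberate fractional frequency adjustment of the GPS clocks."""
    fractional_offset = (f_space_mhz - f_ground_mhz) / f_space_mhz
    return fractional_offset * SECONDS_PER_DAY * 1e6

# 10.23 MHz in orbit vs. 10.22999999543 MHz set on the ground before launch:
print(daily_clock_offset_us(10.23, 10.22999999543))   # ~38.6 microseconds per day
```

Left uncorrected, ~38 microseconds of clock error per day would translate into kilometers of position error, which is why the relativistic tuning is baked into the satellites before launch.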


10.2 The Light Sphere of Earth

Moving away from Earth beyond our ring of artificial satellites, our outgoing light cone contains all the information of our present and recent past, including all of our TV, radio, and other electromagnetic broadcasts speeding outward into the galaxy at the speed of light. This chaotic blizzard of electromagnetic transmissions from the twentieth and twenty-first centuries forms a light sphere some 125 light years in radius, encompassing many nearby stars in the Milky Way. These signals from Earth convey a message to any planets or civilizations (if they exist) who may be listening. The Search for Extraterrestrial Intelligence (SETI) program has used this idea in reverse. SETI assumes that other civilizations with technology would likewise be sending signals in our direction, either intentionally or accidentally, and so SETI has used radio telescopes to listen for them since 1960, when astronomer Frank Drake conducted the first SETI search (Garber, 2014). Our outgoing broadcasts include all the radiation emitted since Marconi made the first wireless radio transmission in 1897. These transmissions fill a sphere whose radius is the equivalent number of light years, approximately 125 light-years at the time of this book. This sphere encompasses a region that includes over 18,000 stars. Using information derived from the NASA satellites TESS and Kepler, there are at least 1319 known planets within that sphere that could possibly have already tuned into our first television and radio programs. Figure 10.2 below shows the extent of this light sphere, which currently extends out to the Hyades cluster and encompasses the "Ursa Major moving group," otherwise known as the stars of the Big Dipper. The Coma cluster, a cluster of galaxies 332 million light-years distant, is very far in the background.
An observer on one of these Coma Cluster galaxies would see the Earth in the time before humans existed – indeed, when our current continents were combined into one supercontinent known as Pangea (NASA Exoplanet Archive, 2022). Looking through our region of the galactic neighborhood, we can trace the structure of these signals in space. The signals would be sequenced in order of origin, with the oldest radio signals leading the way, 125 light years away. Closer to Earth in the light sphere would be the crescendo of electronic transmissions from the age of radio, the first television signals, cold-war defense radio systems, and then, from more recent times, the proliferation of cell and microwave communications. A thin shell of signals near Earth would include short-range transmissions from cell phones and millions of other devices. These last regions of the light sphere, from the era when satellites and computers first became available, would be contained within the 50 light years closest to Earth.
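The layering described above can be sketched as a simple calculation (the era start years other than Marconi’s 1897 are illustrative assumptions): each technology’s signals occupy a shell whose outer radius in light years equals the years elapsed since those signals began.

```python
# Each broadcast era forms a shell in the light sphere; its leading edge has
# traveled one light year per year since the era began.

def shell_radius_ly(era_start_year: int, current_year: int = 2023) -> int:
    """Outer radius (in light years) of the shell of signals from one era."""
    return current_year - era_start_year

# Era start years: Marconi's 1897 from the text; the others are rough,
# assumed dates used only for illustration.
eras = {
    "first wireless transmissions": 1897,
    "early television broadcasts": 1936,
    "satellite communications": 1962,
}
for era, year in eras.items():
    print(f"{era}: leading edge ~{shell_radius_ly(year)} light years out")
```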

10  Our Light Sphere and Horizon 


Fig. 10.2  Our light sphere extends 125 light years, and includes within it all of the human-made radio, television and other artificial electromagnetic broadcasts. Within this sphere are over 18,000 stars and over 1300 known planet systems. (Image based on https://www.cosmos.esa.int/documents/29201/5011668/GCNS_Poster_Stellar_Densities_top_image.png/)

The “light sphere” would also include the view of Earth in the past that could be seen from a distance with telescopes. Like the radio signals, any view of the Earth would be delayed, so beings 10, 100, or 1000 light years distant would see our Earth as it was 10, 100, or 1000 years ago. Beyond 125 light years, Earth’s emissions would include only natural radio emissions from the aurora and pulses of radio waves from interactions between Earth’s magnetic fields and the solar wind.

10.3 Planets That Can Detect Earth

Astronomers have studied Earth’s light sphere more systematically, turning the tables on SETI and evaluating how many planets in our galactic neighborhood would be able to detect our radio and television signals today. One study recently published in the journal Nature was entitled “Past, Present and Future Stars that can see Earth as a Transiting Exoplanet” (Kaltenegger and Faherty,
2021). Detecting the Earth as a “transiting exoplanet” requires a precise alignment between the star and the plane of Earth’s orbit, so that the Earth can be seen crossing the Sun’s disk. Such exoplanet “transits” are the primary way we detect extrasolar planets (Fig. 10.3). Still, for a small planet like the Earth, the change in the Sun’s brightness when the Earth crosses is extremely tiny – only 0.008% of the Sun’s light is blocked by our tiny Earth (the amount is the ratio of the area of Earth’s disk to the Sun’s disk). The alignment required for the Earth to block the Sun is also very precise – within about 0.3 degrees of the plane of the Earth’s orbit. This constraint means that only about 0.3% of all the stars we can see are aligned well enough to detect Earth; put another way, observers around more than 99% of stars could never catch Earth in transit because of misalignment. Using our current technology, we can detect extrasolar planets in one of two ways: by viewing the planet as it crosses the disk of a star (causing a transit that dims the light of the star by a tiny fraction of a percent), or by viewing the tugging effects of the planet on the star, which cause a measurable “wobble” of the distant star, detected through tiny Doppler shifts in the star’s spectrum. Both techniques have been developed only in the past 20 years, yet they have already discovered over 5000 extrasolar planets. The “transit” technique is the easiest to employ, and dedicated satellites like Kepler and TESS have been used to stare at thousands of stars over many years to detect transit events. Even with our very limited knowledge, we have identified several planets within Earth’s “light sphere” that can view Earth as an extrasolar planet.
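Both numbers quoted above can be recovered from the geometry (a back-of-the-envelope sketch using rounded values for the radii of the Earth and Sun and the Earth-Sun distance; treat the constants as approximations):

```python
import math

R_EARTH_KM = 6_371.0   # mean radius of Earth (rounded)
R_SUN_KM = 696_000.0   # radius of the Sun (rounded)
AU_KM = 1.496e8        # Earth-Sun distance, 1 astronomical unit

# Transit depth: the fraction of sunlight blocked equals the ratio of the
# disk areas, (R_earth / R_sun)^2.
depth = (R_EARTH_KM / R_SUN_KM) ** 2
print(f"transit depth: {depth:.3%}")

# Alignment tolerance: an observer must lie within the angle subtended by
# the Sun's radius as seen across one Earth-Sun distance.
alignment_deg = math.degrees(math.atan(R_SUN_KM / AU_KM))
print(f"alignment needed: ~{alignment_deg:.2f} degrees")
```

The first number comes out near 0.008% and the second near 0.3 degrees, matching the figures in the text.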
There are undoubtedly thousands of other planets yet to be discovered that could detect Earth even with our current primitive technologies – and countless others if more advanced technologies could be contemplated! Using the latest results from GAIA and the most complete catalog of discovered extrasolar planets, Kaltenegger and Faherty identified some 1715 stars within 326 light-years that are able to spot Earth in its current stage of civilization, and identified another 319 stars that could see the Earth transiting within the next 5000 years. The study calculated the view of the Earth and the Sun from these differing vantage points and selected those with the precise alignment needed to “discover” our planet as it crossed through the Sun’s disk. The study considered only the extrasolar planets known today – a very incomplete sample due to the requirements we mentioned above – so the actual number of exoplanets (including those we can’t detect) may be 100 times greater. Even within the catalogs of known exoplanets, seven known exoplanet systems could see the Earth’s transit, and 75 of the closest stars in the sample can view human-made radio waves – and are part of Earth’s current “light sphere” of radio and television broadcasts. It is remarkable to recall that the transits of Venus and Mercury played a crucial role in launching Earth’s pursuit of scientific astronomy. Now, that same science has revealed the precise conditions under which the Earth itself could be viewed from other solar systems! The seven known extrasolar planets that could detect Earth include the planets orbiting the nearby star Ross 128 – only about 11 light years (3.375 pc) distant and one of the 20 closest systems. A viewer on a planet around this star is now able to see television shows from 10 years ago! The planets also include those orbiting Teegarden’s Star at about 12 light years, which hosts two Earth-mass planets, and the TRAPPIST-1 star system at 40 light years, which includes seven Earth-sized planets.

Fig. 10.3  The two main techniques for detecting extrasolar planets have enabled the discovery of over 5000 extrasolar planets. The spectroscopic technique (left) can detect the Doppler shift of a star being pulled by its planet, while the transit technique (right) can detect the tiny dip in brightness when the planet moves in front of the disk of the star. (Images from https://commons.wikimedia.org/wiki/File:Transit_Method_of_Detecting_Extrasolar_Planets.jpg and https://commons.wikimedia.org/wiki/File:The_radial_velocity_method_(artist%E2%80%99s_impression).jpg)

10.4 JWST Imaging of Other Planets

A third possibility for detecting other planets comes from the technique of “direct imaging,” which has only been demonstrated a few times from telescopes like the Hubble Space Telescope and the VLT at the European Southern Observatory. This technique was recently validated for the first time with the new JWST, which can greatly expand the types of planets that can be detected. The challenge in directly seeing planets orbiting other stars arises from the enormous difference in brightness between the star and planet: the star, often billions of times brighter than the planet in optical light, drowns the planet in its glare. Since stars are typically brightest at optical wavelengths, while planets are brighter in the infrared, direct imaging requires techniques that block the intense optical light and select the infrared light emitted by the planets. The JWST is a perfect instrument for picking out the faint light from extrasolar planets since it is optimized for observations in the infrared. It also contains additional instruments that will allow us to measure the composition of these planets by taking spectra of their atmospheres. In one of the first images from JWST, the planet system orbiting the star HIP 65426 was observed. The star HIP 65426 is at a distance of 361 light years, with a white-hot surface at nearly 9000 K, or about 50% hotter than the Sun. For the planets around HIP 65426 to be “habitable,” they would need to be far enough from this star for surface temperatures to fall between the freezing and boiling points of water. This condition of a “habitable zone” is used to locate planets that have some chance of having liquid water on their surfaces and provides some basis for focusing on planets that could harbor life.
As the HIP 65426 star is 17 times brighter than our Sun, the habitable zone would be expected to be approximately four times farther from the star than in our solar system – in the range of 4–6 astronomical units. The HIP 65426 star was selected for study as a nearby, young star that could have planets still in the process of formation and that is bright enough to be detectable by the best ground-based telescopes. From studies with the ground-based VLT telescopes at the European Southern Observatory, a planet was indeed detected in the near-infrared (around 2-micron wavelength). The ground-based telescopes were also able to obtain a spectrum of the object. They
determined that the planet (known as HIP 65426b) appears to be a “super Jupiter” with a mass in the range of 6–12 Jupiter masses, with intense infrared luminosity coming from the hot surface of the planet, estimated to be in the range of 1300–1600 K. The planet was observed at 92 astronomical units from its parent star, well beyond the “habitable zone” for this star. However, the planet glows from its own heat and is an interesting crossover case approaching a “brown dwarf,” an object somewhere between a star and a planet. The HIP 65426b planet is also very young – perhaps in the range of 15–20 million years – so the new JWST images help us learn more about the formation of new planets (Chauvin et al., 2017). The HIP 65426b planet is an ideal candidate for direct imaging with JWST due to the great distance between the planet and its parent star. As part of the early release images from JWST, the HIP 65426b planet was observed in a variety of infrared wavelengths and easily detected shining in its own light. The planet is over 10,000 times fainter than the star in infrared wavelengths, so this test observation was an excellent way to gauge how well JWST can function in this type of exoplanet research (Witze, 2022). With this proof of JWST’s capability for directly imaging extrasolar planets, further studies will no doubt reveal hundreds of new planets around nearby stars and increase our catalog of known extrasolar planets beyond what we can learn from transit studies alone (Fig. 10.4).
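Two of the scalings used in this section can be sketched quickly (a rough illustration with rounded values, not the published analysis): the habitable-zone distance grows as the square root of the stellar luminosity because received flux falls off as the inverse square of distance, and a 10,000-to-1 brightness ratio corresponds to 10 astronomical magnitudes of contrast.

```python
import math

def habitable_zone_au(luminosity_suns: float) -> float:
    """Approximate habitable-zone distance, scaling the Earth-Sun distance
    by the square root of the stellar luminosity (inverse-square flux)."""
    return math.sqrt(luminosity_suns)

# For a star ~17 times the Sun's luminosity, the zone sits ~4 times farther out:
print(habitable_zone_au(17.0))  # ~4.1 AU

# A planet 10,000 times fainter than its star corresponds to a contrast of
# 2.5 * log10(10^4) = 10 magnitudes.
contrast_mag = 2.5 * math.log10(1e4)
print(contrast_mag)  # 10.0
```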

Fig. 10.4  JWST direct imaging of the extrasolar planet HIP 65426b in several infrared wavelengths after removing the light from the star HIP 65426. The images of the planet were taken at infrared wavelengths of 3, 4.44, 11.4, and 15.5 microns, from left to right. The longer wavelengths are ideal for imaging an extrasolar planet because the ratio of the brightness of the planet to the star increases in the infrared. (Image courtesy of NASA / STScI, available at https://www.jpl.nasa.gov/news/nasas-webb-takes-its-first-ever-direct-image-of-distant-world)

10.5 The Horizon of the Observable Universe

We know from studying the universe’s expansion rate that the “Big Bang” happened 13.7 billion years ago, based on the best models of cosmic expansion, updated with the most accurate estimates of the universe’s mass and the value of the cosmological constant. This “age of the universe” sets an upper limit on the age of the light we can receive with our telescopes today. Multiplying this age by the speed of light gives the upper limit to the “light travel time” distance of light we can observe. This maximum light-travel-time distance, 13.7 billion light years, could be considered the current “radius” of the universe if the universe were static. However, we also know that more space exists well outside of this light cone, which we have not yet been able to observe. A more proper way to describe this limit is as our cosmic horizon. Just as ancient explorers knew there were ships, islands, and continents beyond the horizon, we can infer the existence of space beyond our cosmic horizon, even though we lack any observational tools to see past it.

Space beyond our horizon is at a distance so vast that light from these greater distances has not had time to reach us yet. The best estimates from our cosmic expansion measurements suggest that the universe’s present size is about 47 billion light years in radius – a result of cosmic expansion pushing the farthest galaxies so far from us that their distances lie beyond our horizon. The light from these galaxies outside our horizon won’t arrive for billions of years. The regions of space beyond our horizon are technically outside our observable universe and are sometimes referred to as the “multiverse,” a topic of both speculation and great interest (Wright, 2013). When confronted with the notion of Big Bang cosmology and the expansion of space from a primordial “atom,” it is common to ask the fundamental question: “What does the universe expand into?” The author asked Carl Sagan this question during a meeting at his small college in 1997. Sagan’s answer was, “the universe expands into a space to which we do not
have access.” This statement, which sounds like a computer error message, expresses a fundamental limitation of modern astrophysics on our knowledge of the universe. The limit of our vision – the extreme edge of the light cone – provides our “horizon,” the limit to the knowledge we can derive from cosmic photons. In short, our knowledge of the macroscopic universe is limited to the narrow “light cone” we inhabit, based on the technology of today. And while we may not have access to the space we are expanding into, we can speculate about the regions outside that space – the domains known as the multiverse. Whether the space and time outside that bubble, beyond our horizon, resemble our own space and time is unknown. These regions of the universe – or the multiverse – are as unknown to us today as the contents of continents and islands beyond the visible horizon were to our ancient ancestors.
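The gap between the 13.7-billion-light-year light-travel distance and the roughly 47-billion-light-year present radius can be illustrated numerically (a rough sketch, not the author’s calculation, assuming a flat universe with H0 = 70 km/s/Mpc, 30% matter, and 70% dark energy, and ignoring radiation): integrating the expansion history gives the “comoving” distance to the horizon, i.e. where the most distant light-emitting regions are located today.

```python
import math

# Comoving horizon distance D = (c/H0) * Integral[0..1] da / (a^2 * E(a)),
# with E(a) = sqrt(Om/a^3 + OL). Substituting a = u^2 turns the integrand
# into the smooth function 2 / sqrt(Om + OL * u^6), easy to sum numerically.

H0 = 70.0                      # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7    # matter and dark-energy fractions (assumed)
C_KM_S = 299_792.458
MPC_TO_GLY = 3.2616e-3         # 1 Mpc is about 3.26 million light years

def comoving_horizon_gly(steps: int = 100_000) -> float:
    h = 1.0 / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * h
        weight = 0.5 if i in (0, steps) else 1.0   # trapezoid rule
        total += weight * 2.0 / math.sqrt(OMEGA_M + OMEGA_L * u ** 6)
    hubble_distance_mpc = C_KM_S / H0
    return total * h * hubble_distance_mpc * MPC_TO_GLY

print(comoving_horizon_gly())  # ~46 billion light years
```

The answer lands near 46 billion light years, in line with the ~47-billion-light-year radius quoted above; the exact value shifts slightly with the assumed cosmological parameters.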

10.6 Maps of the Distant Universe and Large-Scale Structure

Just as our ancestors refined their maps of the Earth as they developed more advanced knowledge, we too can refine our maps of the universe to include detailed mapping of the distribution of galaxies out to the “horizon” of our universe – the limit to which we can map the spatial extent of the universe using light. Unlike the Earth maps made by our ancestors, these maps represent a mixing of time and space – with only nearby galaxies visible in their present state and more distant galaxies viewed billions of years in the past. Nevertheless, the mapping of galaxies has been conducted systematically for decades through sweeps of the skies with large telescopes and a thorough census of galaxies and their redshifts. By applying Hubble’s law, each galaxy can be located in three-dimensional space from its redshift, and the universe’s large-scale structure can be revealed. Mapping these galaxies requires compiling the spectra of hundreds of galaxies in a patch of sky and then measuring the redshift z for each of them. The redshift can be used to calculate the distance d to the galaxy, using d = cz/H0, where c is the speed of light and H0 is the Hubble constant. The ability to derive distances to the galaxies quickly allows the two-dimensional image of the sky to be converted into a three-dimensional map of the large-scale structure of the universe. This investigation began in the 1980s, when astrophysicists Margaret Geller, John Huchra, and their colleagues conducted systematic redshift surveys in strips of the sky. These surveys revealed that matter in the universe is distributed in a
patchy way, with large voids and structures of galaxies that the group gave names such as the “Great Wall” (Harvard and Smithsonian Center for Astrophysics, 2022). Further studies by several teams have filled in our picture of the universe out to billions of light years. They have revealed additional structures beyond the “Great Wall” and a large-scale structure that resembles Swiss cheese more than a smooth and even mixture of galaxies. Some of the systematic surveys of large-scale structure include the 2dF galaxy redshift survey, which offered its final data release in 2003; the Sloan Digital Sky Survey, or SDSS, which has continually released additional data for over 20 years, with its 17th data release in 2021; and the 2MASS survey, which used a dedicated infrared telescope to fill in the region near the galactic plane, where gas and dust in the Milky Way absorb optical light (Fig. 10.5). The result of these mapping efforts is a fairly complete picture out to the first billion light-years or so, tapering off into “terra incognita” after about 2 billion light years, as the galaxies become too faint for the mapping and spectroscopy to continue. Deep fields have been observed with huge telescopes to obtain complete samples within small patches of sky, and these deep surveys have been very helpful for extending the mapping to more distant galaxies and earlier times. A fundamental limitation is that as we push farther back in time, the galaxies become smaller and less luminous, as they themselves take time to grow and brighten. At some point, we will run out of galaxies altogether: in the initial hundreds of millions of years of the universe’s history, there should be no stars at all! This period in our universe’s history is known as the “Dark Ages.”
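The mapping step described above – converting a sky position and a redshift into a point in 3D space with d = cz/H0 – can be sketched as follows (an illustrative example, valid only at low redshift; H0 = 70 km/s/Mpc is an assumed value):

```python
import math

H0 = 70.0               # Hubble constant, km/s/Mpc (assumed)
C_KM_S = 299_792.458    # speed of light

def galaxy_position_mpc(ra_deg: float, dec_deg: float, z: float):
    """Cartesian (x, y, z) position in Mpc from sky coordinates and redshift,
    using the low-redshift Hubble-law distance d = c*z/H0."""
    d = C_KM_S * z / H0
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (d * math.cos(dec) * math.cos(ra),
            d * math.cos(dec) * math.sin(ra),
            d * math.sin(dec))

# A galaxy with redshift z = 0.023 (roughly that of the Coma cluster of
# galaxies) lands at a distance of about 100 Mpc:
x, y, zc = galaxy_position_mpc(195.0, 28.0, 0.023)
print(math.hypot(math.hypot(x, y), zc))  # ~98.5 Mpc
```

Repeating this for every galaxy in a survey strip produces exactly the kind of wedge-shaped slice shown in Fig. 10.5.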

Fig. 10.5  An example of a large-scale structure map of the universe, based on a 3D map of the galaxies from the 2dF galaxy survey, showing a slice of space from a survey of galaxies within a strip of the sky. The redshifts of the galaxies determine the distances, while the position in the sky locates each galaxy along the angular coordinate. Many of the superclusters of galaxies, which span hundreds of millions of light years, are labeled in the figure; one of them is named after Harlow Shapley. (Image by the author, based on https://apod.nasa.gov/apod/ap071107.html)

10.7 The Cosmic Microwave Background Radiation

The data from the Sloan Digital Sky Survey include a vast number of high-redshift galaxies and quasars and have been used to extend the picture of large-scale structure – to search for its seeds in the first few billion years of the universe’s history. Recognizing that these seeds of structure come from the oscillations – or sound waves – in the early universe, this dataset is known as the Baryon Oscillation Spectroscopic Survey, or BOSS (SDSS, 2012). The baryon oscillations refer to the pressure waves in the first protons (or baryons) as they formed the first neutral hydrogen atoms 13.67 billion years ago. This period, some 300,000 years after the initial singularity, marks a crucial event in the universe’s history. This time
represents the moment when the first neutral atoms were formed and the universe became more transparent to light. As a result, the light from the early universe and its atoms separated, or “decoupled,” creating the flash of light we can see all around us – the Cosmic Microwave Background Radiation, or CMBR. The discovery of the CMBR is yet another story of serendipity in astronomy. The tale involves two young radio engineers, Arno Penzias and Robert Wilson, who, while working at Bell Labs in 1964 on a new low-noise microwave radio receiver, noticed a nonzero signal from the sky that was present in all directions. Using a large feed horn, the pair diligently worked to reduce all noise sources in their equipment – including cleaning some pigeons out of the feed horn – but despite their best efforts, the noise persisted. This noise, the CMBR, is a photon echo of the Big Bang. It represents the flash of optical and infrared light that, over the billions of years of the universe’s history, has been redshifted from the optical beyond the infrared into the microwave
range of wavelengths. As the pair investigated the data more carefully, they recognized the telltale signature of a hot glowing object – blackbody radiation – which by this point in the universe’s history had cooled to 2.7 degrees above absolute zero. Their discovery earned them the Nobel Prize in 1978 (Fig. 10.6).

Fig. 10.6  The Bell Laboratory microwave horn used by Arno Penzias and Robert Wilson to discover the Cosmic Microwave Background Radiation in 1964, eventually resulting in a Nobel Prize for both of them. (Image from https://commons.wikimedia.org/wiki/File:Horn_Antenna-in_Holmdel,_New_Jersey.jpeg)

Further studies of the CMBR have included space-based mapping experiments: the Cosmic Background Explorer (COBE), launched in 1989; the Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001; and most recently, the Planck satellite, launched in 2009. With each of these new satellites, our picture of the CMBR has improved, and the tiny wiggles in intensity, at the level of one part in 100,000, are direct evidence of the oscillations and structure in the early universe at the moment when it became transparent to light. Before this moment, the universe was opaque to light; therefore, the CMBR image represents the limit of our exploration in electromagnetic radiation. The CMBR is essentially
a “wall” that blocks our view into the early universe before 300,000 years after the Big Bang (Fig. 10.7).

Fig. 10.7  Cosmic Microwave Background Radiation from the Wilkinson Microwave Anisotropy Probe spacecraft. The image shows the tiny departures from a uniform microwave background that give clues about the initial seeds of large-scale structure in the universe. (Image from https://commons.wikimedia.org/wiki/File:WMAP_ILC_b.jpg)

The inhomogeneity of the CMBR was followed rapidly by the formation of large-scale structures characterized by large filaments and voids. This transition presents a challenge to astrophysical models, as it requires a relatively smooth and uniform early universe to condense into structures that are amplified rapidly. The “seeds” of this large-scale structure were planted in the fireball of the early universe, where oscillations of the seething cauldron of nuclei and particles created regions of enhanced density that became the centers of gravitational condensation for the first galaxies and galaxy clusters. Astrophysical models that incorporate the physics of the early universe have replicated the spectrum of the oscillations shown in the CMBR. These oscillations have a spectrum corresponding to the harmonics you might hear from a musical instrument. Instead of frequencies measured directly in time, the oscillations of the early universe are imprinted on the CMBR and show up as patches of intensity with different angular sizes on the sky. By mapping the power at each angular scale, the CMBR provides data on the spectrum of oscillations of the early universe, which can be compared with astrophysical models that include key parameters such as the Hubble constant H0, the amount of matter in the universe, and the ratios of “baryonic” matter
(like protons and neutrons) to other components of the universe, like dark matter and dark energy (Fig. 10.8).

Fig. 10.8  CMBR spectrum showing the strength of features in the microwave background at different angular scales. The large bump at 1 degree is the “baryon peak,” which shows the typical angular size of clumps of matter emerging from the early universe and represents one of the fundamental frequencies within the early universe. These bumps provide constraints on the composition and physics of the early universe. (Image from https://commons.wikimedia.org/wiki/File:WMAP_TT_power_spectrum.png)

Our best models of the universe incorporate two mysterious ingredients – dark matter and dark energy. In the very earliest times of the universe, space suddenly expanded, as energy from the Higgs field went through a phase transition that provided the intense jolt of energy that set matter and light into a state of violent expansion. The exact mechanisms of these earliest moments require a more profound knowledge of the particle universe, which we discuss in the next chapter. As our telescopes improve, we can also begin to map the “terra incognita” and fill in the spaces between the CMBR and the earliest galaxies. The new JWST promises to provide deep-field data out to the “dark ages,” when the first stars began to form, and since it observes in the infrared, it can also pick out the earliest galaxies before they would be visible in optical light. New radio telescopes like the ALMA array in Chile can also see into the depths of the “dark ages” and detect the radio emissions of the earliest star-forming clouds before the first stars were fully
formed. The mapping at these longer wavelengths promises to provide a nearly complete picture of our universe in electromagnetic radiation as we study the transition of the universe from a hot “fireball” into the realm of galaxies and stars we inhabit today (Fig. 10.9).

Fig. 10.9  Image showing the connections between the CMBR and the first galaxies, which condense from the tiny seeds of enhanced density that emerge from the early universe. This image illustrates the work of the BOSS group and shows how surveys of galaxies can help map the universe’s evolution from the CMBR to the clusters of galaxies within our current epoch. (Image courtesy of Chris Blake and Sam Moorfield, available at https://www.sdss3.org/surveys/boss.php)
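Two of the CMBR numbers quoted in this section follow from simple relations (a quick sketch using Wien’s displacement law and the measured 2.725 K background temperature; treat the constants as rounded values):

```python
WIEN_B = 2.8978e-3   # Wien's displacement constant, meter-kelvins
T_CMB = 2.725        # present-day CMBR temperature, kelvins

# Peak wavelength of a 2.7 K blackbody lies in the microwave band:
peak_mm = WIEN_B / T_CMB * 1000.0
print(f"blackbody peak: ~{peak_mm:.2f} mm")

# Wiggles of "one part in 100,000" correspond to temperature ripples of:
ripple_microkelvin = T_CMB * 1e-5 * 1e6
print(f"anisotropy amplitude: ~{ripple_microkelvin:.0f} microkelvin")
```

The peak comes out near 1 millimeter, squarely in the microwave range, and the anisotropies amount to ripples of only a few tens of microkelvin.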

10.8 Early Universe and the Limits of Our Light Cone

The universe beyond the CMBR is a realm we can only understand through nuclear and particle physics. Our available data include the results of experiments from accelerators that can recreate the intense heat and density of the early universe. The CERN LHC collisions create flashes of particles at the highest temperatures created by humans, exceeding 5 trillion degrees Kelvin, which corresponds to the universe’s temperature after the first microsecond of the Big Bang (Robertson, 2012). Theoretical models can go back even further, to times when our present-day particles had not yet condensed and the separation of matter and antimatter had not yet happened. These models can then evolve forward to the time of the CMBR, 300,000 years after the Big Bang started, and we can compare the detailed structures within the CMBR with such models. These models reproduce the overall smoothness
of the CMBR, which suggests that the earliest times of the Big Bang included a period of rapid, exponential expansion known as the “inflationary epoch.” This initial burst of exponential growth resulted in space expanding faster than the speed of light. As a result, the inhomogeneities of the universe were spread out over vast distances, to locations currently beyond our horizon. In this way, the relatively smooth nature of the CMBR can be explained, and more recent models of the quantum fields present in the universe provide additional support for this inflationary model. Since matter was embedded in the inflating space, it technically was not moving through space faster than the speed of light, and so inflation does not violate special relativity. This early epoch of cosmic inflation extended our universe far beyond our visible horizon; therefore, the light travel time of our universe – 13.7 billion years – provides only a lower limit to the size of the universe. As billions of years pass, more of the universe will become visible to us as our horizon expands. In addition, the expansion of our “bubble” of space and time appears to be accelerating, driven by a mysterious form of dark energy. The combination of these influences – inflation and dark energy – distorts our light cone through these varying expansion rates. At the base of the light cone, the rapid exponential expansion from a singularity makes the bottom of the light cone something like the stem of a wine glass. And at the rim, the ever-increasing acceleration of the expansion of space from dark energy causes the cone to flare outwards. The revised cone – which represents a 3-dimensional projection of a 4-dimensional universe – is shown below (Fig. 10.10).

Fig. 10.10  The shape of our light cone after accounting for inflation and dark energy. In the very early part of the universe, the inflationary era causes the base of the light cone to expand outwards, much like the stem of a wine glass. The outer flaring of the light cone comes from the accelerating expansion caused by dark energy. (From https://www.nasa.gov/content/goddard/webb-conversations-its-all-about-infrared-why-build-the-james-webb-space-telescope)

10.9 The View of the Early Universe

Knowledge of the specific evolution of the early universe, which lies on the other side of the afterglow of the Big Bang, requires detailed knowledge of high-energy nuclear and particle physics. The initial burst of energy is thought to have come from quantum fluctuations that produced the inflation of the early universe, which exponentially expanded its size within the first 10⁻³³ seconds. Bringing in more of the details of cosmic expansion due to dark energy (as we will discuss later) also gives the shape of the light cone a unique and subtle curvature that is the subject of ongoing research with the Hubble Space Telescope and the new James Webb Space Telescope. Learning about the earliest times in the universe requires us to disentangle the effects of early particle physics and nuclear physics that created the fragments of particles and energy that emerged from the early universe. This
period includes the formation of matter from an initial undifferentiated sea of matter and antimatter, and the condensation of the particles we are made of – the first protons, neutrons, and electrons. These particles, in turn, emerged from an initial sea of quarks and condensed once the temperatures of the Big Bang fell to levels that allowed protons and neutrons to exist without converting back into each other. The initial protons and neutrons eventually cooled to form the first nuclei of helium, along with the trace amounts of Li, Be, and B that emerged from the Big Bang. Studying the ratios of these elements in the oldest stars, which trace the element abundances emerging from the Big Bang, provides important constraints on these first nuclei in the universe. To investigate the early universe before the CMBR, we require a tool other than the telescope. The data on the earliest universe come from an unlikely source: giant particle accelerators like those at CERN, which can create flashes of energy that duplicate, in some small measure, the titanic energies and blinding temperatures of the early Big Bang.
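The link between time and temperature in the early universe can be illustrated with a standard order-of-magnitude scaling from textbook cosmology (not the author’s calculation; the numerical prefactor depends on the particle content of the universe, so results are rough estimates only): during the radiation-dominated era, temperature falls roughly as the inverse square root of time.

```python
import math

def early_universe_temperature_k(t_seconds: float) -> float:
    """Rough radiation-era scaling, T ~ 1.5e10 K / sqrt(t), with t in seconds.
    Good only to within factors of a few; the prefactor is an assumed,
    textbook-style value."""
    return 1.5e10 / math.sqrt(t_seconds)

# About one microsecond after the Big Bang, the temperature was of the order
# of ten trillion kelvins, the regime that LHC collisions briefly recreate.
print(f"{early_universe_temperature_k(1e-6):.1e}")  # ~1.5e13 K
print(f"{early_universe_temperature_k(1.0):.1e}")   # ~1.5e10 K
```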


References

Chauvin, G., Desidera, S., Lagrange, A.-M., Vigan, A., Gratton, R., Langlois, M., Bonnefoy, M., Beuzit, J.-L., Feldt, M., Mouillet, D., Meyer, M., Cheetham, A., Biller, B., Boccaletti, A., D’Orazi, V., Galicher, R., Hagelberg, J., Maire, A.-L., Mesa, D., & Olofsson, J. (2017). Discovery of a warm, dusty giant planet around HIP 65426. Astronomy & Astrophysics, 605, L9. https://doi.org/10.1051/0004-6361/201731152

Garber, S. (2014). SETI: The Search for Extraterrestrial Intelligence. History.nasa.gov. https://history.nasa.gov/seti.html

GPS.gov. (2019). GPS.gov: Timing applications. https://www.gps.gov/applications/timing/

Harvard and Smithsonian Center for Astrophysics. (2022). Large scale structure. Www.cfa.harvard.edu. https://www.cfa.harvard.edu/research/topic/large-scale-structure

Kaltenegger, L., & Faherty, J. K. (2021). Past, present and future stars that can see Earth as a transiting exoplanet. Nature, 594(7864), 505–507. https://doi.org/10.1038/s41586-021-03596-y

NASA Exoplanet Archive. (2022). Planetary systems. Exoplanetarchive.ipac.caltech.edu. https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=PS

Robertson, A. (2012, August 15). CERN scientists may have set new man-made heat record: 9.9 trillion degrees Fahrenheit. The Verge. https://www.theverge.com/2012/8/15/3244513/cern-scientist-hottest-man-made-temperature

Salas, E. (2019). Number of satellites by country 2019. Statista. https://www.statista.com/statistics/264472/number-of-satellites-in-orbit-by-operating-country/

SDSS. (2012). BOSS: Dark energy and the geometry of space. Sdss.org. https://www.sdss.org/surveys/boss/

Winternitz, L., et al. (2016). SEXTANT X-ray pulsar navigation demonstration: Flight system and test results (pp. 1–11). 2016 IEEE Aerospace Conference. https://ieeexplore.ieee.org/document/7500838

Witze, A. (2022). Webb telescope wows with first image of an exoplanet. Nature. https://doi.org/10.1038/d41586-022-02807-4

Wright, N. (2013). Frequently asked questions in cosmology. Www.astro.ucla.edu. https://www.astro.ucla.edu/~wright/cosmology_faq.html#ct2

11 The Quantum World – Physics of the Very Small

A nearly complete model for all the forces of physics has been developed in just over a century from the first glimmerings of knowledge of the subatomic world, which began in the late nineteenth century. The journey toward our current understanding of nuclear and particle physics began with accidents and surprises. The accidental discovery of the first sources of radioactivity in laboratories in the late nineteenth century was followed by multiple surprise discoveries of completely unknown energy sources and forms of radiation, which were given provisional names to provide a basis for classifying them. These included the mysterious "X-rays," the "uranic rays," and the alpha, beta, and gamma radiation. To add to the scientists' surprise, new elements could be created by radioactive decay, and the particles inside the nucleus were discovered to be tightly bound into tiny and dense concentrations of charge. The proton, the neutron, and a new force to hold them together were needed as the picture of our quantum world began to come into focus. The quantum theory that arose from these discoveries also provided a tantalizing glimpse of a more profound truth and a fundamental limit to our knowledge, as matter was found to alternate between particle-like and wave-like behavior in a way that defies exact measurement. The development of this story is one of courage, passion, and genius, which ultimately revealed a fundamental reality of the physical universe that limits our knowledge at the smallest scales of time and space due to what has become known as the uncertainty principle.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-27890-7_11


11.1 Discovery of the Fundamental Particles of Physics

The first evidence of the sub-atomic world of particles and high-energy radiation came from the unexpected discovery of a mysterious form of energy given the name "X-rays" by Wilhelm Conrad Roentgen in 1895. Roentgen found that these rays could expose photographic plates without emitting visible light and could penetrate opaque layers of heavy black paper. The following year, in 1896, Henri Becquerel discovered another kind of mysterious rays, provisionally given the name "uranic rays" based on their apparent origin from Uranium compounds. Ernest Rutherford did more experiments with these compounds and found that two kinds of radiation came from them, which he provisionally named "alpha" and "beta" radiation in 1898. A third type, "gamma" radiation, was discovered by P. V. Villard in 1900, who observed that gamma radiation was more penetrating than alpha and beta radiation and did not respond to magnetic fields, suggesting that gamma rays lacked an electrical charge (Segrè, 1980, p. 51). Soon after these discoveries, a series of clever experiments clarified the nature of the alpha, beta, and gamma radiation as Helium nuclei, electrons, and photons, respectively. The alpha radiation was shown to be Helium nuclei in 1907, when Ernest Rutherford and Thomas Royds sent alpha particles through the very thin wall of a sealed and evacuated glass tube and verified that the tube contained Helium after exposure to the alpha particles. In 1900, Becquerel showed that beta radiation had the same charge-to-mass ratio as electrons, thereby identifying the beta rays as electrons. Since the gamma radiation was electrically neutral, it was determined to be a type of light (Fig. 11.1). The early experiments also revealed that atoms were not fundamental and immutable but could be transformed through a type of nuclear alchemy.
The British physicist Ernest Rutherford studied a variety of radioactive compounds and verified that alpha and beta decays resulted in chemical changes indicating the transformation of one element into another. Rutherford's experiments also found evidence of a dense "nucleus" within the elements. In 1919 Rutherford discovered the positive particle within the nucleus that he called a "positive electron" or "H particle" – which we now know as the proton. Chadwick and Rutherford showed that alpha particles were scattered from thin foils at a variety of angles and were able to measure the tiny size of the nucleus – which seemed to be less than 10⁻¹³ cm, or over 1000 times smaller than the atoms themselves (CERN, 2019). This concentration


Fig. 11.1  Alpha, beta, and gamma radiation, based on their tracks in a magnetic field, as presented in Marie Curie's 1904 thesis. The curvature of the tracks shows the ratio of charge to mass, and the uncharged gamma rays travel without deflection in the magnetic field. (From https://loc.getarchive.net/media/recherches-sur-les-substances-radioactives-c94828)

of charge into such a dense point of matter was a huge surprise to the physics community, which had instead favored the prevailing "plum pudding model" of the atom, in which the negatively charged electrons (the plums) were spread throughout a much larger positively charged "pudding." In all these cases, the discoveries puzzled and amazed the scientists. Becquerel's discovery of radioactivity in 1896 was an accident, as he was studying the new phenomenon of fluorescence, in which certain chemicals glow after absorbing light. Becquerel had placed a series of slides coated with fluorescent chemicals in sealed envelopes next to photographic plates in a dark drawer. His experiment was to expose the slides to sunlight and then seal them in an opaque envelope to study how long each compound would glow while in darkness. He was amazed to find that Uranium compounds produced rays even when not exposed to sunlight. For J. J. Thomson, the discovery of the electron was also accidental, coming when he observed that magnetic fields could bend the rays within a cathode-ray tube. In Rutherford's case, the discovery of the tiny size of the nucleus and that a beam of alpha particles could bounce off a very thin gold foil was entirely unexpected. As Rutherford described his shock and amazement later, "It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if


you fired a 15-inch shell at a piece of tissue paper and it came back and hit you" (Ratcliffe, 2016). Rutherford's experiment proved that the matter in nuclei was packed into discrete bundles of mass, which we now know as protons and neutrons. This new picture replaced an earlier conception in which the positive charge within an atom was extended throughout a larger sphere of charge. The tiny size of the positively charged nucleus seemed impossible, given the massive electromagnetic forces that would be generated between such closely packed charges and should push the nucleus apart. One leading theory of the time suggested that having positive protons and negative electrons in proximity somehow suppressed the "ordinary properties of electrons" to reduce the strength of electromagnetic forces. Werner Heisenberg presciently imagined that a new force would be needed to explain the binding of the protons within the nucleus – a force we now know as the "strong" force. However, it was still a mystery why some elements decayed and why some nuclei would produce radioactivity. Finding out the nature of the mysterious radioactive decay and discovering some of the new elements formed within the decomposition of the uranic compounds was to be accomplished by one exemplar of the dynamic and courageous people involved in the early days of particle physics, Marie Curie. Curie elucidated the nature of "radioactivity" and discovered several new elements through years of patient and dangerous work.

11.2 Marie Curie and Her Discoveries

Marie Curie's life provides a valuable case study of how our knowledge of radioactivity and the particle world evolved dramatically over one lifetime. Marie Sklodowska was born in Warsaw in 1867, the youngest of five children in a successful, educated, but not wealthy family. Her childhood was filled with educational opportunities, with her father teaching physics at a Lyceum in Warsaw and her mother directing one of the top girls' schools. As was typical for girls at the time, her early aspirations were focused on teaching younger children and working as a governess. Later in her education, she made a pact with her older sister to leave Poland and study in Paris, which was fulfilled when her sister Bronia received a degree in medicine and welcomed her to Paris in 1891, when Marie began her studies at the Sorbonne. Marie Sklodowska impressed her professor of physics, Gabriel Lippmann, who would himself win a Nobel prize, and he encouraged her to join laboratory groups studying magnetism and newly emerging metallurgical techniques.


Her work with Pierre Curie began at the industrial physics laboratory in Paris where Pierre worked. When she felt her duty was to return to her family after finishing her studies, Pierre encouraged her to stay in the laboratory to fulfill "our dreams, your patriotic dream, our humanitarian dream, and our scientific dream." These dreams would include marriage, and Marie became Marie Curie in 1895. The Curies began their studies of radioactivity with Antoine Henri Becquerel, who had recently discovered the mysterious "uranic rays" in his laboratory. Becquerel had found that slides with uranium sulfate exposed film in the dark, resulting in the idea of "uranic rays" described earlier. Marie and Pierre Curie took Becquerel's discovery further by purifying various uranium compounds to find the most active ingredients for producing the rays. Raw uranium ore is found in a form known as pitchblende, which includes a mix of minerals and elements. The Curies purified several tons of pitchblende down to the trace components that were most radioactive, as judged by their ability to expose film in the dark. Among these substances were two new elements, which the Curies named Radium and Polonium, the latter after Marie's home country of Poland. These substances were millions of times more radioactive than Uranium. Since the rays appeared to come from the elements Uranium, Radium, and Polonium themselves, Marie Curie invented the term "radioactivity" to describe this new phenomenon, and she devoted the rest of her life to unlocking the nature of the mysterious emissions of these new elements. The Curies and Becquerel shared the 1903 Nobel prize for this work, which was crucial for developing new techniques for isolating and measuring the properties of radioactive compounds (Fig. 11.2). As is often the case in science, each discovery opened additional questions. Given the new intensely radioactive elements Radium and Polonium, what made them so radioactive?
What was the nature of the radioactivity itself? The Curies planned to answer these questions by testing all of the elements and uranic salts they could isolate, using a device that could measure the amount of ionization produced by a substance, as revealed by a cleverly designed chamber that detected the charges present between two plates. This device, sometimes known as a Curie electrometer, revolutionized the detection of radioactivity and remained the best device available until the invention of the Geiger counter. It considerably sped up discoveries, as it did not require the laborious effort of working in the dark with photographic plates. Along the way, Marie and Pierre had two children: Irene, born in 1897, and Eve, born in 1904, shortly after Marie and Pierre won the 1903 Nobel prize (Langevin-Joliot, 1998). Their work in those years included further purifying and isolating the element Radium to


Fig. 11.2  Bronze statue of Marie Curie at Marie and Pierre Curie Hall, Soka University of America. The statue shows Marie contemplating the new elements she has isolated in the quest to explain the mystery of radioactivity. (Image by the author)

allow for more detailed measurements of its properties. These efforts in devising more exacting methods for isolating and detecting the trace amounts of Radium and Polonium in her sources (often less than one-millionth of a gram) resulted in a second Nobel Prize, in chemistry, for Marie Curie in 1911. Unfortunately, Marie could not share the joy of this second Nobel prize with her husband Pierre, who had been killed years earlier, in 1906, in a tragic street accident (Jorgenson, 2017). The outbreak of World War I upended Marie Curie's research and focused her attention on the uses of radioactivity in medicine. She was close to completing her new research lab when the war broke out, and she put her experience to work during the war by helping the wounded. She organized 200 "radiological vans" to provide X-ray analysis of patients with broken bones. She also worked closely with doctors to develop and improve therapies using Radium in the hospital, while she trained nurses to work with the dangerous equipment and sources. After the war, Marie Curie visited the US and gained more support for her research. During her visit, Curie was received at the White House, where she was given a gram of Radium, worth $100,000 at the time, from donations raised by women for the Marie Curie Radium fund (American Institute of Physics, 2019).


Marie Curie's discoveries prompted a new Radium industry and helped usher in a wide variety of medical devices that used radiation to help diagnose and cure patients. Marie Curie's work continued into the next generation as she began to collaborate scientifically with her daughter Irene after 1925. Irene Curie obtained her Ph.D. in 1925 and, like her mother, worked scientifically with her husband, Frederic Joliot. Irene and her husband worked with Marie Curie to isolate new compounds produced by radioactive sources. These new compounds included several artificially produced isotopes of elements, and Irene won the Nobel prize for her discovery of artificial radioactivity. Marie Curie also played an active role in the new League of Nations after World War I to promote international intellectual cooperation and was a passionate advocate for world peace. Before Marie Curie died of leukemia, she saw her daughter make the discovery that won the 1935 Nobel prize and experienced the joys of being a grandmother. Her career inspired generations of scientists afterward, both for her outstanding scientific accomplishments and for her humanitarian vision of peaceful uses for radioactivity and of intellectual cooperation as a vehicle for preserving peace (Kohlstedt, 2004) (Fig. 11.3).

Fig. 11.3  Photograph of (from left to right) Marie Meloney, Irene Curie, Marie Curie, and Eve Curie in 1921. Her daughter Irene would go on to win the Nobel Prize, and her younger daughter Eve became Marie's biographer. (Image credit – https://medicine.yale.edu/news/yale-medicine-magazine/article/marie-curie-at-yale/)


11.3 De Broglie's Matter Waves

While Marie Curie's work focused on the particle nature of the subatomic world, other physicists observed that matter at the smallest scales could also behave like waves. These insights built on the work of Niels Bohr, who in 1913 provided a model for the Hydrogen atom that successfully predicted the energy levels in the Hydrogen spectrum. Bohr's model described the electron orbiting the proton in a Hydrogen atom with an angular momentum – the mass of the electron (m) multiplied by its velocity (v) and the radius of its orbit (r), or mvr – that came only in discrete amounts. Bohr noticed that this angular momentum was quantized and could be described by the equation mvr = nh/2π, where r is the radius of the electron orbit, n is an integer, and h is Planck's constant. By solving for r in this equation, the size of the Hydrogen atom could be predicted, and the quantized energies of the stable orbits corresponded to the observed spectral lines seen in glowing fluorescent Hydrogen. Quantized light was emitted when the Hydrogen atom jumped from a higher orbit to a lower one, and light of the right wavelength could also be absorbed, exciting the electron into a higher orbit. The Bohr model of the Hydrogen atom provided a solid theoretical basis for the otherwise mysterious "quantum jumps" that electrons were making within atoms. The energy levels of the Hydrogen atom had been measured and could be calculated with an equation known as the Rydberg formula, but were unexplained theoretically until Bohr's theory. Only specific energy levels in the Hydrogen atom were "allowed," a finding at odds with classical physics. The Bohr model began the process of bridging classical electrodynamics and classical mechanics into a new kind of quantum mechanics.
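The Bohr model's predictions can be reproduced with a few lines of arithmetic. The sketch below is illustrative rather than from the text; it assumes the standard 13.6 eV ground-state binding energy and computes the quantized levels Eₙ = −13.6 eV/n² and the wavelength of light emitted in a jump between two levels, reproducing the results of the Rydberg formula:

```python
# Hedged sketch: Hydrogen energy levels from the Bohr model.
# E_n = -13.6 eV / n^2; a photon emitted in a jump n_hi -> n_lo has
# wavelength lambda = h*c / (E_hi - E_lo).

H = 6.626e-34      # Planck's constant (J s)
C = 2.998e8        # speed of light (m/s)
EV = 1.602e-19     # joules per electron volt
RYDBERG_EV = 13.6  # Hydrogen ground-state binding energy (eV)

def energy_level(n):
    """Bohr-model energy of level n, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def transition_wavelength_nm(n_hi, n_lo):
    """Wavelength (nm) of the photon emitted in the jump n_hi -> n_lo."""
    delta_e = (energy_level(n_hi) - energy_level(n_lo)) * EV  # joules
    return H * C / delta_e * 1e9  # meters -> nanometers

print(transition_wavelength_nm(3, 2))
```

Running this for the 3 → 2 jump gives a wavelength near 656 nm, the red Balmer line that dominates the glow of fluorescent Hydrogen.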
Louis de Broglie extended the emerging ideas about the quantum world through his insightful work of 1924, which suggested that all matter in the quantum world behaves like a wave, with the wavelength depending on its momentum (the product of its mass and speed). De Broglie recognized that electrons behaved as matter waves within Bohr's model: the quantized energies of electron orbits corresponded to standing waves in the atom containing integer multiples of the wavelength. De Broglie's model implied that any particle – a proton, a nucleus, or a baseball – can be considered a "wave packet" of matter with a wavelength that depends on its momentum. The "matter wave" wavelength can be calculated as Planck's constant (h) divided by the momentum of the particle (p), and is often given the name of "De


Broglie wavelength” or λDB. Mathematically, λDB can be expressed as follows, where γ is the correction from relativity for the mass of a particle moving close to the speed of light:



λDB = h/p = h/(γmv) = (h/mv)·√(1 − v²/c²) ≈ h/mv

The de Broglie wavelength λDB gets smaller and smaller as the momentum (p) increases. This property of matter enables electron microscopes and particle accelerators to perform their magic by focusing beams of particles into devices that can "see" much smaller dimensions than a wavelength of light. For electrons, their high charge-to-mass ratio, as well as their behavior as waves, allows them to be directed and focused onto matter to reveal smaller and smaller sizes as the electron energy increases. For particle accelerators, this effect continues, and the titanic energies within the accelerator create particles with smaller and smaller wavelengths for studying the structure of matter at its smallest scales. The equivalence of matter as both particle and wave, together with the observations of the particle nature of light, identified one of the central paradoxes of the quantum world. When we think of the behavior of matter, most of us imagine protons, neutrons, and electrons as little points or spheres of matter. And yet, on the smallest level, matter is much fuzzier and technically is both a particle and a wave simultaneously. The quantum world excels in mind-bending paradoxes and a brew of cognitive dissonance, and this wave-particle duality is a central tenet of quantum mechanics. Our intuition is formed in the macroscopic world, where baseballs are indeed round and hard objects. However, quantum mechanics tells us that atoms, molecules, and even larger systems can be characterized as "matter waves." Most of the things we see in the macroscopic world have such short wavelengths that they are effectively points of matter, but matter can still take on these more counterintuitive quantum characteristics at very low temperatures – forming a "Bose-Einstein condensate," entering the state of superconductivity seen in certain metals, or showing the superfluidity seen in liquid Helium at temperatures just a few degrees above absolute zero.
These macroscopic quantum states allow ordinary matter to show quantum behavior, such as superfluid flow or allowing electrical current to flow without resistance, known as superconductivity. Superconductors are helpful for building the ultrastrong magnets routinely used in medical diagnostic MRI machines and the CERN particle accelerator.
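The contrast between quantum and macroscopic objects is easy to quantify with the nonrelativistic form of the relation, λ = h/mv. The sketch below uses illustrative masses and speeds chosen by the editor, not values from the text, to compare an electron at a typical atomic speed with a thrown baseball:

```python
# Hedged sketch: de Broglie wavelengths in the nonrelativistic limit,
# lambda = h / (m v), for an electron and a baseball (illustrative values).

H = 6.626e-34  # Planck's constant (J s)

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Nonrelativistic de Broglie wavelength in meters (valid for v << c)."""
    return H / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.109e-31, 2.2e6)  # ~electron speed in Hydrogen
baseball = de_broglie_wavelength(0.145, 40.0)       # 145 g ball at 40 m/s

print(f"electron: {electron:.2e} m")  # around 3e-10 m, the size of an atom
print(f"baseball: {baseball:.2e} m")  # around 1e-34 m, hopelessly unobservable
```

The electron's wavelength comes out at the scale of the atom itself, which is why its wave nature dominates atomic physics, while the baseball's wavelength is some 24 orders of magnitude smaller than an atomic nucleus.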


In some ways, quantum mechanics resembles general relativity, since it enables a description of the universe at some of the more extreme scales of space and time. Quantum effects are mostly observable at small spatial scales and low temperatures and are challenging to observe in our everyday macroscopic life. One of the critical findings of quantum mechanics was that particles come in two families – which we now know as "fermions" and "bosons." The fermions (like protons, electrons, and neutrons) exist in only a discrete set of energy levels within atoms and nuclei. This quantization of energy gives rise to the spectral lines that astronomers have observed for centuries, and which astronomers use to measure the velocities of galaxies in the Hubble expansion. Quantization arises from the requirement that fermions (like electrons) cannot occupy exactly the same states, a property first described by Wolfgang Pauli and now called the "Pauli Exclusion Principle." If electrons (or any other fermions) were forced into the same state, they would be "degenerate," and the quantum nature of reality resists degeneracy for all the fermion particles, with effects that we can readily observe. The other family of particles, the bosons, includes the photons and the more famous Higgs Boson, and is not restricted by the exclusion principle.

11.4 The Heisenberg Uncertainty Principle

One quantum paradox that defies our macroscopic intuition is the Heisenberg Uncertainty principle. Heisenberg stated this principle concisely: "the more precisely the position is determined, the less precisely the momentum is known" (American Institute of Physics, 2020). Heisenberg discovered that as matter is studied in ways that confine its location, the motions of its particles increase in a way that defies precise knowledge of both quantities simultaneously. Heisenberg's principle can be stated very concisely with an equation:

Δx · Δp ≥ h/4π

This equation describes how the product of the uncertainty in position Δx and the uncertainty in momentum Δp of a particle will always exceed a minimum value. The quantity on the right side is Planck's constant h divided by 4π, which limits how precisely we can measure particles at the smallest scales. Because the product of Δx and Δp is always greater than a constant, a scientist can measure one variable or the other precisely, but not both simultaneously. Heisenberg also showed that the uncertainty in energy ΔE and the uncertainty in the duration of an interaction Δt cannot both be made arbitrarily small. For these variables, the Heisenberg Uncertainty principle is stated as follows:


ΔE · Δt ≥ h/4π

The second pair of variables, ΔE and Δt, comes into play in the lifetimes of excited states in atoms and of particles created in particle accelerators. Within particle accelerator experiments, the energy of an invisible particle can be found, and its uncertainty ΔE will show up as a width in an energy distribution (Hilgevoord & Uffink, 2016). This width is a measurement of the lifetime Δt of the particle; short-lived particles will have large widths of energy uncertainty, and vice versa. As we will see in a later section, quantum mechanics also allows the spontaneous production of short-lived particle–antiparticle pairs. These pairs live for a lifetime Δt that is inversely proportional to the mass of the particles, since the energy can be expressed as ΔE = mc² (Nave, 2019a). Another implication of the Heisenberg Uncertainty principle is that if a particle is "localized" – its position constrained to within a small tolerance – the particle will have an uncertain momentum. As a result, particles in high-density regions, such as the center of a white dwarf or a neutron star, are localized (with a small value of Δx), giving rise to large values of the momentum Δp. This form of momentum, generated by quantum mechanics, gives rise to what is known as "degeneracy pressure," which prevents the white dwarf or neutron star from collapsing. To avoid being forced into the same quantum state inside the star, which would make them "degenerate," the electrons occupy new states with higher momentum values. Subrahmanyan Chandrasekhar showed in 1930 that the increased motions of electrons in the cores of white dwarf stars, like Sirius B, prevented such stars from collapsing. This "degeneracy pressure" holds the star up and prevents it from converting into a neutron star or black hole until the star's mass exceeds the threshold of approximately 1.4 solar masses, known as the Chandrasekhar Mass.
Chandrasekhar won the Nobel prize for this discovery many years later, in 1983.
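Both uncertainty relations lend themselves to simple order-of-magnitude estimates. The sketch below is illustrative, not from the text: it applies Δx·Δp ≥ h/4π to an electron confined to an atom-sized region, and ΔE·Δt ≥ h/4π to infer a particle lifetime from an energy width (the 2.5 GeV width used here is roughly the measured width of the Z boson):

```python
# Hedged sketch: order-of-magnitude estimates from the two uncertainty relations.
import math

H = 6.626e-34           # Planck's constant (J s)
EV = 1.602e-19          # joules per electron volt
M_ELECTRON = 9.109e-31  # electron mass (kg)

def min_momentum_spread(dx_m):
    """Smallest momentum uncertainty (kg m/s) for a particle confined to dx."""
    return H / (4 * math.pi * dx_m)

def min_lifetime(width_ev):
    """Smallest lifetime (s) consistent with an energy width, dt >= h/(4 pi dE)."""
    return H / (4 * math.pi * width_ev * EV)

# An electron confined to an atom-sized region (~1 angstrom) must move at
# roughly 10^6 m/s -- the same scale as orbital speeds in the Bohr model.
dp = min_momentum_spread(1e-10)
print(f"v ~ {dp / M_ELECTRON:.1e} m/s")

# A resonance with a 2.5 GeV energy width implies a lifetime of order 10^-25 s.
print(f"dt >= {min_lifetime(2.5e9):.1e} s")
```

The same back-of-envelope localization argument, scaled up to the densities inside a white dwarf, is what generates the degeneracy pressure described above.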

11.5 The Schrödinger Equation

Erwin Schrödinger unified the competing descriptions of the particle nature of matter and its wave propagation in 1926 through an alternative formulation – an elegant wave equation now known as the Schrödinger Equation. Schrödinger called his new type of physics "wave mechanics," and it incorporated many ideas from Niels Bohr's earlier model of 1913. Schrödinger's equation included a solution for a single particle in empty space


that replicates de Broglie's wave behavior and could also describe electrons bound in atoms, such as in Bohr's model. Schrödinger also showed that the quantum jumps, as described by Heisenberg's matrix mechanics, could be explained as a wave resonance, and that what seemed to be competing theories provided the same results. Schrödinger developed his model as a quantum analog of how potential and kinetic energy are described in classical mechanics but added a new ingredient – the wave function ψ, which represents the probability density of a particle at any location in space. The Schrödinger Equation expresses the kinetic and potential energy of a particle in terms of its wave function ψ, in a way that provides overall energy conservation (Nave, 2019b). Mathematical "operators" act on the wave function ψ to provide a detailed mathematical description of quantum particles in terms of their probabilities in different parts of space. In the version below, the upside-down triangle is an operator that enables the calculation of the particle's kinetic energy. The second term is the value of the potential energy V in the region where the particle is located, and the right side expresses the particle's total energy.





2 2    V  E 2m

In empty space the Schrödinger equation simplifies to a simple wave equation and reproduces the "matter waves" that de Broglie used to explain the wave nature of particles. The key ingredient in the Schrödinger equation is the wave function, which gives the probability density of a particle being at a location at a given time. The function extends through all of space and varies depending on the nature of the operations or measurements made on it, as well as on the value of the potential energy V that arises from fields like electromagnetism. The Schrödinger equation can also reproduce the many "paradoxes" of quantum mechanics. The act of measurement is said to "collapse" the wave function into one of its states, which immediately changes the wave function. Depending on how the measurement is made, the system reveals either its particle nature or its wave nature, which explains the paradox of wave-particle duality: when matter is measured in a way that constrains it to act like a particle, it collapses into particle-like behavior, and when it is measured in a way that probes its wave nature, it behaves as a wave. The "fuzziness" of quantum particles is also explained, since an electron or a proton is not a point of hard matter but instead an extended wave of probability with higher density in the locations where we are likely to measure it. Particles can spread out or localize further depending on our measurements – in accordance with the Heisenberg Uncertainty principle.
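A concrete sense of how the Schrödinger equation quantizes energy comes from its simplest textbook solution, a particle trapped in a one-dimensional box (a standard exercise, not an example from this book's text). Requiring the wave function to vanish at the walls allows only standing waves with λₙ = 2L/n, which gives the allowed energies Eₙ = n²h²/8mL²:

```python
# Hedged sketch: allowed energies of a particle in a 1-D box of width L.
# Standing-wave solutions of the Schrodinger equation give E_n = n^2 h^2 / (8 m L^2).

H = 6.626e-34           # Planck's constant (J s)
M_ELECTRON = 9.109e-31  # electron mass (kg)
EV = 1.602e-19          # joules per electron volt

def box_energy_ev(n, width_m, mass_kg=M_ELECTRON):
    """Energy (eV) of the n-th quantum state of a particle in an infinite well."""
    return n**2 * H**2 / (8 * mass_kg * width_m**2) / EV

# An electron boxed into 1 angstrom (atomic size) has levels tens of eV apart,
# the same scale as real atomic spectra; the quantization emerges purely from
# demanding that the matter wave fit inside the box.
for n in (1, 2, 3):
    print(n, round(box_energy_ev(n, 1e-10), 1), "eV")
```

Only certain energies appear, and they grow as n², illustrating in miniature how imposing boundary conditions on the wave function produces the discrete "allowed" levels that Bohr had to postulate.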


Further discussion of quantum mechanics is outside the scope of this book. Still, solutions to the Schrödinger equation provide the basis for quantum chemistry, and they give rise to further quantum paradoxes, such as entangled wave functions. The entanglement of quantum wave functions also provides the basis for quantum computing, which offers the promise of vastly faster computers in the coming years, as a mathematical problem encoded in a quantum wave function can be solved nearly instantaneously as the wave function collapses within a quantum computer. Just as with Einstein's general relativity equations, finding exact solutions to the Schrödinger equation and working to understand its deeper meaning became the most important challenge. Yet Schrödinger's wave mechanics provided a much-needed synthesis that placed quantum mechanics on a firm theoretical footing.

References

American Institute of Physics. (2019). Marie Curie – Recognition and disappointment (1903–1905). Aip.org. https://history.aip.org/exhibits/curie/recdis2.htm
American Institute of Physics. (2020). Heisenberg/Uncertainty. History.aip.org. https://history.aip.org/exhibits/heisenberg/index.html
CERN. (2019). The proton, a century on. CERN. https://home.cern/news/news/physics/proton-century
Hilgevoord, J., & Uffink, J. (2016). The uncertainty principle. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/qt-uncertainty/#HeisRoadUnceRela
Jorgenson, T. (2017). Seek and you shall find: Radioactivity everywhere. In Strange glow – The story of radiation. Princeton University Press.
Kohlstedt, S. G. (2004). Sustaining gains: Reflections on women in science and technology in 20th-century United States. NWSA Journal, 16(1), 1–26. https://www.jstor.org/stable/4317032
Langevin-Joliot, H. (1998). Radium, Marie Curie and modern science. Radiation Research, 150(5), S3–S8. https://doi.org/10.2307/3579803
Nave, R. (2019a). Particle lifetimes from the uncertainty principle. In Hyperphysics. Georgia State University. http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/parlif.html
Nave, R. (2019b). Schrodinger equation. In Hyperphysics. Georgia State University. http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/schr.html
Ratcliffe, S. (Ed.). (2016). Ernest Rutherford 1871–1937 New Zealand physicist. In Oxford essential quotations (4th ed.). https://www.oxfordreference.com/view/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00009051
Segrè, E. (1980). From X-rays to quarks: Modern physicists and their discoveries (p. 51). W. H. Freeman.

12 Accelerators and Mapping the Particle Universe

After the initial discoveries that revealed the family of sub-atomic particles – the proton, the electron, and then the neutron – stronger and more focused sources of particles were needed for experiments to study the internal structure of nuclei and the fundamental particles themselves. The first accelerators were developed in Cambridge and Berkeley and could accelerate protons to excite and even break apart atomic nuclei. As the accelerators reached ever higher energies, they could explore previously unexplored frontiers in energy and in small spatial scales. Particle accelerators have grown from the humble beginnings of Lawrence's small 4.5-inch cyclotron in 1931 to the gigantic CERN Large Hadron Collider, with its 27-kilometer circumference and over 14 trillion electron volts of collision energy. The data from these experiments have caused a profound shift in our models of space and time, as the fundamental building blocks of matter shifted away from the familiar protons, neutrons, and electrons toward a complement of dozens of invisible particles. Some of these particles, like the quarks, are impossible to isolate, while others, like the elusive Higgs, Z, and W bosons, decay nearly instantly. This story begins, like many of the discoveries in astronomy and physics, with serendipity and surprise, followed by clever experiments that refined our knowledge into the exquisite models of quantum electrodynamics and quantum chromodynamics, which can now explain the structure of matter down to the smallest scales and the origins of our universe back to the earliest times.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-27890-7_12

12.1 Fundamental Particles and Families

By the early twentieth century it was known that the particles composing all the elements of the periodic table are protons, neutrons, and electrons, all of them fermions. Fermions are particles with half-integer "spin" that resist being placed in the same quantum state, a property known as the Pauli Exclusion Principle. Laboratory experiments soon revealed a fundamental difference between the electron, which has no internal structure, and the proton and neutron, which have a finite diameter and charge distribution. Electrons swarm in orbit around the atomic nuclei in our bodies and in all neutral atomic matter. Our experience of touch arises from the clouds of electrons in our fingers encountering the clouds of electrons in the objects we touch: tactile experience, at the physical level, is the electromagnetic repulsion between electron clouds, which creates the illusion of solid matter. The electron was discovered to be one of a family of particles known as the leptons. The leptons also include the muons, charged particles heavier than electrons, first detected in 1936 by Carl D. Anderson in "cosmic ray" experiments measuring particles arriving at Earth from space. The muon was entirely unexpected, prompting Nobel laureate I. I. Rabi to quip, "Who ordered that?" when informed of the discovery. Another mysterious particle, the neutrino, made its first appearance when Pauli and Fermi studied the details of the decay of the neutron into a proton and an electron, a process known as "beta decay." To conserve spin and energy, a new particle was needed, which Fermi named the "neutrino," meaning "little neutral one" in Italian. The neutrino provided a complete accounting of spin, energy, and charge in beta decay and was expected to be emitted by the trillions from the nuclear reactions in the Sun and the stars (Fig. 12.1).

Fig. 12.1  Diagram of the beta decay of a radioactive nucleus, in which a neutron converts into a proton and electron and emits an anti-neutrino to conserve energy and spin. (Image from https://upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Beta-minus_Decay.svg/640px-Beta-minus_Decay.svg.png)

The full description of beta decay, in which a neutron (n) decays into a proton (p) and an electron (e), can be written:

n → p⁺ + e⁻ + ν̄ + energy

Neutrinos are extremely difficult to detect, as they can pass through vast amounts of solid matter effortlessly. For example, a beam of neutrinos could traverse a light year of lead and lose only half its intensity! (Stacey, 2019). The neutrino has nevertheless been detected from the Sun, from nuclear reactors, and from particle accelerators, supernova explosions, and other high-energy events in space. Neutrinos are now classified as part of the lepton family, and each charged lepton (the electron, muon, and tauon) has been discovered to have an associated neutrino. Another fundamental particle, part of our everyday experience of vision, is the photon, a member of the class of particles known as bosons. Bosons, unlike fermions, have integer amounts of spin and transmit the four forces of the universe. The photons we see arise from the electromagnetic force and announce their presence through the electrical signals generated when our retina absorbs them. This deepens our debt to electromagnetism, since both our tactile and visual experiences arise from this single force. Tactile experience arises from electrons that transmit their force via photons, while vision comes from photons converted into electricity in our eyes and brains. Clearly the particle realm and electromagnetism are inseparable from our experience of the world!

12.2 Accelerators as Microscopes for the Particle Universe

Some of the most remarkable discoveries about our universe, its origins, and its long-term fate have been made not with telescopes but with particle accelerators. As with telescopes, our story begins small and then relies on generations of inventors and innovators to expand and improve the early models into titanic instruments of discovery. The first cyclotron was co-invented by Ernest Lawrence and Milton Stanley Livingston in 1931. It used an ingenious technique for increasing the energy of a small beam of particles held in a circular orbit by a magnetic field. The acceleration came from an alternating voltage timed precisely with the particle's orbit to boost the particles each time they crossed the accelerating gap. The initial cyclotron model

Fig. 12.2  (Left) The first cyclotron, made by Ernest Lawrence and M. Stanley Livingston, with a diameter of 4.5 inches, can be held in the palm of one's hand. (Right) Lawrence and Livingston in front of the 27″ cyclotron from 1932. (From https://bancroft.berkeley.edu/Exhibits/physics/bigscience02.html and https://commons.wikimedia.org/wiki/File:4-inch-cyclotron.jpg)

was only 4.5 inches in diameter and used a 1.8 kV alternating voltage to accelerate hydrogen ions to 80,000 electron volts, or 80 keV (Fig. 12.2). The energy of particles in accelerators is often expressed in a very convenient unit known as the electron volt, or eV. One electron volt is, as one might expect, the energy an electron gains when dropped through a potential of one volt. Wall current in the US is delivered at 110 V, so an ordinary wall outlet is capable of producing 110 eV electrons (even though the outlet does not focus the electrons and provides "thermal" electrons with a variety of energies). Since the energy of a charged particle crossing a voltage is given by E = qV, protons experiencing the same potential drop gain the same energy in eV as electrons. By repeatedly sending charged particles across a voltage drop, the cyclotron gives multiple kicks to each particle, so the final energies are much higher than the voltage drop alone would provide. Lawrence's first cyclotron used only 1800 volts, but the particles were boosted up to 80,000 eV through their repeated passes through the device. Lawrence soon developed more powerful cyclotrons whose energies exceeded millions of electron volts, allowing the beam to be focused on nuclei with sufficient energy to excite or even break up nuclear bonds. By 1940 there were 33 cyclotrons completed or under construction, 22 of them in the US. The largest of these, a 184-inch diameter model that could produce energies beyond 100 million electron volts, was sponsored by the Rockefeller Foundation, which at the same time had provided funding for the Palomar 200″ telescope (American Institute of Physics, 2022).
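As a back-of-the-envelope illustration (a sketch using the figures quoted above, not a calculation from the book), the number of accelerating kicks needed follows directly from the gap voltage:

```python
# Rough sketch: each gap crossing gives a singly charged particle an energy
# boost of q*V, so reaching 80 keV from a 1.8 kV voltage takes dozens of kicks.
gap_voltage_V = 1.8e3        # Lawrence's 1.8 kV alternating voltage
final_energy_eV = 80e3       # final beam energy of 80 keV

energy_per_kick_eV = gap_voltage_V     # gain of 1.8 keV per gap crossing
kicks = final_energy_eV / energy_per_kick_eV
print(round(kicks))          # → 44 gap crossings
print(round(kicks / 2))      # → 22 orbits (two gap crossings per orbit)
```

The same logic scales up: later machines simply arranged for many more, and much stronger, kicks.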

Critical insights into nuclear forces were obtained from these cyclotron experiments. The strange world of quantum physics assists in the process: our knowledge of the very small is fundamentally limited by the Heisenberg Uncertainty Principle, which limits the precision with which we can simultaneously know the position and momentum of a particle. Higher energy particles, such as those produced in a cyclotron, can be focused into very narrow beams with sufficient energy to resolve the smallest structures in the nucleus. Cyclotrons also enabled the discovery of dozens of new particles beyond the familiar protons, neutrons, and electrons. As in the case of telescopes, cyclotrons were refined and built with more precision and power through improved designs and larger-scale devices. One of the early refinements was the "synchrotron," independently invented in Russia by Vladimir Veksler in 1944 and in the US by Edwin McMillan in 1945. The synchrotron works like a cyclotron to accelerate charged particles in a circular path. However, instead of letting the particles spiral outwards as they gain energy, it uses a magnetic field of increasing strength to hold them at the same radius as they reach higher and higher energies. Precise corrections to the magnetic fields compensate for the increasing momentum of the particles as they approach the speed of light. The fixed radius improves the efficiency of the device and allows higher energies to be obtained with the same-sized device. McMillan also invented a modification of the design known as the synchrocyclotron in 1952, and by 1958 it was the dominant model. The synchrocyclotron adjusted both the magnetic fields and the frequency of the voltages to reach the maximum acceleration possible within the device's size.
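The timing trick behind these machines can be made concrete. In a non-relativistic cyclotron the orbital frequency f = qB/(2πm) does not depend on the particle's speed, so a fixed-frequency voltage stays in step with the orbit; as particles approach the speed of light this constancy breaks down, which is what the synchrocyclotron's frequency adjustment compensates for. A minimal sketch (standard textbook relation with an assumed field value, not a calculation from the book):

```python
import math

# Cyclotron resonance frequency f = qB / (2*pi*m) for a proton.
q = 1.602e-19    # proton charge, coulombs
m = 1.673e-27    # proton mass, kg

def cyclotron_frequency_hz(B_tesla):
    """Orbital frequency of a non-relativistic proton in a magnetic field."""
    return q * B_tesla / (2 * math.pi * m)

# In a 1 tesla field a proton circles about 15 million times per second,
# independent of its energy (until relativistic effects set in).
print(cyclotron_frequency_hz(1.0) / 1e6)   # ≈ 15.2 MHz
```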
Fueled partly by the Cold War urge to compete in all areas of science, particularly those with applications relevant to space and nuclear war, these large devices were built at Berkeley and Brookhaven in the US, at CERN in Switzerland, and at the Protvino Institute for High Energy Physics in the former Soviet Union, or USSR. With these machines, energies of many billions of electron volts could be obtained. Even though these monster devices were 10,000 times more powerful than Lawrence's first cyclotron, they used the same basic mechanism: protons held in a circular path by a magnetic field, pushed along by an alternating electric field on each circuit, and drawn outwards from the center of the device to the edge as they gained energy (Wilson, 1958). With these powerful devices, physicists invented what amounts to a microscope for the subatomic world. The resolving power of a microscope depends on the shortest wavelength of light used to measure a sample. A typical optical microscope is therefore limited in resolution to sizes

Fig. 12.3  Highest magnification electron microscope image ever taken, showing individual atoms using electrons in a technique called electron ptychography, which allows for magnification of over 100 million and resolution on the scale of 1 trillionth of a meter. (From Blaustein, 2021; Image Credit: Cornell University)

comparable to the wavelength of visible light, about 0.5 μm, or half a millionth of a meter. High energy electrons can be accelerated and focused to build an electron microscope able to resolve details smaller than the wavelength of light. Electrons accelerated through many thousands of volts can resolve features as small as a billionth of a meter in the best electron microscopes. A typical scanning electron microscope, or SEM, has an electron beam energy of 1–40 keV and can resolve features in the range of 1–20 nm (McMullan, 2006). The best electron microscopes can now see individual atoms in solids but require high energies and advanced electron "optics" to provide electron wavelengths short enough to see the atoms, which are on the scale of 0.1 nm, over 5000 times smaller (Blaustein, 2021) (Fig. 12.3). To see even smaller details, we need to create even shorter "matter wave" wavelengths, which require higher energies or more massive particles. At about 1 MeV, the proton wavelength is approximately 10⁻¹⁴ m, which is close to the diameter of an atomic nucleus (10⁻¹⁵ m). As accelerators improved, they could reach higher energies and probe even smaller spatial dimensions. Some of the largest early synchrotrons include the Cosmotron (1952) at Brookhaven, which reached a beam energy of 3 GeV; the Bevatron (1954) at Berkeley, which reached 6.2 GeV; and another synchrotron at Brookhaven, known as the Alternating Gradient Synchrotron or AGS (1961), which reached 33 GeV. The USSR synchrotron, known as the U-70, reached an energy of 76 GeV by 1967, and just as with the space race, the USSR and USA raced to build ever more powerful devices throughout the Cold War period.
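The electron-microscope wavelengths discussed above can be checked with the non-relativistic de Broglie relation λ = h/√(2mE). A minimal sketch (standard constants; the 40 keV beam energy is the top of the SEM range mentioned above):

```python
import math

h = 6.626e-34     # Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg
eV = 1.602e-19    # joules per electron volt

def electron_wavelength_m(energy_eV):
    """Non-relativistic de Broglie wavelength of an electron."""
    return h / math.sqrt(2 * m_e * energy_eV * eV)

# A 40 keV SEM electron has a wavelength of roughly 6 picometers -- far
# below the 1-20 nm resolution, which is limited by the electron optics.
print(electron_wavelength_m(40e3))   # ≈ 6.1e-12 m
```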

Nuclear physics with synchrotrons also allowed for studying the "excited states" within nuclei. Like atoms, atomic nuclei have quantized energy levels and can be excited, with strong enough collisions, to release radiation. Unlike atoms, the powerful bonding forces of nuclei require millions of times more energy to create the excitations, and the resulting photons are also millions of times more energetic, appearing as X-rays and gamma rays instead of optical light. Variations in the binding force within nuclei enabled the development of models for the strong force, the "glue" that holds nuclei together. However, seeing deeper into the nature of matter required the development of larger particle accelerators that could reach hundreds of billions of electron volts of energy.

12.3 Peering Inside the Fundamental Particles

A critical stage in the evolution of early particle accelerators was the Mark III accelerator at Stanford, which ultimately evolved into the Stanford Linear Accelerator Center. In 1949 Leonard Schiff of Stanford wrote a white paper describing possible experiments that could be done with a future device he called the Mark III linear accelerator. This device would accelerate electrons and other particles using microwaves, creating very intense, higher energy beams for probing the structure of nuclei and even determining whether the proton and neutron had internal structure. Because of their lower mass, electrons reach higher speeds than protons at the same voltage, but unlike the protons that were the primary particle source for synchrotrons, electrons in a circular machine lose much of their energy by radiating away "synchrotron" emission at X-ray wavelengths. To reach higher energies with electrons, it was necessary to keep them moving in a straight line, which was the design principle of several of the linear accelerators built in California during the 1960s. These techniques were first applied by Robert Hofstadter at Stanford University in the 1950s. Hofstadter focused electrons on atomic nuclei to peer inside the nucleus, seeing hints of internal structure and measuring the magnetic moment of the proton, work that earned him the Nobel Prize in 1961. The Mark III experiments were so successful that they were used to justify an even larger accelerator project, which came to be known as SLAC, the Stanford Linear Accelerator Center. An initial proposal for SLAC was written in 1957, and by 1962 construction began in the hills above Stanford. The SLAC experiment brought together teams of physicists from MIT, Caltech, Stanford, and several international universities to conduct

experiments with the accelerated electrons and with the synchrotron radiation emitted by electrons as they moved through curved sections of track. SLAC also experimented with the antimatter counterpart of the electron, the positron, which could be generated in large numbers in collisions. By 1966, the new Stanford Linear Accelerator could generate 8 billion electron volts and began to acquire data that showed anomalies in the scattering of electrons from protons. As SLAC looked inside the proton and the neutron, there was a surprise: smaller sub-particles were inside the protons, which had been thought to be fundamental particles. In 1968, Richard Feynman and James Bjorken studied the data and concluded that the proton had small sub-components, which they called "partons," that could explain the measured scattering of electrons. Further study of the data showed three of these partons inside the proton, giving us the first glimpse of what later became the quark theory of matter. Experiments with SLAC were able to measure the structure within the proton, as the higher energies of the electrons created more localized waves of matter. The de Broglie wavelength of an electron at the energy of SLAC was much smaller than the diameter of a proton. For example, as SLAC and other particle accelerators soon crossed the 100 GeV threshold, the corresponding de Broglie wavelength of a 100 GeV electron is 0.73 × 10⁻¹⁷ m, or about 1/100 of the diameter of a proton.
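For an ultrarelativistic electron, the simple estimate λ ≈ hc/E gives a wavelength of the same order as the figure quoted above. A minimal sketch (standard constants, not the book's own calculation):

```python
hc_eV_m = 1.24e-6        # Planck constant times the speed of light, in eV*m

def ultrarelativistic_wavelength_m(energy_eV):
    """de Broglie wavelength when E >> rest energy, so E ≈ p*c and λ = hc/E."""
    return hc_eV_m / energy_eV

lam = ultrarelativistic_wavelength_m(100e9)   # a 100 GeV electron
print(lam)               # ≈ 1.2e-17 m
print(1.7e-15 / lam)     # a proton's ~1.7e-15 m diameter is over 100x larger
```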

12.4 The Particle Zoo

By the time of the discovery of "partons," accelerator experiments had already detected new particles known as mesons, first in the form of the pion in 1950 and later, as accelerator energies increased, in the form of the K mesons or kaons. As the energy of collisions within accelerators increases, each collision provides a reservoir of energy from which a wide variety of exotic and short-lived particles can be created. The beam energy sets the limit on the mass of the particles that can be created, via the familiar equation E = mc². Most of the particles were unstable, yet they could be identified from their tracks within bubble chamber photographs, which recorded the trajectories of the charged particles as they curved in a magnetic field. From careful detective work, the charge and mass of the particles could be obtained from these traces, even if the particle lasted for less than a millionth of a second. Soon the number of newly discovered particles seemed unlimited, and a theory was needed to classify them and explain their origins. Richard Feynman, in a 1974 paper entitled "Structure of the Proton," described how at 140 MeV,

Fig. 12.4  Particles that could be described and predicted with Gell-Mann's SU(3) symmetry group, known as the Eightfold Way, formed geometric patterns. On the left is an octet of baryons predicted within the model, which includes the familiar neutron and proton at top, along with other spin-½ baryons that we now know contain three quarks. On the right is the "decuplet" of ten spin-3/2 baryons predicted by the model. The prediction of the Ω⁻ baryon, at the bottom of the triangle, was a triumph of the model and helped earn Gell-Mann the 1969 Nobel Prize. (Images from https://en.wikipedia.org/wiki/Quark_model)

accelerator experiments could produce pions in three varieties, with −1, 0, or +1 charge; at 494 MeV, the kaons could be produced in four types; and at 1190 MeV, the particles known as sigmas could be produced. Feynman wrote, "and soon we run out of names," so that "today we have cataloged over 300 species of ever-increasing mass and expect the true number is unlimited" (Feynman, 1974). This proliferation of subatomic particles is sometimes called the "particle zoo" and includes hundreds of particles that can be arrayed into different families, perhaps suggesting a more fundamental reality (Fig. 12.4). As the number of discovered particles grew, schemes were developed to organize them into groups corresponding to mathematical models. The "baryons" were massive particles that included not only the proton and the neutron but also over a dozen newly discovered particles, some with different spins than the proton. Murray Gell-Mann created a classification known as the "Eightfold Way," which arranged these many particles into geometrically appealing groups that coincided with a mathematical structure known as the SU(3) symmetry group. The name provided an intriguing reference to the eightfold path in Buddhism, a prescription for enlightenment. Gell-Mann's system grouped the baryons with the same spin as the proton into an octet, classifying the particles using quantum numbers for charge and spin along with a new quantity known as "strangeness." Gell-Mann invented the new quantum number for strangeness in 1953 and observed that it was

Fig. 12.5  Image of the discovery of the Ω⁻ particle in 1964, shown at the lower part of the figure, using the 80-inch bubble chamber at Brookhaven, fed by the 30-GeV synchrotron. The bubble chamber provides tracks for particles created in collisions. The Ω⁻ decays into several other particles, such as the Ξ, Λ, π, a photon (γ), and a proton (p). The masses and charges of the particles can be determined from the curvature of their tracks within the large magnetic fields of the bubble chamber, allowing for the measurement of the charge and mass of the Ω⁻. (Image from http://www.hep.fsu.edu/~wahl/satmorn/history/Omega-minus.asp.htm)

conserved in all particle interactions through the strong and electromagnetic forces (Beiser, 1987). Gell-Mann's model also had predictive power, anticipating new particles from symmetry arguments. One such particle was the Ω⁻ baryon, which had a spin of 3/2 and a negative charge. This particle was discovered in an experiment at Brookhaven National Laboratory in 1964 and was part of the reason for Gell-Mann's 1969 Nobel Prize. The Ω⁻ was identified from its decay particles through tracks generated in a "bubble chamber" at the 30 billion electron volt Alternating Gradient Synchrotron at Brookhaven, and it had the charge and mass that Gell-Mann's model predicted (Smithsonian, 2022) (Fig. 12.5). Among the particles within the "zoo" were dozens of mesons that came in various charges and masses. To explain the many mesons, one model of the proton posited a core of charge at the center surrounded by a cloud of mesons.
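The bubble-chamber detective work described above rests on a simple relation: a particle of charge q and momentum p moving through a magnetic field B follows a circle of radius r = p/(qB). A minimal sketch (textbook relation, with hypothetical track values rather than figures from the book):

```python
e = 1.602e-19            # elementary charge, coulombs
GeV_over_c = 5.344e-19   # 1 GeV/c of momentum, expressed in kg*m/s

def momentum_from_track(radius_m, B_tesla, charge=e):
    """Momentum (kg*m/s) inferred from a track's radius of curvature, p = q*B*r."""
    return charge * B_tesla * radius_m

# A hypothetical singly charged track curving with a 0.5 m radius in a 2 T field:
p = momentum_from_track(0.5, 2.0)
print(p / GeV_over_c)    # ≈ 0.3 GeV/c
```

This is the origin of the accelerator physicists' rule of thumb p[GeV/c] ≈ 0.3 B r.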

Observations of the proton at SLAC suggested that three sub-particles might lie within the proton, provisionally named the "partons." These sub-particles were later renamed the "quarks," and they appeared to carry fractional electric charges of +2/3 and −1/3 of the electron charge. The discovery of the quarks provided a new set of more fundamental particles that replaced the proton and neutron as the fundamental building blocks of matter. As Feynman put it, "The proton, then, no longer appears particularly fundamental, but is just one particle of a vast number, the one that happens to be most stable." The proton was shown to be just one member of a large new family, known as the hadrons, that are made of quarks. The hadron family includes the proton and neutron, along with the many other baryons and mesons produced in accelerator experiments (Dodd & Gripaios, 2020, p. 131). Aside from the proton and the neutron, all the baryons and mesons are unstable, requiring particle accelerators to catch glimpses of these transient particles before they decay. Murray Gell-Mann developed the modern quark model of matter as a follow-up to his earlier "Eightfold Way" classification scheme. In the quark model, quarks come in multiple varieties; at the time of Gell-Mann's 1969 Nobel Prize, there were three known types, the up, the down, and the "strange" quark. In Gell-Mann's model, a quark has a charge of either −1/3 or +2/3, a baryon number of 1/3, and a spin of ½, which makes the quarks fermions. We now know there are six varieties of quarks in three generations: the up and down, the charm and strange, and the top and bottom quarks. In this model, a quark can be described using some of the usual quantum numbers we use for other particles, along with some new quantum numbers as well.
These numbers include "strangeness," "charm," "bottomness," and "topness," which combine with the baryon number to give a quantity called hypercharge. Like other nuclear particles, the quarks also carry isospin, and quark states can be classified by the combination of hypercharge and the z-component of isospin. The quarks can also be grouped in terms of the quark generations and a new quantity known as "color," which provides three states for each quark. By allowing for color, the triplets of quarks found in our familiar protons and neutrons do not violate the Pauli exclusion principle, since each quark is in a different color state. As a few examples, it is possible to build a proton by combining two up quarks and one down quark, which adds an electric charge of +2/3 for each up quark and −1/3 for the down quark, giving the proton a net charge of +1. A neutron can be built from two down quarks and one up quark, yielding a charge of 0. Likewise, a positively charged pi meson, or pion, can be made from an

up quark and a down antiquark, giving +2/3 and +1/3 charge, or a net charge of +1. These combinations of quarks that build some familiar baryons and mesons can be written concisely using the notation below:

quark recipe for a proton: |p⟩ = |uud⟩

quark recipe for the neutron: |n⟩ = |udd⟩

quark recipe for a π⁺ meson: |π⁺⟩ = |ud̄⟩
A concise diagram of the generations of quarks, their color charges, and their electrical charges can be drawn in the three-dimensional plot shown below. Alongside the three generations of quarks are three generations of leptons, each pairing a charged lepton, the electron, muon, or tauon, with its associated neutrino. Like the quarks, the leptons appear to be truly fundamental particles. Experiments at CERN are still searching for additional generations of quarks, which could potentially be created in the higher energy collisions at CERN or revealed more indirectly by cosmic rays or other laboratory experiments (Fig. 12.6).
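The charge bookkeeping in the quark recipes above can be checked mechanically. A minimal sketch (standard quark charges, consistent with the values quoted in the text):

```python
from fractions import Fraction

# Fractional electric charges of the first-generation quarks and antiquarks.
CHARGE = {
    "u": Fraction(2, 3),      # up quark
    "d": Fraction(-1, 3),     # down quark
    "ubar": Fraction(-2, 3),  # anti-up
    "dbar": Fraction(1, 3),   # anti-down
}

def hadron_charge(quarks):
    """Net electric charge of a hadron from the sum of its quark charges."""
    return sum(CHARGE[q] for q in quarks)

print(hadron_charge(["u", "u", "d"]))   # proton  → 1
print(hadron_charge(["u", "d", "d"]))   # neutron → 0
print(hadron_charge(["u", "dbar"]))     # pi+     → 1
```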

Fig. 12.6  Gell-Mann’s model for quarks groups them into three “generations” with a combination of quantum numbers that provide for a separation of the quarks into states that allow them to regroup in pairs or triples to form the particles discovered in accelerator experiments. (Adapted by the author from http://atlas.physics.arizona. edu/~shupe/Indep_Studies_2015/Notes_Goethe_Univ/B11_QuarkModel.pdf)

Fig. 12.7  Schematic of a particle collision experiment, in which the proton (top left), consisting of two up quarks and one down quark, is excited and creates a meson and a baryon known as the Λ particle. Inputs of energy from particle collisions can excite ordinary particles like protons and create new short-lived particles: the mesons, which contain two quarks, or any of the many baryons, which, like the proton, contain three quarks. (Image from US Department of Energy, https://science.osti.gov/np/Highlights/2015/NP-2015-08-b)

All of the nuclei in our universe, which build the stars, the planets, and the atoms we are made of, contain just two kinds of quark, named the "up" and "down" quarks. Just as all the words of our language can be produced from the letters of the alphabet, all the mesons and baryons can be created from combinations of two or three quarks, respectively (Fig. 12.7). Even though the quarks are on a firm footing theoretically and experimentally, they have never been observed in isolation. Even more peculiar is the fact that quarks resist being pulled out of baryons with a force that has been measured to increase as the distance increases, unlike forces such as gravity and electromagnetism, which weaken with the square of the distance. This force is carried by particles named "gluons," and models for the gluon force required the special quark charge known as "color." The new field of Quantum Chromodynamics (QCD) was developed to explain quark behavior. QCD joined the mature theory of Quantum Electrodynamics, or QED, invented in 1928 by Paul Dirac to incorporate relativity into quantum mechanics.

12.5 The Next Generation of Particle Accelerators – Tevatron and LHC

In the later part of the twentieth century, even larger accelerators at Fermilab and CERN grew in size and energy to surpass a trillion electron volts and, in recent years, have reached 13.6 trillion electron volts. Accelerators of this size were needed to raise the energy of collisions to the point where they could create the massive new particles predicted by theory and, in their collisions, create clumps of energetic particles and anti-particles like those which existed in the early universe. The giant accelerators built at Fermilab and CERN opened new energy horizons and increased the mass of the particles that could be created in their collisions. These particle accelerators became the largest scientific instruments ever built, their beam tunnels extending for tens of kilometers underground. In the US, the Fermilab accelerator near Chicago ran along a circular underground track 6.3 km in circumference. It combined several stages of acceleration to bring protons in its large circular tunnel to 300 GeV of energy by 1972. The Fermilab Tevatron reached 1 TeV, or one trillion electron volts, of beam energy by 1982, when it was the world's most powerful particle accelerator. It operated until 2011, and now Fermilab's beam is used for other experiments, including research with neutrinos. The CERN accelerator complex grew in parallel with Fermilab, beginning with the 7 km circumference Super Proton Synchrotron (SPS) in 1976, located in Meyrin, Switzerland, which is still the headquarters for much of CERN's operations. By the late 1970s, CERN's SPS could reach 500 GeV, and plans soon began for the largest accelerator in the world. This accelerator was the Large Electron Positron Collider, or LEP, constructed between 1985 and 1988 and housed in a 27-kilometer circumference tunnel. The project was Europe's largest civil engineering project before the Channel Tunnel.
The LEP was designed to exceed 91 GeV so that Z bosons could be created, and it began its work as a "Z factory" in 1989. After many years of successful operations, the LEP was dismantled in 2000 to make way for the CERN LHC, which used proton collisions to attain even higher collision energies than the LEP. The LHC was built within the LEP tunnel, and the first LHC collisions were observed in 2009 (CERN, 2022a) (Fig. 12.8).

Fig. 12.8  Schematic of the CERN accelerator, which provides a 26.7 km underground track that sends beams of protons in opposite directions for head-on collisions at 99.9999991% of the speed of light, generating 14 trillion electron volts of collision energy. The CERN track alternates between locations in Switzerland and France and includes four collision zones bristling with scientific instruments, labeled on the figure as LHCb, CMS, ATLAS, and ALICE. (Figure courtesy of CERN at https://lhcb-outreach.web.cern.ch/detector/)

12.6 How CERN Works

Colossal accelerators like CERN boost their particle beams through a series of stages, each a modern incarnation of earlier particle accelerator technology, in a relay of ever higher energies. The stages of a particle within CERN also retrace something of the history of particle physics: a proton's journey goes from an electrostatic accelerator to a basic linear accelerator to a synchrotron before it enters the main storage ring. The Fermilab accelerator began with a relatively simple static electricity generator known as a Cockcroft-Walton generator, used until 2012, which brought the particles (typically protons) up to energies of about a million electron volts. The CERN accelerator also used a Cockcroft-Walton generator until 1993, when it was replaced by a radiofrequency device known as LINAC4. The LINAC4 device at CERN is an 86-meter-long linear accelerator which, much like SLAC, uses radiofrequency waves to boost particle energies as they travel through radiofrequency cavities. The CERN LINAC4 can reach energies of 160 MeV.

After the first stage, the particles are fed to a booster ring known as the Proton Synchrotron Booster and then to an even larger synchrotron, the Super Proton Synchrotron or SPS, which brings the particles up to energies that were once the highest attainable. For CERN, the SPS provides particles at an energy of 450 GeV. Finally, the particles are relayed to the much larger "main ring," a vast evacuated tube that gently curves through many miles of terrain at both Fermilab and CERN. The particles are focused by magnets and race through the main ring at 99.9999991% of the speed of light until they collide with either a fixed target or an opposite-moving beam. In the main ring, the particles are boosted by microwave stations generating powerful electromagnetic fields (400 MHz), timed precisely to pull them through the microwave chamber and give them an extra kick as they approach their final energy. For each of the beams at CERN, this energy is 6.5 TeV (CERN, 2019b) (Fig. 12.9). The LHC uses the older SPS synchrotron to provide the final boost to the particle beam before it is introduced into the giant LHC ring. The larger LHC ring is 26.7 km in circumference and extends across a vast expanse surrounding the city of Geneva, crossing the French and Swiss border

Fig. 12.9  The accelerator complex at CERN, showing the various stages of boosting energy for the particles, that includes an initial Proton Synchrotron and additional storage rings for accumulating antimatter and creating larger numbers of particles for experiments. The Super Proton Synchrotron produces protons at energies of 450 GeV, which are then relayed to the main LHC ring, which extends over a 26.7 km circumference under Switzerland and France. (Figure by the author based on diagrams at CERN site)

12  Accelerators and Mapping the Particle Universe 

241

4 times. The enormous circular tunnel is buried 100 meters below the French and Swiss countryside and contains the world’s largest ultrahigh vacuum chamber. The entire ring is held to a nearly perfect vacuum, at less than one trillionth of the Earth’s atmospheric pressure  – or ten times less than the Moon’s atmospheric pressure. The vacuum is needed to prevent any collisions of the particles with stray air molecules during their many journeys around the ring as they are accelerated. The high-energy protons in CERN make over 11,000 trips per second through the entire 26.7 km circumference, as they are boosted to speeds only 1/1000 of a percent less than the speed of light. The large and gently curving circular path is necessary due to the limited strength of the magnets for bending the particle trajectories, and it also limits the energy lost to synchrotron radiation as the protons race around the ring. A series of over 1232 dipole electromagnets use superconducting coils chilled to 1.9 degrees above absolute zero to generate 8.3 Tesla of magnetic field and to keep the racing protons in the underground circular track. A set of 474 quadrupole magnets keep the beams focused. By comparison, the large electromagnets seen at medical MRI facilities have a strength of about 1.5 Tesla. CERN has over 1000 such magnets, each over four times stronger! Two beams of protons circulate within the CERN LHC in opposite directions, each containing nearly 7 TeV of energy, resulting in a head-on collision of the beam that doubles the single beam’s energy. The current LHC is operating a new round of experiments at the time of this book’s publication has produces a collision energy of 13.6 TeV and an increased beam intensity that includes nearly 3000 proton bunches per beam, which provides over 1 billion collisions per second (CERN, 2020). 
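The beam parameters quoted above can be checked with a few lines of arithmetic. This is a rough sketch using standard constants and the 6.5 TeV per-beam energy and 26.7 km circumference given in the text (the exact ring length of 26,659 m is an assumed value):

```python
import math

# Check the LHC beam kinematics quoted in the text: Lorentz factor, speed,
# and revolutions per second for a 6.5 TeV proton in a 26,659 m ring.
E_REST_GEV = 0.938272      # proton rest energy, GeV
E_BEAM_GEV = 6500.0        # per-beam energy, GeV
RING_M = 26_659.0          # LHC circumference, meters (assumed exact value)
C = 299_792_458.0          # speed of light, m/s

gamma = E_BEAM_GEV / E_REST_GEV            # Lorentz factor, ~6900
beta = math.sqrt(1.0 - 1.0 / gamma**2)     # v/c, extremely close to 1
revs_per_second = beta * C / RING_M        # laps of the ring per second

print(f"gamma = {gamma:.0f}")
print(f"1 - v/c = {1.0 - beta:.1e}")            # ~1e-8 of the speed of light
print(f"revolutions per second = {revs_per_second:.0f}")  # matches 'over 11,000'
```

The result, about 11,200 revolutions per second, agrees with the “over 11,000 trips per second” figure in the text.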
The amazing thing about CERN is its combination of scales: it is as immense as any machine humans have ever constructed, producing the highest-energy particles yet made by any accelerator, and yet its precise tolerances enable the measurement of the smallest dimensions of space ever recorded. In this regard, it resembles the great telescopes at Mount Wilson, Palomar, and JWST, which have pushed engineering to its limits in both immense size and exquisite precision. The CERN LHC measures collisions between oppositely directed protons, and in other research, CERN accelerators also routinely measure collisions between protons and antiprotons, the antimatter counterpart of the proton. Since antimatter has the opposite charge from matter, it responds to magnetic fields in the opposite direction and can be created and sent into the accelerator for head-on collisions with matter. Antimatter was first predicted by Paul Dirac in 1928 and is now routinely produced and used in experiments.

CERN has built an “antimatter factory” that isolates large amounts of antimatter for multiple experiments. The antimatter is first isolated using the Antiproton Decelerator: a proton beam from the Proton Synchrotron is fired at a block of metal, and the Antiproton Decelerator gathers the antiprotons among the collision fragments and stores them in deceleration rings, where they can be captured by magnets and combined with positrons to create “anti-atoms” of antihydrogen. These devices produced the first antihydrogen atoms in 2002; by 2011 they had trapped antihydrogen atoms for 16 minutes, and in 2012 a measurement of the antihydrogen spectrum was even taken from a cloud of anti-atoms. The antimatter can be stored in magnetically focused beams in a complete vacuum to isolate it from matter. It can then be placed into a battery of experiments to test whether the laws of physics are indeed symmetric for matter and antimatter. One experiment, known as AEgIS, tests whether the force of gravity is the same for antimatter and matter by measuring the rate at which a beam of antihydrogen falls during its horizontal flight. Another experiment, known as GBAR, creates antihydrogen, cools the antihydrogen cloud to microkelvin temperatures, and drops the atoms from a height of 20 cm. The timing of the fall is recorded by looking for the gamma rays released when the antimatter reaches the end of its 20 cm fall; these gamma rays are produced because antihydrogen (like all antimatter) annihilates into a burst of gamma rays when it comes into contact with matter! (CERN, 2019a)

Accelerators like Fermilab and CERN employ counter-rotating beams to more than double the energy of the collisions by providing head-on collisions between the two beams. These head-on collisions are far more effective at producing new particles and help detectors capture the many decay fragments that burst out from the collisions.
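The GBAR timing measurement described above can be estimated classically. A minimal sketch, assuming antihydrogen responds to gravity exactly like ordinary matter:

```python
import math

# Classical free-fall time for a GBAR-style 20 cm drop, assuming antihydrogen
# falls under ordinary gravity (the hypothesis the experiment is testing).
g = 9.81    # gravitational acceleration, m/s^2
h = 0.20    # drop height, m

t_fall = math.sqrt(2.0 * h / g)   # from h = (1/2) * g * t^2
print(f"free-fall time for 20 cm: {t_fall:.3f} s")
```

A drop of 20 cm under normal gravity should take about 0.2 s, so the gamma-ray flash marking annihilation is expected roughly a fifth of a second after release.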
The figure below shows the difference: when a beam collides with a stationary target, the spray of new particle fragments must move away in the same direction as the impact to conserve momentum, carrying much of the energy with it. With a head-on collision, the net momentum at the collision site is zero, so far more of the energy is available to create new particles, and the products are easier to measure since they move more slowly away from the collision site (Dowell, 1984) (Fig. 12.10). The CERN accelerator directs the counter-rotating beams of protons to one of four collision sites, which house four different sets of instruments that capture the fragments of the protons as they spray in all directions. Exquisitely calibrated detectors measure the tracks of the particles and help evaluate the masses, charges, and other properties of the thousands of fragments that emerge from the collisions.

Fig. 12.10  The advantage of having two beams collide head-on. The top figure shows how the collision fragments of a single beam impacting a fixed target produce a spray of fragments that carry away energy through the conservation of momentum. In contrast, the lower figure shows how a head-on collision between two protons can double the energy of the impact to create new particles, which by momentum conservation also remain closer to the collision site. (Figure by the author, based on Dowell, 1984)

Each of the four collision sites contains separate instruments developed by hundreds of scientists worldwide. These instruments are the CMS, ATLAS, ALICE, and LHCb experiments. The Compact Muon Solenoid (CMS) experiment serves as a general-purpose detector. It is surrounded by a huge solenoid magnet that generates 3.8 Tesla from huge superconducting coils, and its metal yoke makes this detector one of the largest in the world, at 14,000 tons. CMS includes six different kinds of detectors layered concentrically to measure the momentum and energy of the particles in collisions. The ATLAS detector is the largest-volume detector, a cylinder 46 meters long and 25 meters in diameter. Since over a billion particle interactions occur every second within ATLAS and CMS, these detectors require incredible data-gathering and analysis speed to keep track of the events. The ALICE experiment works with large ions, such as lead nuclei, which can be accelerated and slammed together to create a blob of quarks and gluons, reproducing conditions similar to the first microsecond of the early universe. Studying this quark-gluon plasma helps constrain the physics of the early universe, and the ALICE experiment can measure temperatures over 100,000 times hotter than the center of our Sun. Finally, the LHCb experiment investigates the slight asymmetry between matter and antimatter, which can be revealed by studying the “b” quark. The 5600-ton, 21-meter-long LHCb detector catches particles thrown forward from the collision site (Fig. 12.11).
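The advantage of head-on collisions can be made quantitative. In special relativity, the usable (center-of-mass) energy of a fixed-target collision grows only as the square root of the beam energy, while for a collider the two beam energies simply add. A short sketch, using the proton mass and the 6.5 TeV beam energy quoted in the text:

```python
import math

# Center-of-mass energy: fixed target vs. head-on collider (natural units, GeV).
M_P = 0.938272        # proton rest mass energy, GeV
E_BEAM = 6500.0       # one LHC beam, GeV

# Fixed target: sqrt(s) = sqrt(2 * m * (E + m)), beam proton on a proton at rest.
e_cm_fixed = math.sqrt(2.0 * M_P * (E_BEAM + M_P))

# Collider: two equal beams colliding head-on simply add their energies.
e_cm_collider = 2.0 * E_BEAM

print(f"fixed target:  {e_cm_fixed:.1f} GeV")     # only ~110 GeV usable
print(f"head-on beams: {e_cm_collider:.1f} GeV")  # full 13,000 GeV usable
```

For a 6.5 TeV beam, a fixed target yields only about 110 GeV of usable energy, while the head-on configuration yields the full 13 TeV, more than a hundredfold gain.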


Fig. 12.11  The four CERN detectors, one at each collision site, clockwise from the top left: the ATLAS, CMS, ALICE, and LHCb experiments. Each experiment is colossal in scale, like CERN itself, and can record the billions of collisions per second generated within the LHC. (Figures courtesy of CERN. https://home.cern/science/experiments/lhcb; https://home.cern/science/experiments/cms; https://atlas.cern/Discover/Detector; https://home.cern/science/experiments/alice)

12.7 The Discovery of the W and Z Bosons

The W and Z bosons are among the many exciting particles created at CERN. The standard model of physics includes bosons for each of the four forces in the universe. Electromagnetic fields are mediated by the photon, the most familiar boson of our experience, which we know as light. The other three forces of the universe – the weak force, the strong force, and gravity – also involve the exchange of boson particles. The weak force governs the decay of neutrons into protons and electrons through beta decay and has two associated bosons. The W− boson, negatively charged and massive, is a messenger for the weak force and plays a crucial role in determining the lifetimes of radioactive elements that decay through beta decay. The other boson, the Z⁰, was predicted as the mediator of “neutral current” reactions, providing a mechanism for electrons to interact with neutrinos.


The theoretical groundwork for the W and Z bosons goes back to the gauge theory of Yang and Mills in 1954. Since the W and Z bosons were expected to be very massive, high energies were needed to produce them: since E = mc², a larger particle mass m requires a greater energy E to create the particle. The best models predicted a mass in the range of 80–90 GeV/c², and so both Fermilab and CERN were upgraded to produce the W and Z particles within collision fragments. It took several decades to develop the advanced technology needed to verify the theoretical predictions of the Z boson and the neutral current reactions, and the first production of the W− boson at CERN did not come until 1983 (CERN, 2022b).

The Z and W bosons were not only interesting in themselves; their lifetimes revealed a subtle detail about the universe that constrained the number of families of quarks and leptons. Like many particles, the Z boson is unstable and decays into smaller particles, a mix of quarks and leptons that includes all the existing neutrino families. A particle’s decay rate is faster if more decay modes are available, so measurements of the lifespan and decay modes of the Z boson place a limit on the number of quark and lepton families (Cline, 1988). The Z can decay into quark-antiquark pairs or lepton-antilepton pairs. The quark decays can produce five of the six quark types (the top quark is too massive), resulting in particle showers known as “jets.” The lepton-antilepton decays produce electron-positron, muon-antimuon, or tau-antitau pairs. By studying the ratios of these decay paths, measurements of the Z boson confirmed the existence of three families of quarks and leptons (International Masterclasses, 2022). The other exciting result from the discovery of the Z and W bosons was the confirmation of the “unification” of forces predicted for higher energies.
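The family-counting argument described above can be sketched numerically. Using representative measured values for the Z boson’s total width and its partial widths (the numbers below are approximate published figures, not values from the text), the “invisible” width left over after subtracting the visible channels corresponds to almost exactly three neutrino species:

```python
# Counting neutrino families from the Z boson decay width.
# Approximate measured/Standard-Model partial widths in MeV (assumed values):
GAMMA_TOTAL = 2495.0     # total Z width
GAMMA_HADRONS = 1744.0   # width into quark-antiquark pairs ("jets")
GAMMA_LEPTON = 84.0      # width into one charged-lepton pair (e, mu, or tau)
GAMMA_NU_SM = 167.2      # predicted width into one neutrino-antineutrino pair

# Width not accounted for by visible decays must be "invisible" neutrino pairs.
gamma_invisible = GAMMA_TOTAL - GAMMA_HADRONS - 3 * GAMMA_LEPTON
n_neutrino_families = gamma_invisible / GAMMA_NU_SM

print(f"invisible width: {gamma_invisible:.0f} MeV")
print(f"inferred number of neutrino families: {n_neutrino_families:.2f}")
```

The ratio comes out very close to 3, which is how the Z boson's width limits the number of light neutrino families.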
At extremely high temperatures, such as those that existed in the first trillionth of a second of the universe, the distinction between forces like the weak and electromagnetic forces is thought to break down. The W and Z boson discovery provides evidence for this common “electroweak” force. The unification of the electric and weak forces is analogous to the work Maxwell did in the nineteenth century, which unified what appeared to be separate electric and magnetic forces into a common electromagnetic field. Like the particle theorists, Maxwell predicted this unification decades before it could be verified experimentally. At very high temperatures above 10¹⁵ K, such as would be found in the early universe, the single electroweak force exists in place of the separate electromagnetic and weak forces (Nave, 2022).
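The 10¹⁵ K threshold can be connected to the W and Z masses through the thermal-energy relation E ≈ kT. A rough order-of-magnitude sketch:

```python
# Mean thermal energy at the electroweak unification temperature, E = k * T.
K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin
T_ELECTROWEAK_K = 1e15      # unification temperature from the text, kelvin

e_thermal_ev = K_BOLTZMANN_EV * T_ELECTROWEAK_K
print(f"kT at 1e15 K = {e_thermal_ev / 1e9:.0f} GeV")
# ~86 GeV, comparable to the 80-90 GeV/c^2 masses of the W and Z bosons,
# which is why the electroweak symmetry is restored above this temperature.
```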


12.8 Development of the Standard Model of Particle Physics

The discovery of the W and Z resulted from the study of thousands of collisions, which required high-speed computers to interpret the thousands of tracks produced in each collision. By reconstructing each collision and all its fragments from the tracks (which reveal the particle momentum, charge, and mass), it was possible to trace the unique fingerprints of the W and Z bosons. With these two pieces of the puzzle predicted and confirmed, the “Standard Model” was nearly complete. Twelve particles comprise all of matter: the six quarks and the six leptons. The standard model, with just 12 kinds of matter particles, dramatically simplifies the ingredient list for the universe. We can build a proton from three quarks (two up quarks and one down quark, giving a net +1 electric charge) and a neutron from a different trio (two down quarks and one up quark, giving a net charge of 0). The electrons in our atoms have no subcomponents and are still considered fundamental particles, as part of the family of leptons. Of the “fundamental particles” of the twentieth century (the protons, neutrons, and electrons), only the electron remains fundamental. As mentioned, the four forces within the universe are mediated by the bosons: photons for the electromagnetic force and the W and Z bosons for the weak force. For the strong force, the boson is called the “gluon,” and it operates inside protons and neutrons, holding the quarks together. It is important to note that the quarks inside the universe’s protons and neutrons are moving incredibly fast and have enormous amounts of energy stored in their motion. The best predictions of the theory place the masses of the up and down quarks at only 2.2 and 4.7 MeV, respectively.
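A quick arithmetic check makes the point, using the quark masses quoted above:

```python
# Proton mass budget from the quark masses quoted in the text (all in MeV).
M_UP = 2.2         # up-quark rest mass
M_DOWN = 4.7       # down-quark rest mass
M_PROTON = 938.0   # proton rest mass (text's rounded value)

m_quarks = 2 * M_UP + M_DOWN        # proton = two up quarks + one down quark
m_energy = M_PROTON - m_quarks      # kinetic energy plus gluon binding energy

print(f"sum of quark rest masses: {m_quarks:.1f} MeV")
print(f"mass from motion and binding: {m_energy:.1f} MeV")
print(f"fraction of proton mass from energy: {m_energy / M_PROTON:.1%}")
```

The three quark rest masses total only about 9 MeV, roughly 1% of the proton's mass; the other 99% is energy.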
And yet the proton’s mass is 938 MeV, meaning that about 930 MeV of the proton’s “rest mass” is produced by the kinetic energy of the frantic motions of the quarks inside, together with the binding energy of the gluon field. Since Einstein showed us that E = mc², we can confidently conclude that these forms of energy inside the proton provide most of its mass (Figs. 12.12 and 12.13).

Fig. 12.12  Graphic showing the particles of the standard model, which includes six quarks in 3 generations or families, six leptons, and the bosons that mediate the forces. The graphic represents the Higgs Boson (center), the W and Z bosons for the weak force, the photon for the electromagnetic force, and the gluon for the strong force. (Image courtesy of CERN – https://home.cern/science/physics/standard-model)

Fig. 12.13  Diagram showing the masses of the quarks and leptons in MeV. The quark masses provide a “rest mass” for the quarks much less than the “dressed” mass they acquire when bound by gluons inside larger particles like the proton and neutron. Experimental upper limits for the masses of the tau neutrino ντ and muon neutrino νμ (which are currently unknown) are shown at the left. (Figure by the author, based on http://atlas.physics.arizona.edu/~shupe/Indep_Studies_2015/Notes_Goethe_Univ/B11_QuarkModel.pdf)

The remaining boson of the standard model is the Higgs Boson, which connects to gravity indirectly: it is the particle responsible for imparting mass, the source of all gravitation. The Higgs Boson was first predicted in 1964 by Robert Brout and Francois Englert in Brussels and Peter Higgs at the University of Edinburgh. Their model is known as the Brout-Englert-Higgs mechanism, which posits a Higgs field and a massive boson that can impart mass to elementary particles. This particle soon became known as the Higgs Boson, and it prompted one of the great stories in the history of science; as we will see, verifying this prediction required the full capabilities of CERN.

References

American Institute of Physics. (2022). Lawrence and the Cyclotron. History.aip.org. https://history.aip.org/exhibits/lawrence/bigscience.htm
Beiser, A. (1987). Concepts of modern physics (4th ed.). McGraw-Hill College.
Blaustein, A. (2021). See the highest-resolution atomic image ever captured. Scientific American. https://www.scientificamerican.com/article/see-the-highest-resolution-atomic-image-ever-captured/
CERN. (2019a). Antimatter | CERN. Home.cern. https://home.cern/science/physics/antimatter
CERN. (2019b, September 23). Accelerating: Radiofrequency cavities | CERN. Home.cern. https://home.cern/science/engineering/accelerating-radiofrequency-cavities
CERN. (2020). CERN – LHC: Facts and figures. Cern.ch. https://public-archive.web.cern.ch/en/LHC/Facts-en.html
CERN. (2022a). The large electron-positron collider | CERN. Home.cern. https://home.cern/science/accelerators/large-electron-positron-collider
CERN. (2022b). W boson: Sunshine and stardust | CERN. Home.cern. https://home.cern/science/physics/w-boson-sunshine-and-stardust
Cline, D. B. (1988). Beyond truth and beauty: A fourth family of particles. Scientific American, 259(2), 60–67. https://www.jstor.org/stable/24989195
Dodd, J. E., & Gripaios, B. M. (2020). The ideas of particle physics (p. 131). Cambridge University Press.
Dowell, J. D. (1984). The CERN proton-antiproton collider and the discovery of the W and Z particles. Science Progress (1933– ), 69(274), 257–289. https://www.jstor.org/stable/43420602
Feynman, R. P. (1974). Structure of the proton. Science, 183(4125), 601–610. https://doi.org/10.1126/science.183.4125.601
International Masterclasses. (2022). International physics masterclasses. Atlas.physicsmasterclasses.org. https://atlas.physicsmasterclasses.org/en/zpath_lhcphysics2.htm
McMullan, D. (2006). Scanning electron microscopy 1928–1965. Scanning, 17(3), 175–185. https://doi.org/10.1002/sca.4950170309
Nave, R. (2022). Unification of forces. Hyperphysics.phy-astr.gsu.edu; Georgia State University. http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/unify.html
Smithsonian. (2022). Brookhaven 80″ hydrogen bubble chamber. National Museum of American History. https://americanhistory.si.edu/collections/search/object/nmah_700210
Stacey, B. (2019). Neutrinos. Snews.bnl.gov. https://snews.bnl.gov/popsci/neutrino.html#:~:text=Neutrinos%20have%20none%20of%20the
Wilson, R. R. (1958). Particle accelerators. Scientific American, 198(3), 64–80.

13 The Higgs Boson and Models of the Early Universe

As mentioned earlier, the four forces of nature – gravity, electromagnetism, the weak force, and the strong force – all operate together to produce the nuclei and particles that comprise our bodies, the planets, and the stars of our universe. It is more intuitive to think of these particles as being separate from the forces and merely responding to them, just as we might think of a planet “responding” to gravity in its orbit. However, on the quantum scale, the particles we see are inseparable from the forces they experience or mediate. The electron responds to an electric force, but from the modern quantum perspective, the electron itself is an excitation of an underlying quantum field. In the quantum world, all particles are resonances of the fields from which they come, and in many cases these particles can appear and disappear almost instantly out of the “vacuum” energy all around us. These particles and forces all arise together from the heat of the Big Bang. At extreme temperatures and densities, particles and antimatter particles are rapidly generated from the intense energy within the early universe, which operates like the ultimate particle accelerator. As the universe expands and cools, the particles and forces we see in our present-day universe “freeze out” from an earlier stage of undifferentiated energy. The temperature at which each particle “freezes out” is set by its mass, an emergent property of the early universe. The mass values for each particle are established by a newly discovered and very famous particle, the Higgs Boson, which connects to gravity through its ability to impart mass to particles.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-27890-7_13


13.1 The Higgs Boson Discovery and Mass

With the discovery of the W and Z bosons, a fascinating and mind-bending property of our universe came into play. The W and Z boson particles were discovered in part through their interactions with invisible virtual particle pairs created out of the vacuum. By the Heisenberg uncertainty principle, such “virtual particle pairs” – a particle and its antiparticle – can appear and disappear at any instant, provided that their lifespan Δt is less than an amount of time determined by the mass of the particles. Heisenberg’s principle tells us that ΔE·Δt > h/4π, and since E = mc², a virtual pair of particles each of mass m has a creation energy of ΔE = 2mc², so the maximum duration of the pair is Δt = h/(4π · 2mc²). Larger-mass pairs therefore live for shorter times. The energy for creating these pairs results from quantum fluctuations of the vacuum and is “borrowed” and then returned within the instant of the virtual pair’s lifetime. The timescale for these virtual pairs is extremely short, creating a sort of fuzzy noise in empty space that is undetectable for all practical purposes because the particles disappear almost instantaneously (Peet, 2014). For example, the duration of a virtual electron-positron pair is about 10⁻²¹ s, and that of a proton-antiproton pair is less than 10⁻²⁴ s; the much shorter time for the proton-antiproton pair reflects the fact that the proton has over 1800 times the mass of the electron. Ordinarily, the pair’s energies sum to zero, and the particles exist for such a short time that they leave no trace of their existence. However, in some cases, such as within high-energy collisions at CERN or near the horizon of a black hole, virtual particle pairs can interact in ways that give rise to observable effects.
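The lifetimes quoted above follow directly from the uncertainty relation. A quick numerical check, using standard rest energies for the electron and proton:

```python
# Virtual-pair lifetimes from the uncertainty principle:
#   dE * dt >= hbar/2, with dE = 2 * mc^2 for a particle-antiparticle pair,
#   so dt = (hbar/2) / (2 * mc^2).
HBAR_EV_S = 6.582e-16        # reduced Planck constant, eV*s
MC2_ELECTRON_EV = 0.511e6    # electron rest energy, eV
MC2_PROTON_EV = 938.272e6    # proton rest energy, eV

def pair_lifetime(mc2_ev):
    """Maximum lifetime of a virtual pair whose members have rest energy mc2."""
    delta_e = 2.0 * mc2_ev              # energy borrowed to create the pair
    return (HBAR_EV_S / 2.0) / delta_e  # dt = (hbar/2) / dE

print(f"electron-positron pair: {pair_lifetime(MC2_ELECTRON_EV):.1e} s")
print(f"proton-antiproton pair: {pair_lifetime(MC2_PROTON_EV):.1e} s")
```

Both results land at or below the orders of magnitude given in the text: a few times 10⁻²² s for the electron-positron pair and under 10⁻²⁴ s for the proton-antiproton pair.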
At CERN, these interactions affect the decay products detected in collisions. Near a black hole, the interaction of virtual particle pairs with the event horizon can cause one half of the pair to be trapped within the black hole while the other is released, resulting in the phenomenon of “Hawking radiation.” The mass of the emitted particle comes at the expense of the black hole, and this is the only known mechanism by which black holes can reduce their masses.

Any force can make its presence known either through its effects on particles or through the boson that mediates it. For example, the electromagnetic force can be seen by looking for electrons bending in the presence of electric or magnetic fields. These fields pervade all of space and can be detected far from their sources, which reflects the fact that the massless photons mediating electromagnetism have an infinite range. The strong and weak forces reveal their effects only within nuclei and are hidden from view, since their bosons have a very limited range and cannot be observed as easily. Nuclear physicists have measured the range and strength of the strong and weak forces, and particle physics has discovered the mechanism and particles for mediating them in the form of the gluons and the massive W and Z bosons. Perhaps ironically, gravity, the weakest yet most familiar of all the forces, remains the most mysterious, as it has eluded a full description by quantum mechanics. Newton and Einstein masterfully provided the mathematical framework for describing the gravitation that arises from all matter and how it warps space and time. These are wonderful tools for describing fields and forces on a macroscopic level, but they have yet to be reconciled with quantum mechanics at the smallest spatial scales. Perhaps more troubling, the nature of mass itself, which underpins all of gravity, was also left unexplained by Einstein and Newton.

The Higgs mechanism was developed to explain the origins of mass by way of a “scalar field” that interacts with all of the particles in the universe. An invisible sea of Higgs bosons is associated with this field and transfers mass to particles as they move through it (CERN, 2022). The Brout-Englert-Higgs field was predicted in 1964, and its associated particle, the Higgs Boson, was finally discovered in July 2012, some 48 years after the prediction. The theory predicted that the Higgs Boson would transmit mass to the particles of the universe, yet it could not predict the mass of the Higgs particle itself. In the 1990s, new calculations predicted that the Higgs Boson would have a significant mass, in the range of 100 GeV/c².
One of the goals in creating the LHC at CERN was to generate energies high enough to produce this particle, which was expected to decay quickly into more conventional particles like photons and muons. Peter Higgs and François Englert were present at CERN when the particle was finally discovered in July 2012, and they received the Nobel Prize for their theoretical model in 2013. The discovery found a particle in the proper mass range, at 126.5 GeV, which was neutral and had zero spin, all consistent with the predicted properties of the Higgs Boson (CERN, 2000). The discovery of the Higgs Boson leveraged the two separate detector systems at CERN, the ATLAS and CMS detectors, which provided complementary and independent verification of the Higgs Boson mass. The ATLAS and CMS teams conducted separate analyses of the decay modes detected by each detector and then compared the results of their mass determinations. These masses agreed very well, despite being derived independently from different detectors (Fig. 13.1). The CMS group measured the Higgs mass to be 124.7 ± 0.34 GeV, while the ATLAS group measured 125.1 ± 0.4 GeV.

Fig. 13.1  Figures from the papers announcing the detection of the Higgs Boson from the two CERN experiments known as CMS and ATLAS in 2012; both experiments detected the particle at the same mass level and confirmed the discovery. (Images from https://cms.cern/news/cms-closes-major-chapter-higgs-measurements and https://atlas.cern/updates/briefing/new-atlas-measurement-higgs-boson-mass)

The two groups could not see the Higgs Boson directly but instead measured the decay products that gave proof of the Higgs Boson’s short life. The decay of the Higgs Boson can follow several decay modes, which provide “fingerprints” of the particle. By adding up the energy, charge, spin, and masses of all the decay particles, the mass, charge, and spin of the original Higgs Boson can be reconstructed. In one decay mode, the Higgs Boson decays into two photons; using the relationship E = mc², the mass of the original particle can be determined from the photon energies. A second decay path of the Higgs is into a pair of Z bosons, which then transform into electron-positron pairs or muons, producing groups of four simultaneous leptons. The ATLAS and CMS detectors both measured the energies (and masses) of these decay particles to calculate the Higgs Boson mass (Riordan et al., 2012). Other decay modes for the Higgs Boson include decays into a bottom/anti-bottom quark pair. These decay modes are more difficult to use for measuring the Higgs mass, but the frequency of each decay mode provides additional evidence that the decay particles arise from the Higgs. The data from CERN confirm that the frequencies of the observed decay modes are consistent with the predicted properties of the Higgs Boson.
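The two-photon reconstruction amounts to an invariant-mass calculation. A minimal sketch, with illustrative (not measured) photon energies chosen to give a 125 GeV mass:

```python
import math

# Invariant mass of a two-photon system (natural units, GeV):
#   m^2 = 2 * E1 * E2 * (1 - cos(theta)), valid for massless photons.
def diphoton_mass(e1_gev, e2_gev, opening_angle_rad):
    return math.sqrt(2.0 * e1_gev * e2_gev * (1.0 - math.cos(opening_angle_rad)))

# Hypothetical event: two 62.5 GeV photons emitted back-to-back (angle = pi).
m_candidate = diphoton_mass(62.5, 62.5, math.pi)
print(f"reconstructed parent mass: {m_candidate:.1f} GeV")  # 125.0 GeV
```

Real analyses histogram this quantity over millions of events and look for an excess of candidates clustered near a common mass, which is what both figures above show.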


With the experimental detection of the Higgs Boson at the LHC, the Brout-Englert-Higgs field and the Higgs mechanism were verified, and the LHC provided experimental evidence for a theory that explains the earliest moments of the Big Bang. The Higgs field provides a mechanism for injecting huge amounts of energy into the early universe through a process known as spontaneous symmetry breaking. The result was a vast burst of energy that drove an exponential expansion of the universe, as well as a process for setting the masses of the various particles that emerged from the Big Bang. To understand this further, we need to consider the particle physics of the ultimate high-energy particle accelerator – the Big Bang.

13.2 Particle Physics of the Early Universe

Particle accelerators have enabled us to directly recreate the physics of the early universe in the first instants of time. The main story of the early universe is the emergence of our current set of particles and forces from an impossibly dense and hot pinpoint of undifferentiated energy. As mentioned before, the initial burst of energy is thought to arise from the spontaneous symmetry breaking of the Higgs field. In this process, the enormous potential energy of the invisible Higgs field is imparted to matter and space, resulting in an exponential expansion that we call the “inflationary era.” This expansion of space was so rapid that patches of space became isolated from each other as the boundaries of their horizons raced apart faster than the speed of light. This “flattening” of the early universe accounts for the relative smoothness revealed by the Cosmic Microwave Background Radiation, which preserves a record of the universe within the horizon, or “light cone,” up until its release some 300,000 years after the inflationary era.

In roughly the first millionth of a second after the Big Bang, matter and antimatter existed in almost equal amounts. That the numbers of particles and antiparticles were not precisely equal arises from the phenomenon known as CP violation: the universe is not exactly symmetric in nuclear reactions when the charges and spins of particles are reversed. This asymmetry, played out over trillions of reactions in every nanosecond of the early universe, resulted in a surplus of matter over antimatter of about one part in a billion. This point is also where our story might end, if the universe were perfectly symmetric. The newly created matter-antimatter pairs would eventually collide and annihilate each other, leaving behind only photons in the form of gamma rays. It is a well-known and experimentally proven fact that any particle, when it meets its antiparticle, will “annihilate” with it, leaving only a flash of light with an energy of 2mc². If the universe were “symmetric,” with equal amounts of matter and antimatter, we would expect only these photons to exist. And to a large degree, this process was nearly complete: all but one part in a billion of the matter-antimatter pairs annihilated, leaving only a tiny residue of matter, the protons and neutrons that comprise all the planets, stars, and galaxies of the present-day universe. Our material universe is made from this tiny residue, so we should be very grateful that the universe is imperfect and asymmetric!

As the universe expanded, temperatures dropped, and particles and antiparticles condensed directly from the extremely high energies. High-mass particles (and their antiparticle pairs) condensed first, and as the thermal energy decreased, lower-mass particles condensed out later. The expanding volume of the universe lowers the temperature and mean thermal energy, eventually cooling below the point where particle-antiparticle pairs can be created from the thermal energy; at this point, the particle is said to “freeze out” of the early universe. The mean thermal energy of the early universe can be calculated as E = kT, where k is the Boltzmann constant, a proportionality constant between temperature and energy. When this energy drops to the rest mass energy of a particle (mc²), that particle and its antiparticle freeze out. Quarks are thought to interconvert rapidly among mesons and other unstable particles until the first protons can condense; this happens at around 1 microsecond, when the universe’s temperature drops below about one trillion degrees K and the protons first “freeze out” of the universe.
As amazing as this appears, the collision energies now reached at CERN generate such temperatures, allowing us to test the physics directly back to this point in the history of the Big Bang. Further "freeze-out" occurs as the energy drops below that needed to interconvert neutrons and protons, at about 1 second after the Big Bang. This event is crucial for the future of the universe, as it locks in the neutron-to-proton ratio, ultimately determining the raw materials from which all the atomic nuclei are made. This ratio leaves an imprint on the nuclei that form afterward: a mix of hydrogen and helium, with trace amounts of lithium and beryllium. High-precision spectroscopy of ancient stars can measure these elemental abundances and test our theories of the early universe at this point. Further cooling below several billion degrees allows the electrons to freeze out, at about 25 seconds into the Big Bang (Mutel, 2009). As the temperature drops further, below that of an exploding thermonuclear bomb (at about 100 million kelvin), the first stable helium nuclei can "freeze out," and the ratio of helium to hydrogen in the universe is locked in, placing about 25% of the early universe's baryon mass in helium nuclei within the first few minutes of the Big Bang. The cooling trend continues until the temperature falls below about 5000 K, the surface temperature of a star, which is cool enough to allow for neutral atoms. This event, known as recombination, is crucially important to astronomers, as it marks the limit to which we can probe the universe using electromagnetic waves: neutral atoms are far more transparent to light, allowing light to escape from matter for the first time and ultimately allowing matter to cool further to form galaxies and stars. From that point forward, light and matter separate, and we can observe the universe directly with our telescopes (Fig. 13.2).

Fig. 13.2  Timeline of events in the early universe, showing the steady decline in temperature and the resulting changes in the universe. The early times include the Planck Era of quantum gravity and the GUT era, when the strong, weak, and electromagnetic forces were unified. The CERN LHC can generate energies comparable to the first trillionth of a second of the Big Bang, as labeled. After the first trillionth of a second, the electromagnetic and weak forces decouple, quarks combine to form neutrons and protons, the neutron-to-proton ratio is set, and the first nuclei form. At 300,000 years, the first atoms form; this is the point at which astronomy begins, as light can pass through the universe without scattering from electrons. (Figure by the author)
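The helium fraction quoted above follows directly from the frozen-in neutron-to-proton ratio. If essentially every surviving neutron ends up bound into a helium-4 nucleus, the helium mass fraction is Y = 2(n/p)/(1 + n/p), a standard back-of-the-envelope relation. The sketch below is my own illustration; the n/p value of roughly 1/7 at the start of nucleosynthesis is the commonly quoted figure after some free neutrons have decayed.

```python
def helium_mass_fraction(n_over_p: float) -> float:
    """Primordial helium mass fraction, assuming essentially all surviving
    neutrons end up locked into helium-4 nuclei (2 neutrons + 2 protons each):
    Y = 2(n/p) / (1 + n/p)."""
    return 2.0 * n_over_p / (1.0 + n_over_p)

# n/p ~ 1/7 when nucleosynthesis begins gives the observed ~25% helium by mass
print(helium_mass_fraction(1.0 / 7.0))  # -> 0.25
```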

13.3 Supersymmetry and Symmetry Breaking

Our picture of the early universe appears to be complete, except for two key ingredients, which together provide 95% of the universe's mass-energy. These ingredients are dark matter, which comprises some 25% of the universe by mass, and dark energy, which comprises nearly 70% of the universe by mass-energy density (Lahav & Liddle, 2019). The fields and particles responsible for both dark matter (which causes gravitational attraction) and dark energy (which causes a repulsive force in every cubic meter of space) are unknown but are being actively searched for, as discussed in the next chapter.

One suggestion for the dark matter particles is an additional set of particles that mirror the existing families of the standard model. This concept of "supersymmetry" predicts a "supersymmetric" partner for each particle of the standard model. The predicted partners would have spins differing by half a unit, meaning that each fermion would have a supersymmetric boson partner and vice versa. If true, this model would double the number of particles in the universe and would predict additional high-mass particles that could be challenging to detect, forming a reasonable basis for dark matter. Supersymmetric models can also explain the observed mass of the Higgs Boson, which was not precisely constrained in the standard model (Gianotti & Virdee, 2015).

The concept of supersymmetry is inspired by reasoning analogous to the prediction of antimatter, first formally described by Paul Dirac, who showed that the symmetries of spacetime required every particle to have a corresponding antiparticle. Dirac's theory used the newly developed theory of relativity, which extended space to include time as a fourth dimension, becoming spacetime; in Dirac's model, this relativistic treatment required every particle to have an antiparticle. Similarly, supersymmetry theory extends spacetime into a further "fermionic dimension," in which particles can move in quantum steps that transform bosons into fermions and vice versa. If supersymmetry is true, it would provide a deeper explanation of how particles and their masses are set and why the various forces of the universe have the relative strengths we observe.

It would also flood the universe with an entirely new set of massive particles, which could solve the problem of identifying the dark matter in the universe. Current CERN experiments hope to find traces of these supersymmetric particles, which could have larger masses (and therefore require higher energies to produce) than the Higgs Boson. It remains to be seen whether this bold prediction is accurate; the detection of new invisible particles would provide important constraints on dark matter and dark energy and could also provide key evidence for supersymmetry.

References

CERN. (2000). The Higgs Boson. CERN. https://home.cern/science/physics/higgs-boson

CERN. (2022). An artistic depiction of the Brout-Englert-Higgs field. CERN Document Server. https://cds.cern.ch/record/2815837

Gianotti, F., & Virdee, T. S. (2015). The discovery and measurements of a Higgs Boson. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 373(2032), 20140384. https://doi.org/10.1098/rsta.2014.0384

Lahav, O., & Liddle, A. R. (2019). Cosmological parameters. LBL.gov. https://pdg.lbl.gov/2019/reviews/rpp2019-rev-cosmological-parameters.pdf

Mutel, R. (2009). Early universe. Uiowa.edu. http://astro.physics.uiowa.edu/~rlm/mathcad/addendum%2014%20early%20universe.htm

Peet, A. W. (2014). Physics pages of Prof. A.W. Peet. Ap.io. https://ap.io/mpip/10/

Riordan, M., Tonelli, G., & Wu, S. L. (2012). The Higgs at last. Scientific American, 307(4), 66–73. https://www.jstor.org/stable/26016129

14 Exploring the Invisible Universe

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_14

As we have seen, the most dramatic discoveries in physics and astronomy usually begin with a mystery. An unknown and unseen particle, like the muon, suddenly appears in cosmic ray experiments; a strange glow in matter gives the first hint of radioactivity; or an invisible particle is detected by its unique decay fragments. These events in the history of physics opened up more than a century of discovering additional particles, culminating in the standard model of physics discussed in the previous chapter. In astronomy, the initial views of galaxies began with a mystery about the nature of the "spiral nebulae," and then prompted a recalibration of our place in the universe to describe a universe filled with billions of galaxies. This pattern of discovery began with new objects viewed indistinctly through telescopes, moved through provisional classification and description, and eventually led to detailed theoretical models. In this way, the nebulae were revealed to include spiral, elliptical, and irregular galaxies, which were further determined to be dynamical systems continually evolving and absorbing each other.

Another kind of discovery involves the detection of an object or phenomenon that theory predicted but whose detection had to await technological advances. In physics, this would include the discoveries of the Higgs Boson and the neutrino, both predicted by theory decades before technology advanced enough to detect the particles through their unseen transformations and decays. In astronomy, this channel of discovery can be considered the "study of the invisible" and has revealed the presence of planets, such as Neptune, Pluto, and several "dwarf planets" at the edge of the solar system, that were first predicted by theory. In the stellar realm, the first white dwarf, Sirius B, and the first neutron stars and black holes were all detected through the predicted effects of these invisible stars on more visible companion stars. The same technique has been applied to extrasolar planets, which are far too faint to be seen directly but tug on visible stars, causing them to move. Such examples from the history of astronomy provide helpful illustrations of how the invisible universe has propelled discovery and help us better understand some of the current challenges of identifying and measuring the properties of dark matter and dark energy.

14.1 Discovery of Invisible Objects in Astronomy: Pluto, Sirius B, Neutron Stars, Black Holes, and Modern Searches for Planet X

The Lowell Observatory, discussed earlier, is celebrated for the discovery of Pluto, which was heralded as a triumph of twentieth-century American astronomy, despite Pluto's later unceremonious demotion to "dwarf planet" status in 2006. Pluto was discovered not by accident but by diligent searching. Its presence had been suggested by its supposed effects on Neptune, a planet discovered by the German astronomer Johann Galle in 1846 after a prediction by the French astronomer Urbain Le Verrier, as discussed in Chap. 4. In the case of Pluto, the challenge of detection was much greater, due to a less certain prediction of its location, its much greater distance, and its small size, which we now know is about 2400 km in diameter (smaller than Earth's Moon). These factors mean that the reflected light from Pluto is 1000 times fainter than that of Neptune, and its location in the outer solar system makes its motion very difficult to detect.

Vesto Slipher organized and directed the search for the new planet at Lowell Observatory, but its discovery is credited to a newcomer to astronomy, Clyde Tombaugh. Tombaugh was a self-taught astronomer who built his own telescopes and sent drawings and photographs of planets taken from his Kansas farm to Lowell Observatory, which impressed the observatory staff so much that they offered him a job in 1929. Within a year, Tombaugh was able to find Pluto's faint image within photographic plates taken of star fields where Pluto was predicted to lie. After blinking dozens of the plates, he discerned the tiny smudge of Pluto in 1930, moving slowly across a field of much brighter stars. Tombaugh's discovery brought welcome good news in the midst of the Depression and made him an international astronomical celebrity.

Ultimately, Pluto's small size and very eccentric orbit, which takes it inside Neptune's orbit and then out to the Kuiper Belt of the outer solar system, suggested that it did not share a common origin with the other planets. Pluto was determined to be one of the many Kuiper Belt or "trans-Neptunian" objects that orbit at the edge of our solar system, and it was formally demoted from planet status by a vote within the International Astronomical Union in 2006.

The search for additional objects like Pluto at the solar system's edge is ongoing today. Ironically, the two "frontier" areas of discovery with minimal data in our maps of the skies are the very distant edges of our visible universe and the distant edges of our solar system. The outer solar system, comprising the Kuiper Belt (roughly 30–50 AU) and the far more distant Oort Cloud (extending from thousands of AU out to about 1 light year), is very cold and provides only very faint light. Light from these regions comes mostly in long infrared wavelengths that are not easily collected from Earth or even with space telescopes. The objects in these outer reaches are often quite small and reflect very little of the faint sunlight that reaches them. Despite these difficulties, teams of astronomers are now on the hunt for new "trans-Neptunian" dwarf planets and even the possibility of a full-fledged planet at the edge of our solar system (Fig. 14.1).

Fig. 14.1  A plot of the major trans-Neptunian objects, which appear to be bunched on one side of the solar system. This asymmetry, together with mathematical models, suggests that an additional planet, provisionally named "Planet Nine," could be exerting forces that perturb the orbits of the other outer solar system objects, and a hunt is ongoing to locate this new planet. (Image from R. Hurt, IPAC/Caltech)


As in Lowell's time, today's astronomers are hunting for an additional planet beyond Neptune in the outer solar system. New models of the evolution of our solar system suggest that an Earth-sized object could have been flung to the Kuiper Belt or beyond early in the solar system's history. Our experience studying other solar systems through exoplanet studies also suggests that such larger objects could have formed in the outer solar system. Several trans-Neptunian objects (TNOs) have already been discovered, including Sedna, an object discovered by the American astronomer Mike Brown in 2003, which appears to be about 1000 km in diameter and traverses a highly eccentric orbit that ranges in distance from 76 AU to 930 AU. More than ten other large TNOs with diameters over 500 km, with names such as Quaoar, Makemake, and Haumea, have also been discovered, many with eccentric orbits that reach deep into the outer solar system. Studies of the distribution of orbits of the known TNOs suggest that they may be affected by one or more large objects in the outer solar system, since the observed asymmetry in their orbits would require some external force to maintain. New telescopes, like the upcoming Vera C. Rubin Observatory, will help clarify the origins of these anomalies and possibly find another planet in the outer solar system (Lemonick, 2016).
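The extreme orbits of these objects can be appreciated with Kepler's third law, which for a body orbiting the Sun gives the period in years as the 3/2 power of the semi-major axis in AU. The sketch below is my own illustration, using the Sedna distances quoted above:

```python
def orbital_period_years(perihelion_au: float, aphelion_au: float) -> float:
    """Kepler's third law for an orbit around the Sun: P [years] = a^(3/2),
    where a [AU] is the semi-major axis, the average of the two extremes."""
    a = 0.5 * (perihelion_au + aphelion_au)
    return a ** 1.5

# Sedna: 76 AU to 930 AU -> a = 503 AU -> a period on the order of 11,000 years
print(f"{orbital_period_years(76, 930):,.0f} years")
```

A single Sedna orbit therefore spans far longer than all of recorded human history, which is why surveys catch such objects only near their relatively close and bright perihelion passages.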

14.2 Discovery of Dark Matter

In addition to predicted objects like Pluto that we can eventually see as telescopes improve, astronomers have also identified new and unexpected types of matter that remain invisible yet have definite effects on more visible objects: stars, galaxies, and even clusters of galaxies reveal its presence. This invisible matter exerts a gravitational pull on visible objects, and from the motions of the visible objects we can infer the mass of the invisible matter. The invisible or "dark" matter seems to pervade the universe at all scales and outweighs regular matter, the stars, planets, and atoms that we are made of, by a factor of five.

Fritz Zwicky made the first mention of dark matter in 1933. Zwicky arrived at Caltech from Switzerland in 1925 and quickly moved from his studies of the physics of liquids and crystals into a wide-ranging consideration of the biggest questions in astronomy and astrophysics. His contributions to astrophysics were numerous and included elucidating the process of star death through supernova explosions and the subsequent production of a neutron star (eventually confirmed in 1967), as well as observations of galaxy motions in the large Coma Cluster, a vast group of galaxies 330 million light years away. In his 1933 study of the Coma Cluster, Zwicky referred to an unseen Dunkle Materie, or "dark matter," needed to prevent the galaxy cluster from flying apart given the observed motions of its galaxies.

Zwicky's observations and suggestions about dark matter were largely ignored for several decades until Vera Rubin, an American astronomer working with galaxy spectra, noticed excess motion within galaxies in the 1970s. By studying the Doppler shifts of known atomic lines within galaxy spectra, Rubin could see not only the redshift of each galaxy but also the range of velocities of the stars within it. She used these spectra to create "rotation curves" showing the orbital speeds of the stars as a function of distance from the galaxy center. Rubin discovered that these speeds did not fall off but stayed high, or even kept increasing, out to the very edges of each galaxy, implying that the mass of the galaxy continues to grow even beyond the radius where stars are visible (Fig. 14.2).

Rubin's conclusion was based on the physical argument that orbital speed around a central mass (in this case, the galactic nucleus) should decrease with radius because of the decreased gravitational pull from the central mass at large radii. This is the basis for Kepler's law, which predicts slower orbital speeds and longer orbital periods for outer planets like Jupiter or Saturn compared with inner planets like Mercury, Venus, or Earth. Rubin's observations suggested that the galaxies had a mass distribution very different from their light distribution. Even though the visible starlight was condensed into a bright core at the center of each galaxy, the mass appeared in nearly every galaxy to extend well beyond the visible edges and to keep increasing with radius well beyond where the stars fade from view. The amount of matter that she inferred from these rotation curves appeared to be 3–5 times larger than the mass contained in the galaxies' stars.

Fig. 14.2  Illustration of a galactic rotation curve, showing the high velocity of rotation maintained at the outer edges of a galaxy. The extra motion indicates the presence of invisible mass, which extends far outside the boundary where visible light from stars is seen. Vera Rubin discovered this evidence for dark matter in the 1970s. (Image from https://en.wikipedia.org/wiki/File:Rotation_curve_of_spiral_galaxy_Messier_33_(Triangulum).png)

While the full import of Rubin's and Zwicky's discoveries was not realized at the time, they have now become a central focus for observational astronomy and laboratory physics. With the Hubble Space Telescope and new JWST images, we can detect dark matter in galaxy clusters from its gravitational lensing effects. With larger sky surveys and x-ray telescopes, we have verified that galaxy clusters are indeed bound together by vast expanses of dark matter, accounting for billions of solar masses within galaxies and trillions of solar masses among galaxy clusters. The new Vera C. Rubin Observatory, a powerful 8.4-meter telescope in Chile capable of surveying the entire sky every few nights, has been named in honor of Rubin's contributions to the discovery of dark matter. This new telescope and the Zwicky Transient Facility, a wide-field survey instrument at the Palomar Observatory, provide powerful capabilities for discovering new "transient" sources in the sky, which will greatly improve our knowledge of dark matter and its distribution in the universe (Fig. 14.3).
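The logic of Rubin's argument can be made concrete. For a circular orbit, v = sqrt(GM/r), so the Keplerian expectation is a declining rotation curve; a flat curve, where v stays constant as r grows, instead implies that the enclosed mass M(r) = v²r/G grows linearly with radius. The sketch below is my own illustration with hypothetical round numbers (a constant 220 km/s rotation speed), not data from the text:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # meters per kiloparsec

def keplerian_speed_kms(enclosed_mass_msun: float, radius_kpc: float) -> float:
    """Circular orbital speed v = sqrt(GM/r), in km/s -- the declining speed
    expected when (nearly) all the mass lies inside the orbit, as for planets."""
    v = math.sqrt(G * enclosed_mass_msun * M_SUN / (radius_kpc * KPC))
    return v / 1000.0

def implied_mass_msun(flat_speed_kms: float, radius_kpc: float) -> float:
    """Enclosed mass M(r) = v^2 r / G implied by a flat rotation curve, in
    solar masses -- note that it grows linearly with radius."""
    v = flat_speed_kms * 1000.0
    return v * v * (radius_kpc * KPC) / (G * M_SUN)

# A flat 220 km/s rotation curve (hypothetical round numbers) implies
# ever-growing enclosed mass -- the signature of a dark matter halo:
for r_kpc in (10, 30, 50):
    print(f"r = {r_kpc:2d} kpc -> M(<r) ~ {implied_mass_msun(220, r_kpc):.2e} Msun")
```

At 10 kpc the implied mass is already about 10¹¹ solar masses, and it keeps climbing at radii where almost no starlight is seen, which is exactly the mismatch between mass and light that Rubin identified.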

Fig. 14.3  Image of the new Vera C. Rubin Observatory, perched on Cerro Pachon in Chile, and ready to survey the skies every few days to discover new supernovae and help constrain the nature of dark matter and dark energy. (Image from https://www.aura-astronomy.org/centers/nsfs-oir-lab/rubinobservatory/)


14.3 Weak Gravitational Lensing and 3D Dark Matter Maps

With the Hubble Space Telescope and the new JWST, astronomers can routinely map out the locations of large blobs of dark matter. The technique uses the fact that dark matter, even though it does not emit or interact with light directly, still bends spacetime according to Einstein's theory of General Relativity. In high-resolution images from Hubble or JWST, the gravity of dark matter can be seen to produce arcs of light, in what is known as strong gravitational lensing, and to distort the shapes of galaxies in subtle ways, in what is known as weak gravitational lensing. Weak lensing can be detected by analyzing samples of galaxies in the background of the dark matter to reveal the presence and distance of invisible blobs of dark matter. Just as a hot parking lot creates shimmering currents of air that reveal our atmosphere's heat and convection, dark matter creates distortions in the shapes of these background galaxies. The technique is made possible by the fact that any long-exposure Hubble or JWST image will reveal thousands of background galaxies, giving a rich probe of the invisible dark matter in front of them. By systematically quantifying the orientations and shapes of samples of galaxies within an image, the dark matter can be detected from the "shear" in the galaxy images, which produces statistical deviations from random orientations and shapes. By studying samples of galaxies in different bins of distance within the fields of Hubble and JWST, the technique provides a three-dimensional map of dark matter (Fig. 14.4).
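The statistical idea behind this shear measurement can be illustrated with a toy simulation (my own sketch; the numbers, a 0.2 intrinsic ellipticity scatter and a 5% coherent shear, are illustrative assumptions, not values from any survey). Each background galaxy has a random intrinsic ellipticity, and lensing adds a tiny coherent distortion on top; averaging over many galaxies cancels the random part and leaves the shear signal:

```python
import random

def estimate_shear(n_galaxies: int, true_shear: float, seed: int = 42) -> float:
    """Toy weak-lensing estimator.  Each background galaxy gets a random
    intrinsic ellipticity component (Gaussian, sigma = 0.2); lensing adds a
    small coherent shear.  Averaging over many galaxies cancels the random
    part, leaving an estimate of the shear signal."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_galaxies):
        total += rng.gauss(0.0, 0.2) + true_shear
    return total / n_galaxies

# With ~10^5 galaxies, the 5% shear emerges clearly from the 20% random noise
print(estimate_shear(100_000, true_shear=0.05))  # prints a value close to 0.05
```

The statistical error shrinks as 1/sqrt(N), which is why deep images containing thousands of background galaxies per field are essential to the method.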

Fig. 14.4  The image shows how the Hubble Space Telescope can provide a three-dimensional dark matter map. By combining statistics of galaxy shapes at different distances, the HST can measure the distribution of dark matter throughout the universe. The weak-lensing technique allows for the determination of dark matter content based on distortions of galaxies by the otherwise invisible dark matter. (Image from https://commons.wikimedia.org/wiki/File:COSMOSDMmap2007.jpg)

14.4 Laboratory Detection of Dark Matter

Another technique for detecting dark matter doesn't even require a telescope or looking at space; it uses the fact that all of space, even here on Earth, should be filled with dark matter. This means that a sufficiently sensitive detector anywhere on Earth should be able to register these particles as they pass through our planet. Such laboratory dark matter searches look for particles that are expected to have little interaction with light and very limited interaction with ordinary matter. These dark matter candidates are sometimes called weakly interacting massive particles, or WIMPs. The WIMPs would interact by the weak force, the force that operates in nuclei and is part of the beta decay process discovered early in the twentieth century. The neutrino is the best-known weakly interacting particle, but our best estimates suggest that the mass contributed by all three known types of neutrinos is insufficient to account for the dark matter (Bahcall, 1996). The WIMPs being searched for include "supersymmetric" particles such as the neutralino. This particle has been predicted theoretically in some models but has not yet been seen in the laboratory or within energetic collisions at CERN. If neutralinos exist, they would be abundant and weakly interacting, and would make good dark matter candidates.

To optimize the chances of finding these elusive particles, physicists have created pristine laboratory environments where they can detect rare interaction events. For WIMP searches, placing laboratories and experiments underground provides refuge from the constant flux of cosmic ray particles at the Earth's surface. One example is the Gran Sasso underground laboratory in Italy, which hosts an experiment known as XENONnT. The Gran Sasso experiment uses a huge underground tank of liquid xenon, a chemically inert element that can nevertheless register the passage of dark matter particles. A collision between a WIMP and a xenon atom would cause the atom to recoil suddenly, and this motion releases a small pulse of light that the experiment can detect. To shield the xenon from more pedestrian particles arising from cosmic rays and natural radioactivity, it sits within a vast 700,000-liter tank of water some 1400 m underground. Hundreds of photomultiplier tubes have been placed inside and wait in complete darkness for the telltale flash of a dark matter particle. As impressive as the Gran Sasso experiment is, it is just one of several such underground dark matter experiments, which also include a US experiment in South Dakota known as LUX-ZEPLIN, which uses another liquid xenon tank a mile underground encased in lead (Sanford Lab, 2020), and a Chinese experiment known as PandaX, which uses about 4 tons of liquid xenon target within the China Jinping Underground Laboratory, a facility 2400 m underground (PandaX, 2022). The deep underground sites are a safe refuge from cosmic rays. For example, the China Jinping Underground Laboratory has an incredibly low cosmic ray muon rate of 0.2 muons per square meter per day, compared to the average rate at the Earth's surface of about 14 million muons per square meter per day (Schilling, 2022) (Fig. 14.5).

Another contender for the missing dark matter particle is the mysterious axion, a particle predicted by theory to resolve the "strong CP problem," the puzzle of why the strong nuclear force appears to respect CP symmetry. The American physicist Frank Wilczek coined the name axion after a commercial detergent.
Symmetries of this kind have a long history in nuclear physics. The concept of isospin, for example, was introduced by Werner Heisenberg in 1932, well before our knowledge of quarks, mesons, and other particles had developed. As new mesons like the pions were discovered in accelerator experiments, what seemed to be separate particles, such as the π0, π+, and π− mesons, were shown to be the same particle in different states of isospin. Unlike quantum mechanical spin, isospin has no connection with angular momentum (nuclear-power.com, 2022). The axion, if it has mass, would be an excellent dark matter candidate because of its expected abundance, with trillions of axions predicted in every cubic centimeter of the universe, many of them created during the inflationary epoch in some models (Wood, 2019). However, even though the axion has been predicted for over 40 years, it has yet to be confirmed as a real particle (Conover, 2022).


Fig. 14.5  One example of the many laboratory detectors currently searching for dark matter particles. This is the LUX-ZEPLIN dark matter experiment, which includes a pair of nested titanium tanks containing 10 tons of ultrapure liquid xenon, all placed about a mile underground at a site in South Dakota. (Image from https://www.llnl.gov/news/lux-zeplin-dark-matter-detector-sanford-underground-research-facility-delivers-its-first)

Most axion detection experiments are based on the idea that strong magnetic fields can convert axions into visible photons (or vice versa). In this conversion, the wavelength of the light would be directly related to the axion mass. This idea has inspired the CERN Axion Solar Telescope (CAST), which seeks solar axions by attempting to convert them into photons with magnetic fields; the ALPS experiment at the DESY laboratory in Germany, which uses a laser in a magnetic field to create axions; and the ADMX experiment at the University of Washington, which is based on a resonant cavity placed inside a very strong magnet at close to absolute zero temperature. The hope is that any passing axion would be converted into a microwave photon that could be detected.
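The relation between axion mass and photon frequency is simple energy conservation: the converted photon carries the axion's full rest-mass energy, so its frequency is f = mc²/h. A quick sketch (my own illustration; the micro-electronvolt mass is a hypothetical example in the general range such cavity experiments scan, not a measured value):

```python
PLANCK_EV_S = 4.135667e-15  # Planck constant h, in eV * s

def photon_frequency_ghz(axion_mass_ev: float) -> float:
    """Frequency (GHz) of the photon produced when an axion converts in a
    magnetic field, carrying its full rest-mass energy: f = E / h."""
    return axion_mass_ev / PLANCK_EV_S / 1e9

# A hypothetical axion of 4 micro-eV would convert to a microwave photon
# with a frequency near 1 GHz
print(photon_frequency_ghz(4e-6))
```

This is why the unknown axion mass translates directly into a frequency that a tunable microwave cavity must sweep through, one narrow band at a time.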

14.5 The Discovery of Dark Energy and the Physics of the Vacuum

Unlike dark matter, which attracts all visible matter (and itself) through gravitational forces, dark energy appears to exert a surprising repulsive force that pervades every cubic meter of the universe. Dark energy, like many astronomical discoveries, began with an unexpected observation of excess motion among astronomical objects. The initial redshift-distance relationship from Edwin Hubble was extended to greater distances using new techniques and more advanced telescopes. As Hubble's law was pushed out to billions of light years, using Cepheids in more distant galaxies, or using the angular sizes or velocity dispersions of galaxies to measure distances, better measurements were obtained for the Hubble constant. By extending the Hubble plot to ever greater distances, astronomers could also measure the "curvature" or deceleration parameter of the universe, which reflects changes in the expansion rate over billions of years (Fig. 14.6).

The competing models of the universe in the twentieth century included closed, open, and "critical density" universes, based on the total quantity of mass in the universe. By the 1990s, observations of the deceleration parameter could rule out the closed universe, even with generous allotments of dark matter. However, the most recent observations using distant supernovae have detected a very slight acceleration in the Hubble expansion over billions of years, something completely unexpected, since the mass of the universe was only expected to decelerate, or slow down, the Hubble expansion. Some new kind of force or energy was at work, which soon became known as "dark energy." No known explanation existed for this acceleration, which operates like a strange sort of repulsive force and can be described mathematically by the "cosmological constant" of Einstein's equations, opposing the universal gravitational attraction of matter. With the discovery of dark energy came the disquieting evidence that the acceleration of the universe may increase without bound, resulting in a long-term future for the universe that is "open," with galaxies accelerating exponentially as they race off toward infinity at a rate more rapid than the earlier open cosmological models had predicted.

Fig. 14.6  The toolkit of distance measurements within the extragalactic distance ladder. Each technique pushes to further distances, and the Type Ia supernovae (SN Ia on the diagram) have been essential for extending the Hubble expansion plot to billions of light years, enabling the discovery of dark energy. (From https://commons.wikimedia.org/wiki/File:Extragalactic_distance_ladder.JPG)

A key piece of evidence for dark energy was provided by the use of supernovae as "standard candles" to measure distances out to billions of light years. In particular, the Type Ia supernovae, which arise from white dwarf stars that explode upon reaching a critical mass, were thought to have a nearly constant luminosity, since white dwarf stars detonate at almost the same mass. This threshold corresponds to the Chandrasekhar mass (approximately 1.4 solar masses), giving all Type Ia supernovae roughly the same amount of nuclear fuel for their explosions. By measuring the apparent brightness of such supernovae, which decreases with the square of the distance, it was possible to solve for the distances of the galaxies that contained them. These distances showed an unusual curvature in the Hubble plot that was totally unexpected from earlier theories and suggested the existence of what is now known as dark energy. The first observations of dark energy came from two teams in 1998, one led by Adam Riess and Brian Schmidt and the other led by Saul Perlmutter.
They observed that at larger distances, galaxies appeared to be slightly farther away than their redshifts predicted, even after accounting for error bars that included uncertainties in the distances and corrections for extinction. After replotting the data with a larger sample of galaxies, improving the corrections for extinction, and adding in data taken in the infrared (where extinction is much less), the signature of the acceleration persisted. A concerted global collaboration of astronomers known as the Dark Energy Survey continued to toil away at improving the data, using as many supernova observations as possible and correcting for every known effect that might skew the data. While this multi-decade effort has not answered the question of how Dark Energy arises, the exquisite precision of the observations and analysis earned the leaders of the two original teams – Perlmutter, Riess, and Schmidt – a shared Nobel prize in 2011 (Fig. 14.7).

14  Exploring the Invisible Universe 


Fig. 14.7  Summary of the Type Ia supernova data from the Dark Energy Survey group, including all of the Type Ia supernovae observed by the group by 2019. The top panel shows a plot of distance on the y-axis and redshift or velocity on the x-axis. Type Ia supernovae have extended this Hubble plot to a redshift of approximately 1.0, corresponding to a distance or "look back time" of 7 billion light years. The lower panel shows how the dark energy model (top line) fits the distance and redshift data better than the critical and open models. (From Abbott et al., 2019; available at https://www.osti.gov/pages/servlets/purl/1488595)

As new observations pointed toward the unexplained repulsive force, astronomers worked hard to rule out experimental errors and to improve the calibration of the Type Ia supernovae. One effect that was quickly tested was extinction by dust in space, which would cause the supernovae to appear fainter – not because they are farther away, but because their light had been scattered away. However, this effect would have a telltale signature of wavelength dependence, which was not seen in careful follow-up observations. It was also known that there are slight variations in luminosity among Type Ia supernovae that correlate with the duration of the supernova explosion. Nearby galaxies with Type Ia supernovae were used to check the Type Ia supernova luminosity distances against more conventional techniques for measuring distance, such as Cepheid variable stars. After decades of effort in improving the calibration of the Type Ia supernovae and checking for the effects of extinction, the accelerating trend in galaxy velocities has persisted, indicating that Dark Energy is a real and fundamental component of the expansion of the universe – one which, expressed as an equivalent mass, would account for about 70% of the mass-energy density in the universe.
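The dust test rests on a simple contrast: interstellar-type dust dims blue light more than red, while extra distance dims all wavelengths equally. A toy sketch (the 1/λ scaling and the numbers are illustrative, not a fitted extinction law):

```python
def dust_dimming_mag(a_v: float, wavelength_nm: float) -> float:
    """Toy extinction in magnitudes, ~1/wavelength, normalized at V (551 nm)."""
    return a_v * 551.0 / wavelength_nm

a_v = 0.3                                  # hypothetical V-band extinction, magnitudes
dim_blue = dust_dimming_mag(a_v, 445.0)    # B band
dim_red = dust_dimming_mag(a_v, 806.0)     # I band

# Dust leaves a color signature (reddening); pure distance dimming does not.
color_shift = dim_blue - dim_red
```

Because the supernovae showed no such color shift beyond the usual corrections, dust could not explain the extra dimming.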


One other disquieting implication of Dark Energy is that as matter continues to thin out as space expands, the relative strength of the expansion from Dark Energy would only be expected to increase. Unlike gravitational deceleration, which depends on the density of the universe and therefore is reduced as the volume of the universe increases, Dark Energy is expected to remain constant per unit of volume – even as the volume of space increases. This property results in exponential expansion at the most extended timescales, perhaps even to a point where a "big rip" might occur – in which the gravitational bonds within galaxies and even star systems would be exceeded by the repulsive forces of the dark energy. For this reason, measuring the "equation of state" of Dark Energy – how its density behaves as space expands – is extremely important in determining the universe's ultimate fate (Fig. 14.8). At the moment, the best models appear to rule out the "big rip" and instead suggest that galaxies will remain bound for the indefinite future – but they will also race faster and faster from each other until the accumulated expansion of space carries galaxies beyond our visible horizon, leaving each galaxy (including the Milky Way) isolated in a separate domain of the universe (Riess & Livio, 2016). The new NASA WFIRST mission (recently renamed the Nancy Grace Roman Space Telescope) and the ESA Euclid 1.2-meter space telescope mission will work together to help untangle this mystery. Both include large space telescopes optimized to measure cosmic expansion using infrared light of supernovae at greater distances than possible from the Earth.

Fig. 14.8  The history of the universe's expansion is predicted to follow an exponential path in the era of "dark energy domination." As in the early universe during the Higgs inflation era, the volume of space in the universe is expected to race outwards exponentially, pushing distant galaxies beyond our observable horizon. (Figure by the author; adapted from Phil. Trans. R. Soc. A 373: 20140038)
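The contrast between diluting matter and constant dark energy can be made concrete in a few lines, taking the commonly quoted present-day density fractions (about 0.3 for matter, 0.7 for dark energy) as assumed inputs:

```python
OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed present-day density fractions

def matter_density(a: float) -> float:
    """Matter dilutes with the cube of the scale factor a (a = 1 today)."""
    return OMEGA_M / a**3

def dark_energy_density(a: float) -> float:
    """A cosmological constant stays fixed per unit volume."""
    return OMEGA_L

# Scale factor at which dark energy overtook matter:
crossover_a = (OMEGA_M / OMEGA_L) ** (1.0 / 3.0)   # ~0.75
```

Since the crossover scale factor is below 1, dark energy domination has already begun, and the gap only widens as space continues to expand.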

14.6 Neutrino Detection and Experiments

The neutrino was once considered a leading candidate as the dark matter particle, but decades of experiments have provided upper limits to the neutrino mass that are well below what would be needed to explain dark matter. Particle experiments with the neutrino are now firmly established: laboratories on Earth routinely measure neutrino fluxes from accelerator experiments and have measured the flux of neutrinos from the Sun. The flux of cosmic and solar neutrinos on Earth and underground amounts to over 60 billion neutrinos per square centimeter per second (Djorgovski, 2004). Still, nearly all of these neutrinos pass through the entire Earth (and the experiments) with no effect at all. Decades of experiments looking for the mass of one of the three known families of neutrinos, or for a fourth neutrino family, have given us some confidence that the neutrino mass is too low to be responsible for the dark matter.

Neutrinos are known to come in three families – the electron neutrino or νe, the muon neutrino or νμ, and the tau neutrino or ντ. Constraints have suggested the neutrino mass is less than 1.1 eV, and some have placed even lower limits, which safely rules out the neutrino as the source of the dark matter (McGaugh, 2021). By comparison, the electron, the lightest particle in everyday life, is 500,000 times heavier than the upper limit of the neutrino mass. Since neutrinos are emitted in vast quantities in supernova explosions and in the everyday nuclear reactions of all stars, they have filled our universe with their ghostly presence. It is predicted that every cubic meter of space on Earth and in our region of the solar system is filled with 300 million neutrinos – primarily arising from solar nuclear reactions but also from the crisscrossing of cosmic neutrinos that come from other stars, from supernovae, and even more exotic celestial events.
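A rough cross-check of these fluxes, using an assumed round value for the body's cross-sectional area:

```python
flux_per_cm2_s = 6.0e10    # solar/cosmic neutrino flux quoted above
body_area_cm2 = 2.0e3      # ~0.2 m^2 cross-section, an assumed round figure

neutrinos_per_second = flux_per_cm2_s * body_area_cm2   # ~1.2e14
```

That is on the order of 100 trillion neutrinos per second streaming through a human-sized target.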
This means that about 100 trillion neutrinos pass through your body every second, traveling very close to the speed of light (IceCube, 2017). Detecting such neutrinos with neutrino telescopes will offer exciting new views of this component of the invisible universe. Some examples of neutrino detectors include the 40,000 cubic meter Super Kamiokande neutrino detector in Japan, which sits below Mount Ikeno at a remote site about 200 km northwest of Tokyo near Japan's west coast. This detector is
being expanded fivefold to create the new Hyper Kamiokande or Hyper-K experiment, which will include 260,000 tons of ultrapure water. The inside of the enormous water tank is lined with sensitive light sensors that can capture the flash of light from a passing neutrino that perturbs one of the electrons in the water molecules in the tank (Castelvecchi, 2019).

Another experiment, known as KATRIN (the Karlsruhe Tritium Neutrino experiment), has been built in Germany to provide precise measurements of the neutrino mass. The KATRIN experiment uses radioactive tritium, which decays into helium-3, emitting an electron and an anti-neutrino. KATRIN shepherds the electrons with magnetic fields generated by superconducting electromagnets to measure their energies with a spectrometer. By comparing the electron energies with the expected reaction energies, the experiment can precisely constrain the neutrino's mass, and should eventually be sensitive to neutrino masses as low as 0.2 eV. The most recent measurements from KATRIN show that the neutrino mass is less than 1.1 eV, with 90% confidence (Aker et al., 2019). This limit already confirms that it is unlikely that the neutrino can account for the dark matter.

The Homestake experiment, placed over 1400 m below ground in a gold mine in South Dakota, used a 380,000-liter tank filled with dry-cleaning liquid (perchloroethylene) to detect neutrinos from the rare events that convert one of the chlorine atoms into argon. The Homestake experiment ran from 1970 to 1992 and was crucial for providing early measurements of the neutrino flux from the Sun. These experiments suggested that neutrinos were "mixing" between their three varieties and provided constraints on the mass differences between the neutrino families.
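KATRIN's measurement principle, described above, can be sketched in a few lines: the roughly 18.6 keV of tritium decay energy is shared between the electron and the antineutrino, and a neutrino rest mass forces the electron spectrum to cut off that much energy below the zero-mass endpoint (the numbers here are illustrative bookkeeping, not the experiment's actual analysis):

```python
Q_KEV = 18.6   # approximate tritium beta-decay endpoint energy, keV

def electron_endpoint_kev(m_nu_ev: float) -> float:
    """Maximum electron kinetic energy if the neutrino has mass m_nu (in eV)."""
    return Q_KEV - m_nu_ev / 1000.0

# A 1.1 eV neutrino shifts the 18,600 eV endpoint by about 1 part in 17,000,
# which is why KATRIN needs such a high-resolution electron spectrometer.
shift_ev = (electron_endpoint_kev(0.0) - electron_endpoint_kev(1.1)) * 1000.0
```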
The unexpectedly low flux of neutrinos at the Homestake experiment challenged the prevailing models of nuclear reactions within the Sun until it was realized that the missing electron neutrinos could have converted into muon or tau neutrinos (Davis, 1994). A newer neutrino observatory has been built in Canada, 2000 m below the Earth's surface in a nickel mine near Sudbury, Ontario. This device uses 1000 tons of heavy water – water with one hydrogen atom replaced by deuterium (an isotope of hydrogen with an extra neutron) – to detect neutrinos (Harwit, 2021a, p. 114).

Perhaps the most spectacular of the neutrino detectors uses the Antarctic ice sheet to create a colossal neutrino telescope known as IceCube. Using the entire Earth as a shield, neutrinos traveling upwards after passing through the Earth can be detected by rare interactions within a section of the vast Antarctic ice sheet. Strings of photomultiplier tubes have been lowered into holes drilled several kilometers into the ice sheet, and the exceptionally clear ice provides an ideal medium allowing
the faint flashes of light from rare neutrino interactions to travel through the ice to reach light detectors. The IceCube device includes 5160 optical sensors mounted in a vast grid within the ice sheet, hanging from 86 cables at depths ranging from 1450 to 2450 m below the surface. The result is a cubic kilometer of pure ice working to detect neutrinos (Harwit, 2021b, p. 117). With information on the timing from multiple detectors, the array of sensors can work out the approximate arrival direction and measure the flux of the neutrino signal. It is hoped that with this neutrino telescope, we will be able to see the pulses of neutrinos from supernovae and other cosmic explosions (Coke, 2022) (Fig. 14.9).

In addition to experiments hunting for natural and cosmic neutrinos, a variety of accelerators are now making powerful artificial neutrino beams for experiments to help learn more about the neutrino, its oscillations, and its mass. The LHC at CERN is studying collisions that produce electron-positron pairs and muon-antimuon pairs, which often include abundant neutrinos. Additional projects like the Italian KLOE-2 experiment, the

Fig. 14.9  Schematic of the IceCube neutrino detection experiment, which uses a section of the vast ice sheet in Antarctica nearly 3000 m thick to detect the flashes of light from neutrinos interacting with the atoms within the water molecules as the neutrinos emerge from their journey through the Earth. (Image from IceCube collaboration at https://icecube.wisc.edu/science/icecube/)


Fig. 14.10  Schematic of the DUNE neutrino experiment, which uses Fermilab to generate an intense neutrino beam, which traverses 1300 km underground below several Midwestern US states before being detected at the Sanford Underground Research Facility deep under South Dakota. (Image from https://lbnf-dune.fnal.gov/how-it-works/neutrino-beam/)

Heavy Photon Search or HPS experiment in the US, and the BaBar experiment at SLAC are all working to help constrain the neutrino mass and its oscillations. The Fermilab accelerator in Illinois generates the most intense artificial beams of neutrinos in the world at the Long-Baseline Neutrino Facility. The neutrinos are produced from particle collisions at Fermilab and directed underground from Northern Illinois through 1300 km of the Earth to be measured in a laboratory known as the Deep Underground Neutrino Experiment or DUNE, deep in a mine in South Dakota (Fermilab, 2022). The neutrino beams pass through a colossal 70,000-ton tank of ultrapure liquid argon, where the experiment can detect the rare nuclear reactions of neutrinos with argon nuclei. The reaction rates will help better understand the oscillations between the neutrino types (which are sensitive to the neutrino mass) and could potentially reveal a fourth family of neutrinos that could be a dark matter particle (Dobrescu & Lincoln, 2015) (Fig. 14.10).
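The oscillation physics behind the DUNE baseline can be sketched with the standard two-flavor vacuum formula, P = sin²(2θ)·sin²(1.267·Δm²·L/E) (Δm² in eV², L in km, E in GeV); the mass splitting below is the commonly quoted atmospheric value, taken here as an assumed input:

```python
import math

def osc_prob(sin2_2theta: float, dm2_ev2: float, L_km: float, E_GeV: float) -> float:
    """Two-flavor vacuum neutrino oscillation probability."""
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

DM2 = 2.5e-3    # eV^2, assumed atmospheric mass splitting
L_KM = 1300.0   # Fermilab -> Sanford Lab baseline from the text

# Energy of the first oscillation maximum: 1.267 * DM2 * L / E = pi/2
E_peak_gev = 1.267 * DM2 * L_KM / (math.pi / 2.0)   # ~2.6 GeV
```

A few-GeV beam is thus well matched to a 1300 km baseline – one reason the oscillation signal is measurable at DUNE-like distances.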

14.7 Gravitational Waves and Gravitational Wave Astrophysics

One of the more fantastic predictions from Einstein's theory of relativity was the recognition that not only would massive objects distort spacetime, but that sudden changes in mass distributions would send ripples through spacetime at the speed of light, producing "gravitational waves." The existence of gravitational radiation was first indirectly detected from observations of binary pulsars. Very precise observations of the orbital period
of a binary pulsar system showed the orbit decaying over time. A decaying orbit indicates a loss of energy – which was being released into space in the form of gravitational waves. Joseph Taylor, Russell Hulse, and Joel Weisberg were able to observe the orbital period changing by two parts per trillion over a span of 6000 orbital periods, beginning with observations taken in 1975 – a rate of decay consistent with the predictions of General Relativity. Their observations were reported in 1982, and Hulse and Taylor received the Nobel prize in 1993 for this discovery.

Starting in the 1970s, an intrepid (and very patient) team of researchers at Caltech and MIT began work on a detector to measure these waves directly. Their design used laser beams in two arms at 90 degrees to each other to detect the expansion and contraction of space that would come from gravitational radiation. The most substantial gravitational waves radiate away from asymmetric, rapidly changing distributions of mass with a large "quadrupole moment," which favors the black hole merger as one of the most likely candidates for detection. As two black holes approach each other, their orbits are predicted by General Relativity to decay rapidly and produce a "merger" event that would release vast amounts of energy. Such events had never been observed – but they could be modeled and quantified with theoretical calculations. The calculations suggested that the gravitational waves would transmit throughout the universe and shorten one arm of the device while stretching the other, resulting in measurable signals.

The gravitational wave detector, known as LIGO (the Laser Interferometer Gravitational-Wave Observatory), was designed to detect these kinds of distortions of spacetime, which radiate throughout the universe. LIGO is a fantastically scaled-up version of the same type of interferometer that Albert Michelson built 130 years earlier for his studies of the aether.
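The "two parts per trillion" decay rate from the binary pulsar can be put in everyday units, taking the roughly 7.75-hour orbital period of the Hulse-Taylor system as an assumed input:

```python
SECONDS_PER_YEAR = 3.156e7
P_ORBIT_S = 7.75 * 3600.0    # assumed ~7.75-hour orbital period, in seconds
PDOT = -2.4e-12              # "two parts per trillion": seconds of period lost per second

dP_per_orbit_s = PDOT * P_ORBIT_S                 # period lost each orbit, ~ -7e-8 s
dP_per_year_us = PDOT * SECONDS_PER_YEAR * 1e6    # microseconds per year, ~ -76
```

An orbit shrinking by tens of microseconds per year is invisible to almost any technique except pulsar timing, which is why the binary pulsar made such a clean test of General Relativity.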
LIGO is so accurate it can detect motions of its mirrors on the scale of 1/10,000 of a proton's width – a precision of better than one-millionth of a trillionth of a meter! (LIGO, 2010) (Fig. 14.11). The LIGO laser beams traverse a long (4 km) pathway to increase their sensitivity. Two independent facilities were built in separate locations across the USA to guard against false events and noise, since their extreme sensitivity can detect the motions of livestock in nearby pastures and cars from miles away. At each site, the entire 4 km pathway is superbly isolated from vibrations and completely evacuated to prevent air currents from disrupting the beams. After the Large Hadron Collider, the LIGO beams provide the world's largest ultra-high vacuum chambers, evacuated to 1 trillionth of an atmosphere (LIGO, 2010). The laser beams inside LIGO are also folded, so they bounce off the mirrors 300 times, multiplying the device's

Fig. 14.11  (Left) View of the LIGO Hanford, Washington site, showing the 4 km arms that contain an ultra-high vacuum enabling a clear and stable path for the laser light to trace any distortions of spacetime. (Right) A schematic of the LIGO Michelson interferometer, in which a slight shift in phase from a change in length of one arm causes the laser beams to interfere. LIGO is so sensitive that it can detect changes in the path length as small as 1/10,000 of the width of a proton. (Images from https://commons.wikimedia.org/wiki/File:LIGO_Hanford_aerial_05.jpg and https://commons.wikimedia.org/wiki/File:Ligo-interferometer-(destructive-interference).png)
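The displacement figure quoted for LIGO translates into a dimensionless strain h = ΔL/L. A quick check, using an approximate value for the proton's diameter:

```python
ARM_LENGTH_M = 4.0e3                       # physical LIGO arm length
PROTON_WIDTH_M = 1.7e-15                   # approximate proton diameter
displacement_m = PROTON_WIDTH_M / 1.0e4    # 1/10,000 of a proton width

strain = displacement_m / ARM_LENGTH_M     # dimensionless h = dL / L, ~4e-23
```

Strains of a few times 10⁻²³ are the scale LIGO must resolve, which is why the beam folding and power recycling described in the text matter so much.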


sensitivity by creating what amounts to a 1200-km long interferometer. The sensitivity of LIGO was boosted further by increasing the lasers' effective power, using a strong 40-watt incoming signal and advanced "power recycling" mirrors that boost the laser's signal with each reflection (LIGO, 2019).

On September 14, 2015, the LIGO effort, which began over 40 years earlier and was nearly discontinued for lack of funding, received near simultaneous signals at both sites that perfectly matched the predicted signature of merging black holes. Einstein's theory predicts the gravitational wave luminosity to depend steeply on the masses of the sources and to increase inversely as the fifth power of their separation. The merging pair of black holes therefore provided the highest possible luminosity for the event, enabling LIGO to detect a signal at 25 times the noise level, removing all doubt about the event's significance. Subsequent astrophysical modeling revealed the masses and distance of the sources, which appeared to be a pair of black holes – one at 36 M⊙ and the other at 29 M⊙ – which merged into a single gigantic black hole of 62 solar masses. Most amazingly, the difference in mass between the initial pair of black holes and the final merged black hole – 3 M⊙ – was radiated away in gravitational waves in a fraction of a second. This makes the merging black holes one of the most luminous events ever detected – more powerful than a supernova by a factor of over a trillion – yet completely undetected in optical or any other electromagnetic wavelengths. This discovery enabled the definitive direct detection of Einstein's predicted gravitational radiation and earned the LIGO principal investigators, Rainer Weiss, Barry Barish, and Kip Thorne, a Nobel prize in 2017 (LIGO, 2019) (Fig. 14.12).

New gravitational wave detectors in Italy and India offer the prospect of a global gravitational wave telescope.
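The energetics of the 2015 event can be checked with back-of-the-envelope numbers; the sub-second duration and the supernova peak power used for comparison are rough assumed figures:

```python
M_SUN_KG = 1.989e30
C_M_S = 2.998e8

# Three solar masses converted to gravitational-wave energy (E = m c^2):
e_radiated_j = 3.0 * M_SUN_KG * C_M_S**2   # ~5.4e47 J

duration_s = 0.2                  # rough duration of the final "chirp"
gw_power_w = e_radiated_j / duration_s

SN_PEAK_W = 1.0e36                # rough peak optical power of a supernova
power_ratio = gw_power_w / SN_PEAK_W
```

The ratio lands above a trillion, consistent with the comparison in the text.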
These new systems include the VIRGO laser interferometer, a 3 km long system near Pisa, Italy, and a LIGO-India system located in the Indian state of Maharashtra in Western India. An even more powerful space-based gravitational wave detector known as LISA is expected to provide over 1000 times more sensitivity than LIGO. The space-based system will operate as a fleet of three satellites flying together to form an equilateral triangle that will be distorted by passing gravitational waves. The “arms” of the triangle will extend 2.5 million km and will be free of all noise from the Earth. LISA will have a million times more sensitivity to lower frequency waves (0.1  mHz to 1  Hz), and an ability to detect gravitational waves that arise from millions of sources inaudible to LIGO (which operates in 10–1000 Hz), and will open up the possibility of probing sources from the early universe which might leave imprints on space. These quantum fluctuations from the Big Bang could provide critical


Fig. 14.12  The simulated merger of two black holes that provided the first observed gravitational wave detection from LIGO in 2015. The bending of space from the black holes causes visible distortions; as the black holes merge, a ripple of spacetime spreads throughout the universe and is detected by LIGO. (Image from https://commons.wikimedia.org/wiki/File:Black_hole_collision_and_merger_releasing_gravitational_waves.jpg)

information about the origins of our universe, much as the detailed study of the electromagnetic Cosmic Microwave Background Radiation proved invaluable for understanding the early universe.

The future network of gravitational wave detectors will be able to detect and locate new sources of gravitational waves in the sky immediately. This capability will allow for coordinated optical, X-ray, and gravitational wave observations to study how massive black holes, neutron stars, and other "compact objects" form and end their lives. As a proof of concept, LIGO detected a pair of merging neutron stars in 2017 (Yiu, 2017). Immediately after the two neutron stars merged to form a black hole, the LIGO team alerted a network of astrophysicists, who turned their global network of telescopes to seek optical, X-ray, gamma-ray, and radio emission from the direction of the source (Burtnyk, 2017). Optical telescopes in Chile found the event within hours and identified the galaxy that contained the source at a distance of 130 million light years. NASA's Chandra X-ray telescope detected the event in X-rays 9 days later, and over two weeks later, a radio "afterglow" was seen by the Very Large Array (VLA) in New Mexico, USA. This one event also created vast amounts of new elements. According to Mansi Kasliwal, one of the lead investigators, the merging neutron stars produced a "cosmic mine that is forging about 10,000 Earth masses of heavy elements, such as gold, platinum, and neodymium" (California Institute of Technology, 2017).


References

Abbott, et al. (2019). First cosmology results using Type Ia supernovae from the Dark Energy Survey: Constraints on cosmological parameters. OSTI. https://www.osti.gov/pages/servlets/purl/1488595

Aker, M., Altenmüller, K., Arenz, M., Babutzka, M., Barrett, J., Bauer, S., Beck, M., Beglarian, A., Behrens, J., Bergmann, T., Besserer, U., Blaum, K., Block, F., Bobien, S., Bokeloh, K., Bonn, J., Bornschein, B., Bornschein, L., Bouquet, H., & Brunst, T. (2019). Improved upper limit on the neutrino mass from a direct kinematic method by KATRIN. Physical Review Letters, 123(22). https://doi.org/10.1103/physrevlett.123.221802

Bahcall, J. (1996). WIMPs. https://www.astro.princeton.edu/~dns/MAP/Bahcall/node8.html

Burtnyk, K. (2017). LIGO detection of colliding neutron stars spawns global effort to study the rare event. LIGO Lab | Caltech. https://www.ligo.caltech.edu/news/ligo20171016

California Institute of Technology. (2017, October 16). Caltech-led teams strike cosmic gold. https://www.caltech.edu/about/news/caltech-led-teams-strike-cosmic-gold-80074

Castelvecchi, D. (2019). Japan will build the world's largest neutrino detector. Nature. https://doi.org/10.1038/d41586-019-03874-w

Coke, A. (2022). Detecting neutrinos | IceCube masterclass. https://masterclass.icecube.wisc.edu/en/learn/detecting-neutrinos

Conover, E. (2022, July 22). A new dark matter experiment quashed earlier hints of new particles. Science News. https://www.sciencenews.org/article/xenonnt-axions-dark-matter-experiment

Davis, R. (1994). A review of the Homestake solar neutrino experiment. Progress in Particle and Nuclear Physics, 32, 13–32. https://doi.org/10.1016/0146-6410(94)90004-3

Djorgovski, G. (2004). The Sun and the birth of neutrino astronomy. https://sites.astro.caltech.edu/~george/ay20/Ay20-Lec8x.pdf

Dobrescu, B., & Lincoln, D. (2015). New ideas in the search for dark matter. Scientific American, 313, 32–39. https://www.scientificamerican.com/article/new-ideas-in-the-search-for-dark-matter/

Fermilab. (2022). Neutrino beam. DUNE at LBNF. https://lbnf-dune.fnal.gov/how-it-works/neutrino-beam/

Harwit, M. (2021a). Cosmic messengers (p. 114). Cambridge University Press. https://doi.org/10.1017/9781108903318

Harwit, M. (2021b). Cosmic messengers (p. 117). Cambridge University Press. https://doi.org/10.1017/9781108903318

Harwit, M. (2021c). Cosmic messengers (p. 144). Cambridge University Press. https://doi.org/10.1017/9781108903318

IceCube. (2017). A first look at how the Earth stops high-energy neutrinos in their tracks. https://icecube.wisc.edu/news/press-releases/2017/11/first-look-at-how-earth-stops-high-energy-neutrinos-in-their-tracks/

Lemonick, M. D. (2016). The search for Planet X. Scientific American, 314(2), 30–37. https://www.jstor.org/stable/26046840

LIGO. (2010). Facts. LIGO Lab | Caltech. https://www.ligo.caltech.edu/page/facts

LIGO. (2019). LIGO's interferometer. LIGO Lab | Caltech. https://www.ligo.caltech.edu/page/ligos-ifo

McGaugh, S. (2021, August 14). The neutrino mass hierarchy and cosmological limits on their mass. Triton Station. https://tritonstation.com/2021/08/14/the-neutrino-mass-hierarchy-and-cosmological-limits-on-their-mass/

nuclear-power.com. (2022). What is isospin. Nuclear Power. https://www.nuclear-power.com/nuclear-power/reactor-physics/atomic-nuclear-physics/fundamental-particles/isospin/

PandaX. (2022). PandaX. https://pandax.sjtu.edu.cn/

Riess, A. G., & Livio, M. (2016). The puzzle of dark energy. Scientific American, 314(3), 38–43. https://www.jstor.org/stable/26046875

Sanford Lab. (2020). LUX-ZEPLIN. Sanford Underground Research Facility. https://sanfordlab.org/experiment/lux-zeplin

Schilling, G. (2022). The elephant in the universe: Our hundred-year search for dark matter. Harvard University Press. https://www.jstor.org/stable/j.ctv2jfvcm6.27

Wood, C. (2019). Top dark matter candidate loses ground to tiniest competitor. Quanta Magazine. https://www.quantamagazine.org/why-dark-matter-might-be-axions-20191127/

Yiu, Y. (2017). Gravitational waves throw light on neutron star mergers. Inside Science. https://www.insidescience.org/news/gravitational-waves-throw-light-neutron-star-mergers

15 Physics of the Vacuum and Multiverses

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe, https://doi.org/10.1007/978-3-031-27890-7_15

We have seen how the vacuum of empty space brings about many mysterious properties of the universe. Virtual particle-antiparticle pairs can emerge suddenly from apparently empty space and have measurable effects on decaying particles in particle physics experiments. The force from a "vacuum energy density" is driving much of the universe's expansion in the form of dark energy, and the invisible Higgs field appears to have provided the energy of the Big Bang itself through spontaneous symmetry breaking in the early universe. Modern models of physics suggest that empty space includes a sea of virtual particles that can have tangible and measurable effects. As we have seen, space in modern physics and astrophysics is not empty. Measurements of the nature of space itself, and of the energy density within empty space, promise insights into the nature of the dark energy, yet are still in early stages.

This vacuum energy density can also be experimentally verified through a phenomenon known as the Casimir effect. The idea, first proposed in 1948 by the Dutch physicist Hendrik Casimir, is to directly measure the energy density of the vacuum by studying the forces that arise between two parallel metal plates in a vacuum. Casimir predicted that the fluctuations of the vacuum that create virtual particle pairs would manifest as fluctuating electromagnetic waves (Weatherall, 2016). A small spacing between the metal plates would limit the number of waves present between the plates, causing the plates to be pressed together by the larger number of electromagnetic waves outside. These effects were measured in 2001 by a group at the University of Padua. More recently, these forces have been measured with atomic force microscopes working with tiny metal cylinders spaced as little as 20 nm apart. These more recent experiments have verified the strength of the Casimir force to within 1% and confirm the theoretical prediction that the force rises steeply as the spacing of the plates is reduced (Lambrecht, 2002).

Another observational test of the vacuum comes from a phenomenon known as the Unruh effect, which arises from interactions between accelerated particles and the virtual particles in the vacuum. The effect comes from the fact that quantum fields fill empty space, and fluctuations in those fields can give rise to particles that, in the right circumstances, can emit light (Thompson, 2022). Like Hawking radiation, which arises from interactions between virtual particle pairs and a black hole event horizon, the Unruh effect describes how an accelerated object in a vacuum can interact with virtual particle-antiparticle pairs to emit light. However, the level of acceleration needed is incredibly high: an atom would have to accelerate to the speed of light in less than a millionth of a second – over a trillion times faster than the acceleration of a fighter jet. The accelerated particles would produce a very faint glow, and newly proposed experiments aim to amplify this signal enough to detect it (Chu, 2022). Hopefully, this Unruh radiation can be detected in the coming years. Since general relativity established that the physics of an accelerated object and of curved space are equivalent, the Unruh effect tests some of the same physics involved in Hawking radiation without the need for a black hole in the laboratory!
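For ideal, perfectly conducting parallel plates, Casimir's prediction takes the closed form P = π²ħc/(240 d⁴): an attractive pressure that rises steeply as the gap d closes. A quick evaluation of this standard formula (real experiments with cylinders and imperfect metals involve geometry and material corrections not included here):

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s

def casimir_pressure_pa(d_m: float) -> float:
    """Ideal-plate Casimir pressure P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi**2 * HBAR * C / (240.0 * d_m**4)

p_20nm = casimir_pressure_pa(20e-9)   # ~8 kPa at a 20 nm gap

# The steep scaling: halving the gap multiplies the pressure by 2^4 = 16,
# which is why the effect only becomes measurable at nanometer spacings.
```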

15.1 Multiverse Theory from Astrophysics – Fine Tuning and Anthropic Principle

Our knowledge of cosmic expansion allows us to confidently predict much greater expanses of space beyond our observable universe and outside our horizon. Using our best cosmological models, it is expected that there are at least 45 billion light-years of space within our universe, but the light from these farthest expanses has yet to reach us. The rapid exponential expansion of the universe in the first 10⁻³³ seconds, predicted by inflation theory, has created a vast amount of space outside our horizon. This constitutes the simplest, or "level I," multiverse, in which the properties of our universe (four dimensions, four forces, and the same physical constants) extend to a larger expanse of space. More complex and speculative possibilities exist, such as a universe outside our expanding bubble of spacetime with very different physical properties.


This other type of multiverse might have stronger or weaker forces or different dimensions than our own four-dimensional spacetime. These "level II" multiverses might be unrecognizable to us and probably would not support life as we might imagine it. For example, if gravity were much weaker, planets and stars might not be possible. If electromagnetism were weaker, atoms might be much larger – perhaps the size of grapefruits – and chemistry might not be possible, as the force of chemical bonds would be too weak. Further problems arise if the weak or strong force had less strength – in both cases our atomic nuclei might become unstable. In the extreme case of all the forces being reduced in strength, the entire universe could consist of hydrogen atoms the size of grapefruits, bouncing about in a vast space with no stars, no planets, and no imaginable life due to the lack of chemical complexity (Tegmark, 2003).

Our universe has instead been configured with tiny atoms and even tinier nuclei, and with gravitationally bound galaxies and planetary systems that provide long-term stability. This observation has led some to propose that our universe is "fine-tuned" for life. Of course, we do have a selection effect – this universe is definitely the only one we have access to! Some authors have described an "anthropic principle" in which the fine-tuning of our universe, just like that of our Earth, requires many factors to assure its habitability. While our universe is indeed well suited for life in its present state of evolution, it is less clear that with the longer-term expansion of the universe, especially as the dark energy drives exponential expansion of space, our universe will continue to be fine-tuned for life into the indefinite future.
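The "grapefruit atoms" picture can be made semi-quantitative: the size of a hydrogen atom is set by the Bohr radius a₀ = ħ/(mₑcα), where the fine-structure constant α measures the strength of electromagnetism. A sketch (the 5 cm "grapefruit" radius is an assumed round number):

```python
HBAR = 1.0546e-34       # J s
M_E = 9.109e-31         # electron mass, kg
C = 2.998e8             # m/s
ALPHA = 1.0 / 137.036   # fine-structure constant in our universe

def bohr_radius_m(alpha: float) -> float:
    """Bohr radius a0 = hbar / (m_e * c * alpha); atoms grow as alpha shrinks."""
    return HBAR / (M_E * C * alpha)

a0 = bohr_radius_m(ALPHA)               # ~5.3e-11 m in our universe
grapefruit_m = 0.05                     # assumed ~5 cm "grapefruit" radius
weakening_factor = grapefruit_m / a0    # ~1e9: how much weaker EM would need to be
```

Electromagnetism would have to be roughly a billion times weaker to give grapefruit-sized atoms, and long before that point chemical bonds would be far too feeble for chemistry as we know it.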

15.2 The Ultimate Future of the Universe from Astrophysics

With the discovery of dark energy, our best models of the universe predict a lonely future for the Milky Way galaxy, as the cosmic expansion carries other galaxies past the Milky Way's horizon (or light cone) after hundreds of billions of years. If dark energy does indeed behave the way our models predict, the matter density of the universe will continue to drop. The star-formation activity within our galaxy will subside after many more generations of stars have formed and died, each generation locking up some fraction of its material in the "remnants" left over after star death – white dwarf stars, neutron stars, and black holes. The remaining stars will continue to age, and as time goes on, only the lowest-mass stars will continue to shine, thanks to their miserly consumption of Hydrogen and their long lifetimes.
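The longevity of these low-mass stars follows from a standard back-of-envelope scaling: lifetime t ∝ M/L, with luminosity L ∝ M³·⁵, giving t ≈ 10 Gyr × (M/M☉)⁻²·⁵. This is a sketch, with the 10 Gyr solar main-sequence lifetime an assumed round number, not a stellar-evolution model.

```python
# Back-of-envelope scaling (not from the text) for main-sequence lifetimes.
T_SUN_GYR = 10.0   # rough main-sequence lifetime of the Sun, Gyr (assumed)

def ms_lifetime_gyr(mass_solar):
    """Rough lifetime from t ~ M/L with L ~ M**3.5, so t ~ 10 Gyr * M**-2.5."""
    return T_SUN_GYR * mass_solar ** -2.5

for m in (1.0, 0.5, 0.2, 0.1):
    print(f"{m:4.1f} M_sun -> {ms_lifetime_gyr(m):8.0f} Gyr")
```

On this scaling, a 0.1 solar-mass red dwarf shines for roughly 3 trillion years, hundreds of times the present age of the universe.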

B. E. Penprase

Our Sun is about halfway through its "main sequence" lifespan, and its remaining time is set by its reserves of Hydrogen fuel and its relatively low and stable luminosity. In about 4–5 billion more years, the Sun is expected to swell into a red giant star as its core begins to falter in its energy generation, and a final burst of nuclear fusion will occur in a shell of Hydrogen surrounding the inert Helium core. When the Sun swells into this phase, its surface will be lapping at the atmosphere of Venus, and any civilization remaining on Earth will need to move farther out to avoid incineration. The Sun's red giant phase will be followed by a collapse, which will compress the Sun's core into an Earth-sized white dwarf star that will continue radiating its wan UV energy for hundreds of billions of years. The light from the white dwarf will be much fainter than the present Sun, but it could support some ingenious life, perhaps relocated to what is left of Mercury after the red giant phase.

We know the details of the future collapse of our Sun from observing the fates of other nearby Sun-like stars. The entire sequence of red giant and collapse is quick by cosmic standards – about 100 million years. While we can't view this sequence directly, we can piece together the sequence of events from observations of such stars at various stages of their demise (Williams, 2016). The JWST has provided some recent data on this phase of the Sun's future with its image of a sunlike star in its last stage of life, in the form of the Southern Ring Nebula. Fig. 15.1 shows the glowing ring of gas of the nebula, which is what is left of the star's atmosphere, shed as the star collapsed to form its inert white dwarf core. This process produces a ring of fluorescent gas around what is left of the star and exposes the new white dwarf star in the center.
We can model the universe's longer-term and ultimate future – after the current generation of stars like our Sun have all died – with our knowledge of stellar evolution and star formation in an open universe. One of the most thorough studies of the later universe was by Freeman Dyson, whose 1979 piece entitled "Time without End: Physics and Biology in an Open Universe" examined the details of the expansion and cooling of the universe and the implications it might have for the evolution of life – not just in billions, but in trillions of years (Dyson, 1979). Like the contemplation of the multiverse, the contemplation of the longest timescales of the universe extends known astrophysical processes and physical laws to their extremes and is both intrinsically untestable and a very bold kind of speculation. Dyson wrote that "it is better to be too bold than too timid in extrapolating our knowledge from the known to the unknown." To follow Dyson's example, we would like to outline his model for the physics of the open universe, updated for the prospect of dark energy, which was unknown at the time of his 1979 article.

Fig. 15.1  A star like the Sun in its final stages – in this case the Southern Ring Nebula, 2500 light-years distant. The JWST has caught this image, which shows the ring of gas that has just emerged from the star's collapse at the end of its life, leaving behind the glowing white dwarf star at its center. After 5 billion years, our Sun will suffer a similar fate, and its core will be converted into a white dwarf star. (https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-captures-dying-star-s-final-performance-in-fine-detail/)

Dyson points out that in an open universe, the number of galaxies visible over time would increase in proportion to the expanding volume of space within our horizon, even as the angular size of galaxies falls. The continued expansion of our horizon will intercept more galaxies for billions of years. Eventually, however, the exponential expansion predicted from dark energy will cause the most distant galaxies to recede at velocities that exceed the speed of light, reducing the number of galaxies visible within our horizon. This motion in no way violates Einstein's theory of special relativity: it is the result of the cumulative effect of spatial expansion, so the galaxies are not "moving" through space; rather, it is the space that is expanding around them. We can confidently predict that in an open universe with dark energy, the number of galaxies visible in hundreds of billions of years will decrease. It is even possible that the Milky Way and its Local Group could fill the available horizon for a far-future Earth.
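The distance at which the naive recession speed v = H0·d reaches the speed of light – the Hubble distance – can be sketched with the same kind of assumed round-number parameter (H0 = 70 km/s/Mpc):

```python
# Sketch (not from the text): the Hubble distance, where v = H0 * d = c.
C = 299_792.458        # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s/Mpc (assumed round value)
MPC_TO_GLY = 3.2616e-3 # Mpc to billions of light-years

d_hubble_mpc = C / H0  # distance whose naive recession speed equals c
print(f"Hubble distance ~ {d_hubble_mpc * MPC_TO_GLY:.1f} billion light-years")
```

Galaxies beyond roughly this distance recede faster than light in this naive sense, yet, as described above, no object moves through space faster than light.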

Dyson catalogs some of the expected effects of the long-term evolution of the universe over the longest possible timescales, based on known astrophysics. Most stars will have evolved well past their main-sequence (Hydrogen-burning) lifespans by 100 trillion years, and gravitational encounters with other stars will have scattered many planets by 1000 trillion years. This dynamical scattering will continue to scramble our orderly galaxy further, so that a significant fraction of stars will be flung between the galaxies by the dynamical processes of galaxy interactions within 10¹⁸ years, or a million trillion years. Over an even longer timeframe, Dyson predicts that all matter will decay to iron – the most stable nucleus in the periodic table, with the highest binding energy per nucleon. This process already governs the collapse of high-mass stars, as the iron core at the center of such a star cannot be fused to release further energy and is unable to resist collapse, bringing about a Type II supernova. Dyson speculates further that ordinary nuclei – given enough time – would eventually decay into iron, but over an unimaginably vast timeframe of 10¹⁵⁰⁰ years. Each of the decays mentioned above would emit some energy, but overall, the death and disappearance of stars would dictate a long-term cooling trend, with the universe dropping steadily in temperature as it approaches (but never reaches) absolute zero.

We can also consider the prospect of life within this thinning and dimming far-future universe. Is life possible in the low-energy, low-temperature universe of the far-distant future? Dyson points out that life can evolve substantially in a few billion years, as evidenced by the dramatic shifts in lifeforms within Earth's history so far. He argues that there is ample time for life to evolve further and adapt to the lower temperatures and the reduced amount of energy available in the long-term future universe.
Dyson speculates that the nature of intelligence could be preserved through either "sentient black clouds" or "sentient computers" in this longer-term future, especially if the basis of consciousness could be created within more static structures of molecules – instead of the very dynamic molecular biochemistry of our present-day brains. Dyson notes that in the latter case, life can continue "only as long as warm environments exist, with liquid water and a continuing supply of free energy to support a constant rate of metabolism," and also that "life is free to evolve into whatever material embodiment best suits its purposes." Dyson explores "scaling arguments" that consider the information content and energy of a life form in terms of its quantum states. Within Dyson's analysis is the construct of "subjective time," which would scale with the longer lifespans of lower-energy creatures, and a quantity Q, which characterizes a lifeform in terms of its "rate of entropy production per unit of subjective time." From these, Dyson developed a surprising mathematical derivation of the energy needed to maintain the complexity of life, concluding that "the total energy required for indefinite survival is finite." Life could persist by exploiting new possibilities for hibernation, and within a much slower type of "subjective time" in which the clocks that govern conscious thought are allowed to slow continuously as the energy supplies of the far-distant universe dwindle.

A more recent consideration of life in the far-future open universe points out that the exponential expansion predicted from dark energy produces something like "an inside-out black hole" – with matter and radiation trapped outside the horizon rather than inside it. As space expands ever faster beyond our horizon, we remain trapped in an increasingly isolated patch of space, which will inevitably contain less information – and therefore will be less hospitable to complex life on the most extended timescales, despite Dyson's optimism. With current models, the retreat of other galaxies beyond the horizon is expected to take about 100 billion years. Within that timeframe, nearby galaxies will have merged into a "supergalaxy" through chance encounters, which in our case would include the merger of the Milky Way with the Andromeda galaxy (expected to collide with the Milky Way in approximately 4–5 billion years), as well as with the Magellanic Clouds (Krauss & Scherrer, 2008). The Large and Small Magellanic Clouds are currently the Milky Way's closest neighbors, at distances of roughly 150,000–200,000 light-years. At this distance, tidal forces have "stripped" much of the interstellar medium from these small and irregular galaxies, creating a trail of gas in the wake of their motion known as the Magellanic Stream.
The effects of the Magellanic Clouds interacting with the Milky Way have also disrupted the interstellar medium within these galaxies, triggering star formation that can be seen in the Large Magellanic Cloud in regions like the Tarantula Nebula, a vast star-forming region stretching hundreds of light-years across. Infrared images from the JWST show batches of thousands of stars forming within this nebula. These young stars are like the future generations of stars that will be triggered as the Milky Way merges further with the Magellanic Clouds and the Andromeda galaxy over the coming billions of years (Fig. 15.2).

Fig. 15.2  The Tarantula Nebula, a star-forming region in the Large Magellanic Cloud (LMC) triggered by interactions between our Milky Way galaxy and the LMC. As the universe continues to evolve, the nearby LMC and SMC are expected to merge with our galaxy. Interactions between the galaxies will trigger new generations of stars for many billions of years to come. (Image from https://stsci-opo.org/STScI-01GA76RM0C11W977JRHGJ5J26X.png)

As our cosmic expansion becomes dominated by dark energy, the universe will accelerate at a nearly exponential clip. The expanding space will cause distant galaxy clusters to recede beyond our horizon first, followed by the nearer Coma and Virgo clusters, finally leaving our supergalaxy of the Milky Way as something of a cosmic island. The gravitational bonds within this supergalaxy are expected to hold together against the expansion of space, according to the most likely predictions of dark energy. At this point, the gradual decline of stars over about 100 trillion years would proceed much as Dyson predicted, albeit with reduced information content in the universe and a diminishing level of background energy from the cosmic microwave background and other extragalactic sources, which will have receded beyond our horizon (Fig. 15.3).

Fig. 15.3  Diagram of the open model of the universe that includes dark energy. The early universe's expansion decelerates until the density of matter drops below the energy density provided by dark energy, which does not diminish as the volume of the universe grows. The future of this universe is an accelerated expansion once it enters the era of dark-energy domination. (Figure by the author)

Looking in more detail at the biological aspects of this dark-energy-dominated future universe, Krauss and Starkman (1999) evaluated the information and energy content of the universe and its impact on life. In their analysis, the ability to extract energy from black holes becomes a vital component of a far-future advanced civilization, though the thinning matter content of the universe limits the feasibility of this energy source for providing indefinite growth. Like Dyson, Krauss and Starkman conclude that hope for continued life requires the migration of life into lower-energy formats, such as super-intelligent computers. They predict that greatly reduced power requirements will suffice for the most basic kinds of computing necessary to maintain life in the low-energy universe, due to a revised scaling of energy use at low temperatures. This scaling implies that while processing speed decreases linearly with temperature, the power consumption
could decrease as the square of the temperature. This means that as an organism's temperature is halved in response to the declining energy resources of the universe, "an equivalently complex organism could think at half the speed but consume a quarter of the power." This race to lower temperatures could continue, limited only by the ability of these low-temperature organisms to dissipate heat, which seems to set a floor on the operable temperatures of life at about 10⁻¹³ K, below which such ultra-cold future life would require extended periods of hibernation (Krauss & Starkman, 1999).

In our present-day universe, the lowest temperature regions can be found in the hearts of star-forming clouds, where dark dust particles shield the molecules inside from starlight and allow cooling to temperatures at or just above the cosmic microwave background temperature, currently 2.7 K. This microwave background will continue to bathe the interiors of molecular clouds with radiation for billions of years to come and provides a limit to the amount of cooling possible in the future universe. Bathing in this intergalactic counterpart of solar energy, our universe should be able to sustain some level of chemical activity for billions of years afterwards, perhaps even supporting life in the centers of these dark clouds (Fig. 15.4).

Fig. 15.4  In a view of what may be the future harbor of life in the far-distant universe, JWST captures the Carina Nebula, whose dark clouds shelter cold molecular clouds with rich molecular composition and complex chemistry. (https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-reveals-cosmic-cliffs-glittering-landscape-of-star-birth)

As we consider the universe's ultimate fate, we discover that dark energy will drive the expansion toward lower and lower temperatures and transform the entire universe into something of a vacuum. It is interesting to note that invisible fields such as the Higgs field initially powered the expansion of space in the early Big Bang, and the hidden "vacuum energy" currently driving the acceleration of space will further dominate the universe's evolution. On the most extended timescales, our universe will begin to resemble this ever-increasing vacuum, as galaxies and matter become ever more dispersed in the later times of the universe. The origins and nature of this vacuum energy or "dark energy" will drive generations of astronomers and physicists toward new discoveries that will help us understand the hidden realities within what appears to be empty space.

Our insights from physics and astrophysics have taken us very far toward understanding the ultimate future, the depths of space that define our cosmic horizon, and the smallest dimensions of space and time within the particle universe. Our telescopes and particle accelerators have mapped nearly the entire observable universe within the fundamental constraints placed upon us by the speed of light and the indeterminacy of quantum physics. With tools like the JWST, the CERN LHC, and fleets of space- and ground-based telescopes, our discoveries will continue to complete our mapping of the physical universe for decades to come. There will still be areas beyond our reach – such as the regions outside our light cone, inside the event horizons of black holes, and in the earliest instants of the Big Bang, when our universe was an undifferentiated pinpoint of energy. These regions are, for now, the realm of speculation – or inspired conjecture. As Carl Sagan stated in the introduction to his popular television series Cosmos (Sagan, 1980):

We wish to pursue the truth, no matter where it leads. But to find the truth, we need imagination and skepticism both. We will not be afraid to speculate. But we will be careful to distinguish speculation from fact.

This admonition crisply reiterates René Descartes's formulation of the scientific method from centuries ago, and it will continue to guide us as we further our explorations of the universe. Even as we recognize the fundamental limitations of our knowledge from physical science, we can still be inspired by ideas from earlier centuries about the nature of the vacuum, or "emptiness," which presents many mysteries for modern physics and astrophysics.

Early ideas in Buddhism describe "emptiness" as the ultimate state of the universe in which we live, with the objects around us – and the meanings we ascribe to them – co-arising from this emptiness and from our consciousness. The Buddhist term sunyata expresses this notion of emptiness, which, along with karuna (compassion), can guide one's actions and form a basis for wisdom in Mahayana Buddhism. Within Buddhism, the concept of dependent origination describes how the interactions between all living beings work together to create the reality we experience. As stated by the Buddhist philosopher Nagarjuna, active in the second century AD (Watson, 2014, p. 43):

When emptiness is possible
Everything is possible;
Were emptiness impossible,
Nothing would be possible.

Chinese philosophy can also provide meaningful insights into the nature of “emptiness.” In Taoism, the continuous change of the universe between yin and yang also includes a dynamic interplay between being and non-being or “emptiness.” As stated in the Tao Te Ching (Watson, 2014, p. 51):

For though all creatures under heaven are the products of Being,
Being itself is the product of Non-being.

In Chinese philosophy, emptiness, or the void, can be a source of energy, just as in astrophysics. The relationship between the notion of ch'i, or qi, which can be translated as "vital stuff," and the concept of the "supreme void," or tai xu, was described by the Neo-Confucian philosopher Zhang Zai. Chinese philosophy, like Greek philosophy, was aware of the transformations of the "vital stuff" that underlies our physical world. The qi can be condensed and transformed into the many things of our Earth, while its lighter components differentiate to become the atmosphere and even fill space, creating the heavens, or tian. Zhang Zai described the dynamic interplay between substance and non-substance as one that brings "life-giving generativity" through the change it makes possible. In this strand of Chinese thought, the fundamental reality is the void, and even when qi is "maximally condensed," its "inherent reality" is still the void (Angle & Tiwald, 2017). These profound modes of thinking more broadly about time and space can help us contemplate the new discoveries that will arise in the coming centuries and better comprehend the many undiscovered mysteries of the vacuum energy that dominates our present-day universe (Fig. 15.5).

Fig. 15.5  Detail from Wang Ximeng's A Thousand Li of Rivers and Mountains, painted in 1113, which shows the emphasis within Chinese art on both emptiness and form, or positive and negative space. This landscape painting also evokes the horizons contemplated by both our ancestors and our current generation as we map the universe to its fullest extent with our telescopes and particle accelerators. (Image from https://commons.wikimedia.org/wiki/Category:A_Thousand_Li_of_Rivers_and_Mountains)

References

Angle, S. C., & Tiwald, J. (2017). Neo-Confucianism: A philosophical introduction. Polity.
Chu, J. (2022, April 6). Physicists embark on a hunt for a long-sought quantum glow. MIT News, Massachusetts Institute of Technology. https://news.mit.edu/2022/physicists-quantum-glow-0426
Dyson, F. J. (1979). Time without end: Physics and biology in an open universe. Reviews of Modern Physics, 51(3), 447–460. https://doi.org/10.1103/revmodphys.51.447
Krauss, L. M., & Scherrer, R. J. (2008). The end of cosmology? Scientific American, 298(3), 46–53. https://www.jstor.org/stable/26000516
Krauss, L. M., & Starkman, G. D. (1999). The fate of life in the universe. Scientific American, 281(5), 58–65. https://www.jstor.org/stable/26058484
Lambrecht, A. (2002). The Casimir effect: A force from nothing. Physics World. https://indico.cern.ch/event/328613/contributions/1712485/attachments/639215/879605/Casimir_Force_PhysWorld_2002.pdf
Sagan, C. (1980). Cosmos. Random House.
Tegmark, M. (2003). Parallel universes. Scientific American, 288(5), 40–51. https://www.jstor.org/stable/26060282
Thompson, J. (2022, May 20). Physicists find a shortcut to seeing an elusive quantum glow. Scientific American. https://www.scientificamerican.com/article/physicists-find-a-shortcut-to-seeing-an-elusive-quantum-glow1
Watson, G. (2014). A philosophy of emptiness. Reaktion Books.
Weatherall, J. (2016). Void: The strange physics of nothing. Yale University Press.
Williams, M. (2016). Will Earth survive when the sun becomes a red giant? Phys.org. https://phys.org/news/2016-05-earth-survive-sun-red-giant.html

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. B. E. Penprase, Models of Time and Space from Astrophysics and World Cultures, Astronomers' Universe. https://doi.org/10.1007/978-3-031-27890-7
