Use of Smartphones in Optical Experimentation

Table of contents:
Tutorial Texts Series Related Title
Other Related SPIE Press books:
Introduction to the Series
Contents
Preface
Chapter 1 Smartphones and Their Optical Sensors
1.1 History and Current Utilization in Education
1.2 Smartphone Camera
1.2.1 Optical sensor
Resolution
Spectral response of a photosite
Color image, image intensity, and linearity
1.2.2 Adaptive optical system
1.3 Using the Smartphone Camera in Experiments
References
Chapter 2 Experimental Data Analysis
2.1 Experiments and Measurement Error
2.1.1 General physics experimental procedure
2.1.2 The experimental measurements
2.1.3 Errors in measurements
2.2 Numerical/Parameter Estimation
2.2.1 Estimation of a direct measurement
2.2.2 Estimation of a relationship
2.3 Model Testing
References
Chapter 3 Law of Reflection
3.1 Introduction
3.2 Smartphone Experiment (Alec Cook and Ryan Pappafotis, 2015)
3.2.1 General strategy
3.2.2 Materials
3.2.3 Experimental setup
3.2.4 Experimental results
Chapter 4 Law of Refraction
4.1 Introduction
4.2 Smartphone Experiment (Alec Cook and Ryan Pappafotis, 2015)
4.2.1 General strategy
4.2.2 Materials
4.2.3 Experimental setup
4.2.4 Experimental results
Chapter 5 Image Formation
5.1 Introduction
5.2 Smartphone Experiment (Michael Biddle and Robert Dawson, 2015; Yoong Sheng Phang, 2021)
5.2.1 General strategy
5.2.2 Materials
5.2.3 Experimental setup
5.2.4 Experimental results
References
Chapter 6 Linear Polarization
6.1 Introduction
6.2 Smartphone Experiment (Sungjae Cho and Aojie Xue, 2019)
6.2.1 General strategy
6.2.2 Materials
6.2.3 Experimental setup
6.2.4 Experimental results
Chapter 7 Fresnel Equations
7.1 Introduction
7.2 Smartphone Experiment (Graham McKinnon, 2020)
7.2.1 General strategy
7.2.2 Materials
7.2.3 Experimental setup
7.2.4 Preliminary results
Chapter 8 Brewster's Angle
8.1 Introduction
8.2 Smartphone Experiment (Robert Bull and Daniel Desena, 2019)
8.2.1 General strategy
8.2.2 Materials
8.2.3 Experimental setup
8.2.4 Experimental results
Chapter 9 Optical Rotation
9.1 Introduction
9.2 Smartphone Experiment (Nicholas Kruegler, 2020)
9.2.1 General strategy
9.2.2 Materials
9.2.3 Experimental setup
9.2.4 Experimental results
References
Chapter 10 Thin Film Interference
10.1 Introduction
10.2 Smartphone Experiment (Nicolas Lohner and Austin Baeckeroot, 2017)
10.2.1 General strategy
10.2.2 Materials
10.2.3 Experimental setup
10.2.4 Experimental results
Chapter 11 Wedge Interference
11.1 Introduction
11.2 Smartphone Experiment (Graham McKinnon and Nicholas Brosnahan, 2020)
11.2.1 General strategy
11.2.2 Materials
11.2.3 Experimental setup
11.2.4 Experimental results
Chapter 12 Diffraction from Gratings
12.1 Introduction
12.2 Smartphone Experiment I: Diffraction from an iPhone Screen (Zach Eidex and Clayton Oetting, 2018)
12.2.1 General strategy
12.2.2 Materials
12.2.3 Experimental setup
12.2.4 Experimental results
12.3 Smartphone Experiment II: Diffraction from a Grating and a Hair (Nick Brosnahan, 2020)
12.3.1 General Strategy
12.3.2 Materials
12.3.3 Experimental setup
12.3.4 Experimental results
References
Chapter 13 Structural Coloration of Butterfly Wings and Peacock Feathers
13.1 Introduction
13.2 Smartphone Experiment I: Diffraction in a Box-Scale Spacing of Morpho Butterfly Wings (Mary Lalak and Paul Brackman, 2014)
13.2.1 General strategy
13.2.2 Materials
13.2.3 Experimental setup
13.2.4 Experimental results
13.3 Smartphone Experiment II: Barbule Spacing of Peacock Feathers (Caroline Doctor and Yuta Hagiya, 2019)
13.3.1 General strategy
13.3.2 Materials
13.3.3 Experimental setup
13.3.4 Experimental results
References
Chapter 14 Optical Rangefinder Based on Gaussian Beam of Lasers
14.1 Introduction
14.2 Smartphone Experiment I: A Two-laser Optical Rangefinder (Elizabeth McMillan and Jacob Squires, 2014)
14.2.1 General strategy
14.2.2 Materials
14.2.3 Experimental setup
14.2.4 Experimental results
14.3 Smartphone Experiment II: Estimating the Beam Waist Parameter with a Single Laser (Joo Sung and Connor Skehan, 2015)
14.3.1 General strategy
14.3.2 Materials
14.3.3 Experimental setup
14.3.4 Experimental results
Chapter 15 Monochromator
15.1 Introduction
15.2 Smartphone Experiment I: A Diffractive Monochromator (Nathan Neal, 2018)
15.2.1 General strategy
15.2.2 Materials
15.2.3 Experimental setup
15.2.4 Experimental results
15.3 Smartphone Experiment II: A Dispersive Monochromator (Myles Popa and Steven Handcock, 2016)
15.3.1 General strategy
15.3.2 Materials
15.3.3 Experimental setup
15.3.4 Experimental results
Chapter 16 Optical Spectrometers
16.1 Introduction
16.2 Smartphone Experiment I: A Diffractive Emission Spectrometer (Helena Gien and David Pearson, 2016)
16.2.1 General strategy
16.2.2 Materials
16.2.3 Experimental setup
16.2.4 Experimental results
16.3 Smartphone Experiment II: Spectra of Different Combustion Sources (Ryan McArdle and Griffin Dangler, 2016)
16.3.1 General strategy
16.3.2 Materials
16.3.3 Experimental setup
16.3.4 Experimental results
Chapter 17 Dispersion
17.1 Introduction
17.2 Smartphone Experiment (Eric Older and Mario Parra, 2018)
17.2.1 General strategy
17.2.2 Materials
17.2.3 Experimental setup
17.2.4 Experimental results
Chapter 18 Beer's Law
18.1 Introduction
18.2 Smartphone Experiment (Sean Krautheim and Emory Perry, 2018)
18.2.1 General strategy
18.2.2 Materials
18.2.3 Experimental setup
18.2.4 Experimental results
Chapter 19 Optical Spectra of Incandescent Lightbulbs and LEDs
19.1 Introduction
19.2 Smartphone Experiment I: Spectral Radiance of an Incandescent Lightbulb (Tyler Christensen and Ryan Matuszak, 2017)
19.2.1 General strategy
19.2.2 Materials
19.2.3 Experimental setup
19.2.4 Experimental results
19.3 Smartphone Experiment II: Spectral Radiance of White LED Lightbulbs (Troy Crawford and Rachel Taylor, 2018)
19.3.1 General strategy
19.3.2 Materials
19.3.3 Experimental setup
19.3.4 Experimental results
References
Chapter 20 Blackbody Radiation of the Sun
20.1 Introduction
20.2 Smartphone Experiment (Patrick Mullen and Connor Woods, 2015)
20.2.1 General Strategy
20.2.2 Materials
20.2.3 Experimental setup
20.2.4 Experimental results
References
Chapter 21 Example Course Instructions for Smartphone-based Optical Labs
21.1 General Lab Instructions
21.1.1 Important notices for students
21.1.2 Lab materials
21.1.3 Lab instructions
21.2 Polarization Labs
21.2.1 Required lab materials
21.2.2 Lab instruction
21.2.3 Additional labs
21.3 Reflection Labs
21.3.1 Required lab materials
21.3.2 Lab instructions
21.3.3 Additional labs
21.4 Interference Labs
21.4.1 Required lab materials
21.4.2 Lab instruction
21.4.3 Additional labs
21.5 Diffraction Labs
21.5.1 Required lab materials
21.5.2 Lab instruction
21.6 Summary of Lab Results
Appendix I Materials Used in Labs
Appendix II Web Links and Smartphone Applications
Appendix III Introduction to ImageJ
III.1 Starting ImageJ
III.2 ImageJ Menu
III.3 ImageJ Toolbar
III.4 Image Analysis Example Using ImageJ
Appendix IV Connecting the Laser Diode

Yiping Zhao and Yoong Sheng Phang


Use of Smartphones in Optical Experimentation shows how smartphone-based optical labs can be designed and realized. The book presents demonstrations of fundamental geometric and physical optical principles, including the law of reflection, the law of refraction, image formation equations, dispersion, Beer’s law, polarization, Fresnel’s equations, optical rotation, diffraction, interference, and blackbody radiation. Many practical applications—how to design a monochromator and a spectrometer, use the Gaussian beam of a laser, measure the colors of LED lights, and estimate the temperature of an incandescent lamp or the Sun—are also included. The experimental designs provided in this book represent only a hint of the power of leveraging the technological capability of smartphones and other low-cost materials to create a physics lab.


This book can be used as a guide for undergraduate students and instructors for a hands-on experience with optics, especially for an online optical lab; elementary and high school science teachers to develop smartphone-based labs for classroom demonstrations; and anyone who wants to explore fundamental STEM concepts by designing and performing experiments anywhere.

Yiping Zhao Yoong Sheng Phang

P.O. Box 10, Bellingham, WA 98227-0010
ISBN: 9781510654976
SPIE Vol. No.: TT124

Tutorial Texts Series Related Title
• Optics Using MATLAB, Scott W. Teare, Vol. TT111

(For a complete list of Tutorial Texts, see http://spie.org/publications/books/tutorialtexts.)

Other Related SPIE Press books:
• Discovering Light: Fun Experiments with Optics, Maria Viñas-Peña, Vol. PM324
• How to Set Up a Laser Lab, Ken L. Barat, Vol. SL02
• Introducing Photonics, Brian Culshaw, Vol. PM324
• Optics for Technicians, Max J. Riedl, Vol. PM258
• Seeing the Light: Optics Without Equations, William L. Wolfe, Vol. PM349

Library of Congress Cataloging-in-Publication Data
Names: Zhao, Yiping, author. | Phang, Yoong Sheng, author.
Title: Use of smartphones in optical experimentation / Yiping Zhao, Yoong Sheng Phang.
Description: Bellingham, Washington, USA : SPIE Press, [2022] | Includes bibliographical references.
Identifiers: LCCN 2022023950 | ISBN 9781510654976 (paperback) | ISBN 9781510654983 (pdf)
Subjects: LCSH: Optics–Experiments. | Smartphones–Scientific applications.
Classification: LCC QC365 .Z43 2022 | DDC 535.078–dc23/eng20220826
LC record available at https://lccn.loc.gov/2022023950

Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: [email protected]
Web: http://spie.org

Copyright © 2022 Society of Photo-Optical Instrumentation Engineers (SPIE) All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means without written permission of the publisher. The content of this book reflects the work and thought of the authors. Every effort has been made to publish reliable and accurate information herein, but the publisher is not responsible for the validity of the information or for any outcomes resulting from reliance thereon. Printed in the United States of America. First printing 2022. For updates to this book, visit http://spie.org and type “TT124” in the search field.

Introduction to the Series

The Tutorial Text series provides readers with an introductory reference text to a particular field or technology. The books in the series are different from other technical monographs and textbooks in the manner in which the material is presented. True to their name, they are tutorial in nature, and graphical and illustrative material is used whenever possible to better explain basic and more-advanced topics. Heavy use of tabular reference data and numerous examples further explain the presented concept. A grasp of the material can be deepened and clarified by taking corresponding SPIE short courses.

The initial concept for the series came from Jim Harrington (1942–2018) in 1989. Jim served as Series Editor from its inception to 2018. The Tutorial Texts have grown in popularity and scope of material covered since 1989. They are popular because they provide a ready reference for those wishing to learn about emerging technologies or the latest information within a new field. The topics in the series have grown from geometrical optics, optical detectors, and image processing to include the emerging fields of nanotechnology, biomedical optics, engineered materials, data processing, and laser technologies.

Authors contributing to the series are instructed to provide introductory material so that those new to the field may use the book as a starting point to get a basic grasp of the material. The publishing time for Tutorial Texts is kept to a minimum so that the books can be as timely and up-to-date as possible. When a proposal for a text is received, it is evaluated to determine the relevance of the proposed topic. This initial reviewing process helps authors identify additional material or changes in approach early in the writing process, which results in a stronger book. Once a manuscript is completed, it is peer reviewed by multiple experts in the field to ensure that it accurately communicates the key components of the science and technologies in a tutorial style.

It is my goal to continue to maintain the style and quality of books in the series and to further expand the topic areas to include new emerging fields as they become of interest to our readers.

Jessica DeGroote Nelson
Edmund Optics


Preface

Since 2015, I have incorporated smartphone-based optics projects in my Introduction to Modern Optics classes at the University of Georgia. In addition to completing the required optics labs, students in this class are asked to work in pairs on an optics project using a smartphone. Each project involves the use of some cheap household materials, LEGO® blocks, or a 3D printer to design an apparatus incorporating a smartphone to demonstrate an optical principle or application. The total cost of each project is less than $30. The teams pick a topic for their project from a list provided by the instructor or propose a project idea themselves during the second week of class. A list of topics is summarized below:

• To demonstrate the laws of reflection and refraction
• To examine the dispersion relationship for a liquid
• To explore light interaction with a vapor, a liquid, and a solid (Beer's law, etc.)
• To perform Gaussian beam analysis
• To demonstrate the image formation principle
• To demonstrate the polarization properties of light
• To demonstrate the Fresnel equations
• To demonstrate Rayleigh scattering
• To demonstrate Mie scattering (for milk or other liquids)
• To demonstrate the principle of interference
• To demonstrate the law of diffraction
• To determine the thickness of a dielectric thin film
• To assess the morphological features on a CD or a DVD
• To build a monochromator
• To measure the color of flowers or plants
• To measure the color of beetles
• To demonstrate the blackbody radiation principle
• To obtain the temperature of the Sun or a hot-top remotely
• To improve the quality of optical instruments based on a smartphone
• To construct a new optical instrument based on a smartphone

These projects cover many aspects of modern and introductory optics. For the class, the instructor advises each team on the feasibility of their projects. Once the project topic is decided, the students spend roughly one month obtaining the necessary materials, constructing or outlining a strategy, and writing a two-page proposal. In the proposal, the students concisely state the project's objective, the instrument/apparatus design plan, the calibration plan, and the experimental and data analysis plan. After each team submits their proposal, the instructor provides feedback on the project's potential problems and ways to resolve them. Then, each team is given a 1–1.5-month period to complete the project (including experimental setup, data collection, data analysis, and conclusions), a final project report, and a PowerPoint® presentation. From 2015 to 2019, 33 different projects were completed. Some of the presentations can be found on the YouTube channel "UGA Modern Optics: Smartphone Projects" (https://www.youtube.com/channel/UCDNH_mEXvy-Rp98ri96EuLw).

In 2020, due to the COVID-19 pandemic and social distancing regulations, the labs for my Introduction to Modern Optics class were conducted remotely based on optical projects from previous years. The students were provided a package of lab materials (see Chapter 21), including a laser diode, a battery box, a pair of AA batteries, two rectangular plastic polarizer sheets, three 1" × 3" glass slides, one cuvette, one grating of known spacing, and one grating of unknown spacing. They were asked to perform experiments with the following objectives: to demonstrate Malus's law, to measure the optical rotation of glucose solution, to measure the angle-dependent reflection and transmission of a glass slide for s- and p-polarizations, to determine Brewster's angle, to measure the thickness of standard printer paper via thin film interference or wedge interference, and finally, to determine the spacing of the unknown grating and the diameter of a hair through diffraction. The students were asked to set up their own labs using household items, collect and analyze the data, find possible errors, reperform the experiments, and write manuscript-style reports. The results were very surprising. For a given experiment, different students had totally different setups and yet their results were more or less consistent.

The optics projects and labs described above provide an alternative method of laboratory physics education for college students known as fingertip labs, which are widely accessible due to the massive user base and affordability of smartphones/tablets. According to a recent survey from January 2022, there are 6.648 billion smartphone users worldwide, making up 83.96% of the world's population. Most children get their first mobile device around the age of 12. The 18–29 age group has 100% cell phone ownership in the U.S. and 94% cell phone ownership worldwide. This widespread market penetration makes smartphone/tablet hardware and software extremely cost-effective yet powerful, reliable, and capable of working almost anywhere in the world. There are very few high-end consumer electronics devices that can compete with the smartphone/tablet in terms of
cost-effectiveness, ubiquity, computational power, and data connectivity and coverage. Smartphones and tablets also offer sophisticated opto-electronic sensing, micro-electromechanical systems (MEMS)-based sensing, motion sensing, and geo-referencing capability, coupled with a user-friendly environment for development of mobile application software (apps), making them compelling and unique educational tools. These smartphone-based optical projects and labs offer the following advantages:

1. To use hands-on smartphone-based experiments to enable a better understanding of basic optical concepts, especially for students who have difficulty with abstract thinking
2. To encourage students to learn improved data analysis and modeling techniques while conducting authentic scientific research and obtaining a more rigorous training in scientific report writing
3. To help students gain confidence in their ability to apply knowledge learned in the classroom to design a smartphone-based instrument or experiment
4. To motivate students to be creative designers through hands-on experience with modern technology used for practical applications
5. To improve students' problem-solving skills
6. To serve as a popular template for other science, technology, engineering, and mathematics (STEM) classes and extend the STEM education model to K-12 education and the general community

They also meet the five fundamental goals for introductory physics labs according to the American Association of Physics Teachers (AAPT): “I. The Art of Experimentation: The introductory laboratory should engage each student in significant experiences with experimental processes, including some experience designing investigations. II. Experimental and Analytical Skills: The laboratory should help the student develop a broad array of basic skills and tools of experimental physics and data analysis. III. Conceptual Learning: The laboratory should help students master basic physics concepts. IV. Understanding the Basis of Knowledge in Physics: The laboratory should help students to understand the role of direct observation in physics and to distinguish between inferences based on theory and on the outcomes of experiments. V. Developing Collaborative Learning Skills: The laboratory should help students develop collaborative learning skills that are vital to success in many lifelong endeavors.” (Am. J. Phys. 1998, 66, 483–485) In addition, these projects provide physics teachers an alternative to labs when the courses are taught remotely due to special circumstances, such as a pandemic.


This book is a summary of the reports of projects or labs based on the Introduction to Modern Optics course that I taught at the University of Georgia from 2015 to 2020. Yoong Sheng Phang and I wrote Chapters 1, 2, and 21, as well as the Appendix. In Chapters 3–20, we composed the brief introduction section on the physics principle. The design, setup, and preliminary results of the example smartphone-based labs were adapted from the students’ original lab reports. The names of the original student authors are explicitly given in each “Smartphone Experiment” section. The final descriptions of the lab and results were entirely reformatted and edited. We also replotted and/or refitted several figures for better illustration, data interpretation, analysis, and consistency. We expect that some of the results from these reports may not be completely accurate because they were obtained by inexperienced undergraduate students in 1–1.5 months. I will provide more accurate results once the experiments have been repeated and welcome readers who are interested in the experiments to provide improved experimental designs and results ([email protected]). Readers are also welcome to provide suggestions or their own topics and experiments to grow this book in the future. It should be noted that we did not intend to provide standard lab instructions. In each experiment, step-by-step instructions on the setup of the labs, the details of data analysis, or the stereolithography (STL) files for the 3D printer are not given. Cookbook-like detailed instruction would deviate from the goal of encouraging students to be creative and think critically. I firmly believe that the specific topics and one or two successful lab designs given in this book should provide readers enough information to inspire them to design their own experiments. In fact, Chapter 21 shows an example of the lab course format that I gave in 2020; for each of these labs, different students had very different experimental designs. We hope that this book can serve the following purposes: first, it may give some rough ideas about the versatility of smartphones in physics laboratories for education; second, it may serve as a guidebook for science teachers who want to incorporate these kinds of labs into their classroom or outside activities; third, it could provide inspiration for students or science hobbyists to design and construct their own labs; and finally, it could be a useful resource for parents who want to initiate a science journey for their child. We would like to thank the Office of Instruction at the University of Georgia for providing a Learning Technology Grant in 2017 to support this effort. The Department of Physics and Astronomy at the University of Georgia always provides generous support to reimburse students for the cost of materials. The following students who attended the Introduction to Modern Optics class during the 2015–2020 academic years are the true contributors to this book: Mohamed Ali, Luke Allen, Jackson Alvarez, Brenton Bailey, James Barksdale, George Barron, Matt Bellury, Michael Biddle, Nicholas Brosnahan, Thomas Brown, Henry Browne, Jackson Browne, Robert Bull, Natalie Byrd,
Alexia Caldwell, Grayson Carswell, Sungjae Cho, Tyler Christensen, Alec Cook, Noah Coriell, Ivy Cornelison, William Couch, Elijah Courtney, Joshua Courtney, Troy Crawford, Griffin Dangler, Robert Dawson, Daniel Desena, Caroline Doctor, Zach Eidex, John Ericson, Nathan Forbes, Trey Fountain, Autumn Gibson, Helena Gien, Caroline Grant, Christopher Grayson, Jonathan Griffey, Yuta Hagiya, Steven Hancock, Alex Harp, James Haverstick, Andrea Hill, Beckah Hufstetler, Andrew King, Sean Krautheim, Nicholas Kruegler, Nicolas Lohner, Justin Massey, Ryan Matuszak, Ryan McArdle, Sean McIlvane, Graham McKinnon, Aditya Mewar, Gaith Midani, Ansley Miller, Nathan Neal, Isabella Nienaber, Clayton Oetting, Eric Older, Dain Owen, Ryan Pappafotis, Mario Parra, Romil Patel, David Pearson, Emory Perry, Yoong Sheng Phang, Myles Popa, Austin Schulz, David Seiden, Cameron Shadmehry, Andrew Short, Jonathan Sinclair, Conner Skehan, John Sullivan, Joo Sung, Rachel Taylor, Zach Thomas, John Turnage, Anthony Weaver, Aojie Xue, and Malison Young. We would like to acknowledge the support of the graduate teaching assistants Mona Asadinamin, Yifan Dai, Nima Karimitari, Amara Katabarwa, and Shahab Razavi during these classes. We would also like to thank Karen Zhao for proofreading some of the chapters. Yiping Zhao August 2022 Athens, Georgia, USA

Chapter 1

Smartphones and Their Optical Sensors

1.1 History and Current Utilization in Education

A smartphone is a handheld device that integrates the basic functions of a mobile phone with advanced computing capabilities. The concept of combining a telephone and a computer chip dates to the early 1970s when Motorola introduced the first handheld cellular mobile phones. However, these cell phones were hardly ergonomic and ran on low data rate networks at speeds of less than 100 Kb/s [1]. In 1994, the Simon Personal Communicator, which is widely regarded as the world's first smartphone, was launched by IBM [2]. The Simon was the first phone to incorporate the functions of a cell phone with those of a personal digital assistant (PDA), allowing users to call, page, and fax from their cell phones [3]. The Simon was also the first to use a touchscreen and stylus and include other new features such as an address book, a calendar, a calculator, and an appointment scheduler. Access to the mobile web was introduced by the Nokia 9000 Communicator launched in 1996 [4]. In 2000, the first mobile phone camera was unveiled in Sharp's J-SH04 model, which had a 0.3-megapixel (MP) resolution and allowed users to send images electronically. The deployment of 3G networks in 2001 resulted in bit rates that were high enough to accommodate the sending and receiving of photographs, video clips, and other media [5]. It was not until the launch of Apple's iPhone in 2007 that the standards were set for the modern smartphone. For this reason, the history of smartphones has been classified into the pre-iPhone era (before 2007) and the post-iPhone era (after 2007) [6]. The iPhone brought hardware and sensors such as the accelerometer and the capacitive touchscreen into the mainstream, creating an interactive experience for the user [7]. Its iOS operating system also revolutionized internet applications on smartphones, introducing a high degree of portable accessibility and storage, and making them comparable to operating systems that run on a personal computer [8]. A year later in 2008, Google acquired Android, an open-source operating system, and licensed it to all handset
makers, quickly becoming a competitor to Apple’s iOS. Following the rapid rise of Apple’s iPhone and Google’s Android, consumer behavior changed dramatically, as the number of smartphone users began to skyrocket over the next few years [9]. Smartphone technology has advanced considerably since the release of the first iPhone in 2007 and become increasingly accessible worldwide. With the recent release of 5G networks, smartphones have increased bandwidth and faster speeds than ever before, ushering in a new era of connectivity. They are also the host of a plethora of improved hardware and sensing systems. The most notable among these is the camera, which has achieved high resolution and image quality. Many contemporary smartphones now feature multiple cameras, and companies like Samsung and Xiaomi have begun the commercialization of smartphone cameras with resolutions of over 100 MPs [10]. Software advances have also improved camera technology through the introduction of novel techniques like computational photography. Mobile devices such as tablets have been used for education since 2001 [11]. A well-known model is called mobile learning (m-learning), which is defined by Crompton [12] as “learning across multiple contexts, through social and content interactions, using personal electronic devices” [12–16]. Smartphones and tablets are essential components of online (or e-learning) distance education, serving students whose courses require them to be highly mobile while communicating to them information regarding the availability of assignment results, venue changes, course cancellations, etc. They are particularly valuable to part-time students who do not wish to spend time away from their busy work schedules to attend formal classes or training events. Smartphones/tablets facilitate online interaction between the instructor and the student and among students themselves. This paradigm of blended learning takes the classroom out of a traditional brick-and-mortar setting, enabling students to participate in collaborative virtual communities. According to a 2013 survey, 77% of academic leaders felt that online learning was equivalent or even superior to face-to-face learning [17]. Although several m-learning environments use multimedia and virtual reality to enhance the learning experience, they are basically an extension of the traditional online distance education model, albeit with better interactive software and more personal communication. Importantly, current m-learning initiatives fail to provide students with the hands-on experience needed to enhance their enthusiasm for science, technology, engineering, and mathematics (STEM) education. Conventional m-learning environments have typically exploited only the connectivity and programming capabilities of smartphones/tablets, whereas the devices’ advanced opto-electronic/micro-electromechanical systems (MEMS)sensing and geo-referencing capabilities have been underexplored. Outside of learning environments, there has been a surge of recent research on the use of
smartphone/tablet hardware and interfaces to create point-of-care diagnostic tools or portable medical devices [18]. For example, smartphones have been integrated into powerful microscopes for image analysis, colorimetric detection, blood testing, and disease diagnostics [19–30]. In combination with nanotechnology, smartphones have also been modified for use as spectrometers for chemical and biological sensing [31]. Many do-it-yourself (DIY) enthusiast websites have also demonstrated the use of smartphones/tablets in electronics projects [32]. It is evident that besides using the communication and software capabilities of smartphones/tablets for conventional m-learning, their opto-electronic/MEMS-sensing capabilities can be exploited to build laboratory instruments for hands-on lab education.

1.2 Smartphone Camera

The most important smartphone component for the experimental applications in this book is its camera. The smartphone camera usually consists of two parts: the optical sensor array and the adaptive optical system, as shown in Fig. 1.1.

Figure 1.1 Parts of a smartphone camera system.

1.2.1 Optical sensor

The standard optical sensor used in smartphones is an active-pixel sensor called the complementary metal oxide semiconductor (CMOS). A CMOS sensor array is a silicon-based integrated circuit consisting of a two-dimensional (2D) grid of light-sensitive elements called photosites, as shown in Fig. 1.2. Each CMOS photosite contains a photodiode, a capacitor, and up to four transistors. The photodiode is the primary light-sensing component in the imaging system and is based on a reverse-biased p–n junction for capturing photogenerated electrons.

Figure 1.2 The layout of a typical CMOS array. An expanded depiction of a single photosite is shown in the inset at the bottom-left corner. The horizontal and vertical lines represent the circuitry in the CMOS.

The working principle of each photosite can be simply described as the following: according to quantum mechanics, the incoming light can be treated as a flux of small particles called photons. For light with a wavelength λ, each photon in the light beam has an energy E_p,

E_p = hv_L/λ,     (1.1)

where h is the Planck constant and v_L is the speed of light in vacuum. The material for the photodiode has an energy bandgap E_g, which sets the lowest photon energy that can be absorbed by the device. For CMOS sensors, the photodiode material is silicon, which gives E_g = 1.12 eV. When the received photons have an energy E_p ≥ E_g, the photons will be absorbed by the photodiode and induce photogenerated electrons. These photogenerated electrons will be collected and subsequently converted into a voltage signal in the photodiode [33].
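As a quick numerical check of Eq. (1.1) and the E_p ≥ E_g condition, the short Python sketch below (not part of the book; the constants are standard values) computes photon energies at a few wavelengths and the silicon absorption cutoff, which explains why the QE curve in Fig. 1.3 falls off near 1100–1200 nm.

```python
# Sketch: photon energy from Eq. (1.1) versus the silicon bandgap.
h = 6.626e-34      # Planck constant (J s)
v_L = 2.998e8      # speed of light in vacuum (m/s)
eV = 1.602e-19     # joules per electron-volt
E_g = 1.12         # silicon bandgap quoted in the text (eV)

def photon_energy_eV(wavelength_nm):
    """E_p = h * v_L / lambda, returned in eV."""
    return h * v_L / (wavelength_nm * 1e-9) / eV

for lam in (450, 550, 650, 1000, 1200):
    E_p = photon_energy_eV(lam)
    print(f"{lam} nm: E_p = {E_p:.2f} eV, absorbed by Si: {E_p >= E_g}")

# Longest wavelength silicon can absorb: lambda = h*v_L/E_g, about 1100 nm.
print(f"cutoff = {h * v_L / (E_g * eV) * 1e9:.0f} nm")
```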

The primary advantage of the active-pixel CMOS compared to other image sensors like a charge-coupled device (CCD) is that a charge-to-voltage conversion amplifier and an analog-to-digital signal converter are integrated in each photosite, as shown in Fig. 1.2, which significantly increases readout speed and signal-to-noise ratio (SNR). To use the CMOS sensor in a smartphone camera to perform optical experiments, it is important to understand the following parameters: the resolution, the spectral response of the sensor, the SNR, and the linearity.

Resolution

The resolution of a CMOS sensor refers to the total number of effective photosites that make up the light-sensitive area of the device [33]. If a sensor array has M rows and N columns, multiplying the quantities M and N returns the total pixel number, which smartphone manufacturers often use to report the resolution of their cameras. In fact, the actual dimension of each individual photosite ultimately determines the resolution of the sensor. For a sensor array with a fixed dimension of a length l and a width w, assuming the pitch of each photosite (i.e., the distance between the centers of adjacent photosites) is d_p × d_p, then M = l/d_p and N = w/d_p. For example, the CMOS arrays of current smartphones have relatively large pitches, with d_p usually between 1.0 and 2.0 μm, enabling higher photon collection efficiency and better noise reduction [34, 35]. The main (wide-angle) camera of the iPhone 12 Pro Max, a top-selling smartphone, has a sensor array size of 4.0 mm × 3.0 mm, a pitch size d_p = 1.7 μm, and a total of 4032 × 3024 pixels (12 MPs) [36, 37]. Smartphones like the Samsung Galaxy S21 5G Ultra have a larger sensor array of 9.6 mm × 7.2 mm, a smaller pitch size of 0.8 μm, and a much higher resolution of 12,000 × 9,000 pixels, i.e., 108 MPs [38–40].
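The relations M = l/d_p and N = w/d_p can be verified directly with the Galaxy S21 Ultra numbers quoted above; the following is a minimal Python sketch (our own illustration, not from the book).

```python
# Pixel counts from sensor size and photosite pitch: M = l/d_p, N = w/d_p.
def pixel_counts(length_mm, width_mm, pitch_um):
    pitch_mm = pitch_um / 1000.0
    M = round(length_mm / pitch_mm)   # rows along the sensor length
    N = round(width_mm / pitch_mm)    # columns along the sensor width
    return M, N

# Samsung Galaxy S21 5G Ultra values from this section: 9.6 mm x 7.2 mm, 0.8-um pitch
M, N = pixel_counts(9.6, 7.2, 0.8)
print(M, N, M * N / 1e6, "MP")   # -> 12000 9000 108.0 MP
```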

Spectral response of a photosite

An important parameter of a CMOS sensor is its quantum efficiency (QE), or spectral response. QE is defined as the ratio of the number of photogenerated electrons to the number of photons at a particular wavelength λ incident on a photosite:

QE = (number of photogenerated electrons / number of incident photons) × 100%.     (1.2)

An ideal photosite should have a QE of 100%; i.e., one incident photon should generate one electron in the photodiode. However, practically, a 100% QE is impossible to achieve due to the loss of photogenerated electrons during their transport inside the photodiode. The QE heavily depends on the photosensitive material used, especially its spectral absorption properties, as well as its thickness and reflectivity. It is also affected by the geometric characteristics of the photodiode [34]. Figure 1.3 shows a typical QE spectrum (the gray curve) for a silicon CMOS photodiode in the wavelength range between 400 and 1200 nm; the maximum QE is around 60% at λ = 500 nm. For spectroscopy applications in the visible wavelength region, the ideal spectral response of a photodiode should have a constant QE, as shown by the solid black line in Fig. 1.3. Because the variation of QE in the visible wavelength region (400–700 nm) is not very large, between 50 and 60%, in later experiments we always assume that the spectral response of the CMOS array of the smartphone is a constant.

To enable color imaging, a color-filter array (CFA) is deposited on top of the photosites. The CFA consists of a periodic pattern of red (R), green (G), and blue (B) filters, as shown in Fig. 1.4. This arrangement is known as the Bayer pattern and consists of 25% red filters, 50% green filters, and 25% blue filters. The excess of green is by design, as the human eye is most sensitive to green. After the coating of a CFA, the spectral responses of the R, G, and B

Figure 1.3 The quantum efficiency (QE) spectrum of an uncoated CMOS photosite and the QE spectra of the corresponding RGB photosites. (Figure used with permission from Allied Vision [41].)

Figure 1.4 A Bayer filter covering the photosites on a CMOS sensor array. Each gray square block represents a photosite. (Figure reprinted from [42] under CC by 3.0.)

photosites are different, as shown by the corresponding colored curves in Fig. 1.3. In other words, each photosite will be sensitive to light in a narrow band of wavelength range according to the color of the filter: the R photosite is only sensitive to light in the 570–700-nm wavelength range, with a maximum QE of 44% at 600 nm, which corresponds to a red color; the G photosite is sensitive to light in the 460–630-nm range, with a maximum QE of 55% at 510 nm (green color); and the B photosite is sensitive to light in the 400–520-nm range, with a maximum QE of 48% at 450 nm (blue color). The combination of the RGB photosites should give the entire response of an uncoated photosite, i.e., the gray spectrum in Fig. 1.3.

Color image, image intensity, and linearity

Because each photosite in the sensor array is covered by a single-color filter, a spatial interpolation operation called demosaicing is applied to deduce the missing color information from the readings of neighboring photosites to produce a full-color image, as shown in Fig. 1.5 [43, 44]. The color of each

Figure 1.5 Conversion process from color image to grayscale. (Figure reprinted from [43].)

pixel in an image is created from a combination of the relative intensities of the R, G, and B photosites, which are represented by three digital intensity values (DIVs), (R, G, B), where R, G, and B are integer numbers between 0 and 255. At values above about 200, the silicon response starts to become nonlinear, and the response of a pixel is saturated when either the R, G, or B value(s) is at the maximum value of 255. The intensity of a pixel is also dependent on exposure time, and a large exposure time can also result in saturation. Pixel saturation should be avoided during experiments because it results in loss of experimental information [45]. In addition, because the spectral responses of R, G, and B photosites are limited to a specific wavelength range as shown in Fig. 1.3, one must consider the intensities detected by all the RGB photosites to determine the true intensity of light with a broad spectral range. This, in terms of image processing, corresponds to converting the colored pixels to grayscale (Fig. 1.5). Because the detectable spectral ranges of RGB photosites slightly overlap with one another, the simplest and most popular method for grayscale conversion is a linear combination of the R, G, and B values [46],

Grayscale value = a_R R + a_G G + a_B B,     (1.3)

where a_R, a_G, and a_B are weighting coefficients and a_R + a_G + a_B = 1. One simple way is to take a_R = a_G = a_B = 1/3, i.e., the average of the R, G, and B values. Other common weighting coefficients, such as (a_R, a_G, a_B) = (0.3, 0.59, 0.11) and (a_R, a_G, a_B) = (0.2126, 0.7152, 0.0722), have also been widely used [46]. Note that the grayscale image has the same pixel number as the colored image.

The linearity of the CMOS sensor refers to the degree to which the R, G, B, or grayscale value is proportional to the actual incident light intensity. For many experiments, good linearity is essential to accurately determine the probed physical relationships through image analysis [47]. One method of measuring the linearity of the smartphone image sensor is to vary the intensity of a light source, record the signal detected by the CMOS sensors, and fit the image sensor values versus the real light intensity relationship by linear regression [48].
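A minimal NumPy sketch of Eq. (1.3) and of the linear fit used in the linearity test is given below; the weighting coefficients are the (0.2126, 0.7152, 0.0722) set mentioned above, and the power/DIV arrays are placeholder numbers, not measured data.

```python
import numpy as np

def to_grayscale(rgb, weights=(0.2126, 0.7152, 0.0722)):
    """Eq. (1.3): grayscale = aR*R + aG*G + aB*B for an H x W x 3 RGB array."""
    aR, aG, aB = weights
    return aR * rgb[..., 0] + aG * rgb[..., 1] + aB * rgb[..., 2]

# Linearity test: fit the measured DIVs against the incident power with a straight line.
power_mW = np.array([0.5, 1.0, 1.5, 2.0, 2.5])          # placeholder light powers
gray_DIV = np.array([35.0, 69.0, 102.0, 137.0, 170.0])  # placeholder mean grayscale DIVs
slope, intercept = np.polyfit(power_mW, gray_DIV, 1)
print(f"slope = {slope:.1f} DIV/mW, intercept = {intercept:.1f} DIV")
```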


Figure 1.6 The linearity measurement of a CMOS image sensor of an iPhone 12 using a white light source: the plots of the R, G, B, and grayscale values versus the power of white light. The solid lines denote the linear fits.

Figure 1.6 shows the measured R, G, B, and grayscale values taken from iPhone 12 images versus the measured powers of a white light source. The DIVs of the R, G, and B channels and the grayscale are calculated from the corresponding smartphone images using ImageJ (https://imagej.nih.gov/ij/; see Appendix III). The power of the white light is obtained by using a PM100D power meter from Thorlabs, Inc. As shown in Fig. 1.6, four linear relationships are revealed with a similar slope, around 68 DIV/mW. The reason that nonlinearity at DIVs >200 does not appear in Fig. 1.6 could be due to the large sampling step in white light intensity and the few experimental data points. The linearity characteristics of CMOS sensors are not typically reported by smartphone manufacturers.

1.2.2 Adaptive optical system

The adaptive optical system in a smartphone camera consists of an assembly of five to seven state-of-the-art mobile camera lenses positioned in front of the CMOS sensor array. Figure 1.7 shows an example of a five-lens system, in which one of the lenses can be moved back and forth electronically to adjust the effective focal length of the system, i.e., the autofocus feature. Using a multiple-lens system helps minimize imaging distortion. The image equation from basic geometric optics assumes paraxial rays (rays with small angles with respect to the optical axis) and thin lenses (a lens with a thickness much smaller than its diameter). Under these approximations, the effects of aberration (the distortion of the image by an optical component such as a lens or a mirror) can be neglected. However, due to limited spacing and the relative size of the lenses, both the thin lens approximation and the paraxial approximation for the image formed at the edges of the CMOS array may not be valid [35]. In this situation, if a single lens were used for the smartphone optics, both spherical and chromatic aberrations could occur, as depicted in


Figure 1.7 A five-lens optical system of a smartphone camera. The dotted blue arrow indicates the adjusting location range of the moving lens, while the blue ray diagram demonstrates how the system of lenses acts as an effective converging lens. (Figure reprinted with permission from Reshidko and Sasian [50] © The Optical Society.)

Fig. 1.8 [49]. Spherical aberration is caused by the fact that the parallel monochromatic light beam, after passing through the lens, will focus on different locations depending on the distance of the beam from the optical axis [see Fig. 1.8(a)]. Such an effect induces blurriness and distortion in an optical image. Also, due to the dispersion in the refractive index of lens material, for a polychromatic light beam, rays of different colors will refract at different angles (see Chapter 15) when passing through the lens, which gives rise to chromatic aberration [see Fig. 1.8(b)]. It has been well understood in optical engineering that to correct both the spherical and chromatic aberrations, a multiple lens system with a combination of convex and concave lenses, or spheric lenses and aplanatic lenses, should be used. The first three lenses Spherical aberration

(a)

f

Chromatic aberration

(b) Figure 1.8

A schematic for (a) spherical aberration and (b) chromatic aberration.

Chapter 1

10

shown in Fig. 1.7 provide the effective degrees of freedom for correcting spherical aberration and chromatic change of focus. The rear group consists of one or two lenses that are designed to be strongly aspheric, so that they cancel the distortion introduced by the front group of lenses. The small lens sizes also help mitigate the effects of aberration [50]. In principle, this lens system can be treated as a single effective converging lens to form a real image on the CMOS sensor array. As shown in Fig. 1.7, because the maximum traveling distance of the moving lens is very small in the lens system, the relative distance between the effective center location (the gray dashed line) of the lens system and the CMOS array is not changed during the autofocus process, and one can approximate that the effective image distance i of the lens system is a constant. According to the image equation, 1 1 1 þ ¼ , p i f

(1.4)

where p is the effective object distance and f is the effective focal length of the adaptive optical system. When p is changed, f is accommodated accordingly by adjusting the location of the moving lens electro-mechanically to form a sharp image on the CMOS image sensor. Assuming that there is no image distortion in the smartphone camera, an object in an image can be scaled from its pixel size to its actual size [51]. For example, suppose the pixel length l p of an object in an image and the actual length of the object l a are known. Then, a distance-to-pixel scaling factor (or the inverse of the image magnification) h can be defined as h ¼ l a ∕l p :

(1.5)

Therefore, for a given pixel distance obtained in the image, one can scale it to the physical length, Physical length ¼ h  Pixel length:

(1.6)

In a smartphone experiment, h can be determined by simply including an object of known physical length (i.e., a ruler) in the image and determining its pixel length using ImageJ software. The combined focal length of a smartphone camera determines the camera’s field of view (FOV). Today’s smartphones may feature multiple cameras with different lenses and focal lengths for different FOV. The most common types of lenses found on smartphone cameras are telephoto (25–40-deg FOV), wide-angle (62–84-deg FOV), and ultrawide-angle lenses (90–120-deg FOV) [52]. Although one can manually choose which camera to use in different scenarios, the wide-angle lens serves as the primary camera of the smartphone and should be used for smartphone experiments.

Smartphones and Their Optical Sensors

11

1.3 Using the Smartphone Camera in Experiments In this book, the smartphone camera is central to the experiments as a detector in several different contexts. To apply the smartphone camera to scientific experiments, there are several fundamental assumptions: 1.

2. 3. 4. 5.

6.

The sensor’s response to light intensity is linear: either the RGB values or the grayscale value is proportional to the light intensity provided that the photosite is not saturated. All of the photosites in the sensor array are identical. The spectral response of the sensors in the visible wavelength range is flat. There is no image distortion. The adaptive optical system can be treated as a single converging lens that can form a real image on the sensor array. The autofocus feature of the camera can adjust the effective focal length of the optical system so that the effective image distance of the camera is fixed; i.e., the distance between the sensor array surface and the effective lens is a constant for any image taken. A scaling factor can be defined to convert the pixel length in a smartphone image to a real length.

The following precautions should also be taken during the experiments: 1.

2. 3. 4. 5.

6.

Avoid using the camera’s “auto” photography feature because it may automatically change the exposure time during consecutive image acquisition. Avoid using flash photography during image acquisition. Avoid high intensity of light flux because it will saturate the photosite(s). Avoid using images that are very small (i.e., only a few pixels in size) for length calibration, wavelength calibration, or image identification. Maintain a consistent smartphone orientation when taking images. In most smartphones, images taken with a vertically oriented smartphone will have different pixel dimensions than images taken with a horizontally oriented smartphone. If a multi-camera system is used, select the standard wide-angle camera for experimentation.
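As one way to act on the linearity assumption and precaution 3, the sketch below (our own illustration, not a procedure from the book) flags saturated or near-nonlinear channel values in a region of interest before averaging its RGB values. It assumes an 8-bit image already loaded as a NumPy array (e.g., with Pillow or imageio), and the 200-DIV soft limit follows the nonlinearity remark in Section 1.2.1.

```python
import numpy as np

def roi_mean_rgb(image, row0, row1, col0, col1, soft_limit=200):
    """Mean (R, G, B) in a rectangular region, warning about saturated photosites.

    `image` is an H x W x 3 array with 8-bit channel values (0-255).
    """
    roi = image[row0:row1, col0:col1, :].astype(float)
    frac_saturated = np.mean(roi >= 255) * 100
    frac_nonlinear = np.mean(roi >= soft_limit) * 100
    if frac_saturated > 0:
        print(f"warning: {frac_saturated:.1f}% of channel values are saturated (255)")
    elif frac_nonlinear > 0:
        print(f"note: {frac_nonlinear:.1f}% of channel values exceed {soft_limit}")
    return roi.mean(axis=(0, 1))

# Example with synthetic data standing in for a smartphone photo:
img = np.random.randint(0, 180, size=(480, 640, 3), dtype=np.uint8)
print(roi_mean_rgb(img, 100, 200, 300, 400))
```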

References [1] N. Islam and R. Want, “Smartphones: past, present, and future,” IEEE Pervasive Comput. 13(4), 89–92 (2014). [2] N. Ai, Y. Lu, and J. Deogun, “The smart phones of tomorrow,” SIGBED Rev. 5(1), Article 16 (2008).


[3] S. Tweedie, “The world’s first smartphone, Simon, was created 15 years before the iPhone,” Business Insider, https://www.businessinsider.com/ worlds-first-smartphone-simon-launched-before-iphone-2015-6 (2015). [4] P. Ketola, H. Hjelmeroos, and K.-J. Räihä, “Coping with consistency under multiple design constraints: the case of the Nokia 9000 WWW browser,” Personal Technologies 4(2), 86–95 (2000). [5] Encyclopaedia Britannica Online, s.v. “Smartphone,” by W. L. Hosch, https://www.britannica.com/technology/smartphone, accessed May 13, 2022. [6] P. Kim, “The Apple iPhone shock in Korea,” Inf. Soc. 27(4), 261–268 (2011). [7] M. Elgan, “How iPhone changed the World,” Cult of Mac, https://www. cultofmac.com/103229/how-iphone-changed-the-world/ (2011). [8] F. B. Nasser and L. Trevena, “There’s an App for that: a guide for healthcare practitioners and researchers on smartphone technology,” Online J. Public Health Inform. 7(2), e218 (2015). [9] M. Campbell-Kelly, D. Garcia-Swartz, R. Lam, and Y. Yang, “Economic and business perspectives on smartphones as multi-sided platforms,” Telecomm Policy 39(8), 717–734 (2015). [10] L. Kelion, “Xiaomi smartphone has 108 megapixel camera,” BBC, https://www.bbc.com/news/technology-50301665 (2019). [11] H. Crompton, “A diachronic overview of mobile learning: a shift toward student-centered pedagogies,” Chap. 1 in Increasing Access through Mobile Learning, M. Ali and A. Tsinakos, Eds., Commonwealth of Learning Press and Athabasca University, Vancouver (2014). [12] H. Crompton, “A historical overview of mobile learning: toward learnercentered education,” Chap. 1 in Handbook of Mobile Learning, Z. L. Berge and L. Y. Muilenburg, Eds., Routledge, Florence, KY (2013). [13] M. L. Crescente and D. Lee, “Critical issues of m-learning: design models, adoption processes, and future trends,” JCIIE 28, 111–123 (2011). [14] R. Robinson and J. Reinhart, Digital Thinking and Mobile Teaching: Communicating, Collaborating, and Constructing in an Access Age, Bookboon Learning, Copenhagen (2014). [15] G. Trentin and M. Repetto, Using Network and Mobile Technology to Bridge Formal and Informal Learning, Woodhead/Chandos Publishing Limited, Cambridge (2013). [16] R. Oller, “The future of mobile learning,” Research Bulletin, EDUCAUSE Center for Analysis and Research, Louisville, CO, https://library. educause.edu/resources/2012/5/the-future-of-mobile-learning (2012). [17] I. E. Allen and J. Seaman, “Changing courses: ten years of tracking online education in the United States,” Babson Survey Research Group

and Quahog Research Group, LLC, Babson Park, MA, http://www.onlinelearningsurvey.com/reports/changingcourse.pdf (2013).
[18] A. Ozcan, “Mobile phones democratize and cultivate next-generation imaging, diagnostics and measurement tools,” Lab Chip 14(17), 3187–3194 (2014).
[19] O. Mudanyali, S. Dimitrov, U. Sikora, S. Padmanabhan, I. Navruz, and A. Ozcan, “Integrated rapid–diagnostic–test reader platform on a cellphone,” Lab Chip 12(15), 2678–2686 (2012).
[20] A. F. Coskun, J. Wong, D. Khodadadi, R. Nagi, A. Teya, and A. Ozcan, “A personalized food allergen testing platform on a cellphone,” Lab Chip 13(4), 636–640 (2013).
[21] I. Navruz, A. F. Coskun, J. Wong, S. Mohammad, D. Tseng, R. Nagi, S. Phillipsac, and A. Ozcan, “Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array,” Lab Chip 13(20), 4015–4023 (2013).
[22] V. Oncescu, D. O’Dell, and D. Erickson, “Smartphone based health accessory for colorimetric detection of biomarkers in sweat and saliva,” Lab Chip 13(16), 3232–3238 (2013).
[23] T. S. Park, W. Li, K. E. McCracken, and J.-Y. Yoon, “Smartphone quantifies Salmonella from paper microfluidics,” Lab Chip 13(24), 4832–4840 (2013).
[24] H. Zhu, I. Sencan, J. Wong, S. Dimitrov, D. Tseng, K. Nagashimaa, and A. Ozcan, “Cost-effective and rapid blood analysis on a cell-phone,” Lab Chip 13(7), 1282–1288 (2013).
[25] H. Zhu, S. O. Isikman, O. Mudanyali, A. Greenbauma, and A. Ozcan, “Optical imaging techniques for point-of-care diagnostics,” Lab Chip 13(1), 51–67 (2013).
[26] H. Wang, Y.-J. Li, J.-F. Wei, J.-R. Xu, Y.-H. Wang, and G.-X. Zheng, “Paper-based three-dimensional microfluidic device for monitoring of heavy metals with a camera cell phone,” Anal. Bioanal. Chem. 406(12), 2799–2807 (2014).
[27] S. Sumriddetchkajorn, K. Chaitavon, and Y. Intaravanne, “Mobile-platform based colorimeter for monitoring chlorine concentration in water,” Sens. Actuators B Chem. 191, 561–566 (2014).
[28] V. Oncescu, M. Mancuso, and D. Erickson, “Cholesterol testing on a smartphone,” Lab Chip 14(4), 759–763 (2014).
[29] E. H. Doeven, G. J. Barbante, E. Kerr, C. F. Hogan, J. A. Endler, and P. S. Francis, “Red–green–blue electrogenerated chemiluminescence utilizing a digital camera as detector,” Anal. Chem. 86(5), 2727–2732 (2014).
[30] T. Cao and J. E. Thompson, “Remote sensing of atmospheric optical depth using a smartphone Sun photometer,” PloS One 9(1), 1–8 (2014).


[31] Z. J. Smith, K. Chu, and S. Wachsmann-Hogiu, “Nanometer-scale sizing accuracy of particle suspensions on an unmodified cell phone using elastic light scattering,” PloS One 7(10), 1–7 (2012). [32] M. Westerfield, Building iPhone and iPad Electronic Projects: Real-World Arduino, Sensor, and Bluetooth Low Energy Apps in techBASIC, O’Reilly Media, Inc., Sebastopol, CA (2013). [33] S. Taylor, “CCD and CMOS imaging array technologies: technology review,” Technical Report EPC-1998-106, Xerox Research Centre Europe, Cambridge (1999). [34] R. Hain, C. J. Kähler, and C. Tropea, “Comparison of CCD, CMOS and intensified cameras,” Exp. Fluids 42(3), 403–411 (2007). [35] Y. T. Liu, “Review and design a mobile phone camera lens for 21.4 mega-pixels image sensor,” Master’s thesis, University of Arizona (2017). [36] DeviceSpecifications, Apple iPhone 12 Pro Max - Camera, https://www. devicespecifications.com/en/model-camera/39115355 (2021). [37] P. Ferenczi, “Apple iPhone 12 Pro Max Camera review: big and beautiful,” DXOMARK, https://www.dxomark.com/apple-iphone-12pro-max-camera-review-big-and-beautiful/ (2020). [38] B. Hillen, “Samsung officially unveils 108MP ISOCELL Bright HMX mobile camera sensor,” Digital Photography Review, https:// www.dpreview.com/news/0799990809/samsung-officially-unveils-108mpisocell-bright-hmx-mobile-camera-sensor (2021). [39] DeviceSpecifications, Samsung Galaxy S21 Ultra 5G SD888 - Specifications, https://www.devicespecifications.com/en/model-weight/964554dd (2021). [40] Samsung, Galaxy S21 Ultra 5G Specifications, https://www.samsung. com/global/galaxy/galaxy-s21-ultra-5g/specs/ (2021). [41] AlliedVision, Allied Vision Manta G-040, https://www.alliedvision.com/ en/camera-selector/detail/manta/g-040/ (2018). [42] Wikimedia Commons, s.v. “Bayer pattern on sensor,” by C. M. L. Burnett, https://commons.wikimedia.org/wiki/File:Bayer_pattern_on_sensor.svg (2006). [43] C. Bai, J. Li, and Z. Lin, “Demosaicking based on channel-correlation adaptive dictionary learning,” J. Electron. Imaging. 27(4), 043047 (2018) https://doi.org/10.1117/1.JEI.27.4.043047. [44] A. E. Gamal and H. Eltoukhy, “CMOS image sensors,” IEEE Circuits Devices Mag. 21(3), 6–20 (2005). [45] X. Zhang and D. H. Brainard, “Estimation of saturated pixel values in digital color imaging,” J. Opt. Soc. Am. A Opt. Image Sci. Vis. 21(12), 2301–2310 (2004). [46] Y. Wan and Q. Xie, “A novel framework for optimal RGB to grayscale image conversion,” 2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), 345–348 (2016).


[47] F. Wang and A. Theuwissen, “Linearity analysis of a CMOS image sensor,” Electronic Imaging 2017(11), 84–90 (2017). [48] F. Wang, “Linearity research of a CMOS image sensor,” Ph.D. dissertation, Delft University of Technology (2018). [49] J. F. James, An Introduction to Practical Laboratory Optics, Cambridge University Press, Cambridge (2014). [50] D. Reshidko and J. Sasian, “Optical analysis of miniature lenses with curved imaging surfaces,” Appl. Opt. 54(28), E216–E223 (2015). [51] N. Horning, “Understanding image scale and resolution,” Center for Biodiversity and Conservation, American Museum of Natural History, http://www.nilerak.hatfieldgroup.com/French/NRAK/EO/2._Basics_ RS_Scale_Resolution_AMNH.pdf (2004). [52] F. Gallagher, “What is the focal length of an iPhone camera and why should I care?” Improve Photography, https://improvephotography.com/ 55460/what-is-the-focal-length-of-an-iphone-camera-and-why-should-icare/ (2021).

Chapter 2

Experimental Data Analysis

2.1 Experiments and Measurement Error
2.1.1 General physics experimental procedure
Scientific experiments are used to evaluate a hypothesis or to measure a certain scientific quantity. They are crucial in advancing modern sciences and technologies. In particular, physics experiments are usually employed to establish a quantitative relationship among different physical parameters (e.g., the Cavendish experiment for measuring the force of gravity between masses, the measurement of electrostatic force as a function of test charge, etc.), to prove or confirm a particular theoretical prediction or hypothesis (e.g., general relativity, the existence of black holes, etc.), or to measure important physical constants (e.g., the speed of light in vacuum, the charge of an electron, Planck's constant, etc.). In the 400 years of modern science (since Galileo Galilei, 1564–1642), rigorous guiding principles have been adopted for scientific experiments. In particular, a physics experiment should adhere to at least the following steps.

Step 1: The purpose of an experiment. The purpose of an experiment could be to measure a specific parameter, to evaluate a relationship/hypothesis, or to establish a correlation. For example, to obtain the value of g, the acceleration of gravity, an experiment should be designed with the purpose of performing such a measurement.

Step 2: The design of an experiment. Based on our knowledge of introductory physics (or high school physics), there are various methods for measuring the value of g, such as the investigation of a free-falling solid ball, the sliding of a block on a frictionless slope (or air track), or the measurement of the period of a pendulum. Among the three methods suggested, the pendulum method is quite reliable and simple because the period T of the oscillation of a pendulum only depends on the length l of the pendulum (see Fig. 2.1):

T = 2\pi\sqrt{l/g}   (2.1)

or


Figure 2.1 An illustration of a pendulum setup that can be used to measure the acceleration of gravity.

T^2 = 4\pi^2 l/g.   (2.2)

Therefore, if the relationship between T and l can be measured and established, then from the linear slope of the plot of T² versus l (Eq. 2.2), the value of g can be obtained. Notice that Eq. 2.1 is derived based on the assumption that the maximum oscillation angle θ in Fig. 2.1 is very small. In the experiment, multiple pendulums with different l can be constructed, while the total time for a fixed number of oscillations of a pendulum can be measured using a stopwatch so that the corresponding T can be measured.

Step 3: Measurements and error estimation. Once the experiment is designed and set up, both T and l should be measured by different instruments (a stopwatch and a ruler) to establish the T²–l relationship. In any scientific experiment, multiple measurements of the same parameter should be performed to ensure the accuracy of the measurement. For a fixed length (l) pendulum, T should be measured multiple times (say, 10 times); the variation in the measured T values represents the precision or the error in the T measurement (which will be discussed later in this chapter). A similar procedure should be followed for the l measurement. During the experiment, the accuracy and precision of the measurements are very important. To ensure an accurate measurement, several precautions need to be taken; for instance, the measurement instrument should function as intended, the instrument should be calibrated properly, the measured values should be presented reasonably with the correct units, and the error in the measurements should be sufficiently small.

Step 4: Data analysis. Once all the measurements have been performed, a detailed data analysis needs to be performed to interpret the main findings of the experiment. For this example, the T² versus l relationship needs to be analyzed [see Fig. 2.2(a)]. In the plot, all the data along with the


Figure 2.2 (a) The plot of the experimentally measured T² versus the pendulum length l. (b) The linear fitting of the experimental data to obtain the slope.

corresponding error bars are presented. The slope of the plot can be obtained through linear regression (or a line of best fit) of the experimental data. For the data shown in Fig. 2.2(a), the slope is obtained to be 4.1 ± 0.1 s²/m [see Fig. 2.2(b)]. Thus, based on Eq. 2.2, this slope = 4π²/g, and g is extracted as 9.6 ± 0.2 m/s². In addition, a statistical test needs to be performed to determine whether fitting using a linear function is acceptable for the experimental data.

Step 5: Conclusion. Based on the above data analysis, the main finding or findings can be established, and the following questions that conclude the entire experiment should be answerable: (1) Can the experiment facilitate the determination of the value of g? (2) How does the g value obtained compare to the standard g value given in textbooks or literature? (3) Is the result reasonable? If yes, is there a way to further improve the experiment so that a more accurate or precise estimate for g can be obtained? If the result is not reasonable, why not? And what improvements should be made to the entire experimental procedure to obtain a better and more reasonable g value?

From this description, measurement and data analysis are the two most critical procedures in a physics experiment. A very good reference book on error and data analysis is A Student's Guide to Data and Error Analysis by Herman Berendsen [1]. Below, we give a brief introduction to experimental measurements and errors, as well as some important data analysis procedures.

2.1.2 The experimental measurements
An experimental measurement is an estimation of the real value of the measured quantity (or targeted object). A measurement can be cataloged as a direct measurement or an indirect measurement. For example, if a ruler is used to measure the length of a pencil, the value of the pencil length can be directly read out from the ruler. This kind of measurement is called a


direct measurement. However, to measure g, as shown in the section above, its value needs to be extracted from the slope of the T²–l relationship given by Eq. 2.2, i.e., by measuring the pendulum length and oscillation period. This kind of measurement is called an indirect measurement. The measurement error generated in direct and indirect measurements is different and will be discussed in Section 2.1.3, "Errors in measurements." Another way to catalog a measurement is based on measurement conditions. If the measurements are carried out repeatedly under the same experimental conditions, they are called equally accurate measurements, whereas if the experimental conditions are changed during repeated measurements, they are called unequally accurate measurements. The experimental conditions include the measurement method, the measurement instrument, the observer, the experimental setup, temperature, humidity, etc. Take the measurement of a pencil's length as an example. If a regular ruler is used for the first measurement and a caliper for the second measurement, then these two measurements are considered unequally accurate measurements because a ruler and a caliper have different measurement precisions. Similarly, if Student A performs the observation during the first measurement and Student B performs the observation during the second measurement, these are also considered unequally accurate measurements. Clearly, good measurements must correspond to equally accurate measurements. For a given measurement, a value, an error, and a unit should be reported for the targeted measurement quantity. The value should give the most accurate estimation of the measurement, the unit defines the standard or the magnitude of the measured value, and the error represents the precision of the measurement, i.e., the measured value cannot be the exact true value. If x0 is the true value and x is the measured value, then Δx is the error, which is given by

Δx = x − x0.   (2.3)

For any experimental measurement, the targeted quantity needs to be measured multiple times; i.e., repeated measurements are required to obtain the best estimation. Thus, for each measurement, the error can be positive or negative. The final reported measurement quantity should be the best estimated value and error based on multiple measurements in the format given by

(Measured value of x) = (Best estimation of x) ± Δx (unit).   (2.4)

For example, from Fig. 2.2, we obtain that g = 9.6 ± 0.2 m/s². Note that here the reported error Δx is always larger than 0. In physics, the format and conventions for reporting the error are as follows:

1. (Measured x) = x_best ± Δx.
2. The experimental uncertainties should always be rounded to one significant figure.
3. The last significant figure in any stated value (x_best) should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.

The goodness of a measurement is affected by two factors: the best estimated value x_best and the error Δx. If the estimated x_best is very close to the true value x0, we call this an accurate measurement. If x_best is very far away from x0, the measurement is inaccurate. If multiple measurement values are very close to each other, i.e., the error Δx is very small, the measurement is precise; otherwise, it is not precise. The accuracy and precision of a measurement can be explained using a shooting board as shown in Fig. 2.3. When all the bullet holes are concentrated in the center of the board as shown in Fig. 2.3(a), the shots (or measurements) are precise (concentrated) and accurate (in the board's center). The board in Fig. 2.3(b) shows that the distribution of the bullet holes is very scattered while the average location of all the bullet holes is close to the center of the board, which means the shots are accurate but not precise. The situation illustrated in Fig. 2.3(c) reveals that all the bullet holes are concentrated on a location far away from the center. Thus, the shots are precise but not accurate. Finally, Fig. 2.3(d) shows that all the bullet holes are not only sparsely distributed, but the average location of the holes is also away from the center of the board, indicating that the shots are neither accurate nor precise.

Figure 2.3 (a)–(d) Explanations of measurement precision and accuracy.


2.1.3 Errors in measurements
Measurements can be influenced by many factors, such as the instrument, the observer, the measurement method, the experimental setup, temperature, humidity, etc. These factors could all contribute to errors in experimental measurements.

Instrumental errors. Each instrument has its own limits in terms of the range and accuracy of the measured value. Thus, the error of a measured value is directly linked to the accuracy and precision that an instrument can provide.

Random error. If multiple readings are made for a measurement, it is expected that each reading given by the instrument may be different. If the value is plotted as a function of the time it is measured, these values will fluctuate around an average value. This random fluctuation is called random error. It is caused by the inherent thermal dynamics (random noise) in an instrument, by the fluctuations of the readings of the observer, or by other factors in the measurement. This is the most important error source in all scientific measurements.

Systematic error. Measurements consistently give values that differ from the true value in nature. The cause of systematic error is a bias in the observation due to observing conditions or apparatus, measurement technique/procedure, or analysis. For example, friction in bearings of various moving components can cause incorrect readings. A similar situation could occur due to irregular spring tension in analog meters, outdated instrument calibration due to aging, or improper adjustment of the zero setting in an instrument. These errors can be avoided by selecting suitable instruments, always making sure the instruments are calibrated and up-to-date, and applying correction factors.

Environmental errors. Ambient parameters such as temperature, pressure, humidity, magnetic and electrostatic fields, dust, and other similar parameters can affect the performance of an instrument. Improper housing of an instrument can also result in incorrect readings. Such errors can be avoided by air-conditioning, magnetic shielding, cleaning and maintaining the instruments, and housing the instruments properly depending on the application and type of instrument.

Gross errors. These are essentially human errors caused by the operator using the instrument. An instrument may be in good condition, but the measurement may still be affected by the operator. Some examples of gross errors include incorrect readings, readings with parallax error, incorrect zero and full-scale adjustments, improper applications of instruments (for example, using a 0–100-V voltmeter to measure 0.1 V), and incorrect computation (for example, to determine electric power consumed by a resistor, the voltage V


and the current I across the resistor must be measured; if the product of V and I is computed incorrectly or the units used for the computation are not correct, a faulty result will be reported, resulting in error). A specific measurement example is used below to illustrate how to account for different types of errors. As shown in Fig. 2.4, a multimeter is used to measure the voltage [i.e., the electromotive force (EMF)] of a battery. Assuming that the experimental procedure is designed well, and that the observer is very careful so that there are no systematic errors, environmental errors, and gross errors, the only sources of error are due to the instrument and random noise. For such a measurement, the instrument's precision needs to be considered first. This is determined based on the instrument's design and calibration. Most modern instruments have digital displays, and the number of total digits displayed limits their precision. As shown in Fig. 2.4, the reading in the multimeter gives a value of 16.402 V. What does this value mean? Since the sixth digit cannot be displayed by the instrument, the measured value means that the real voltage could be 16.4020 V, 16.4021 V, . . . , or 16.4029 V; i.e., due to the limited number of displayed digits, the measurement has an error of 0.001 V. Thus, the reported voltage value V_report for such a measurement shall be 16.402 ± 0.001 V. In addition, most measurement instruments come with a specification (or calibration). In most cases, the specification will contain important precision statements for an instrument. For example, for the multimeter used to measure the battery voltage, the specification sheet states "accuracy: 1% of range." The range we used for the measurement is 20 V, which means the error due to the instrument will be 0.2 V. From the reading in Fig. 2.4, the error due to the number of digits is 0.001 V, which is much smaller than the error generated by the instrument calibration, 0.2 V. Thus, the real error due to the instrument for Fig. 2.4 will be 0.2 V; V_report needs to be adjusted to

Figure 2.4 Measuring the voltage of a battery using a multimeter.


Figure 2.5 (Left) Random noise measurement by shorting the two measurement terminals in a multimeter. (Right) The plot of the measured voltage values versus the measured times. The graph shows a total of 10,000 measurements.

16.4 ± 0.2 V. Clearly, the instrument calibration accuracy plays a very important role in the measurement. For multiple readings, the values given by the multimeter are different due to random error. Such a random error can be measured directly by shorting the two measurement terminals in the multimeter, as shown in Fig. 2.5. If the measured value is plotted as a function of the number of readings, highly fluctuating data centered around 0 V are observed, as shown in Fig. 2.5. These data represent how the random error looks and are called random noise. To determine the random error, a statistic histogram or a probability distribution based on the data obtained in Fig. 2.5 can be generated and analyzed. Figure 2.6(a) shows the corresponding histogram plot. For most random noise, the probability distribution Prob(V) should follow a Gaussian function (or normal distribution):

\mathrm{Prob}(V) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(V-\mu)^2}{2\sigma^2}},   (2.5)

where Prob(V) ≈ (Number of counts in a voltage V bin)/(Total voltage counts), σ is the standard deviation, and μ is the average voltage. Figure 2.6(b) shows the corresponding probability distribution function derived from Fig. 2.6(a), and the red curve in Fig. 2.6(b) is a best fit using Eq. 2.5. In fact, the fitting gives μ = 0 V and σ = 1 V. In Eq. 2.5, μ represents the V- (x-) axis location where the function reaches a maximum (the center of a Gaussian function), and σ represents the width of the curve (distribution). For any random noise, μ = 0 V is expected because the random noise should not affect the accuracy of the measurement. Based on the plot in Fig. 2.6(b), the width σ of the distribution should represent the fluctuation of the random noise and how likely any measured voltage value is within a certain voltage range. As shown in Fig. 2.7, the probability of finding the measured voltage falling in between −1 V and +1 V (i.e., ±1σ) is ∫_{−1}^{+1} p(V)dV = 68.3%; the probability of finding the measured voltage falling in between −2 and +2 V (i.e., ±2σ) is 95.5%; and the probability of finding


Figure 2.6 (a) A histogram plot of the random noise data appearing in Fig. 2.5. (b) The probability distribution of the random noise found by dividing the histogram plot by 10,000, the total number of measurements. The solid red curve is a Gaussian fit based on Eq. 2.5.


Figure 2.7 The probability for a Gaussian distribution (Fig. 2.6) within (a) ±1σ, (b) ±2σ, and (c) ±3σ.

the measured voltage falling in between −3 and +3 V (i.e., ±3σ) is 99.7%. Thus, for the measurements shown in Fig. 2.4, if we consider the effect of random noise shown in Fig. 2.5, the V_report should be 16 ± 1 V with a 68% confidence interval (i.e., 1σ uncertainty) or 16 ± 2 V with a 95% confidence interval (i.e., 2σ uncertainty), and so on. For most measurements in physics, the 1σ uncertainty is used by default. The above discussion focuses on a direct measurement. However, for an indirect measurement, such as the measurement to determine g using Eq. 2.2, both T and l have their own measurement error. How, then, would the error be estimated for g? According to statistics, if these two measurements (T and l) are independent, the error for g can be estimated by error propagation. Error propagation states that for an arbitrary function, z = f(x, y, . . .); if the variables x, y, . . . are measured independently with corresponding standard deviations σ_x, σ_y, . . . (where σ_x, σ_y, . . . are independent, and the noise for x, y, . . . follows a Gaussian distribution), then the standard deviation σ_z can be estimated as


\sigma_z = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2\sigma_x^2 + \left(\frac{\partial f}{\partial y}\right)^2\sigma_y^2 + \cdots}.   (2.6)

For a two-variable function z = f(x, y), with measured x ± σ_x and y ± σ_y,

if z = x + y or x − y,   \sigma_z = \sqrt{\sigma_x^2 + \sigma_y^2};   (2.7)

if z = xy or x/y,   \frac{\sigma_z}{z} = \sqrt{\frac{\sigma_x^2}{x^2} + \frac{\sigma_y^2}{y^2}}.   (2.8)

Thus, for Eq. 2.2,   \sigma_g = 4\pi^2\sqrt{\left(\frac{\sigma_l}{T^2}\right)^2 + \left(\frac{2l\sigma_T}{T^3}\right)^2}.   (2.9)

As an example, if the measurements give l = 1.00 ± 0.01 m and T = 2.0 ± 0.05 s in a particular pendulum experiment, then σ_g is calculated to be 0.4 m/s² according to Eq. 2.9. Thus, the reported g value for this measurement is g = 9.7 ± 0.4 m/s².
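A minimal Python sketch of the error-propagation calculation in Eq. 2.9 is given below; only NumPy is assumed, and the input values are hypothetical placeholders for a user's own measured pendulum length and period.

```python
import numpy as np

# Hypothetical measured values and 1-sigma uncertainties (placeholders).
l, sigma_l = 0.500, 0.002   # pendulum length (m)
T, sigma_T = 1.42, 0.02     # oscillation period (s)

g = 4 * np.pi**2 * l / T**2   # Eq. 2.2 solved for g
# Eq. 2.9: propagate sigma_l and sigma_T into sigma_g.
sigma_g = 4 * np.pi**2 * np.sqrt((sigma_l / T**2)**2 + (2 * l * sigma_T / T**3)**2)

print(f"g = {g:.2f} +/- {sigma_g:.2f} m/s^2")
```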

2.2 Numerical/Parameter Estimation
2.2.1 Estimation of a direct measurement
Due to errors and repeated measurements, it is important to know how to estimate a directly measured quantity. If random errors dominate the measurements, the following equations can be used to numerically estimate the mean μ and the standard deviation σ:

\mu = \bar{x} = \frac{1}{N}\sum_{j=1}^{N} x_j,   (2.10)

\sigma = \sqrt{\frac{1}{N-1}\sum_{j=1}^{N}(x_j - \bar{x})^2},   (2.11)

where N is the number of measurements and x_j is the value of the jth measurement; thus, the reported measurement value should be x_report = μ ± σ (unit). Because random noise is intrinsic to any instrument or measurement, it is expected that even for the jth measurement, the reported x_j value should also


have an associated random error σ_j. Based on Eq. 2.10 and the error propagation equation (Eq. 2.6), the error for x̄ in Eq. 2.10 will be

\sigma_{\bar{x}} = \frac{1}{N}\sqrt{\sigma_1^2 + \sigma_2^2 + \cdots + \sigma_N^2}.   (2.12)

If all the repeated measurements are equally accurate measurements, i.e., the same observer is using the same instrument to perform all the measurements under the same conditions, then

\sigma_1 = \sigma_2 = \cdots = \sigma_N = \sigma_x,   (2.13)

where σ_x is the standard deviation for each measurement. Thus,

\sigma_{\bar{x}} = \frac{1}{N}\sqrt{N\sigma_x^2} = \frac{1}{\sqrt{N}}\sigma_x;   (2.14)

i.e., because the uncertainty in the mean of N noisy measurements is 1/√N times smaller than the uncertainty for one measurement, the N repeated measurements are more precise compared with a single measurement! This is the result of the law of large numbers in statistics. Based on this law, the average of values obtained from a large number of measurements should be close to the expected value and will become closer to the expected value as more and more measurements are carried out.

For any experimental measurement, one of the most important questions is how to best estimate the measured value. For example, Eq. 2.10 uses the average value to give a numerical estimation for the measured x value. So, the question becomes, does Eq. 2.10 give the best estimation for the x value? If so, why? Take the voltage measurement (Fig. 2.4) as an example. After multiple measurements, a plot of the measured voltage value versus the number of measurements is obtained, as shown in Fig. 2.8. At least two different methods can be used to estimate the voltage value. One uses Eq. 2.10, which we call the mean estimator (the green line in Fig. 2.8). Alternatively, the middle value between the minimum and maximum measured values, called the median estimator (the red line in Fig. 2.8), can also be used. Which estimator can better predict the true voltage value? According to strict statistical theory [2], if the noise in the measurement is Gaussian, the mean is the best estimator. However, for a graph such as the one shown in Fig. 2.9, the situation becomes a little complicated. The fifth measurement gives a value well off of the previous four measurements. It also has a larger error; i.e., these five measurements are not equally accurate measurements. The green line in Fig. 2.9 represents the mean estimator for these data. Clearly, this is not the best estimation because the first four measurements give very repeatable values with a small error while the fifth measurement shows a larger error.


Figure 2.8 The estimation for a measured quantity. The red line represents the median estimator, and the green line is obtained from the mean estimator.

Figure 2.9 The measured values with a nonequally accurate measurement.

Therefore, the fifth measurement is an outlier. However, because the fifth measurement is also experimental data, it cannot be neglected from the estimation. The conclusion is that the mean estimator is not appropriate for obtaining the best estimation for situations such as the one shown in Fig. 2.9, and a better estimator needs to be used. How can a better estimator be constructed for this situation? From Fig. 2.9, the fifth value has a larger error, which means that this value is less precise and possibly less accurate. Hence, it shall be given less consideration in the estimation. Because the four other values have less error, they shall be given more consideration. To realize this idea mathematically, a weighted mean estimator is proposed:

\mu = \bar{x}_w = \frac{\sum_{j=1}^{N} x_j W_j}{\sum_{j=1}^{N} W_j},   (2.15)

where Wj is the weight for the measurement xj. For Fig. 2.9, based on the previous argument, we can take

W_j = \frac{1}{\sigma_j}.   (2.16)
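A minimal NumPy sketch comparing the plain mean (Eq. 2.10) with the weighted mean of Eqs. 2.15 and 2.16 is shown below; the five voltage readings and their individual errors are hypothetical stand-ins for data such as those in Fig. 2.9.

```python
import numpy as np

# Hypothetical repeated voltage readings and their individual errors (V).
x     = np.array([2.02, 1.98, 2.01, 1.99, 3.5])
sigma = np.array([0.05, 0.05, 0.05, 0.05, 0.8])

plain_mean = x.mean()                       # Eq. 2.10: every point counts equally

W = 1.0 / sigma                             # Eq. 2.16: weight each point by 1/sigma_j
weighted_mean = np.sum(x * W) / np.sum(W)   # Eq. 2.15

print(f"mean = {plain_mean:.2f} V, weighted mean = {weighted_mean:.2f} V")
```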

The red line in Fig. 2.9 shows the result from the weighted mean, which looks far more reasonable.

2.2.2 Estimation of a relationship
In many physics (scientific) experiments, a relationship between two (or more) physical quantities (parameters) usually needs to be established. For example, to test Ohm's law, multiple pairs of voltage V and current I data of a resistor would be measured and plotted as a V–I relationship, as shown in Fig. 2.10. Even with the naked eye, the V–I relationship can be described by a linear function (or by Ohm's law), which is represented by

V = RI + C,   (2.17)

where R is the resistance of the resistor and C is a control parameter (which should be close to 0 according to Ohm's law). Based on the data shown in Fig. 2.10, how can the data be processed to obtain the best estimates of the parameters R and C? One option is to make a guess. For example, in Fig. 2.11, the red data are the predicted data points for the following two guesses: R = 1 Ω and C = 0, as well as R = 1.05 Ω and C = 0.1. Clearly, it is difficult to determine which guess gives the best estimation using the naked eye. Thus, a better quantitative strategy needs to be developed. It can be assumed that there exists a function called the goodness G of a model, which describes how good the estimations of the parameters R and C would be. Thus, G is a function of R and C. It is expected that if G is plotted against R and C, as shown in Fig. 2.12, then when G reaches a maximum, the corresponding values R0 and C0 are the best estimation for R and C.


Figure 2.10 The V–I measurement data for a resistor.


Figure 2.11 Comparison of the estimated linear relationship and the experimental data for (a) C = 0 and R = 1 and (b) C = 0.1 and R = 1.05.

Figure 2.12 The imaginary function, the goodness of a model, versus R and C.

Therefore, the remaining question is how to mathematically construct such a G function.

Least-squares fitting. One intuitive criterion for the goodness of a model is that in which all the estimated data points are closest to the experimental data points. Hence, a goodness function can be established according to the closeness of the estimated and experimental data. As shown in Fig. 2.13, assume that the experimental data is represented by (I_j, V_j) and the estimated data is represented by (I_j, V(I_j; R, C)); the closeness between the experimental and estimated data is characterized by the residue ΔV_j = V(I_j; R, C) − V_j. Thus, an intuitive way to estimate the goodness of the model is to set G equal to the sum over all ΔV_j. However, as shown in Fig. 2.13, because ΔV_j could be positive for some data points but also negative for others, the direct summation is not a good function for G. Because ΔV_j represents the closeness between the experimental and estimated data, it is expected that ΔV_j² should serve the same purpose. Therefore, we can define a function χ² to resemble G:


Figure 2.13 The residue between the experimental data and estimated data.

\chi^2 = \sum_{j=1}^{N}\left[V_j - V(I_j; R, C)\right]^2.   (2.18)

Now the problem is to find appropriate R and C values to minimize χ² (i.e., the smallest difference between the experimental data and estimated data). This procedure is called least-squares fitting. This requires

\frac{\partial\chi^2}{\partial R} = 0 \quad \text{and} \quad \frac{\partial\chi^2}{\partial C} = 0.   (2.19)

From the above two partial derivatives, one can obtain

C_0 = \frac{\sum_{j=1}^{N} I_j^2 \sum_{j=1}^{N} V_j - \sum_{j=1}^{N} I_j \sum_{j=1}^{N} I_j V_j}{D},   (2.20)

\sigma_C = \sigma_V\sqrt{\frac{1}{D}\sum_{j=1}^{N} I_j^2},   (2.21)

R_0 = \frac{N\sum_{j=1}^{N} I_j V_j - \sum_{j=1}^{N} I_j \sum_{j=1}^{N} V_j}{D},   (2.22)

\sigma_R = \sigma_V\sqrt{\frac{N}{D}},   (2.23)

where D = N\sum_{j=1}^{N} I_j^2 - \left(\sum_{j=1}^{N} I_j\right)^2 and σ_V is the error from the voltage measurements. Here we assume that σ_V is the same for all the voltage measurements.
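The closed-form results of Eqs. 2.20–2.23 translate directly into a few lines of NumPy, as sketched below; the current and voltage arrays are hypothetical measurements of the kind shown in Fig. 2.10, and sigma_V is the assumed common voltage error.

```python
import numpy as np

# Hypothetical V-I data for a resistor and a common voltage error (V).
I = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
V = np.array([0.55, 1.02, 1.48, 2.05, 2.51, 2.98, 3.52, 4.03])
sigma_V = 0.05

N = len(I)
D = N * np.sum(I**2) - np.sum(I)**2                               # D as defined after Eq. 2.23

C0 = (np.sum(I**2) * np.sum(V) - np.sum(I) * np.sum(I * V)) / D   # Eq. 2.20
sigma_C = sigma_V * np.sqrt(np.sum(I**2) / D)                     # Eq. 2.21
R0 = (N * np.sum(I * V) - np.sum(I) * np.sum(V)) / D              # Eq. 2.22
sigma_R = sigma_V * np.sqrt(N / D)                                # Eq. 2.23

print(f"R = {R0:.3f} +/- {sigma_R:.3f} ohm, C = {C0:.3f} +/- {sigma_C:.3f} V")
```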


Equations 2.20–2.23 give the results for the least-squares fitting for a linear function. In many cases, the relationships measured in an experiment may not be linear. In these cases, nonlinear least-squares fitting is needed. To obtain the best-fitting parameters for nonlinear functions, two different strategies can be used: the model function can be converted into a linear function, so that the above linear least-squares fitting equations can be used for the converted function to determine the desired parameters; or, if the function cannot be converted to a linear function, some special methods can be used to estimate the optimized parameters (see Johnson [3]). The latter strategy involves mathematical knowledge beyond the undergraduate level; if interested, refer to [3] or other related literature. The first strategy depends on the detailed expression of the nonlinear function. For example, if a power function is used to model the y–x relationship,

y = ax^b \quad (a > 0),   (2.24)

where a and b are the fitting parameters; this power function can then be converted into a linear function by letting X = ln x and Y = ln y, so that Eq. 2.24 becomes

Y = \ln a + bX.   (2.25)

Therefore, a linear least-squares fitting can be used for the X–Y relationship in Eq. 2.25 to determine the parameters ln a and b (a short numerical sketch follows Table 2.1). Many other nonlinear functions can also be converted to a linear function. Table 2.1 lists some representative functions and the corresponding conversions.

Table 2.1 Some representative nonlinear functions and their conversions to linear functions.

Original function                Conversion                       Equivalent linear function
y = ae^(bx) (a > 0)              X = x and Y = ln y               Y = ln a + bX
y = ae^(bx²) (a > 0)             X = x² and Y = ln y              Y = ln a + bX
y = ae^(b/x) (a > 0)             X = 1/x and Y = ln y             Y = ln a + bX
y = ax^b + c                     X = ln x and Y = ln(y − c)       Y = ln a + bX
y = x/(ax + b) + c (a, c > 0)    X = 1/x and Y = 1/(y − c)        Y = a + bX
y = 1/(ax + b) (a > 0)           X = x and Y = 1/y                Y = b + aX
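As an illustration of the first strategy (conversion to a linear form), the sketch below fits the power function of Eq. 2.24 by applying a linear fit to X = ln x and Y = ln y; the data are synthetic placeholders and only NumPy is assumed.

```python
import numpy as np

# Synthetic data that roughly follow y = a * x**b with a = 2.0 and b = 1.5.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 5.5, 10.6, 15.8, 22.6, 29.0])

X, Y = np.log(x), np.log(y)     # conversion from Table 2.1 (first row strategy)
b, lnA = np.polyfit(X, Y, 1)    # linear fit of Eq. 2.25: Y = ln a + b X
a = np.exp(lnA)

print(f"a = {a:.2f}, b = {b:.2f}")
```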

2.3 Model Testing
Another important issue regarding experimental data analysis is how to assess the model used to fit the experimental data, i.e., whether the fitting function used is a good model to fit the experimental data. From the previous section, with the experimental data and a proposed model (relationship), a least-squares method can be used to give the best estimation for the corresponding


Figure 2.14 Examples of two sets of different experimental data.

parameters. For example, Fig. 2.14 shows two sets of experimental data (Cases I and II). A linear model, y = kx + C, could be used to fit both data sets. The fitting results are shown as the straight lines in Fig. 2.14. For Case I, the fitting parameters k = 2.02 ± 0.01 and C = 4.01 ± 0.08 are obtained, while for Case II, k = 1.77 ± 0.06 and C = 4.0 ± 0.4. Clearly, these two sets of data give similar linear-fitting parameters. However, a visual examination of the fitted and experimental data reveals that the model matches well with the data for Case I, whereas for Case II, there are huge discrepancies between the data predicted by the model and the experimental data. This means that the linear model is not a good fit to the data in Case II, and the data do not follow a linear relationship. Evidently, such a judgement is based on visual appraisal by the observers. Is there a rigorous way to confirm such a conclusion; i.e., does the data confirm or contradict a model? Or how likely does the model represent a data set derived from noisy measurements? Answering this question is called model testing and can be addressed using rigorous statistical theory. Here, we present three model testing methods.

Residue method. One of the simplest methods for model testing is to investigate the residues between the estimated data and experimental data, Δy_j = y(x_j) − y_j. Figure 2.15(a) shows the residues for the linear model fitting. The residues for Case I fluctuate randomly and uniformly around 0, whereas the residues for Case II follow a deterministic trend [Fig. 2.15(c)]: they decrease from a large value, reach a minimum, and then increase. Thus, if the residue appears to be more like random noise, the model fits the data well; if the residue does not resemble random noise, the model is not a good fit for the data. The distribution (histogram) of the residue can be investigated. The histogram for random noise should follow a Gaussian distribution as shown for Case I in Fig. 2.15(b), whereas for Case II, the distribution of the residue [Fig. 2.15(d)] does not show a peak.


Figure 2.15 (a) The residue and (b) its corresponding distribution for Case I. (c) The residue and (d) its distribution for Case II.

Coefficient of determination (R² or R-squared). The residue method essentially describes the goodness of the fitting, which can also be characterized by the overall error. Based on Eq. 2.18, once the parameters k and c are estimated via a least-squares fitting, the goodness of the fitting can be characterized by the coefficient of determination R² [4],

R^2 = 1 - \frac{\mathrm{MSE}}{\mathrm{MST}},   (2.26)

where MSE refers to the mean square error of the fitting and can be expressed as

\mathrm{MSE} = \chi^2/N = \frac{1}{N}\sum_{j=1}^{N}\left[y_j - y(x_j; k, c)\right]^2,   (2.27)

whereas MST represents the mean total sum of squares,

\mathrm{MST} = \frac{1}{N}\sum_{j=1}^{N}\left[y_j - \bar{y}\right]^2,   (2.28)


where \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i is the average value of y_i. For an appropriate function to fit the experimental data, the residue will be very small compared to the average sum; i.e., MSE ≪ MST, so that R² approaches 1. However, if the proposed function cannot fit the experimental data well, the R² will be significantly less than 1. As shown in Fig. 2.14(a), the linear model gives R² = 0.998 for Case I, while for Case II [Fig. 2.14(b)], the linear model results in R² = 0.930.

Chi-square test. A more quantitative test method is to compare the weighted χ²_w to the degrees of freedom (DOF) of the fitted data. Here, χ²_w is defined as

¼

 N  X yj  yðxj ;m,bÞ 2 j¼1

sj

,

(2.29)

where σ_j is the error from the jth measurement. The DOF is defined as

DOF = total number of data points − number of fitting parameters.   (2.30)

If the χ²_w value is comparable to the DOF, the model fits the data very well. If χ²_w ≫ DOF, the data lie unreasonably far from the best-fit model. If χ²_w ≪ DOF, the estimated errors may be wrong. For example, in the data for both Cases I and II, the DOF = 58 for the linear model. Based on Fig. 2.15, χ²_w = 28 for Case I, which is relatively close to the DOF of 58. Thus, the linear model fits the data in Case I very well. For Case II, χ²_w = 679, which is ≫ 58. Thus, we conclude that the linear model is not good for Case II.
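A minimal sketch that computes the three diagnostics described above (residues, R², and the weighted χ² compared with the DOF) for a linear fit is shown below; the data and errors are hypothetical and only NumPy is assumed.

```python
import numpy as np

# Hypothetical data with a common measurement error sigma.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([4.1, 6.0, 8.2, 9.9, 12.1, 13.9, 16.1, 17.8])
sigma = np.full_like(y, 0.1)

k, c = np.polyfit(x, y, 1)              # least-squares estimates of slope and intercept
y_fit = k * x + c

residue = y_fit - y                     # residue method: should resemble random noise
MSE = np.mean(residue**2)               # Eq. 2.27
MST = np.mean((y - y.mean())**2)        # Eq. 2.28
R2 = 1 - MSE / MST                      # Eq. 2.26

chi2_w = np.sum((residue / sigma)**2)   # Eq. 2.29
DOF = len(y) - 2                        # Eq. 2.30: data points minus fit parameters

print(f"R^2 = {R2:.4f}, chi_w^2 = {chi2_w:.1f}, DOF = {DOF}")
```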

References [1] H. J. C. Berendsen, A Student’s Guide to Data and Error Analysis, Cambridge University Press, New York (2011). [2] J. E. Freund, Mathematical Statistics, 5th ed., Prentice Hall, Englewood Cliffs, NJ (1992). [3] M. L. Johnson, “Nonlinear least‐squares fitting methods,” Methods Cell Biol. 84, 781–805 (2008). [4] N. R. Draper and H. Smith, Applied Regression Analysis, 3rd ed., John Wiley & Sons, Inc., New York (1998).

Chapter 3

Law of Reflection

3.1 Introduction
The law of reflection is a principle that describes a ray of light reflecting off a smooth surface. It states that when a ray strikes the surface, the angle of reflection θ_r is equal to the angle of incidence θ_1, i.e., θ_1 = θ_r, as shown in Fig. 3.1. Notice that the angles θ_r and θ_1 are defined with respect to the surface normal.

Figure 3.1 The law of reflection.

3.2 Smartphone Experiment (Alec Cook and Ryan Pappafotis, 2015)
3.2.1 General strategy
In this experiment, the smartphone acts as an imaging device that records the incident and the reflected laser rays into and out of a mirror's surface. After capturing photos at various laser incident angles, both the angle of incidence θ_1 and the angle of reflection θ_r can be extracted from the image, and a plot of θ_r versus θ_1 can be generated with a slope equal to or close to 1, which demonstrates the law of reflection.


3.2.2 Materials
• A smartphone
• A laser pointer
• Two biconvex lenses (focal lengths f = 50 and 200 mm)
• A flat mirror
• LEGO® bricks
• Printable protractor

3.2.3 Experimental setup
The experimental setup is shown in Fig. 3.2. The two lenses are positioned so that they form a beam expander (see Fig. 3.3) to make the laser pointer's beam larger and collimated.

Figure 3.2 Experimental setup for the law of reflection.

Figure 3.3 Beam expander setup. The f1 and f2 are the focal lengths of Lens 1 and Lens 2, respectively.


Figure 3.4 Examples of reflection images taken by a smartphone.

The printed protractor and the mirror are fixed on a book. A line in the middle of the protractor marks the center of the mirror. The book with the protractor and the mirror can be rotated manually against a second book underneath to change the incident angle θ_1. During the experiment, the incident beam is adjusted so that its incident location on the mirror coincides with the point at which the black center line of the protractor meets the mirror. Figure 3.4 shows two example images taken in the experiment. The circle (half real and half in the mirror) in these images denotes the incident location.

3.2.4 Experimental results
From multiple smartphone pictures at various angles of incidence, a set of angles (θ_1, θ_r) can be obtained. A plot of θ_r against θ_1 is shown in Fig. 3.5. Note that the graph is linear, and a linear fit with the function θ_r = kθ_1 yields a slope of k = 0.99 ± 0.01, which is expected from the law of reflection, θ_1 = θ_r.


Figure 3.5 A plot of θ_r against θ_1 using the experimentally obtained data. The line is a linear fit of the data, with a slope of k = 0.99 ± 0.01.
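For a fit of the form θ_r = kθ_1 (a line through the origin), the least-squares slope is k = Σθ_1θ_r / Σθ_1². The sketch below is a minimal NumPy illustration of this fit; the angle pairs are hypothetical placeholders for values read from the smartphone images.

```python
import numpy as np

# Hypothetical (theta_1, theta_r) pairs in degrees, read from the smartphone images.
theta_1 = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
theta_r = np.array([5.1, 9.9, 15.2, 19.8, 25.1, 29.8, 35.3])

# Least-squares slope for a line through the origin: theta_r = k * theta_1.
k = np.sum(theta_1 * theta_r) / np.sum(theta_1**2)
# 1-sigma uncertainty of the slope estimated from the scatter of the residues.
resid = theta_r - k * theta_1
sigma_k = np.sqrt(np.sum(resid**2) / (len(theta_1) - 1) / np.sum(theta_1**2))

print(f"k = {k:.3f} +/- {sigma_k:.3f}")
```

The same fit applies to the refraction data of Chapter 4, with sin θ_2 and sin θ_1 in place of θ_1 and θ_r.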

Chapter 4

Law of Refraction

4.1 Introduction
The law of refraction is the principle describing a ray of light that is incident from one optical medium to another optical medium at a smooth interface. The angle of incidence (θ_1) and the angle of refraction (θ_2) as shown in Fig. 4.1 obey Snell's law, n_1 sin θ_1 = n_2 sin θ_2, which is determined by the refractive indices n_1 and n_2 of the two media. Notice that each angle is defined with respect to the surface normal.

Figure 4.1 The law of refraction.

4.2 Smartphone Experiment (Alec Cook and Ryan Pappafotis, 2015)
4.2.1 General strategy
In this experiment, the smartphone acts as an imaging device for recording the incident and refracted laser beams at the air–glass interface. After capturing images at various incident angles, the angle of incidence θ_1 and the angle of


refraction θ_2 can be extracted from the image and a plot of sin θ_1 versus sin θ_2 can be generated. The plot should be linear, and its slope should be the ratio of the refractive indices of the two media (air and glass).

4.2.2 Materials
1. A smartphone
2. A laser pointer
3. Two biconvex lenses (f = 50 and 200 mm)
4. A semicircular glass prism
5. A collection of LEGO® bricks
6. A printed circular protractor

4.2.3 Experimental setup
The experimental setup is identical to the setup for the law of reflection (Fig. 3.2), except that the mirror is replaced by a semicircular glass prism. One should ensure that the laser beam passes through the center of the glass prism so that the beam can go straight through at the curved interface. Figure 4.2 shows two example images from the experiments.

Figure 4.2 Example images demonstrating the law of refraction taken using a smartphone: (a) θ_1 = 32 deg and (b) θ_1 = 40 deg.

4.2.4 Experimental results
Using multiple smartphone images taken for different angles of incidence θ_1, a set of angles (θ_1, θ_2) can be obtained and a plot of sin θ_1 versus sin θ_2 can be generated, as shown in Fig. 4.3. It is expected that the plot is linear, and a fit with the equation sin θ_1 = k sin θ_2 shows that the slope is k = 1.51 ± 0.01. Because it is expected that k = n_2/n_1, the refractive index of glass n_2 is calculated to be 1.51 ± 0.01, which is close to the expected value.


Figure 4.3 The experimentally obtained sin θ_1 versus sin θ_2 plot. The line is a linear fit of the data, with a slope of k = 1.51 ± 0.01.

Chapter 5

Image Formation

5.1 Introduction
For both mirrors and lenses under the paraxial approximation, the image formed by both components follows the same image equation,

\frac{1}{p} + \frac{1}{i} = \frac{1}{f},   (5.1)

where p is the object distance, i is the image distance, and f is the focal length of the mirror or the lens. The paraxial approximation is the small angle approximation for geometric optics. It applies to objects that lie close to the optical axis throughout the system and radiate light rays that make a small angle to the optical axis. Figure 5.1 shows the real images formed by a concave mirror and a converging lens as well as defined parameters associated with these two components. The lateral magnification M is defined as the ratio of the image height h_i to the object height h_o, i.e., M = h_i/h_o. Note that when the image is upside down with respect to the object, h_i is negative. Based on the image equation above, one can also obtain M = −i/p. Thus, to experimentally validate the


Figure 5.1 A real image formed by (a) a concave mirror and (b) a converging lens.


image equation, one can (1) directly test the image equation (i.e., by measuring p and i) or (2) indirectly measure the lateral magnification M (using its relationship to p and i). Most standard image formation labs use the first method to determine the focal length of a mirror or a lens; this method can only be implemented in the case of real image formation, i.e., a concave mirror or a converging lens [1, 2]. To determine the focal length of a convex mirror or a diverging lens, an extra known converging lens is needed [3, 4]. However, with a smartphone, one can take advantage of the autofocus feature of the camera and test the image equation for any type of mirror or lens using the second method, which is illustrated as a two-step process.

Step 1. The smartphone is used to directly observe the image I_1 formed through a converging or a diverging lens L_1, as shown in Fig. 5.2. In this step, the image I_1 acts as an object for the smartphone camera and forms the second image I_2 on the image sensor. The smartphone camera system can be treated as a single converging lens with varying focal lengths. The final image distance i_2 is always fixed at a constant due to autofocus (see Chapter 1), and the height of the image I_2 is h_i. The first object distance p_1 and the distance d between L_1 and the smartphone can be measured experimentally. For this image formation case, p_2 = d − i_1.

Step 2. The lens L_1 is removed while the locations of the object and the smartphone are intact. Using the autofocus, an image I′ of the object O can be directly captured in the smartphone and the image height changes to h_i′. Therefore, for Step 1, the final lateral magnification M of image I_2 is

M = \frac{h_i}{h_o} = M_1 M_2 = \frac{i_1 i_2}{p_1 p_2},   (5.2)

Figure 5.2 A two-step procedure for testing the image formation equation.


where M_1 is the lateral magnification of image I_1 (first image formation) and M_2 is the lateral magnification of image I_2 formation with image I_1 as the object (second image formation). According to Eq. 5.1, the formation of image I_1 gives

\frac{1}{p_1} + \frac{1}{i_1} = \frac{1}{f_1},   (5.3)

where f_1 is the focal length of L_1. For Step 2, because the smartphone has the autofocus feature, the image distance i_2 is assumed to be a constant while the focal length of the smartphone camera system is automatically adjusted to accommodate for the change of the object distance d + p_1. Thus, the lateral magnification M′ is

M' = \frac{h_i'}{h_o} = \frac{i_2}{d + p_1}.   (5.4)

Based on Eqs. 5.2–5.4, the ratio of the heights of these two-step images can be represented as a function of p_1/d,

\frac{h_i}{h_i'} = \frac{M}{M'} = \frac{i_1(d + p_1)}{p_1 p_2} = \left(\frac{d}{p_1} + 1\right)\frac{1}{1 - \frac{d}{i_1}} = \frac{\frac{p_1}{d} + 1}{1 + \frac{p_1}{d} - \frac{d}{f_1}\frac{p_1}{d}}.   (5.5)

Equation 5.5 is valid for both converging and diverging lenses. Thus, if one plots h_i/h_i′ versus p_1/d for a fixed d, one can obtain the focal length f_1 of L_1.

5.2 Smartphone Experiment (Michael Biddle and Robert Dawson, 2015; Yoong Sheng Phang, 2021)
5.2.1 General strategy
The two-step method outlined above is used to demonstrate image formation by both a converging and a diverging lens. The smartphone camera is used to acquire images of the object and obtain the image height ratio for Steps 1 and 2.

5.2.2 Materials
1. A smartphone
2. A converging lens (f_1 = 100 mm)
3. A diverging lens (f_1 = −50 mm)
4. A meterstick
5. A hex key with a rubber sleeve


Figure 5.3 (a) The experimental setup for determining the focal length of L_1 using a smartphone, and example images taken from a diverging lens for (b) Step 1 and (c) Step 2 at p_1 = 10 mm and d = 125 mm.

5.2.3 Experimental setup
An example of the experimental setup is shown in Fig. 5.3(a). A converging lens (f_1 = 100 mm) and a diverging lens (f_1 = −50 mm) are used as L_1, respectively. A meterstick is held in place on the table using masking tape. A smartphone is secured perpendicularly to a flat surface using a makeshift holder at one end of the meterstick. The lens holder is fixed into the optics table a distance d = 125 mm from the smartphone, and the hex key with a purple rubber sleeve is used as the object. Before experimentation, it is important to ensure that the heights of L_1, the hex key, and the phone camera are adjusted so that they are centered and perpendicular to one another. The object distance p_1 is varied from 10 to 100 mm in 10-mm increments by sliding the object along the meterstick. At each p_1 location, two autofocused photos of the hex key are taken, one with L_1 in place and one without L_1 in place [see examples in Figs. 5.3(b) and (c), respectively].

5.2.4 Experimental results
The pixel values of h_i and h_i′ are determined using ImageJ (https://imagej.nih.gov/ij/; see Appendix III for a brief tutorial) for the photos taken at Steps 1 and 2 at each p_1, and h_i/h_i′ is plotted against p_1/d as shown in Fig. 5.4. By fitting the data with Eq. 5.5, the focal length f_1 can be extracted as the fit parameter. For the converging lens, we obtain f_1 = 100 ± 0.8 mm, and for the diverging lens, f_1 = −49.6 ± 0.7 mm is extracted. These values are very close to the focal lengths reported by the lens manufacturer.
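A sketch of the fitting step is shown below, assuming SciPy is available. The height-ratio values are placeholders for the numbers measured with ImageJ, Eq. 5.5 is coded with f_1 as the only free parameter, and d is fixed at 125 mm as in the setup above.

```python
import numpy as np
from scipy.optimize import curve_fit

d = 125.0   # fixed lens-to-smartphone distance (mm)

def height_ratio(x, f1):
    """Eq. 5.5 with x = p1/d and the focal length f1 as the fit parameter."""
    return (x + 1.0) / (1.0 + x - (d / f1) * x)

# Placeholder height ratios for p1 = 10-100 mm (stand-ins for ImageJ measurements).
p1 = np.arange(10.0, 101.0, 10.0)
ratio = np.array([1.10, 1.21, 1.31, 1.44, 1.55, 1.69, 1.80, 1.96, 2.10, 2.26])

popt, pcov = curve_fit(height_ratio, p1 / d, ratio, p0=[80.0])
f1_fit, f1_err = popt[0], np.sqrt(pcov[0, 0])
print(f"f1 = {f1_fit:.1f} +/- {f1_err:.1f} mm")
```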


Figure 5.4 The experimental plot of h_i/h_i′ versus p_1/d for a converging (blue data points) and diverging (red data points) lens. The corresponding fits with Eq. 5.5 are shown by the red and blue curves.

References [1] A. Ogston, A. Hanks, and D. Griffith, “Basic optics system OS-8515C,” PASCO, https://d2n0lz049icia2.cloudfront.net/product_document/BasicOptics-System-Manual-OS-8515C.pdf (2001). [2] J. Wang and W. Sun, “Measuring the focal length of a camera lens in a smart-phone with a ruler,” Phys. Teach. 57(1), 54–54 (2019). [3] H. Roy, “A new method for measuring the focal length of a diverging lens,” Am. J. Phys. 40(12), 1869–1870 (1972). [4] S. Brown, “Finding the focal length of a diverging lens,” Phys. Teach. 35(8), 452–452 (1997).

Chapter 6

Linear Polarization

6.1 Introduction
Light is a transverse electromagnetic (EM) wave in which the oscillation directions of the electric field and the magnetic field are perpendicular to its propagation direction. Linearly polarized light is an EM wave with a constant orientation of the oscillating electric field, and the orientation of the electric field defines the direction of the polarization. As shown in Fig. 6.1, linearly polarized light with its polarization along the x direction and its propagation along the z axis can be written as

$$\vec{E}_x = \hat{x}\, E_{x0} \cos(k_l z - \omega t), \tag{6.1}$$

where Ex0 is the amplitude of the electric field, x̂ is the unit vector along the x axis, and kl and ω are the wave number and angular frequency of the EM wave, respectively. The output of light sources such as the Sun, flashlights, and household lamps is not linearly polarized, whereas a laser's output is linearly or partially polarized. One can use a polarizer to change unpolarized light into linearly polarized light. Every polarizer has a polarization axis identified by the manufacturer. An ideal polarizer fully transmits only the component of the electric field parallel to its polarization axis. For instance, Fig. 6.1 depicts a polarizer whose polarization axis forms an angle α with respect to the x axis. If the linearly polarized light propagates through the polarizer, the electric field E⃗p transmitted through the polarizer is expressed as

$$\vec{E}_p = E_{x0}\cos\alpha\,(\hat{x}\cos\alpha - \hat{y}\sin\alpha)\cos(k_l z - \omega t). \tag{6.2}$$

Thus, the intensity Ip of the light passing through a linear polarizer can be written as

$$I_p(\alpha) = I_0 \cos^2\alpha, \tag{6.3}$$

where $I_0 = \frac{1}{2} c \varepsilon_0 E_{x0}^2$ is the intensity of the incident light. Equation 6.3 is known as Malus's law, and it describes the basic properties of linearly polarized light transmitted through a polarizer.

Figure 6.1 Linearly polarized light passing through a polarizer.

6.2 Smartphone Experiment (Sungjae Cho and Aojie Xue, 2019)

6.2.1 General strategy
To demonstrate Malus's law, two polarizers are used to measure the change of light intensity as a function of the relative angle between their polarization axes. The smartphone camera is used as a light intensity detector. One should ensure that the intensity of the light does not saturate the camera.

6.2.2 Materials
1. A smartphone
2. A laser pointer
3. A laser pointer holder
4. Two sheet polarizers
5. Two polarizer holders
6. A printed protractor
7. A smartphone holder

6.2.3 Experimental setup
The experimental setup is shown in Fig. 6.2 (more experimental setups can be found in Chapter 21). Depending on the dimensions of the laser pointer, the polarizer sheets, and the smartphone, three simple holders, shown in Fig. 6.2, are designed and 3D printed. The holders for the laser pointer and the smartphone are U-shaped slots that allow the items to fit snugly within them. A hole is designed in the smartphone holder to accommodate the smartphone camera. The holders for the polarizer sheets are rectangular frames with rectangular openings. The polarizer sheets are taped to these rectangular frames so that they can be removed and reattached conveniently. The orientation of one polarizer is changed using a protractor. In this setup, one should ensure that the center of the laser pointer, the hole in the smartphone holder, and the center of the rectangular opening in the polarizer holder are aligned and at the same height.


Figure 6.2 The setup for Malus's law experiments and the corresponding CAD designs for the different holders.

The alignment of all the experimental components is very important in this experiment. For example, to align the laser beam and the phone's camera, shine the laser beam directly into the camera lens at a distance specific to the experiment and search for the laser beam reflected from the lens using a piece of white paper facing the smartphone camera. If a reflected laser spot is observed on the paper, the laser beam is not perpendicular to the smartphone lens. Tilt the phone slightly until no reflected laser spot is observed on the paper, or until the location of the reflected laser spot is closest to the center of the laser pointer. Doing so ensures that the laser beam is perpendicularly incident on the smartphone lens, i.e., the incident angle θ1 = 0 deg. The same alignment should be repeated for the two polarizers. The two polarizer sheets are cut into rectangular shapes along the same polarization direction and taped onto the holders in their lengthwise direction so that the relative rotation angle α = 0 deg and a maximum transmission intensity can be achieved initially. The second polarizer is then rotated in increments of 15 deg using a protractor, and the transmitted laser intensity Ip through the two polarizers is recorded by the phone camera until the rotation angle α reaches a full 180 deg. During the experiment, any ambient light should be turned off. The light intensity in units of lux (one lumen per square meter) is determined using a light meter phone app.

6.2.4 Experimental results
The experimentally determined Ip/I0 versus α data are shown in the scatter plot in Fig. 6.3, and the corresponding fit based on a modified Eq. 6.3, Ip(α)/I0 = A cos²α + B, is shown by the solid red curve. Here, both A and B are fitting parameters. Ideally, A should approach 1 while B should be close to 0. The fit of the experimental data in Fig. 6.3 gives A = 0.91 ± 0.04 and B = 0.07 ± 0.02. The χ² value calculated using multiple data sets is 0.003423, which is a low value.

Figure 6.3 A plot of the relative light intensity Ip/I0 versus the polarizer rotation angle α to show Malus's law. The data points are fit with the function Ip(α)/I0 = A cos²α + B. The fitting parameters are determined to be A = 0.91 ± 0.04 and B = 0.07 ± 0.02.
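As a sketch of this analysis step, the following Python snippet fits intensity readings with the modified Malus's law A cos²α + B. The angle and intensity arrays are stand-in values for whatever the light meter app or ImageJ returns, not the data plotted in Fig. 6.3.

import numpy as np
from scipy.optimize import curve_fit

def malus(alpha_deg, A, B):
    # modified Malus's law: Ip/I0 = A*cos^2(alpha) + B
    return A * np.cos(np.radians(alpha_deg))**2 + B

alpha = np.arange(0, 181, 15)  # polarizer rotation angle (deg)
ip_over_i0 = np.array([0.98, 0.92, 0.76, 0.53, 0.30, 0.13, 0.07, 0.12,
                       0.31, 0.52, 0.75, 0.91, 0.97])  # stand-in readings

popt, pcov = curve_fit(malus, alpha, ip_over_i0, p0=[1.0, 0.0])
perr = np.sqrt(np.diag(pcov))
print(f"A = {popt[0]:.2f} +/- {perr[0]:.2f}, B = {popt[1]:.2f} +/- {perr[1]:.2f}")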

Chapter 7

Fresnel Equations

7.1 Introduction
When polarized light is incident on an interface between two media, the reflectance R and transmittance T of the light depend on the intrinsic properties of the two media (their indices of refraction) and on the polarization state of the incident light. The plane of incidence is defined by the light ray and the surface normal, as shown in Fig. 7.1. When the incident light's electric field oscillates perpendicular to the plane of incidence, the light is called s-polarized. When the electric field is parallel to the plane of incidence, the light is called p-polarized.

Figure 7.1 The definition of two different polarization states of incident light.

If we only consider the reflection, the reflectance of the s- or p-polarized light at the interface can be written as

$$R_s = \frac{I_{rs}}{I_{is}} = \left[\frac{\cos\theta_1 - \sqrt{\left(\frac{n_2}{n_1}\right)^2 - \sin^2\theta_1}}{\cos\theta_1 + \sqrt{\left(\frac{n_2}{n_1}\right)^2 - \sin^2\theta_1}}\right]^2, \tag{7.1}$$

$$R_p = \frac{I_{rp}}{I_{ip}} = \left[\frac{\left(\frac{n_2}{n_1}\right)^2\cos\theta_1 - \sqrt{\left(\frac{n_2}{n_1}\right)^2 - \sin^2\theta_1}}{\left(\frac{n_2}{n_1}\right)^2\cos\theta_1 + \sqrt{\left(\frac{n_2}{n_1}\right)^2 - \sin^2\theta_1}}\right]^2, \tag{7.2}$$

where Irs (Irp) and Iis (Iip) are the intensities of the reflected and incident light for s- (p-) polarization, θ1 is the angle of incidence, and n1 and n2 are the indices of refraction of the two media. To test Eq. 7.1 or 7.2, one needs to (1) generate polarized light, (2) vary the angle of incidence, and (3) detect the intensity of the light.
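For readers who want to tabulate or plot the expected reflectances before doing the experiment, a minimal Python sketch of Eqs. 7.1 and 7.2 follows. The refractive indices in the example call are assumptions for an air–glass interface.

import numpy as np

def fresnel_reflectance(theta1_deg, n1=1.0, n2=1.5):
    # Eqs. 7.1 and 7.2: reflectances Rs and Rp at incidence angle theta1
    theta1 = np.radians(theta1_deg)
    ratio2 = (n2 / n1)**2
    root = np.sqrt(ratio2 - np.sin(theta1)**2)
    rs = (np.cos(theta1) - root) / (np.cos(theta1) + root)
    rp = (ratio2 * np.cos(theta1) - root) / (ratio2 * np.cos(theta1) + root)
    return rs**2, rp**2

# example: reflectances for an assumed air-glass interface from 0 to 85 deg
Rs, Rp = fresnel_reflectance(np.arange(0, 86, 5))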

7.2 Smartphone Experiment (Graham McKinnon, 2020)

7.2.1 General strategy
In this experiment, the smartphone camera is used as a light intensity detector. One should ensure that the intensity of the light does not saturate the camera and that the experiment is performed in a dark environment for the best results.

7.2.2 Materials
1. A smartphone
2. A red laser diode
3. A sheet polarizer
4. A glass slide
5. Play-Doh®
6. Cardboard
7. A circular lid

7.2.3 Experimental setup
The experiment is set up on a cardboard base as shown in Fig. 7.2. The light source is a red laser diode fixed on a book by tape (see Chapter 21). The polarizer sheet is secured by a cardboard stand in front of the laser. A glass slide is held perpendicular to the top surface of the circular lid by two chunks of Play-Doh. A protractor is printed or drawn around the base of the lid so that the incident angle θ1 can be easily read, or a straw can be added to the center of the circular lid as an extension to indicate θ1. A piece of white paper taped to a cardboard frame serves as a screen. A smartphone holder (not shown) can be designed using LEGO® bricks or other household items to prevent the phone from shifting during measurements and is positioned behind the screen. The smartphone holder and the screen can be moved along the circumference of a large circle drawn on the cardboard base to take photos of the reflected beams. The incident intensity of the laser beam, Iis, is measured by removing the glass slide and taking a picture of the screen when it is positioned directly in the path of the laser. The glass slide is rotated from 10 to 85 deg in 5-deg increments, and at each incident angle a photo of the reflected light on the screen is taken. Examples of photos taken at θ1 = 10, 30, 60, and 80 deg are shown in Fig. 7.3, revealing a steady increase in reflected intensity. The reflected intensity is determined by integrating the light intensity of the laser spot using ImageJ (https://imagej.nih.gov/ij/download.html; see Appendix III).

Figure 7.2 The experimental setup on a large cardboard base.

Figure 7.3 Increase in reflected intensity with increasing incident angle θ1.

7.2.4 Preliminary results
The average reflectance Rs over 10 pictures at each θ1 is calculated from the photos and plotted as a function of θ1 in Fig. 7.4. A modified version of Eq. 7.1,

$$R_s = \left[\frac{n_1\cos(\theta_1 - \theta_0) - \sqrt{n_2^2 - \sin^2(\theta_1 - \theta_0)}}{n_1\cos(\theta_1 - \theta_0) + \sqrt{n_2^2 - \sin^2(\theta_1 - \theta_0)}}\right]^2,$$

where θ0 is an additional fitting parameter that allows for an offset in the measured incident angle, is used to fit the data shown in Fig. 7.4. The refractive index of glass is extracted as n2 = 1.5 ± 0.2.
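One possible form of this fitting step is sketched below in Python; the reflectance data are simulated stand-ins (with artificial noise) rather than the experimental values, and n1 = 1 for air is assumed.

import numpy as np
from scipy.optimize import curve_fit

def rs_model(theta1_deg, n2, theta0_deg):
    # modified Eq. 7.1 with n1 = 1 (air) and an angular offset theta0
    t = np.radians(theta1_deg - theta0_deg)
    root = np.sqrt(n2**2 - np.sin(t)**2)
    return ((np.cos(t) - root) / (np.cos(t) + root))**2

theta1 = np.arange(10, 86, 5)
Rs_meas = rs_model(theta1, 1.5, 1.0) + np.random.normal(0.0, 0.005, theta1.size)

popt, pcov = curve_fit(rs_model, theta1, Rs_meas, p0=[1.4, 0.0])
print(f"n2 = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")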

Figure 7.4 The plot of the experimentally obtained reflectance Rs versus θ1. The solid curve is the fit based on the modified Eq. 7.1, from which the refractive index of glass is extracted as n2 = 1.5 ± 0.2.

Chapter 8

Brewster's Angle

8.1 Introduction
According to the Fresnel equations, p-polarized light incident on an interface at a specific angle θB is completely transmitted at the interface, and there is no reflection, i.e., Rp = 0 (see Fig. 8.1). This particular angle θB is called Brewster's angle. According to Eq. 7.2 (see Chapter 7), Brewster's angle is determined by

$$\tan\theta_B = \frac{n_2}{n_1}, \tag{8.1}$$

where n1 and n2 are the refractive indices of the two media at the interface, as shown in Fig. 8.1. In addition, the transmission angle θt satisfies the condition

$$\theta_B + \theta_t = 90~\mathrm{deg}. \tag{8.2}$$

According to Eq. 8.1, if one can measure Brewster’s angle at an interface, the refractive index n2 can be easily determined for a given n1 (for example, air).

Figure 8.1 p-Polarized light at an interface when the incident angle θ1 = θB.


8.2 Smartphone Experiment (Robert Bull and Daniel Desena, 2019)

8.2.1 General strategy
The smartphone camera is used as a light intensity detector. Alternatively, it can be used to take a photo of the incident and refracted light beams to determine the relative angle between these two beams. An accurate value for Brewster's angle can also be determined by analyzing a photo of the reflected and refracted laser beams, because θB + θt = 90 deg when the incident angle equals Brewster's angle.

8.2.2 Materials
1. A smartphone
2. A laser pointer
3. A polarizer
4. Two 1″ × 1″ × 4″ wooden sticks
5. One strip of industrial-strength VELCRO®
6. A wooden screw
7. Two stands
8. A ruler
9. Light corn syrup (glucose syrup)

8.2.3 Experimental setup
The experimental setup is shown in Fig. 8.2. Two long wooden sticks act as rotary arms to change the laser incident angle and the smartphone detection angle. The smartphone is attached to the end of one arm using a clamp, while a laser pointer and a polarizer are attached to the end of the other arm with VELCRO to ensure that the incident laser beam is p-polarized. The laser beam is adjusted so that it is parallel to the long edge of the wooden stick. The other ends of the two arms are screwed together at a base to form a "V"-shaped joint so that both arms can be rotated freely. The two arms are supported by two stands of the same height. By moving the stands horizontally along the table, the angle of the arms can be adjusted. A petri dish filled with water or glucose solution of different concentrations is placed on the table. The height of the petri dish is adjusted so that when Arm 1 is rotated, the position of the incident laser beam on the surface of the liquid in the petri dish remains roughly the same. One can determine the incident angle θ1 from the geometric arrangement of the arm, the stand, and the petri dish. During this lab, it is essential to keep the petri dish in the center of the apparatus and the beam in the center of the petri dish. Brewster's angle θB is identified as the incident angle at which the reflected intensity received by the smartphone reaches a minimum. For glucose solutions of different concentrations, the refractive index is determined from Eq. 8.1 after identifying the corresponding θB.

Figure 8.2 Experimental setup for the measurement of Brewster's angle.

8.2.4 Experimental results
Figures 8.3(a) and (b) show the obtained reflectance Rp as a function of the incident angle θ1 for water and a 45% glucose solution, respectively. Based on the fits using Eq. 7.2, the corresponding θB is determined to be 53.3 and 53.7 deg, respectively. Theoretically, θB = 53 deg for water and θB = 54.6 deg for a 45% glucose solution (n2 = 1.4062) [1]. The experimental measurements agree well with these theoretical values. Based on the measurement strategy above, one can determine θB for glucose solutions with different concentrations. Figure 8.4 shows the experimental results, which reveal that θB increases almost linearly with the glucose concentration c. A linear fit gives θB = θB0 + kc, where θB0 = 51.70 ± 0.03 deg and k = 5.4 ± 0.1 deg. It is expected that θB0 should be close to 53 deg, the θB of water. The resulting θB0 = 51.70 deg is noticeably smaller than 53 deg, suggesting that there could be a systematic error in the measurement.

Figure 8.3 The experimentally obtained reflectance Rp versus θ1 for (a) water and (b) a 45% glucose solution. The curves are fits based on Eq. 7.2.


Figure 8.4 The plot of Brewster's angle θB versus the glucose concentration c. The data point at 0% refers to pure water.

Figure 8.5 The plot of the index of refraction n2 versus the glucose concentration c. The data point at 0% refers to pure water.

Additionally, based on the measurement results shown in Fig. 8.4 and Eq. 8.1, the relationship between the refractive index n2 and the glucose concentration c can be obtained; it is plotted in Fig. 8.5. A linear relationship is found, n2 = n20 + kc, where n20 = 1.265 ± 0.002 and k = 0.260 ± 0.006. Ideally, n20 should be close to 1.33, the value of n2 for water.
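The conversion from Brewster's angle to refractive index and the subsequent linear fit can be scripted in a few lines, as in the sketch below. The concentrations and angles are illustrative numbers only, chosen to be roughly consistent with the trend in Fig. 8.4.

import numpy as np

n1 = 1.0  # air

# illustrative Brewster angles (deg) for several glucose concentrations
c = np.array([0.00, 0.10, 0.20, 0.30, 0.45])
theta_B = np.array([51.7, 52.2, 52.8, 53.3, 54.1])

n2 = n1 * np.tan(np.radians(theta_B))   # Eq. 8.1
k, n20 = np.polyfit(c, n2, 1)           # linear model n2 = n20 + k*c
print(f"n20 = {n20:.3f}, k = {k:.3f}")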

Reference
[1] D. R. Lide, CRC Handbook of Chemistry and Physics, 87th ed., CRC Press, Boca Raton, FL (1998).

Chapter 9

Optical Rotation

9.1 Introduction
Optical rotation occurs when linearly polarized light passes through an optically active medium, such as a solution or crystal consisting of molecules or unit cells that exhibit so-called mirror-symmetry breaking; i.e., the spatial arrangement of these molecules or unit cells cannot be made to overlap with its mirror image by reorientation alone. The polarization direction of the incident light is rotated about the optical axis of the medium (see Fig. 9.1). This property of molecules or crystals is also known as chirality, and it causes the medium to exhibit optical activity. Any linearly polarized light can be decomposed into a linear combination of right-hand (RH) and left-hand (LH) circularly polarized light. When circularly polarized light propagates through an optically active medium, the two circular components experience different refractive indices, i.e., nRH ≠ nLH. This phenomenon is called circular birefringence. When the two circularly polarized components emerge from the end of the medium and recombine, they have accumulated different phases, which causes the recombined linear polarization direction to rotate. In principle, for a solution containing optically active molecules, the optical rotation angle Δφ is a linear function of the concentration c [1],

$$\Delta\phi = \beta l c, \tag{9.1}$$

where l is the path length of the solution through which the light propagates and β is a constant called the specific rotation. It has been reported that β for sucrose is 66.37 deg·mL/(g·dm) [2]. Here, 1 decimeter (dm) = 0.1 m.

Figure 9.1 An illustration of optical rotation.


9.2 Smartphone Experiment (Nicholas Kruegler, 2020)

9.2.1 General strategy
To demonstrate the polarization rotation, two polarizers are used to detect the change of light intensity. The smartphone camera is used as a light intensity detector. One should make sure that the intensity of the light does not saturate the camera.

9.2.2 Materials
1. A smartphone
2. A red laser diode
3. A cardboard box
4. Two polarizing film sheets
5. A cuvette
6. Sucrose (from any grocery store)
7. A kitchen scale
8. A paper protractor

9.2.3 Experimental setup
The experimental setup is shown in Fig. 9.2. A cardboard box serves as the base of the setup. A laser diode is connected to its power source and taped onto the cardboard to prevent movement. A fixed polarizer (Polarizer I) is positioned in front of the laser diode to ensure that the polarization of the laser beam is horizontal; it is held in place by a small slit in the cardboard box. A rotating polarizer (Polarizer II) is attached to the printed paper protractor and placed against a small cardboard extension for support. The smartphone camera is placed behind the rotating polarizer to detect the light passing through Polarizer II. A cuvette containing the sucrose solution sits in a hole between the two polarizers. The two polarizers, the cuvette, and the smartphone camera should be perpendicular to the path of the laser beam. Sucrose concentrations from 0.1 to 1.3 g/mL are prepared for the cuvette, which has a width of 0.1 dm. For each concentration, Polarizer II is rotated from 0 to 180 deg, and photos of the light received by the smartphone camera are taken and used to determine the light intensity Ip by integrating the grayscale values over each photo via ImageJ (see Appendix III for detailed instructions).

Figure 9.2 The setup for optical rotation experiments.

9.2.4 Experimental results
The relative light intensity Ip/I0 (see Chapter 6) versus the polarizer rotation angle α for water and for 1 g/mL sucrose is shown in Fig. 9.3. Clearly, the intensity minimum occurs at a different α for water than for 1 g/mL sucrose, indicating optical rotation. According to Malus's law (Eq. 6.3; see Chapter 6), the Ip/I0 versus α relationship for water is fitted by the function Ip(α)/I0 = Aw cos²(α + φw) + Bw (the blue curve), yielding Aw = 0.89 ± 0.03, Bw = 0.06 ± 0.02, and φw = 6 ± 3 deg. For the Ip/I0 versus α data of the sucrose solution, a similar function, Ip(α)/I0 = As cos²(α + φs) + Bs, is used, and the red fitting curve shown in Fig. 9.3 gives As = 0.81 ± 0.03, Bs = 0.1 ± 0.04, and φs = 13 ± 5 deg. Therefore, the optical rotation angle for 1 g/mL sucrose is Δφ = φs − φw = 7 ± 6 deg. The large error is a result of systematic error in the sucrose trial. This experiment is repeated for a range of sucrose concentrations c from 0.1 to 1.3 g/mL, and Δφ for each c is extracted. Figure 9.4 plots the extracted Δφ versus the sucrose concentration c. A linear relationship is observed. According to Eq. 9.1, a linear fit in Fig. 9.4 gives the slope βl = 6.8 ± 0.1 deg·mL/g. Because l = 0.1 dm, one obtains the specific rotation β = 68 ± 1 deg·mL/(g·dm), which is close to the value of 66.37 deg·mL/(g·dm) reported in [2].

Figure 9.3 A plot of the relative light intensity Ip/I0 versus the polarizer rotation angle α for water and a 1 g/mL sucrose solution. The solid blue and red curves are the fitting results. The vertical blue and red dotted lines indicate the locations of the minima of the fits.

Figure 9.4 The plot of Δφ versus c for the sucrose solution. A linear fit gives k = 6.8 ± 0.1 and B = 0.1 ± 0.2. Note that k = βl according to Eq. 9.1.
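The two-stage analysis (fit each angular scan for its phase, then fit Δφ versus c for βl) can be sketched in Python as follows. All numerical inputs are stand-in values; only the cuvette path length l = 0.1 dm is taken from the text.

import numpy as np
from scipy.optimize import curve_fit

def malus_shifted(alpha_deg, A, phi_deg, B):
    # Ip/I0 = A*cos^2(alpha + phi) + B, with angles in degrees
    return A * np.cos(np.radians(alpha_deg + phi_deg))**2 + B

alpha = np.arange(0, 181, 15)
water = malus_shifted(alpha, 0.89, 6.0, 0.06)     # stand-in scan for water
sucrose = malus_shifted(alpha, 0.81, 13.0, 0.10)  # stand-in scan for 1 g/mL sucrose
phi_w = curve_fit(malus_shifted, alpha, water, p0=[1.0, 0.0, 0.0])[0][1]
phi_s = curve_fit(malus_shifted, alpha, sucrose, p0=[1.0, 0.0, 0.0])[0][1]
print(f"optical rotation at 1 g/mL: {phi_s - phi_w:.1f} deg")

c = np.array([0.1, 0.4, 0.7, 1.0, 1.3])    # sucrose concentration (g/mL)
dphi = np.array([0.8, 2.6, 4.9, 6.7, 8.9])  # stand-in rotation angles (deg)
slope = np.polyfit(c, dphi, 1)[0]           # slope = beta*l (Eq. 9.1)
print(f"specific rotation ~ {slope / 0.1:.0f} deg mL/(g dm)")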

References
[1] R. N. Compton, S. M. Mahurin, and R. N. Zare, “Demonstration of optical rotatory dispersion of sucrose,” J. Chem. Educ. 76(9), 1234 (1999).
[2] D. R. Lide, CRC Handbook of Chemistry and Physics, 87th ed., CRC Press, Boca Raton, FL (1998).

Chapter 10

Thin Film Interference

10.1 Introduction
When light shines onto a thin film sandwiched between two dielectric media, an interference pattern can be formed by the beams reflected at the two interfaces when they merge at a large distance from the film. For example, the rainbow colors in soap bubbles are a result of thin film interference. As shown in Fig. 10.1(a), a thin film with thickness dfilm and refractive index n2 is sandwiched between two media with refractive indices n1 and n3. Light incident at an angle θ1 on the interface between media n1 and n2 is partially reflected and partially transmitted at location A. The reflected beam is denoted Beam 1. The transmitted beam reflects at the interface between media n2 and n3 at location B and emerges at location C on the n1–n2 interface. This emerging beam is denoted Beam 2. Beams 1 and 2 are parallel to each other and can interfere at a large distance from the film surface. Because this interference occurs far away, a lens is needed to observe the fringes. Clearly, Beam 2 travels an extra distance before the two beams meet at location P. Based on the geometric relationship between Beam 1 and Beam 2, the optical path length difference L between the two beams is

$$L = 2 n_2 d_{\mathrm{film}} \cos\theta_2. \tag{10.1}$$

A bright interference fringe forms when L = mλ, where m is an integer and λ is the wavelength of the light in vacuum (or air). Thus, the corresponding refraction angle θ2^m satisfies

$$d_{\mathrm{film}} \cos\theta_2^{m} = \frac{m\lambda}{2 n_2}. \tag{10.2}$$

The refraction angle θ2^m is linked to the incident angle θ1^m by Snell's law. Therefore, to form a thin film interference pattern, the incident light beam should span a range of incident angles, i.e., it should be either a divergent or a convergent beam. Figure 10.1(b) shows a sample interference pattern when a focused beam is incident on the film at θ1 = 0 deg. However, if n2 < n3, there is a phase shift of π between Beam 1 and Beam 2, and the corresponding condition to generate bright fringes changes to

$$d_{\mathrm{film}} \cos\theta_2^{m} = \frac{(2m + 1)\lambda}{4 n_2}. \tag{10.3}$$

Figure 10.1 (a) The ray configuration of a light beam reflected and transmitted at the interfaces of a thin film, and (b) a sample interference pattern due to thin film interference at θ1 = 0 deg.

Therefore, the separation between two adjacent bright fringes can be written as

$$d_{\mathrm{film}}\left(\cos\theta_2^{m+1} - \cos\theta_2^{m}\right) = \frac{\lambda}{2 n_2}. \tag{10.4}$$

If Δθ = θ2^{m+1} − θ2^{m} ≪ 1, then

$$d_{\mathrm{film}} \sin\theta_2^{m}\, \Delta\theta = \frac{\lambda}{2 n_2}. \tag{10.5}$$

If only a few bright fringes far from the thin film are considered, θ2^m ≈ θ2 and sin θ2 = (n1/n2) sin θ1. Thus,

$$d_{\mathrm{film}} = \frac{\lambda}{2 n_1 \Delta\theta \sin\theta_1}; \tag{10.6}$$

i.e., the film thickness can be determined if n1 and λ are known, and θ1 and Δθ can be determined or measured experimentally.


10.2 Smartphone Experiment (Nicolas Lohner and Austin Baeckeroot, 2017)

10.2.1 General strategy
A smartphone is used to photograph the interference pattern and determine the separation between adjacent bright fringes. Because a laser with high coherence is required, this experiment is carried out using components in a modern optics lab.

10.2.2 Materials
1. A smartphone
2. A helium–neon (He–Ne) laser
3. Three lenses (f = 20, −10, and 10 cm)
4. Two glass slides
5. A paper viewing screen with a printed scale
6. 3M™ double-sided tape
7. Various holders

10.2.3 Experimental setup
The experimental setup shown in Fig. 10.2 is designed to collect multiple interference patterns from an air gap between two glass slides illuminated by a He–Ne laser. A two-lens beam expander (see Fig. 3.3 in Chapter 3) is used to expand the laser beam, using a converging lens of f1 = 20 cm and a diverging lens of f2 = −10 cm separated by 10 cm. Another converging lens of f3 = 10 cm is placed approximately 65 cm away from the beam expander. The effective focal point of this lens is then located, and the glass slides that produce the interference pattern are placed inside the focal point so that rays of the converging beam strike the slides over a range of angles and can interfere with one another.

Figure 10.2 The setup for the thin film interference experiment.


The glass slides are separated by the thickness of a piece of double-sided tape. An appropriate incident angle is found by slowly rotating the glass slides until the interference pattern is clearly visible on the viewing screen. Then, the distance L between the glass slides and the viewing screen is varied and measured using a meterstick. The interference patterns on the viewing screen, which carries a printed ruler, are captured with the smartphone at each L.

10.2.4 Experimental results
Figure 10.3 shows several interference patterns taken at different distances L from the glass slides at θ1 = 25 deg. As L increases, the separation Δr between adjacent bright fringes also increases monotonically, as shown in Table 10.1. Thus, Δθ can be determined as Δθ = Δr/L. For an air gap formed between two glass slides, n1 = 1 and λ = 632.8 nm, so the thickness dfilm of the double-sided tape can be determined from Eq. 10.6 (a short calculation sketch follows Table 10.1). Table 10.1 summarizes the calculated dfilm at each L. Based on the average of these five measurements, the final measured thickness is dfilm = 58 ± 3 μm. This value is on the order of the reported typical thickness, 70–150 μm, of 3M double-sided tape.

Figure 10.3 Selected interference patterns taken at different distances L.

Table 10.1 The calculated thickness dfilm of the double-sided tape from the interference patterns at different distances L.

L (cm)    Measured Δr (mm)    Calculated dfilm (μm)
16        1.9                 62.8
30        4.1                 55.0
44        5.4                 61.2
63        8.2                 57.7
79        10.4                56.6
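The thickness values in Table 10.1 can be reproduced directly from Eq. 10.6, as in this short Python sketch that uses the tabulated L and Δr values; any small differences come only from rounding.

import numpy as np

lam = 632.8e-9              # He-Ne wavelength (m)
n1 = 1.0                    # air gap
theta1 = np.radians(25.0)   # incident angle used in the experiment

L = np.array([16, 30, 44, 63, 79]) * 1e-2         # screen distances (m)
dr = np.array([1.9, 4.1, 5.4, 8.2, 10.4]) * 1e-3  # fringe separations (m)

dtheta = dr / L
d_film = lam / (2 * n1 * dtheta * np.sin(theta1))  # Eq. 10.6
print(np.round(d_film * 1e6, 1))                   # thickness in micrometers
print(f"mean = {d_film.mean()*1e6:.0f} um")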

Chapter 11

Wedge Interference

11.1 Introduction
If the two interfaces of a thin film are not parallel to each other, an interference pattern can be formed when the film is illuminated by an extended light source. This interference pattern is called fringes of equal thickness. When viewed at nearly normal incidence, as shown in Fig. 11.1, parallel fringe lines called Fizeau fringes are formed. The origin of this kind of interference is similar to that described in Chapter 10: if the top and bottom interfaces form a very small angle δ (≪ 1, in radians), the thickness dfilm of the film at a particular location x can be expressed as

$$d_{\mathrm{film}} = \delta x. \tag{11.1}$$

For a small incident angle θ1, a bright fringe occurs at

$$2 n_2 d_{\mathrm{film}}^{m} = \left(m + \frac{1}{2}\right)\lambda, \tag{11.2}$$

which corresponds to

$$x_{\mathrm{film}}^{m} = \frac{\left(m + \frac{1}{2}\right)\lambda}{2 n_2 \delta}. \tag{11.3}$$

Figure 11.1 (a) The ray configuration of a light beam reflected and transmitted at the interfaces of a thin wedge, and (b) an example interference pattern due to wedge interference at θ1 = 0 deg by white light.

Thus, the separation between adjacent bright fringes is

$$\Delta x_{\mathrm{film}} = \frac{\lambda}{2 n_2 \delta}. \tag{11.4}$$

Therefore, by measuring the fringe separation for a thin air wedge (n2 = 1) with a known wavelength of the incident light, one can determine the wedge angle δ. Conversely, if the wedge angle δ is known, then by measuring the fringe spacing Δxfilm, one can determine the wavelength of the light. Note that the incident light should be a parallel (collimated) beam, and the incident angle should be very close to 0 deg to form Fizeau fringes. Furthermore, the fringes can only be formed close to the top of the wedge.

11.2 Smartphone Experiment (Graham McKinnon and Nicholas Brosnahan, 2020)

11.2.1 General strategy
The purpose of the experiment is to determine the thickness of a sheet of printer paper using wedge interference. The smartphone is used to take images of the interference pattern and determine the separation between adjacent bright fringes. Because a laser with high coherence is required, this experiment is carried out using components in a modern optics lab.

11.2.2 Materials
1. A smartphone
2. A He–Ne laser
3. Two lenses (f = 25 and 100 mm)
4. Two 1″ × 3″ glass slides
5. A printable ruler
6. A piece of printer paper (cut into three equal pieces)
7. A lens cleaning cloth
8. Tape

11.2.3 Experimental setup
The experimental setup is shown in Fig. 11.2. The He–Ne laser beam is directed at a beam expander composed of the two lenses (see Chapter 10), which widens the beam before it reaches the air-wedge interface. The widened beam produces a large interference pattern on the viewing screen. Before constructing the wedge, the two 1″ × 3″ glass slides are wiped with the lens cleaning cloth and handled with gloves. This minimizes the chance of dust, smudges, or oils being transferred to the slides and altering the resulting interference pattern. The slides are held against each other, and a piece of tape is used to hold one end of the slides together. One, two, or three pieces of paper are sandwiched between the glass slides at the un-taped end. Two pieces of double-sided tape are used to attach the wedge to a post and stand. The orientation of the wedge is adjusted so that the laser beam's angle of incidence is small. A white paper with a printed ruler scale is used as a viewing screen. The marks on the ruler are used for distance-to-pixel calibration during image analysis. Images of the interference patterns formed on the screen are captured by a smartphone and analyzed with ImageJ (see Appendix III).

Figure 11.2 The setup for the wedge interference experiment.

11.2.4 Experimental results
Figure 11.3 shows an example of the interference pattern of the air wedge formed by three pieces of paper. Each image is cropped to a section containing clear major fringes (an example of two adjacent fringes is outlined by the blue boxes in Fig. 11.3). The secondary fringes inside each blue box are caused by interference among multiple laser beams reflected or refracted by the optical components in the beam path. ImageJ is used to measure the pixel spacing between the major fringes, and a distance-to-pixel ratio obtained from the printed ruler scale is used to convert pixel length to real length. This process is repeated for 10 adjacent fringes. After determining the fringe separation, the thickness of the paper can be calculated using Eqs. 11.1 and 11.4 (see the sketch following Table 11.1). The estimated thicknesses for one, two, and three pieces of paper are shown in Table 11.1. These values are comparable to the thickness of a standard sheet of A4 printer paper, which is about 0.1 mm.

Figure 11.3 The interference pattern of the air wedge formed by three pieces of paper. An example of two adjacent fringes is outlined by the blue boxes.

Table 11.1 Estimated thickness of one, two, and three sheets of printer paper.

Sheets of paper    Estimated thickness dfilm (mm)
1                  0.109
2                  0.226
3                  0.345
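A minimal Python sketch of the thickness calculation follows. The calibrated fringe spacing below is an illustrative number, and the distance from the taped end of the slides to the paper is assumed to equal the 3-in. slide length.

import numpy as np

lam = 632.8e-9     # He-Ne wavelength (m)
n2 = 1.0           # air wedge
x_paper = 76.2e-3  # assumed taped-end-to-paper distance: 3-in. slide length (m)

dx_film = 0.07e-3  # illustrative calibrated fringe spacing (m)

delta = lam / (2 * n2 * dx_film)   # wedge angle from Eq. 11.4 (rad)
d_film = delta * x_paper           # paper stack thickness from Eq. 11.1 (m)
print(f"wedge angle = {delta*1e3:.2f} mrad, thickness = {d_film*1e3:.3f} mm")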

Chapter 12

Diffraction from Gratings

12.1 Introduction
Diffraction is a phenomenon in which the path of light bends when it passes around the edge of an object. The amount of bending depends on the ratio of the size a (effective diameter) of the object to the wavelength λ of the light. If a/λ ≫ 1, the bending of the light is almost unnoticeable. But if a/λ ≈ 1 or a/λ < 1, the amount of bending is considerable or even significant and can easily be seen by the naked eye. If multiple objects are arranged in close vicinity, the bent light from each object can superpose, interfere, and generate a new intensity distribution. As shown in Fig. 12.1, if those small objects are regularly (periodically) spaced, a diffraction grating is formed, and the light intensity forms regular dark/bright fringes. If the slit spacing of the diffraction grating is ds and the diffraction angle toward the screen is θ, then the condition for finding the bright fringes is

$$d_s \sin\theta_m = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots, \tag{12.1}$$

where the integer m indicates the order of the diffraction fringe. From the simple geometric relationship shown in Fig. 12.1, one has

$$\tan\theta_m = \frac{y_m}{L}, \tag{12.2}$$

where ym is the vertical location of the mth-order bright fringe on the screen and L is the separation between the grating and the screen. At small θm (i.e., ym ≪ L), sin θm ≈ tan θm, so

$$y_m = \frac{m\lambda}{d_s} L. \tag{12.3}$$

Figure 12.1 The principle of a diffraction grating.

The separation Δy (= ym+1 − ym) of two adjacent bright fringes can be approximated as

$$\Delta y = \frac{\lambda}{d_s} L. \tag{12.4}$$

Based on Eq. 12.3, when a beam of light containing different wavelengths (colors) is diffracted by the same diffraction grating, the locations of the mth-order bright fringes (m ≠ 0) will not be the same. When m = 0, θm = 0 regardless of the wavelength of the light; i.e., if white light is diffracted by the grating, the bright fringe of order m = 0 is always white. However, for m ≠ 0, θm = sin⁻¹(mλ/ds); i.e., for a fixed m (≠ 0), the larger the wavelength, the larger θm. Thus, comparing the fringe locations of a red light and a blue light, the mth-order bright fringe of the blue light is closer to the central fringe (m = 0). Therefore, dispersion is achieved, as shown in Fig. 12.1. Such dispersion is the foundation for using a diffraction grating to construct a monochromator (see Chapter 13) and a spectrometer (see Chapter 14). Typically, a single-wavelength light source and a diffraction grating are used to demonstrate the principles of diffraction. The diffraction grating can be the grating slide shown in Fig. 12.2(a) or another periodic structure with a feature size comparable to the wavelength of the laser.

Figure 12.2 (a) A photo of a plastic grating. (b) A pixel layout of an iPhone 3G (image adapted with permission from Jones [1]).


For example, the screen of a smartphone is made of many regularly spaced color display units. Each unit (pixel) of the display has three color emitters (RGB), and the resolution of the screen determines the spacing between adjacent display units: the more pixels the screen has per inch (ppi), the finer the picture shown by the phone. A screen with 1080-pixel high-definition (HD) resolution, for instance, has an average separation between display units of about 24 μm. Figure 12.2(b) shows the pixel layout of an iPhone 3G [1]. It is a 2D grating with a vertical pixel-to-pixel distance of 137 μm and a horizontal pixel-to-pixel distance of 101 μm. Different generations of smartphones have different screen resolutions; thus, their screens can be treated as 2D gratings with different spacings.

12.2 Smartphone Experiment I: Diffraction from an iPhone Screen (Zach Eidex and Clayton Oetting, 2018)

12.2.1 General strategy
In this experiment, the screen of an old iPhone 3G is used as a 2D grating, and another smartphone is used as a photography device to record the diffraction pattern. To quantitatively analyze the diffraction pattern, one must ensure that ym ≪ L. The distance between the screen and the surface of the iPhone 3G, as well as the separations between the diffraction bright spots, are measured. If the distance between the photography phone and the screen is fixed, one can use a printed ruler with known dimensions to calibrate the image pixel size.

12.2.2 Materials
1. A smartphone
2. An iPhone (or other smartphone)
3. A He–Ne laser (λ = 632.8 nm) or a red laser pointer
4. A meterstick
5. A printed screen and a holder
6. Two 3D-printed phone holders

12.2.3 Experimental setup
The experimental setup is shown in Fig. 12.3. An angled block is 3D printed to hold the diffracting iPhone at 45 deg with respect to the two sides of the block. The He–Ne laser is fixed at least 12 cm away from the diffracting iPhone and directs its beam parallel to one side of the block; i.e., the incident angle of the laser beam is 45 deg. A screen, made of a piece of white paper, is placed along the other side of the block. The distance between the screen and the diffracting iPhone starts at a minimum of 50 cm and is increased in increments of 10 cm until it reaches 200 cm. At each screen distance, a photo of the diffraction pattern on the screen is taken with the photography smartphone. The separations of adjacent diffraction spots in the horizontal (Δyhorizontal) and vertical (Δyvertical) directions are extracted via ImageJ analysis (see Appendix III), and their dependence on the screen distance L is obtained. Then, based on Eq. 12.3, the spacing between adjacent pixels can be obtained from the slopes of the plots.

Figure 12.3 The experimental setup for diffraction from an iPhone 3G screen.

12.2.4 Experimental results
Figure 12.4 shows a representative diffraction pattern obtained from an iPhone 3G screen. The diffraction spots form a rectangular lattice, and the spacings between adjacent bright spots differ in the vertical and horizontal directions. Figure 12.5 plots the adjacent bright-spot distances Δyhorizontal and Δyvertical versus the screen distance L. From linear fits using the function Δy = kL, where the fit parameter k represents the slope, the pixel spacings in the vertical and horizontal directions are obtained as dv = 140 μm and dh = 100 μm, implying a screen resolution of 150 ± 20 ppi, which is very close to the specified screen resolution of the iPhone 3G, 163 ppi.

Figure 12.4 A typical diffraction pattern from an iPhone 3G screen.

Figure 12.5 The plots of the distance between adjacent bright spots Δy versus the screen distance L for (a) the horizontal and (b) the vertical directions. The solid lines represent fits with the linear function Δy = kL. For the horizontal direction, kh = 0.00627 ± 0.00008, and for the vertical direction, kv = 0.00459 ± 0.00008.
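The slope extraction can be scripted as in the sketch below. The Δy arrays are stand-ins generated from the reported slopes rather than the raw measurements.

import numpy as np

lam = 632.8e-9                 # He-Ne wavelength (m)
L = np.arange(0.5, 2.01, 0.1)  # screen distances (m)
dy_h = 0.00627 * L             # stand-in horizontal spot separations (m)
dy_v = 0.00459 * L             # stand-in vertical spot separations (m)

# least-squares slope through the origin: k = lambda/d_s (Eq. 12.4)
k_h = np.sum(L * dy_h) / np.sum(L**2)
k_v = np.sum(L * dy_v) / np.sum(L**2)
print(f"horizontal pixel spacing ~ {lam / k_h * 1e6:.0f} um")
print(f"vertical pixel spacing   ~ {lam / k_v * 1e6:.0f} um")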

12.3 Smartphone Experiment II: Diffraction from a Grating and a Hair (Nick Brosnahan, 2020)

12.3.1 General strategy
In this experiment, the wavelength of a laser diode is determined using a diffraction grating of known slit spacing. As in Section 12.2, one should ensure that ym ≪ L to quantitatively analyze the diffraction pattern.

12.3.2 Materials
1. A smartphone
2. A laser diode
3. A diffraction grating (1000 lines/mm)
4. A single human hair
5. A ruler

12.3.3 Experimental setup
In the experimental setup shown in Fig. 12.6, a diffraction grating with ds = 1 μm [Fig. 12.2(a)] is placed between a laser diode and a screen. The diffraction grating is supported by a cardboard frame, and the screen is a piece of white paper with a printed ruler. The beam of the laser diode is adjusted to be level and perpendicular to the diffraction grating and the screen to avoid discrepancies in the wavelength calculation. The distance L between the diffraction grating and the screen is measured with the ruler. After taking a photo of the diffraction pattern on the screen, one can determine the fringe separation Δy from the printed ruler. Using Eq. 12.4, the wavelength λ can then be calculated. The diameter of a human hair can be determined with the same experimental setup, except that the diffraction grating is replaced by a single human hair. The hair should be kept taut to obtain a clear diffraction pattern. Using the previously calculated value of λ, one can determine the width of the hair by solving Eq. 12.4 for dhair.

Figure 12.6 Experimental setup for determining the wavelength of a laser diode.

12.3.4 Experimental results
Figure 12.7 shows a representative photo of the diffraction pattern produced by the grating. Based on the separation between the m = ±1 fringes and the m = 0 fringe, as well as the screen-to-grating separation, the wavelength of the laser is found to be λ = 620 ± 6 nm. The manufacturer-specified wavelength of the laser diode is 650 nm. This λ value is then used to measure the width of a hair. Figure 12.8 presents a typical diffraction pattern from a hair. Based on Eq. 12.4, the hair diameter is determined to be dhair = 67 ± 4 μm. According to published reports [2], the diameter of a human hair is strongly influenced by genetics and typically falls within the range of 17 to 181 μm.

Figure 12.7 Diffraction pattern from the grating with 1000 lines/mm.

Figure 12.8 The diffraction pattern from a human hair.
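The two uses of Eq. 12.4 in this experiment amount to two one-line calculations, sketched below with hypothetical distances and fringe spacings (not the measured values).

import numpy as np

d_s = 1e-6  # grating spacing for 1000 lines/mm (m)

L_grating, dy_grating = 0.05, 0.031  # hypothetical grating-to-screen distance and fringe spacing (m)
L_hair, dy_hair = 0.50, 0.0046       # hypothetical hair-to-screen distance and fringe spacing (m)

lam = d_s * dy_grating / L_grating   # Eq. 12.4 solved for the wavelength
d_hair = lam * L_hair / dy_hair      # Eq. 12.4 solved for the slit (hair) width
print(f"lambda = {lam*1e9:.0f} nm, hair diameter = {d_hair*1e6:.0f} um")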

References
[1] B. Jones, “Apple retina display,” JONESBLOG, https://prometheus.med.utah.edu/~bwjones/2010/06/apple-retina-display/comment-page-2/, accessed June 13, 2022.
[2] B. Ley, “Diameter of a human hair,” The Physics Factbook, https://hypertextbook.com/facts/1999/BrianLey.shtml, accessed June 13, 2022.

Chapter 13

Structural Coloration of Butterfly Wings and Peacock Feathers

13.1 Introduction
Many of the colors one perceives in nature are the result of an optical phenomenon called structural coloration. Broadly, there are two types of structural color, iridescent and non-iridescent, in which the observed color does or does not change with the viewing angle, respectively. Structural colors may originate from different phenomena, such as thin film interference (see Chapter 10), diffraction gratings (see Chapter 12), and scattering, but intrinsically they are the results of optical interference and diffraction, i.e., the outcomes of physical (wave) optics. For instance, the wings of the Morpho butterfly are composed of two sets of partially overlapping scales, a basal set and a cover set, as shown in Fig. 13.1(a). The basal set consists of periodically spaced ridges separated by cross-ribs and formed by stacks of Christmas tree-shaped lamellae [see the cross section of the basal scales shown in Figs. 13.1(b) and (c)]. These stacks of lamellae act as layered thin films, so their vertical spacing is responsible for the color of light produced by multilayer interference. Meanwhile, the wider transparent cover scales act as an optical diffuser that gives the wing a glossy effect [1]. Together, these periodic multilayered microstructures produce blue iridescent structural colors through a combination of interference and diffraction.

Figure 13.1 (a) Morpho butterfly wing scales. A cover scale is outlined in red, and a basal scale is outlined in green. (b) Periodically spaced ridges separated by cross-ribs in the basal scale. (c) Cross section of ridges in basal wing scales revealing Christmas tree-shaped lamellae. (Figure images reprinted from [2] under CC BY 4.0.)

Similar features are observed in peacock tail feathers. The feathers shown in Fig. 13.2(a) are naturally pigmented brown, but structural coloration produces the iridescent colors one observes [3]. A feather comprises a central stem with barbs on each side [Fig. 13.2(b)]. An array of tiny barbules sprouts from each side of the barbs. The cortex of a barbule contains a layer of 2D structures made of periodically spaced melanin rods connected by keratin [Fig. 13.2(c)] [4]. These structures are termed 2D photonic crystals. They act as diffraction gratings and contribute to interference effects by behaving as a multilayer in the proximity of other barbules [5]. Thus, the structural color of the peacock feather is determined by the lattice structure and rod spacing of these periodic structures [3].

Figure 13.2 (a) The apparent iridescent structural colors of a peacock feather (image reprinted from [9] under CC BY 3.0). (b) The barbules on the barb of a peacock feather imaged by scanning electron microscopy (SEM), and (c) a typical SEM image of 2D photonic crystals made of melanin rods in the cortex of a barbule (images used with permission from Yoshioka and Kinoshita [5]).

Periodic 3D structures that can change their optical properties, such as those in butterfly wings and peacock feathers, are called photonic crystals. The concept of photonic crystals was proposed in 1987 [6–8], and they currently have many applications in optical communication, optoelectronics, sensors, etc. [8]. Because the dimensions of structurally colored objects are too small to be seen by the naked eye, scientists typically apply advanced imaging techniques such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), and atomic force microscopy (AFM) to view them. In addition, due to the diffraction and interference nature of structural color, one can determine the periodic spacing of these structures by treating them as diffraction gratings and applying the grating equations of Chapter 12. A generic setup for such an experiment is shown in Fig. 13.3 and is employed in the following smartphone experiments. The light source is a laser of wavelength λ, which passes light through the sample (either a butterfly wing or a peacock feather) and projects an interference pattern onto a semitransparent screen a distance L from the sample.

The smartphone camera situated behind the screen can capture an image of the pattern. Typically, the central maximum (m = 0) and the first-order maxima are visible and are separated by a distance Δy. Based on Eq. 12.4, one can determine the period of the structure.

Figure 13.3 The setup for measuring the periodic spacing in the microstructures.

13.2 Smartphone Experiment I: Diffraction in a Box—Scale Spacing of Morpho Butterfly Wings (Mary Lalak and Paul Brackman, 2014)

13.2.1 General strategy
The smartphone is used to capture images of the diffraction patterns produced by a Morpho menelaus butterfly wing, which can be treated as a transmission diffraction grating.

13.2.2 Materials
1. A smartphone
2. A shoebox with lid
3. A laser pointer with a 650-nm wavelength
4. A diffraction grating (1000 lines/mm)
5. A screen (i.e., tissue paper or other thin material)
6. M. menelaus butterfly wings
7. A ruler
8. Tape

13.2.3 Experimental setup
The diffraction-in-a-box setup is shown in Fig. 13.4. First, two ports, Port 1 to hold a laser pointer and Port 2 to accommodate the camera of a smartphone, are cut into two sides of the box. Then, a makeshift slide holder is constructed from cardboard to hold the butterfly wing. A second, larger slide holder is constructed to hold a viewing screen, which is made of a thin material such as tissue wrapping paper that transmits enough laser light but not so much as to saturate the smartphone camera. A ruler and a pen are used to mark a line of known length la (in millimeters) on the side of the screen that faces Port 2 for the upcoming distance calibration. The two slide holders are placed a distance L = 65.8 mm apart, as measured by a ruler. The smartphone camera is fixed at Port 2 so that it looks into the shoebox to capture images of the screen. Before closing the lid on the shoebox, the distance must be calibrated by defining a distance-to-pixel ratio. First, one must ensure that the interior is illuminated and completely visible and that the line of length la on the screen can be clearly viewed by the smartphone camera. A picture of the screen is captured by the smartphone camera, and the length of the line in pixels, lp, is extracted. The distance-to-pixel ratio is defined as η = la/lp. Finally, the shoebox is closed by putting the lid over the top so that no external ambient light can penetrate the interior during the experiment. The laser beam passes through the microstructures on the butterfly wing, projecting a visible interference pattern that can be seen on both sides of the screen. A picture of the pattern is captured by the smartphone camera.

Figure 13.4 The diffraction-in-a-box setup.

13.2.4 Experimental results
The image of the diffraction pattern generated by the butterfly wing is processed by rotating it so that the pattern is horizontal across the image, as seen in Fig. 13.5(a). A graph of diffraction intensity versus horizontal position y (in pixels) is generated by summing over the pixel intensities in each column of the image, and the horizontal axis is then scaled to the actual position by multiplying it by the distance-to-pixel calibration ratio η, as shown in Fig. 13.5(b).

The distances Δy between the first-order peaks and the central maximum are extracted to be 21.45 mm (left peak) and 21.74 mm (right peak), respectively. Based on Eq. 12.4, the spacing dscales between the scales on the butterfly wing is determined by dscales = λL/Δy, which gives dscales = 1.980 ± 0.003 μm. The periodicity of M. menelaus ridges is reported to be roughly 2 μm in the literature [10], which is consistent with this observation.

Figure 13.5 (a) Central maximum and first-order maxima of the diffraction pattern created by the M. menelaus wing, and (b) the corresponding plot of intensity I versus position y.
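The spacing calculation uses only the calibrated peak distances, the laser wavelength, and the wing-to-screen distance, as in this short Python check based on the numbers quoted above.

import numpy as np

lam = 650e-9   # laser pointer wavelength (m)
L = 65.8e-3    # wing-to-screen distance (m)
dy = np.array([21.45e-3, 21.74e-3])  # calibrated first-order peak distances (m)

d_scales = lam * L / dy              # Eq. 12.4 rearranged for the grating period
print(np.round(d_scales * 1e6, 3))   # spacing in micrometers
print(f"mean = {d_scales.mean()*1e6:.3f} um")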

13.3 Smartphone Experiment II: Barbule Spacing of Peacock Feathers (Caroline Doctor and Yuta Hagiya, 2019)

13.3.1 General strategy
The smartphone is used to capture images of the diffraction patterns produced by peacock feather barbules. This experiment uses laboratory equipment and an optics table (optional).

13.3.2 Materials
1. A smartphone
2. 3D-printed or wood-constructed frames
3. Semitransparent paper (e.g., tracing paper)
4. Nontransparent paper (e.g., thick graph paper)
5. Peacock tail feathers
6. A He–Ne laser and mirrors (optional)
7. Chemistry clamps
8. A diverging lens (f = −25.4 mm) and a converging lens (f = 100 mm)
9. A ruler and a meterstick
10. A microscope (50× magnification)


13.3.3 Experimental setup
The barbule spacing of peacock feathers in different regions of iridescent color is estimated. Two square screens, one large and one small, are constructed from a piece of semitransparent tracing paper and nontransparent graph paper, respectively, using either wood or 3D-printed frames. A ruler is held underneath the large screen for image calibration. The He–Ne laser is fixed at one end of the optics table, about 2.4 m from the large screen. A mirror is used to redirect the laser beam, as shown in Fig. 13.6. A beam expander is constructed using the diverging (f = −25.4 mm) and converging (f = 100 mm) lenses so that the laser beam is large enough to illuminate the desired surface area of the feather. A chemistry clamp is used to hold the peacock feather specimen. The location of the feather can be adjusted so that the laser beam falls sequentially on each patch of distinct color (yellow, green, blue, or purple). The small screen is placed about 10 cm after the peacock feather. The location of the feather is then fine-adjusted until the clearest possible diffraction pattern is seen on the small screen by eye. Then, the small screen is removed so that the diffraction pattern is projected onto the large semitransparent screen. A photo of the diffraction pattern (as well as the ruler beneath the screen) is captured using the smartphone. The feather is repositioned so that the laser falls on another patch of color, and another diffraction pattern is captured. This step is repeated until diffraction patterns from the yellow, green, blue, and purple sections of the peacock feather are all obtained, as shown in Fig. 13.7.

Figure 13.6 Experimental setup for capturing the diffraction pattern from a peacock feather. The laser passes through two lenses and the peacock feather, projecting a diffraction pattern onto the large screen.

13.3.4 Experimental results
The images shown in Fig. 13.7 are the diffraction patterns created by different regions of the peacock tail feather. The ruler in each image is used to evaluate the distance Δy between the first-order maxima and the central maximum. The barbule spacing is then calculated using dbarbules = λL/Δy, where λ = 632.8 nm and L = 190 cm, as shown in Table 13.1.


Figure 13.7 Examples of diffraction patterns obtained from the yellow, green, blue, and purple regions of the peacock feather.

Table 13.1 Estimated barbule spacing in different color regions of a peacock tail feather.

Color      Barbule spacing dbarbules (mm)
Yellow     0.40
Green      0.10
Blue       0.204
Purple     0.193

The observed regular spacing at the different colored regions is between 100 and 400 μm. According to Fig. 13.2, the spacing between the barbules is around 100 μm.

References
[1] H. Butt, A. K. Yetisen, D. Mistry, S. A. Khan, M. U. Hassan, and S. H. Yun, “Morpho butterfly-inspired nanostructures,” Adv. Opt. Mater. 4(4), 497–504 (2016).
[2] G. Zyla, A. Kovalev, M. Grafen, E. L. Gurevich, C. Esen, A. Ostendorf, and S. Gorb, “Generation of bioinspired structural colors via two-photon polymerization,” Sci. Rep. 7(1), 17622 (2017).
[3] J. Sun, B. Bhushan, and J. Tong, “Structural coloration in nature,” RSC Adv. 3(35), 14862–14889 (2013).
[4] J. Zi, X. Yu, Y. Li, X. Hu, C. Xu, X. Wang, X. Liu, and R. Fu, “Coloration strategies in peacock feathers,” Proc. Natl. Acad. Sci. U.S.A. 100(22), 12576–12578 (2003).
[5] S. Yoshioka and S. Kinoshita, “Effect of macroscopic structure in iridescent color of the peacock feathers,” Forma 17(2), 169–181 (2002).
[6] E. Yablonovitch, “Inhibited spontaneous emission in solid-state physics and electronics,” Phys. Rev. Lett. 58(20), 2059–2062 (1987).
[7] S. John, “Strong localization of photons in certain disordered dielectric superlattices,” Phys. Rev. Lett. 58(23), 2486–2489 (1987).
[8] D. W. Prather, S. Shi, J. Murakowski, G. J. Schneider, A. Sharkawy, C. Chen, and B. Miao, “Photonic crystal structures and applications: perspective, overview, and development,” IEEE J. Sel. Top. Quantum Electron. 12(6), 1416–1437 (2006).
[9] S. L. Burg and A. J. Parnell, “Self-assembling structural colour in nature,” J. Phys. Condens. Matter 30, 413001 (2018).
[10] F. Liu, W. Shi, X. Hu, and B. Dong, “Hybrid structures and optical effects in Morpho scales with thin and thick coatings using an atomic layer deposition method,” Opt. Commun. 291, 416–423 (2013).

Chapter 14

Optical Rangefinder Based on Gaussian Beam of Lasers

14.1 Introduction
Rangefinders have a variety of applications, from robotics to airborne topographic mapping. Most rangefinders on the market use complicated electronic and optical systems. For example, as shown in Fig. 14.1, an electro-optical rangefinder uses the transit time of an electromagnetic signal reflected from the target to estimate the distance between itself and the target. Other types of rangefinders, such as ultrasonic sensors, radar, and sonar, operate on similar principles. In fact, an optical rangefinder can also be designed based on the Gaussian beam property of a laser. The spatial distribution of the intensity I of a single-mode (TEM00) laser beam as a function of the radial distance r from the beam axis follows a Gaussian profile,

$$I(r) = I_0\, e^{-2 r^2 / w^2}, \tag{14.1}$$

Figure 14.1 Electro-optical rangefinder: a laser beam travels at speed vL from the exit port, reflects from the target, and enters the receiving port after time t.


Figure 14.2 The propagation of a Gaussian laser beam. The two curves in the upper plot represent the beam width while the profiles in the lower plot show the intensity distribution along the z axis.

where the parameter w is the width of the beam at which the intensity falls to 1/e² (roughly 13.5%) of its peak value I0. The width w is also a function of the propagation distance z, as shown in Fig. 14.2: when the laser beam propagates along the z direction in free space, at a specific z, w will reach a minimum value w0, which is known as the beam waist radius. If w0 occurs at z = 0, then

w = w0 √(1 + (z/z0)²),   (14.2)

where z0 = kl w0²/2 is called the confocal parameter (or Rayleigh range) and kl = 2π/λ is the wavenumber of the laser. For most lasers available on the market, the minimum waist is very close to the laser beam emission surface, so the cross-section of the beam becomes larger and larger when moving away from the laser surface; i.e., the beam is divergent. The divergence angle is determined by how the beam width changes as a function of z according to Eq. 14.2. As shown in Fig. 14.2, the asymptotic expansion of the Gaussian beam width defines the so-called angle of divergence φ of the laser,

tan(φ/2) = w/z.   (14.3)

Usually, φ ≪ 1, so that

φ/2 ≈ w/z.   (14.4)

Clearly, the divergence angle φ is an intrinsic property of a laser. Once the laser is manufactured, the angle φ is fixed. If φ can be determined ahead of time and the laser beam width w can be measured at an unknown distance z, then the distance z can be estimated according to Eq. 14.3 or 14.4. In fact, the laser beam diameter can be determined from a photo of the laser spot taken at a distance by a smartphone. The following smartphone experiments detail the design of two different optical rangefinders. The first is based on two diode lasers, and the other uses only a single laser.
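As a quick numerical illustration of Eqs. 14.2–14.4, the sketch below evaluates the Gaussian beam radius for an assumed waist and wavelength and then inverts the far-field relation to recover the distance; the waist size, wavelength, and distance are hypothetical values, not parameters of the lasers used in the experiments.

```python
import numpy as np

def beam_radius(z, w0, wavelength):
    """Gaussian beam radius w(z) from Eq. 14.2, with z0 = k*w0**2/2 (Rayleigh range)."""
    k = 2 * np.pi / wavelength
    z0 = k * w0**2 / 2
    return w0 * np.sqrt(1 + (z / z0)**2)

w0, wavelength = 0.5e-3, 532e-9        # assumed waist radius and wavelength (m)
phi = 2 * wavelength / (np.pi * w0)    # far-field full divergence angle implied by Eq. 14.2

# Rangefinding idea: measure the spot radius w at an unknown distance and invert
# Eq. 14.4, z ~ 2w/phi (valid far from the waist).
w_measured = beam_radius(8.0, w0, wavelength)   # simulate a spot measurement at z = 8 m
print(f"phi = {phi * 1e3:.2f} mrad, inferred z = {2 * w_measured / phi:.1f} m")
```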

14.2 Smartphone Experiment I: A Two-laser Optical Rangefinder (Elizabeth McMillan and Jacob Squires, 2014)

14.2.1 General strategy
In this two-laser optical rangefinder setup, an iPhone is used to acquire photos of the two laser beam projections on a target. The separation of the two laser spots is used to calibrate the laser spot image size, thus determining the distance of the target from the rangefinder.

14.2.2 Materials
1. A smartphone
2. Two lasers with 405- and 532-nm wavelengths
3. A specially designed smartphone case
4. A tape measure
5. Two small blocks of dense foam
6. Adhesives

14.2.3 Experimental setup
The divergence angle φ of each laser is first determined using Eq. 14.3 by measuring the beam diameter at 50-cm intervals over a range of 5 m from the lasers' output ports with a gauge card (a card marked with units of length). The rangefinder can be constructed with the materials listed above. An iPhone case can be purchased at a low cost or 3D printed. Two holes with the diameter of the laser pointer barrel and separated by a known distance of 10.5 cm from center to center should be gouged all the way through each of the foam blocks. The laser pointers are held in place by plugging each end into the holes in the foam blocks. Finally, the iPhone case can be fixed to the foam block supporting the output ports of the lasers using adhesives. A photo of the actual apparatus is shown in Fig. 14.3.
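The divergence calibration can be reduced to a linear fit: with the small-angle form of Eq. 14.3, the beam diameter grows as 2w ≈ φz + 2w0, so the slope of diameter versus distance gives φ directly. A minimal sketch is shown below; the diameter values are placeholders standing in for the gauge-card readings.

```python
import numpy as np

# Hypothetical calibration data: beam diameters (m) read from the gauge card
# at 50-cm intervals from the laser output port.
z = np.arange(0.5, 5.01, 0.5)               # distances (m)
diameter = 1.0e-3 + 1.0e-3 * z              # placeholder diameters (m)

# Small-angle form of Eq. 14.3: 2w ~ phi*z + 2*w0, so a linear fit of the
# diameter versus z gives the divergence angle phi as the slope.
phi, two_w0 = np.polyfit(z, diameter, 1)
print(f"phi = {phi * 1e3:.2f} mrad, 2w0 = {two_w0 * 1e3:.2f} mm")
```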

Chapter 14

94

Figure 14.3 A photo of the two-laser optical rangefinder with a smartphone.

In this experiment, the apparatus is placed a large distance (here 5.50 m is used) away from a wall, and the distance between the two laser spots (center-to-center) on the wall is measured. By adjusting the laser mountings in the foam block, one can ensure that the two laser spots on the wall are spaced 10.5 cm apart from center-to-center (as determined by a tape measure against the wall). This confirms that these two laser beams are parallel to each other. After this calibration procedure is performed, a photo of the two laser spots on a screen at an unknown distance z is taken by the smartphone. An example is shown in Fig. 14.4. Using ImageJ software (see Appendix III), one can measure the pixel distance lp between the centers of the two laser spots, which corresponds to 10.5 cm. Thus, the length represented by each pixel in the photo can be determined. Based on this distance calibration value, the real diameters 2w of the two laser spots at the unknown distance can be determined, and the target distance can be calculated as z = 2w/φ with the small angle approximation applied.
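A minimal sketch of the pixel-to-distance conversion described above is given below; the pixel counts and divergence angle are illustrative numbers, not measurements from the experiment.

```python
# Convert ImageJ pixel measurements into a target distance.
separation_real = 0.105      # known center-to-center laser separation (m)
separation_px = 420.0        # pixel distance l_p between the two spot centers
spot_diameter_px = 36.0      # pixel diameter of one laser spot
phi = 0.0010                 # divergence angle of that laser (rad)

meters_per_pixel = separation_real / separation_px
spot_diameter = spot_diameter_px * meters_per_pixel   # real diameter 2w (m)
z = spot_diameter / phi                                # Eq. 14.4: z ~ 2w/phi
print(f"target distance z ~ {z:.1f} m")
```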

Figure 14.4 A representative photo of two laser spots on a wall at a distance z away from the apparatus. The pixel distance lp between the centers of the two laser spots is determined using ImageJ software. The calculated real distance per pixel is used to determine the diameter of the laser spots.


Figure 14.5 The plot of the extracted distance zmeasured from the rangefinder versus the actual distance z. The data are fit with a linear function zmeasured = kz + B. The extracted fit parameters are k = 0.90 ± 0.05 and B = 23 ± 4 cm.

calibration process, a series of photos of the two laser spots on a movable screen is taken when the screen is placed at different distances z from the rangefinder. The photos of the two laser spots on the screen are then analyzed by ImageJ software, the actual laser spot diameter 2w in each photo is determined, and the distance zmeasured is extracted using Eq. 14.4. The actual distance z is also measured with a tape measure. Figure 14.5 plots zmeasured versus z obtained in the experiment. Evidently, they follow a linear relationship, and the least-squares fitting gives a slope of 0.90  0.05.

14.3 Smartphone Experiment II: Estimating the Beam Waist Parameter with a Single Laser (Joo Sung and Connor Skehan, 2015)

14.3.1 General strategy
In this single-laser optical rangefinder setup, the smartphone is used to take photos of the laser beam projection on a target from various distances. The images are used to calibrate the distance from the rangefinder to the target.

14.3.2 Materials
1. A smartphone
2. A red laser (650 nm)
3. A LEGO®-constructed laser holder
4. A piece of white paper
5. A ruler
6. A cardboard backdrop
7. A light brown cardboard stand
8. Wooden laundry clips

14.3.3 Experimental setup
A LEGO laser holder is designed to support the laser and attaches onto the back of the smartphone to form a smartphone–laser system, as shown in Fig. 14.6. The target is constructed using a square piece of paper, a blue cardboard backdrop, and a rectangular cardboard stand. They are stacked and held together by a wooden laundry clip, as shown in Fig. 14.7. The blue color of the cardboard backdrop is chosen because it is easily distinguishable from the surrounding environment. The cardboard stand should be sturdy enough to hold the target upright and perpendicular to the surface it sits upon.

Figure 14.6 A LEGO laser holder attached to a smartphone.

Figure 14.7 Single-laser rangefinder setup consisting of a laser a distance z from the target. The actual length of any edge of the blue cardboard backdrop la will be important for the calibration procedure.


The following iterative data collection procedure is adopted to find w at a distance z between the laser and the target:
1. Position the smartphone–laser system at a distance z = 45 cm away from the target.
2. Activate the laser and direct the beam so that it is incident on the target.
3. Capture a photo of the target with the smartphone.
4. Move the smartphone–laser system an additional 21 cm from the target (i.e., z = 66 cm from the target).
5. Repeat steps 2–4, increasing z in increments of 21 cm until the last measurement is taken at z = 234 cm. A total of 10 photos should be taken.

Each of the 10 smartphone photos is calibrated with the following image calibration procedure to find w as a function of the distance z from the target (before calibration, the actual length la of any one of the edges of the cardboard backdrop, as shown in Fig. 14.7, is measured with a ruler):
1. la is divided by the length in pixels lp of the corresponding blue edge extracted from the photo of the target. A distance-to-pixel calibration ratio is defined as h = la/lp.
2. The laser spot is located on the target and approximated as a perfect circle. The radius of the circle in pixels rp is extracted from the image.
3. The actual radius of the circle w (in meters) is calculated by multiplying rp by the distance-to-pixel ratio, i.e., w = rp·h = rp·(la/lp). This value of w is the estimated laser beam radius at a given z.

A visual depiction of the image calibration procedure is shown in Fig. 14.8.
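Steps 1–3 of the calibration procedure amount to a few lines of arithmetic, sketched below with illustrative numbers (the edge length, pixel counts, and spot radius are placeholders, not the authors' data).

```python
# Image-calibration steps 1-3 for one photo.
l_a = 0.20        # actual edge length of the cardboard backdrop (m)
l_p = 800.0       # the same edge measured in pixels in the photo
r_p = 12.0        # laser-spot radius in pixels

h = l_a / l_p     # distance-to-pixel calibration ratio (m per pixel)
w = r_p * h       # estimated beam radius at this z (m)
print(f"h = {h * 1e3:.3f} mm/pixel, w = {w * 1e3:.2f} mm")
```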

Figure 14.8 Estimation of the beam radius from a photo of the target using the image calibration procedure.


14.3.4 Experimental results
According to the experimental procedure described above, the beam radius w on the target at different z is measured; w is plotted against z as shown in Fig. 14.9, and a single outlier corresponding to the measurement at z = 129 cm is removed because its w value is much smaller than the neighboring values. The outlier is likely the result of environmental or gross errors introduced while repositioning the smartphone between measurements. A linear fit of the data estimates that the slope k = 0.90 ± 0.01 and w0 = 1.3 ± 0.1 mm. It is important to note the key differences between this section and Section 14.2. In Section 14.2, the separation of the two lasers is used as the calibration, i.e., it is intrinsically embedded in the system. If one can acquire a good, resolved photo of the two laser spots, one can determine z. In this section, the experiment always needs an external calibration frame on the target, which is often not practically feasible.

Figure 14.9 A plot of the measured beam radius w versus the laser beam propagation distance z. The data are fit with a linear function w = kz + w0. The resulting fit parameters are k = 0.020 ± 0.002 and w0 = 1.3 ± 0.1 mm.

Chapter 15

Monochromator

15.1 Introduction
A monochromator is an optical device that takes a polychromatic light beam as an input and produces a light beam with a specified wavelength or band of wavelengths. It is particularly useful for obtaining optical spectra because almost all materials and structures in nature have unique optical characteristics in different wavelength regions. The principles of a monochromator are based on the dispersion of light when a beam passes through a prism or a diffraction grating (see Chapters 16 and 17). As shown in Fig. 15.1, when a collimated polychromatic light passes through a prism at a specific incident angle, light beams with different colors (wavelengths) will refract differently due to the dispersion of the refractive index. A small aperture (the exit slit in Fig. 15.1) can be used to select a small waveband of light and allow it to pass through. This waveband can be selected by scanning the aperture up and down or rotating the prism. The prism in this setup is called the dispersion element, and this setup is called the dispersive monochromator. The other type of monochromator is the diffractive monochromator shown in Fig. 15.2. In this setup, the prism is replaced by a reflective diffraction grating (one can also use the transmission diffraction grating shown in Chapter 12, Fig. 12.1). According to the case for m ≠ 0 in Eq. 12.3, the diffraction angle necessary to form a given order of bright fringe will be different for light beams with different wavelengths, so that light beams of

Figure 15.1 Basic layout of a dispersive monochromator.


Figure 15.2 Basic layout of a diffractive monochromator.

different colors are spatially separated. Usually, one uses m = 1 to ensure a high-intensity light beam passing through the exit slit.
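For reference, the sketch below evaluates the first-order (m = 1) diffraction angles for a typical 1000 lines/mm grating, assuming the standard grating relation d sinθ = mλ for normal incidence (the relation referred to above as Eq. 12.3); the grating pitch and wavelengths are generic choices, not tied to a specific experiment in this chapter.

```python
import numpy as np

# First-order diffraction angles from d*sin(theta) = m*lambda with m = 1.
d = 1e-3 / 1000                          # groove spacing of a 1000 lines/mm grating (m)
wavelengths = np.array([400e-9, 500e-9, 600e-9, 700e-9])
theta = np.degrees(np.arcsin(wavelengths / d))
for lam, t in zip(wavelengths, theta):
    print(f"{lam * 1e9:.0f} nm -> {t:.1f} deg")
```

Because the m = 1 angle changes by tens of degrees across the visible range, rotating the dispersive or diffractive element in small angular steps is enough to sweep the selected waveband across the exit slit.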

15.2 Smartphone Experiment I: A Diffractive Monochromator (Nathan Neal, 2018)

15.2.1 General strategy
In this setup, a diffraction grating is used in the monochromator to separate white light into light with different wavebands. The smartphone can serve as a detector by capturing images of the spectrum. The RGB values of each wavelength of light can be extracted from the images acquired by the smartphone and compared to known RGB values.

15.2.2 Materials
1. A smartphone
2. A polychromatic light source (sunlight)
3. A diffraction grating (1000 lines/mm)
4. 3D-printed housing and rotating platform
5. A laser of known wavelength (for calibration)

15.2.3 Experimental setup
The monochromator is 3D printed, as shown in Fig. 15.3, and consists of a housing and an interior rotating holder for the grating. The housing has a front wall with slits for sunlight to enter and a body consisting of the other three walls, a floor, and a lid. The front wall slides into a slot to form an “open-box”–style monochromator housing. The exterior is designed so that a smartphone can rest snugly in a holder with the camera looking into the interior through an exit port. A circular rotating platform designed to hold the grating fits snugly in a circular indentation on the floor and can be rotated in increments of 4 deg. The top of the box is covered by a black plastic board to prevent ambient light from entering the monochromator.

Figure 15.3 3D-printed monochromator design.

This system is calibrated by shining a laser of known wavelength (i.e., a He–Ne laser) into the monochromator and capturing an image. The average RGB values are extracted from the image to ensure that they are similar to the known RGB values corresponding to that wavelength, which can be found at https://academo.org/demos/wavelength-to-colour-relationship/. When computing the right RGB values from an image in this experiment, it is important to note that the color of each image is not uniform. Figure 15.4 shows an example: an image of blue light (λ = 484 nm) taken by the monochromator. It is clear from Fig. 15.4(a) that the blue light does not cover the entire image, so one cannot use the average RGB values of the entire image to represent the blue color because the dark patches and saturation regions in the image would contribute to the overall average RGB values. Thus, one needs to select the right zone that closely matches the actual color to obtain the correct RGB values.

Figure 15.4 (a) The original smartphone image of blue light (λ = 484 nm) from the monochromator. (b) The smartphone image is divided into five different zones via visual inspection. (c) The square piece taken from Zone IV in the original image compared with the actual color.


Table 15.1 The average R, G, and B values over the entirety of Fig. 15.4(a) and each zone from Fig. 15.4(b), as well as the actual RGB values for λ = 484 nm light.

Analyzed image             Average R value   Average G value   Average B value
Fig. 15.4(a)               4 ± 3             90 ± 70           110 ± 80
Zone I in Fig. 15.4(b)     23 ± 8            60 ± 20           70 ± 20
Zone II in Fig. 15.4(b)    2 ± 1             11 ± 4            15 ± 5
Zone III in Fig. 15.4(b)   60 ± 9            140 ± 20          190 ± 30
Zone IV in Fig. 15.4(b)    1 ± 8             250 ± 10          254 ± 3
Zone V in Fig. 15.4(b)     251 ± 6           254 ± 1           254 ± 1
Actual color               0                 230               255

By visual inspection, Fig. 15.4(a) can be divided into five different color zones, as illustrated in Fig. 15.4(b). Zones I and II at the edge of Fig. 15.4(b) can immediately be discarded because they contain dark patches. Zone V is heavily saturated, so it should also not be considered. Although Zone III certainly has a blue hue, it still contains contributions from the dark region and the distribution of color is not uniform; thus, it should be removed from consideration as well. Evidently, the blue light in Zone IV is the most suitable region for analysis, which can be confirmed by visual appraisal of Fig. 15.4(c). In fact, Table 15.1 summarizes the average RGB values from each zone in Fig. 15.4(b) as well as the actual RGB values for λ = 484 nm light. Clearly, the RGB values extracted from Zone IV match very well with the actual RGB values.

15.2.4 Experimental results
Sunlight is directed onto the diffraction grating through the slits on the front wall. Five images are taken over wavelengths in the visible range (400–700 nm) by rotating the diffraction grating so that different wavelengths of light are incident on the smartphone camera at different incident angles. Figure 15.5 shows the images produced by the monochromator for different wavelengths of light. The actual color of the corresponding wavelength is presented under each image for comparison. For each image, the angle of rotation is recorded. The obtained RGB results are shown in Table 15.2. The experimentally obtained RGB values correspond quite well with the actual RGB values.
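The zone-averaging step described above is straightforward to script. The sketch below computes the mean and standard deviation of the R, G, and B channels over a rectangular crop; the file name and crop coordinates are hypothetical placeholders for the actual image and the Zone IV boundaries.

```python
import numpy as np
from PIL import Image

# Average R, G, B values (with standard deviations) over a selected zone.
img = np.asarray(Image.open("monochromator_484nm.jpg").convert("RGB"), dtype=float)

zone = img[400:700, 900:1300, :]           # rows and columns of the chosen zone (placeholder)
pixels = zone.reshape(-1, 3)
mean, std = pixels.mean(axis=0), pixels.std(axis=0)
for name, m, s in zip("RGB", mean, std):
    print(f"average {name} value: {m:.0f} +/- {s:.0f}")
```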

Figure 15.5 Comparison of light images of different wavelengths generated by the monochromator at different slit angles (top) versus the actual color (bottom).

Table 15.2 Comparison of measured and known R, G, and B values at different wavelengths. The angle θ is the slit angle.

λ (nm)   θ (deg)              R value      G value      B value
422      25       Measured    110 ± 10     40 ± 10      245 ± 0
                  Known       97           0            255
484      29       Measured    1 ± 8        250 ± 10     254 ± 3
                  Known       0            230          255
544      33       Measured    116 ± 1      248 ± 2      75 ± 0
                  Known       143          255          0
601      37       Measured    238 ± 1      129 ± 7      57 ± 9
                  Known       255          187          0
656      41       Measured    234 ± 0      51 ± 0       88 ± 8
                  Known       255          0            0

15.3 Smartphone Experiment II: A Dispersive Monochromator (Myles Popa and Steven Handcock, 2016)

15.3.1 General strategy
In a custom dispersive monochromator setup, a prism separates white light into light of different wavebands. The smartphone can serve as a detector by capturing images of the spectrum. The RGB values of each wavelength of light can be extracted from the images and compared with known RGB values.

15.3.2 Materials
1. A smartphone
2. A white light source
3. A He–Ne laser (for calibration)
4. A prism
5. 3D-printed rotating stage and LEGO® housing

15.3.3 Experimental setup
In this design, the monochromator consists of two major components: an interior rotation stage and a housing. The rotation stage is made of a 3D-printed worm screw and a worm gear, as shown in Fig. 15.6. A glass prism is glued on top of the worm gear so that it can be rotated by a 3D-printed knob that is mounted to the worm screw outside the housing. Such a design can prevent ambient light from entering the housing. The housing is built around the rotation stage using LEGO bricks, as shown in Fig. 15.7. The knob is placed on the left of the housing so the user can rotate the prism. To further reduce error, one can black out the interior by covering it in dark tape, for example. A white light is shined into the interior through a small entrance slit at one end of the monochromator and is captured by the smartphone on the other end. Images of different colors are obtained by rotating the prism with the worm wheel. The calibration process is similar to that described in Section 15.2.

Figure 15.6 Worm wheel design for a rotatable platform.

Figure 15.7 Representative views of monochromator housing from (a) the smartphone port side and (b) the entrance slit side.


Figure 15.8 The observed λ = 437.5 nm light image from the monochromator versus the actual color.

15.3.4 Experimental results
Figure 15.8 shows one example image captured by this monochromator at the wavelength λ = 437.5 nm. The image of the actual color of 437.5-nm light is shown side-by-side for comparison. The corresponding RGB values are also presented in Fig. 15.8. Although there are some slight deviations, the measured RGB values are generally consistent with the actual values.

Chapter 16

Optical Spectrometers

16.1 Introduction
Atoms, molecules, and solids can absorb light at appropriate wavelengths due to the transition of their electrons between two different energy levels, and they can produce light emissions according to these transitions when they are excited by external power inputs. These emissions yield an optical spectrum, e.g., the emitted light intensity versus the wavelength, that consists of a set of spectral lines unique to the system. As such, optical spectrometry is one of the most powerful tools for a wide variety of applications across physics, chemistry, and biology.

Optical spectrometers (also known as spectrophotometers) measure the intensity of light as a function of wavelength over a given range, providing insight into the amount of electromagnetic radiation that is reflected, transmitted, absorbed, or scattered after interaction with a sample. An optical spectrometer is similar to a monochromator (see Chapter 15) in that it contains two main components: a diffractive or dispersive element and a detector array. However, its purpose is to collect a characteristic spectrum (the light intensity as a function of wavelength λ) over a range of wavelengths rather than to isolate a single wavelength of light. In a transmission optical spectrometer setup as shown in Fig. 16.1, a broadband (white) light source enters the sample of interest with an initial intensity I0(λ) and exits the sample with intensity IT(λ). The absorbance A(λ) of the light by the sample is given by Beer's law (see Chapter 18):

A(λ) = −ln(IT/I0).   (16.1)

The transmitted light enters the entrance slit of the spectrometer and passes through the diffractive or dispersive element, which separates the light into its spectral components. When collected by a detector array or digital camera, the intensities of dispersed wavelengths appear at different locations of the array or camera. Once the wavelength at different locations of the array or camera is


Figure 16.1 General schematic of an optical spectrometer consisting of a light source, an entrance slit, a dispersive or diffractive element, and a detector.

calibrated, the intensity of light detected by each individual detector at different locations versus the wavelength constitutes the measured spectrum. Throughout this book, a diffractive spectrometer is used for experiments due to its compatibility with the smartphone CMOS detector array. Like the monochromator setup described in Chapter 15, different wavelengths of light will be diffracted from the grating at different angles, resulting in their spatial separation when captured by the camera (see Eq. 12.3). Once a spectrum is produced, one can calculate the resolution power Rspec at a given wavelength λ using the equation

Rspec = λ/Δλ,   (16.2)

where the spectral resolution Δλ is found by considering the minimum difference in resolvable wavelengths. For example, the sodium emission spectrum (produced by a sodium gas-discharge lamp) displays two resolvable emission lines at λ1 = 589.0 nm and λ2 = 589.6 nm, as shown in Fig. 16.2, and

Figure 16.2 Sodium doublet spectrum. (Figure used with permission from IOP Publishing [1].)


the two emission lines are well separated and distinct. Thus, for the spectrometer used to take the spectrum in Fig. 16.2 [1], one can obtain Δλ ≈ 589.6 nm – 589.0 nm = 0.6 nm and Rspec ≈ 589.3/0.6 ≈ 1000, where we have taken λ to be the average of λ1 and λ2. Because a smartphone has a well-designed CMOS sensor array, an optical spectrometer can be easily built by combining it with a diffraction grating. However, depending on the type of smartphone and the light path, different designs can be obtained. Sections 16.2 and 16.3 describe two example smartphone spectrometers. More designs are provided in later chapters.

16.2 Smartphone Experiment I: A Diffractive Emission Spectrometer (Helena Gien and David Pearson, 2016)

16.2.1 General strategy
In the setup for an emission spectrometer, a smartphone camera can be used as a detector for capturing images of the spectra. Using these images, one can generate plots of the intensity versus wavelength and estimate an experimental value for the spectral resolution.

16.2.2 Materials
1. A smartphone
2. Hydrogen, helium, neon, and sodium gas-discharge lamps
3. A diffraction grating (1000 lines/mm)
4. 3D-printed spectrometer components
5. An incandescent bulb and associated electronic components

16.2.3 Experimental setup
The emission spectrometer is 3D printed and consists of three pieces: a scope, a mounting ring, and a phone case for the iPhone 6S, as shown in Fig. 16.3.

Figure 16.3 The scope viewed from the (a) side and (b) slanted end. (c) The mounting ring with a lip. (d) The smartphone case with a circular cutout for the mounting ring.


Figure 16.4 A smartphone photo of the emission bands/lines from a helium gas-discharge lamp (top) and a full calibration spectral photo from an incandescent bulb (bottom).

The scope is a 3-cm diameter cylindrical tube with a flat end and a slanted end, as shown in Figs. 16.3(a) and (b). An entrance disk, a piece of black cardstock with a cut slit (∼0.5 mm wide) passing roughly through its center, is attached to the flat end of the scope [Fig. 16.3(a)], and a diffraction grating is suspended using another piece of black cardstock at the slanted end of the scope [Fig. 16.3(b)]. One must ensure that light entering the entrance slit can pass through the grating and be captured by the smartphone. The mounting ring shown in Fig. 16.3(c) is designed to be attached to the slanted end of the scope using superglue and placed in the circular cutout on the phone case [Fig. 16.3(d)]. One also needs to make sure that the diffraction grating can completely cover the smartphone camera. Light from each gas-discharge lamp is shined into the spectrometer, producing emission spectra that are captured by the smartphone camera. A useful method for ensuring the best results from this design is to print the components with dark-colored materials and to cover all contact points between each of the pieces with black tape to prevent the possible introduction of ambient light.

An example of a calibrated spectrum (taken from a helium gas-discharge lamp) is shown in Fig. 16.4 (top). The horizontal dimension of the image is scaled to the visible wavelength range (400–700 nm). This spectral image consists of six discretized bands/lines of different colors. Each band produced by a gas-discharge lamp is rescaled to align with the corresponding wavelengths in the reference spectrum. This process is known as a wavelength calibration. A photo of the spectrum from an incandescent bulb is presented at the bottom of Fig. 16.4. It is a continuum spectrum.

16.2.4 Experimental results
Figure 16.5 shows some representative spectral photos captured by the smartphone camera from different gas-discharge lamps. All show discretized spectral lines at different horizontal locations (wavelengths). The corresponding emission spectra extracted from these photos are plotted in Fig. 16.6, where sharp and distinct emission peaks are visible in most of the spectra.
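One simple way to carry out the wavelength calibration described above is to locate a few known emission lines in the photo and fit a linear pixel-to-wavelength map; the pixel positions below are hypothetical, and the helium line wavelengths are standard reference values. This is a sketch of the idea rather than the authors' exact procedure.

```python
import numpy as np
from PIL import Image

# Linear pixel-to-wavelength calibration from identified helium emission lines.
pixel_pos = np.array([310, 620, 812, 1020])          # columns of identified bands (placeholder)
known_nm = np.array([447.1, 501.6, 587.6, 667.8])    # helium reference wavelengths (nm)
slope, intercept = np.polyfit(pixel_pos, known_nm, 1)

# Extract an intensity profile along the dispersion axis of the spectral photo.
img = np.asarray(Image.open("helium_spectrum.jpg").convert("L"), dtype=float)
profile = img.mean(axis=0)                            # average over rows -> intensity vs column
wavelength_axis = slope * np.arange(profile.size) + intercept
```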


Figure 16.5 Spectral photos of emissions from (a) hydrogen, (b) helium, (c) neon, and (d) sodium lamps.

Figure 16.6 Representative emission spectrum for (a) hydrogen, (b) helium, (c) neon, and (d) sodium lamps. The dashed blue lines in the helium spectrum indicate the closest resolvable peaks.

The peak wavelengths corresponding to Fig. 16.6 for different lamps are plotted against the actual peak wavelengths in Fig. 16.7. Linear regression is used to fit the experimental data, which gives a slope k = 0.97 ± 0.01 and an intercept B = 14 ± 6 nm. Clearly, the value for k is very close to 1 and that for B is very close to 0, indicating a good agreement between the measured and actual peak wavelengths. Evidently, the closest resolvable peaks occur in the helium lamp at λ1 = 477.0 nm and λ2 = 487.7 nm, as indicated by the dashed blue lines in Fig. 16.6(b). Hence, the spectral resolution for this spectrometer is estimated


Figure 16.7 Measured peak wavelengths for each emission source versus the corresponding actual peak wavelengths.

as Δλ ≈ 10.7 nm, and the resolution power is calculated as Rspec ≈ 482.3/10.7 ≈ 45.1. Notice that the characteristic sodium D lines (as discussed in the Introduction) are not resolvable, as shown in Fig. 16.6(d), and only a single peak at 587.7 nm is observed.

16.3 Smartphone Experiment II: Spectra of Different Combustion Sources (Ryan McArdle and Griffin Dangler, 2016)

16.3.1 General strategy
Similar to the strategy described in Section 16.2.1, a spectrometer is designed to take the spectra of different combustion sources.

16.3.2 Materials
1. A smartphone
2. Various emission sources:
   • Combustion sources: matchsticks, match heads, wax candle, and paper
   • Electrical sources: RGB computer monitor and fluorescent lightbulb
3. A diffraction grating (1000 lines/mm)
4. 3D-printed spectrometer components

16.3.3 Experimental setup
The 3D-printed spectrometer has four pieces, as shown in Fig. 16.8. A long light entrance tube, consisting of the two separately printed parts shown in Figs. 16.8(a) and (b), is designed with a 1-mm wide gap on one end (top). The bottom end of the tube is slanted at an angle of 45 deg and is used to mount the grating. The top and bottom pieces of the tube are superglued together. It is expected that the long tube can better prevent ambient light from entering the spectrometer. A mounting piece with a window is designed to hold the grating and is attached to the slanted end of the tube, as shown in Fig. 16.8(c). The assembly of the tube and the mounting piece is illustrated in Figs. 16.8(e) and (f). The phone case is printed based on the dimensions of the smartphone that is being used as the observation device. An exit slit is created by using two pieces of black tape and placing them over the camera opening in the phone case. This exit slit should be attached flush with the mounting piece so that the phone camera can look into the spectrometer and capture the images of emission spectra. Photos of the spectrometer taken from different angles are shown in Fig. 16.8(g).

Figure 16.8 The (a) top and (b) bottom pieces of the light entrance portion of the spectrometer. (c) The mounting piece and (d) the phone case. (e and f) The assembled light entrance tube. (g) Two photos of the assembled spectrometer (without the smartphone).

16.3.4 Experimental results
Light from different combustion sources (matchsticks, match heads, wax candles, and paper) and the electrical sources (RGB computer monitor and fluorescent bulb) is shined into the spectrometer to produce the spectra. Figure 16.9 shows spectral images captured by the smartphone. In particular, the combustion of the match head, matchstick, printer paper, and candle gives continuum spectra, whereas the fluorescent bulb and computer monitor exhibit discretized spectra.


Figure 16.9 Spectral photos from combustions and electrical light sources captured by the smartphone-based spectrometer for (a) match heads, (b) a candle, (c) matchsticks, (d) a fluorescent bulb, (e) printer paper, and (f) an RGB computer monitor.

Reference [1] M. D’Anna and T. Corridoni, “Measuring the separation of the sodium D-doublet with a Michelson interferometer,” Eur. J. Phys. 39(1), 015704 (2017).

Chapter 17

Dispersion

17.1 Introduction
According to Snell's law, a light ray will bend when traveling from one medium into another because light travels at different speeds in these two media. Within a given medium, different colors of light also travel at different speeds because the refractive index n depends on the wavelength λ of light; i.e., n is a function of λ. In most optical media, a longer wavelength of light corresponds to a smaller refractive index. This phenomenon is called dispersion and was demonstrated by Newton's prism experiment in 1666, in which a white light incident on a glass prism generates a broad rainbow-colored light beam, as shown in Fig. 17.1. For glass or other transparent media, the dispersion relationship can be expressed by the Sellmeier equation [1],

n² = A + B/(1 − C/λ²) + D/(1 − E/λ²),   (17.1)

where λ is the wavelength and A, B, C, D, and E are called the Sellmeier coefficients.

The refractive index n of a medium can be determined using the angle of minimum deviation through a prism made of the same medium. As shown in Fig. 17.2, when a light beam (Beam 1) is incident on an isosceles triangular prism with a vertex angle βv and refractive index n, it refracts into the prism

Figure 17.1 Dispersion of a medium. (Left) Refractive index n of a medium as a function of wavelength λ. (Right) Newton's prism experiment.


Figure 17.2 The deviation angle φ between an incident beam (Beam 1) and an emergent beam (Beam 3) from a prism.

(Beam 2) and emerges on the other side (Beam 3). The deviation angle φ is defined as the angle between Beams 3 and 1 and can be written as

φ = θ1 − θ2 + θ2′ − θ1′,   (17.2)

and, based on the geometry,

θ1′ = βv − θ2,   (17.3)

where θ1 and θ1′ are the angles of incidence at the two different interfaces, and θ2 and θ2′ are the corresponding angles of refraction. These angles obey Snell's law:

sin θ1 / sin θ2 = n,   sin θ2′ / sin(βv − θ2) = n.   (17.4)

From Eqs. 17.2–17.4, one can eliminate θ2 so that

sin θ2′ = sin βv √(n² − sin²θ1) − cos βv sin θ1.   (17.5)

These equations can be used to calculate φ at any initial angle of incidence θ1. However, when light propagates symmetrically through the prism, i.e., θ1 = θ2′, or Beam 2 becomes parallel to the base of the prism, φ is a minimum and one has

sin θ1 = n sin βv / √(2(1 + cos βv)) = n sin(βv/2).   (17.6)

Under this condition, φ = 2θ1 − βv. Therefore,


n = sin[(φ + βv)/2] / sin(βv/2).   (17.7)

Thus, in the experiment, if Beam 2 inside the prism can be set parallel to the base, one can obtain n by measuring the deviation angle φ, provided that βv is known.

17.2 Smartphone Experiment (Eric Older and Mario Parra, 2018)

17.2.1 General strategy
Classically, the measurement of the dispersion from a glass prism is conducted using a spectrometer with an optical goniometer. Because the dispersion is closely related to Snell's law, the experimental setup should be very similar to that described in Chapter 4. To obtain the dispersion relationship, the minimum deviation angle φ as a function of the wavelength λ of a particular color of light from a beam of white light emerging from the prism will be determined through the geometry of the setup. In this experiment, the smartphone will act as a spectrometer to determine the wavelength of the light as well as an imaging device to record the angle of the bent light beams. The construction of a smartphone spectrometer is presented in Chapter 16.

17.2.2 Materials
1. A smartphone
2. A diffraction grating (1000 lines/mm)
3. Two biconvex lenses (f = 25 mm)
4. A triangular glass prism
5. A white light source
6. Two metersticks
7. A piece of cardboard with a small slit (1-mm wide)

17.2.3 Experimental setup
The experimental setup is shown in Fig. 17.3. A white light from an LED flashlight is collimated by two lenses, and the beam is arranged to be parallel to the horizontal long meterstick. The light beam is incident on a prism of a known βv, which can be rotated so that the white light beam inside the prism can be adjusted to be parallel to the base of the prism. This can be done by taking a photo of the light beam inside the prism and analyzing it using ImageJ software (see Appendix III). The cardboard with a pinhole is placed on the path of Beam 3 but far away from the prism so that the dispersed colors can be spatially well separated. A smartphone spectrometer (see Chapter 16) is mounted behind the pinhole to determine the wavelength of the light passing


Figure 17.3 The experimental diagram for a dispersion measurement.

through the pinhole. The minimum deviation angle φ at that specific wavelength is determined by measuring the horizontal distance L of the pinhole from the prism and the vertical distance D from the white LED beam to the pinhole, as shown in Fig. 17.3, via tan φ = D/L. To accurately determine φ and obtain enough spectral resolution (i.e., clear wavelength dependence) in the experiment, one should ensure that L is much larger than the size of the prism. Before the experiment, the smartphone spectrometer should be calibrated using the spectral emission lines from a standard mercury lamp. The position of the cardboard is adjusted in the vertical direction so that the spectrometer can be illuminated by different colors of light, and the location of the slit opening is recorded to determine the minimum deviation angle φ. A photograph of the experimental setup is shown in Fig. 17.4 (the LED and beam expander are not shown).

17.2.4 Experimental results
Table 17.1 summarizes the experimentally measured results for the deviation angle φ at different λ and the calculated n. As λ decreases, n increases, as shown in the n–λ relationship plotted in Fig. 17.5, which is consistent with

Figure 17.4 (a) Experimental setup for measuring the dispersion of a glass prism. (b) The smartphone spectrometer. (c) A representative spectral image of a white LED.


Table 17.1 The measured deviation angle φ at different wavelengths λ and the corresponding n.

Color    Wavelength   φ           Refractive index
Red      690 nm       36.49 deg   1.492
Orange   630 nm       36.83 deg   1.496
Yellow   590 nm       37.52 deg   1.504
Green    547 nm       37.96 deg   1.509
Blue     437 nm       38.57 deg   1.516
Violet   415 nm       38.92 deg   1.520


Figure 17.5 The experimental plot of n versus the wavelength λ. The curve is a fit using the Sellmeier equation.

that shown in Fig. 17.1. The data points fit well with Eq. 17.1. The obtained data are within the range of n for different glasses [1].
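As a numerical check of Eq. 17.7, the sketch below converts a measured minimum deviation angle into a refractive index, assuming an equilateral prism (βv = 60 deg); with that assumption it reproduces the red-light entry of Table 17.1. The vertex angle is an assumption for illustration, not a value stated in this section.

```python
import numpy as np

def refractive_index(phi_deg, beta_v_deg=60.0):
    """Eq. 17.7: n = sin((phi + beta_v)/2) / sin(beta_v/2), angles in degrees."""
    phi = np.radians(phi_deg)
    beta_v = np.radians(beta_v_deg)
    return np.sin((phi + beta_v) / 2) / np.sin(beta_v / 2)

print(f"n = {refractive_index(36.49):.3f}")   # red light, phi = 36.49 deg -> ~1.492
```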

Reference [1] G. Ghosh, “Sellmeier coefficients and dispersion of thermo-optic coefficients for some optical glasses,” Appl. Opt. 36(7), 1540–1546 (1997).

Chapter 18

Beer's Law

18.1 Introduction
When a light beam propagates through any medium (except for a vacuum), the intensity of the light will be attenuated. If the medium penetrated by the light is homogeneous, then for a propagation distance x (Fig. 18.1), the intensity I(x) can be expressed as

I(x) = I0 e^(−αa x),   (18.1)

where αa is the absorption coefficient and A = αa x is defined as the absorbance. Usually, αa is a function of the wavelength λ, and for a dilute solution, it is a linear function of the molar concentration of the solution c,

αa(λ) = ε(λ)c,   (18.2)

where ε(λ) is called the molar absorptivity or molar extinction coefficient. In most cases, the wavelength-dependent quantity ε(λ) is a measure of the probability of an electronic transition. These relationships (Eqs. 18.1 and 18.2)

Figure 18.1 The configuration of Beer’s law.


are known as the Beer–Lambert law or Beer’s law. They are often used in chemical or thin film analysis.

18.2 Smartphone Experiment (Sean Krautheim and Emory Perry, 2018)

18.2.1 General strategy
Based on the schematic shown in Fig. 18.1, the experimental setup requires a light source, an absorptive medium, and a light detector. If the wavelength-dependent absorbance is to be measured, a white light source and a smartphone spectrometer (see Chapter 16) should be used.

18.2.2 Materials
1. A smartphone
2. A desk lamp
3. A diffraction grating (1000 lines/mm)
4. A grating holder
5. A cuvette
6. Black tea

18.2.3 Experimental setup
A smartphone-based spectrometer is constructed to serve as the detector in the experiment. Figure 18.2(a) shows a 3D holder designed for the diffraction

Figure 18.2 The design of a simple spectrometer: (a) CAD design for the grating holder, (b) the 3D-printed grating holder, (c) the diffraction grating slide, and (d) the image of the spectrum of a desk lamp.


grating slide that can be mounted on a smartphone. The incoming light forms an angle with respect to the grating so that the light is effectively separated into different wavelengths and higher-order diffraction patterns can be formed on the smartphone camera. Figures 18.2(b) and (c) show the 3D-printed holder for the diffraction grating slide and the diffraction grating. Figure 18.2(d) is an image of the dispersed light taken by the smartphone after the light from the desk lamp has passed through the grating. The light from the desk lamp is well dispersed into different colors. After calibrating the wavelength of the spectrum (see Chapter 16), the spectrometer is ready for use.

The experimental setup for testing Beer's law using a tea solution is shown in Fig. 18.3(a). Depending on the power of the desk lamp bulb, the intensity of the lamp may be too high, which will saturate the CMOS sensor in the smartphone. To reduce the intensity of the light from the lamp, the light beam is reflected off a sheet of white paper. The cuvette containing different relative concentrations of tea [see Fig. 18.3(b), in which the original tea solution is diluted by water in different volume ratios] is placed directly in front of the smartphone spectrometer. Clear water is used as a control. The spectra of different tea solutions are taken by the smartphone spectrometer, and the wavelength-dependent spectra are obtained and analyzed. For a specific wavelength, one can calculate

A(λ) = −ln[Itea(λ)/Iwater(λ)],   (18.3)

where Itea(λ) is the intensity of the light after passing through the tea solution and Iwater(λ) is the intensity of light after transmission through the water at a wavelength λ.

18.2.4 Experimental results
Figure 18.4 shows a representative spectrum of light passing through the 40% tea solution. The red curve is the average spectrum from the three (RGB) channels. Based on Eq. 18.3, A(λ) can be obtained. Figure 18.5 plots A as a

Figure 18.3 (a) The experimental setup for demonstrating Beer’s law. (b) A photo of tea solutions of different concentrations.


Figure 18.4 A normalized transmission spectrum of light passing through a tea solution with a concentration of 40%.

Figure 18.5 The plots of absorbance A versus the relative tea concentration c at selected wavelengths. Each solid line shows a linear relationship.

function of relative tea concentration c at λ = 490, 535, and 557 nm, respectively. According to Beer's law (Eq. 18.2), it is expected that such a plot should follow a linear relationship, and the findings shown in Fig. 18.5 confirm that each A–c plot does indeed follow a linear relationship.
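A minimal sketch of the analysis implied by Eqs. 18.2 and 18.3 is shown below: compute the absorbance at one wavelength for several dilutions and check the linearity of A versus c. The intensity values are illustrative placeholders, not the measured tea spectra.

```python
import numpy as np

def absorbance(I_tea, I_water):
    """Eq. 18.3: A = -ln(I_tea / I_water), evaluated element-wise."""
    return -np.log(np.asarray(I_tea) / I_water)

# Hypothetical transmitted intensities at one wavelength for several relative concentrations c.
c = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
I_tea = np.array([0.82, 0.67, 0.55, 0.45, 0.37])   # arbitrary units
I_water = 1.0                                       # clear-water control

A = absorbance(I_tea, I_water)
slope, intercept = np.polyfit(c, A, 1)              # Beer's law predicts A proportional to c
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```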

Chapter 19

Optical Spectra of Incandescent Lightbulbs and LEDs

19.1 Introduction
A standard incandescent lightbulb, as shown in Fig. 19.1(a), has a filament heated by electrical resistance when a current is passed through it, causing a thermal excitation of electrons that emit light when they relax to the ground state. This light emission can be described theoretically by blackbody radiation. An object or a system in thermal equilibrium that absorbs all electromagnetic radiation incident on it without reflection or transmission is known as a blackbody. When a blackbody is set at an equilibrium temperature T, it emits electromagnetic radiation with an intensity distribution unique to that temperature [Fig. 19.1(b)]. A small cavity at temperature T, as shown in Fig. 19.2(a), can be treated as an ideal blackbody. The electromagnetic radiation enters through a small hole and loses energy to the interior as it is absorbed or reflected off the surfaces. It is highly improbable that the incoming radiation will exit the cavity because the hole is small, so the incoming electromagnetic radiation is effectively absorbed completely. However, the inner surfaces of the cavity that absorb the energy from the incoming radiation will emit thermal radiation at temperature T. Several

Figure 19.1 (a) The structure of an incandescent lightbulb and (b) its radiation spectrum. Notice that most of the radiation is distributed in the infrared wavelength region.


Figure 19.2 (a) A cavity at temperature T with a small opening that acts as a blackbody, and (b) some representative blackbody radiation spectra at different temperatures T.

examples of blackbody radiation spectra at different T are shown in Fig. 19.2(b).

Blackbodies were a pivotal topic in physics at the inception of quantum mechanics in the early 1900s, when Max Planck introduced the concept of quantized electromagnetic radiation and derived the well-known blackbody radiation equation. The electromagnetic waves inside the cavity can be treated as standing waves, each oscillating at a certain frequency ν. Each of these oscillators has an average energy of E = kB T, where kB is the Boltzmann constant. Planck assumed that the energy of an oscillator at frequency ν is not a continuum of values, but rather that it is restricted to a discrete set of energies given by E = nhν, where n is a positive integer and h is the Planck constant. This fundamental assumption led to the well-known blackbody radiation equation,

I(λ, T) = (2h vL²/λ⁵) · 1/[e^(h vL/(λ kB T)) − 1],   (19.1)

where I is the radiation intensity and vL is the speed of light in vacuum. Equation 19.1 agrees well with experimental results. Notice that in Fig. 19.2(b), the peak of each spectral radiance curve blueshifts with increasing temperature T (in kelvin). These peak wavelengths λpeak can be described by Wien's displacement law:

λpeak = (2.898 × 10⁻³ m·K)/T.   (19.2)

As shown in Fig. 19.2(b), when T is larger than 2000 K, blackbody radiation can emit electromagnetic waves in the visible wavelength range (400–700 nm) where light can be seen by the human eye. However, the brightness and the color depend closely on T. Tungsten has been the most commonly used filament material in incandescent bulbs since 1879 due to its high melting point and ability to function effectively at temperatures as high as 3000 K. The filament is enclosed within a glass bulb filled with an inert gas,

Figure 19.3 Efficiencies of a variety of light sources. Incandescent lamps have efficiencies ranging between 1.5 and 2.2%, whereas LEDs are much more efficient, with efficiencies varying between 19 and 29.2%. (Figure reprinted from [1] under CC by 4.0.)

usually argon due to its low thermal conductivity, and the overall color of the incandescent bulbs is determined by the temperature of the filament or the rated wattages of the bulbs. For more than 100 years, incandescent bulbs have been indispensable household items for people all over the world. However, their efficiency of energy-to-light conversion is very low. As shown in Fig. 19.3, they convert less than 5% of the total electric energy into visible light, and the remaining energy is lost as heat or infrared radiation. New light bulbs such as fluorescent bulbs or LED lamps have been invented with a much higher efficiency. Recently, LED lamps have begun to dominate the market. LEDs emit light through a nonthermal process called electroluminescence and are constructed in the standard configuration shown in Fig. 19.4(a). An active irradiation layer is sandwiched between an n-type and a p-type semiconductor to form a diode and is deposited onto a substrate, usually indium tin oxide (ITO), chosen for its high conductivity and transparency. When a current passes through the structure, electrons and holes are generated in the n-type and p-type semiconductor layers separately, and the recombination of electrons and holes in the active layer induces the emission of light. LEDs are typically manufactured to produce monochromatic light in the visible wavelength range, resulting in distinct peaks like the red (R), green (G), and blue (B) colors shown in Fig. 19.4(b). LEDs can also be made to produce white light by coating a blue LED with a special phosphor. The short-wavelength blue LED light can be converted into yellow light upon interaction with the phosphor. When this yellow light mixes with the remainder of the blue light, a white light is produced, which has a broad and continuous intensity distribution as shown by the dashed curve in Fig. 19.4(b). Alternatively, white light can be produced by mixing RGB-colored LEDs in the appropriate


Figure 19.4 (a) The structure of an LED and (b) the emission spectrum of a red (red line), blue (blue line), green (green line), and white (dotted line) LED. Panel (b) is used with permission from Elsevier [3].

proportions. An LED bulb is characterized by two parameters: power and correlated color temperature (CCT). Because LED light only produces a minute amount of heat, its light emission color cannot be characterized by the emission temperature. Instead, a mixed RGB-colored LED is used to simulate the emitted visible light spectrum similar to that of incandescent bulbs. For instance, if an LED has a CCT of T, this implies that the perceived color of light emitted by the LED matches the color of light that would be emitted by a perfect blackbody of temperature T. The relationship between perceived color and blackbody temperature is described by a curve called the Planckian locus in the chromaticity space; see Zwinkels [2]. In the following two smartphone experiments, a smartphone-based spectrometer (see Chapter 16) is used to measure the spectral radiance of several light sources. It is important to note the differences between Sections 19.2 and 19.3. The experiment described in Section 19.2 uses Wien’s law (Eq. 19.2) to estimate the temperature of the incandescent lightbulb filament, whereas the one described in Section 19.3 estimates the CCT of white LED lightbulbs by using a fit of Planck’s law (Eq. 19.1) to the data and extracting the temperature as a fitting parameter.

19.2 Smartphone Experiment I: Spectral Radiance of an Incandescent Lightbulb (Tyler Christensen and Ryan Matuszak, 2017)

19.2.1 General strategy
The smartphone is used as a detector with a custom smartphone-fitted spectrometer so that one can view and capture images of the light source spectra.


19.2.2 Materials
1. A smartphone
2. A 3D-printed spectrometer shaft
3. A phone case
4. A diffraction grating
5. Superglue
6. An incandescent bulb
7. A compact fluorescent lamp (CFL) for calibration

19.2.3 Experimental setup
A spectrometer shaft is designed as shown in Fig. 19.5(a) to host a rectangular diffraction grating. The left face of the shaft, with a narrow entrance slit, is slanted upward at an angle, and the right face with a bigger opening is sloped downward. The entire shaft is 3D printed using a black polylactic acid (PLA) filament as shown in Fig. 19.5(b), and the left face is superglued directly onto the top half of a smartphone case so that the camera is facing the shaft. A wavelength calibration is performed by reflecting the light from the CFL onto a piece of white paper and into the spectrometer to produce a full spectrum without saturating the smartphone's CMOS sensors. This spectrum is scaled to a known CFL reference spectrum according to the website https://spectralworkbench.org/. In this experiment, light from a 50-W incandescent lightbulb is directed into the spectrometer, producing a distinct spectrum that peaks in the infrared range. An image of the spectrum is captured with a smartphone camera.

19.2.4 Experimental results
A normalized intensity distribution (by its maximum intensity) of the 50-W incandescent lightbulb is obtained and is plotted against the calibrated wavelength shown in Fig. 19.6. The peak wavelength is determined to be

Figure 19.5 One-piece smartphone spectrometer design. (a) A transparent view of the spectrometer shaft with a diffraction grating wedged inside. (b) The spectrometer shaft attached to the smartphone.


Figure 19.6 The normalized radiance spectrum of the 50-W incandescent bulb. The peak wavelength determined by the experimental data is shown by the green dashed line at λpeak = 898 nm. Normalized Eq. 19.1 is plotted with T = 2850 K (red curve) and T = 3230 K (blue curve) for reference.

λpeak = 898 nm. According to Eq. 19.2, the temperature of the filament can be estimated to be 3230 K. Compared with the incandescent bulb's advertised color temperature of 2850 K, there is a 12% difference. This error may be attributed to the lack of spectral intensity calibration of the CMOS sensors in the smartphone because each CMOS sensor also has a spectral response at different wavelengths.
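The temperature estimate above is a one-line application of Wien's displacement law (Eq. 19.2), sketched below with the peak wavelength reported in this section.

```python
# Wien's displacement law: T = b / lambda_peak.
b = 2.898e-3                  # Wien's displacement constant (m*K)
lambda_peak = 898e-9          # measured peak wavelength (m)
T = b / lambda_peak
print(f"T ~ {T:.0f} K")       # ~3230 K, to be compared with the advertised 2850 K
```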

19.3 Smartphone Experiment II: Spectral Radiance of White LED Lightbulbs (Troy Crawford and Rachel Taylor, 2018)

19.3.1 General strategy
The smartphone functions as described in Section 19.2.

19.3.2 Materials
1. A smartphone
2. A 3D-printed phone case spectrometer
3. A diffraction grating
4. Duct tape
5. Three white LED lightbulbs with CCTs of 2700, 3000, and 5000 K
6. A CFL for calibration

19.3.3 Experimental setup
The spectrometer is constructed using two 3D-printed parts, as shown in Fig. 19.7. The first piece is a phone case with a rectangular cutout in the


Figure 19.7 Two-piece smartphone spectrometer design. The inset figures are head-on views of the entrance slit, the square piece, and the rectangular cutout.

corner that serves as a camera opening for the smartphone. The second piece is the spectrometer shaft, which is a cylindrically shaped barrel 23.7 mm in diameter that is slanted at 45 deg with respect to a square end piece. The square end of the shaft features an octagonal opening and a narrow entrance slit on the opposite end. A diffraction grating is placed over the rectangular cutout, and the square end of the spectrometer shaft is taped over the grating onto the phone case. The wavelength calibration procedure for this experiment is the same as that described in Section 19.2. Lights from the 2700, 3000, and 5000 K LED lightbulbs are each shined into the spectrometer, producing distinct spectra. Images of the spectra are captured by the smartphone camera for each of the LED lightbulbs. Five trials are performed for each LED lightbulb.

19.3.4 Experimental results
A normalized intensity distribution is generated for each of the calibrated spectra of the three LED lightbulbs. These data are fitted with Planck's law (Eq. 19.1) and plotted in Fig. 19.8. It is important to note that the data should be normalized so that they are comparable to the fit model. The temperatures estimated from the spectral fit for each of the LED lightbulbs are compared with the advertised CCT in Table 19.1. All the estimated results are larger than the advertised CCT, which shows a systematic error due to the noncalibrated CMOS spectral response in the smartphone.
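One way to implement the Planck-law fit described above is with a standard least-squares routine; the sketch below fits a normalized Eq. 19.1 to a spectrum with the temperature as the only free parameter. The wavelength grid and intensity data here are synthetic placeholders standing in for the calibrated smartphone spectrum, and this is an assumed workflow rather than the authors' exact analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_normalized(lam, T):
    """Planck's law (Eq. 19.1), normalized to its maximum over the given wavelengths."""
    I = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))
    return I / I.max()

# Placeholder data: a synthetic normalized spectrum over the visible range.
lam = np.linspace(400e-9, 700e-9, 50)
intensity = planck_normalized(lam, 3400) + np.random.normal(0, 0.01, lam.size)

(T_fit,), cov = curve_fit(planck_normalized, lam, intensity, p0=[3000])
print(f"estimated CCT ~ {T_fit:.0f} +/- {np.sqrt(cov[0, 0]):.0f} K")
```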


Figure 19.8 Normalized spectral radiance of the (a) 2700, (b) 3000, and (c) 5000 K white LED lightbulbs. The red curves are the fitting results based on Planck's law (Eq. 19.1).

Table 19.1 Comparison of advertised and estimated CCTs for each white LED lightbulb.

Advertised CCT (K)   Estimated CCT (K)   Percent difference
2700                 3373 ± 2            22%
3000                 3940 ± 10           27%
5000                 5600 ± 40           11%

References [1] D. F. de Souza, P. P. F. da Silva, L. F. A. Fontenele, G. D. Barbosa, and M. de Oliveira Jesus, “Efficiency, quality, and environmental impacts: a comparative study of residential artificial lighting,” Energy Rep. 5, 409–424 (2019). [2] J. C. Zwinkels, “Blackbody and blackbody radiation,” Chap. in Encyclopedia of Color Science and Technology, R. Luo, Ed., Springer, Berlin (2015). [3] M. A. J. Bapary, M. N. Amin, Y. Takeuchi, and A. Takemura, “The stimulatory effects of long wavelengths of light on the ovarian development in the tropical damselfish, Chrysiptera cyanea,” Aquaculture 314(1–4), 188–192 (2011).

Chapter 20

Blackbody Radiation of the Sun

20.1 Introduction
The Sun is a yellow dwarf star whose mass accounts for roughly 99.86% of the mass in our solar system [1]. It is composed primarily of hydrogen (H) and helium (He) atoms held together by enormous amounts of inward gravitational pressure. To prevent itself from collapsing under its own gravity, the fast-moving atoms in the Sun's core undergo nuclear fusion, which generates an outward radiative pressure that balances the gravitational pressure [2]. Inside the Sun's core, temperatures of roughly 14 million K strip the electrons off the hydrogen and helium atoms to form plasma, which is a state of matter consisting of positively charged nuclei and free electrons [3, 4]. The high temperatures and pressure cause two positively charged hydrogen ions to approach within $10^{-15}$ m of each other, allowing the strong nuclear force to overcome the repulsive electrostatic force between the hydrogen nuclei. The Sun's nuclear fusion process is called the proton–proton cycle, depicted in Fig. 20.1, which allows the Sun and other stars to release energy. In this process, four hydrogen nuclei fuse to produce one helium-4 nucleus [5]. The proton–proton cycle can be summarized in the following reaction equation [6]:

$$4\,{}^{1}_{1}\mathrm{H} \rightarrow {}^{4}_{2}\mathrm{He} + 2\,{}^{0}_{1}e^{+} + 2\nu. \qquad (20.1)$$

Because the mass of the resulting helium-4 nucleus is roughly 0.7% less than that of the four hydrogen nuclei used to produce it, energy is released from the loss of mass according to Einstein's famous mass–energy equivalence. A single proton–proton cycle releases roughly 26 MeV of energy [5, 7]. Recall from Chapter 19 that a blackbody is a perfect absorber, meaning that it does not reflect or transmit light. Instead, it emits thermal radiation, which depends only on the blackbody's own temperature rather than on incident electromagnetic radiation. Similarly, the Sun emits thermal radiation powered by nuclear fusion, which is an entirely internal process.
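As a quick numerical check of the quoted energy release (this short calculation is not part of the original text), the sketch below evaluates $\Delta m c^2$ for a 0.7% loss of the mass of four protons:

    # Rough check of the energy released per proton-proton cycle (~0.7% of 4 proton masses).
    m_p = 1.6726e-27                      # proton mass (kg)
    c = 2.998e8                           # speed of light (m/s)
    delta_m = 0.007 * 4 * m_p             # mass converted to energy (kg)
    E_MeV = delta_m * c**2 / 1.602e-13    # convert J -> MeV
    print(f"{E_MeV:.0f} MeV")             # about 26 MeV, consistent with the value quoted above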


Figure 20.1 The proton–proton cycle in the Sun’s core. (Figure is redrawn with permission from Nave [8].)

Thus, the Sun is often treated as an example of a near-perfect blackbody occurring in nature, which allows one to apply the principles of blackbody radiation discussed in Chapter 19 to analyze the solar radiation spectrum. The emission spectrum of the Sun's thermal radiation agrees well with that of a blackbody radiator at T = 5800 K, which is roughly the temperature of the Sun's photosphere [9]. A small percentage of solar radiation is incident on Earth. Because Earth is surrounded by a thick atmosphere, composed of roughly 78% nitrogen, 21% oxygen, and less than 1% argon and other trace gases, the solar radiation received at Earth's surface is not the same as when it leaves the Sun. Roughly 30% of the incoming solar radiation is reflected by clouds, air molecules, and Earth's surface, while about 19% is absorbed by clouds and atmospheric gases. Earth's surface absorbs the remaining 51%, which consists of directly absorbed radiation and downward scattered and reflected radiation. At the same time, Earth emits weak infrared radiation into space to maintain radiative equilibrium, which sustains stable global temperatures. According to Fig. 20.2, the spectral radiance also depends on elevation: the yellow portion of Fig. 20.2 shows the solar spectrum at the top of Earth's atmosphere, while absorption by atmospheric molecules such as H2O produces the dips seen in the sea-level spectrum (the red portion of Fig. 20.2).


Figure 20.2 Blackbody radiation spectrum of the Sun with inset absorption bands for atmospheric molecules at the outer atmosphere (yellow) and at sea level (red). (Figure is reprinted from [10] under CC BY 4.0.)

20.2 Smartphone Experiment (Patrick Mullen and Connor Woods, 2015)

20.2.1 General strategy
A smartphone-based spectrometer is used to capture the spectrum of the Sun. This experiment should be performed on a clear, sunny day for the best results.

20.2.2 Materials
1. A smartphone-based spectrometer with a diffraction grating (1000 lines/mm)
2. A fluorescent lightbulb (for wavelength calibration)
3. A white semitranslucent plastic piece (for wavelength calibration)

20.2.3 Experimental setup
Any of the smartphone-based spectrometers described in previous chapters can be used for this experiment. After wavelength calibration, the spectrometer is pointed at the Sun and tilted until a spectrum is visible on the phone screen. This data-collection process is repeated once an hour from 10 AM to 4 PM for a total of 7 photos.

20.2.4 Experimental results
Figure 20.3 shows one of the seven normalized intensity spectra of the Sun acquired using the smartphone spectrometer. The peak wavelength is located at $\lambda_{\mathrm{peak}} = 493 \pm 2$ nm, and using Wien's displacement law (Chapter 19, Eq. 19.2), the temperature of the Sun is calculated to be $T = 5880 \pm 20$ K, which is very close to the temperature documented in Fig. 20.2.


Figure 20.3 Collected sunlight spectrum fitted with Planck's law: normalized intensity $I(\lambda, T)$ versus wavelength over 400–900 nm, with the blackbody curve at T = 5880 K.

The red curve in Fig. 20.3 shows the blackbody radiation spectrum at T = 5880 K based on Eq. 19.1. The temperatures extracted from all seven photos are averaged together to obtain a final estimate of the Sun's surface temperature. The result, T = 6000 ± 300 K, differs by only 3.8% from the actual surface temperature of the Sun. This error may be attributed to the lack of spectral intensity calibration of the CMOS sensor in the smartphone. In addition, the obtained spectrum has interesting implications for atmospheric effects on sunlight. The characteristic dips in the experimental data at roughly 630 and 690 nm (the green dashed lines in Fig. 20.3) can be attributed to vibrational absorption by atmospheric O2, by comparison with Fig. 20.2.
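The temperature estimate itself is a one-line application of Wien's displacement law. A minimal Python sketch is given below; it uses the peak wavelength quoted above, and the Wien constant is a standard value rather than one taken from the text.

    # Estimate the Sun's surface temperature from the measured peak wavelength (Wien's law).
    b_wien = 2.898e-3            # Wien's displacement constant (m K)
    lam_peak = 493e-9            # measured peak wavelength (m), from the fitted sunlight spectrum
    T_sun = b_wien / lam_peak
    print(f"T = {T_sun:.0f} K")  # about 5880 K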

References
[1] M. M. Woolfson, The Origin and Evolution of the Solar System, CRC Press, Boca Raton, FL (2000).
[2] National Aeronautics and Space Administration, This Is NASA, R. E. Gibson, Ed., Public Affairs Division, NASA Headquarters, Washington, D.C. (1979).
[3] K. J. H. Phillips, Guide to the Sun, Cambridge University Press, Cambridge (1995).
[4] S. F. Green and M. H. Jones, Eds., An Introduction to the Sun and Stars, Cambridge University Press, Cambridge (2004).
[5] G. A. Cook, Ed., Argon, Helium, and the Rare Gases: The Elements of the Helium Group, Volume 1, History, Occurrence and Properties, Interscience Publishers, Inc., New York (1961).
[6] Borexino Collaboration, “Neutrinos from the primary proton–proton fusion process in the Sun,” Nature 512(7515), 383–386 (2014).
[7] G. Srinivasan, Life and Death of the Stars, Springer, Berlin (2014).
[8] C. R. Nave, “Proton–proton cycle,” Hyperphysics, http://hyperphysics.phy-astr.gsu.edu/hbase/Astro/procyc.html, accessed May 24, 2022 (2022).
[9] A. E. Roy and D. Clarke, Astronomy, Structure of the Universe, A. Hilger, Bristol, England (1977).
[10] Y. Tanaka, K. Matsuo, and S. Yuzuriha, “Long-lasting muscle thinning induced by infrared irradiation specialized with wavelengths and contact cooling: a preliminary report,” Eplasty 10, e40 (2010).

Chapter 21

Example Course Instructions for Smartphone-based Optical Labs

Below are example lab instructions given in the Introduction to Modern Optics class at the University of Georgia in the fall of 2020. At the beginning of the class, the students were instructed to learn the basics of the Python™ programming language by working through a Python tutorial on error analysis and data fitting. The labs followed this training. Detailed lab instructions are given below.

21.1 General Lab Instructions

21.1.1 Important notices for students
1. DO NOT shine the laser directly into anyone's eyes.
2. DO NOT touch the surfaces of lab materials with bare fingers where light will interact.
3. If you observe dust or fingerprints on the surfaces of materials, use a lens cloth to clean them. DO NOT attempt to blow on them with your mouth.

21.1.2 Lab materials
The following packed lab materials (all purchased from Amazon) are provided to the students (see Fig. 21.1):
1. Two AA batteries ($18 for a 24-count pack)
2. A battery case (3 V) with switch ($10 for 8 cases)
3. A laser diode (3 V, 650 nm, 5 mW) ($6.50 for 10 pieces)
4. Two polarizer sheets ($13 for an A5 size, cut into 1 cm × 3 cm pieces with the polarization direction along the short side)
5. One 1000 line/mm grating ($12 for 25 pieces)


Figure 21.1 The full set of lab materials used.

6. A grating of unknown slit spacing ($11 for 1 roll of a 12″ × 6″ 500 lines/mm grating sheet, cut into 1 cm × 3 cm pieces)
7. Two cuvettes ($20 for a pack of 100)
8. Three glass slides ($14 for a pack of 100 1″ × 3″ glass slides)
9. A sheet of tracing paper ($5 for a booklet of 40 sheets)

The total cost of one complete lab package is about $5. Students are also provided with a PDF file of a printable protractor (https://free-printablepaper.com/printable-protractors/) and ruler (https://printable-ruler.net/). The laser diode connection instructions are provided in Appendix IV.

21.1.3 Lab instructions
All the labs can be performed either in a face-to-face format or an online format, e.g., via Zoom or another online video meeting app. Students can also make an appointment with the instructor for a face-to-face or online diagnostic of the labs. Detailed lab instructions are described in the following sections:

• Lab 1 Data analysis lab: A tutorial lab for experimental data analysis (Chapter 2).
• Lab 2 Polarization: Two different labs on the polarization properties of light. The first examines Malus's law for polarization, and the second measures the optical rotation of sugar solutions.
• Lab 3 Reflection: Two different labs on the Fresnel equations. The first examines the Fresnel equations, and the second measures the Brewster angle of a glass slide.
• Lab 4 Interference: Two different labs on the interference effect. The first lab investigates thin film interference, and the second lab examines thin wedge interference.
• Lab 5 Diffraction: Three different labs on the diffraction effect. The first lab uses a known diffraction grating to determine the wavelength of the diode laser, the second determines the grating density of an unknown grating, and the third examines diffraction from a hair.

Students can find lab instructions in the corresponding book chapters and previous lab presentation videos on the “UGA Smartphone Intro Physics Lab” and “UGA Modern Optics: Smartphone Projects” YouTube channels found here: https://www.youtube.com/channel/UCDNH_mEXvy-Rp98ri96EuLw. Note that the detailed lab instructions, old lab reports on YouTube, and reference papers listed only provide some ideas for students to set up their own lab and show students how to collect and analyze data. Students should read all these files before they begin constructing their own lab setups. The specific labs that students design will be based on materials that they have on hand.

21.2 Polarization Labs

21.2.1 Required lab materials
1. A smartphone
2. A laser diode
3. A battery box
4. Two AA batteries
5. Two polarizer sheets
6. A cuvette
7. A printed protractor sheet
8. Other household items: tape, cardboard, books, a scale, measuring cups, a ruler, etc.

21.2.2 Lab instructions
Students need to set up their own lab and must complete the following tasks:

1. Demonstrate Malus's law (see Chapter 6):
   a. Set up the experiment for demonstrating Malus's law; the angle $\alpha$ between the two polarizers needs to be determined and continuously changed.
   b. Measure the transmitted intensity $I_p(\alpha)$ versus the angle $\alpha$.
   c. Fit the data with $I_p(\alpha)/I_0 = \cos^2(\alpha - \alpha_0) + B$, where $I_0$ is the maximum transmitted intensity measured and $\alpha_0$ and $B$ are two fitting parameters (a short fitting sketch is given after this list).
2. Determine the specific rotation coefficient of a sugar solution with a fixed concentration (see Chapter 9):
   a. Set up an experiment to demonstrate optical activity.
   b. Make a sugar solution with a fixed concentration (for example, a mass concentration of 6–10%) in a cuvette.
   c. Align the two polarizers to the same polarization direction.
   d. Insert the cuvette between the two polarizers and measure the transmitted intensity as a function of the polarization angle $\alpha$.
   e. Determine the optical rotation angle $\phi_s$ for the fixed sugar concentration $c$ and find the specific rotation coefficient. The dimensions of the cuvette are given in the product specifications.
3. Write and submit a lab report manuscript based on these two experiments.
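A minimal Python sketch of the fit in task 1(c) is shown below. The angle and intensity arrays are placeholders for values extracted from the smartphone images (e.g., with ImageJ); only the fit function quoted above is taken from the instructions.

    # Fit normalized transmission data with Malus's law: I_p/I_0 = cos^2(alpha - alpha_0) + B.
    import numpy as np
    from scipy.optimize import curve_fit

    def malus(alpha_deg, alpha0_deg, B):
        return np.cos(np.radians(alpha_deg - alpha0_deg))**2 + B

    alpha_deg = np.arange(0, 181, 10)        # placeholder polarizer angles (deg)
    I_norm = malus(alpha_deg, 5.0, 0.03)     # placeholder normalized intensities I_p(alpha)/I_0

    (alpha0_fit, B_fit), _ = curve_fit(malus, alpha_deg, I_norm, p0=[0.0, 0.0])
    print(f"alpha_0 = {alpha0_fit:.1f} deg, B = {B_fit:.3f}")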

21.2.3 Additional labs
The following labs can also be carried out using the constructed lab setup, and students can add the results to the lab report:
1. Test the polarization properties of sunglasses.
2. Measure the optical rotation angle versus the concentration of the sugar solution.
3. Determine whether vegetable or other oils and a salt solution have optical rotation properties.

21.3 Reflection Labs

21.3.1 Required lab materials
1. A smartphone
2. A laser diode
3. A battery box
4. Two AA batteries
5. Two polarizer sheets
6. A glass slide
7. A printed protractor sheet
8. Other household items: tape, cardboard, books, a ruler, etc.

21.3.2 Lab instructions
Students should set up their own lab and complete the following tasks:

1. Examine the Fresnel equations (see Chapter 7):
   a. Set up an experiment to examine the Fresnel equations using an s-polarized incident laser beam. The polarization direction of the polarizer is along the short edge (depending on the cut of the polarizer sheet; this must be confirmed by the instructor).
   b. Measure both the reflected and transmitted intensity versus the incident angle $\theta_1$. Students are encouraged to repeat each $\theta_1$-related intensity measurement at least five times.
   c. Use the Fresnel equation for reflection (Eq. 7.1 in Chapter 7) to fit the reflection data and extract the refractive index $n_2$ of glass.
   d. Derive an equation based on the Fresnel equation for transmission (hint: there are two interfaces for transmission, the air–glass and glass–air interfaces), use it to fit the obtained transmission data, and obtain $n_2$.
2. Determine the Brewster angle at the air–glass interface (see Chapter 8):
   a. Calculate the expected Brewster angle from the refractive index obtained in the first lab (see the short sketch after this list).
   b. Change the incident polarized light to p-polarization.
   c. Measure the reflection at the air–glass interface with the p-polarized light in the neighborhood of the estimated Brewster angle (±5 deg) with a fine adjustment of the incident angle (say, an increment of 1 or 0.5 deg).
   d. Determine the Brewster angle with repeated measurements.
3. Write and submit a lab report manuscript based on these two experiments.
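For task 2(a), the expected Brewster angle follows directly from the fitted refractive index. A minimal sketch is shown below; the value of $n_2$ is a placeholder, not a measured result.

    # Expected Brewster angle at the air-glass interface: theta_B = arctan(n2 / n1).
    import numpy as np

    n1, n2 = 1.00, 1.50                    # air and fitted glass refractive index (placeholder)
    theta_B = np.degrees(np.arctan(n2 / n1))
    print(f"theta_B = {theta_B:.1f} deg")  # about 56.3 deg for n2 = 1.5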

21.3.3 Additional labs
The following labs can also be carried out using the constructed lab setup, and students can add the results to their lab report:
1. Test the Fresnel equations for other materials, such as water, oil, acrylic sheets, or other plastic sheets.
2. Determine the refractive index of these materials using the Brewster angle.

21.4 Interference Labs

21.4.1 Required lab materials
Unfortunately, the laser beam from the laser diode is not coherent enough for the planned labs, so students use a He–Ne laser in the lab.
1. A smartphone
2. A He–Ne laser
3. Three glass slides
4. Multiple small strips cut from printer paper
5. Two lenses
6. Various optical holders
7. A sheet of tracing paper

21.4.2 Lab instructions
Students should set up their own lab and complete the following tasks:

1. Measure the thickness of a piece of printer paper using the thin film interference setup (see Chapter 10):
   a. Set up the experiment with a lens to converge the laser beam onto the thin film and adjust the angle of the air gap to see whether an interference pattern can be observed. Students may need to add multiple pieces of paper to create the air film.
   b. If the interference fringes cannot be observed, students should construct a two-lens system to expand the laser beam and then converge it onto the thin film.
   c. Students may need to measure the adjacent fringe separation as a function of the number of paper sheets to accurately determine the paper thickness.
2. Determine the thickness of the paper using wedge interference (see Chapter 11):
   a. Wedge interference requires a parallel light beam. Students can first try to use the laser beam directly to see whether it forms a visible interference pattern.
   b. If a direct laser beam does not work, students should construct a beam expander using two lenses. See Chapter 3 or https://www.edmundoptics.com/ViewDocument/best-of-eo-app-notes-beamexpanders.pdf to learn how to construct a beam expander.
   c. Similar to the thin film interference setup, a series of wedges with one or more pieces of paper may need to be constructed, and the fringe separation versus the number of paper sheets may need to be plotted to accurately obtain the thickness of the paper (see the sketch after this list).
3. Write and submit a lab report manuscript based on these two experiments.
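As a rough guide for task 2(c), the sketch below estimates the paper thickness from a measured fringe separation, assuming the standard air-wedge relation $t = \lambda L / (2\Delta x)$, where $\lambda$ is the laser wavelength, $L$ is the wedge length from the contact line to the paper, and $\Delta x$ is the adjacent bright-fringe separation. All numbers are placeholders, and the exact relation used in the course should be taken from Chapter 11.

    # Estimate paper thickness from wedge-interference fringe spacing (standard air-wedge relation).
    lam = 632.8e-9     # He-Ne laser wavelength (m)
    L = 5.0e-2         # wedge length from the contact line to the paper (m), placeholder
    dx = 0.2e-3        # measured adjacent fringe separation (m), placeholder
    t = lam * L / (2 * dx)
    print(f"t = {t * 1e6:.0f} um")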

21.4.3 Additional labs
The following labs can also be carried out using the constructed lab setup, and students can add the results to their lab report:
1. Measure the thickness of a hair.
2. Determine the thickness of plastic bags.
3. Test double- or multiple-slit interference using the slit sets.


21.5 Diffraction Labs

21.5.1 Required lab materials
1. A smartphone
2. A laser diode
3. A battery box
4. Two AA batteries
5. A known grating (1000 lines/mm)
6. An unknown grating
7. A printed paper ruler
8. A sheet of tracing paper
9. Other household items: tape, cardboard, books, a ruler, a hair, etc.

21.5.2 Lab instructions
Students should set up their own lab and complete the following tasks:

1. Determine the wavelength of the laser beam from the laser diode using a known diffraction grating (see Chapter 12):
   a. Students will use the laser diode, the known grating, the paper ruler, and the tracing paper to set up a diffraction experiment, obtain a diffraction pattern, and measure the grating-to-screen distance.
   b. The adjacent bright-fringe distance will be measured, and the diffraction grating equation can be used to determine the wavelength of the laser diode (see the sketch after this list).
2. Determine the grating density of an unknown grating (see Chapter 12):
   a. Students will replace the grating in the previous lab with the unknown grating, examine the diffraction pattern on the screen, and use a similar diffraction grating equation to obtain the line density of the grating.
3. Determine the thickness of a hair via diffraction (see Chapter 12):
   a. Students will fix a single hair across a hole in a piece of cardboard, replace the grating from the previous lab with it, and align the laser beam so that it shines through the hair.
   b. Students will observe the diffraction pattern on the screen, measure the necessary parameters, and use the single-slit diffraction equations to estimate the diameter of the hair.
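A minimal sketch of the wavelength calculation in task 1 is given below, using the first-order grating equation $d\sin\theta = m\lambda$ with $\tan\theta = x/L$. The measured distances are placeholders, not example results from the course.

    # Determine the laser wavelength from the first-order diffraction spot position.
    import numpy as np

    d = 1e-3 / 1000            # grating period (m) for a 1000 lines/mm grating
    L = 0.200                  # grating-to-screen distance (m), placeholder measurement
    x1 = 0.171                 # central-to-first-order spot distance (m), placeholder measurement
    theta1 = np.arctan(x1 / L)
    lam = d * np.sin(theta1)   # first order (m = 1)
    print(f"lambda = {lam * 1e9:.0f} nm")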

21.6 Summary of Lab Results

The lab instructions in Sections 21.2–21.5 are simple and concise, leaving a lot of room for students to be creative and thoughtful. These instructions were given to 12 students in the Introduction to Modern Optics class at the University of Georgia in the fall of 2020. Although the experimental setups for the same labs are very different, the results are quite consistent.


Figure 21.2 The experimental setups for polarization labs from nine different students.

Figure 21.2 shows the experimental designs for the polarization labs taken from nine students' reports. All the designs are very different. The students used cardboard, tape, matchboxes, paper clips, etc., to build their setups. The obtained normalized transmission intensity $I_p(\alpha)/I_0$ versus polarization angle $\alpha$ relationships from these nine sets of experiments are all plotted in Fig. 21.3. With the exception of one data set (open red circles in Fig. 21.3), most data from the different experiments are close to each other. The average $I_p(\alpha)/I_0$ from the nine sets of experimental data is well fitted by the analytical function $I_p(\alpha)/I_0 = A\cos^2\alpha + B$.

Figure 21.3 The plots of the normalized transmission intensity $I_p(\alpha)/I_0$ versus the polarizer rotation angle $\alpha$ (0–180 deg) from the nine experimental setups shown in Fig. 21.2. The solid curve shows the fit of the averaged $I_p(\alpha)/I_0$ (filled black circles) with $I_p(\alpha)/I_0 = A\cos^2\alpha + B$, giving $0.97\cos^2\alpha + 0.03$.

Figure 21.4 The experimental setups for reflection labs from 11 different students.

Figure 21.4 shows the experimental designs for the reflection labs from 11 students. The angular-dependent reflection and transmission measurements are mainly achieved with a goniometer, and students designed different goniometers using LEGO®, protractors, printed protractors, or even tablets. The measured reflectance and transmittance versus the incident angle $\theta_1$ for s-polarization are plotted in Fig. 21.5. These data are more scattered than those shown in Fig. 21.3 because the angular precision of the goniometer, the quality of the reflectance or transmittance data analysis, and the care with which the experiment was performed all play important roles in determining the relationship. However, the average of all the data sets shown in Figs. 21.5(a) and (b) (filled black circles) agrees quite well with the Fresnel equations (solid curves). Table 21.1 also summarizes some other parameters obtained from various experiments by different students. Some experiments show large variations in measurements. For example, the specific optical rotation of the sugar solution varies from 21 to 104 deg·mL/(g·dm), the refractive index of glass varies from 1.18 to 1.87, and the Brewster angle varies from 30 to 60 deg. These large variations and deviations could be due to carelessness in performing the measurements (for example, the specific optical rotation) and to the experimental design (for example, the goniometer design and measurement precision, the intensity measurements, etc.).


Figure 21.5 The (a) reflectance and (b) transmittance of the glass slide under s-polarized incidence measured by 11 different students. The filled black squares show the average data from these measurements, and the solid black curves show the fits based on the Fresnel equations.

Table 21.1 Other parameters measured by the students in labs.

Student | Specific optical rotation [deg·mL/(g·dm)] | Refractive index of glass n2 | Brewster's angle at air–glass interface θB (deg) | Wavelength of the laser diode λdiode (nm) | Grating density (lines/inch) | Hair thickness (µm)
1 | 104.4 | 1.67 | 58 | 637 | 496 | 36.1
2 | 65 | 1.72 | 60 | 625 | 504 | 82.1
3 | 90.9 | 1.87 | 30 | 632 | 550 | 82
4 | 40 | 1.62 | 45 | 620 | — | 67
5 | 69 | 1.18 | 49.7 | 634 | 489 | 70.4
6 | 21.2 | 1.61 | 58 | 643 | 510 | 82
7 | 25.7 | 1.66 | 59 | 611 | 482 | 70.4
8 | 93.3 | 1.59 | 58.3 | 621 | — | 90
9 | — | 1.48 | 57.5 | 645 | 635 | 87
10 | — | 1.50 | 56 | 672 | — | 65
11 | — | 1.58 | 58.7 | 646 | 399 | 81
Ave. | 61 | 1.59 | 53 | 635 | 492 | 73
Err. | 12 | 0.05 | 3 | 16 | 18 | 5

On the other hand, the measured values for the wavelength of the diode, the grating density, and the thickness of a hair are quite consistent with the documented values, i.e., 650 nm for the diode wavelength, 500 lines/inch for the grating, and ∼100 µm for a hair. All of these results demonstrate that (1) with minimal instruction and an open-design policy, students can draw on their creativity to design different lab setups, and (2) most experimental results from the various experimental designs are consistent with each other.

Appendix I

Materials Used in Labs

Listed in the table below are some materials that are commonly used in the experiments described throughout this book. Suggested search terms for purchasing these materials are provided.

Material | Chapters | Description | Purchase search terms
Laser pointers | 3, 4, 6–15 | Used as a light source in studies involving diffraction, reflection, refraction, and other optical phenomena. | Laser pointer
Light sources | 15, 17–19 | Used to illuminate samples and for calibration. | LED lightbulb, CFL lightbulb, Red laser diode + battery box
Optical lenses | 3, 4, 5, 10, 11 | Used to construct beam expanders, collimators, and telescopes. | Convex lens (focal lengths 50–100 mm), Concave lens (focal lengths 50–1000 mm), Lens set (focal lengths 200–500 mm)
Parts for experimental setup construction | All chapters | Used to build holders for optical components and smartphones. A 3D printer can also be used to construct parts, if the reader has access to one. | LEGO® Bricks, Cardboard scraps
Measuring instruments | 3–6, 8–14, 17 | Used to measure real distances and angles. | Printable protractor, Printable ruler, Meterstick, Ruler, Tape measure
Prisms | 3, 4, 15, 17 | Used to separate white light into a spectrum, and for studies of reflection and refraction. | Triangular glass prism
Gratings | 13, 15–17, 19, 20 | Used to create interference patterns and separate white light into a spectrum. | Diffraction grating (1000 lines/mm)
Polarizers | 6–9 | Used for linear polarization of light. | Polarizer sheets
Cuvette | 9, 18 | Used to hold solution for absorbance and birefringence studies. | Standard plastic disposable cuvette
Glass slides | 7, 10, 11 | Used as a prototypical sample for experiments involving the determination of refractive indices of glass. | 1″ × 3″ glass slides
Adhesives | 10, 11, 13, 14, 19, 20 | Used to hold components of the experimental setup together. | Regular tape, Double-sided tape, Duct tape, Superglue

Appendix II

Web Links and Smartphone Applications

Listed in the table below are some software programs, applications, and programming languages that are used in the experiments described throughout this book. Links for downloading or purchasing them are provided.

Software/Application/Programming language | Chapter(s) | Description | Link
ImageJ | 1, 5, 7, 9, 11, 12, 14 | An image analysis software implemented in many types of experimental research. In this book, it is often used for determining the pixel-to-distance calibration factor h or the intensity of light. | https://imagej.nih.gov/ij/
Spectral Workbench | 15–20 | A spectral analysis website designed for use with homemade spectrometers. In this book, it is used for wavelength calibration of spectra acquired by smartphones. | https://spectralworkbench.org/
SpectraView | 15–20 | An iPhone-compatible application that can measure the intensity of a spectrum photo from a smartphone spectrometer. | https://apps.apple.com/us/app/spectralviewer-spectraview/id1201249094
Light meter | 6 | A smartphone app that can be used to determine the intensity of light in units of lux. | N/A
Academo wavelength-to-color tool | 15 | A website designed to convert a given wavelength to a color in RGB, hexadecimal, or hue, saturation, lightness (HSL) format. It can be compared to colors of light from a monochromator or a spectrometer. | https://academo.org/demos/wavelength-to-colour-relationship/
Python™ | All chapters | A programming language that is used in many areas of scientific research. Its scipy and matplotlib packages are particularly useful for fitting experimental data and creating plots and graphs. | https://www.python.org/
Other data analysis software (optional) | All chapters | If students have access to a research lab, subscription-based software such as OriginLab or programming languages such as MATLAB or Mathematica may be useful for fitting data and creating plots. | https://www.originlab.com/demodownload.aspx, https://matlab.mathworks.com/

Appendix III

Introduction to ImageJ

ImageJ is a Java™-based image processing and analysis software program developed by the National Institutes of Health [1]. Due to its ability to accommodate a wide variety of image files (.TIFF, .PNG, .GIF, .JPEG, .BMP, .DICOM, and .FITS) as well as its flexible library of built-in analysis functions, it is frequently used throughout this book as a data analysis tool. The software can be downloaded for Linux™, Mac® OS X®, and Windows® operating systems via the website https://imagej.nih.gov/ij/. This section provides a tutorial on several common uses of ImageJ, using ImageJ Version 1.53o (January 2022).

III.1 Starting ImageJ

Upon opening the ImageJ application, a small window appears as shown in Fig. III.1. This window consists of three components: a menu bar, a toolbar, and a status bar.

Figure III.1 ImageJ start window consisting of a menu bar, a toolbar, and a status bar.

III.2 ImageJ Menu

The ImageJ menu contains many of the functions that will be applied to perform image analysis. The following are some common uses of the menu:

• File and edit: To begin, an image must be selected and input by choosing File → Open and selecting the desired image file. Any progress can also be saved in the File menu by choosing File → Save or File → Save As. The Edit menu is often only used to undo actions.
• Image and process: These options are used to display image properties and alter certain parameters such as the brightness, contrast, orientation, and other features of the image. Basic image processing can also be performed, including filtering, despeckling, fast Fourier transforms, and sharpening/smoothing.
• Analyze and plug-ins: These are the most important menu options for obtaining information about the image. Together with the tools in the ImageJ toolbar, the Analyze → Measure menu will be the most useful function for the purposes of the experiments in this book. For more advanced functionality, such as measurement of RGB values, the options on the Plugins menu can be accessed.

III.3 ImageJ Toolbar

The ImageJ toolbar contains selection, measurement, labeling, viewing, and modification tools that are highlighted by the red, blue, green, purple, and orange boxes in Fig. III.1, respectively. When moving the cursor over pixels in the image, the status bar will display the x and y coordinates as well as the pixel value. The following are descriptions of these tools:

• Selection: Regions on an image can be selected for analysis in the shape of a rectangle, oval, polygon, or by freehand using the corresponding tools. When a particular shape is selected, the status bar will display the dimensions (width, height, length, angle, etc.) of the selected region.
• Measurement: Distances (in pixels) and angles can be measured by the line and angle tools. The RGB triplet of a pixel can be determined using the color picker tool. When measured, the status bar will display the dimensions (length, angle, RGB value, etc.) of the measured component.
• Labeling: The points on an image can be marked using the multipoint tool or labeled with text using the text tool. The wand tool is used to make selections by tracing regions of uniform color in the image.
• Viewing: The magnifying glass tool can be used to zoom in on a region of the image, and the scrolling tool allows the user to navigate the image.
• Modification: These tools are used to directly modify the intrinsic properties of the image and are not used in the experiments described in this book.

III.4 Image Analysis Example Using ImageJ

Suppose we are asked to analyze the raw image of Yankee Stadium as shown in Fig. III.2. This section provides a basic example demonstrating the use of ImageJ functions to perform this task.

Figure III.2 A Google Earth™ image of the baseball diamond at Yankee Stadium.

Step 1 (optional): Label the components of the image. It is often helpful to label the relevant components of the image for easier reference. Here, the text tool is used to label the four bases: home plate, first base, second base, and third base, as shown in Fig. III.3. The dirt and grass are also denoted by labels.

Figure III.3 Image with labeled components. The ImageJ start window is shown in the upper left.


Figure III.4 Image with labeled components and an angle measurement. The ImageJ start window is shown in the upper left, and the Results dialog box is inset in the upper right.

Step 2: Measure the angle ∠ (home plate, first base, and second base). To determine the angle ∠ formed by home plate, first base, and second base, the angle tool is used. The cursor is hovered over home plate and clicked once, moved to first base and clicked once, and moved to second base and clicked once. Two line segments forming an angle should appear on the image, as shown by the yellow lines in Fig. III.4. To measure the angle, navigate to Analyze → Measure in the menu or use the keyboard shortcut Ctrl+M. A dialog box called Results should appear with a table with the columns Area, Mean, Min, Max, Angle, and Length. The angle of 90.134 deg can be read off from the Angle column. What is the significance of this result? It is known that on baseball fields the four bases form a square, with each corner measuring 90 deg. The measurement of 90.134 deg is very close to the real value, indicating that the image was taken with the baseball field perpendicular to the camera's principal axis. Thus, distortion of the image is minimized.

Step 3: Find the distance (in pixels) between any two bases. To determine the pixel distance between any two bases, the line tool is used. In this example, the cursor is hovered over home plate and clicked and dragged to first base, where it is released. The white line between these two fixed endpoints is shown in Fig. III.5. To measure the pixel distance, navigate to Analyze → Measure in the menu or use the keyboard shortcut Ctrl+M. A second measurement should be added to the Results dialog box.


Figure III.5 Image with labeled components and line measurements. The white line represents the measurement of the pixel length between home plate and first base. The yellow line represents the measurement of the pixel width of the dirt area.

The pixel distance of $l_p = 273.002$ pixels between home plate and first base can be read off from the Length column. As discussed in Chapter 1, it is of interest to know the scaling factor h from pixel distance to real distance. It is known that the real distance between any two adjacent bases on a baseball field is $l_a = 90$ ft. Then, the scaling factor for Fig. III.5 is $h = l_a / l_p = 90\ \mathrm{ft} / 273.002\ \mathrm{pixels} \approx 0.330$ ft/pixel. Using h, the real width of the dirt area can be estimated directly from the image. First, the yellow line in Fig. III.5 is drawn across the width of the dirt to obtain a pixel width of 153.457 pixels. Multiplying by h gives $153.457\ \mathrm{pixels} \times (90\ \mathrm{ft}/273.002\ \mathrm{pixels}) \approx 50.6$ ft. According to the result from Step 2, image distortion has little effect on this calculated width.
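The same arithmetic can be written as a short Python sketch; the pixel values below are the ones read from the Results table above.

    # Pixel-to-distance calibration and the estimated dirt width.
    l_a = 90.0            # real distance between adjacent bases (ft)
    l_p = 273.002         # measured pixel distance between home plate and first base
    h = l_a / l_p         # scaling factor (ft per pixel)
    dirt_px = 153.457     # measured pixel width of the dirt area
    print(f"h = {h:.4f} ft/pixel, dirt width = {dirt_px * h:.1f} ft")   # about 50.6 ft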


Figure III.6 Image with labeled components and a rectangular area over which the RGB measurements are taken.

Step 4: Find the average RGB values for a selected region in the grass. The R, G, and B values can be found for a selected portion of the image by using any of the selection tools. Here, the rectangle tool is used to select a portion of the grass in the image, as shown in Fig. III.6 (the yellow rectangle). By using Analyze → Measure in the menu or the keyboard shortcut Ctrl+M, a third measurement is added to the Results dialog box, and the area of the selected rectangular region is found to be 3840 pixels. Next, to measure the RGB values, navigate to Plugins → Analyze → RGB Measure in the menu. Five more measurements should appear in the Results dialog box: three for R, G, and B, an average grayscale value, and a grayscale value calculated using Eq. 1.3 (Chapter 1). The average (R, G, B) values for the selected region are (67.615, 81.292, 98.342). The two average grayscale values in the selected region are $(R + G + B)/3 = 72.433$ and $0.299R + 0.587G + 0.114B = 75.453$, respectively.

Step 5: Save your work. The data in the Results dialog box can be saved by navigating to the File → Save As option in the dialog box. The image itself can be saved by navigating to File → Save in the menu or by using the keyboard shortcut Ctrl+S.

Reference [1] C. A. Schneider, W. S. Rasband, and K. W. Eliceiri, “NIH Image to ImageJ: 25 years of image analysis,” Nat. Methods 9(7), 671–675 (2012).

Appendix IV

Connecting the Laser Diode

1. Leave the battery box empty.
2. Cut the plastic ends of the wires with a razor blade or a knife so that the metal wire ends for the battery box and laser diode can be long enough for easy connection.
3. Connect the red wire of the laser diode to the red wire of the battery box and the blue wire of the laser diode to the black wire of the battery box by twisting the two metal ends together.
4. Wrap each connection with a piece of tape (any conventional tape will suffice, including Scotch® tape, duct tape, painter's tape, etc.).
5. Insert batteries in the battery box (make sure that the switch in the battery box is in the OFF position).
6. Turn the switch in the battery box to the ON position; the laser diode is now on. Below, the laser diode and wires are taped onto a hardcover book.

Yiping Zhao is a Distinguished Research Professor at the Department of Physics and Astronomy at the University of Georgia, where he has taught undergraduate physics courses and conducted nanotechnology-based research since 2002. He obtained his Ph.D. in Physics at Rensselaer Polytechnic Institute in 1999. He is a Fellow of SPIE and the American Vacuum Society (AVS). Professor Zhao’s research interests are nanostructure and thin film fabrication and characterization, plasmonics and metamaterials, chemical and biological sensors, nanophotocatalysts, nanomotors, and nanotechnology for disease treatment. Yoong Sheng Phang is a Ph.D. student and National Science Foundation (NSF) Graduate Research Fellow in the Department of Physics at Harvard University. He received his B.S. degrees in Physics and Mathematics in 2022 at the University of Georgia, where he conducted undergraduate research on magnetic nanomotors and smartphone optics in Professor Zhao’s lab. His Ph.D. research is focused on experimental investigations of quantum materials.

Yiping Zhao and Yoong Sheng Phang


Use of Smartphones in Optical Experimentation shows how smartphone-based optical labs can be designed and realized. The book presents demonstrations of fundamental geometric and physical optical principles, including the law of reflection, the law of refraction, image formation equations, dispersion, Beer’s law, polarization, Fresnel’s equations, optical rotation, diffraction, interference, and blackbody radiation. Many practical applications—how to design a monochromator and a spectrometer, use the Gaussian beam of a laser, measure the colors of LED lights, and estimate the temperature of an incandescent lamp or the Sun—are also included. The experimental designs provided in this book represent only a hint of the power of leveraging the technological capability of smartphones and other low-cost materials to create a physics lab.

Use of Smartphones in Optical Experimentation

This book can be used as a guide for undergraduate students and instructors for a hands-on experience with optics, especially for an online optical lab; elementary and high school science teachers to develop smartphone-based labs for classroom demonstrations; and anyone who wants to explore fundamental STEM concepts by designing and performing experiments anywhere.

Yiping Zhao Yoong Sheng Phang

P.O. Box 10, Bellingham, WA 98227-0010
ISBN: 9781510654976
SPIE Vol. No.: TT124