Signal and Noise in Geosciences: MATLAB® Recipes for Data Acquisition in Earth Sciences (Springer Textbooks in Earth Sciences, Geography and Environment) 3030749126, 9783030749125

This textbook introduces methods of geoscientific data acquisition using MATLAB in combination with inexpensive data acquisition hardware such as smartphone sensors, the sensors that come with the LEGO MINDSTORMS set, webcams with stereo microphones, and affordable spectral and thermal cameras.


English · 351 pages · 2021


Table of contents:
Preface
Contents
1 Data Acquisition in Earth Sciences
1.1 Introduction
1.2 Methods of Data Acquisition
1.3 Classroom-Sized Earth Science Experiments
Recommended Reading
2 Introduction to MATLAB
2.1 MATLAB in Earth Sciences
2.2 Getting Started
2.3 The Syntax
2.4 Array Manipulation
2.5 Basic Visualization Tools
2.6 Generating Code to Recreate Graphics
2.7 Publishing and Sharing MATLAB Code
2.8 Exercises
2.8.1 Getting Started with MATLAB
2.8.2 Using MATLAB Help and Docs
2.8.3 Creating a Simple MATLAB Script
2.8.4 Creating Graphics with MATLAB
2.8.5 Collaborative Coding with MATLAB
Recommended Reading
3 MATLAB Programming
3.1 Introduction to Programming
3.2 Data Types in MATLAB
3.3 Data Storage and Handling
3.4 Control Flow
3.5 Scripts and Functions
3.6 Creating Graphical User Interfaces
3.7 Exercises
3.7.1 Communicating with the LEGO MINDSTORMS EV3 Brick
3.7.2 Controlling EV3 Motors Using an Ultrasonic Sensor
3.7.3 Reading Complex Text Files with MATLAB
3.7.4 Smartphone Sensors with MATLAB Mobile
3.7.5 Smartphone GPS Tracking with MATLAB Mobile
Recommended Reading
4 Geometric Properties
4.1 Introduction
4.2 Position on the Earth’s Surface
4.3 Digital Elevation Models of the Earth’s Surface
4.4 Gridding and Contouring
4.5 Exercises
4.5.1 Dip and Dip Direction of Planar Features Using Smartphone Sensors
4.5.2 Precision and Accuracy of Ultrasonic Distance Measurements
4.5.3 Spatial Resolution of the LEGO EV3 Ultrasonic Sensor
4.5.4 Object Scanning with the LEGO EV3 Ultrasonic Sensor
4.5.5 Point Clouds from Multiple Smartphone Images
Recommended Reading
5 Visible Light Images
5.1 Introduction
5.2 Visible Electromagnetic Waves
5.3 Acquiring Visible Digital Images
5.4 Storing Images on a Computer
5.5 Processing Images on a Computer
5.6 Image Enhancement, Correction and Rectification
5.7 Exercises
5.7.1 Smartphone Camera/Webcam Images with MATLAB
5.7.2 Enhancing, Rectifying and Referencing Images
5.7.3 Stitching Multiple Smartphone Images
5.7.4 Spatial Resolution of the LEGO EV3 Color Sensor
5.7.5 Scanning Images Using the LEGO EV3 Color Sensor
Recommended Reading
6 Spectral Imaging
6.1 Introduction
6.2 Visible to Thermal Electromagnetic Radiation
6.3 Acquiring Spectral Images
6.4 Storing Spectral Images on a Computer
6.5 Processing Spectral Images on a Computer
6.6 Exercises
6.6.1 Infrared Spectrometry of Landscapes
6.6.2 Using Spectral Cameras in a Botanic Garden
6.6.3 Using RGB Cameras to Classify Minerals in Rocks
6.6.4 Using Spectral Cameras to Classify Minerals in Rocks
6.6.5 Thermal Imaging in a Roof Garden
Recommended Reading
7 Acquisition of Elastic Signals
7.1 Introduction
7.2 Earth’s Elastic Properties
7.3 Acquiring Elastic Signals
7.4 Storing and Processing Elastic Signals
7.5 Exercises
7.5.1 Smartphone Seismometer
7.5.2 Smartphone Sonar for Distance Measurement
7.5.3 Use of Stereo Microphones to Locate a Sound Source
7.5.4 Sound in Time and Frequency Domains
7.5.5 Distortion of a Harmonic Signal
Recommended Reading
8 Gravimetric, Magnetic and Weather Data
8.1 Introduction
8.2 Earth’s Gravity Field, Magnetic Field and Weather
8.3 Acquiring Gravimetric, Magnetic and Weather Data
8.4 Storing Gravimetric, Magnetic and Weather Data
8.5 Exercises
8.5.1 Measuring the Density of Minerals
8.5.2 Gravitational Acceleration
8.5.3 Position, Velocity and Acceleration
8.5.4 LEGO-Smartphone Magnetic Survey
8.5.5 ThingSpeak Weather Station
Recommended Reading
Recommend Papers


Martin H. Trauth

Signal and Noise in Geosciences MATLAB® Recipes for Data Acquisition in Earth Sciences

Springer Textbooks in Earth Sciences, Geography and Environment

The Springer Textbooks series publishes a broad portfolio of textbooks on Earth Sciences, Geography and Environmental Science. Springer textbooks provide comprehensive introductions as well as in-depth knowledge for advanced studies. A clear, reader-friendly layout and features such as end-of-chapter summaries, worked examples, exercises, and glossaries help the reader to access the subject. Springer textbooks are essential for students, researchers and applied scientists.

More information about this series at http://www.springer.com/series/15201



Martin H. Trauth, Institut für Geowissenschaften, Universität Potsdam, Potsdam, Brandenburg, Germany

ISSN 2510-1307 ISSN 2510-1315 (electronic) Springer Textbooks in Earth Sciences, Geography and Environment ISBN 978-3-030-74912-5 ISBN 978-3-030-74913-2 (eBook) https://doi.org/10.1007/978-3-030-74913-2 MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc., 1 Apple Hill Drive, Natick, MA 01760-2098, USA, http://www.mathworks.com. © Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Earth science observations often involve complex measurement procedures, highly specialized measuring devices, and sophisticated data processing. The design of geoscientific data-collection programs and experiments, their execution, and the evaluation of the results are all complex and expensive tasks that require a significant amount of preparation. This book introduces methods of geoscientific data acquisition using MATLAB in combination with inexpensive data acquisition hardware such as the sensors in smartphones, the sensors that come with the LEGO MINDSTORMS set, webcams with stereo microphones, and affordable spectral and thermal cameras. The text includes 35 exercises in data acquisition, such as using a smartphone to acquire stereo images of rock specimens from which to calculate point clouds; using visible and near-infrared spectral cameras to classify the minerals in rocks; using thermal cameras to differentiate between surface types such as soil and vegetation; localizing a sound source using travel-time differences between pairs of microphones; quantifying the total harmonic distortion and signal-to-noise ratio of acoustic and elastic signals; acquiring and streaming meteorological data using application programming interfaces, wireless networks, and Internet of Things platforms; determining the spatial resolution of ultrasonic and optical sensors; and detecting magnetic anomalies using a smartphone magnetometer mounted on a LEGO MINDSTORMS scanner. MATLAB software, which has been in existence for more than 40 years, is used because it not only provides numerous ready-to-use algorithms for most data acquisition methods but also allows existing routines to be modified and expanded, or new software to be developed.
MATLAB is also used because, unlike the free alternatives, it offers a warranty, complete and up-to-date documentation, professional support with a guaranteed response time of less than one working day, and the ability to certify computer code. However, most of the methods used in this book are introduced with very simple codes before using any MATLAB-specific functions, so that users of free alternatives to MATLAB will also be able to appreciate the book.


The book’s electronic supplementary material (available online through SpringerLink) contains recipes that include all the MATLAB commands featured in the book, the example data, the LEGO construction plans, and photos and videos of the measurement procedures. The names of the files in the supplementary material need to be modified before they can be used; for example, the file name basics_2_7_mygraph.m should be changed to just mygraph.m. The MATLAB recipes display various graphs on the screen that are not shown in the printed book but are included in the electronic supplementary material. The figure function can be used to rearrange the graphs on the screen; in some cases it may be necessary to adjust the Position option of the figure function to the actual size and resolution of the user's display. This tutorial-style book does, however, include numerous figures, making it possible to go through the text without actually running MATLAB on a computer. The recipes have been developed using MATLAB 9.10 Release R2021a, but most of them will also work with earlier releases. The book has benefited from the comments of many colleagues and friends worldwide who helped with designing the experiments. Jonas Räsch accompanied the book project for three years, was involved in developing the experimental setups, and was always a valuable discussion partner. My colleagues Jens Tronicke and Bodo Bookhagen at the University of Potsdam helped with the applied geophysics and remote sensing issues in Chaps. 4, 6, and 8. Sebastian Staacks from RWTH Aachen University was available to answer questions regarding the phyphox app and educational program. Amir Salaree from the University of Michigan assisted with valuable discussions on seismic experiments using smartphones. The book would not have been possible without MathWorks Academic Support, which kindly sponsored an external proposal for curriculum development support.
I also acknowledge Emily Devine, John Elhilow, Naomi Fernandes, and Alexa Sanchez from the Book Program at The MathWorks, Inc.; Mihaela Jarema, Sebastian Gross, Stefan Kerber, Cedric Esmarch, Steve Schäfer, Martin Strathemann, Andreas Himmeldorf, and their teams at The MathWorks GmbH Deutschland for their assistance; Steve Eddins for dealing with my many questions about the history of MATLAB and for helping in the use of the Image Processing Toolbox; and Conor McGovern, Friedrich Hempel, Vijay Sridhar, Jim David, Martijn Aben, Oliver Jährig, and Christoph Stockhammer at MathWorks for their patience in dealing with the large number of questions that I have sent to the Support section of MathWorks over the last few months. I think the excellent support provided by the company more than justifies any expenditure that I have made on MATLAB licenses over the last 25 years or more. I have great difficulty expressing my gratitude to Ed Manning in an appropriate manner; he has been improving the readability of my books for more than a decade. I am very grateful to Annett Büttner, Marion Schneider, Agata Weissmann, Helen Rachner, and their team at SpringerNature for their continuing interest and support in publishing my books. I would also like to thank Brunhilde Schulz, Robert Laudien, Andreas Bohlen, Ingo Orgzall and their team at UP Transfer GmbH for organizing short courses on my books over almost two decades.

Potsdam, Germany
March 2021

Martin H. Trauth

Contents

1 Data Acquisition in Earth Sciences ... 1
  1.1 Introduction ... 1
  1.2 Methods of Data Acquisition ... 4
  1.3 Classroom-Sized Earth Science Experiments ... 9
  Recommended Reading ... 13

2 Introduction to MATLAB ... 15
  2.1 MATLAB in Earth Sciences ... 15
  2.2 Getting Started ... 16
  2.3 The Syntax ... 18
  2.4 Array Manipulation ... 23
  2.5 Basic Visualization Tools ... 28
  2.6 Generating Code to Recreate Graphics ... 31
  2.7 Publishing and Sharing MATLAB Code ... 33
  2.8 Exercises ... 36
    2.8.1 Getting Started with MATLAB ... 36
    2.8.2 Using MATLAB Help and Docs ... 37
    2.8.3 Creating a Simple MATLAB Script ... 38
    2.8.4 Creating Graphics with MATLAB ... 40
    2.8.5 Collaborative Coding with MATLAB ... 42
  Recommended Reading ... 45

3 MATLAB Programming ... 47
  3.1 Introduction to Programming ... 47
  3.2 Data Types in MATLAB ... 48
  3.3 Data Storage and Handling ... 62
  3.4 Control Flow ... 72
  3.5 Scripts and Functions ... 76
  3.6 Creating Graphical User Interfaces ... 80
  3.7 Exercises ... 86
    3.7.1 Communicating with the LEGO MINDSTORMS EV3 Brick ... 86
    3.7.2 Controlling EV3 Motors Using an Ultrasonic Sensor ... 89
    3.7.3 Reading Complex Text Files with MATLAB ... 95
    3.7.4 Smartphone Sensors with MATLAB Mobile ... 98
    3.7.5 Smartphone GPS Tracking with MATLAB Mobile ... 102
  Recommended Reading ... 105

4 Geometric Properties ... 107
  4.1 Introduction ... 107
  4.2 Position on the Earth’s Surface ... 108
  4.3 Digital Elevation Models of the Earth’s Surface ... 111
  4.4 Gridding and Contouring ... 115
  4.5 Exercises ... 121
    4.5.1 Dip and Dip Direction of Planar Features Using Smartphone Sensors ... 121
    4.5.2 Precision and Accuracy of Ultrasonic Distance Measurements ... 127
    4.5.3 Spatial Resolution of the LEGO EV3 Ultrasonic Sensor ... 133
    4.5.4 Object Scanning with the LEGO EV3 Ultrasonic Sensor ... 140
    4.5.5 Point Clouds from Multiple Smartphone Images ... 145
  Recommended Reading ... 152

5 Visible Light Images ... 153
  5.1 Introduction ... 153
  5.2 Visible Electromagnetic Waves ... 154
  5.3 Acquiring Visible Digital Images ... 155
  5.4 Storing Images on a Computer ... 157
  5.5 Processing Images on a Computer ... 161
  5.6 Image Enhancement, Correction and Rectification ... 168
  5.7 Exercises ... 176
    5.7.1 Smartphone Camera/Webcam Images with MATLAB ... 176
    5.7.2 Enhancing, Rectifying and Referencing Images ... 180
    5.7.3 Stitching Multiple Smartphone Images ... 184
    5.7.4 Spatial Resolution of the LEGO EV3 Color Sensor ... 188
    5.7.5 Scanning Images Using the LEGO EV3 Color Sensor ... 197
  Recommended Reading ... 203

6 Spectral Imaging ... 205
  6.1 Introduction ... 205
  6.2 Visible to Thermal Electromagnetic Radiation ... 206
  6.3 Acquiring Spectral Images ... 207
  6.4 Storing Spectral Images on a Computer ... 208
  6.5 Processing Spectral Images on a Computer ... 212
  6.6 Exercises ... 216
    6.6.1 Infrared Spectrometry of Landscapes ... 216
    6.6.2 Using Spectral Cameras in a Botanic Garden ... 220
    6.6.3 Using RGB Cameras to Classify Minerals in Rocks ... 230
    6.6.4 Using Spectral Cameras to Classify Minerals in Rocks ... 236
    6.6.5 Thermal Imaging in a Roof Garden ... 246
  Recommended Reading ... 253

7 Acquisition of Elastic Signals ... 255
  7.1 Introduction ... 255
  7.2 Earth’s Elastic Properties ... 256
  7.3 Acquiring Elastic Signals ... 258
  7.4 Storing and Processing Elastic Signals ... 259
  7.5 Exercises ... 265
    7.5.1 Smartphone Seismometer ... 266
    7.5.2 Smartphone Sonar for Distance Measurement ... 269
    7.5.3 Use of Stereo Microphones to Locate a Sound Source ... 274
    7.5.4 Sound in Time and Frequency Domains ... 282
    7.5.5 Distortion of a Harmonic Signal ... 292
  Recommended Reading ... 300

8 Gravimetric, Magnetic and Weather Data ... 301
  8.1 Introduction ... 301
  8.2 Earth’s Gravity Field, Magnetic Field and Weather ... 302
  8.3 Acquiring Gravimetric, Magnetic and Weather Data ... 304
  8.4 Storing Gravimetric, Magnetic and Weather Data ... 306
  8.5 Exercises ... 308
    8.5.1 Measuring the Density of Minerals ... 309
    8.5.2 Gravitational Acceleration ... 313
    8.5.3 Position, Velocity and Acceleration ... 316
    8.5.4 LEGO-Smartphone Magnetic Survey ... 322
    8.5.5 ThingSpeak Weather Station ... 329
  Recommended Reading ... 339

1 Data Acquisition in Earth Sciences

1.1 Introduction

At the time that I was preparing my doctoral thesis on signal processing of paleoceanographic time series at the University of Kiel in the mid-1990s, I was also reading an excellent book by the American literary agent and author John Brockman, entitled The Third Culture: Beyond the Scientific Revolution (Brockman 1995). The book comprises a collection of essays in which well-known and respected scientists presented their latest ideas to the public. I found an essay by the paleontologist and evolutionary biologist Stephen Jay Gould particularly fascinating, and it prompted me to read one of his most famous books, Hen’s Teeth and Horse’s Toes (Gould 1983), in which I stumbled across these words in Chap. 10:

Scientific discovery is not a one-way transfer of information from unambiguous nature to minds that are always open. It is a reciprocal interaction between a multifarious and confusing nature and minds sufficiently receptive (as many are not) to extract a weak but sensible pattern from the prevailing noise.

Perfect for my dissertation on signal processing of paleoceanographic time series, I thought, and used it as an inscription for my thesis. And now, 25 years later, it is again appropriate as I present this book on signal and noise in the geosciences. Going back a little further to when I was studying geophysics (albeit not very successfully) in the mid-1980s, I learned in the basic physics laboratory how to collect measurements, sometimes using very sophisticated equipment. Measuring things was one thing, but almost more important (and more time-consuming) was determining the uncertainties in the measured values. It was necessary to determine both systematic and random errors for each series of measurements, which was not always easy, but very instructive. I believe it was around this time that I started asking my colleagues in the geosciences, who did not have degrees in physics, to draw error bars on the data points in their plots.

Electronic supplementary material: The online version of this chapter (https://doi.org/10.1007/978-3-030-74913-2_1) contains supplementary material, which is available to authorized users.

When I switched from geophysics to geology I was surprised to find that working with quantitative data was not very popular. Measuring the orientation of geologic features with a compass, entering the values onto a map, and drawing rose diagrams by hand: this was how geological data were evaluated (Fig. 1.1). Shortly before this, I had taken university-level courses in mathematics and in numerical and statistical methods, as well as a FORTRAN77 programming course in geophysics; that was a whole different world! Computers were already widespread and affordable, but word processing was still done on a typewriter. It seemed only logical to leave the geological mapping to others and to turn to a doctoral project that dealt instead with different types of errors in measurements (Trauth 1995).

Fig. 1.1 Measuring the orientation of Orthoceras fossils with a geological compass on an outcrop at Neptuni Åkrar, near Byxelkrok on Öland, Sweden. A statistical analysis of fossil orientations at Neptuni Åkrar has revealed a significant southerly paleocurrent direction, which is in agreement with previous paleogeographic reconstructions (from Trauth 2020).

There have been a lot of developments in the geosciences, including geology, since those days, although not at every university or research institution, as I have come to realize from my course participants over the past 25 years. The geological compass used to measure the orientation of a geologic feature is now available in electronic form, with an interface to the computer. Students are now even using geological apps on smartphones, which also make it easy to obtain replicate measurements for error calculations. However, students are in many cases not well prepared to make use of these technological developments, since many geological curricula (and to an even greater extent, geographical curricula) still do not include any high-level mathematics or programming.

This book deals with the collection of many different types of measurements to describe our planet Earth, whereby we always try to determine the possible errors in these measurements. The science of measurement is known as metrology (from the Greek μετρῶ, metrṓ, "to measure"). According to Wikipedia, which cites Czichos and Smith (2011), metrology involves three basic overlapping operations: (1) the definition of units of measurement, (2) the realization of these units of measurement in practice, and (3) the linking of measurements made in practice to reference standards. In this book we will focus primarily on the second of these operations, applied metrology, taking the other two operations for granted. We therefore make a lot of assumptions when we collect measurements and must never lose sight of the fact that the basis for these assumptions could change.

Modern methods for measuring geoscientific data can be grouped according to the dimensions of the data sets that they are able to cope with. The first group consists of methods used to measure a single variable x, e.g. the sodium content of volcanic glass measured with a microprobe. Multiple measurements x_i, with i taking any value between 1 and the sample size N, can be used to determine the random error of the measurement method. The second group consists of methods used to acquire simultaneous measurements of two variables x and y, with the aim of establishing a correlation between the two.
An example would be measuring the potassium content of the volcanic glass as well as the sodium content. A special case of measuring two variables is measuring one variable x over time t, with the aim of performing a time-series analysis to identify trends, rhythms, and events in x(t). Three-dimensional measurements involve the xy mapping of a single variable z, again with the possibility of repeating the measurement z_i at each location. An example is the spatial distribution of sodium content within a shard of volcanic glass, with the aim of mapping weathered areas with lower sodium concentrations. It is, of course, possible to carry out such mapping for multiple chemical elements at the same time and store the data in a multidimensional array. An example of this is RGB photography, with m-by-n pixel images in three colors (red, green, and blue) that are stored in m-by-n-by-3 arrays. Multispectral and hyperspectral images have even more than three colors or spectral bands, stored in the pages or sheets of m-by-n elements. Of course, the size of such a data set (i.e. the sample size) and its spatial and temporal resolution are always limited by the time and budget constraints of the project. However, a single number that describes the sodium content of volcanic glass is worthless without a specified confidence limit. For example, the true sodium content for a reading of 3.28 g in 100 g of glass could be zero if the standard error is of the same order of magnitude as the reading itself. The aim of the book is to provide experience with different measuring methods used in the geosciences and to develop a feeling for what the measured values actually mean, as well as for the uncertainties in, and confidence limits of, these values.
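The replicate-measurement idea above can be sketched in a few lines for readers using free alternatives to MATLAB. The following Python/NumPy snippet (the sodium values are made up purely for illustration) computes the mean of repeated measurements x_i, the standard error of the mean, and an approximate 95% confidence interval:

```python
import numpy as np

# Hypothetical replicate measurements of sodium content (g per 100 g of glass)
x = np.array([3.25, 3.31, 3.28, 3.22, 3.35, 3.27, 3.30, 3.26])

n = x.size
mean = x.mean()                    # best estimate of the sodium content
sem = x.std(ddof=1) / np.sqrt(n)   # standard error of the mean (sample std)

# Approximate 95% confidence interval (normal approximation, +/- 1.96 SEM)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
print(f"mean = {mean:.3f} g, SEM = {sem:.4f} g")
print(f"approx. 95% CI: {ci[0]:.3f} to {ci[1]:.3f} g")
```

If the resulting confidence interval were as wide as the reading itself, the single number would indeed be worthless, which is exactly the point made above.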

1.2 Methods of Data Acquisition

The signals that are measured and then processed in a computer are discrete, i.e. they have been obtained by sampling a signal that is continuous in nature. As an example, continuously changing air temperatures can be measured at hourly intervals so that we obtain 24 discrete measurements per day, 168 measurements per week, and ≈8,760 measurements per year. The sampling interval Δt of our time series is 1/24 days, and the corresponding sampling frequency is 1/Δt = 24 day⁻¹. Sampling is not only always associated with a loss of information but also introduces a number of other modifications and distortions of the signal. In our example, we obtain a single temperature that we take to be representative of the whole hour in which we measured it. We also measure it in one place and not over the entire area for which this temperature value might be used on a map. Many measurement and evaluation applications interpolate between discrete measured values, usually on an evenly spaced grid, which is a prerequisite for many methods of processing, analyzing, and visualizing. Furthermore, the signal is discretized during processing in a computer, namely through the use of a finite number of bits with two possible values (or states), 0 or 1. If we want to save our measurements in a data array with only one bit per data point, we can only save two different values for the temperature (e.g. 0 and 100 °C) and no values in between (e.g. 65 °C). In order to store more complex types of data, the bits are joined together to form larger groups such as bytes, each consisting of eight bits. These 8 bits (or 1 byte) allow 2^8 = 256 possible combinations of the eight ones or zeros and are therefore able to represent 256 different temperature values. Larger numbers of bits, for example the 64 bits of the standard MATLAB data type double, allow the representation of 2^64 ≈ 1.8447 × 10^19 different values. One of the most common effects of sampling is aliasing.
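The bit-depth arithmetic above (n bits allow 2^n distinct values) can be sketched in plain Python rather than MATLAB; the 0–100 °C range and the 65 °C reading are arbitrary choices for illustration. A temperature stored as an 8-bit code is recovered only to within half a quantization step:

```python
# Quantize temperatures from an assumed 0-100 degC range into 8-bit codes.
t_min, t_max, n_bits = 0.0, 100.0, 8
levels = 2 ** n_bits                 # 2^8 = 256 representable values

t = 65.0                             # a reading that has no exact 8-bit code
code = int(round((t - t_min) / (t_max - t_min) * (levels - 1)))
t_back = t_min + code / (levels - 1) * (t_max - t_min)

step = (t_max - t_min) / (levels - 1)   # size of one quantization step
print(code, t_back)                     # recovered value differs slightly from 65.0
```

With only one bit per data point (levels = 2) the same arithmetic can store nothing but 0 and 100 °C, as described above.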
Aliasing refers to the misinterpretation of a sine wave with a frequency f that is higher than half the sampling frequency fs as a sine wave with a lower alias frequency fa. Aliasing is also known as a stroboscopic effect or a wagon wheel effect in human visual perception: if objects rotating quickly and smoothly at a high frequency, such as the wheels of a wagon, are recorded with a video camera operating at a sampling frequency of 24 frames per second, they can appear to be rotating backward at a lower frequency. We can calculate fa using

fa = | f − fs · round( f / fs ) |


where round stands for rounding to the nearest integer. We can use this as our first MATLAB example, demonstrating the effect of sampling a sinusoidal signal with a frequency f at a sampling rate fs that is too low to describe the signal accurately. As an example, we generate a sine wave with a frequency of f = 1/2 = 0.5 (corresponding to a period of 2) (Trauth 2020) using

t = 0 : 0.1 : 15;
t = t';
x1 = sin(2*pi*t/2);

If this signal, originally sampled with a frequency of fs = 1/0.1 = 10, is sampled with a frequency that is too low, e.g. fs = 1/1.6 = 0.6250, we generate an alias frequency of fa = 1/8 = 0.1250, as we can calculate with the formula from above:

fa = abs(0.5 - 0.6250 * round(0.5/0.6250))

which yields

fa =
    0.1250

We can demonstrate the aliasing effect with this example if we actually sample the signal with the lower sampling rate of fs = 0.6250,

ts = 0 : 1.6 : 15;
ts = ts';
x2 = interp1(t,x1,ts,'linear','extrap');

shift the resulting signal by 3.5 to align it with the original signal

x3 = sin(2*pi*t*fa + 2*pi*3.5);

and display the result (Fig. 1.2).

figure('Position',[100 1000 600 300])
axes('Position',[0.1 0.2 0.8 0.7],...
    'Box','On',...
    'LineWidth',1,...
    'FontSize',14,...
    'YLim',[-1.2 1.2])
hold on
line(t,x1,...
    'Color',[0 0.4471 0.7412],...
    'LineWidth',1.5)
s1 = stem(ts,x2,...
    'Color',[0.8510 0.3255 0.0980],...


Fig. 1.2 Graphic illustration of the alias effect. If a signal with a frequency f of 0.5 (blue line) is sampled with a low sampling frequency fs = 0.3125 (red stems and circles), a signal with a lower aliasing frequency fa of 0.125 is produced (yellow line).

    'LineWidth',1.5,...
    'Marker','o',...
    'MarkerSize',14);
line(t,x3,...
    'Color',[0.9294 0.6941 0.1255],...
    'LineWidth',1.5)
xlabel('Time')
s1.BaseLine.LineStyle = ':';
s1.BaseLine.LineWidth = 1;

The graphic illustration shows that the signal with the frequency f = 0.5 (corresponding to a period of 1/0.5 = 2) (blue line), sampled with a low sampling frequency fs = 0.3125 (corresponding to a sampling interval of 1/0.3125 = 3.2) (red stems and circles), produces a signal with a lower aliasing frequency fa = 0.1250 (corresponding to an alias period of 1/0.1250 = 8) (yellow line).

The effect of aliasing can be avoided by applying the Nyquist–Shannon sampling theorem, according to which the sampling frequency fs must be at least twice the frequency of interest. Spectral representations of signals are therefore shown up to half the sampling frequency (fs/2), which is called the Nyquist frequency fnyq; the theorem was defined by the American mathematician Claude Elwood Shannon, and the frequency was named after the Swedish-American electrical engineer Harry Nyquist.

Compact discs (CDs) provide a good example of an application of the Nyquist–Shannon sampling theorem: CDs are sampled at a frequency of 44,100 Hz (Hertz, where 1 Hz = 1 cycle per second), and the corresponding Nyquist frequency is 22,050 Hz, which is the highest frequency a CD player can theoretically reproduce. The performance limitations of the anti-alias filters used by CD players further reduce the frequency band and result in a cutoff frequency of around 20,050 Hz, which is the true upper-frequency limit of a CD player.
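The connection between the sampling theorem and the alias frequency formula given above can also be checked numerically: as soon as fs exceeds 2f, the rounding term in the formula becomes zero and fa simply equals f, i.e. no aliasing occurs. A minimal sketch (the variable names are our own):

```matlab
f = 0.5;                                      % signal frequency

% Sampling below twice the signal frequency produces an alias
fs_low = 0.6250;
fa_low = abs(f - fs_low*round(f/fs_low))      % fa_low = 0.1250

% Sampling above twice the signal frequency satisfies the
% Nyquist-Shannon sampling theorem and preserves the frequency
fs_high = 2.5;
fa_high = abs(f - fs_high*round(f/fs_high))   % fa_high = 0.5000, i.e. f itself
```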


Signals are measured using sensors, of which there are two basic types: active sensors and passive sensors. Active sensors generate a voltage or current as an output signal during the measurement, without requiring any external energy supply, and this output signal can then be measured directly. The disadvantage of these sensors is their limited accuracy, which makes frequent calibration essential. Examples of active sensors are photo elements, thermocouples, and piezo crystals. Passive sensors consist of passive components, the parameters of which are changed by the measured variable. These sensors use primary electronics to drive the measurement process, which means that they can be used to measure static variables. Measurements with passive sensors produce a change in resistance, capacitance, or induced current, which can then be measured with great accuracy. Examples of passive sensors include potentiometers, resistance thermometers, inductive and capacitive sensors, and strain gauges.

Another problem with discrete measurements is that the individual values do not in fact represent point measurements from a particular time or place, as might initially appear to be the case. Most measuring devices integrate over a certain range, and with some statistical methods it is quite important to know how the device changes the measured value in space and time. Test functions, or sequences of repeated measurements in the discrete case, are used to describe, quantify, and correct this effect, or in other words, to calibrate the measuring device (Fig. 1.3) (Chap. 6 of Trauth 2020). The most commonly used test function (or sequence) in MATLAB is the impulse function, which has a value of zero everywhere except at position 0, where the value is one. The step function, also known as the unit step function or Heaviside function, is zero for negative arguments and one for non-negative arguments.
The ramp function is zero for negative arguments and has increasing values with a slope of one for positive arguments.
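The three test sequences described above can be generated with a few lines of MATLAB code, for example (a minimal sketch; the variable names are our own):

```matlab
n = -10 : 10;              % discrete positions of the test sequences

impulse = double(n == 0);  % impulse sequence: one at position 0, zero elsewhere
step    = double(n >= 0);  % step (Heaviside) sequence: one for non-negative arguments
ramp    = n .* (n >= 0);   % ramp sequence: slope of one for positive arguments

% Display the three sequences in a single graphic
plot(n,impulse,'o',n,step,'s',n,ramp,'^')
legend('Impulse','Step','Ramp','Location','NorthWest')
```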

Fig. 1.3 Test functions or sequences to describe and predict the performance of a measuring device: a impulse sequence, which is zero in every position except for position 0, where it takes the value one; b step sequence, which is zero for negative arguments and one for non-negative arguments; c ramp sequence, which is zero for negative arguments and has increasing values with a slope of one for positive arguments.


An example of the change in a signal caused by the measuring arrangement is the impulse response of a magnetic coil used to measure the magnetic susceptibility (MS) of a sediment core as it is passed through the coil. Since the coil cannot take point measurements but measures the MS over a certain length of the core, the signal is significantly smoothed. To determine the impulse response of the measuring arrangement, an empty half-tube (or liner) is used instead of a sediment core. These liners are usually made of plastic, which is diamagnetic and therefore has a very weak (negative) MS. A thin ferromagnetic plate is attached to the liner perpendicular to the direction of movement, and the liner is passed through the coil while the MS is measured at the same time. Due to the integrating effect of the instrument, the result is not a discrete impulse sequence but a curve reminiscent of the bell-shaped curve of a Gaussian probability density function (Fig. 1.4). This impulse response can then be used to deconvolve the smoothing effect of the instrument (see Chap. 6 in Trauth 2020).

The way that measurements are registered and stored has changed dramatically over the last thirty years. Prior to that, readings were recorded manually, and in some areas this is still the case. Geologists go into the field, take measurements with a geological compass, and read the dip and dip direction of strata from a compass needle and a mechanical tilt indicator (clinometer). The next level of measurement acquisition involves the translation of a deflection in a measuring device (e.g. a seismometer) into the movement of a pen on a roll of paper. The storage of data on magnetic data carriers (e.g. magnetic tapes) requires the response of a measuring device to be converted into an electrical signal. The use of analog-to-digital converters (ADCs) then allows the data to be stored in a digital (mostly binary) format.
These digital formats make the storage, transmission, and display of measured values much easier but above all, facilitate signal processing, data compression, and error correction (Trauth and Sillmann 2018).
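Returning to the impulse response example above, the smoothing effect of a measuring device such as the MS coil can be simulated by convolving a unit impulse with a Gaussian-shaped impulse response. The following sketch uses arbitrary values rather than the characteristics of a real instrument (see Chap. 6 of Trauth 2020 for convolution and deconvolution):

```matlab
x = zeros(100,1);                  % artificial signal along the core
x(50) = 1;                         % unit impulse (thin ferromagnetic plate)

h = exp(-(-10:10)'.^2/(2*3^2));    % Gaussian-shaped impulse response
h = h/sum(h);                      % normalize so the impulse area is preserved

y = conv(x,h,'same');              % smoothed signal as seen by the instrument

plot(1:100,x,1:100,y)
legend('Input impulse','Instrument output')
```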

Fig. 1.4 Impulse response sequence (in red) of a unit impulse (in blue) to describe and predict the performance of a measuring device. The impulse response of the device shows that the original signal is smeared over a wide area, which is why signals are generally smoothed during the measuring process.


All of these steps require interfaces across which information can be exchanged concerning the natural phenomena that are actually measured. The vibration of the ground during an earthquake, for example, is transferred to an inert mass which in turn transfers the relative movement to a magnetic coil. The coil generates an electric signal which is then converted into a digital signal by an ADC. The software for evaluating the signal requires hardware support in order to process, analyze, and display the signals acquired by the coil, which is usually provided by the manufacturers of either the hardware or the software.

The latest phenomenon in this context is the Internet of Things (IoT), which involves connecting real physical objects, usually equipped with a number of sensors, via an Internet interface to virtual objects, usually software on a computer. Measured values are recorded, processed, and used to control processes with no human intervention. The best-known application of the IoT is in smart homes, where sensors for brightness, room temperature, and other parameters control services such as lighting and heating. Weather stations are also used in gardens to control automatic irrigation or shading systems.

1.3 Classroom-Sized Earth Science Experiments

Earth science observations often make use of complex measuring arrangements (such as orbiting satellites, seismic arrays, or magnetotelluric surveys), highly specialized measuring devices (such as mass spectrometers, spectroscopes, and gravimeters), and sophisticated data processing methods (such as signal processing, spectral analysis, pattern recognition, and inverse methods). Instruction in the design of geoscientific data collection programs and experiments, their execution, and the evaluation of the results can be complex, expensive, and require significant preparation.

In view of the typically limited financial resources available for university teaching courses, this book therefore attempts to make such earth science experiments and data-collection exercises readily available in the classroom. While the measuring devices commonly used for such experiments often cost more than 10,000 €, in this book we only use measuring devices that students either already own (e.g. smartphones) or can purchase for significantly less than 500 € (e.g. LEGO MINDSTORMS EV3 Core Set, MAPIR Spectral Camera, or SEEK Thermal Camera). In theory, all of these experiments can therefore be carried out at home, which opens up the possibility of using the book for online distance learning.

Many of these experiments use the LEGO® MINDSTORMS® EV3 Education Core Sets and Expansion Sets, together with LEGO® SPEED® rail tracks and wheels, connected using the classic LEGO® Cross Axle (Fig. 1.5). The EV3 Core Set consists of a programmable EV3 Brick with an ARM9 Central Processing Unit (CPU) running Linux, a USB connector, and a Micro SD slot, as well as connections for LEGO MINDSTORMS servo motors and sensors. The EV3 Core Set includes three motors, a gyro sensor, an ultrasonic (or sonic) sensor, a color (or


Fig. 1.5 MATLAB Mobile/LEGO MINDSTORMS scanner with an Apple iPhone 8 that can be used for horizontal and vertical scanning of objects, such as magnetic anomalies under the table or three-dimensional objects on the table.

RGB) sensor, and an infrared sensor (or detector). The MATLAB Support Package for LEGO MINDSTORMS EV3 Hardware provides MATLAB functions to control the motors and to read the measured values from the sensors. There are also a number of low-priced but high-quality sensors on the market, some of which are equipped with LEGO-compatible mounting systems.

At the time of writing, the LEGO MINDSTORMS EV3 system has already been on the market for a while, and the LEGO MINDSTORMS Robot Inventor and LEGO Education SPIKE® (together with the toy variant BOOST®, which uses the same hub) have just been introduced, but so far without a MATLAB support package. The new Robot Inventor contains only two sensors (a sonic sensor and a color sensor) but four motors and the EV3 Brick successor, the Smarthub. There is no touch sensor, and the gyro sensor is included in the hub. However, the new Robot Inventor system uses the microphone and speaker of the tablet on which the associated programming app is installed.

According to the information currently available from LEGO Education, LEGO MINDSTORMS EV3 will continue to be available for schools and universities, from where the system can therefore still be purchased. At the same time as the new Robot Inventor was introduced, the EV3 received a software update that enabled programming with EV3 MicroPython.


Fig. 1.6 Three-dimensional design of a MATLAB Mobile/LEGO MINDSTORMS scanner with an Apple iPhone 8 that can be used for horizontal and vertical scanning of objects.

The LEGO construction plans were created with the LEGO DIGITAL DESIGNER (LDD), which is still available for download even though it is no longer supported by LEGO (Fig. 1.6). The construction plans in the book’s electronic supplementary material are provided in three formats. The first file format is .lxf, which is the native file format of the LDD. Instead of using the software provided by LEGO, a construction plan in this format can also be opened and edited with current alternatives to the LDD, for example with STUDIO 2.0 by BrickLink. Construction plans in .lxf format can also be uploaded to, and edited in, a number of online platforms such as MecaBricks, which contains all important LEGO components except for the LEGO MINDSTORMS components; the construction plans are then loaded without errors but without the LEGO MINDSTORMS components. The second file format is .html, with which an index file can be opened in the browser. This file format allows you to navigate through step-by-step instructions, similar to the printed instructions in a commercial LEGO set. The third file format is a Microsoft® Excel® .xlsx file containing a list of all LEGO bricks and pieces, including a thumbnail image, the LEGO part numbers, the color code, and the quantity required.

The increasing use of mobile internet devices with built-in sensors, such as smartphones and tablets, gives us the opportunity to conduct inexpensive experiments without purchasing sensors, since most students today own smartphones. The first sensor in a smartphone is the camera; in fact, smartphones now usually have a number


of cameras. The cameras built into smartphones are now of very high quality and can usually acquire and process images of 12 megapixels (=12 million picture elements). In addition to the usual JPEG format, the image data are now also saved in other formats such as the High Efficiency Image File Format (HEIC), and also the Raw Image File Format (RAW), which provides minimally processed and compressed data from the image sensor, resulting in improved image analysis results. The numerous pro applications (or apps) available for photography also allow camera settings such as exposure time, aperture, and focus to be set manually, thus ensuring reproducible results. Microphones have also been built into mobile phones for a long time and are now high-quality stereo instruments. In our experiments with photo, video, and sound recordings we will also use a High Definition (HD) webcam.

A more recent development in smartphones is the inclusion of a Global Positioning System (GPS) sensor for determining locations. The first such sensors in smartphones were of limited use, but they did at least yield more accurate locations than could be determined by triangulation between cell phone masts. Smartphones also contain an xyz magnetometer for use as an electronic compass, as well as an xyz acceleration sensor and a gyroscope for angular velocity determination. We use these sensors in our experiments, but other sensors in smartphones such as the ambient light, proximity, and fingerprint sensors, and sensors for monitoring vital functions, are not used. Instead of using the barometer in a smartphone, we use the environmental data sensors in a weather station.

The data collected by smartphone sensors can be evaluated with a number of apps, most of which can be downloaded at no cost from app stores such as the Apple App Store (for iOS devices) and Google Play (for Android devices).
In addition to apps for certain specialized applications such as the seismometer app from FFFF00 Agents AB, we also use the very popular phyphox app, which is available for both iOS and Android smartphones (Staacks et al. 2018; Vieyra et al. 2018; Stampfer et al. 2020). The phyphox app from RWTH Aachen University comes with an educational program available online (Staacks et al. 2018).

There are also numerous hardware support packages available for MATLAB, including the possibility of programming apps for iOS and Android devices in MATLAB. There is also an app for iOS and Android devices called MATLAB Mobile. This app is not a full MATLAB installation but establishes a connection to MATLAB Online and MATLAB Drive that allows MATLAB to be used on a mobile device. MATLAB Mobile can also be used to read data from the smartphone sensors and store them either on the mobile device or on MATLAB Drive. Alternatively, MATLAB Mobile can connect to MATLAB Drive using the MATLAB Drive Connector to stream the data to MATLAB installed on a computer (also known as MATLAB Desktop).

In addition to the RGB cameras in smartphones, we also use a number of different spectral cameras and thermal cameras, each costing about 500 €, with the help of which we can make a more precise classification of vegetation, rock, and soil types than is usually achieved by remote sensing. The images obtained with these cameras are usually acquired either in daylight or under artificial lighting


generated with Light-Emitting Diode (LED) photography lighting kits that have a spectrum very similar to daylight. Both the cameras and any artificial lighting are mounted on tripods in order to enable the acquisition of a series of images under identical conditions. The most variable condition is likely to be the weather, which we record with an inexpensive weather station by Netatmo that can be connected to MATLAB Desktop via a weather server and the Internet of Things platform ThingSpeak powered by MathWorks.

Recommended Reading

Attaway S (2018) MATLAB: a practical introduction to programming and problem solving. Butterworth-Heinemann, Oxford Waltham
Behrens A, Atorf L, Schwann R, Neumann B, Schnitzler R, Balle J, Herold T, Telle A, Noll TG, Hameyer K, Aach T (2010) MATLAB meets LEGO Mindstorms—a freshman introduction course into practical engineering. IEEE Trans Educ 53:306–317
Brockman J (1995) The third culture: beyond the scientific revolution. Simon & Schuster Ltd., New York
Clauser C (2016) Einführung in die Geophysik. Springer, Berlin Heidelberg New York
Czichos H, Smith L (2011) Springer handbook of metrology and testing. Springer, Berlin Heidelberg New York
Gadallah M, Fisher R (2009) Exploration geophysics: an introduction. Springer, Berlin Heidelberg New York
Gould SJ (1983) Hen’s teeth and horse’s toes: further reflections in natural history. W.W. Norton & Company, New York
Gupta HK (ed) (2011) Encyclopedia of solid earth geophysics. Springer, Berlin Heidelberg New York
Hahn BH, Valentine DT (2017) Essential MATLAB for engineers and scientists, 6th edn. Academic Press, London Oxford Boston New York San Diego
Hengl T, Reuter HI (2009) Geomorphometry—concepts, software, applications. Developments in soil science, vol 33. Elsevier, Amsterdam
Hughes IG, Hase TPA (2010) Measurements and their uncertainties: a practical guide to modern error analysis. Oxford University Press, Oxford
Lein JK (2012) Environmental sensing—analytical techniques for earth observation. Springer, Berlin Heidelberg New York
Lillesand T, Kiefer RW, Chipman J (2015) Remote sensing and image interpretation, 7th edn. John Wiley & Sons Inc., New York
MathWorks (2020a) MATLAB Primer. The MathWorks, Inc., Natick, MA
MathWorks (2020b) MATLAB App Building. The MathWorks, Inc., Natick, MA
MathWorks (2020c) MATLAB Programming Fundamentals. The MathWorks, Inc., Natick, MA
MathWorks (2020d) MATLAB Mobile for iOS/for Android. The MathWorks, Inc., Natick, MA (available online)
Parthier R (2016) Messtechnik: Grundlagen und Anwendungen der elektrischen Messtechnik, 8th edn. Springer Vieweg, Wiesbaden
Quarteroni A, Saleri F, Gervasio P (2014) Scientific computing with MATLAB and Octave, 4th edn. Springer, Berlin Heidelberg New York
Rabinovich SG (2017) Evaluating measurement accuracy: a practical approach, 3rd edn. Springer, Berlin Heidelberg New York
Salaree A, Stein S, Saloor N, Elling R (2017) Turn your smartphone into a geophysics lab. A&G 58:6.35–6.36


Staacks S et al (2018) Advanced tools for smartphone-based experiments: phyphox. Phys Educ 53:045009
Stampfer C, Heinke H, Staacks S (2020) A lab in the pocket. Nat Rev 5:169–170
Tenoudji FC (2016) Analog and digital signal analysis: from basics to applications. Springer, Berlin Heidelberg New York
The LEGO Group (2013a) LEGO® MINDSTORMS® EV3 Hardware Developer Kit. The LEGO Group, Billund, Denmark. https://www.lego.com/en-gb/themes/mindstorms/downloads
The LEGO Group (2013b) LEGO® MINDSTORMS® EV3 Firmware Source Code Documentation. The LEGO Group, Billund, Denmark. https://www.lego.com/en-gb/themes/mindstorms/downloads
Trauth MH (1995) Bioturbate Signalverzerrung hochauflösender paläoozeanographischer Zeitreihen. Berichte–Reports, Geologisch-Paläontologisches Institut der Universität Kiel, Doctoral Thesis, Kiel
Trauth MH (2020) MATLAB recipes for earth sciences, 5th edn. Springer, Berlin Heidelberg New York
Trauth MH, Sillmann E (2018) Collecting, processing and presenting geoscientific information, MATLAB® and design recipes for earth sciences, 2nd edn. Springer, Berlin Heidelberg New York
Vieyra R, Vieyra C, Jeanjacquot P, Marti A, Monteiro M (2018) Turn your smartphone into a science laboratory. Sci Teach 82:32–40

2 Introduction to MATLAB

2.1 MATLAB in Earth Sciences

MATLAB® is a software package developed by The MathWorks, Inc., which was founded by Cleve Moler, Jack Little, and Steve Bangert in 1984 and which has its headquarters in Natick, Massachusetts. MATLAB was designed to perform mathematical calculations, to analyze and visualize data, and to facilitate the writing of new software programs. The advantage of this software is that it combines comprehensive math and graphics functions with a powerful high-level language. Since MATLAB contains a large library of ready-to-use routines for a wide range of applications, the user can solve technical computing problems much more quickly than with traditional programming languages such as C and FORTRAN. The standard library of functions can be significantly expanded by add-on toolboxes, which are collections of functions for special purposes such as image processing, creating map displays, performing geospatial data analysis, or solving partial differential equations. During the last few years, MATLAB has become an increasingly popular tool in geosciences. It has been used for finite element modeling, processing seismic data, analyzing satellite imagery, and generating digital elevation models from satellite data. The continuing popularity of the software is also apparent in scientific publications and many conference presentations have also made reference to MATLAB. Universities and research institutions have recognized the need for MATLAB training for staff and students, and many earth science departments across the world now offer MATLAB courses for undergraduates.

Electronic supplementary material The online version of this chapter (https://doi.org/10.1007/978-3-030-74913-2_2) contains supplementary material, which is available to authorized users.

© Springer Nature Switzerland AG 2021
M. H. Trauth, Signal and Noise in Geosciences, Springer Textbooks in Earth Sciences, Geography and Environment, https://doi.org/10.1007/978-3-030-74913-2_2


MathWorks provides classroom kits for teachers at a reasonable price, and it is also possible for students to purchase a low-cost edition of the software. This student version provides an inexpensive way for students to improve their MATLAB skills. There are also campus licenses (Total Academic Headcount License) with numerous toolboxes that can be purchased by universities. Employees and students can then download and install MATLAB at no additional cost, use it without being connected to a license server, and participate in an extensive online training program called MATLAB Academy.

The following sections contain a tutorial-style introduction to MATLAB, covering the setup on the computer (Sect. 2.2), the MATLAB syntax (Sects. 2.3 and 2.4), and visualization (Sect. 2.5). Advanced sections are also included on generating M-files to recreate graphics (Sect. 2.6), and on publishing and sharing MATLAB code (Sect. 2.7). The reader is urged to go through the entire chapter to obtain a good knowledge of the software before proceeding to the subsequent chapters of the book. A more detailed introduction can be found in the MATLAB Primer (MathWorks 2021a), which is available in print form, online, and as a PDF file. The acquired knowledge will then be used in the exercises at the end of this chapter (Sect. 2.8).

In this book, we use MATLAB Version 9.10 (Release 2021a), the Audio Toolbox Version 3.0, the Computer Vision Toolbox Version 10.0, the Image Acquisition Toolbox Version 6.4, the Image Processing Toolbox Version 11.3, the Mapping Toolbox Version 5.1, the Signal Processing Toolbox Version 8.6, and the Statistics and Machine Learning Toolbox Version 12.1.
We have also used the MATLAB Support Package for USB Webcams Version 21.1.0, the MATLAB Support Package for LEGO MINDSTORMS EV3 Hardware Version 21.1.0, the MATLAB Support Package for IP Cameras Version 21.1.0, the MATLAB Support Package for Apple iOS Sensors Version 21.1.0, and the MATLAB Support Package for Android Sensors Version 21.1.0.

2.2 Getting Started

The software package comes with extensive documentation, tutorials and examples, including the book MATLAB Primer (MathWorks 2021a). Since the MATLAB Primer provides all the information required to be able to use the software, this introduction concentrates on those software components and tools used in the subsequent chapters of this book.

Following the installation of MATLAB, the software is launched by clicking on the application’s shortcut icon. The software can also be launched without the MATLAB Desktop by typing

cd /Applications/MATLAB_R2021a.app/bin
matlab -nojvm -nodisplay -nosplash


Fig. 2.1 Screenshot of the MATLAB default desktop layout, including the Current Folder (left in the figure), the Command Window (center), and the Workspace (right) panels. This book uses only the Command Window and the built-in Editor, which can be called up by typing edit after the prompt. All information provided by the other panels can also be accessed through the Command Window.

in the operating system prompt, using ./matlab instead of matlab on computers with the macOS operating system. If the software is launched with its desktop, it comes up with several panels (Fig. 2.1). The default desktop layout includes the Current Folder panel, which lists the files in the directory currently being used, the Command Window, which serves as the interface between the software and the user (i.e. it accepts MATLAB commands typed after the prompt >>), and the Workspace panel, which lists the variables in the MATLAB workspace and is empty at the start of a new software session. In this book, we mainly use the Command Window and the built-in text Editor, which can be launched by typing

edit

By default, the software stores all of your MATLAB-related files in the startup folder named MATLAB. Alternatively, you can create a personal directory in which to store your MATLAB-related files. You should then make this new directory the working directory using the Current Folder panel or the Folder Browser at the top of the MATLAB desktop. The software uses a Search Path to find MATLAB-related files, which are organized in directories on the hard disk. The


default search path includes only the MATLAB_R2021a directory that was created by the installer in the Applications folder, and the default working directory called MATLAB. To see which directories are in the search path, or to add new directories, select Set Path from the Home toolstrip of the MATLAB desktop and use the Set Path dialog box. The modified search path is saved in the software preferences on your hard disk, and MATLAB will then use your custom path in future sessions.
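Directories can also be added to the search path from the Command Window instead of using the Set Path dialog box; the directory name below is, of course, only a placeholder for your own project folder:

```matlab
addpath('~/Documents/MATLAB/myproject')  % add a directory to the search path
path                                     % display the current search path
savepath                                 % save the modified path for future sessions
```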

2.3 The Syntax

The name MATLAB stands for matrix laboratory. The classic object handled by MATLAB is a matrix, i.e. a rectangular two-dimensional array of numbers. A simple 1-by-1 array is a scalar. Arrays with one column or row are vectors, time series, or other one-dimensional data fields. An m-by-n array can be used for a digital elevation model or a grayscale image. Red, green, and blue (RGB) color images are usually stored as three-dimensional arrays, i.e. the colors red, green, and blue are represented by an m-by-n-by-3 array.

Before proceeding, we need to clear the workspace by typing

clear

after the prompt in the Command Window. Clearing the workspace is always recommended before working on a new MATLAB project to avoid name conflicts with previous projects. We can also go a step further, close all Figure Windows using close all, and clear the content of the Command Window using clc. It is therefore recommended that a new MATLAB project should always start with the line

clear, close all, clc

Entering matrices or arrays in MATLAB is easy. To enter an arbitrary matrix, type

A = [2 4 3 7; 9 3 -1 2; 1 9 3 7; 6 6 3 -2]

which first defines a variable A, then lists the elements of the array in square brackets. The rows of A are separated by semicolons, whereas the elements of a row are separated by blank spaces, or by commas. After pressing the return key, MATLAB displays the array

A =
     2     4     3     7
     9     3    -1     2
     1     9     3     7
     6     6     3    -2


Displaying the elements of A could be problematic for very large arrays such as digital elevation models consisting of thousands or millions of elements. To suppress the display of an array, or the result of an operation in general, the line needs to end with a semicolon.

A = [2 4 3 7; 9 3 -1 2; 1 9 3 7; 6 6 3 -2];

The array A is now stored in the workspace and we can carry out some basic operations with it, such as computing the sum of elements,

sum(A)

which results in the display

ans =
    18    22     8    14

Since we did not specify an output variable (such as the A used above for the input array), MATLAB uses a default variable ans, short for answer or most recent answer, to store the results of the calculation. We should, however, generally define variables, since the next computation without a new variable name will overwrite the contents of ans. The above example illustrates an important point about MATLAB: the software prefers to work with the columns of arrays. The four results of sum(A) are the sums of the elements in each of the four columns of A. To sum all elements of A and store the result in a scalar b, we simply need to type

b = sum(sum(A));
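The same scalar sum can also be obtained by first reshaping A into a single column with the colon operator, or (since MATLAB R2018b) by using the 'all' option of the sum function:

```matlab
b = sum(A(:));      % reshape A into one long column, then sum its elements
b = sum(A,'all');   % sum over all elements of A directly (R2018b and later)
```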

which first sums the columns of the array and then the elements of the resulting vector. We now have two variables, A and b, stored in the workspace. We can easily check this by typing

whos

which is one of the most frequently used MATLAB commands. The software then lists all variables in the workspace, together with information on their sizes (or dimensions), the number of bytes, classes, and attributes (see Sect. 3.2 for more details). Name

Size

A ans b

4x4 1x4 1x1

Bytes Class 128 double 32 double 8 double

Attributes
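As an aside, newer MATLAB releases (R2018b and later) also allow the sum over all elements of an array to be written in a single call; a brief sketch, equivalent to the nested call used above:

```matlab
A = [2 4 3 7; 9 3 -1 2; 1 9 3 7; 6 6 3 -2];

% Since R2018b the sum over all elements can be written in one call
sum(A,'all')     % returns 62

% which is equivalent to the nested call used above
sum(sum(A))      % returns 62
```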

2 Introduction to MATLAB

Note that by default MATLAB is case sensitive, i.e. A and a can define two different variables. In this context, it is recommended that capital letters be used for arrays that have two or more dimensions, and lower-case letters for one-dimensional arrays (or vectors) and for scalars. However, it is also common to use variable names with mixed upper- and lower-case letters. This is particularly important when using descriptive variable names, i.e. variables whose names contain information concerning their meaning or purpose, such as the variable CatchmentSize, rather than a single-character variable such as a. We can now delete the contents of the variable ans by typing

clear ans

We will next learn how specific array elements can be accessed or exchanged. Typing

A(3,2)

simply yields the array element located in the third row and second column, which is 9. Array indexing therefore follows the rule (row, column). We can use this to replace single or multiple array elements. As an example we type

A(3,2) = 30

to replace the element A(3,2) by 30 and to display the entire array.

A =
     2     4     3     7
     9     3    -1     2
     1    30     3     7
     6     6     3    -2

If we wish to replace several elements at one time, we can use the colon operator. Typing

A(3,1:4) = [1 3 3 5]

or

A(3,:) = [1 3 3 5]

replaces all elements in the third row of the array A. The colon operator also has several other uses in MATLAB, for instance as a shortcut for entering array elements, such as


c = 0 : 10

which creates a vector, or a one-dimensional array with a single row, containing all integers from 0 to 10. The resultant MATLAB response is

c =
     0     1     2     3     4     5     6     7     8     9    10

Note that this statement creates 11 elements, i.e. the integers from 1 to 10 plus the zero. A common error when indexing arrays is to ignore the zero and therefore to expect 10 elements instead of the 11 in our example. We can check this from the output of whos.

Name      Size            Bytes  Class     Attributes

  A         4x4               128  double
  ans       1x1                 8  double
  b         1x1                 8  double
  c         1x11               88  double
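The off-by-one pitfall mentioned above can be checked directly in the Command Window; a short sketch:

```matlab
c = 0 : 10;

% The first element is the zero, not the one
c(1)         % returns 0

% The vector therefore has 11 elements, not 10
length(c)    % returns 11
```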

The above command creates only integers, i.e. the interval between the array elements is one unit. However, an arbitrary interval can be defined, for example 0.5 units. This is used in a subsequent chapter to create evenly-spaced time vectors for time-series analysis. Typing

c = 1 : 0.5 : 10

results in the display

c =
  Columns 1 through 6
    1.0000    1.5000    2.0000    2.5000    3.0000    3.5000
  Columns 7 through 12
    4.0000    4.5000    5.0000    5.5000    6.0000    6.5000
  Columns 13 through 18
    7.0000    7.5000    8.0000    8.5000    9.0000    9.5000
  Column 19
   10.0000

which autowraps the lines that are longer than the width of the Command Window. The display of the values of a variable can be interrupted by pressing Ctrl+C (Control+C) on the keyboard. This interruption affects only the output in the Command Window; the actual command is still processed but the result is not displayed.


MATLAB provides standard arithmetic operators for addition (+) and subtraction (–). The asterisk (*) denotes matrix multiplication involving inner products between rows and columns. For instance, to multiply the matrix A by a new matrix B,

B = [4 2 6 5; 7 8 5 6; 2 1 -8 -9; 3 1 2 3];

we type

C = A * B'

where ' is the complex conjugate transpose, which turns rows into columns and columns into rows. This generates the output

C =
    69   103   -79    37
    46    94    11    34
    53    76   -64    27
    44    93    12    24

In linear algebra, matrices are used to keep track of the coefficients of linear transformations. The multiplication of two matrices represents the combination of two linear transformations into a single transformation. Matrix multiplication is not commutative, i.e. A*B' and B*A' yield different results in most cases. Similarly, MATLAB allows matrix divisions representing different transformations, with / as the operator for right-hand matrix division and \ as the operator for left-hand division. Finally, the software also allows powers of matrices (^). In earth sciences, however, matrices are often simply used as two-dimensional arrays of numerical data rather than as matrices sensu stricto representing linear transformations. Arithmetic operations on such arrays are carried out element-by-element. While this makes no difference for addition and subtraction, it does affect multiplicative operations. MATLAB uses a dot (.) as part of the notation for these operations. As an example, multiplying A and B element-by-element is performed by typing

C = A .* B

which generates the output

C =
     8     8    18    35
    63    24    -5    12
     2     3   -24   -45
    18     6     6    -6
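The claim above that matrix multiplication is not commutative is easy to verify; a brief sketch using the two matrices of this session (note that the third row of A was changed earlier to [1 3 3 5]):

```matlab
A = [2 4 3 7; 9 3 -1 2; 1 3 3 5; 6 6 3 -2];
B = [4 2 6 5; 7 8 5 6; 2 1 -8 -9; 3 1 2 3];

C1 = A * B';    % multiply A by the transpose of B
C2 = B * A';    % reverse the order of the factors

% The two products differ; in fact C2 is the transpose of C1
isequal(C1,C2)     % returns logical 0 (false)
isequal(C1,C2')    % returns logical 1 (true)
```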

2.4 Array Manipulation

MATLAB provides a wide range of functions with which to manipulate arrays (or matrices). This section introduces the most important functions for array manipulation, which are used later in the book. We first clear the workspace and create two arrays, A and B, by typing

clear, close all, clc

A = [2 4 3; 9 3 -1]
B = [1 9 3; 6 6 3]

which yields

A =
     2     4     3
     9     3    -1

B =
     1     9     3
     6     6     3

When we work with arrays, we sometimes need to concatenate two or more arrays into a single array. We can either use cat(dim,A,B) with dim=1 to concatenate the arrays A and B along the first dimension (i.e. along the rows), or we can use the vertcat function to concatenate the arrays A and B vertically. By typing either

C = cat(1,A,B)

or

C = vertcat(A,B)

we obtain (in both cases)

C =
     2     4     3
     9     3    -1
     1     9     3
     6     6     3

Similarly, we can concatenate arrays horizontally, i.e. concatenate the arrays along the second dimension (i.e. along the columns), by typing

D = cat(2,A,B)


or using the horzcat function instead

D = horzcat(A,B)

both of which yield

D =
     2     4     3     1     9     3
     9     3    -1     6     6     3

When working with satellite images we often concatenate three spectral bands into three-dimensional arrays of the colors red, green, and blue (RGB) (Sects. 3.2 and 5.4). We again use cat(dim,A,B), this time with dim=3, to concatenate the arrays A and B along the third dimension by typing

E = cat(3,A,B)

which yields

E(:,:,1) =
     2     4     3
     9     3    -1

E(:,:,2) =
     1     9     3
     6     6     3

Typing

whos

yields

Name      Size            Bytes  Class     Attributes

  A         2x3                48  double
  B         2x3                48  double
  C         4x3                96  double
  D         2x6                96  double
  E         2x3x2              96  double

indicating that we have now created a three-dimensional array, as the size 2-by-3-by-2 suggests. Alternatively, we can use

size(E)


which yields

ans =
     2     3     2

to see that the array has 2 rows, 3 columns, and 2 layers in the third dimension. Using length instead of size,

length(A)

yields

ans =
     3

which returns the length of the largest array dimension only. Hence length is normally used to determine the length of a one-dimensional array (or vector), such as the evenly-spaced time axis c that was created in Sect. 2.3. MATLAB uses matrix-style indexing of arrays, with the (1,1) element located in the upper-left corner of the array. Other types of data imported into MATLAB may follow a different indexing convention. As an example, digital terrain models (introduced in Sect. 4.3) often have a different way of indexing and therefore need to be flipped in an up-down direction or, in other words, about a horizontal axis. Alternatively, we can flip arrays in a left-right direction (i.e. about a vertical axis). We can do this by using flipud to flip in an up-down direction and fliplr to flip in a left-right direction:

F = flipud(A)
F = fliplr(A)

yielding

F =
     9     3    -1
     2     4     3

for flipud and

F =
     3     4     2
    -1     3     9

for fliplr.

In more complex examples we can use circshift(A,K,dim) to circularly shift (i.e. rotate) arrays by K positions along the dimension dim. As an example, we can shift the array A by one position along the 2nd dimension (i.e. along the rows) by typing

G = circshift(A,1,2)


which yields

G =
     3     2     4
    -1     9     3

We can also use reshape(A,[m n]) to completely reshape the array. The result is an m-by-n array H whose elements are taken column-wise from A. As an example, we create a 3-by-2 array from A by typing

H = reshape(A,[3 2])

which yields

H =
     2     3
     9     3
     4    -1

Another important way to manipulate arrays is to sort their elements. As an example, we can use sort(C,dim,mode) with dim=1 and mode='ascend' to sort the elements of C in ascending order along the first array dimension (i.e. the rows). Typing

I = sort(C,1,'ascend')

yields

I =
     1     3    -1
     2     4     3
     6     6     3
     9     9     3

The function sortrows(C,column) with column=2 sorts the rows of C according to the second column. Typing

J = sortrows(C,2)

yields

J =
     9     3    -1
     2     4     3
     6     6     3
     1     9     3


Array manipulation also includes comparing arrays, for example by using ismember to check whether the elements of A are also found in B. Typing

A, B
K = ismember(A,B)

yields

A =
     2     4     3
     9     3    -1

B =
     1     9     3
     6     6     3

K =
  2x3 logical array
   0   0   1
   1   1   0

K is a logical array, i.e. an array of logical values 0 (false) or 1 (true). The element K(i,j) is zero if A(i,j) is not found in B, and one if A(i,j) is found in B. We can also locate elements within A for which a statement is true. For example, we can locate elements with values less than zero and replace them with NaNs, the MATLAB representation for Not-a-Number, which can be used to mark missing data or gaps.

L = A;
L(find(L<0)) = NaN

An if–then construct evaluates a logical expression and executes the enclosed commands if it is true, for example

if A >= B
    disp('A is not less than B')
end
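The NaN replacement above can also be written with logical indexing, i.e. without find; a brief equivalent sketch:

```matlab
A = [2 4 3; 9 3 -1];

% Copy A and replace all negative elements by NaN
% using a logical array as the index
L = A;
L(L<0) = NaN     % returns [2 4 3; 9 3 NaN]
```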

The for loops and if–then constructs are extensively used in the following chapters of the book. For other aspects of programming, please refer to the MATLAB documentation (MathWorks 2021a and c).
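As a minimal illustration of such control flow (the vector x here is invented for the example), the following loop sums only the positive elements of a vector:

```matlab
x = [3 -1 4 -1 5];

% Sum only the positive elements of x using a for loop
% combined with an if-then construct
s = 0;
for i = 1 : length(x)
    if x(i) > 0
        s = s + x(i);
    end
end

s     % returns 12
```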

3.5 Scripts and Functions

MATLAB is a powerful programming language. All files containing MATLAB code use .m as an extension and are therefore called M-files. These files contain ASCII text and can be edited using a standard text editor. However, the built-in Editor color-highlights various syntax elements, for example comments in green, keywords such as if, for, and end in blue, and character strings in purple. This syntax highlighting facilitates MATLAB coding. MATLAB uses two types of M-files: scripts and functions. Whereas scripts are series of commands that operate on data in the workspace, functions are true algorithms with input and output variables. The advantages and disadvantages of the two types of M-file will now be illustrated by an example. We first start the Editor by typing

edit

This opens a new window named untitled. Next, we generate a simple MATLAB script by typing a series of commands to calculate the average of the elements of a data array x.

[m,n] = size(x);
if m == 1
    m = n;
end
sum(x)/m


The first line of the script yields the dimensions of the variable x, using the command size. In our example, x should be either a column vector, i.e. an array with a single column and dimensions (m,1), or a row vector, i.e. an array with a single row and dimensions (1,n). The if statement evaluates a logical expression and executes a group of commands if this expression is true. The end keyword terminates the group of commands. In the example, the if–then construct picks either m or n, depending on whether m==1 is false or true. Here, the double equal sign == makes an element-by-element comparison between the variables (or numbers) to its left and right and returns an array of the same size, made up of elements set to logical 1 where the relationship is true and to logical 0 where it is not. In our example, m==1 returns 1 if m equals 1 and 0 if m equals any other value. The last line of the script computes the average by dividing the sum of elements by m or n. We do not use a semicolon here, so that the result is displayed. We can now save our new M-file as average.m and type

clear, close all, clc
x = [3 6 2 -3 8];

in the Command Window to define an example array x. We then type

average

without the extension .m to run our script, and obtain the average of the elements of the array x as output.

ans =
    3.2000

After typing

whos

we see that the workspace now contains

Name      Size            Bytes  Class     Attributes

  ans       1x1                 8  double
  m         1x1                 8  double
  n         1x1                 8  double
  x         1x5                40  double

3 MATLAB Programming

The listed variables are the example array x and the output of the function size, m and n. The result of the operation is stored in the variable ans. Since the default variable ans might be overwritten during a succeeding operation, we need to define a different variable. Typing

a = average

however, results in the error message

Attempt to execute SCRIPT average as a function:
/Users/trauth/Desktop/average.m

We cannot assign a variable to the output of a script. Moreover, all variables defined and used in the script appear in the workspace; in our example, these are the variables m and n. Scripts contain sequences of commands that are applied to variables in the workspace. MATLAB functions, however, allow inputs and outputs to be defined but do not automatically import variables from the workspace. To convert the above script into a function we need to introduce the following modifications (Fig. 3.2):

function y = average(x)
%AVERAGE Average value.
%   AVERAGE(X) is the average of the elements in the array X.

%   By Martin Trauth, 27 June 2014

[m,n] = size(x);
if m == 1
    m = n;
end
y = sum(x)/m;

The first line now contains the keyword function, the function name average, the input x, and the output y. The next two lines contain comments, as indicated by the percent signs, separated by an empty line. The second comment line contains the author's name and the version of the M-file. The rest of the file contains the actual operations. The last line now defines the value of the output variable y, and this line is terminated by a semicolon to suppress the display of the result in the Command Window. We then type

help average


Fig. 3.2 Screenshot of the MATLAB Editor showing the function average. The function starts with a line containing the keyword function, the name of the function average, the input variable x, and the output variable y. The subsequent lines contain the output of help average, the copyright and version information, and the actual MATLAB code for computing the average using this function.

which displays the first block of contiguous comment lines. The first executable statement (or blank line, in our example) effectively ends the help section and therefore the output of help. We are now independent of the variable names used in our function. The workspace can now be cleared and a new data vector defined.

clear
data = [3 6 2 -3 8];

Our function can then be run by the statement

result = average(data);

This clearly illustrates the advantages of functions compared to scripts. Typing

whos



results in

Name        Size            Bytes  Class     Attributes

  data        1x5                40  double
  result      1x1                 8  double

revealing that none of the variables used in the function appear in the workspace. Only the input and output, as defined by the user, are stored in the workspace. M-files can therefore be applied to data as if they were real functions, whereas scripts contain sequences of commands that are applied to the variables in the workspace. If we want variables such as m and n to also appear in the workspace, they must be defined as global variables in both the function and the workspace; otherwise they are considered to be local variables. We therefore add one line with the command global to the function average:

function y = average(x)
%AVERAGE Average value.
%   AVERAGE(X) is the average of the elements in the array X.

%   By Martin Trauth, June 27, 2014

global m n

[m,n] = size(x);
if m == 1
    m = n;
end
y = sum(x)/m;

We now type

global m n

in the Command Window. After running the function as described in the previous example, we find the two variables m and n in the workspace. We have therefore transferred the variables m and n from the function average to the workspace.

3.6 Creating Graphical User Interfaces

Almost all the methods of data analysis presented in this book are in the form of MATLAB scripts, i.e. series of commands that operate on data in the workspace (Sect. 3.5). The MATLAB commands provided by MathWorks, however, are mostly functions, i.e. algorithms with input and output variables (Sect. 3.5). The most convenient variants of these functions are those with a graphical user interface (GUI). A GUI in MATLAB is a graphical display in one or more windows


containing controls (or components) that enable the user to perform interactive tasks without typing commands in the Command Window or writing a script in the Editor. These components include pull-down menus, pushbuttons, sliders, text input fields, and more. The GUI can read and write data files, as well as perform many types of computation and display the results graphically. The manual entitled MATLAB App Building (MathWorks 2021b) provides a comprehensive guide to the creation of GUIs with MATLAB. This manual, however, demonstrates a rather complex example with many interactive elements rather than providing the simplest possible example of a GUI. The following text therefore provides a very simple example of a GUI that computes and displays a Gaussian function for a mean and a standard deviation that can be defined by the user. Creating such a simple GUI with MATLAB requires two steps: the first step involves designing the layout of the GUI and the second step involves adding functions to the components of the GUI. One way to create a graphical user interface with MATLAB is to use the GUI Design Environment (GUIDE). We start GUIDE by typing

guide

in the Command Window. Calling up GUIDE opens the GUIDE Quick Start dialog, where we can choose either to open a previously created GUI or to create a new one from a template. From the dialog we choose the GUIDE template Blank GUI (Default) and click on OK, after which the GUIDE Layout Editor appears. We first enable Show names in component palette in the GUIDE Preferences under the File menu and click on OK. We then select Grid and Rulers from the Tools menu and enable Show rulers. The GUIDE Layout Editor displays an empty layout with dimensions of 670-by-388 pixels. We resize the layout to 500-by-300 pixels by clicking and dragging the lower right corner of the GUI. We then place components such as static text, edit text, and axes onto the GUI by choosing the corresponding controls from the component palette. In our example, we place two Edit Text areas on the left side of the GUI, with Static Text areas containing the titles Mean and Standard Deviation above them. Double-clicking on the static text areas brings up the Property Inspector, in which we can modify the properties of the components. We change the String of the static text areas to Mean and Standard Deviation. We can also change other properties of the text, such as the FontName, FontSize, BackgroundColor, and HorizontalAlignment. Instead of the default Edit Text content of the edit text areas, we choose 0 for the mean and 1 for the standard deviation. We now place an axis system with a dimension of 250-by-200 pixels on the right side of the GUI, save the GUI, and then activate it by selecting Run from the Tools menu. GUIDE displays a dialog box with the question Activating will save changes…?, where we click on Yes. In the subsequent Save As dialog box we define a FIG-file name such as gaussiantool.fig.


GUIDE then saves this figure file, together with the corresponding MATLAB code, in a second file named gaussiantool.m, after which the MATLAB code is opened in the Editor and the default GUI is opened in a Figure Window with no menu or toolbar (Fig. 3.3). As we can see, GUIDE has automatically programmed the code for our GUI layout, including an initialization section at the beginning of the file that we should not edit. This code is included in the main routine named gaussiantool. The file gaussiantool.m also contains other functions called by gaussiantool, for instance the functions gaussiantool_Opening_Fcn (executed before gaussiantool is made visible), gaussiantool_OutputFcn (sending output to the command line, not used here), edit1_CreateFcn and edit2_CreateFcn (initializing the edit text areas when they are created), and edit1_Callback and edit2_Callback (accepting text input and returning this input, either as text or as a double-precision number).

Fig. 3.3 Screenshot of the graphical user interface (GUI) gaussiantool for plotting a Gaussian function with a given mean and standard deviation. The GUI allows the values of the mean and standard deviation to be changed to update the graphics on the right. The GUI has been created using the MATLAB App Designer.


We now add code to our GUI gaussiantool. First, we add initial values for the global variables mmean and mstd in the opening function gaussiantool_Opening_Fcn by adding the following lines after the last comment line marked by % in the first column:

global mmean mstd
mmean = 0;
mstd = 1;

The two variables must be global because they are used in the callbacks that we edit next (as in Sect. 3.5). Callbacks are functions that are passed as arguments to other functions, to be called when a particular event occurs. The first of these callbacks, edit1_Callback, gets three more lines of code after the last comment line:

global mmean
mmean = str2double(get(hObject,'String'));
calculating_gaussian(hObject, eventdata, handles)

The first line defines the global variable mmean, which is then obtained by converting the text input into a double-precision number with str2double in the second line. The function edit1_Callback then calls the function calculating_gaussian, which is a new function at the end of the file gaussiantool.m. This function computes and displays the Gaussian function with a mean of mmean and a standard deviation of mstd.

function calculating_gaussian(hObject, eventdata, handles)
% hObject    handle to edit2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

global mmean mstd

x = -10 : 0.1 : 10;
y = normpdf(x,mmean,mstd);
plot(x,y)

The second callback, edit2_Callback, picks the value of the standard deviation mstd from the second Edit Text area, which is then also used by the function calculating_gaussian.

global mstd
mstd = str2double(get(hObject,'String'));
calculating_gaussian(hObject, eventdata, handles)


After saving the file gaussiantool.m we can run the new GUI by typing

gaussiantool

in the Command Window. The GUI starts up, following which we can change the values of the mean and the standard deviation, and then press the Return key. The plot on the right is updated with each press of the Return key. Using

edit gaussiantool
guide gaussiantool

we can open the GUI code and the Figure Window for further edits.

The second way to create a graphical user interface with MATLAB is to use the App Designer. We start the App Designer by typing

appdesigner

in the Command Window. As before, we drag and drop components (such as two Edit Text areas) onto the left side of the GUI, with static text labels that we change to Mean and Standard Deviation. Below these, we place a Button whose label we change to Plot. We then place an axis to the right of the GUI. Next, we save and activate the GUI by selecting Run from the toolbar of the App Designer. We then specify a name such as gaussiantool.mlapp in the Save As dialog box. The App Designer then saves the app, and the default GUI opens in a Figure Window entitled UI Figure, with no menu or toolbar. As before with GUIDE, the App Designer has automatically programmed the code of our GUI layout, which we can view by switching from the Design View to the Code View in the tool's user interface. In contrast to GUIDE, the App Designer allows us to edit only those parts of the code that are highlighted in white; the lines highlighted in gray cannot be edited. We first change the function MeanEditFieldValueChanged by defining a global variable mmean. We then assign the value of app.MeanEditField.Value to mmean in this callback. The function now looks like this:

% Value changed function: MeanEditField
function MeanEditFieldValueChanged(app, event)
    global mmean
    mmean = app.MeanEditField.Value;
end


We then change the function used to enter the standard deviation:

% Value changed function: StandardDeviationEditField
function StandardDeviationEditFieldValueChanged(app, event)
    global mstd
    mstd = app.StandardDeviationEditField.Value;
end

Finally, we modify the code for the push button:

% Button pushed function: PlotButton
function PlotButtonPushed(app, event)
    global mmean mstd
    x = -10 : 0.1 : 10;
    y = normpdf(x,mmean,mstd);
    plot(app.UIAxes,x,y)
end

The function first defines the two global variables mmean and mstd, and then creates an array x running from –10 to +10 in steps of 0.1. It then computes and displays the Gaussian function with a mean of mmean and a standard deviation of mstd. After saving the file gaussiantool.mlapp we can run the new GUI by typing

gaussiantool

in the Command Window. The GUI starts up, following which we can change the values of the mean and the standard deviation, and then click on Plot. The plot on the right is updated with each click on the Plot button. Using

appdesigner gaussiantool

we can open the GUI for further edits. Such GUIs in MATLAB allow very direct and intuitive handling of functions, and they can also include animations and audio-video signals. However, GUIs always require interaction with the user, who needs to click on push buttons, move sliders, and edit text input fields while the data is being analyzed. The automatic processing of large quantities of data is therefore more often carried out using scripts and functions with no graphical user interface.

3.7 Exercises

The following exercises are designed to help us learn how to program MATLAB using LEGO MINDSTORMS motors and sensors, as well as how to read data from hard drives and stream measurements from LEGO and smartphone sensors. MATLAB hardware support is available from MathWorks for both LEGO MINDSTORMS and smartphones running MATLAB Mobile; in later exercises we will also use the MATLAB hardware support that is available for webcams. This hardware support has ensured that exercises combining MATLAB with hardware such as LEGO MINDSTORMS or smartphones have been developed at many universities, especially at the Institute for Imaging and Computer Vision of the RWTH Aachen University (e.g. Behrens et al. 2010). The practical course developed there was an inspiration for the development of the data acquisition practical course at the University of Potsdam. The following exercise provides practice in reading complex data sets from a hard drive. This is necessary when, for example, large data sets of laboratory measurements are made available via a cloud service or on a hard drive. MATLAB has very simple but versatile functions for reading data (e.g. load, textscan), as well as functions for specialized data sets (e.g. hdfread). The third part of the exercise deals with acquiring and streaming data from external devices; in our example we use data from smartphones and their built-in sensors. The MATLAB Mobile app is very popular; it allows connections to be made between smartphones and other devices that have MATLAB installed, as well as with MATLAB Online or with ThingSpeak (the Internet of Things platform of MathWorks).
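As a brief illustration of the file-reading functions mentioned above, the following sketch reads a plain text file with one header line and three numeric columns; the file name data.txt and its layout are assumptions made for this example only:

```matlab
% Read a text file with one header line and three
% space- or tab-separated numeric columns
fid = fopen('data.txt');
C = textscan(fid,'%f %f %f','HeaderLines',1);
fclose(fid);

% textscan returns a cell array with one cell per column;
% convert it into an ordinary numeric array
data = cell2mat(C);

% A purely numeric file without a header can be read even more simply:
% data = load('data.txt');
```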

3.7.1 Communicating with the LEGO MINDSTORMS EV3 Brick

The LEGO MINDSTORMS EV3 programmable Brick plays a central role in the exercises of the following chapters. According to the LEGO® MINDSTORMS® EV3 Hardware Developer Kit (The LEGO Group 2013), the EV3 Brick is a LINUX computer with a 32-bit ARM9 processor (the Texas Instruments AM1808) and 64 MB of DDR RAM. It has four output ports (A–D) used for controlling motors and four input ports (1–4) for reading sensors connected to the Brick. The EV3 Brick supports USB, Bluetooth, and (via an external WiFi dongle) WiFi communication. The LEGO MINDSTORMS EV3 product line comes with the EV3 Software, which uses the graphical programming language EV3-G developed by LEGO. Several textual languages can be used instead of EV3-G, such as MATLAB, Python, and C++. MathWorks offers a LEGO MINDSTORMS EV3 Support from MATLAB software package, as it did for the predecessor system NXT. The following exercise demonstrates how to communicate with a LEGO MINDSTORMS EV3 Brick using MATLAB.


Exercise

1. Download the LEGO MINDSTORMS Education EV3 software from the LEGO Education webpage. Install the software on your computer and read the manual.
2. Download the LEGO MINDSTORMS EV3 Support from MATLAB software package from the manufacturer's webpage. Install the hardware support on your computer and read the manual.
3. Establish a USB, WiFi, or Bluetooth connection between your computer and the EV3 Brick.
4. Display the text Hello, World! for three seconds on the display of the EV3 Brick.
5. Play a tone with a frequency of 800 Hz for a duration of 500 ms.
6. Change the color of the status light of the EV3 Brick to flashing orange.

Solution

1. We need the LEGO MINDSTORMS Education EV3 Core Set, the LEGO MINDSTORMS Education EV3 Expansion Set, and either an Apple computer running macOS or a PC running Windows. We also need a WiFi USB adapter compatible with the EV3 Brick, such as the EDIMAX EW-7811UN Wireless USB Adapter.
2. During the exercise we will need MATLAB together with the Signal Processing Toolbox, the Statistics and Machine Learning Toolbox, the Image Processing Toolbox, and the Computer Vision Toolbox.
3. Next we need to download the LEGO MINDSTORMS Education EV3 software, which includes teacher resources, a documentation tool, building instructions, and tutorials. Check the Tools menu > Update Firmware to see whether there is a new version of the firmware for the EV3 Brick. We will not use this software or the LEGO EV3-G programming language during the exercise, but having up-to-date firmware helps to avoid problems when establishing USB, WiFi, and Bluetooth connections. To run the MATLAB/LEGO MINDSTORMS exercise we need the LEGO MINDSTORMS EV3 Support from MATLAB installed.
4. The easiest connection to establish is a USB connection, which provides a stable and fast connection with a computer running MATLAB. The advantage of a WiFi connection is that no cable is required, but transmission errors are very common. A Bluetooth connection is not recommended, since the MATLAB hardware support can then only read data from a single sensor.

MATLAB Script

We use the following MATLAB script for tasks 4–6 of the exercise. We first clear the workspace and the Command Window and close all Figure Windows.

clear, clc, close all


We then connect the EV3 Brick to the computer using the USB cable that comes with the LEGO MINDSTORMS package. In order to establish a WiFi connection we need the internal IP address (e.g. 10.0.1.3) assigned by the router, together with the serial number (e.g. 00165350bf24) of the EV3 Brick, which can be found under Brick Info > ID in the Brick's Settings menu. To establish a Bluetooth connection, although this is not recommended for the reasons mentioned above, we need the Serial Port of the Brick, which is /dev/tty.EV3_1-SerialPort as long as the factory settings of the Brick have not been changed. In the following script we are first prompted to choose one of the three options for connecting the EV3 Brick and the computer, before MATLAB establishes the connection.

mynumber = ...
    input('Select a connection: USB=1, WiFi=2, Bluetooth=3 ');

switch mynumber
    case 1
        mylego = legoev3('USB')
    case 2
        mylego = legoev3('Wifi','10.0.1.3','00165350bf24')
    case 3
        mylego = legoev3('Bluetooth','/dev/tty.EV3_1-SerialPort')
    otherwise
        msg = 'Invalid number!';
        error(msg)
end

Once a connection has been successfully established between the computer and the EV3 Brick, we display Hello World! on the LCD screen for three seconds and then clear the screen.

writeLCD(mylego,'Hello World!')
pause(3)
clearLCD(mylego)

Then we play an 800 Hz tone for 0.5 s at volume level 2 on the speaker of the EV3 Brick.

playTone(mylego,800,0.5,2)

Finally, we change the color of the EV3 Brick from green to orange for three seconds, and then back to green.


writeStatusLight(mylego,'orange','pulsing')
pause(3)
writeStatusLight(mylego,'green','solid')

Now we are ready to use the EV3 Brick, together with MATLAB, for our data acquisition exercises.

3.7.2 Controlling EV3 Motors Using an Ultrasonic Sensor

The second MATLAB/LEGO MINDSTORMS exercise demonstrates how to use MATLAB to control motors and to read data from sensors such as an ultrasonic sensor. We also learn to deal with the peculiarities of sequential programming and control flow in MATLAB. More specific aspects of the exercise involve the use of tic and toc for time measurements, the use of counters, the recording of measurement results, and the graphical presentation and evaluation of measurements. In order to be able to work on the advanced exercises in this chapter, it is important to understand how to control the LEGO MINDSTORMS motors with MATLAB. Some of the exercises use motors to move various types of vehicles, or devices for scanning 2D and 3D objects. In this exercise, a simple vehicle is powered by two LEGO MINDSTORMS EV3 Large Servo Motors. The vehicle is equipped with a LEGO MINDSTORMS EV3 Ultrasonic Sensor that continuously determines distances to objects in the direction of movement. A while loop is used to change the direction of the vehicle as soon as it gets to within 0.3 m of an object. During the vehicle's operation the distance from the object and the angle of rotation (in degrees) of the motors are recorded, together with the time, and displayed on a plot with two y-axes.

Exercise

1. Build a four-wheeled vehicle with two large motors and with an ultrasonic sensor at the front.
2. Program the vehicle in such a way that it drives forward with 50% engine power until it is 0.3 m away from an object, and then drives backward with 50% engine power for two seconds.
3. Read the rotation of the motors and the distance data from the ultrasonic sensor. Display them graphically over time (in seconds).
4. Discuss possible time lags in the response of the vehicle to the MATLAB commands and between the sensor data and the motors.

Solution

1. First we need to build the vehicle according to the LEGO building instructions (Fig. 3.4).
The building instructions are in a LEGO Digital Designer Model (LXF) file, which can be opened with the free LEGO Digital Designer software, available for computers running either macOS or Microsoft Windows. After


Fig. 3.4 LEGO MINDSTORMS EV3 vehicle powered by two LEGO MINDSTORMS EV3 Large Servo Motors. The vehicle is equipped with a LEGO MINDSTORMS EV3 Ultrasonic Sensor that continuously determines the distance to objects in the direction of movement. A while loop is used to change the direction of the vehicle as soon as it comes within 0.3 m of an object.

launching the software we can use the View menu to switch from the construction mode to the building instructions mode.
2. Write a MATLAB script that establishes connections to the motors and the ultrasonic sensor and then uses a while loop to move the vehicle until it is 0.3 m from an object. The script then uses another while loop to move the vehicle backward for two seconds. The functions tic and toc are used to measure time. The function yyaxis is used to display both the distance traveled and the rotation over time in a single graphic.

MATLAB Script

Download the LEGO building instructions and the MATLAB script from Springer Extras. The LXF file contains the building instructions to be used with the free LEGO Digital Designer software. We first clear the workspace and the Command Window and close all Figure Windows.


clear, clc, close all

We use a USB connection to connect the EV3 Brick with the computer.

mylego = legoev3('USB');

We then create connections between the EV3 Brick and the two motors, which are connected to the EV3 output ports A and D, and the ultrasonic sensor, which is connected to the EV3 input port 1. The connections for the motors A and D must be defined using motor in the MATLAB script, but the input port for the sensor is automatically detected by MATLAB using sonicSensor. Using motor and sonicSensor, the object handles mymotor_A, mymotor_D, and mysonicsensor are created for these connections; they will later be used to control the motors and to read the data from the sensor.

mymotor_A = motor(mylego,'A');
mymotor_D = motor(mylego,'D');
mysonicsensor = sonicSensor(mylego);

We now reset the motors and define a speed of 50% for the motors.

clear rotation_* distance* t2 intensity* img* time
resetRotation(mymotor_A)
resetRotation(mymotor_D)
mymotor_A.Speed = 50;
mymotor_D.Speed = 50;

We then read the first distance to the nearest object using readDistance, and the first measurement of the rotation angles of the two motors using readRotation.

distance = readDistance(mysonicsensor);
rotation_A(1) = readRotation(mymotor_A);
rotation_D(1) = readRotation(mymotor_D);

We now define the counter i, assign to it a starting value of 1, and reset the stopwatch to 0 using tic.

i = 1;
tic
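The counter-plus-stopwatch logging pattern used in this exercise can be tried without any hardware. The following minimal sketch, in which an arbitrary 0.1 s pause stands in for a sensor reading, records the elapsed time of each pass through a while loop:

```matlab
% Hardware-free sketch of the tic/toc logging pattern (arbitrary 0.1 s pause)
tic                        % reset the stopwatch
i = 1;
time(1) = toc;             % elapsed seconds since tic
while time(i) < 0.5        % loop for about half a second
    i = i + 1;
    pause(0.1)             % placeholder for a sensor reading
    time(i) = toc;         % record the elapsed time of this pass
end
time                       % monotonically increasing elapsed times
```

The vector time grows by one element per pass, exactly as in the vehicle script below.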

We then start the two motors. We use a while loop to keep the motors running until the sensor reaches 0.3 m from the object. This is achieved by reading the distance during


each run through the while loop and continuing to loop as long as the statement distance > 0.30 is true. If this is no longer the case, we exit the while loop and stop the two motors. The counter i counts the number of runs through the while loop. In addition to the distance traveled, the angles rotation_A and rotation_D through which the motors have rotated since the start are also measured. The elapsed time since the stopwatch was reset using tic is recorded in time for each run through the while loop. After leaving the while loop, we store the total time of the forward motion in time_forward.

start(mymotor_A)
start(mymotor_D)
while distance > 0.30
    i = i + 1;
    distance(i) = readDistance(mysonicsensor);
    rotation_A(i) = readRotation(mymotor_A);
    rotation_D(i) = readRotation(mymotor_D);
    time(i) = toc;
end
stop(mymotor_A)
stop(mymotor_D)
time_forward = time(i);

The vehicle then moves backward at –50% speed for two seconds, i.e. as long as time is less than the total time time_forward taken to drive forward plus two seconds. We again record the distances and rotation angles within the while loop.

mymotor_A.Speed = -50;
mymotor_D.Speed = -50;
start(mymotor_A)
start(mymotor_D)
while time(i) < time_forward+2
    i = i + 1;
    distance(i) = readDistance(mysonicsensor);
    rotation_A(i) = readRotation(mymotor_A);
    rotation_D(i) = readRotation(mymotor_D);
    time(i) = toc;
end


stop(mymotor_A)
stop(mymotor_D)
clear i

We then display the measured distances and the angles (in degrees) in a graph with two different y-axes using yyaxis.

figure('Position',[200 600 800 400],...
    'Color',[1 1 1]);
axes('LineWidth',1,...
    'FontName','Helvetica',...
    'FontSize',12,...
    'XGrid','on',...
    'YGrid','on');
yyaxis left
line(time,distance,...
    'LineWidth',1);
xlabel('Time (s)',...
    'FontName','Helvetica',...
    'FontSize',12);
ylabel('Distance (m)',...
    'FontName','Helvetica',...
    'FontSize',12);
yyaxis right
line(time,rotation_A,...
    'LineWidth',1);
ylabel('Motor Rotation Value (degrees)',...
    'FontName','Helvetica',...
    'FontSize',12);

Alternatively, we can display the rotation of the motors in full revolutions instead of degrees by dividing the angles of rotation by 360 degrees (note that dividing by 360 yields revolutions, not radians). Surprisingly, we only get the integer values of either 0, 1, or 2 for the number of revolutions. Typing whos reveals that the angles of rotation are stored in a 32-bit integer (int32) array, which means that if we divide the angles by 360 degrees, we get values that are rounded to the nearest integer value.

rotation_A_revolutions = rotation_A/360;
figure('Position',[200 600 800 400],...
    'Color',[1 1 1]);
axes('LineWidth',1,...
    'FontName','Helvetica',...
    'FontSize',12,...
    'XGrid','on',...
    'YGrid','on');
yyaxis left
line(time,distance,...
    'LineWidth',1);
xlabel('Time (s)',...
    'FontName','Helvetica',...
    'FontSize',12);
ylabel('Distance (m)',...
    'FontName','Helvetica',...
    'FontSize',12);
yyaxis right
line(time,rotation_A_revolutions,...
    'LineWidth',1);
ylabel('Motor Rotation Value (revolutions)',...
    'FontName','Helvetica',...
    'FontSize',12);
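The integer rounding of int32 values can be checked in isolation, without any hardware, using a few hypothetical rotation angles:

```matlab
% int32 division rounds to the nearest integer (hypothetical angles)
x = int32([359 360 540 720]);    % rotation angles in degrees as int32
x/360                            % int32 result: 1 1 2 2
double(x)/360                    % double result: 0.9972 1.0000 1.5000 2.0000
```

Since MATLAB rounds integer arithmetic to the nearest integer, 359/360 becomes 1 and 540/360 becomes 2, which is exactly the behavior seen in the plot above.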

We therefore change the data type of the angles to double before converting the angles of rotation in degrees into full revolutions.

rotation_A_revolutions = double(rotation_A)/360;
figure('Position',[200 600 800 400],...
    'Color',[1 1 1]);
axes('LineWidth',1,...
    'FontName','Helvetica',...
    'FontSize',12,...
    'XGrid','on',...
    'YGrid','on');
yyaxis left
line(time,distance,...
    'LineWidth',1);
xlabel('Time (s)',...
    'FontName','Helvetica',...
    'FontSize',12);
ylabel('Distance (m)',...
    'FontName','Helvetica',...
    'FontSize',12);
yyaxis right
line(time,rotation_A_revolutions,...
    'LineWidth',1);
ylabel('Motor Rotation Value (revolutions)',...
    'FontName','Helvetica',...
    'FontSize',12);

The graph now looks as it should (Fig. 3.5), with the distances plotted as a blue curve and the angles of rotation as a red curve. The blue curve shows several interesting features. For example, it is slightly curved at the beginning indicating


Fig. 3.5 Plot of the distances driven by a LEGO MINDSTORMS EV3 vehicle towards an object (blue curve) and rotations of the motors that drive the vehicle (red curve). A while loop is used to change the direction of the vehicle as soon as it comes within 0.3 m of the object.

that the vehicle requires about half a second to reach full speed. It also reveals that the vehicle does not stop at a distance of exactly 0.3 m from the object, but at ≈0.27 m, requiring approximately 3 cm to accommodate the reaction time and braking distance. Finally, the curve is not smooth but slightly jagged, indicating noise due to random measurement errors. The curve of the rotations is much smoother, either because the measured values are more accurate or because measurements stored in the data type int32 do not have the resolution required to capture the noise. We cannot determine the precision and accuracy of the measurements in this experiment, but this is the subject of an exercise in the next chapter of the book.

3.7.3 Reading Complex Text Files with MATLAB

MATLAB allows the traditional reading of data from a storage device (e.g. hard drives, flash memory, digital versatile discs), the importing of data from a cloud service (e.g. DropBox, MATLAB Drive, ThingSpeak), and direct importing or streaming in real time from hardware with built-in sensors using MATLAB hardware support packages (e.g. the LEGO MINDSTORMS EV3 hardware support, the MATLAB support packages for iOS and Android sensors, and other support packages for measuring instruments of all kinds). The next exercise provides practice with importing data from a file on a hard drive.

Exercise

1. Import the content of the file exercise_3_7_3_data_1.txt into the MATLAB workspace.


2. Display the concentrations of K and Ti, together with the K/Ti ratio, in three xy plots within a single Figure Window. Use x-axis labels, y-axis labels and a title.

Solution

1. To import the data we use the function textscan. Since the first three rows of the data are header rows we use the HeaderLines option in textscan to start importing the data from line 4. The columns of the data are separated by tabs, so we define the tab as a delimiter. We find a mixture of different data types in the file (e.g. text, integer, and floating-point values) and therefore have to use the formatSpec option in textscan to define the input format for the individual columns in the file.
2. The textscan function saves the imported data in a cell array, from which we then have to extract the desired columns to display the K, Ti, and K/Ti values. We use subplot to create three axis systems in the Figure Window and plot to display the individual data series (Fig. 3.6).

MATLAB Script

We first clear the workspace and the Command Window and close all Figure Windows.

clear, clc, close all

We then use fopen, textscan and fclose to read the data from the file.

fid = fopen('exercise_3_7_3_data_1.txt');
formstrg = [repmat('%s ',1,3),...
    repmat('%u %s ',1,2),'%u %s ',...
    repmat('%f ',1,71)];
A = textscan(fid,formstrg,...
    'Delimiter','\t',...
    'Headerlines',3);
fclose(fid);
clear fid

This script opens the file exercise_3_7_3_data_1.txt for read-only access using fopen and defines the file identifier fid, which is then used to read the text from the file using textscan and to write it into the cell array A. We then create the character string formstrg to be used with the formatSpec option of textscan. When creating formstrg we use repmat to replicate and tile the arrays, defining the conversion specifiers %u, %s, and %f. The parameter Headerlines is set to 3, which means that


Fig. 3.6 Plots of potassium and titanium concentrations in a sediment core, and the potassium/titanium ratio.

three header lines are ignored when reading the file. The function textscan concatenates its output into a single cell array A. The function fclose closes the file defined by fid. After reading the file and storing the content in A we browse the input file and learn from its header lines that the splice depth is in the 13th column. The potassium and titanium concentrations are stored in columns 43 and 70, respectively. We use curly brackets, {}, to access the three columns of interest and sort the rows in ascending order.

dataxrf1(:,1) = A{13};
dataxrf1(:,2) = A{43};
dataxrf1(:,3) = A{70};
dataxrf1 = sortrows(dataxrf1,1);
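The interplay between a textscan format string and the resulting cell array can be tried on a small in-memory example. The two-line, tab-delimited sample below is hypothetical and is not part of the exercise file:

```matlab
% Hypothetical tab-delimited sample with text, integer, and float columns
str = sprintf('core\t12\t2.50\ncore\t13\t3.75');
C = textscan(str,'%s %u %f','Delimiter','\t');
C{3}        % third column extracted with curly brackets: [2.5; 3.75]
```

Each conversion specifier (%s, %u, %f) produces one cell in C, so the curly-bracket indexing used above returns a whole column as a vector.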

We then display the K, Ti, and K/Ti values in three individual plots within a single Figure Window (Fig. 3.6).


figure('Position',[100 100 800 600],...
    'Color',[1 1 1])
subplot(3,1,1)
plot(dataxrf1(:,1),dataxrf1(:,2))
xlabel('Splice Depth [mbs]')
ylabel('Counts')
title('Potassium Concentration')
subplot(3,1,2)
plot(dataxrf1(:,1),dataxrf1(:,3))
xlabel('Splice Depth [mbs]')
ylabel('Counts')
title('Titanium Concentration')
subplot(3,1,3)
plot(dataxrf1(:,1),dataxrf1(:,2)./dataxrf1(:,3))
xlabel('Splice Depth [mbs]')
ylabel('Counts/Counts')
title('Potassium/Titanium Ratio')

We can save the figure in a portable network graphics (PNG) file using

print -dpng -r300 figure_2_exercise_2.8.3_1.png

Alternatively, we can store the file in an Encapsulated PostScript (EPS) file using the -depsc2 option in print.
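The EPS export could, for example, look like this (the output filename is an arbitrary assumption, not prescribed by the exercise):

```matlab
% Save the current figure as Encapsulated PostScript (hypothetical filename)
print -depsc2 -r300 figure_2_exercise_2.8.3_1.eps
```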

3.7.4 Smartphone Sensors with MATLAB Mobile

The introduction of smartphones and smartwatches with large numbers of built-in sensors, and apps that can read the data from these sensors, has created many new possibilities for data acquisition. In addition to one or more RGB cameras and microphones, other examples of sensors built into smartphones include Global Positioning System (GPS) sensors, xyz magnetometers for use as electronic compasses, gyroscopes and accelerometers, ambient light sensors, proximity sensors, and fingerprint sensors that are used to unlock phones and complete purchases. The types of software on smartphones that use motion sensors include apps for tracking cycling tours and hikes. Another family of sensor-app combinations is the health and fitness family, which includes heart rate and blood oxygen sensors used in combination with electrocardiogram and sleep monitoring apps, as well as the usual motion sensors. To read out raw data from such sensors we mainly use apps that allow the data acquisition to be controlled, e.g. by choosing a suitable sampling frequency. An example of such an app is the phyphox app, which is available for both iOS and Android smartphones (Staacks et al. 2018; Vieyra et al. 2018; Stampfer et al. 2020). This award-winning app and the associated teaching program were


developed at the 2nd Institute of Physics of the RWTH Aachen University to enable the use of iOS and Android smartphones as mobile physics laboratories. The app enables the data from smartphone sensors to be read out and saved for use in simple physics experiments. The recorded data can be exported in Microsoft Excel and comma-separated values (CSV) formats and transferred to a computer using the usual data transfer methods such as email, cloud services, or (on iOS devices) AirDrop, for subsequent processing. An alternative is MATLAB Mobile from MathWorks, which also reads out the data from the sensors and then processes and displays the data directly on the phone, using MATLAB (MathWorks 2021d). The data, as well as the results of the MATLAB analysis, can be stored on the MATLAB Drive cloud storage service, from where they can be further processed on a computer with MATLAB Desktop, or online with MATLAB Online (MathWorks 2021a, b, c). Alternatively, the data can be streamed directly to MATLAB Desktop or MATLAB Online using the MATLAB Drive Connector. The third option is to send the data collected by any network-compatible hardware (e.g. smartphones, cameras, and weather stations) to ThingSpeak (which was originally launched in 2010 by ioBridge and has integrated support from MATLAB), where it can be read, processed, and displayed. The following exercise shows how MATLAB Mobile can be used in conjunction with these cloud services to process sensor data from smartphones.

Exercise

1. Download and install MATLAB Mobile for iOS or Android devices on your smartphone. Connect the app with the MathWorks Cloud using your MathWorks account. Use MATLAB Drive to store the data collected by the sensors. Use the MATLAB Drive Connector to synchronize your data with MATLAB Online and MATLAB Desktop.
2. Use MATLAB Mobile to read the sensor data from the accelerometer on your device and display them with MATLAB Mobile on your smartphone.
3.
Stream your data to MATLAB Desktop using the MATLAB Drive Connector and display the data on your computer.

Solution

1. Install MATLAB Mobile on the smartphone and the MATLAB Drive Connector on the computer. Launch MATLAB Mobile, which first establishes a connection to the MathWorks Cloud with your MathWorks account. Click on the button in the top left corner of the app and select Sensors. Change the Sample rate to the highest possible rate, which is 100.0 Hz.
2. Select Log under Stream to in the SETTINGS to save the data on the smartphone. Enable the accelerometer by turning on the on/off switch to the right of the Acceleration label. Press the START button to record the accelerometer data, press the STOP button after about a second, and press Save in the Enter


log name pop-up dialog window. Choose a log name such as sensorlog_20201005_172246 that consists of the date and time at which the recording was made. The smartphone stores the data in a file with the name sensorlog_20201005_172246.mat in MATLAB Drive. Click on the button, choose Commands, and load the data using load.
3. Select MATLAB under Stream to in the SETTINGS to stream the data to MATLAB Desktop. In order to acquire data from the sensors in an iOS or Android device through a WiFi or cellular connection, we create the mobiledev object using m = mobiledev. We begin logging data from the smartphone by enabling the Logging property of m, i.e. by changing its value to m.Logging = 1, and stop logging using m.Logging = 0.

MATLAB Script

We use the following MATLAB script for tasks 1–3. This exercise uses an Apple iPhone 8, or any other smartphone with an accelerometer. We use MATLAB Mobile to record xyz acceleration for about one second, using the Log mode in the Sensors menu of the app. Once the recording has stopped we are prompted to define a file name under which to save the file on MATLAB Drive. The file is then copied to the desktop directory of MATLAB Drive. We move the file to our working folder or any other directory within the current MATLAB search path. We then clear the workspace and the Command Window and close all figures using

clear, clc, close all

We first import the data from the .mat file recorded with the smartphone.

load sensorlog_20201005_172246.mat

Typing whos yields

  Name              Size    Bytes   Class       Attributes

  Acceleration      140x3   5938    timetable

We can display the xyz acceleration data using

stackedplot(Acceleration)

and save the data in a .mat file using

save exercise_3_7_4_data_1.mat


Alternatively, we can stream the data directly to MATLAB Desktop using the MATLAB Drive Connector. We first clear the workspace and the Command Window and close all Figure Windows.

clear, clc, close all

We then create a mobiledev object using

m = mobiledev

We then enable the accelerometer by changing the value of the corresponding AccelerationSensorEnabled property of m to 1.

m.AccelerationSensorEnabled = 1;

We begin logging data from the smartphone by enabling the Logging property of m, i.e. by changing its value to m.Logging = 1, and stop logging using m.Logging = 0 after one second using pause(1).

m.Logging = 1;
pause(1)
m.Logging = 0;

We then extract the xyz accelerometer data stored in log, together with the timestamp from m, convert the resulting data into a timetable acclog, and display the result using stackedplot.

[log, timestamp] = accellog(m)
acclog = array2timetable(log,...
    'VariableNames',{'X' 'Y' 'Z'},...
    'RowTimes',seconds(timestamp))
stackedplot(acclog)

We can save the data in a .mat file using

save exercise_3_7_4_data_1.mat
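If no smartphone is at hand, the array2timetable and stackedplot part of this workflow can be tried with synthetic data; the sampling rate and the random accelerations below are arbitrary assumptions:

```matlab
% Synthetic accelerometer log: one second at 100 Hz (hypothetical values)
timestamp = (0:0.01:1)';                 % time stamps in seconds
log = randn(numel(timestamp),3);         % fake xyz accelerations
acclog = array2timetable(log,...
    'VariableNames',{'X' 'Y' 'Z'},...
    'RowTimes',seconds(timestamp));
stackedplot(acclog)
```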

This was a simple example of a connection using MATLAB Mobile to connect the sensors of a smartphone to MATLAB Desktop. In later chapters, this concept will be used in more sophisticated exercises.


3.7.5 Smartphone GPS Tracking with MATLAB Mobile

Since the introduction of smartphones with built-in GPS sensors, the recording of cycling and hiking trails, as well as location information for photos, has been the subject of numerous apps, both free and commercial. It is also possible to record tracks with MATLAB Mobile, with subsequent processing in MATLAB Desktop and display in map services such as OpenStreetMap. The following simple example demonstrates the use of MATLAB Mobile to record a bicycle tour in Potsdam, one third of which is through a forest and two thirds on asphalt cycle tracks with moderate slopes.

Exercise

1. Download and install MATLAB Mobile for iOS or Android devices on your smartphone. Connect the app with the MathWorks Cloud using your MathWorks account. Use MATLAB Drive to store the data collected by the sensors. Use the MATLAB Drive Connector to synchronize your data with MATLAB Online and MATLAB Desktop.
2. Use MATLAB Mobile to read the GPS sensor data on your device during a local bike tour.
3. Import the data from MATLAB Drive and display the data with MATLAB Desktop on a map on your computer.

Solution

1. Install MATLAB Mobile on your smartphone and the MATLAB Drive Connector on your computer. Launch MATLAB Mobile, which first establishes a connection to the MathWorks Cloud with your MathWorks account. Click on the button in the top left corner of the app and select Sensors. We use the default Sample rate of 10.0 Hz.
2. Select Log under Stream to in the SETTINGS to save the data on the smartphone. Enable the GPS sensor by turning on the on/off switch to the right of the Position label. Press the START button to record the GPS data while riding the bike, press the STOP button after completing the bike tour, and press Save in the Enter log name pop-up dialog window. Choose a log name such as sensorlog_20200925_170958 that consists of the date and time when the recording was made.
The smartphone stores the data in a file with the name sensorlog_20200925_170958.mat in MATLAB Drive.
3. Back home, the data have arrived in MATLAB Drive on the computer. We import the data with MATLAB Desktop and display the time series of the data using stackedplot, and also in OpenStreetMap using webmap.


MATLAB Script

We use the following MATLAB script for tasks 1–3. The exercise presented here uses an Apple iPhone 8, or any other smartphone with a built-in GPS sensor. We use MATLAB Mobile to record the position as latitudes and longitudes during a ≈37-min bike tour using the Log mode in the Sensors menu of the app. When we stop recording we are prompted to define a file name under which to save the file on MATLAB Drive. The file is then copied to the desktop directory of MATLAB Drive. We move the file to our working folder or any other directory within the current MATLAB search path. We then clear the workspace and the Command Window and close all Figure Windows using

clear, clc, close all

We first import the data from the .mat file recorded with the smartphone.

data = load('sensorlog_20200925_170958.mat');

Typing

data

yields

data =
  struct with fields:
       Acceleration: [22131x3 timetable]
      MagneticField: [22134x3 timetable]
        Orientation: [22134x3 timetable]
    AngularVelocity: [22131x3 timetable]
           Position: [2209x6 timetable]

suggesting that we have also recorded the data from all of the other sensors. We type

position = data.Position

which yields a timetable position with the columns timestamp, latitude, longitude, altitude (in meters above sea level), speed (in m/s), course (as the


clockwise angle from geographic north, in degrees), and hacc, which is the horizontal accuracy of the position estimate (in meters). We can convert the speed from m/s to km/h by typing

position.speed = 3.600*position.speed;

and calculate the mean speed using mean(position.speed)

which yields ans = 14.4742

suggesting that the mean speed is ≈14.5 km/h. We can display the data using stackedplot.

stackedplot(position)

Alternatively, we can display the bike tour in OpenStreetMap (Fig. 3.7).

webmap('openstreetmap')
wmline(position.latitude,position.longitude,...
    'Color',[0.1 0.5 0.8],...
    'LineWidth',5)

We can save the data in a .mat file using

save exercise_3_7_5_data_1.mat

This is a simple example of using MATLAB Mobile for GPS tracking and MATLAB Desktop to display the resulting data.


Fig. 3.7 Screenshot displaying a bicycle ride as a geographic line on a web map from OpenStreetMap.

Recommended Reading

Attaway S (2018) MATLAB: a practical introduction to programming and problem solving. Butterworth-Heinemann, Oxford Waltham
Behrens A, Atorf L, Schwann R, Neumann B, Schnitzler R, Balle J, Herold T, Telle A, Noll TG, Hameyer K, Aach T (2010) MATLAB meets LEGO mindstorms—a freshman introduction course into practical engineering. IEEE Trans Educ 53:306–317
Hahn BH, Valentine DT (2017) Essential MATLAB for engineers and scientists, 6th edn. Academic Press, London Oxford Boston New York San Diego
MathWorks (2021a) MATLAB Primer. The MathWorks Inc., Natick, MA
MathWorks (2021b) MATLAB App Building. The MathWorks Inc., Natick, MA


MathWorks (2021c) MATLAB Programming Fundamentals. The MathWorks Inc., Natick, MA
MathWorks (2021d) MATLAB Mobile for iOS/for Android. The MathWorks Inc., Natick, MA (available online)
Quarteroni A, Saleri F, Gervasio P (2014) Scientific computing with MATLAB and Octave, 4th edn. Springer, Berlin Heidelberg New York
Staacks S et al. (2018) Advanced tools for smartphone-based experiments: phyphox. Phys Educ 53:045009
Stampfer C, Heinke H, Staacks S (2020) A lab in the pocket. Nat Rev 5:169–170
The LEGO Group (2013) LEGO® MINDSTORMS® EV3 Hardware Developer Kit. The LEGO Group, Billund, Denmark
Trauth MH (2020) MATLAB recipes for earth sciences, 5th edn. Springer, Berlin Heidelberg New York
Trauth MH, Sillmann E (2018) Collecting, processing and presenting geoscientific information, MATLAB® and design recipes for earth sciences, 2nd edn. Springer, Berlin Heidelberg New York
Vieyra R et al. (2018) Turn your smartphone into a science laboratory. Sci Teach 82:32–40

4 Geometric Properties

4.1 Introduction

One of the primary properties of the earth's surface is its shape. The shape of the earth's surface, i.e. its geomorphology (from the Greek γῆ or ge for earth, μορφή or morphé for shape, and λόγος or lógos for word), is a result of a range of different processes. These include processes that operate within the earth's interior (endogenous processes) and those that operate at the surface (exogenous processes). Endogenous processes result from an uneven heat distribution within the earth's interior, which can lead to movement of rock material in a solid state (e.g. brittle or ductile deformation), or in a molten or partly molten state (e.g. rising magmas and lava extrusion). Exogenic processes include weathering, erosion, and the movement of mineral and organic materials on the earth's surface by air (e.g. winds, storms), water (e.g. rivers, ocean currents), and ice (e.g. glaciers, icebergs). In this chapter we deal with surveying the earth, initially using very simple tools such as a magnetic compass and later using more sophisticated tools such as scanners, in each case also including appropriate methods of evaluation. We first learn how to determine the position of points on the earth's surface (Sect. 4.2). Digital elevation models are then introduced, together with methods to import such models into MATLAB, to interpolate unevenly-spaced geometric data, and to display elevation models (Sect. 4.3). The various ways to compute 3D point clouds from pairs of images and 3D laser scans are then explained (Sect. 4.4). The knowledge gained is then used in exercises to acquire, process, and display the geometric properties of the small-scale objects that are used as examples (Sect. 4.5). The Computer Vision Toolbox (MathWorks 2021) is used for the examples presented in this chapter.
While the MATLAB User's Guide to the Computer Vision Toolbox provides an excellent general introduction to image analysis, this chapter provides an overview of typical earth science applications. In-depth introductions to the geometric properties of the earth's surface, digital elevation models, and point clouds can be found in the excellent books by Hengl and Reuter (2009), Anderson and Anderson (2010), and Shan and Toth (2018).

Electronic supplementary material: The online version of this chapter (https://doi.org/10.1007/978-3-030-74913-2_4) contains supplementary material, which is available to authorized users. © Springer Nature Switzerland AG 2021, M. H. Trauth, Signal and Noise in Geosciences, Springer Textbooks in Earth Sciences, Geography and Environment, https://doi.org/10.1007/978-3-030-74913-2_4

4.2 Position on the Earth's Surface

The geosciences are concerned with spatio-temporal changes above, on and beneath the earth's surface, with one objective being to understand the processes that have shaped the earth's surface. Detecting spatio-temporal changes requires not only the dating of rocks and fossils but also a precise determination of their locations. Land surveying aims to determine the 2D or 3D position of points and the distances and angles between them. Two-dimensional surveying uses a two-step approach (Fig. 4.1). The angles α and β between a baseline from A to B and an unknown point C are measured from locations A and B, which have a known separation of c from each other. The third angle of the triangle is γ = 180° − α − β. We can then calculate the distances a and b from the law of sines,

a/sin(α) = b/sin(β) = c/sin(γ)

If the coordinates of A and B are known, the coordinates of C can also be calculated in the selected coordinate system (e.g. the Universal Transverse Mercator coordinate system, UTM). Before the introduction of the US Global Positioning System (GPS) in the 1960s, these angles were mostly determined with simple sighting compasses. In such cases, the angles α and β are not measured directly but are calculated from compass bearings taken from points A or B. The angle α is calculated by subtracting the bearing to C from point A, from the bearing to B from point A. The angle β is

Fig. 4.1 Surveying uses a two-step approach to calculate the position of an unlocated point: the bearings from the known points A and B to an unknown point C are measured, and since the distance between A and B is known, the angles and the baseline length are then used to calculate the location of point C.


calculated by subtracting the bearing from point B to A from the bearing from point B to C. The angle γ can then be calculated using γ = 180° − α − β, and if the distance c is known the distances a and b can be calculated from the above equation, and the position of point C thus determined.

Compasses are magnetic or electronic devices used to determine the cardinal directions relative to magnetic north. This definition is not quite physically correct, since the pole towards which the needle points is actually a magnetic south pole that lies close to the geographic north pole. To avoid linguistic confusion, the term Arctic magnetic pole is sometimes used when referring to the magnetic pole of the Northern Hemisphere.

Mechanical compasses, which according to Wikipedia were first used during the ancient Han dynasty in China between 300 and 200 BC, use a needle of ferromagnetic material that can rotate freely, with low friction, within a non-magnetic casing. The needle aligns itself with the earth's magnetic field, which runs approximately north-south. The alignment of the compass (i.e. the casing) can be read off a circular scale on which the cardinal points north (N), east (E), south (S), and west (W) are marked.

Since the position of the Arctic magnetic pole does not coincide with that of the geographic north pole, there is a discrepancy between magnetic north and true north that varies with location; this angular discrepancy is usually shown on topographic maps. To determine the direction of geographic north we need to correct for this angle, which is known as the declination, by rotating the scale on the compass. The magnetic poles move over time, albeit very slowly, and this drift must be taken into account by updating the declination. For this we can use the annual change in declination, which is also provided on topographic maps.

Magnetic field lines run parallel to the earth's surface only at the equator.
They dip vertically into the earth at the magnetic poles, with dips at all other points ranging from 0° at the equator to 90° at the poles. This angle, which is known as the inclination, may also need to be compensated for by attaching a small weight to the needle to prevent it from touching the cover glass of the device. This is not required close to the equator, but without such a correction working with magnetic compasses close to the poles becomes impossible.

Another type of mechanical compass, but a non-magnetic one, is the gyrocompass, which according to Wikipedia was first patented in 1885 by Marinus Gerardus van den Bos. A gyrocompass makes use of gyroscopic precession, in which the axis of a fast-spinning disc describes a cone in space as a result of the torque induced by the earth's rotation. This torque, and hence the precession, is zero as soon as the axis of the disc's rotation points exactly towards true north. The use of these compasses avoids the problem of the declination between the magnetic and geographic poles, as well as the problem of inclination. Gyrocompasses are still favored for nautical navigation since they are unaffected by a ship's metal parts, its movement, or weather disturbances. The rapidly spinning disc of a gyrocompass was originally mechanically driven, but newer models are electrically powered.
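Returning to the triangulation of Fig. 4.1, the two-step calculation can be sketched in a few lines of MATLAB. The coordinates and angles below are hypothetical example values, and the rotation by α is taken counterclockwise in the x-y plane of the sketch:

```matlab
% Known points A and B and the angles measured from them (hypothetical)
A = [0 0];                   % coordinates of point A
B = [100 0];                 % coordinates of point B
alpha = 40; beta = 60;       % angles at A and B (degrees)
gamma = 180 - alpha - beta;  % third angle of the triangle
c = norm(B-A);               % length of the baseline
b = c*sind(beta)/sind(gamma);          % distance from A to C (law of sines)
theta = atan2d(B(2)-A(2),B(1)-A(1));   % direction of the baseline A-B
C = A + b*[cosd(theta+alpha) sind(theta+alpha)]   % coordinates of point C
```

With these example values the script yields C at approximately (67.4, 56.5), which can be checked by verifying that the angle between the directions from B to A and from B to C is indeed β.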


Electronic compasses, which have recently replaced magnetic compasses in many applications and are also used in devices such as smartphones and GPS devices, require neither a magnetic needle nor a rapidly spinning disc. Instead, they use the Hall effect, named after the American physicist Edwin Herbert Hall (1855–1938), who discovered the effect while working on his doctoral thesis. According to this effect, electrons flowing within a conductor are deflected by the earth's magnetic field, which leads to a voltage difference across the conductor. This voltage difference can be measured, allowing the magnetic flux density to be calculated in three dimensions. Electronic compasses are sensitive magnetometers and can therefore also be used to detect small magnetic anomalies (Sect. 8.6.4).

Theodolites are significantly more sophisticated and precise than sighting compasses for working out the exact position of unmapped points. According to Wikipedia, they were first used in the early sixteenth century. Theodolites measure both vertical and horizontal angles with a single device, to determine positions in three dimensions. The device is first leveled with the help of spirit levels; distant targets are then sighted through the theodolite's telescope and the angles between them read from the device. A tachymeter can also be used to measure the distance to a target, making it possible to rapidly acquire distance measurements in the field. Total stations are surveying devices that combine an electronic theodolite with electronic distance measurement using an infrared laser. These instruments can be used to collect large quantities of data fully automatically within a short period of time, and to calculate the 3D positions of specific points in the terrain by triangulation, using an on-board computer.

The ancient Egyptians were already using water levels to measure objects in three dimensions during the construction of the pyramids.
Heights were also being measured trigonometrically (by measuring the vertical angle) long before the invention of barometric altimetry in the seventeenth century. The mercury barometer was invented by Galileo's student Torricelli (Torge 2009). A barometric altimeter uses the known reduction in atmospheric pressure with increasing altitude above sea level to measure the altitude. However, due to weather-related fluctuations in air pressure, the device needs to be calibrated daily, or even several times a day. This can be achieved either by referring to a known altitude, for example from a topographic map, or automatically with the help of a GPS signal.

The US Global Positioning System (GPS) project was started by the US Department of Defense in 1973; there had, however, been several previous projects going back as far as the late 1950s (Wikipedia). The full complement of 24 satellites at an altitude of 20,200 km above the earth's surface was reached in 1993. The system was originally restricted to military use, but civilian use was permitted from the 1980s. However, global and regional disturbances were introduced by the operators of the service by adding pseudo-random noise to the satellite signal and by using local GPS jammers.

The GPS concept is based on the signal travel time between satellites with known positions and a GPS receiver. The satellites are equipped with very precise atomic clocks, which are synchronized with each other and with clocks on the ground. To determine an exact location, i.e. to calculate the two-dimensional positional coordinates and also the altitude, the receiver needs signals from at least


four satellites, together with the time delays in the received signals. The accuracy of the position depends on the number of satellites received and on their relative positions. The sensitivity of the sensor also depends on an unobstructed connection to the satellites, which usually means that positions obtained inside buildings or in narrow gorges are less accurate than those obtained in a flat open area (see Sect. 3.7.5).

Civilian use of the GPS was originally degraded by pseudo-random noise, which yielded a horizontal accuracy of about 100 m (Wikipedia), but this was discontinued in 2000, after which the accuracy improved to about 15 m. The replacement of older satellites has now improved this further to 7.8 m (95% probability). A Differential Global Positioning System (DGPS) can be used to dramatically increase the accuracy by using data from a fixed base station. An optimal configuration can yield accuracies of 1–3 cm, which makes a DGPS suitable for surveying relatively small geomorphological features. Since 2010 the European Union (EU) and the European Space Agency (ESA), together with numerous non-EU countries, have been working on an alternative system to the US GPS.
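The principle of calculating a position from at least four pseudoranges can be illustrated with a simple Gauss-Newton iteration. The satellite positions, receiver position, and clock bias below are synthetic, error-free example values, and the code assumes implicit array expansion (MATLAB R2016b or later):

```matlab
c = 299792458;                          % speed of light (m/s)
S = 1e3*[15600  7540 20140              % hypothetical satellite
         18760  2750 18610              % positions (m)
         17610 14630 13480
         19170   610 18390];
p = [-733186 -1218110 6231160];         % true receiver position (m)
b = 5e-4;                               % receiver clock bias (s)
rho = sqrt(sum((S-p).^2,2)) + c*b;      % synthetic pseudoranges (m)
x = [0 0 6.37e6 0]';                    % initial guess [x y z c*b]
for k = 1 : 10
    d = sqrt(sum((S-x(1:3)').^2,2));    % geometric distances
    r = rho - d - x(4);                 % pseudorange residuals
    J = [(x(1:3)'-S)./d ones(4,1)];     % Jacobian of the model
    x = x + J\r;                        % Gauss-Newton update
end
x(1:3)'                                 % estimated receiver position (m)
```

The fourth unknown, the receiver clock error, is the reason why signals from at least four satellites are required.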

4.3 Digital Elevation Models of the Earth's Surface

Multiple points in 2D or 3D space can be combined into vector data (points, lines, polygons), raster data (grids of squares), or triangular irregular networks (TINs). Vector data can be generated by digitizing objects on a map (e.g. using a digitizer tablet), or calculated either by automatic vectorization (e.g. image tracing, raster-to-vector conversion) or by digital terrain analysis (DTA) (e.g. extracting drainage network patterns). Raster data of the earth's surface, for example digital elevation models (DEMs) or digital terrain models (DTMs), can be obtained directly from the output of satellite sensors or laser scanners (e.g. Light Detection And Ranging, LiDAR), or created using photogrammetric methods (e.g. stereo images). Alternatively, they can be calculated by interpolation of irregularly distributed 2D and 3D point data (e.g. digitized topographic maps or point measurements in the field).

The availability and use of digital elevation data have increased considerably since the early 1990s. With a resolution of 5 arc minutes (about 9 km), ETOPO5 was one of the first data sets available for topography and bathymetry. In October 2001 it was replaced by ETOPO2, which has a resolution of 2 arc minutes (about 4 km), and in March 2009 ETOPO1 became available, with a resolution of 1 arc minute (about 2 km). There is also a data set for topography called GTOPO30, completed in 1996, that has a horizontal grid spacing of 30 arc seconds (about 1 km). More recently, the 30 and 90 m resolution data from the Shuttle Radar Topography Mission (SRTM) have replaced the older data sets in most scientific studies. The ETOPO1, GTOPO30, and SRTM data sets are discussed in detail in the book MATLAB Recipes for Earth Sciences (Trauth 2020). Here we limit ourselves to a brief introduction to the most recent representative of this group of data sets, the SRTM data set.
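The approximate ground resolutions quoted above are easily verified, assuming a spherical earth with a radius of 6,371 km:

```matlab
% Ground resolution at the equator for SRTM-1, SRTM-3, GTOPO30,
% ETOPO1, ETOPO2 and ETOPO5 grid spacings (in arc seconds)
arcsec = [1 3 30 60 120 300];
res_m = 2*pi*6371000/(360*3600) * arcsec
```

This yields about 31 m, 93 m, 0.9 km, 1.9 km, 3.7 km and 9.3 km, in agreement with the values given above.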


The Shuttle Radar Topography Mission (SRTM) was a radar system onboard the Space Shuttle Endeavour during an 11-day mission in February 2000 (Farr and Kobrick 2000, Farr et al. 2008). The SRTM was an international project spearheaded by the National Geospatial-Intelligence Agency (NGA) and the National Aeronautics and Space Administration (NASA). Detailed information on the SRTM project, including a gallery of images and a user’s forum, can be accessed through the NASA web page: http://www2.jpl.nasa.gov/srtm/

The data have been processed at the Jet Propulsion Laboratory (JPL) and are distributed through the United States Geological Survey (USGS) National Map Data Download and Visualization Services: https://www.usgs.gov/core-science-systems/ngp/tnm-delivery/

Alternatively, the raw data files can be downloaded from http://dds.cr.usgs.gov/srtm/

This directory contains zipped files of SRTM DEMs from various parts of the world, processed by the SRTM global processor, and sampled at resolutions of 1 arc second (SRTM-1, 30 m grid) and 3 arc seconds (SRTM-3, 90 m grid). As an example, we download the 1.7 MB file s01e036.hgt.zip from http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/Africa/

containing SRTM-3 data for the Kenya Rift Valley in eastern Africa. All elevations are in meters referenced to the WGS84 EGM96 geoid, as documented at http://Earth-info.nga.mil/GandG/wgs84/index.html

The name of this file refers to the longitude and latitude of the lower-left (southwest) pixel of the tile, i.e. latitude 1° south and longitude 36° east. The SRTM-3 data contain 1,201 lines and 1,201 samples, with the edge rows and columns of each tile overlapping those of the adjacent tiles. Having downloaded S01E036.hgt.zip we use the MATLAB function unzip to decompress the file

clear, close all, clc
unzip('S01E036.hgt.zip')


and save S01E036.hgt in our working directory. Alternatively, we can unzip the corresponding file basics_4_3_data_1.zip from the supplement of this book by typing

unzip('basics_4_3_data_1.zip')

The digital elevation model is provided as 16-bit signed integer data in a simple binary raster. The byte order is big-endian (Motorola's standard), with the most significant byte first. The data are imported into the workspace using

fid = fopen('S01E036.hgt','r');
SRTM = fread(fid,[1201,inf],'int16','b');
fclose(fid);

This script opens the file S01E036.hgt for read-only access using fopen and defines the file identifier fid, which is then used to read the binary data from the file (using fread) and to write them into the matrix SRTM. The function fclose closes the file defined by fid. The matrix first needs to be transposed and flipped vertically:

SRTM = SRTM';
SRTM = flipud(SRTM);

The –32,768 flag for data voids can be replaced by NaN, which is the MATLAB representation for Not-a-Number:

SRTM(SRTM == -32768) = NaN;

The SRTM data contain numerous gaps that might cause spurious effects during statistical analysis or when displaying the digital elevation model in a graph. A popular way to eliminate gaps in digital elevation models is to fill them with the arithmetic means of adjacent elements. We use the function nanmean since it treats NaNs as missing values and returns the mean of the remaining elements. The following double for loop averages over three-by-three element areas of the digital elevation model, i.e. over SRTM(i-1:i+1,j-1:j+1) arrays:

for i = 2 : 1200
    for j = 2 : 1200
        if isnan(SRTM(i,j)) == 1
            SRTM(i,j) = nanmean(nanmean(SRTM(i-1:i+1,j-1:j+1)));
        end
    end
end
clear i j
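In more recent MATLAB releases the same gap filling can be written without nanmean (a Statistics and Machine Learning Toolbox function), using mean with the 'omitnan' option, which has been available since R2015a:

```matlab
for i = 2 : 1200
    for j = 2 : 1200
        if isnan(SRTM(i,j))
            block = SRTM(i-1:i+1,j-1:j+1);           % 3-by-3 neighborhood
            SRTM(i,j) = mean(block(:),'omitnan');    % mean of non-NaN values
        end
    end
end
clear i j block
```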


If there are still NaNs in the data set, causing errors when importing the data set into a Virtual Reality Modeling Language (VRML) client (Sect. 7.6), the double for loop can be run a second time. Finally, we check whether the data are now correctly stored in the workspace by printing the minimum and maximum elevations of the area:

max(SRTM(:))
min(SRTM(:))

In our example, the maximum elevation of the area is 3,992 m above sea level and the minimum is 1,504 m. A coordinate system can be defined using the information that the lower-left corner is s01e036. The resolution is 3 arc seconds, corresponding to 1/1,200 of a degree:

[LON,LAT] = meshgrid(36:1/1200:37,-1:1/1200:0);

The script below yields an attractive display of the data. Note the use of the function daspect to reduce the length of the z-axis. The Mapping Toolbox includes the function demcmap, which creates and assigns a colormap that is appropriate for elevation data, since it relates land and sea colors to hypsometry and bathymetry. We first define the contour levels v and then display the SRTM data as a surface plot with demcmap colors and white contour lines (Fig. 4.2). Having generated the shaded-relief map, the plot can then be exported to a graphics file.
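A minimal sketch of such a display script, with assumed contour levels, vertical exaggeration, and output file name, might look like this:

```matlab
v = 1500 : 250 : 4000;                 % assumed contour levels (m)
surfl(LON,LAT,SRTM), shading interp    % shaded-relief surface
demcmap(SRTM), hold on                 % Mapping Toolbox elevation colormap
contour3(LON,LAT,SRTM,v,'w'), hold off % white elevation contours
daspect([1 1 5000])                    % reduce the length of the z-axis
print('srtmimage','-dpng','-r300')     % export to a (hypothetical) PNG file
```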

Fig. 4.2 Display of the filtered SRTM elevation data set. The function surfl has been used to generate a shaded-relief image with simulated lighting, contour3 to create white elevation contours, and the colormap demcmap from the Mapping Toolbox has been applied (data from Farr and Kobrick 2000, Farr et al. 2008).


4.4 Gridding and Contouring

The previous data set was stored in an evenly-spaced two-dimensional array. Most field data in the earth sciences are, however, obtained from irregular sampling patterns; they are therefore unevenly spaced and need to be interpolated to allow a smooth and continuous surface to be computed from the field measurements.

Surface estimation is typically carried out in two major steps (Fig. 4.3): the control points to be used first need to be selected, and then the value of the variable of interest estimated for each of the grid points. The control points are unevenly spaced field measurements such as, for example, the thicknesses of sandstone units at different outcrops or the concentrations of a chemical tracer in water wells. The data are generally presented as xyz triplets, where x and y are spatial coordinates and z is the variable of interest. Most gridding methods require the data to be continuous and unique. However, spatial variables in the earth sciences are often discontinuous and not spatially unique: a sandstone unit, for example, may be faulted or folded. Furthermore, gridding requires spatial autocorrelation, i.e. neighboring data points should be related to each other through a specific relationship. There is no point in attempting a surface estimation if the z values are random and have no autocorrelation.

There are thus two steps involved in determining the values of a variable z at the points of an evenly-spaced grid from the values of this variable at various unevenly-spaced control points: the first step involves deciding which of the control points to use, and the second step involves the actual calculations. Several different techniques exist for deciding which control points to use. The nearest-neighbor criterion uses all control points within a user-specified radius of the grid point (Fig. 4.3a). Since the degree of spatial autocorrelation is likely to decrease with increasing distance from the grid point, taking too many distant control points into account is likely to lead to erroneous estimates of a grid point's

Fig. 4.3 Methods for selecting the control points to be used in estimating values at grid points. a Using a circle around the grid point (plus sign) with a radius defined by the spatial autocorrelation of the z values at the control points (small blue circles). b The control points are selected from the vertices of the triangle enclosing the grid point, with the option of also including the vertices of the adjoining triangles. Modified from Swan and Sandilands (1995).


z value. On the other hand, using radii that are too small may reduce the number of control points used in calculating a grid point's z value to a very small number, resulting in a noisy estimate of the modeled surface.

In another technique, all control points are connected in a triangular network (Fig. 4.3b). Every grid point is located within a triangular area formed by three control points. The z value of the grid point is averaged from the z values of these three control points. A modification of this form of gridding also uses the three points at the vertices of the three adjoining triangles.

Kriging, introduced in Sect. 7.11 of Trauth (2020), is an alternative approach for deciding which control points to use, and is often regarded as the ultimate gridding method; some people even use the term geostatistics synonymously with kriging. Kriging is a method for quantifying the spatial autocorrelation and hence the radius of the circle used in the nearest-neighbor criterion. More sophisticated versions of kriging use an ellipse instead of a circle.

As mentioned above, the second step in surface estimation is the actual computation of the z values at the grid points. The easiest way is to use the arithmetic mean of the measured z values at the control points

z̄ = (1/N) ∑_{i=1}^{N} z_i

This is a particularly useful method if there is only a very restricted number of control points. If the study area is well covered by control points and the distances between these points are highly variable, the z values of the grid points should instead be computed using a weighted mean. This involves weighting the z values at the control points by the inverse of their distances d_i from the grid point whose z value is to be determined:

z̄ = ∑_{i=1}^{N} (z_i/d_i) / ∑_{i=1}^{N} (1/d_i)
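The weighted mean above can be sketched for a single grid point; the control points below are hypothetical:

```matlab
xc = [1 4 2]; yc = [2 1 5];    % coordinates of the control points
zc = [10 12 9];                % z values at the control points
xg = 2; yg = 2;                % coordinates of the grid point
d = hypot(xc-xg,yc-yg);        % distances to the control points
zg = sum(zc./d)/sum(1./d)      % inverse-distance weighted mean
```

The nearest control point (here at a distance of 1) dominates the estimate, as intended.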

Depending on the spatial scaling relationship of the variable z, the inverse of the square root of the distance can be used to weight the z values, rather than simply the inverse of the distance. The fitting of 3D splines to the control points offers another method for computing grid point values that is commonly used in the earth sciences. Most routines used in surface estimation involve cubic polynomial splines, i.e. a third-degree 3D polynomial is fitted to at least six adjacent control points. The final surface is a composite comprising different portions of these splines.


MATLAB has from the start provided a biharmonic spline interpolation method, developed by Sandwell (1987). This gridding method is particularly well suited to producing smooth surfaces from noisy data sets with unevenly-distributed control points. As an example we use synthetic xyz data representing the vertical distance between an imaginary stratigraphic horizon that has been displaced by a normal fault, and a reference surface. The footwall of the fault shows roughly horizontal strata, whereas the hanging wall contains two large sedimentary basins. The xyz data are irregularly distributed and therefore need to be interpolated onto a regular grid. The data points are stored as a three-column table in a file named basics_4_4_data_2.txt:

4.3229698e+02 7.4641694e+01 9.7283620e-01
4.4610209e+02 7.2198697e+01 6.0655065e-01
4.5190255e+02 7.8713355e+01 1.4741054e+00
4.6617169e+02 8.7182410e+01 2.2842172e+00
4.6524362e+02 9.7361564e+01 1.1295175e-01
4.5526682e+02 1.1454397e+02 1.9007110e+00
4.2930233e+02 7.3175896e+01 3.3647807e+00
(cont'd)

The first and second columns contain the x (between 420 and 470) and y (between 70 and 120) coordinates from an arbitrary spatial coordinate system, while the third column contains the vertical z values. The data are loaded using

clear, close all, clc
data = load('basics_4_4_data_2.txt');

We first want to create an overview plot of the spatial distribution of the control points. In order to label the points in the plot, the numerical z values from the third column are converted into character string representations with a maximum of two digits:

labels = num2str(data(:,3),2);

The 2D plot of our control points is generated in two steps. Firstly, the data are displayed as empty circles using the plot command. Secondly, the data are labeled using the function text(x,y,'string'), which adds the text contained in string at the xy locations. The value 1 is added to all x coordinates to produce a small offset between the circles and the text.

plot(data(:,1),data(:,2),'o'), hold on
text(data(:,1)+1,data(:,2),labels), hold off


This plot helps us to define the axis limits for gridding and contouring: xlim = [420 470] and ylim = [70 120]. The function meshgrid transforms the domain specified by the vectors x and y into the arrays XI and YI. The rows of the output array XI are copies of the vector x and the columns of the output array YI are copies of the vector y. We choose 1.0 as the grid interval.

x = 420 : 1 : 470;
y = 70 : 1 : 120;
[XI,YI] = meshgrid(x,y);

The biharmonic spline interpolation is used to interpolate the irregularly-spaced data at the grid points specified by XI and YI:

ZI = griddata(data(:,1),data(:,2),data(:,3),XI,YI,'v4');

The option v4 selects the biharmonic spline interpolation, which was the sole gridding algorithm available until MATLAB 4 was replaced by MATLAB 5. MATLAB provides various tools with which to display the gridding results. The simplest way to display the results is as a contour plot using contour. By default, the number and values of the contour levels are chosen automatically, depending on the minimum and maximum values of z.

contour(XI,YI,ZI)

Alternatively, the number of contour levels can be chosen manually, e.g. ten:

contour(XI,YI,ZI,10)

Contour values can also be specified in a vector v. Since the minimum and maximum values of z are

min(data(:,3))

ans =
  -27.4357

max(data(:,3))

ans =
   21.3018

we choose

v = -40 : 10 : 20;


The command

[c,h] = contour(XI,YI,ZI,v);

yields a contour matrix c and a handle h that can be used as input to the function clabel, which labels contours automatically:

clabel(c,h)

The contourf function is used to create a filled contour plot, and the colorbar function displays a legend for this plot. We can also plot the locations (small circles in Fig. 4.4) and z values (contour labels) of the control points:

contourf(XI,YI,ZI,v), colorbar, hold on
plot(data(:,1),data(:,2),'ko')
text(data(:,1)+1,data(:,2),labels), hold off

The third dimension is added to the plot using the mesh command. We can also use this example to introduce the function view(az,el), which is used to specify the direction of viewing, where az is the azimuth or horizontal rotation and el is the

Fig. 4.4 Contour plot with the locations (small circles) and z values (contour labels) of the control points. The irregularly distributed xyz data are stored as a three-column table in the file basics_4_4_data_2.txt and interpolated onto a regular grid using a biharmonic spline interpolation (Sandwell 1987).


elevation (both in degrees). The values az = -37.5 and el = 30 define the default view for all 3D plots:

mesh(XI,YI,ZI), view(-37.5,30)

whereas az = 0 and el = 90 gives the default 2D view from directly overhead:

mesh(XI,YI,ZI), view(0,90)

The function mesh provides one of the many 3D presentation methods available in MATLAB, another commonly used function being surf. The figure can be rotated by selecting the Rotate 3D option from the Tools menu of the Figure Window. We also introduce the function colormap, which uses predefined color look-up tables for 3D graphics. Typing

help graph3d

lists several built-in colormaps, although colormaps can also be arbitrarily modified and generated by the user. As an example we use the colormap hot, which is a black-red-yellow-white colormap:

surf(XI,YI,ZI), colormap('hot'), colorbar

Using Rotate 3D rotates only the 3D plot, not the colorbar. The function surfc combines a surface plot and a 2D contour plot in one graph:

surfc(XI,YI,ZI)

The function surfl can be used to illustrate a more advanced application of 3D visualization, generating a 3D colored surface with interpolated shading and lighting. The axis labeling, ticks, and background can be turned off by typing axis off. Black 3D contours can also be added to the surface, as above. The grid resolution is increased prior to plotting in order to obtain smooth surfaces.

[XI,YI] = meshgrid(420:0.25:470,70:0.25:120);
ZI = griddata(data(:,1),data(:,2),data(:,3),XI,YI,'v4');
surf(XI,YI,ZI), colormap(jet)
shading interp, light, axis off, hold on
contour3(XI,YI,ZI,v,'k'), hold off

The biharmonic spline interpolation described in this section provides a solution to most gridding problems. It was, therefore, for some time, the only gridding method that came with MATLAB. However, different applications in earth sciences require different methods of interpolation, although they all have their problems. The book MATLAB Recipes for Earth Sciences (Trauth 2020) gives a detailed introduction to different interpolation methods and possible artifacts.


4.5 Exercises

The following exercises will demonstrate the electronic measurement of the morphology of features on the earth’s surface (e.g. mountain ridges, river valleys, river terraces) and beneath the surface (e.g. sediment layers, fault planes, oriented fossils). All exercises will demonstrate these measurements scaled to the size of a classroom table, carried out with inexpensive measuring instruments such as LEGO sensors or the sensors in smartphones. In each exercise, the equipment is first set up and the measurements then acquired; these are subsequently analyzed on the computer with the software tools that are used in large-scale experiments.

4.5.1 Dip and Dip Direction of Planar Features Using Smartphone Sensors

Geological mapping involves measuring the three-dimensional orientation of planar features such as sediment layers or foliations. Traditionally, two alternative pairs of measured values are used for this purpose. The first pair consists of the strike and dip, where the strike is the angle between true (geographic) north and a line representing the intersection of the feature with a horizontal plane, and the dip is the angle of descent of the planar feature from a horizontal plane. Since there are two possible dip directions for a given strike and dip, the dip direction must also be defined by specifying a cardinal direction. The second pair of measurements comprises the dip and the dip direction, which is what we will use in this exercise. The dip is, as before, the angle of descent of the planar feature from a horizontal plane, and the dip direction is the angle between the direction of steepest descent and true north.

The three-dimensional orientation of a planar feature is traditionally measured with a geological compass; such devices can be purchased at prices ranging from less than 100 € up to about 1,500 €. They contain a magnetic needle that gives the horizontal orientation of linear and planar features with respect to north, and a clinometer to measure their angle of inclination relative to a horizontal surface. More recently, electronic compasses have been developed that contain a magnetometer for measuring the magnetic field strength and a gyroscope for measuring the angle of inclination to the horizontal. Such a compass offers an interface to a computer through which the measured values can be downloaded for analysis.

With the increasing use of smartphones, which normally contain an electronic magnetometer and a gyroscope, several applications (or apps) have been released that enable them to be used to measure and evaluate geological data (e.g. Stereonet by Richard Allmendinger, FieldMove Clino by Petroleum Experts Limited, and Lambert by Peter Appel). Examples of applications that read the values measured by the magnetometer and the gyroscope on a smartphone, as well as all other sensors, include phyphox (physical phone experiments) from RWTH Aachen University and MATLAB Mobile from MathWorks.


Here we use MATLAB Mobile with an Apple iPhone 8 to carry out measurements with the smartphone's three-axis magnetometer and gyroscope. However, most other iOS and Android-based smartphones with built-in sensors are also suitable for carrying out the exercises.

A magnetometer is a device that measures the strength of a magnetic field (in tesla). The earth's magnetic field varies between 20 and 80 µT depending on location, but can fluctuate by up to 100 nT. Smartphones use a magnetic field sensor to detect the magnetic field strength in the xyz directions (in µT), which can be used as an electronic compass to measure the orientation of the smartphone with respect to north. In the settings of the app we can choose between true north (default) and magnetic north, with the declination being the difference between the two. The gyroscope, together with the accelerometer of the smartphone, is used to measure its orientation with respect to its three axes, i.e. its roll, pitch, and yaw.

The following exercise demonstrates how to measure the orientation of linear and planar structures in the field using the magnetometer and the gyroscope of a smartphone. It involves acquiring a large number of repeated measurements, which would not be possible with a traditional compass within a reasonable time frame. The accuracy and precision of these repeated measurements will then be statistically evaluated and interpreted.

Exercise

1. Use a smartphone to carry out repeated measurements of the orientation (azimuth, pitch, and roll) of an inclined wooden board on a table.
2. Use the azimuth, measured between -180° and 180° relative to true north, as an estimate of the dip direction, measured between 0° and 360°, and the pitch as an estimate of the dip, both measured between -90° and 90°.
3. Display the data as time series (azimuth and pitch over time) and as a scatter plot (azimuth over pitch). Calculate and display the measures of central tendency and dispersion of the replicate measurements.
4. Interpret the results with regard to the suitability of smartphones for surveying geological phenomena.

Solution

1. Place a wooden board at an inclination of ~20° towards the northwest (i.e. 330° or -30° relative to true north) on a table. Place the smartphone flat on the board with its bottom edge pointing towards the steepest descent. Launch MATLAB Mobile on your smartphone and use the digital compass to measure the device's orientation on the board (azimuth, pitch, and roll) for ~60 s. Save the measurements on MATLAB Drive in a .mat file.
2. Launch MATLAB Desktop on your computer and import the data. Display the measured values of the azimuth, pitch, and roll as time series, i.e. the individual measured values over time in seconds. The azimuth is the angle between the smartphone's longest axis and magnetic north. The pitch is the angle between

4.5 Exercises

the smartphone’s long axis and the horizontal plane. The roll is the angle between the smartphone’s second-longest axis and the horizontal plane.
3. Calculate the geographic and Cartesian means and standard deviations, ignoring the fact that the data are not Gaussian/von Mises distributed (see Chaps. 3 and 10 in Trauth 2020), and the two-dimensional kernel density.
4. Display the data in a scatter plot, with azimuth as the x-axis and pitch as the y-axis, and with the color of the points in the scatter plot corresponding to the kernel density. Mark the mean values and the maximum kernel density in the plot.
5. Display the data in a scatter plot, with azimuth as the x-axis and pitch as the y-axis, and with the color of the points in the scatter plot corresponding to the time at which the measurement was made. Mark the mean values and the maximum kernel density in the plot.
6. The position of the maximum kernel density relative to the mean value, and the drift of the data over time in the scatter plot, show that the kernel density is a better measure of central tendency than the mean for these non-Gaussian/von Mises distributed data. Despite the clear drift, the measured values of azimuth and pitch are much more accurate and precise than traditionally measured values, and the technique is clearly suitable for geological measurements.
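The conversion from azimuth to dip direction required in task 2 can be sketched in a single line (the value below is an assumed example reading, not one of the recorded measurements):

```matlab
% Azimuth is measured between -180 and 180 degrees relative to true north;
% mod maps it onto the 0-360 degree dip direction convention
azimuth = -30;               % example reading (deg), equivalent to 330 deg
dipdir  = mod(azimuth,360)   % yields 330
```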

MATLAB Script

We use the following MATLAB script for tasks 1–4. After setting up an inclined wooden board, we attach a smartphone to it with double-sided tape. The experiment presented here uses an Apple iPhone 8 with a built-in ALPS e-Compass, according to TechInsights. We use MATLAB Mobile to record the orientation (azimuth, pitch, and roll) for one minute using the Log mode in the Sensors menu of the app. After we have stopped recording we are asked for a file name under which to save the file on MATLAB Drive. The file is then copied to the desktop directory of the MATLAB Drive application on the computer. We move the file to our working folder or any other directory contained in the current search path of MATLAB. We then clear the workspace and the Command Window, and close all Figure Windows using

clear, clc, close all

We import the data using

orient_1 = load('sensorlog_20191129_102032.mat');

Typing

whos

shows that we have created a structure array called orient_1 that comprises a single field called Orientation, which is a 603-by-3 timetable. Within the ≈60 s of recording we measured and stored 603 sets of xyz angular values in degrees, with x = azimuth, y = pitch, and z = roll, together with a timestamp. The sampling rate is therefore 10 values per second, i.e. 10 Hz. We can display the first 10 measurements by typing


orient_1.Orientation(1:10,:)
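The 10 Hz sampling rate inferred above can also be checked directly from the time stamps; this is a sketch that assumes the timetable and the Timestamp variable shown above:

```matlab
% Median sampling interval (s) and sampling rate (Hz) from the time stamps
dt = seconds(diff(orient_1.Orientation.Timestamp));
fs = 1/median(dt)   % approximately 10 Hz for this recording
```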

We can use the stackedplot function to display the timetable. We see that the angles measured by the magnetometer and the gyroscope have a significant drift over time.

figure('Position',[50 900 600 300],...
   'Color',[1 1 1])
stackedplot(orient_1.Orientation,...
   'DisplayLabels',{'Azimuth','Pitch','Roll'});

We then convert the data type from timetable to table, and then to double. As we can see from the plot below, azimuth = –30° (or +330°), pitch = –20°, and roll = 0° approximately.

orient_2 = timetable2table(orient_1.Orientation);
dazimuth = double(orient_2.X);
dpitch = double(orient_2.Y);
droll = double(orient_2.Z);

We also convert the timestamp into a serial representation of date and time using datenum. We then use datevec to convert the date numbers to date vectors, with the seconds in the last column. We next create a time vector t (in seconds).

dtime = datenum(orient_2.Timestamp);
dtimev = datevec(dtime);
t = (max(dtimev(:,6))-min(dtimev(:,6)))/length(dtimev) :...
    (max(dtimev(:,6))-min(dtimev(:,6)))/length(dtimev) :...
    max(dtimev(:,6))-min(dtimev(:,6));
t = t';
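Alternatively, since the time stamps are datetime values, the elapsed time can be obtained directly, avoiding the datenum/datevec detour; a sketch using orient_2 from above:

```matlab
% Elapsed time (s) relative to the first measurement
t_alt = seconds(orient_2.Timestamp - orient_2.Timestamp(1));
```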

We replace the first column of the timetable with the double-precision version of dazimuth created above.

orient_1.Orientation(:,1) = table(dazimuth);
orient_1.Orientation(1:10,:)

We can use the stackedplot function to display the timetable. We see that the angles exhibit a clear drift over time.

close
figure('Position',[50 900 600 300],...
   'Color',[1 1 1])
stackedplot(orient_1.Orientation,...
   'DisplayLabels',{'Azimuth','Pitch','Roll'});


We calculate the mean location of data distributed on a sphere (meanm and stdm) or on a Cartesian plane (mean and std). The means and standard deviations on the sphere and on the Cartesian plane are identical.

[dpitchmeanm,dazimuthmeanm] = meanm(dpitch,dazimuth)
[dpitchstdm,dazimuthstdm] = stdm(dpitch,dazimuth)
dpitchmean = mean(dpitch)
dazimuthmean = mean(dazimuth)
dpitchstd = std(dpitch)
dazimuthstd = std(dazimuth)
dstdist = stdist(dpitch,dazimuth)
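Because azimuths are directional data that wrap around at ±180°, a vector (circular) mean is a safer alternative to the arithmetic mean; a minimal sketch using the variables above:

```matlab
% Vector mean of the azimuths (deg): average the unit vectors and take the
% angle of the resultant, which is insensitive to the wrap-around at +/-180
azmeancirc = atan2d(mean(sind(dazimuth)),mean(cosd(dazimuth)));
```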

We now calculate the two-dimensional kernel density, and then locate the highest kernel density.

[F,XI] = ksdensity([dpitch,dazimuth],[dpitch,dazimuth]);
dazimuthhiden = XI(F==max(F(:)),2)
dpitchhiden = XI(F==max(F(:)),1)
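Evaluating the kernel density at the data points, as above, is sufficient for coloring the scatter plot; to contour the density field one can instead evaluate it on a regular grid (a sketch using the same variables):

```matlab
% Evaluate the bivariate kernel density on a 50-by-50 grid and contour it
[xg,yg] = meshgrid(linspace(min(dazimuth),max(dazimuth),50),...
                   linspace(min(dpitch),max(dpitch),50));
Fg = ksdensity([dpitch,dazimuth],[yg(:),xg(:)]);
contour(xg,yg,reshape(Fg,size(xg)))
```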

We then display the data as a scatter plot with colors according to the kernel density. We see that the mean [148.7958,–20.3206] (red circle) does not correspond to the maximum of the two-dimensional kernel density (green circle).

figure('Position',[50 500 600 300],...
   'Color',[1 1 1])
axes('Box','on',...
   'XGrid','on',...
   'YGrid','on',...
   'DataAspectRatio',[1 1 1]);
hold on
scatter(dazimuth,dpitch,50,F,'filled'), hold on
line(dazimuthmean,dpitchmean,30,...
   'Marker','o',...
   'MarkerSize',10,...
   'MarkerFaceColor','r',...
   'MarkerEdgeColor','k')
drawellipse('Center',[dazimuthmean,dpitchmean],...
   'SemiAxes',[dpitchstd,dazimuthstd],...
   'Color','red',...
   'FaceAlpha',0,...
   'LineWidth',1);
line(dazimuthhiden,dpitchhiden,30,...
   'Marker','o',...
   'MarkerSize',10,...
   'MarkerFaceColor','g',...


Fig. 4.5 Display of azimuth and pitch data as a scatter plot with colors according to the time, showing the temporal drift of both the magnetometer (measuring azimuth) and the gyroscope (measuring pitch and roll).

   'MarkerEdgeColor','k')
xlabel('Azimuth')
ylabel('Pitch')
c1 = colorbar;
c1.Limits = [0 800];
c1.Label.String = 'Kernel Density';

We then display the data as a scatter plot with colors according to the time, showing the temporal drift of both the magnetometer (measuring azimuth) and the gyroscope (measuring pitch and roll) (Fig. 4.5).

figure('Position',[50 100 600 300],...
   'Color',[1 1 1])
axes('Box','on',...
   'XGrid','on',...
   'YGrid','on',...
   'DataAspectRatio',[1 1 1]);
hold on
scatter(dazimuth,dpitch,50,t,'filled'), hold on
line(dazimuthmean,dpitchmean,30,...
   'Marker','o',...
   'MarkerSize',10,...
   'MarkerFaceColor','r',...
   'MarkerEdgeColor','k')
line(dazimuthhiden,dpitchhiden,30,...
   'Marker','o',...


   'MarkerSize',10,...
   'MarkerFaceColor','g',...
   'MarkerEdgeColor','k')
line(dazimuth,dpitch,...
   'Color','k')
xlabel('Azimuth')
ylabel('Pitch')
c2 = colorbar;
c2.Limits = [0 60];
c2.Label.String = 'Time (s)';

save exercise_4_5_1_data_1.mat

Evaluating the measured values of orientation reveals that, despite the clear drift, the measured values are much more accurate and precise than traditionally measured values would be expected to be.

4.5.2 Precision and Accuracy of Ultrasonic Distance Measurements

There are in general two different types of measurement errors (not only in the geosciences): random errors and systematic errors (Taylor 1997; Trauth 2020) (Fig. 4.6). Random errors (σr) can be determined by evaluating repeated measurements and reflect the dispersion of the measured values due to the imprecision of the measuring instrument. Systematic errors (σs) are much harder to detect and quantify. They are usually consistent downward or upward deviations in the measured values due to an incorrectly calibrated instrument. If we know the random and systematic errors of measurement, we can determine the precision and

[Figure: probability density f(x) versus x; the spread of the curve about the sample mean represents the random error, and the offset between the mean and the “true” value represents the systematic error.]
Fig. 4.6 Illustration of the difference between random error and systematic error. While random errors can be reduced by increasing the sample size, this does not have any effect on systematic errors (Trauth 2020).


accuracy of our data. Precise measurements are consistent about the mean, but the true value may differ from the mean of the measurements due to systematic errors that limit the accuracy. Increasing the sample size N increases the precision, but not the accuracy. The following exercise demonstrates the difference between random errors and systematic errors. As an example, we will use measurements of the distance between a LEGO MINDSTORMS EV3 Ultrasonic Sensor and a wooden board with (in our example) a width of 93 cm and a height of 53 cm. We carry out multiple (e.g. 10,000) measurements from several different positions between 0 and 300 cm from the wooden board, to determine the range of the sensor as well as random and systematic measurement errors. We use a tape measure to check the recorded distances between the wooden board and the sensor.

Exercise

1. Download the LEGO MINDSTORMS Education EV3 software from the LEGO Education webpage. Install the software on your computer and read the manual. Download the LEGO MINDSTORMS EV3 Support from MATLAB from the manufacturer’s webpage. Install the hardware support on your computer and read the manual. Set up a USB connection between your computer and the EV3 Brick.
2. Assemble a four-wheeled vehicle with two large motors and an ultrasonic sensor at the front. Point the sensor towards a wooden board set up vertically at a suitable distance (measured with a tape measure) and read the distance with the ultrasonic sensor. Write a MATLAB script to carry out a large number of repeated measurements with the sensor and to record the distance values.
3. First determine the range of values of the measurements, i.e. what distances (in cm) can actually be measured with the ultrasonic sensor? Then determine the precision of the measurements by analyzing the dispersion of the measured distances. Finally, determine the accuracy of the measured values by comparing the distances measured using the ultrasonic sensor with those measured using the tape measure. Discuss the results, especially the causes of random and systematic errors.

Solution

1. Assemble the vehicle according to the LEGO building instructions. These instructions are in a LEGO Digital Designer Model (LXF) file which can be opened with the free LEGO Digital Designer software, or with STUDIO 2.0 by BrickLink, available for computers running both macOS and Windows.
2. Create a MATLAB script that establishes a connection to the ultrasonic sensor and then uses a for loop to carry out replicate measurements (e.g. 10,000) of the distance between the sensor and the wooden board. Repeat the measurements with the sensor at several different positions, e.g. 0, 5, 10, 50, 100, 200, 240, and 260 cm from the wooden board.


3. After carrying out the distance measurements we see that they are stored as 32-bit floating-point values of data type (class) single. Displaying the data as a plot against time, and the frequency distribution of the measurements as a histogram, reveals that the ultrasonic sensor has a maximum value of 255 cm and rounds the distances to the nearest whole millimeter. Due to the design of the device, the sensor is not able to measure the distance down to zero; the smallest distance that can be measured is slightly more than 3 cm.
4. The repeated measurements show a clear dispersion in the measured values, but due to the limitation to 32-bit single precision and the maximum resolution of 0.1 cm, the dispersion is represented by only a few discrete values. The dispersion increases with increasing distance from the wooden board, i.e. the number of divergent distance values increases. The small size of the wooden board compared to the distance from the sensor is certainly an important factor. Sources of error include not only the limited precision of the sensor and the limitations due to the 32-bit single precision data type, but also all sorts of disturbances such as multiple reflections and dispersion of the acoustic signal.
5. Comparing the distances measured by the ultrasonic sensor with those measured by tape measure can reveal a systematic error, e.g. if the values recorded by the sensor are systematically higher or lower by a few millimeters. If we take a closer look at the experimental setup, the following possible sources of systematic error can be identified: (1) the board may not be exactly perpendicular to the propagation direction of the sound waves, (2) the measuring tape may not be precisely positioned, (3) the tape measure may not have been accurately read, or (4) the zero position of the sensor may not have been correctly determined.

MATLAB Script

We use the following MATLAB script for tasks 2–5. The graphical representations of the distance values as plots against time, and the frequency distributions of the measurements as histograms, for the sensor positions at 0, 5, 10, 50, 100, 200, 240, and 260 cm from the board can be found in the book’s electronic supplementary material, together with the raw measurement data. We first clear the workspace and the Command Window, and close all Figure Windows.

clear, clc, close all

We then connect the EV3 Brick to the computer. A USB connection through the USB cable that comes with the LEGO MINDSTORMS package is by far the simplest and most stable connection, especially if there are a lot of WiFi networks that could interfere with a WiFi connection, e.g. within a university building. Next, we select the desired connection in the MATLAB script by typing

mylego = legoev3('USB');
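If a cable is impractical, the support package also accepts wireless connections; a hedged sketch in which the IP address and the hardware ID are placeholders to be replaced with the values of your own EV3 Brick:

```matlab
% WiFi connection instead of USB (placeholder address and hardware ID)
mylego = legoev3('WiFi','192.168.1.2','00165340e49b');
```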


We then set up a connection between the computer and the ultrasonic sensor, which is attached to EV3 input port 1. MATLAB uses sonicSensor to automatically detect the EV3 input ports; this function yields the object handle mysonicsensor that will later be used to read the data from the sensor.

mysonicsensor = sonicSensor(mylego);
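A quick test reading verifies the connection; note that readDistance returns the distance in meters, not centimeters:

```matlab
% Single test measurement; the result is in meters
d = readDistance(mysonicsensor)
```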

We next read 10,000 distance values from the ultrasonic sensor using a for loop and store them in the variable distance_so. We use tic and toc to measure the time dur required for the measurements and create a time vector time_so that can be used to display the data.

clear distance
tic
for i = 1 : 10000
   distance_so(i) = readDistance(mysonicsensor);
end
dur = toc;
time_so = dur/length(distance_so):dur/length(distance_so):dur;

With the function unique we determine the distinct distance values returned by the sensor by typing

unique_distances = unique(distance_so)
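The separation of random and systematic errors described in this exercise can be illustrated with simulated numbers (the bias and noise level below are made up for the sketch, not taken from the recorded data):

```matlab
% Simulated repeated measurements with a 0.8 cm bias and 0.2 cm noise
d_true = 100;                        % tape-measure reference (cm)
d_meas = 100.8 + 0.2*randn(1,1e4);   % simulated sensor readings (cm)
s_r = std(d_meas)                    % random error, limiting the precision
s_s = mean(d_meas) - d_true          % systematic error, limiting the accuracy
```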

Finally, we present the distances as a plot against time and the frequency distribution of the distances in a histogram (Fig. 4.7). From the results of the distance measurements at different distances between the board and the sensor, we can learn a lot about the sensor and the quality of the experimental setup used, which will also be relevant to similar measurements in the geosciences. The first observation is the unusual measurement range of between ≈3 and 255 cm. The unfortunately very unsatisfactory LEGO MINDSTORMS EV3 Hardware Developer Kit (The LEGO Group 2013a) states that the sensor yields a measurement precision of 1 cm. This precision, together with the observed range, suggests an 8-bit unsigned integer data type (uint8), which allows 2^8 = 256 different measurement values from 0 cm (but because of the design of the sensor only from 3 cm) to 255 cm in 1 cm steps. You can read in the Hardware Developer Kit that the sensor indeed includes a small 8-bit microcontroller, which sends the data to the EV3 Brick. The LEGO EV3 Ultrasonic Sensor uses a 226N0120300 piezo ultrasonic sensor, including a transmitter (or sender, the left eye of the sensor) and a receiver (or receiver transducer, the right eye of the sensor). The transmitter emits a 40 kHz signal about 12 times per second. The receiver is most sensitive at 40 kHz (see the 226N0120300 data sheet in the book’s electronic supplementary material) and has a


Fig. 4.7 Histogram of the distance measurements from the LEGO EV3 Ultrasonic Sensor pointing towards a wooden board at a distance of 100 cm (measured with a tape measure). A MATLAB script carries out multiple distance measurements and displays the result.

beam angle of ≈90°. Both transmitter and receiver consist of small metal plates, which are piezoelectric elements with positive and negative charges. Voltage changes cause the plates in the transmitter to move, generating outgoing sound waves, while incoming sound waves reflected by the wooden board produce voltage changes in the receiver. The time difference between outgoing and incoming sound waves can be used, together with the speed of sound (343 m/s), to calculate the distance from the sensor to the board and back. The LEGO MINDSTORMS EV3 Firmware Source Code Documentation (The LEGO Group 2013b), and also the data sheets for the sensor (226N0120300) and the microcontroller (STM8S103F3), both of which are included in the book’s electronic supplement, state that the analog voltage signal of the piezo ultrasonic transmitter–receiver system is sampled with a 10-bit analog–digital converter (ADC) using successive approximation. From such an ADC we would expect 2^10 = 1,024 different values. However, the data processed in the EV3 Brick and transmitted to MATLAB are received as 32-bit floating-point values of data type (class) single. This data type allows us to store 2^32 ≈ 4.3 billion different values. The sensor’s theoretical range of between 0 and 255 cm at 0.1 cm resolution requires only 2,551 different values, more than the 8-bit or 10-bit data types can store but much less than can be stored as 32-bit single-precision data. Unfortunately, the information provided by LEGO does not indicate where the data are converted from 8-bit to a higher bit depth before they arrive in MATLAB as 32-bit floating-point values of data type (class) single. Gaps in the 0:0.1:255 distance vector are observed in the frequency distribution histograms, which may indicate that the value range is defined by a 10-bit ADC. That would mean that not all of the


theoretical values in the 0:0.1:255 distance vector are possible, but only 2^10 = 1,024 different distance values. Unfortunately, LEGO support could not help us with this point and so it remains a matter of speculation. The description of the statistical parameters is somewhat difficult due to the data type of the measurements. When the sensor is touching the board we get three different distance measurements of about 3 cm, i.e. 3.21, 3.24, and 3.26 cm, with a mode of 3.26 cm. There are no values less than 3.21 cm, suggesting that this is the smallest possible distance we can measure. Measurement errors result in this distance being recorded only 17 times, whereas a distance of 3.24 cm is recorded 131 times and a distance of 3.26 cm is recorded 9,852 times; the latter is therefore the mode of the frequency distribution of the distances between sensor and board. As indicated in the previous paragraph, three values are missing in between: 3.22, 3.23, and 3.25 cm do not appear in our series of 10,000 measurements. How should we describe the precision of such a measurement, using the parameters of descriptive statistics described in Chap. 3 of Trauth (2020)? We have a very large sample size of n = 10,000 but only three different values: 3.21, 3.24, and 3.26 cm. The frequency distribution of the distances does not allow us to test for a particular type of distribution (e.g. normal distribution, log-normal distribution). However, the calculation of mean values and standard deviations, for example, requires a normal distribution. The calculation of quantiles (e.g. median and quartiles) also makes little sense when there are only three discrete values. The best solution in this case is not to bother with descriptive statistics but to simply list the three values 3.21, 3.24, and 3.26 cm, including the information that almost 100% (actually 98.52%) of the measured values are 3.26 cm, probably due to a weighting algorithm in the software of the sensor and/or the EV3 Brick.
If the number of different values increases, e.g. with larger distances from the wooden board, one can indicate the value range in the form of 3.21 cm < distance < 3.26 cm and the mode as mode = 3.26 cm. Alternatively, we can write distance = [3.21;3.26] cm with mode = 3.26 cm, but we need to remember that these are floating-point numbers with gaps, and not real numbers. A similar dominance of one particular distance is evident in all measurements up to about 100 cm, with the other distances appearing as outliers. The proportion of such outliers increases with increasing distance from the board. At 200 cm this pattern continues, but clusters of very low values now occur at 80 and 100 cm, with a gap between these clusters and the most common value at about 200 cm. The algorithm for weighting the values may fail for such distances and incorrect measurements are misjudged to be correct. At 240 cm we find a relatively wide range of values between 240 and 250 cm and a single value of 255 cm. Here we are close to the upper limit of the range and some disturbances in the measurements, therefore, yield values >255 cm, which because they are outside the range are all recorded as a single value of 255 cm, as indicated in the LEGO MINDSTORMS EV3 Firmware Source Code Documentation (The LEGO Group 2013b). In summary, the measurement setup can be used to measure distances of up to one meter with a precision of about 1 cm, as indicated in the LEGO MINDSTORMS EV3 Hardware Developer Kit (The LEGO Group 2013a), while


distances larger than two meters are difficult to measure. Comparing the most frequent values (i.e. the modes) of the measurements with the actual distance from the wooden board reveals significant systematic errors. All distances recorded by the sensor are larger than the distance measured with the tape measure. For example, the distances recorded by the sensor at 100 cm are all between 0.4 and 1.2 cm more than 100 cm, and therefore of the same order of magnitude as the random error. Since we have been able to quantify the systematic error and thus to determine the accuracy of the setup, we can correct for it in our exercise by subtracting 1 cm from all measured distances. However, we cannot avoid the random error: we can only reduce it by increasing the sample size.
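The correction for the systematic error mentioned above can be written in one line; a sketch, remembering that readDistance values are stored in meters, so that 1 cm corresponds to 0.01 m:

```matlab
% Subtract the quantified bias of about 1 cm from all measured distances
bias = 0.01;                          % bias in meters
distance_corr = distance_so - bias;   % bias-corrected distances (m)
```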

4.5.3 Spatial Resolution of the LEGO EV3 Ultrasonic Sensor

The quality of geoscientific mapping and imaging depends on the spatial resolution of the methods used for the measurements. The spatial resolution depends not only on the technical resolution of the measuring device, but also on the size of the object to be detected in relation to its distance from the sensor, and on the influence of external disturbances. The following exercise demonstrates how to determine the spatial resolution of a setup used to map a thin stick in front of a wooden board. We regard this stick as a unit impulse and want to obtain a one-dimensional image of this impulse, i.e. determine the impulse response of the setup. The absolute value of the Fourier transform of the impulse response is the magnitude response of the setup, from which we can determine the spatial resolution of the measurements (see Chap. 6 of Trauth 2020). We will use as an example the above measurements of the distance of the LEGO MINDSTORMS EV3 Ultrasonic Sensor (Item No. 45504) from a wooden board, which in our example is 93 cm wide and 53 cm high. About 25 cm in front of the board we place a 0.8 cm wide LEGO stick, which consists of two LEGO Beams 15 (Item No. 64871) fixed on top of each other with adhesive tape. The two beams are attached to a base made of two pieces of LEGO Technic Beam Triangle Thin (Item No. 99773) with three pieces of LEGO Blue Long Pin with Friction Ridges Lengthwise (Item No. 4514553). The sensor is mounted on a vehicle with a LEGO EV3 Large Servo Motor (Item No. 45502) and four LEGO Black Train Wheel with Axle Hole and Friction Band items (Item No. 55423/57999). The vehicle moves in 90 steps over a distance of ≈30 cm, on rails that are parallel to the wooden board and at a distance of ≈35 cm from it. The sensor takes a measurement approximately every 0.33 cm, i.e. it yields individual distances rather than the sort of complete image of the object of interest that would be obtained from a camera or a scanner. However, since we move the sensor, we can map the geometry of the wooden board and stick with our setup. We carry out multiple (e.g. 250) measurements at each position and then average these measurements to improve the signal-to-noise ratio in our series of distances. The 90 averaged values along a distance of ≈30 cm are then standardized, i.e. adjusted so that the sum of all


measured values comes to one, before we calculate the Fourier transform of the impulse response to determine the spatial resolution of the experimental setup.

Exercise

1. Download the LEGO MINDSTORMS Education EV3 software from the LEGO Education webpage. Install the software on your computer and read the manual. Download the LEGO MINDSTORMS EV3 Support from MATLAB from the manufacturer’s webpage. Install the hardware support on your computer and read the manual. Set up a USB connection between your computer and the EV3 Brick.
2. Assemble a four-wheeled rail vehicle with a large motor, with an ultrasonic sensor on the side that takes measurements perpendicular to the direction of movement. The sensor should point towards a wooden board placed at a distance of about 30 cm from the sensor, as measured with a tape measure. We place a ≈0.8 cm thick stick at a distance of ≈20 cm in front of the board, and it is the distance of this stick from the sensor, as well as its width, that is to be determined. This is achieved by moving the vehicle a few millimeters, obtaining 250 readings from the sensor, and then again moving the vehicle to the next position. Write a MATLAB script to move the vehicle, acquire the distance values with the sensor, and then record the measurements of the sensor.
3. Display the series of distance measurements. Discuss and delete possible faulty measurements, standardize the data, and calculate the magnitude response of the setup. Discuss the results, especially the suitability of the setup for detecting objects.

Solution

1. Assemble the vehicle according to the LEGO building instructions. The building instructions are in a LEGO Digital Designer Model (LXF) file, which can be opened with the free LEGO Digital Designer software or STUDIO 2.0 by BrickLink, available for computers running both macOS and Windows.
2. Create a MATLAB script that establishes a connection to the motor and the ultrasonic sensor and then uses a for loop to move the vehicle in 90 steps, with a nested for loop to carry out replicate measurements (e.g. 250) of the distance between the sensor and the wooden board or the stick in front of it.
3. Store the data in MATLAB, calculate the mean value of the replicate distance measurements, and display the impulse response determined from this calculation. If necessary, eliminate erroneous measured values such as outliers and then calculate the magnitude response by Fourier transforming the impulse response.
4. As we can see, the 0.8 cm wide stick is recognized but is reproduced as a relatively wide area of ≈7 cm at a distance of ≈12 cm from the sensor. The actual distance of the stick measured with the tape measure is ≈10 cm, a bit closer than that measured by the sensor. In the calculated magnitude response


for objects of any size, you can read how much the amplitude of the measurement signal is attenuated, until for very small objects it can no longer be distinguished from the noise caused by measurement errors.

MATLAB Script

We use the following MATLAB script for tasks 2–4. All graphics generated by the script, as well as the raw measurement data, can be found in the book’s electronic supplementary material. The supplement also contains video material and the construction plans for the vehicle. Since the vehicle is not very big we do not mount the EV3 Brick on the vehicle but instead put it to one side. However, we then need longer cables than those offered by LEGO (25, 35, and 50 cm). The cables shown in the photos in the book’s electronic supplementary material can be purchased from Mindsensors at a reasonable price; they are more flexible than the LEGO cables and up to one meter long. We first clear the workspace and the Command Window, and close all Figure Windows.

clear, clc, close all

We then select a USB connection in the MATLAB script by typing

mylego = legoev3('USB');

We then establish a connection to motor A, which is attached to EV3 output port A, and a connection to the ultrasonic sensor, which is attached to EV3 input port 1. The connection to motor A needs to be defined using motor in the MATLAB script, but the input port for the sensor is automatically detected by MATLAB using sonicSensor. We use the motor and sonicSensor functions to create the object handles mymotor_A and mysonicsensor for these connections, which will later be used to control the motor and to read the data from the sensor. We also reset the motor’s rotation counter and set a low speed for the stepwise movement.

mysonicsensor = sonicSensor(mylego);
mymotor_A = motor(mylego,'A');
resetRotation(mymotor_A);
mymotor_A.Speed = -10;

We then start the exercise, which comprises 250 distance measurements at each of 90 locations along the railway line.

for i = 1 : 90
   start(mymotor_A)
   pause(0.20)
   stop(mymotor_A)
   rotation_A(i) = readRotation(mymotor_A);
   for j = 1 : 250


      distance(i,j) = readDistance(mysonicsensor);
   end
   disp(num2str(mean(distance(i,:),2)))
   pause(1)
end

We then calculate the arithmetic mean distancemean of the 250 readings at each position.

distancemean = (mean(distance'))';
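The same averaging can be written without the double transpose by specifying the dimension explicitly:

```matlab
% Average across the 250 replicate readings (columns) at each position
distancemean = mean(distance,2);
```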

We now calculate the total distance fulltrip covered, from the number of revolutions made by the motor. Unfortunately, these readings are not very precise and we therefore get a ≈5% error in the total distance. The LEGO Black Train Wheel with Axle Hole and Friction Band (Item No. 55423/57999) has a radius of 8.5 mm.

fulltrip = 2*pi*8.5 * max(abs(double(rotation_A)))/360 * 0.1
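As a plausibility check with made-up readings (not the recorded values): if the motor turned through 1800° in total, i.e. five revolutions, the formula gives

```matlab
% Five revolutions of a wheel with r = 8.5 mm: 2*pi*8.5*5 = 267 mm = 26.7 cm
2*pi*8.5 * 1800/360 * 0.1   % about 26.7 cm, consistent with the ~30 cm trip
```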

We use this information to create a length axis by typing

trip = 0+fulltrip/length(distancemean):...
   fulltrip/length(distancemean):...
   fulltrip;
trip = trip';

We then calculate the sampling interval (in cm) by typing

sampleint = fulltrip/length(distancemean)

The raw distancemean data are displayed in order to determine the parameters for standardizing them so that the total sum of all the measured distances comes to one.

figure('Position',[200 600 800 400],...
   'Color',[1 1 1]);
axes('LineWidth',1,...
   'FontName','Helvetica',...
   'FontSize',12,...
   'XGrid','on',...
   'YGrid','on');
line(trip,-distancemean,...
   'LineWidth',1);
xlabel('Length (cm)',...
   'FontName','Helvetica',...
   'FontSize',12);

4.5 Exercises

137

ylabel('Distance (m)',... 'FontName','Helvetica',... 'FontSize',12);

As we can see from the plot, the sensor is ~0.33 m away from the board. We therefore subtract this value from the distances, replace all negative values with a zero, and then standardize the data to a total of 1 by dividing distancemeancwall by the sum of distancemeancwall, to obtain the impulse response of the setup.

distancemeancwall = -distancemean + 0.33;
distancemeancwall(distancemeancwall < 0) = 0;
distancemeancwall = distancemeancwall/sum(distancemeancwall);

[…] 24 bits. HEIF gained greater popularity only after September 2017, when Apple included the format in the iOS 11 and macOS High Sierra operating systems (https://www.mpegstandards.org).

5.5 Processing Images on a Computer

We first need to learn how to read an image from a graphics file into the workspace. As an example we use a satellite image covering a 10.5 km by 11 km area in northern Chile: https://asterweb.jpl.nasa.gov/gallery/images/unconform.jpg

The file unconform.jpg is a processed TERRA-ASTER satellite image that can be downloaded free of charge from the NASA web page. We save this image in the working directory. The command

clear, close all, clc
I1 = imread('unconform.jpg');

reads and decompresses the JPEG file, imports the data as a 24-bit RGB image array, and stores it in a variable I1. The command

whos

shows how the RGB array is stored in the workspace:

Name    Size         Bytes     Class    Attributes
I1      729x713x3    1559331   uint8

The details indicate that the image is stored as a 729-by-713-by-3 array, representing a 729-by-713 array for each of the colors red, green, and blue. The listing of the current variables in the workspace also provides the information that the array is a uint8 array, indicating that each array element representing one pixel contains 8-bit integers. These integers represent intensity values between 0 (minimum intensity) and 255 (maximum). As an example, here is a sector in the upper-left corner of the data array for red:

I1(50:55,50:55,1)

ans =

  6x6 uint8 matrix

   174   177   180   182   182   182
   165   169   170   168   168   170
   171   174   173   168   167   170
   184   186   183   177   174   176
   191   192   190   185   181   181
   189   190   190   188   186   183

We can now view the image using the command

imshow(I1)

which opens a new Figure Window showing an RGB composite of the image (Fig. 5.3). In contrast to the RGB image, a grayscale image needs only a single array to store all the necessary information. We therefore convert the RGB image into a grayscale image using the command rgb2gray (RGB to gray):


Fig. 5.3 RGB true-color image from the unconform.jpg file. After decompressing and reading the JPEG file into a 729-by-713-by-3 array, MATLAB interprets and displays the RGB composite using the function imshow. Original image courtesy of NASA/GSFC/METI/ERSDAC/JAROS and U.S./Japan ASTER Science Team.

I2 = rgb2gray(I1);
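The conversion is not a simple average of the three channels: rgb2gray forms a weighted sum using the BT.601 luminance coefficients, which reflect how strongly each color contributes to perceived brightness. A minimal Python sketch of this weighting for a single pixel (a stand-alone illustration, not MATLAB's implementation):

```python
def to_gray(r, g, b):
    # BT.601 luminance weights (0.2989 R + 0.5870 G + 0.1140 B),
    # rounded back to an 8-bit integer.
    return round(0.2989 * r + 0.5870 * g + 0.1140 * b)

# Pure red, green, and blue pixels map to clearly different grays:
print(to_gray(255, 0, 0), to_gray(0, 255, 0), to_gray(0, 0, 255))
```

Green contributes most strongly to perceived brightness, which is why a pure green pixel maps to a much lighter gray than a pure blue one.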

The new workspace listing now reads

Name    Size        Bytes     Class    Attributes
I1      729x713x3   1559331   uint8
I2      729x713     519777    uint8
ans     6x6         36        uint8

in which the difference between the 24-bit RGB and the 8-bit grayscale arrays can be observed. The variable ans (for most recent answer) was created above using I1(50:55,50:55,1), without assigning the output to another variable. The commands

imshow(I2)


display the result. It is easy to see the difference between the two images if they are displayed in separate Figure Windows. Let us now process the grayscale image. First, we display a histogram of intensities by typing

imhist(I2)

A simple technique to enhance the contrast in such an image is to transform this histogram to obtain an equal distribution of grayscales.

I3 = histeq(I2);
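The idea behind histogram equalization is to map each gray value to a new value proportional to its cumulative frequency, so that the output histogram is approximately flat. A simplified stand-alone sketch in Python (illustrating the principle, not MATLAB's implementation of histeq):

```python
from collections import Counter

def equalize(pixels, levels=256):
    # Cumulative distribution of the input gray values.
    counts = Counter(pixels)
    n = len(pixels)
    cdf = {}
    running = 0
    for value in sorted(counts):
        running += counts[value]
        cdf[value] = running / n
    # Map each pixel to its cumulative frequency, rescaled to 8 bits.
    return [round(cdf[p] * (levels - 1)) for p in pixels]

# A dark, low-contrast strip of pixels is spread over the full range:
print(equalize([10, 10, 11, 11, 12, 12, 13, 13]))
```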

We can again view the difference using

imshow(I3)

and save the results in a new file.

imwrite(I3,'unconform_gray.jpg')

We can read the header of the new file by typing

imfinfo('unconform_gray.jpg')

which yields

ans =
  struct with fields:
           Filename: [1x40 char]
        FileModDate: '18-Dec-2013 11:26:53'
           FileSize: 138419
             Format: 'jpg'
      FormatVersion: ''
              Width: 713
             Height: 729
           BitDepth: 8
          ColorType: 'grayscale'
    FormatSignature: ''
    NumberOfSamples: 1
       CodingMethod: 'Huffman'
      CodingProcess: 'Sequential'
            Comment: {}


The command imfinfo can therefore be used to obtain useful information (name, size, format, and color type) on the newly-created image file. There are many ways of transforming an original satellite image into a practical file format. The image data could, for instance, be stored in an indexed color image format, which consists of two parts: a colormap array and a data array. The colormap array is an m-by-3 array containing floating-point values between 0 and 1. Each column specifies the intensity of the red, green, and blue colors. The data array is an x-by-y array containing integer elements corresponding to the rows m of the colormap array, i.e. the specific RGB representation of a certain color. Let us transfer the above RGB image I1 into an indexed image I4. The colormap map of the image I4 should contain 16 different colors. The result of

[I4,map] = rgb2ind(I1,16);
imshow(I1), figure, imshow(I4,map)

saved as another JPEG file using

imwrite(I4,map,'unconform_ind.jpg')

clearly shows the difference between the original 24-bit RGB image (256^3 or about 16.7 million different colors) and a color image of only 16 different colors. The display of the image uses the default MATLAB colormap. Typing

imshow(I4,map)
cmap = colormap

retrieves the 16-by-3 array of the current colormap.

cmap =
    0.0588    0.0275    0.0745
    0.5490    0.5255    0.4588
    0.7373    0.7922    0.7020
    0.3216    0.2706    0.2667
    0.6471    0.6784    0.6157
    0.7961    0.8549    0.9176
    0.4510    0.3922    0.3333
    0.2000    0.1451    0.1451
    0.4824    0.5412    0.5843
    0.4039    0.4078    0.4784
    0.6667    0.7020    0.7451
    0.8980    0.8745    0.7255
    0.2824    0.2902    0.4039
    0.9569    0.9647    0.9608
    0.1765    0.1686    0.2902
    0.5843    0.5843    0.6078
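The indexed storage scheme described above is easy to illustrate: the data array holds small integers, and expanding the image to RGB is one colormap lookup per pixel. A minimal Python sketch with a hypothetical three-color colormap (note that MATLAB indexes the colormap from 1, whereas the sketch uses 0-based indices):

```python
# Colormap: one RGB triple (values on [0,1]) per row, as in MATLAB.
colormap = [
    (0.0, 0.0, 0.0),   # index 0: black
    (1.0, 0.0, 0.0),   # index 1: red
    (1.0, 1.0, 1.0),   # index 2: white
]

# Data array: one colormap index per pixel.
data = [[0, 1],
        [1, 2]]

# Expanding the indexed image to RGB is one lookup per pixel:
rgb = [[colormap[idx] for idx in row] for row in data]
print(rgb)
```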

We can replace the default colormap with any other built-in colormap. Typing

help graph3d

lists the predefined colormaps: parula hsv hot gray bone copper pink white flag lines colorcube vga jet prism cool autumn spring winter summer

- Blue-green-orange-yellow color map - Hue-saturation-value color map. - Black-red-yellow-white color map. - Linear gray-scale color map. - Gray-scale with tinge of blue color map. - Linear copper-tone color map. - Pastel shades of pink color map. - All white color map. - Alternating red, white, blue, and black color map. - Color map with the line colors. - Enhanced color-cube color map. - Windows colormap for 16 colors. - Variant of HSV. - Prism color map. - Shades of cyan and magenta color map. - Shades of red and yellow color map. - Shades of magenta and yellow color map. - Shades of blue and green color map. - Shades of green and yellow color map.

The default MATLAB colormap is parula, which is the former name of a genus of New World warblers with blue, green, and yellow feathers. The advantages of parula over the previous default colormap jet are discussed in blog posts entitled A New Colormap for MATLAB by Steve Eddins of The MathWorks, Inc. at https://blogs.mathworks.com/steve/


An introduction to the history of colormaps entitled Origins of Colormaps, which includes the history of jet, is provided by Cleve Moler, the creator of MATLAB, at https://blogs.mathworks.com/cleve/

As an example of colormaps, we can use hot from the above list by typing

imshow(I4,map)
colormap(hot)

to display the image with a black-red-yellow-white colormap. Typing edit hot

reveals that hot is a function that creates an m-by-3 array containing floating-point values between 0 and 1. We can also design our own colormaps, either by manually creating an m-by-3 array or by creating another function similar to hot. As an example, the colormap precip.m (which is a yellow-blue colormap included in the book's file collection) was created to display precipitation data, with yellow representing low rainfall and blue representing high rainfall. Alternatively, we can also use random numbers. Typing

rng(0)
map = rand(16,3);
imshow(I4,map)

to display the image with random colors. Finally, we can create an indexed color image of three different colors, displayed with a simple colormap of full-intensity red, green, and blue.

[I5,map] = rgb2ind(I1,3);
imshow(I5,[1 0 0;0 1 0;0 0 1])

Typing

imwrite(I5,map,'unconform_rgb.jpg')

saves the result as another JPEG file.

5.6 Image Enhancement, Correction and Rectification

This section introduces some fundamental tools for image enhancement, correction and rectification. As an example, we use an image of varved sediments deposited around 33 kyr ago in a landslide-dammed lake (25°58.900′S 65°45.676′W) in the Quebrada de Cafayate of Argentina (Trauth et al. 1999, 2003). The diapositive was taken on 1 October 1996 with a film-based single-lens reflex (SLR) camera. A 30-by-20 cm print was made from the slide, which was then scanned using a flatbed scanner and saved as a 394 KB JPEG file. We also use this image as an example because it demonstrates some problems that we can solve with the help of image enhancement (Fig. 5.4). We then use the image to demonstrate how to measure color-intensity transects for use in time series analysis. We can read and decompress the file varves_original.jpg by typing

clear, close all, clc
I1 = imread('varves_original.jpg');


Fig. 5.4 Results of image enhancements: a original image, b image with intensity values adjusted using imadjust and gamma = 1.5, c image with contrast enhanced using adapthisteq, d image after filtering using a 20-by-20 pixel filter with the shape of a Gaussian probability density function, a mean of zero and a standard deviation of 10, using fspecial and imfilter.


which yields a 24-bit RGB image array I1 in the MATLAB workspace. Typing

whos

yields

Name    Size          Bytes     Class    Attributes
I1      1096x1674x3   5504112   uint8

revealing that the image is stored as a uint8 array with dimensions of 1,096-by-1,674-by-3, i.e. 1,096-by-1,674 arrays for each color (red, green and blue). We can display the image using the command

imshow(I1)

which opens a new Figure Window showing an RGB composite of the image. As we can see, the image has a low contrast level and very pale colors; the sediment layers are also not exactly horizontal. These are the characteristics of the image that we want to improve in the following steps. We first adjust the image intensity values or colormap. The function imadjust(I1,[li; hi],[lo; ho]) maps the values of the image I1 to new values in I2, such that values between li and hi are adjusted to values between lo and ho. Values below li and above hi are clipped, i.e. these values are adjusted to lo and ho, respectively. We can determine the range of the pixel values using

lh = stretchlim(I1)

which yields

lh =
    0.3255    0.2627    0.2784
    0.7020    0.7216    0.7020

indicating that the red color ranges from 0.3255 to 0.7020, green ranges from 0.2627 to 0.7216, and blue ranges from 0.2784 to 0.7020. We can use this information to automatically adjust the image with imadjust by typing

I2 = imadjust(I1,lh,[]);


which adjusts the ranges to the full range of [0,1]. We then display the result.

imshow(I2)

We can see the difference between the very pale image I1 and the more saturated image I2. The parameter gamma in imadjust(I1,[li;hi],[lo;ho],gamma) specifies the shape of the curve describing the relationship between I1 and I2. If gamma < 1 the mapping is weighted toward higher (brighter) output values. If gamma > 1 the mapping is weighted toward lower (darker) output values. The default value of gamma = 1 causes linear mapping of the values in I1 to new values in I2.

I3 = imadjust(I1,lh,[],0.5);
I4 = imadjust(I1,lh,[],1.5);
subplot(2,2,1), imshow(I1)
title('Original Image')
subplot(2,2,2), imshow(I2)
title('Adjusted Image,Gamma=1.0')
subplot(2,2,3), imshow(I3)
title('Adjusted Image,Gamma=0.5')
subplot(2,2,4), imshow(I4)
title('Adjusted Image,Gamma=1.5')
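The gamma mapping just described can be written out explicitly: the input is clipped to [li,hi], normalized to [0,1], raised to the power gamma, and rescaled to [lo,ho]. A Python sketch of this mapping for a single intensity value (a simplified model of what imadjust is documented to do, not its actual code):

```python
def adjust(v, li, hi, lo=0.0, ho=1.0, gamma=1.0):
    v = min(max(v, li), hi)        # clip to [li,hi]
    t = (v - li) / (hi - li)       # normalize to [0,1]
    return lo + (ho - lo) * t ** gamma

# gamma < 1 brightens mid-tones, gamma > 1 darkens them:
print(adjust(0.5, 0.0, 1.0, gamma=0.5), adjust(0.5, 0.0, 1.0, gamma=1.5))
```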

We can use imhist to display a histogram showing the distribution of intensity values for the image. Since imhist only works for two-dimensional images, we examine the histogram of the red color only.

subplot(2,2,1), imhist(I1(:,:,1))
title('Original Image')
subplot(2,2,2), imhist(I2(:,:,1))
title('Adjusted Image, Gamma=1.0')
subplot(2,2,3), imhist(I3(:,:,1))
title('Adjusted Image, Gamma=0.5')
subplot(2,2,4), imhist(I4(:,:,1))
title('Adjusted Image, Gamma=1.5')

The result obtained using imadjust differs from that obtained using histeq (which we used in the previous section to enhance the contrast in the image). The function histeq(I1,n) transforms the intensity of image I1, returning in I5 an image with n discrete intensity levels. A roughly equal number of pixels is ascribed to each of the n levels in I5, so that the histogram of I5 is approximately flat. Since histeq only works for two-dimensional images, histogram equalization (using histeq) has to be carried out separately for each color. We use n = 50 in our exercise, which is slightly below the default value of n = 64.

I5(:,:,1) = histeq(I1(:,:,1),50);
I5(:,:,2) = histeq(I1(:,:,2),50);
I5(:,:,3) = histeq(I1(:,:,3),50);
subplot(2,2,1), imshow(I1)
title('Original Image')
subplot(2,2,3), imhist(I1(:,:,1))
title('Original Image')
subplot(2,2,2), imshow(I5)
title('Enhanced Image')
subplot(2,2,4), imhist(I5(:,:,1))
title('Enhanced Image')

The resulting image looks quite disappointing and we therefore use the improved function adapthisteq instead of histeq. This function uses the contrast-limited adaptive histogram equalization (CLAHE) by Zuiderveld (1994). Unlike histeq and imadjust, this algorithm works on small parts (or tiles) of the image, rather than on the entire image. The neighboring tiles are then combined using bilinear interpolation to eliminate edge effects.

I6(:,:,1) = adapthisteq(I1(:,:,1));
I6(:,:,2) = adapthisteq(I1(:,:,2));
I6(:,:,3) = adapthisteq(I1(:,:,3));
subplot(2,2,1), imshow(I1)
title('Original Image')
subplot(2,2,3), imhist(I1(:,:,1))
title('Original Image')
subplot(2,2,2), imshow(I6)
title('Enhanced Image')
subplot(2,2,4), imhist(I6(:,:,1))
title('Enhanced Image')

The result looks slightly better than that obtained using histeq. However, all three functions for image enhancement (imadjust, histeq and adapthisteq) provide numerous ways to manipulate the final outcome. The Image Processing Toolbox User's Guide (MathWorks 2021a) and the excellent book by Gonzalez et al. (2009) provide more detailed introductions to the use of the various available functions and their corresponding parameters for image enhancement.

The Image Processing Toolbox also includes numerous functions for the 2D filtering of images. The most popular 2D filters for images are Gaussian filters, median filters, and filters for image sharpening. Both Gaussian and median filters are used to smooth an image, mostly to reduce the amount of noise. In most examples, the signal-to-noise ratio is unknown and adaptive filters are therefore used for noise reduction. A Gaussian filter can be designed using

h = fspecial('gaussian',20,10);
I7 = imfilter(I1,h);

where fspecial creates predefined 2D filters such as moving average, disk, or Gaussian filters. The Gaussian filter weights h are calculated using fspecial('gaussian',20,10), where 20 corresponds to the dimension of a 20-by-20 pixel filter following the shape of a Gaussian probability density function with a standard deviation of 10. Next, we calculate I8, which is a median-filtered version of I1.

I8(:,:,1) = medfilt2(I1(:,:,1),[20 20]);
I8(:,:,2) = medfilt2(I1(:,:,2),[20 20]);
I8(:,:,3) = medfilt2(I1(:,:,3),[20 20]);
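The Gaussian weights themselves are straightforward to construct: each weight follows a two-dimensional Gaussian bell centered on the filter, and the weights are normalized to sum to one, which is why the filter preserves the mean intensity of the image. A stand-alone Python sketch of such a kernel (illustrating the principle behind fspecial('gaussian',...), not MATLAB's implementation):

```python
import math

def gaussian_kernel(size, sigma):
    c = (size - 1) / 2.0   # center of the kernel
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(map(sum, k))
    # Normalize so the weights sum to one.
    return [[v / s for v in row] for row in k]

k = gaussian_kernel(5, 2)
print(k[2][2])   # the central weight is the largest
```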

Since medfilt2 only works for two-dimensional data, we again apply the filter separately to each color (red, green, and blue). The filter output pixels are the medians of the 20-by-20 neighborhoods around the corresponding pixels in the input image. The third type of filter is used for image sharpening, which we can demonstrate using imsharpen.

I9 = imsharpen(I1);

This function calculates the Gaussian lowpass filtered version of the image, which is used as an unsharp mask, i.e. the sharpened version of the image is calculated by subtracting the blurred filtered version from the original image. The function comes with several parameters that control the ability of the Gaussian filter to blur the image and the strength of the sharpening effect, together with a threshold specifying the minimum contrast required for a pixel to be considered an edge pixel and be sharpened by unsharp masking. Comparing the results of the three filtering exercises with the original image

subplot(2,2,1), imshow(I1)
title('Original Image')
subplot(2,2,2), imshow(I7)
title('Gaussian Filter')
subplot(2,2,3), imshow(I8)
title('Median Filter')
subplot(2,2,4), imshow(I9)
title('Sharpening Filter')

demonstrates the effect of the 2D filters. As an alternative to these time-domain filters, we can also design 2D filters with a specific frequency response. Again, the book by Gonzalez et al. (2009) provides an overview of 2D frequency-selective filtering for images, including functions used to generate such filters. The authors also demonstrate the use of a 2D Butterworth lowpass filter in image processing applications.

We next rectify the image, i.e. we correct the image distortion by transforming it to a rectangular coordinate system using a script similar to that used for georeferencing satellite images. This we achieve by defining four points within the image as the corners of a rectangular reference area. We first define the upper left, lower left, upper right, and lower right corners of the reference area, and then press return. Note that it is important to pick the coordinates of the corners in this particular order. In this instance, we use the original image I1, but we could also use any other enhanced version of the image from the previous exercises. As an example we can click the left side of the ruler at 1.5 cm and at 4.5 cm, where two thin white sediment layers cross the ruler, to select these two points as the upper-left and lower-left corners. We then choose the upper-right and lower-right corners, further to the right of the ruler but also lying on the same two white sediment layers.

close all
imshow(I1)
movingpoints = ginput

We then press return, which yields

movingpoints =
  517.0644   508.9059
  511.5396   733.5792
  863.2822   519.9554
  859.5990   739.1040

or similar values. The image and the reference points are then displayed in the same Figure Window.

close all
imshow(I1)
hold on
line(movingpoints(:,1),movingpoints(:,2),...
    'LineStyle','none',...
    'Marker','+',...
    'MarkerSize',48,...
    'Color','b')
hold off

We arbitrarily choose new coordinates for the four reference points, which are now on the corners of a rectangle. To preserve the aspect ratio of the image, we select numbers that are the means of the differences between the x- and y-values of the reference points in movingpoints.

dx = (movingpoints(3,1)+movingpoints(4,1))/2- ...
     (movingpoints(1,1)+movingpoints(2,1))/2
dy = (movingpoints(2,2)+movingpoints(4,2))/2- ...
     (movingpoints(1,2)+movingpoints(3,2))/2

which yields

dx =
  347.1386

dy =
  221.9109

We therefore choose

fixedpoints(1,:) = [0 0];
fixedpoints(2,:) = [0 dy];
fixedpoints(3,:) = [dx 0];
fixedpoints(4,:) = [dx dy];

The function fitgeotrans now takes the pairs of control points, fixedpoints and movingpoints, and uses them to derive a spatial transformation matrix tform using a transformation of the type projective.

tform = fitgeotrans(movingpoints,fixedpoints,'projective');
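A projective transformation of the kind fitgeotrans derives is a 3-by-3 matrix applied in homogeneous coordinates: the point is multiplied by the matrix and the result is divided by the third component. A Python sketch with a hypothetical matrix combining a scaling and a translation (for such an affine case the third row is [0 0 1], so the division is by 1):

```python
def apply_projective(H, x, y):
    # Multiply (x, y, 1) by the 3x3 matrix H ...
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    # ... and divide by the third homogeneous coordinate.
    return u / w, v / w

# Hypothetical transformation: scale by 0.5, then shift by (10, 20).
H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 20.0],
     [0.0, 0.0, 1.0]]
print(apply_projective(H, 100.0, 100.0))   # → (60.0, 70.0)
```

A genuinely projective matrix has a nonzero third row in its first two entries, which is what lets the transformation map the distorted quadrilateral of picked corners onto a rectangle.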

We next need to estimate the spatial limits for the outputs, XBounds and YBounds, using outputLimits.

xLimitsIn = 0.5 + [0 size(I1,2)]
yLimitsIn = 0.5 + [0 size(I1,1)]
[XBounds,YBounds] = outputLimits(tform,xLimitsIn,yLimitsIn)

We then use imref2d to reference the image to world coordinates.

Rout = imref2d(size(I1),XBounds,YBounds)

An imref2d object Rout encapsulates the relationship between the intrinsic coordinates anchored to the rows and columns of the image and the spatial location of the same row and column locations in a world coordinate system. Finally, the projective transformation can be applied to the original RGB composite I1 to obtain a rectified version of the image (I10).

I10 = imwarp(I1,tform,'OutputView',Rout);

We now compare the original image I1 with the rectified version I10.

subplot(2,1,1), imshow(I1)
title('Original Image')
subplot(2,1,2), imshow(I10)
title('Rectified Image')

We see that the rectified image has black areas at the corners. We can remove these black areas by cropping the image using imcrop.

I11 = imcrop(I10);
subplot(2,1,1), imshow(I1)
title('Original Image')
subplot(2,1,2), imshow(I11)
title('Rectified and Cropped Image')

The function imcrop displays the image with a resizable rectangular tool that can be interactively positioned and manipulated using the mouse. After manipulating the rectangle into the desired position, the image is cropped by either double-clicking on the rectangle or choosing Crop Image from the tool's context menu.

5.7 Exercises

The following exercises demonstrate the acquisition and processing of images within the wavelength range of visible light. We will first take photos with smartphones and import them into MATLAB, and then acquire images directly from a webcam with MATLAB. The more advanced exercises demonstrate how images are stitched, enhanced, rectified, and referenced. The last two exercises use the LEGO MINDSTORMS EV3 Color Sensor to scan a simple image, having first determined its spatial resolution. Each exercise involves setting up the equipment, acquiring the measurements, and then analyzing the acquired measurements on the computer using the same software tools that are also used in large-scale exercises.

5.7.1 Smartphone Camera/Webcam Images with MATLAB

This first, simple exercise demonstrates how to take a photo with a smartphone camera (or any other available digital camera), import it (e.g. in JPEG format) into MATLAB, display it, and then export it in another format (e.g. in PNG format). The exercise uses the MATLAB Support Package for USB Webcams and takes pictures with two different webcams, one being a camera built into a computer display and the other an external camera. We again display the images and then export them to a hard drive.

Exercise
1. Take a picture with a smartphone or any other camera, import it into MATLAB, display the picture, and save it as a PNG file.
2. Take a picture with a webcam, import it into MATLAB, display the picture, and save it as a PNG file.

Solution
1. Take a picture and use functions from the Image Processing Toolbox to process, display, and save the images.
2. Download the MATLAB Support Package for USB Webcams from the manufacturer's webpage. Connect the webcam to a computer or use the built-in webcam of a laptop or a display. Use functions from the support package and the Image Processing Toolbox to process, display, and save the images.

MATLAB Script
We first read and decompress the file exercise_5_7_1_photo_1.jpg by typing

clear, clc, close all
I1 = imread('exercise_5_7_1_photo_1.jpg');


which yields a 24-bit RGB image array I1 in the MATLAB workspace. Typing

whos

yields

Name    Size          Bytes      Class    Attributes
I1      3024x4032x3   36578304   uint8

revealing that the image is stored as a uint8 array with dimensions of 3,024-by-4,032-by-3, i.e. 3,024-by-4,032 arrays for each color (red, green, and blue). We can display the image using the command

imshow(I1)

which opens a new Figure Window showing an RGB composite of the image. We then obtain a list of the webcams connected to the system.

webcamlist

which yields

ans =
  2x1 cell array
    {'LG UltraFine Display Camera'}
    {'C922 Pro Stream Webcam'     }

We select the LG UltraFine Display Camera using webcam.

clear, clc, close all
cam = webcam('LG UltraFine Display Camera')

which yields

cam =
  webcam with properties:
                    Name: 'LG UltraFine Display Camera'
    AvailableResolutions: {'1920x1080'}
              Resolution: '1920x1080'


We then preview the webcam picture by typing

preview(cam)

and take a picture using snapshot.

I2 = snapshot(cam);

Typing

whos

yields

Name    Size          Bytes     Class     Attributes
I2      1080x1920x3   6220800   uint8
cam     1x1           8         webcam

revealing that the image is stored as a uint8 array with dimensions of 1,080-by-1,920-by-3, i.e. 1,080-by-1,920 arrays for each color (red, green, and blue). We can display the image using the command

imshow(I2)

and export the image as a PNG file using

imwrite(I2,'exercise_5_7_1_photo_2.png')

We again obtain a list of the webcams connected to the system.

clear, clc, close all
webcamlist

We select the C922 Pro Stream Webcam using webcam.

cam = webcam('C922 Pro Stream Webcam')


which yields

cam =
  webcam with properties:
                    Name: 'C922 Pro Stream Webcam'
    AvailableResolutions: {'1920x1080'}
              Resolution: '1920x1080'

We then preview the webcam picture by typing

preview(cam)

We close the preview using

closePreview(cam)

and take a picture using snapshot.

I3 = snapshot(cam);

Typing

whos

yields

Name    Size          Bytes     Class     Attributes
I3      1080x1920x3   6220800   uint8
cam     1x1           8         webcam

revealing that the image is stored as a uint8 array with dimensions of 1,080-by-1,920-by-3, i.e. 1,080-by-1,920 arrays for each color (red, green, and blue). We can display the image using the command

imshow(I3)


and export the image as a PNG file using

imwrite(I3,'exercise_5_7_1_photo_3.png')

We can clear the webcam object using

clear('cam')

We now have a complete workflow for acquiring, processing, and displaying images with MATLAB.

5.7.2 Enhancing, Rectifying and Referencing Images

The following exercise demonstrates how to enhance, rectify and reference a photo. For this, we use a photo taken with a smartphone in our institute building. The picture shows an exhibition of 56 different marble tiles from Turkey. This workflow can also be used to process outcrop photos or photos of fossils.

Exercise
1. Take a photo of a rectangular arrangement of objects, such as the exhibition of marble tiles. Alternatively, you could take a photo of a shelf, or a building with windows.
2. Rectify and reference the image onto an orthogonal centimeter grid.
3. Determine the size of the marble tiles from the image.

Solution
1. Take the photo with a camera and import the image into MATLAB.
2. Measure the width and height of the rectangular arrangement of objects. In our example, the display of marble tiles is 656 cm wide and 131 cm high.
3. Select xy coordinates from the image using the MATLAB function ginput.
4. Use the functions fitgeotrans and imwarp to rectify and reference the image.
5. Use the data cursor of the Figure Window to determine the size of the marble tiles.

MATLAB Script
We first read and decompress the file exercise_5_7_2_photo_1.jpg by typing


clear, close all, clc
I1 = imread('exercise_5_7_2_photo_1.jpg');

which yields a 24-bit RGB image array I1 in the MATLAB workspace. Typing

whos

yields

Name    Size          Bytes      Class    Attributes
I1      3024x4032x3   36578304   uint8

revealing that the image is stored as a uint8 array with dimensions of 3,024-by-4,032-by-3, i.e. 3,024-by-4,032 arrays for each color (red, green, and blue). We can display the image using the command

imshow(I1)

which opens a new Figure Window showing an RGB composite of the image. As we can see, the image has a low level of contrast and very pale colors, and the display of the marble tiles is not exactly horizontal. We first adjust the image intensity values or colormap using imadjust.

for i = 1 : 3
    I2(:,:,i) = imadjust(I1(:,:,i));
end

The function can only be applied to 2D (or grayscale) images, so we process the three RGB channels separately using a for loop, and then display the result.

imshow(I2)

We can see the difference between the very pale image I1 and the more saturated image I2. We next rectify the image, i.e. we correct the image distortion by transforming it to a rectangular coordinate system. This we achieve by defining four points within the image as the corners of a rectangular area (our reference area). As before, we first define the upper left, lower left, upper right, and lower right corners of the rectangular display of the 56 tiles of marbles. Remember that it is important to pick the coordinates of the corners in this particular order.


movingpoints = ginput

We then press return, which yields

movingpoints =
   1.0e+03 *
    0.1528    1.1577
    0.2518    1.8807
    3.8068    1.1427
    3.6928    1.7757

or similar values. We have measured the 656 cm-by-131 cm board with a tape measure and thus know the actual coordinates (in cm) of the four corners of the image:

fixedpoints(1,:) = [0 0];
fixedpoints(2,:) = [0 131];
fixedpoints(3,:) = [656 0];
fixedpoints(4,:) = [656 131];

The function fitgeotrans now takes the pairs of control points, fixedpoints and movingpoints, and uses them to derive a spatial transformation matrix tform using a transformation of the type projective.

tform = fitgeotrans(movingpoints,fixedpoints,'projective');

We next need to estimate the spatial limits for the outputs, x and y, corresponding to the projective transformation tform and a set of spatial limits for the inputs, xLimitsIn and yLimitsIn.

xLimitsIn = 0.5 + [0 size(I1,2)];
yLimitsIn = 0.5 + [0 size(I1,1)];
[x,y] = outputLimits(tform,xLimitsIn,yLimitsIn);

We then use imref2d to reference the image, i.e. to transform the coordinates of the image in local space to coordinates in world space. The local space is the coordinate system of the original image as taken by the camera. The local coordinate system has its origin (0,0) in the upper left corner of the image, corresponding to the (1,1) element of an image array according to MATLAB conventions. The world (or global) coordinate system is defined by the user, for example, a rectangular coordinate system with an origin in the upper left corner of the display of the marble tiles and a centimeter scale in the x and y directions. If we have more than one image, as in the next exercise, we can transform the local coordinate systems of the individual images into the same world coordinate system. They can then be stitched together to form, for example, an outcrop or landscape panorama, or a mosaic of air-photos or satellite images.

Rout = imref2d(size(I1),x,y);

An imref2d object Rout encapsulates the relationship between the intrinsic coordinates anchored to the image's rows and columns and their coordinates in a world coordinate system. Finally, the projective transformation can be applied to the original RGB composite I1 to obtain a rectified version of the image (Fig. 5.5).

I3 = imwarp(I2,tform,'OutputView',Rout);
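Once the image is referenced, converting a pixel position to world coordinates is the kind of linear rescaling the imref2d object encapsulates. A simplified Python sketch (it assumes pixel edges running from 0.5 to n + 0.5 in intrinsic coordinates and a world origin in the upper left corner; the image size is a hypothetical example, not the actual size of the rectified photo):

```python
def pixel_to_world(col, row, ncols, nrows, width_cm, height_cm):
    # Linear mapping from intrinsic (pixel) to world (cm) coordinates.
    x = (col - 0.5) / ncols * width_cm
    y = (row - 0.5) / nrows * height_cm
    return x, y

# Center pixel of a hypothetical 4032-by-3024 rectified image of the
# 656 cm-by-131 cm tile board:
print(pixel_to_world(2016.5, 1512.5, 4032, 3024, 656, 131))   # → (328.0, 65.5)
```

With such a mapping in place, distances picked with the data cursor can be read off directly in centimeters, which is how the tile sizes are determined in this exercise.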

We can now display the resulting image by typing

Fig. 5.5 Result of enhancing, rectifying, and referencing a photographic image taken with a camera. The width and height of the rectangular arrangement of objects were measured with a measuring tape. The pixel xy coordinates were picked from the image using the MATLAB function ginput. The image was rectified and referenced using the functions fitgeotrans and imwarp.


close all
figure('Position',[50 500 1200 600])
axes('YDir','reverse',...
    'XLim',[x(1) x(2)],...
    'YLim',[y(1) y(2)],...
    'FontSize',18), hold on
xlabel('Width (cm)',...
    'FontSize',18)
ylabel('Height (cm)',...
    'FontSize',18)
image(I3,'XData',x,'YData',y)
axes('YDir','reverse',...
    'XLim',[x(1) x(2)],...
    'YLim',[y(1) y(2)],...
    'XGrid','on',...
    'YGrid','on',...
    'Color','none',...
    'XTickLabels',[],...
    'YTickLabels',[])

We see that the rectified image has black areas at the corners. We can remove these black areas by cropping the image using imcrop. The function imcrop displays the image with a resizable rectangular tool that can be interactively positioned and manipulated using the computer mouse. After manipulating the rectangle into the desired position, the image is cropped by either double-clicking within the rectangle or choosing Crop Image from the tool's context menu.
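As an alternative to interactive cropping, imcrop also accepts a crop rectangle [xmin ymin width height] directly; the rectangle used below is a hypothetical example and must be adapted to the actual extent of the black corners in the rectified image:

```matlab
% Crop the rectified image I3 programmatically using an assumed
% rectangle [xmin ymin width height] (hypothetical values)
rect = [100 100 3800 2800];
I4 = imcrop(I3,rect);
imshow(I4)
```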

5.7.3 Stitching Multiple Smartphone Images

The following exercise demonstrates how to take several images (in our example two) with one camera (e.g. from a smartphone), combine them into a single image, and then display the result. This process of creating panoramas from multiple images is routinely performed by image processing programs such as Gimp, Affinity Photo, and Adobe Photoshop. Digital cameras today are also capable of taking panoramas by capturing strips of images in rapid succession and combining them immediately into a single image. In the geosciences, we stitch together images to produce, for example, panoramas of outcrops or landscapes, or mosaics of air-photos or satellite imagery. Stitching the images together with MATLAB instead of using image processing software offers the advantage of being able to control each step of the image processing.


Exercise

1. Take two overlapping photos of a geological outcrop from the same location.
2. Import both images into MATLAB and display them.
3. Merge both images into a single image and display the result.

Solution

1. To generate a panorama you need to take two or more photos from approximately the same position, with an image overlap of about 40–60 percent.
2. Since smartphone cameras use automatic settings for brightness, contrast, and color, the images will be slightly different and may therefore need to be corrected for the differences in these parameters to obtain the best possible results.
3. Use cpselect to select pairs of control points on two overlapping images. The function yields the selectedMovingPoints and selectedFixedPoints to be used to transform the second image so that it matches the first image.
4. Use fitgeotrans to calculate the matrix t with which to perform the transformation.
5. Use imwarp to transform the second image so that it matches the first image.
6. Use either imshowpair or imfuse, together with imshow, to display the resulting panorama.

MATLAB Script

We first read and decompress the files exercise_5_7_3_photo_1A.jpg and exercise_5_7_3_photo_2.jpg by typing

clear, clc, close all
I1 = imread('exercise_5_7_3_photo_1A.jpg');
I2 = imread('exercise_5_7_3_photo_2.jpg');

which yields two 24-bit RGB image arrays I1 and I2 in the MATLAB workspace. Typing

whos

yields

Name      Size                Bytes  Class    Attributes

I1        3024x4032x3      36578304  uint8
I2        3024x4032x3      36578304  uint8


revealing that the images are stored as uint8 arrays with dimensions of 3,024-by-4,032-by-3, i.e. 3,024-by-4,032 arrays for each color (red, green, and blue). Since the function used to merge the images projects I2 onto I1, we have to extend the first image I1 by 3,000 pixels. We achieve this by concatenating I1 with a white image (i.e. all pixels have the value 255 in all three color channels R, G, and B) that has the same number of rows as I1 but 3,000 columns.

I1 = horzcat(uint8(255*ones(size(I1,1),3000,3)),I1);

Again typing

whos

yields

Name      Size                Bytes  Class    Attributes

I1        3024x7032x3      63794304  uint8
I2        3024x4032x3      36578304  uint8

revealing that the size of image I1 is now 3,024-by-7,032-by-3. We can display the result in two subplots.

subplot(1,2,1), imshow(I1)
subplot(1,2,2), imshow(I2)

We then use cpselect from the Image Processing Toolbox to select control points in the two images I1 and I2. The cpselect function yields the selected moving points mp and fixed points fp to be used to transform the second image so that it matches the first image.

[mp,fp] = cpselect(I2,I1,'Wait',true);

We then use fitgeotrans to calculate the matrix t with which to perform the transformation.

t = fitgeotrans(mp,fp,'projective');

We then use imref2d to reference the image I1 to a world coordinate system and use imwarp to transform the second image so that it matches the first image.


Rfixed = imref2d(size(I1));
I2_registered = imwarp(I2,t,'OutputView',Rfixed);

We then use either imshowpair or imfuse, together with imshow, to display the panorama image.

imshowpair(I1,I2_registered,'blend','Scaling','joint')

We see that the registered image has black areas at the corners. We can remove these black areas by cropping the image using imcrop. The function imcrop displays the image with a resizable rectangular tool that can be interactively positioned and manipulated using the computer mouse. After manipulating the rectangle into the desired position, the image is cropped by either double-clicking within the rectangle or choosing Crop Image from the tool's context menu. The function imfuse together with imshow does the same as imshowpair, but first calculates an image array C that we then display using imshow.

C = imfuse(I1,I2_registered,'blend');
figure
imshow(C)

Alternatively, we can use the Image Viewer app imtool to display the result in a Figure Window with

imtool(C)

The function imshowpair was designed to compare images rather than to show the result of image stitching. Various options are available to display the differences between images, e.g. by overlaying one image on top of the other. There is, however, no way to display both images simultaneously at full intensity and in the original colors. We therefore use a variant of imshowpair in which we replace the black corners in the I2_registered image with the corresponding values in the I1 image (Fig. 5.6).

clear I
I = I2_registered;
I(I==0) = I1(I==0);
imshow(I)

We can save the result in a PNG file using


Fig. 5.6 Result of stitching together two smartphone images. The two images were taken at approximately the same distance from the object, with an image overlap of about 40–60%. The function cpselect was used to select control points in the two images. The function fitgeotrans was used to calculate the transformation matrix, and imwarp was then used together with this matrix to transform the second image so that it matched the first image.

print -dpng -r300 exercise_5_7_3_photo_2.png

We can also save the data in a .mat file by typing

save exercise_5_7_3_data_1.mat

An alternative way to merge the images, which lets the software select the control points, can be found in the documentation from MathWorks for the Computer Vision Toolbox (MathWorks 2021b). A script based on the Feature Based Panoramic Image Stitching example in this documentation is provided in the script exercise_5_7_3_script_1_vs2.m in the book's electronic supplementary material.
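The automatic approach can be sketched as follows, assuming the Computer Vision Toolbox is installed. This is a minimal sketch, not the supplementary script itself: it replaces the manual cpselect step by detecting and matching SURF features in the two images and then estimating the projective transformation from the matched point pairs:

```matlab
% Minimal sketch of automatic control-point selection using the
% Computer Vision Toolbox; assumes I1 and I2 from above are in the
% workspace (this is not the book's supplementary script)
g1 = rgb2gray(I1); g2 = rgb2gray(I2);

% Detect and extract SURF features in both grayscale images
p1 = detectSURFFeatures(g1); p2 = detectSURFFeatures(g2);
[f1,vp1] = extractFeatures(g1,p1);
[f2,vp2] = extractFeatures(g2,p2);

% Match features and estimate a projective transformation from the
% matched point pairs, with RANSAC rejecting mismatched outliers
idx = matchFeatures(f1,f2,'Unique',true);
m1 = vp1(idx(:,1)); m2 = vp2(idx(:,2));
t = estimateGeometricTransform(m2,m1,'projective');

% Warp the second image onto the first, as before
Rfixed = imref2d(size(I1));
I2_registered = imwarp(I2,t,'OutputView',Rfixed);
```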

5.7.4 Spatial Resolution of the LEGO EV3 Color Sensor

The quality of the results of imaging methods in the geosciences depends on the spatial resolution of the particular setup used. This depends not only on the technical resolution of the measuring device, but also on the size of the object to be detected relative to its distance from the sensor and on the influence of any external disturbances. In this section we carry out an exercise similar to that in Sect. 4.5.3, where we determined the spatial resolution of an acoustic measurement setup. In this exercise, however, we determine the spatial resolution of an optical system, using a wooden board with a sheet of paper on it that has a vertical black bar painted on it. We treat this black bar as a unit impulse and will try to obtain a one-dimensional image of this impulse, i.e. to determine the impulse response of the setup used. The absolute value of the Fourier transform of the impulse response is the magnitude response of the setup, from which we can determine the spatial resolution of the measurements (see Chap. 6 of Trauth 2020).

As an example, we will use light intensity measurements from a LEGO MINDSTORMS EV3 Color Sensor (Item No. 45506) positioned in front of a wooden board, which in our example is 93 cm wide and 53 cm high. A white sheet of paper on which a 0.5 cm wide vertical black bar has been painted is attached to this board. The sensor is mounted on a vehicle with a LEGO EV3 Large Servo Motor (Item No. 45502) and four pieces of the LEGO Black Train Wheel with Axle Hole and Friction Band (Item No. 55423/57999). The vehicle moves in 90 steps over a distance of ~30 cm, along rails that are parallel to the wooden board and ~5 cm from it, with the sensor taking a measurement approximately every 0.9 cm. The sensor records individual light intensity values rather than a complete image of the object of interest, as would be recorded by a camera or scanner. However, since the sensor changes its position between readings, we can map the black bar painted on the paper attached to the wooden board. We acquire and average multiple (e.g. 250) measurements at each sensor position to improve the signal-to-noise ratio of our series of light intensities. The 90 measured values along a distance of ~30 cm are then standardized (i.e. the measured values are adjusted so that they add up to one) before we calculate the Fourier transform of the impulse response to determine the spatial resolution of the sensor.

Exercise

1. Download the LEGO MINDSTORMS Education EV3 software from the LEGO Education webpage. Install the software on your computer and read the manual. Download the LEGO MINDSTORMS EV3 Support from MATLAB from the manufacturer's webpage. Install the hardware support on your computer and read the manual. Establish a USB connection between your computer and the EV3 Brick.
2. Assemble a four-wheeled rail vehicle with a large motor and a color sensor on the side, set up to acquire measurements perpendicular to the direction of movement. The sensor should point towards a wooden board that is set up vertically about ~5 cm away, as measured with a tape measure. Attach to this board a white sheet of paper on which a 0.5 cm wide vertical black bar has been painted, the geometry of which is to be recorded with the sensor. To obtain the necessary measurements, move the vehicle a few millimeters, take 250 readings with the sensor, and then move the vehicle to the next position. Write a MATLAB script that will move the vehicle, acquire the measurements with the sensor, and then record those measurements.


3. Display the series of light intensity measurements, discuss and eliminate possible faulty measurements, standardize the data, and calculate the setup's magnitude response. Discuss the results, especially the suitability of this particular setup for detecting small objects.

Solution

1. Assemble the vehicle according to the LEGO building instructions. These are in a LEGO Digital Designer Model (LXF) file which can be opened with the free LEGO Digital Designer software, or with STUDIO 2.0 by BrickLink, both of which are available for computers running either macOS or Windows.
2. Write a MATLAB script that establishes a connection between the computer, the motor, and the color sensor, uses a for loop to move the vehicle in 90 steps, and then uses a nested for loop to acquire replicate measurements (e.g. 250) of the intensity of the light emitted from the paper on the wooden board.
3. Store the data in MATLAB, calculate the mean values of the replicate light intensity measurements, and display the impulse response determined from these mean values. If necessary, we delete any obviously erroneous values such as outliers and then calculate the magnitude response by Fourier transforming the impulse response.
4. As we can see, the 0.5 cm wide black bar is recognized, but is reproduced as a relatively wide area of ~8 cm. From the calculated magnitude response we can read off how much the amplitude of the measurement signal is attenuated for objects of any size, until, for very small objects, the signal can no longer be distinguished from the noise due to measurement errors.

MATLAB Script

We use the following MATLAB script for tasks 2–4. All graphics generated by the script, as well as the raw data, can be found in the book's electronic supplementary material, which also contains the construction plans for the vehicle. Since the vehicle is not very big we do not mount the EV3 Brick on the vehicle but instead place it beside the vehicle. However, we then need longer cables than those offered by LEGO (25, 35, and 50 cm). The cables shown in the photos in the book's electronic supplement can be purchased from Mindsensors (http://www.mindsensors.com) at a reasonable price; they are more flexible than the LEGO cables and up to one meter long.

We first clear the workspace and the Command Window and close all figures.

clear, clc, close all

Next, we select a USB connection in the MATLAB script by typing

mylego = legoev3('USB');


We then establish a connection between the computer, the motor A, which is attached to the EV3 output port A, and the color sensor, which is attached to the EV3 input port 1. The connection to motor A must be defined using motor in the MATLAB script, but the input port for the sensor is automatically detected by MATLAB using colorSensor. Using motor and colorSensor creates the mymotor_A and myrgbsensor object handles for these connections, which will later be used to control the motor and to read the data from the sensor.

myrgbsensor = colorSensor(mylego);
mymotor_A = motor(mylego,'A');
resetRotation(mymotor_A);
mymotor_A.Speed = -10;

We then start the exercise, which includes 250 light intensity measurements at each of 90 locations along the railway line. Note that the mean displayed after each step is taken over all 250 readings at the current position, i.e. over lightintensity(i,:).

for i = 1 : 90
    start(mymotor_A)
    pause(0.20)
    stop(mymotor_A)
    rotation_A(i) = readRotation(mymotor_A);
    for j = 1 : 250
        lightintensity(i,j) = ...
            readLightIntensity(myrgbsensor);
    end
    disp(num2str(mean(lightintensity(i,:),2)))
    pause(1)
end

We then calculate the arithmetic mean lightintensitymean of the 250 readings at each position.

lightintensitymean = (mean(lightintensity'))';

We now calculate the total distance fulltrip covered from the number of revolutions made by the motor. Unfortunately, these readings are not very precise and we therefore get a ~5% error in the total distance. The LEGO Black Train Wheel with Axle Hole and Friction Band (Item No. 55423/57999) has a radius of 8.5 mm.

fulltrip = 2*pi*8.5 * max(abs(double(rotation_A)))/360 * 0.1
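As a quick plausibility check of this formula: a wheel radius of 8.5 mm gives a circumference of 2*pi*8.5 ≈ 53.4 mm per revolution, the rotation reading in degrees divided by 360 gives the number of revolutions, and the factor 0.1 converts millimeters to centimeters. The rotation reading of 2000 degrees used below is a hypothetical value:

```matlab
% Hypothetical cumulative motor rotation reading in degrees
rotation_deg = 2000;

% Distance = circumference (mm) * number of revolutions, converted to cm
fulltrip = 2*pi*8.5 * rotation_deg/360 * 0.1   % approximately 29.7 cm
```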


We use this information to create a length axis by typing

trip = 0+fulltrip/length(lightintensitymean):...
    fulltrip/length(lightintensitymean):...
    fulltrip;
trip = trip';

We then calculate the sampling interval (in cm) by typing

sampleint = fulltrip/length(lightintensitymean)
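The magnitude response of the setup is later obtained by Fourier transforming the standardized impulse response. The following minimal sketch illustrates that calculation using a synthetic, standardized impulse response in place of the measured data; the synthetic data and the sampling interval used here are assumptions, not the book's measurements:

```matlab
% Synthetic standardized impulse response (sums to one) as a stand-in
% for the measured, standardized light intensities (assumed data)
sampleint = 0.33;                    % assumed sampling interval in cm
x = zeros(90,1); x(43:48) = 1;
x = x/sum(x);

% Magnitude response: absolute value of the Fourier transform
X = abs(fft(x));

% Frequency axis in cycles per cm, displayed up to the Nyquist frequency
f = (0:length(x)-1)'/(length(x)*sampleint);
plot(f(1:length(x)/2),X(1:length(x)/2))
xlabel('Spatial Frequency (cycles/cm)')
ylabel('Magnitude')
```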

The raw lightintensitymean data are displayed in order to determine the parameters needed to standardize the lightintensitymean data, i.e. to obtain a total sum of 1 for the measured light intensity values.

figure1 = figure('Position',[200 600 800 400],...
    'Color',[1 1 1]);
axes1 = axes('LineWidth',1,...
    'FontName','Helvetica',...
    'FontSize',12,...
    'XGrid','on',...
    'YGrid','on');
line1 = line(trip,-lightintensitymean,...
    'LineWidth',1);
xlabel1 = xlabel('Length (cm)',...
    'FontName','Helvetica',...
    'FontSize',12);
ylabel1 = ylabel('Lightintensity (m)',...
    'FontName','Helvetica',...
    'FontSize',12);
title1 = title('Raw Light Intensity Mean Data',...
    'FontName','Helvetica',...
    'FontSize',12);

We then remove outliers from the data set by typing

[lightintensitymeanoutliers,outliers] = ...
    rmoutliers(lightintensitymean,...
    'movmedian',3,'SamplePoints',trip);
tripoutliers = trip;
tripoutliers(outliers) = [];

and display the result using


figure1 = figure('Position',[200 600 800 400],...
    'Color',[1 1 1]);
axes1 = axes('LineWidth',1,...
    'FontName','Helvetica',...
    'FontSize',12,...
    'XGrid','on',...
    'YGrid','on');
line1 = line(tripoutliers,-lightintensitymeanoutliers,...
    'LineWidth',1);
line1 = line(trip,-lightintensitymean,...
    'LineWidth',1,...
    'Color',[0.8 0.5 0.3]);
xlabel1 = xlabel('Length (cm)',...
    'FontName','Helvetica',...
    'FontSize',12);
ylabel1 = ylabel('Lightintensity (m)',...
    'FontName','Helvetica',...
    'FontSize',12);
title1 = ...
    title('Raw Light Intensity Mean Data w/o Outliers',...
    'FontName','Helvetica',...
    'FontSize',12);

We then have to standardize the data by subtracting the light intensity of the baseline (i.e. the intensity values of the white paper) and replacing negative values with 0.

lightintensitymeancwall = -lightintensitymeanoutliers + 15;
lightintensitymeancwall(lightintensitymeancwall