
Ignacio Chechile

Space Technology: A Short Introduction

ReOrbit, Helsinki, Finland

ISBN 978-3-031-34817-4    ISBN 978-3-031-34818-1 (eBook)
https://doi.org/10.1007/978-3-031-34818-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Contents

1 Introduction
   1.1 Space and Startups, a.k.a. "NewSpace"
2 Artificial Satellites; The Shortest Introduction Ever
3 Semiconductors in Space: From Sand to Satellites
   3.1 Let's Meet at the Junction
   3.2 The Transistor Drama
   3.3 The Space Environment
      3.3.1 Unpacking Single Event Effects (SEEs)
4 The Hectic Ride to Space
   4.1 Rideshares, Dispensers, and Orbital Transfer Vehicles
5 Configuring Spacecraft
6 A Peek Under the Hood
   6.1 The Skeleton: Structures and Mechanisms
   6.2 The Data Links: From Sparks to Mobile Networks, Lasers, and In-Orbit Networks
      6.2.1 Mobile Networks and Satellites
      6.2.2 Lasers in Orbit
      6.2.3 Connectivity
   6.3 The Software: Hello World in Space
      6.3.1 What Does It Take to Run Software on a Spacecraft?
      6.3.2 How Is Software Updated or Changed in Orbit?
      6.3.3 What Type of Languages Are Used for Coding Flight Software?
      6.3.4 Can I Run Linux on a Satellite? What About Windows?
      6.3.5 Can I Host a Website on a Satellite?
      6.3.6 What Kind of Skills Are Required for Doing Flight Software?
      6.3.7 How Is Flight Software Designed?
      6.3.8 What Does "Software-Defined Satellites" Mean?
      6.3.9 Bugs and Glitches in Orbit
   6.4 The Orientation: Attitude Control
   6.5 The Space Sauna: Thermal Control
   6.6 The Avionics
   6.7 The Payload
   6.8 Putting It Together: Assembly, Integration and Test
      6.8.1 Mechanical Tests
      6.8.2 Thermal Vacuum Test (TVAC)
      6.8.3 Software Verification
      6.8.4 Concluding and Shipping
7 Satellites and Machine Learning
   7.1 Can There Be Too Much Data?
8 Operating Distant Machines Floating in Space
9 Making Reliable and Dependable Spacecraft
10 TL;DR: Frequently Asked Questions About Space
   10.1 Q0: Why Launch a Metal Box into Space?
   10.2 Q1: What Are the Rules for Launching Something into Space?
   10.3 Q2: How Are Satellites Designed and Developed? Also, Is It Done Differently in NewSpace Versus Classic Space?
   10.4 Q3: What's Typically Under the Hood of a Satellite?
   10.5 Q4: How Are Satellites Launched?
   10.6 Q5: Ok the Thing Is up in Space, Now What?
   10.7 Q6: How Do Satellites Orient Themselves in Space?
   10.8 Q7: How Are Satellites Operated?
   10.9 Q8: What Does the Software on Board of a Satellite Do Exactly?
   10.10 Q9: How Do Satellites Generate Power?
   10.11 Q10: How Big and Heavy Are Current Commercial Satellites?
   10.12 Q11: What Is the Lifetime of a Satellite?
   10.13 Q12: What Can Affect the Lifetime of a Satellite?
   10.14 Q13: What Is an Orbit Exactly?
   10.15 Q14: Why Are Orbits Crowded and How Is This an Issue?
   10.16 Q15: Why Are Satellites Assembled in Clean Rooms?
   10.17 Q16: How Are Satellites Currently Distributed Across Different Orbits?

List of Figures

Fig. 2.1 Newton's cannon
Fig. 3.1 SiO2 structure
Fig. 3.2 A bipolar transistor with one junction in forward-bias and another one in reverse-bias
Fig. 3.3 NOT gate with BJT transistor
Fig. 3.4 NAND gate with BJT transistor
Fig. 3.5 Van Allen radiation belts; crossing them is not the nicest ride for a satellite going somewhere (public domain)
Fig. 3.6 Magnetic field strength at Earth's surface (Creative Commons)
Fig. 3.7 Applicability of SEE to different device types
Fig. 6.1 Junkers J 1 (public domain)
Fig. 6.2 Subsystem faults and types of faults. Source: Fault-Tolerant Attitude Control of Spacecraft (Qinglei Hu, Bing Xiao, Bo Li, Youmin Zhang)
Fig. 6.3 Types of faults in the attitude control subsystem. Source: Fault-Tolerant Attitude Control of Spacecraft (Qinglei Hu, Bing Xiao, Bo Li, Youmin Zhang)
Fig. 6.4 Funny little configuration to have by default
Fig. 6.5 Good luck running for the tram in Helsinki without friction forces (Creative Commons)
Fig. 6.6 F_A and F_B are forces applied at different distances from the center O, creating different torques. Credit: public domain
Fig. 6.7 Building blocks of a computer-based control system
Fig. 6.8 A generic avionics block diagram
Fig. 6.9 AOCS functional chain as a member of the avionics architecture
Fig. 6.10 Subsystem federated architecture with a star topology
Fig. 6.11 A backplane connecting 1 CPU unit and 2 peripheral boards
Fig. 7.1 An unannotated graph
Fig. 7.2 Google searches over time about something unspecified
Fig. 7.3 Plot about people searching for dogs
Fig. 7.4 Google searches about Christmas are obviously seasonal
Fig. 7.5 A bit less obvious seasonal spikes in data
Fig. 7.6 Pythagorean theorem searches versus time
Fig. 7.7 A probable correlation: Pythagorean theorem searches and school season (note the interesting noise during the COVID-19 pandemic)
Fig. 7.8 Life of a turkey: all is great, and nothing indicates the trend will change; until it does. Source: The Black Swan, by Nassim Nicholas Taleb; Wikimedia Commons
Fig. 8.1 Fuel pump state machine (you probably don't think of this while you're topping up your car, but it's what the pump needs to deal with)
Fig. 10.1 A ground antenna (photo by Donald Giannatti on Unsplash)
Fig. 10.2 A sketch illustrating deployable solar panels
Fig. 10.3 A team in action in a cleanroom (photo by Laurel and Michael Evans on Unsplash)
Fig. 10.4 Distribution of satellites at altitudes between 0 and 50,000 km
Fig. 10.5 Distribution of satellites at altitudes between 0 and 2000 km (LEO)
Fig. 10.6 Distribution of satellites at altitudes between 400 and 700 km (LEO)

1 Introduction

The idea of this text is not to convey tedious, encyclopedic information about space in the condescending tone books seem to have adopted lately, but to provide a quick, lightweight introduction to the fundamentals of space technology, in a colloquial tone. Ideally, you are reading this during a short flight back home from a job interview, a conference, or similar. During short flights there's not much to do, really, and the infotainment is nothing to write home about; so why not read a bit about all things space and get up to speed? You might be a newcomer to the industry; a marketing manager, a legal counsel, an investor, a software engineer, or a project manager joining a space tech company after a gig in a different industry. How much do you know about space? Unless you are an enthusiast who did some research on your own, chances are that you know little or nothing about it, other than the fact that satellites somehow seem to go to space. But there is of course more to it than that, for example:

- What are the physics laws behind an object orbiting a celestial body?
- What kind of environments do satellites face when in orbit?
- What are satellites made of?
- What does it entail to ride a rocket into orbit?
- What is the process to design and build satellites?
- How do people on the ground keep track of satellites as they fly?
- What happens if a satellite fails?
- What kind of data do satellites deal with?

This short guide has got your back. When you finish reading these lines, you will be equipped with a good dose of the fundamentals about the peculiar endeavor of creating artificial satellites. Moreover, you will also get an idea about the technologies that have enabled—and keep enabling—space activities, like materials, radio, telecommunications, optics, and software, with some brief historical background provided whenever possible.

But beware: this is not a handbook, nor a doctoral thesis. The depth of the topics overall is at the introductory level. I tried on purpose not to bloat this text with distracting figures and to focus on content and fundamentals. This is nothing but a primer. If any section in this text interests you more than any other, I trust you will do deeper research using the links provided. Google is your friend. Of course, only once you land, if you happen to be flying while reading.

1.1 Space and Startups, a.k.a. "NewSpace"

Lately, space technology has been happening in the dungeons of very small and dynamic companies: space startups. Life in startups is quite well documented, perhaps somewhat over-documented and a tad romanticized. There are books, blogs, and popular TV series. Startup platitudes are all over the place in social media—everyone seems to know how to run them and how to scale them, adding their own bit of advice, their own experiences of what works and what doesn't. Although most startups more or less go through similar phases, in reality each one of them is unique. No surprise there: the same applies to us people; we all go through the same life stages, from infancy to adolescence and adulthood, although we all—luckily—experience each of these stages very differently.

Space startups are a subset of the startup universe, yet a peculiar type. What's so special about them? Think about a space startup for a moment: a handful of people trying to build spaceships on hair-thin budgets and short runways (bankruptcy is constantly lurking around the corner), often without having a customer in sight. There is perhaps no other startup type with so many odds against it, just by looking at the challenges they face.

In a way, space startups are like salmon. Salmon swim against the river's current the whole way—sometimes up to thousands of kilometers, leaping over obstacles, waterfalls, rapids, and dams. These amazing fish can jump two meters high or even higher. And all while hungry predators like bears and eagles wait around every river bend to catch them when they jump out of the water. Just like salmon, many space startups will perish along the way. But a fair lot will make it up the river, in the process becoming what Nassim Nicholas Taleb calls "antifragile"—what doesn't kill them makes them stronger—and eventually reaching orbit. Antifragility is a property of systems that increase their probability of survival as a result of shocks, volatility, mistakes, faults, attacks, or failures. For an antifragile space startup, every extra day it exists increases the chances of it continuing to exist.

When analyzing space startups, survivorship bias¹ is a trap we repeatedly fall into.

¹ Survivorship bias is the logical error of concentrating on entities that passed a selection process while overlooking those that did not. This can lead to incorrect conclusions because of incomplete data.

For each startup that makes it big, there are thousands of others that didn't make it and would have volumes to speak, but die in silence. In terms of survivability, there is more insight in the salmon that did not make it up the river than in those that did, for the former know what didn't work—where the bears are lurking—whereas the latter could have been just lucky. Granted, a dead salmon can't talk. Then again, live salmon can't talk either, but you get the gist of the analogy.

What's inside a space startup if you crack it open? How can a small bunch of people get to launch something into space? Wasn't space supposed to be done by governmental agencies, with their billion-dollar budgets, their thousands of employees, their bureaucracy? No. You can get to fly something in space with, say, fewer than 10 people and a modest budget. How? The magic tends to revolve around vision, motivation, industrial loads of hard work, commitment, and an obsessively systemic mindset. Yes, this sounds like yet another of those platitudes out of a vanity-published management book sold in airport bookstores. But there is no magic; it all boils down to common sense and a few things to pay attention to whenever possible.

The first one is complexity. Designing a satellite is a complex task. If you open the hood of any satellite, what you will see is an intricate network of computers, each one performing a specialized job—command and data handling, attitude control, radio communications, payload control, data processing, etc. Each computer is a world of its own, running lots of software. Making sure those "worlds" combine seamlessly, giving a spacecraft its functional integrity in a harmonic way, requires a good deal of cross-functional and system-level analysis: architecture design, thermal analysis, structural analysis, power generation, physical configuration, and a very long et cetera. What's more, all those things are heavily intertwined. Such interdisciplinarity is nothing other than Systems Engineering,² which is more a craft than a profession. The good thing is that it does not require hiring a veteran Systems Engineering wizard you couldn't possibly dream to afford. In fact, Systems Engineering is a glorified term for a combination of good knowledge of the technical fundamentals, critical thinking, problem solving, and common sense. Although these factors do improve with experience, there are plenty of young people with a good dose of them.

The second one is wheel reinvention (or rather, the avoidance thereof). One of the most important factors in small space startups is to minimize rediscovering said round artifact used to help things move from A to B. This also means space startups must stay extremely focused on what their métier is. Mind you, the métier may change along the way, and the space startups that prevail are those that identify in time when the focal point is wrong and are able to swerve before the iceberg gets too close.

² Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize its body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function.


The third one is the not-so-glamorous side of building things: the supply chain. The grocery list. Supply chain management is an art. It deals with uncertainty, change, prices, inflation, secrecy, proprietary data, volatility, convoluted technical specifications, variants, and a ridiculous amount of foresight to predict what a company will manufacture years from now. Supply chain is a challenging endeavor when you are small, young, and the new kid on the block. Suppliers tend to pay attention to the big guns—their established customers—and rightly so; who could blame them? So the small guys must elbow their way in to source parts and components, at times closing partnerships with fellow young startups in need. In NewSpace, John Does attract Jane Does. In the process, some space startups may choose to maverick their make-vs-buy strategy and go vertical, or in-house, supposedly to shield themselves from supplier uncertainty, only to remain locked in with suppliers when they realize they cannot produce everything down to the last bolt. On the flip side, horizontal integration may create uncomfortable situations if a critically important, complex subsystem is provided by a third party over which the startup has zero control (and sometimes zero trust).

The fourth one is the almost literally million-dollar question: what on earth to sell. Next time you walk past a pizza place, think about how clear the product is for them. They make pizzas; that's what they sell. Simple as that. Zero ambiguity. They define flavors, toppings, variants. They choose names for the variants and print menus. They make people happy by selling warm, flat, tasty discs with cheese, tomato sauce, and stuff. It's so clear that if you go and ask anyone working there what it is that they sell, they will all say the same: we sell pizza—what a silly question to ask! Now, for a visitor entering the office of a space startup, picking someone from the team, and asking what the hell this startup is selling, the answer may vary depending on whom the mysterious visitor asks. That's the situation usually at the early—and not so early—stages: an amorphous idea involving space technology does not always automatically map to a product. It can be data, it can be insight from the data, it can be the spacecraft, a subsystem, or a service on top of all that. Products must be discovered, and such a discovery process may take long and be exhausting. Ideally, the product should be discovered before the money runs out.

And fifth, last and perhaps the most important thing: everyday life. A space startup is not just a romantic adventure about reaching the stars. Or, it might be, but reaching the stars comes as the culmination of disciplined work and sound day-to-day company operations, as people share many hours a week, shoulder to shoulder, overcoming hurdles and finding their way through the job. In short, a space startup is—no surprise there—an actual company which needs to be run. There are operational matters such as talent capture, facility management, frozen pizza, coffee, and of course, team matters. Building healthy teams where learning and making mistakes are part of the job and, more fundamentally, where it is fun to do all that, is a bigger accomplishment than flinging shiny boxes beyond the Kármán line.³

All this is what NewSpace startups are about. Satellites, at the end of the day, are by-products. The main 'product' of a space startup is the network of brains behind the technology.

So, let's dive into this. The chapters of this text are reasonably self-contained, although there might be suggestions in certain parts to jump here and there for elaboration. I do not expect you to read this from cover to cover, but to selectively sift through the pages as the topics that resonate with you and your curiosity capture your attention. Some chapters go a bit more technical than others, and if the content in those makes absolutely no sense, jump back to the safety of the less technical sections. If you are really, really busy, there is a TL;DR (too long; didn't read) chapter at the end (Chap. 10) which summarizes the text in a set of frequently asked questions.

As CTO at a space startup like ReOrbit, I am responsible for ensuring that the technology roadmap comes together and aligns well with the business model. But my job is, as I see it, more than that. Fundamentally, as a CTO, my role is to ensure the team of engineers I lead enjoy developing space technology and feel safe trying things out and screwing up in the process, learning from the mistakes and charging back stronger than before. There is no innovation possible without experimentation, and space technology moves forward thanks to those who venture into the unknown, for most of the 'knowns' in space today were unknowns yesterday.

Last but definitely not least, a mention of ReOrbit. ReOrbit is a space company based in Helsinki, Finland, with offices in Stockholm and Argentina. Founded in 2019, ReOrbit designs and develops satellites for a variety of different payloads and applications. At ReOrbit, satellites are designed as network routers and are thus equipped with the capabilities to ensure secure and reliable data transport from satellite to satellite or from satellite to ground. Find more information at www.reorbit.space.

With all this being said, here we go.

Ignacio Chechile, Chief Technology Officer, ReOrbit
April 2023, Helsinki, Finland

³ The Kármán line is a proposed conventional boundary between Earth's atmosphere and outer space, set by the international record-keeping body FAI (Fédération Aéronautique Internationale) at an altitude of 100 km. However, this definition of the edge of space is not universally adopted.

2 Artificial Satellites; The Shortest Introduction Ever

No one here is alone. Satellites in every home. —Blur, “The Universal”

Abstract

Three and a half years after the launch of the first artificial satellite, Sputnik 1, there were already 115 artificial satellites orbiting the Earth. Things escalated quickly. What is the story behind the first artificial satellites? What are the physics laws involved? This chapter presents the shortest introduction ever to the topic.

The first published mathematical study of the possibility of an artificial satellite was the now-famous Newton's cannonball, a thought experiment by Isaac Newton to explain the motion of natural satellites, published in his Philosophiæ Naturalis Principia Mathematica (1687). In it, Newton thought of a cannon situated at the summit of a mountain and being fired. Depending on the velocity imparted by the cannon, the ball would fall at different distances from the muzzle. See Fig. 2.1: a certain initial velocity would cause the ball to fall at point D. Slightly higher velocities would bring the ball to points E, F, and G. Now, if we kept increasing the velocity in a few more steps, there would be a velocity at which the ball no longer falls back to the surface of the planet but keeps on falling "eternally" (provided there is no friction). This is the closed curve in the illustration, and it is what rockets basically do for satellites: impart the right velocity to them and let them achieve closed paths (yes, this is a bit oversimplified and there's more to it, as we will see). Mind that if we kept increasing the velocity beyond this point, the ball would eventually escape the planet's orbit and start wandering in interplanetary space. But that is out of scope for this text (Fig. 2.1).


Fig. 2.1 Newton’s cannon

The first fictional depiction of a satellite being launched into orbit was a short story by Edward Everett Hale, "The Brick Moon" (1869). The idea appeared again in Jules Verne's The Begum's Fortune (1879). In 1903, Konstantin Tsiolkovsky published Exploring Space Using Jet Propulsion Devices, the first academic treatise on the use of rocketry to launch spacecraft. Herman Potočnik entertained the idea of using orbiting spacecraft for detailed peaceful and military observation of the ground in his 1928 book, The Problem of Space Travel. He described how the special conditions of space could be useful for scientific experiments. The book described geostationary satellites (first put forward by Tsiolkovsky) and discussed communication between them and the ground using radio, but fell short of the idea of using satellites for mass broadcasting and as telecommunications relays.

In a 1945 Wireless World article, the English science fiction writer Arthur C. Clarke described in detail the possible use of communications satellites for mass communications. He suggested that three geostationary satellites would provide coverage over the entire planet. In May 1946, the United States Air Force's Project RAND released the Preliminary Design of an Experimental World-Circling Spaceship, which stated: "A satellite vehicle with appropriate instrumentation can be expected to be one of the most potent scientific tools of the Twentieth Century". The United States had been considering launching orbital satellites since 1945 under the Bureau of Aeronautics of the United States Navy.


In 1946, American theoretical astrophysicist Lyman Spitzer proposed an orbiting space telescope. In February 1954, Project RAND released "Scientific Uses for a Satellite Vehicle", by R. R. Carhart. This expanded on potential scientific uses for satellite vehicles and was followed in June 1955 by "The Scientific Use of an Artificial Satellite", by H. K. Kallmann and W. W. Kellogg. In the context of activities planned for the International Geophysical Year (1957–1958), the White House announced on 29 July 1955 that the U.S. intended to launch satellites by the spring of 1958. This became known as Project Vanguard. On 31 July, the Soviet Union announced its intention to launch a satellite by the fall of 1957. The game was on.

The first real artificial satellite would end up being Sputnik 1, launched by the Soviet Union on 4 October 1957 under the Sputnik program. The 84 kg spacecraft worked for roughly two weeks and reentered the atmosphere a few months later. Its architecture was rather rudimentary: its batteries weighed 51 kg, it was equipped with a 1-watt transmitter which encoded telemetry in low-frequency pulses that were broadcast and could be heard on AM radio, and it was pressurized with nitrogen. Sputnik 1 helped to identify the density of high atmospheric layers through measurement of its orbital change and provided data on radio signal distribution in the ionosphere. The unanticipated announcement of Sputnik 1's success precipitated the Sputnik crisis in the United States and ignited the so-called Space Race within the Cold War.

Sputnik 2 was launched on 3 November 1957 and carried the first living passenger into orbit, a dog named Laika. Explorer 1 became the United States' first artificial satellite, launched on 31 January 1958. The information sent back from its radiation detector led to the discovery of the Earth's Van Allen radiation belts. The TIROS-1 spacecraft, launched on April 1, 1960, as part of NASA's Television Infrared Observation Satellite (TIROS) program, sent back the first television footage of weather patterns to be taken from space. In June 1961, three and a half years after the launch of Sputnik 1, the United States Space Surveillance Network had already cataloged 115 Earth-orbiting satellites. Things escalated really quickly.

Expectedly, early satellites were built to unique designs. With advancements in technology, multiple satellite missions began to be built on single-model platforms called satellite buses. The first standardized satellite bus design was the HS-333 geosynchronous (GEO) communication satellite made by Hughes and launched in 1972. Oddly enough, many satellites are still designed and built as one-offs—in other words, the 70s way—although multi-mission buses are growing in popularity. We will talk about this in due time.

As of today, there are more than 5000 operative satellites orbiting our planet. If we count both operative and inoperative spacecraft, forgotten rocket stages, and whatnot, we need to talk about 10,000 objects flying over our heads. Since Sputnik 1, satellite architecture and design methods have evolved consistently. Satellites' capabilities have improved fast thanks to the progress certain enabling technologies have made on their own. One of those foundational technologies stands out from the rest: semiconductors. Let's talk about that in the next chapter.

3 Semiconductors in Space: From Sand to Satellites

I don’t like sand. It’s coarse and rough and irritating. —Anakin Skywalker, Star Wars: Episode II, Attack of the Clones

Abstract

In space, microprocessors and solid-state devices are ubiquitous, because satellites need software, storage, and digital logic to process information on board and to operate. Systems on Chip (SoCs), FPGAs, and logic gates are heavily used. The software and machine code that spacecraft run on board to manage their resources and their orientation, or to control a payload sensor, executes on these types of devices, and the space environment is not precisely kind to their underlying microscopic structure. In this chapter, we delve into how sand is converted into electronic devices and how those devices survive in orbit.

The sand we find on beaches is mostly composed of silica, which is another name for silicon dioxide, or SiO2. Silica is one of the most complex and abundant families of materials, existing as a compound in several minerals. Silica is a crystalline material, which means that its atoms are linked in an orderly spatial lattice of silicon-oxygen tetrahedra, with each oxygen shared between two adjacent tetrahedra. Sand is abundant in silica and many other things, including macro-particles such as plastic and other stuff, so SiO2 must be cleaned before it can be used industrially.


Fig. 3.1 SiO2 structure

Once all the macro impurities are removed, silica is melted in a furnace at high temperature and reacted with carbon to produce silicon of relative purity.¹ Somewhere in 1915, a Polish scientist called Jan Czochralski woke up one morning on the wrong side of the bed and made a mistake: instead of dipping his pen into his inkwell, he dipped it in molten tin—why our Jan had molten tin on his desk is beyond me—and drew a tin filament, which later proved to be a single crystal. He had invented by accident a method² which remains in use in most semiconductor industries around the world to grow silicon monocrystalline structures, manufactured as ingots³ that are then sliced into ultra-thin wafers on which companies etch their integrated circuit layouts.⁴ The process provides an almost pure, monocrystalline silicon that chip makers can work with.

Crystals and their orderly structure have fascinated scientists for ages, perhaps because they provide an illusion of order and for that reason offer a relatively easier grasp of the underlying physics: condensed matter is a complex matter—heh—but when it is arranged in a more or less symmetrical way in three dimensions, it may give the impression of being a tad simpler to comprehend. In a silicon crystal, each silicon atom forms four covalent bonds, sharing electrons with its four neighboring atoms (see Fig. 3.1). As we know, temperature is the quantitative measure of the kinetic energy of all the particles that form a substance or material. In crystals, atoms do not really go anywhere; they vibrate in their fixed positions.

¹ More about the process here: https://aip.scitation.org/doi/pdf/10.1063/5.0046150.
² Expectedly, the process is called after him: https://en.wikipedia.org/wiki/Czochralski_method.
³ More about the production of ingots here: https://www.microchemicals.com/technical_information/czochralski_floatzone_silicon_ingot_production.pdf.
⁴ See more about the etching process here: https://www.kth.se/social/upload/510f795cf276544e1ddda13f/Lecture7Etching.pdf.

Temperature in crystalline structures indicates how violently atoms shake at their spots. Valence electrons,⁵ in thermal equilibrium with the crystal they belong to, share the kinetic energy of the rest of the material. But temperature describes the average energy across the lattice; momentary differences in local temperature may cause an electron to muster the guts to break its covalent bond and go free.⁶ A bond without its precious electron is a broken bond, and as such it will try to recover from this absence, so the affinity with neighboring electrons intensifies. If the broken bond manages to capture an electron from a neighboring bond, the problem is only passed to the neighbor, which will soon pass it to the next one, and so on. The "hole" left behind by the initial emancipated electron spreads across the lattice. What happens with the initial fugitive electron? It travels across the structure, emotionally disengaged from the problem it caused. Worth noting is that a broken bond creates two phenomena: wandering holes and wandering free electrons. Another name for such free electrons is conduction electrons.

Undisputed kings of negative charge, electrons leave positively charged zones behind them. In the vicinity of holes, the charge is now more positive, and such positivity travels as the hole travels. We can therefore say that holes have positive charge.

A wafer of pure monocrystalline silicon or germanium does not do much in and of itself. It is just an 'intrinsic' material with electrons and holes moving around because of bonds constantly being broken by thermal agitation. Intrinsic materials create electron–hole pairs in exact numbers, because one exists because of the other (along with some other particles existing inside intrinsic silicon as well, like photons). Intrinsic materials would be of little practical use if we couldn't break the balance between electrons and holes. How to break that harmony? By opportunistically sprinkling our crystals with more electrons (or more holes) by means of adding impurities. Didn't we say impurities were bad? Yes, but these are more sophisticated, controlled impurities, unlike the microplastic that washes ashore on beaches as a product of our pointless mass-consumption urges. But here's the catch: we cannot just add loose electrons like we add pepper to a salad—the Coulomb forces would be insane due to the sudden electric charge imbalance. All we can do is add atoms that can contribute electrons, called donors. Examples of donors are phosphorus or arsenic. A typical proportion of impurity atoms is one of these guys for every million silicon atoms. When a donor atom is implanted in the lattice, it mimics the Si atom quite well; it completes the four covalent bonds the same way Si atoms do. But arsenic happens to have five valence electrons, so one electron does not belong to any bond, and because it is not trapped in any potential barrier, it has a higher energy than its other four cousins, and thus has high chances of leaving the atom behind, leaving it positively charged as a gift.

⁵ A valence electron is an electron in the outer shell associated with an atom, which can participate in the formation of a chemical bond if the outer shell is not closed.
⁶ This is due to thermionic emission. Thermionic emission (also known as the Edison effect) is the liberation of electrons by virtue of temperature. It occurs when the thermal energy given to the charge carrier overcomes the work function of the material.


An ion is born, fixed in the crystalline structure. The material remains electrically neutral at the macro level, but it is now populated with positively charged spots, all balanced by the free electrons wandering around.

Conversely, acceptor impurities do the opposite. Aluminum, indium, and gallium, for instance, are good examples of acceptor elements. Adding acceptors is a way of adding holes to a lattice without breaking the macro electric neutrality. An indium atom fits comfortably in the lattice, impersonating a silicon atom, but it has only three valence electrons. You get the score. A hole is now there, because one covalent bond is missing. This vacant bond is open for business, and eventually it will get filled by an electron, breaking the impurity atom's neutrality and thus creating a negative ion.

In summary: impurities, whether donors or acceptors, will all end up ionized. Donors will quickly lose an electron, and acceptors will quickly lose a hole (or gain an electron), because the energy needed for such ionization is quite low. Thermal agitation will make sure that practically all impurities are ionized, therefore we can consider that all donors lose their extra electron. This simplifies the math: we can estimate that the density of conduction electrons will be more or less equal to the density of donor atoms. The same goes for conduction holes. This is important: a piece of silicon crystal with more donor impurities than acceptor impurities is called type n. Similarly, if more acceptors than donors are added to the silicon, the material is called type p.

Conduction electrons and holes will not have it easy while traveling inside the lattice. Multiple things will alter their trajectories: repulsion forces from fellow moving carriers, un-ionized impurity atoms, ionized impurity atoms, and whatnot. The life of a charge carrier is not simple.
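As a quick numeric aside on the "density of conduction electrons roughly equals the density of donor atoms" estimate, here is a minimal sketch combining full ionization with the standard mass-action law n·p = n_i². The values are common textbook numbers, not the book's:

```python
# Carrier densities in n-type silicon at room temperature, assuming full
# ionization of donors (a rough sketch with textbook example values).
N_D = 1e16   # donor density, atoms per cm^3
n_i = 1e10   # intrinsic carrier density of silicon near 300 K, per cm^3

n = N_D             # conduction electrons: roughly one per donor atom
p = n_i**2 / n      # holes, from the mass-action law n * p = n_i^2

print(f"electrons: {n:.0e} /cm^3, holes: {p:.0e} /cm^3")  # 1e+16 vs 1e+04
```

A sprinkle of one donor per millions of silicon atoms shifts the electron-to-hole balance by twelve orders of magnitude, which is why such minute doping is enough to define a type-n material.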

3.1 Let's Meet at the Junction

The magic starts to unfold when we sandwich type-n and type-p materials together. This is called a junction, and its properties are worth describing, because they set the foundations of all solid-state devices out there. Junctions are not perfect; it is impossible to define an ideally abrupt boundary between a material partially doped with donors and another partially doped with acceptors. Junctions must be gradual, and this does not affect the physics behind them. It is very important to note that junctions are not made by welding a type-n crystal to a type-p crystal. A junction must still be made of a single crystal; there is no practical means of attaching together two bars of silicon with different impurity dosages and expecting the result to work. The perfection of the crystal lattice is a key factor in a junction's performance.

In equilibrium (that is, with the piece of silicon that hosts the junction at some nonzero temperature and with no electric field applied), the concentration of acceptors will be maximum on the p-side, decreasing to zero as we approach the junction, and the same goes for donors on the n-side. With carriers moving due to thermal agitation, they cross the boundary, driven by the gradient of impurity concentrations at the far ends. Holes come across the chasm and reach the n-side, where they recombine easily because of the high density of electrons there. Equivalently, electrons cross the boundary to the p-side and recombine. A zone then starts to appear around the border, a zone without carriers. A no man's land of sorts, where all ions are complete. Because acceptor and donor ions are fixed to the lattice, the area around the boundary will be charged slightly negative on the p-side (because electrons have found their spots in acceptors) and slightly positive on the n-side, because electrons have fled the scene. These non-zero charge levels stemming from the fixed ions create an electric field, which causes the diffusion process to settle once that field is intense enough to create drift currents that cancel further diffusion driven by the doping concentration gradient.
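The point at which the fixed-ion field balances diffusion can be quantified by the junction's built-in potential. The book does not derive it, so the following is a standard textbook expression offered as an aside, with illustrative values of my choosing:

```python
import math

# Built-in potential of a silicon p-n junction in equilibrium:
# V_bi = (kT/q) * ln(N_A * N_D / n_i^2)  -- textbook formula, example values.
V_T = 0.02585   # thermal voltage kT/q at ~300 K, volts
N_A = 1e16      # acceptor density on the p-side, per cm^3
N_D = 1e16      # donor density on the n-side, per cm^3
n_i = 1e10      # intrinsic carrier density of silicon, per cm^3

V_bi = V_T * math.log(N_A * N_D / n_i**2)
print(f"built-in potential: {V_bi:.2f} V")  # ~0.71 V
```

This is essentially the 0.6–0.7 V often quoted for silicon junctions.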

In all our analyses thus far, we have only considered the piece of material interacting with its surroundings through thermal energy. But that is only one part of the story. There are several other ways equilibrium in a silicon bar can be disrupted: electric fields, magnetic fields, and light. In an n-type material, holes are the minority carriers. Equivalently, in a p-type material, electrons are the minority carriers. Minority carriers are many, many orders of magnitude less numerous than majority carriers. Now, if we put the silicon bar under uniform light, the photons of the light beam will break bonds all across the lattice, creating electron–hole pairs. The photons create carriers of both signs in equal amounts, but it is the minority carriers that get noticed: an extra number of electrons on the n-side will not move the needle; at the end of the day, there was a myriad of other electrons there already, so they are nothing special. But an increasing number of holes on the n-side will be comparatively noticeable. This injection of minority carriers is an important effect which will also play a part in the discovery of the bipolar transistor. You start to see the tendency of semiconductors to easily become a mess just by being beamed with some harmless light.

Now, to break the equilibrium in the junction, we must apply a voltage across it. In forward bias, the p-type side is connected to the positive terminal of a voltage source and the n-type side to the negative terminal. Only majority carriers (electrons in n-type material or holes in p-type) can flow through a semiconductor for a macroscopic length. The forward bias exerts a force on the electrons, pushing them from the n-side toward the p-side. With forward bias, the depletion region is narrow enough that electrons can cross the junction and be injected into the p-type material. However, they do not continue to flow through the p-type material indefinitely, because it is favorable for them to recombine with holes. The average length an electron travels through the p-type material before recombining is called the diffusion length, and it is typically on the order of micrometers. Although the electrons penetrate only a short distance into the p-type material, the electric current continues uninterrupted, because holes (the majority carriers on that side) begin to flow in the opposite direction. The total current (the sum of the electron and hole currents) is constant in spatial terms.
The flow of holes from the p-type region into the n-type region is exactly analogous to the flow of electrons from n to p. Therefore, the macroscopic picture of the current flow through this device involves electrons flowing through the n-type region toward the junction, holes flowing through the p-type region in the opposite direction toward the junction, and the two species of carriers constantly recombining in the vicinity of the junction. The electrons and holes travel in opposite directions, but they also have opposite charges, so the overall current is in the same direction on both sides of the material, as required.

Now we do the opposite. Connecting the p-type region to the negative terminal of the voltage source and the n-type region to the positive terminal corresponds to reverse bias. Because the p-type material is now connected to the negative terminal of the power supply, the holes in the p-type material are pulled away from the junction, leaving behind charged ions. Likewise, because the n-type region is connected to the positive terminal, the electrons are pulled away from the junction, with similar effect. This increases the voltage barrier, causing a high resistance to the flow of charge carriers and thus allowing minimal electric current to cross the boundary. But some current—a leakage current—does flow. Leakage current is caused by the movement of minority carriers (electrons in p-type and holes in n-type) across the depletion region of the junction. As the depletion region widens, the potential barrier at the junction increases. However, even though the potential barrier is high, a small number of minority carriers can still cross the junction by thermionic emission⁷ or tunneling. The amount of leakage current depends on several factors, including the doping concentration of the semiconductor material, the temperature, and the voltage applied across the diode. Higher doping concentrations and higher temperatures increase the number of minority carriers and therefore increase the leakage current.

The increase in resistance of the p–n junction results in the junction behaving as an insulator. The strength of the depletion-zone electric field increases as the reverse-bias voltage increases. But everything has a limit. Once the electric field intensity increases beyond a critical level, the p–n junction depletion zone may break down and current will begin to flow even when reverse-biased, usually by what is called the avalanche breakdown⁸ process. When the electric field is strong enough, the mobile electrons or holes may be accelerated to high enough speeds to knock other bound electrons free, creating more free charge carriers, increasing the current and leading to further "knocking out" processes: an avalanche. In this way, large portions of a normally insulating crystal can begin to conduct. This breakdown process is non-destructive and reversible, as long as the current flowing does not reach levels that cause the semiconductor material to overheat and suffer thermal damage.
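Both regimes, the exponential conduction in forward bias and the small, nearly constant leakage in reverse bias, are captured by the Shockley diode equation. The text later alludes to "the diode equation"; this is its standard textbook form, stated here as an aside:

```latex
% Shockley diode equation (standard textbook form):
%   I_S : reverse saturation (leakage) current
%   n   : ideality factor, typically between 1 and 2
%   V_T : thermal voltage kT/q, about 25.9 mV at 300 K
I = I_S \left( e^{V / (n V_T)} - 1 \right)
```

For V well above V_T the exponential dominates (forward conduction); for negative V the current settles at about -I_S (leakage). Avalanche breakdown lies outside this model.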

⁷ Thermionic emission is the process by which electrons escape from the surface of a material due to their thermal energy. When a material is heated to a sufficiently high temperature, the thermal energy of the electrons increases, and some of them gain enough energy to overcome the potential barrier that holds them in the material.
⁸ https://en.wikipedia.org/wiki/avalanche_breakdown.


It is important to say that the hectic scene inside a semiconductor described in this section can be noticed from the outside. All these electrons and holes knocking about the junction create a good deal of noise which can affect external circuits.

For instance, shot noise, also known as Schottky noise, is a type of electrical noise that arises from the random nature of the flow of electric charge carriers in the material. In semiconductors, shot noise occurs when electrons and holes cross the junction, and it is caused by the discrete nature of charge carriers and their motion. Because of this discreteness, current in a junction does not flow smoothly but rather in bursts or "shots" of current. These bursts occur when electrons or holes overcome the potential barrier and move from one side to the other. The size and frequency of these bursts depend on several factors, including the applied voltage, the temperature of the material, and the concentration of charge carriers.

At the beginning of this section, we noted that thermal agitation causes electrons to break loose from their atoms in the lattice and go wild, creating electron–hole pairs. This process causes a noise called Johnson–Nyquist noise, also known as thermal noise: a type of electrical noise that arises from the random thermal motion of charge carriers, which means it obviously increases with temperature. Thermal noise is present in all electric circuits, and in radio receivers it may affect weak signals. There is also flicker noise which, although not fully understood, is believed to be related to the trapping and release of charge carriers by defects or impurities in the semiconductor material. All these noises can affect the performance of the external circuits—and more importantly, low-noise circuits—built around semiconductors, and their relevance changes depending on the application, the current levels, and the frequencies involved.

Overall, what we have described in this section is nothing but the inner workings of a diode. A diode is a solid-state device which conducts current primarily in one direction. As we will see, being able to control the direction of the flow of electrons and holes would prove to be important. So why stop at only one junction?
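For a feel of the magnitudes, the two main noise mechanisms above have simple closed forms. A minimal sketch with illustrative values of my choosing, not the book's:

```python
import math

q = 1.602e-19  # elementary charge, C
k = 1.381e-23  # Boltzmann constant, J/K

def shot_noise_current(i_dc, bandwidth_hz):
    """RMS shot noise of a current crossing a junction: sqrt(2 * q * I * B)."""
    return math.sqrt(2 * q * i_dc * bandwidth_hz)

def thermal_noise_voltage(r_ohm, t_kelvin, bandwidth_hz):
    """RMS Johnson-Nyquist voltage of a resistance: sqrt(4 * k * T * R * B)."""
    return math.sqrt(4 * k * t_kelvin * r_ohm * bandwidth_hz)

# 1 mA of junction current observed over a 1 MHz bandwidth:
print(f"shot noise: {shot_noise_current(1e-3, 1e6) * 1e9:.1f} nA")            # ~17.9 nA
# A 1 kOhm resistance at 300 K over the same bandwidth:
print(f"thermal noise: {thermal_noise_voltage(1e3, 300, 1e6) * 1e6:.1f} uV")  # ~4.1 uV
```

Tiny numbers, but in a radio receiver chasing a weak signal, nanoamps and microvolts of noise are exactly the scale that matters.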

3.2 The Transistor Drama

A drama you didn’t expect: the transistor drama. After Bardeen and Brattain’s December 1947 invention of the point-contact transistor,9 William Shockley dissociated himself from many of his colleagues at Bell Labs, and eventually became disenchanted with the institution itself. Some hint that this was the result of jealousy at not being fully involved in the final, crucial point-contact transistor experiments, and frustration at not progressing rapidly up the laboratory management ladder. Mr. Shockley had, in the words of his employees, an unusual management style.10

Shockley recognized that the point-contact transistor’s delicate mechanical configuration would be difficult to manufacture in high volume with sufficient reliability. He also disagreed with Bardeen’s explanation of how their transistor worked. Shockley claimed that positively charged holes could also penetrate through the bulk germanium material, not only trickle along a surface layer. And he was right. On February 16, 1948, physicist John Shive achieved transistor action in a sliver of germanium with point contacts on opposite sides, not next to each other, demonstrating that holes were indeed flowing through the thickest part of the crystal.

All we have said about the p–n junction applies to transistors as well. But transistors have three distinctive regions, with two boundaries or junctions: n–p–n or p–n–p, typically called emitter, base and collector. Emitters are heavily doped with impurities, which is why they are usually labeled n++ or p++. The base is weakly doped; for the collector the doping is less critical and depends on the manufacturing process. The most important constructive factor is the base width, W. The junction separating emitter from base is called, no wonder, the emitter junction, whereas the junction separating base from collector is called—drum roll—the collector junction. The naming, at least, is not complicated (Fig. 3.2).

Fig. 3.2 A bipolar transistor with one junction in forward-bias and another one in reverse-bias

To understand the inner workings of a transistor of this kind, let’s assume a p–n–p arrangement where we forward-bias the emitter junction, that is, the positive terminal of the voltage source connected to the emitter, and the negative terminal to the base (see figure above).

9 https://en.wikipedia.org/wiki/Point-contact_transistor.
10 https://www.nature.com/articles/442631a. (Eventually, William Shockley would also be famous for being a notorious racist.)

Conversely, we reverse-bias the collector junction: negative terminal of a power source to the collector, positive terminal to the base. This way, the emitter-to-base current is large because the junction is forward-biased—with the current value being governed by the diode equation.11 Given that this junction is highly asymmetric (the doping of the emitter p-region is orders of magnitude higher than the doping of the base n-region), the emitter current will be largely composed of holes going from the p-side to the n-side (current 1 in the figure). If the base width (W) is narrow enough, and because the base area is electrically neutral, the holes traversing the emitter junction will find their way to the collector junction, where the electric field will capture and inject them into the collector area (currents 3 and 4 in the figure). Some holes will recombine in the base (current 6), creating a base current which is very small due to the low doping of the base section and the small width of the base. With all this, the emitter current passes almost unaltered to the collector.

The collector current is almost independent of the collector–base voltage, as long as this voltage remains negative. Otherwise, the collector would also inject holes into the base, altering the overall functioning of the device. This is an important mode (saturation mode) we will talk about. The electric field at the collector junction injects the holes into the collector area, and the magnitude of this electric field does not affect the number of holes arriving there. It is the base, and the diffusion that happens there, which defines the number of holes that will make it to the collector. Even zero volts between collector and base would keep that current flowing.

Thus far, we have been analyzing the behavior of the transistor mostly from its direct-current (DC) biasing perspective. The analysis to follow would be about observing how the transistor behaves in the active region when fed with small—and not so small—AC signals superimposed on base voltages, causing the device’s biasing to fluctuate around certain points, and how the input and output signals should match each other, minimizing alterations (i.e., distortion). Although understanding this is of great importance and a topic in itself which finds applications in a myriad of fields such as analog circuits, radiofrequency, communications, hi-fi audio, and whatnot, for this discussion we shall focus on the device in switching mode, that is, moving between defined, discrete conduction states: from cutoff to saturation, swinging between them as fast as possible. In this mode, the transistor acts as a switch, going from one extreme state (cutoff, or open switch) to the other (saturation, or closed switch) as fast as possible.

A transistor operating in the cutoff region has its two junctions working in reverse-bias mode. In this situation, only leakage current flows from collector to emitter. Conversely, in saturation, the device has both junctions in forward-bias mode, allowing a small depletion layer and allowing the maximum current to flow through it. By controlling the biasing of the emitter–base junction, we can make the transistor transition between these two modes: full current conduction or practically zero.

11 https://en.wikipedia.org/wiki/shockley_diode_equation.
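The “emitter current passing almost unaltered to the collector” can be captured numerically with the common-base gain α, the fraction of emitter current that survives the trip across the base. A minimal sketch, with an illustrative α rather than data from any real part:

```python
# DC current bookkeeping in a BJT operating in the active region.
# alpha is the fraction of emitter current that reaches the collector;
# the remainder becomes base current via recombination in the base.

def bjt_dc_currents(i_e, alpha=0.99):
    i_c = alpha * i_e           # carriers collected at the collector junction
    i_b = i_e - i_c             # carriers lost to recombination in the base
    beta = alpha / (1 - alpha)  # common-emitter gain: I_C = beta * I_B
    return i_c, i_b, beta

i_c, i_b, beta = bjt_dc_currents(i_e=10e-3)  # 10 mA of emitter current
print(f"collector: {i_c*1e3:.2f} mA, base: {i_b*1e3:.3f} mA, beta = {beta:.0f}")
# A narrow, lightly doped base keeps recombination low, pushing alpha
# toward 1 -- which is exactly why the base width W matters so much.
```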


Fig. 3.3 NOT gate with BJT transistor

Table 3.1 NOT gate truth table

A   Output
0   1
1   0

The transistor in switching mode sets the foundation of the underlying behavior of practically all digital electronics and computer systems out there. So, all this hassle with electrons, holes, donors, acceptors, minority carriers and gossip at Bell Labs only to create a switch? Really? Yes. A very special kind of switch, one that would go down in history as the spark of a revolution. The junctions we described above, in the form of diodes and transistors, would become the basic building blocks of our modern digital toolbox. A toolbox that supports today’s machine learning, artificial intelligence and cloud computing, but also Instagram, TikTok and the metaverse.

How? Combining transistors in switching mode can form logic gates. For instance, a simple bipolar junction transistor (BJT, the one whose inner workings we described in Sect. 3.2) can form a NOT gate, which basically takes an input and inverts it (Fig. 3.3; Table 3.1).12 Similarly, a BJT can form a NAND gate (Fig. 3.4; Table 3.2).

12 Note that “1” and “0” states here are just logic states and do not represent specific voltages. Eventually, the semiconductor industry would standardize the voltage thresholds in logic among other specs, giving way to logic families.


Fig. 3.4 NAND gate with BJT transistor

Table 3.2 NAND gate truth table

A   B   Output
0   0   1
0   1   1
1   0   1
1   1   0

Eventually, logic gates would form flip-flops.13 Flip-flops would form registers, decoders, multiplexers and demultiplexers, but also adders, subtractors and multipliers, which in turn would form arithmetic logic units (ALUs). As integration technology and processes matured, designers would start packing several logic blocks such as memories, ALUs and buses inside smaller and smaller silicon dies.

13 A flip-flop is a digital circuit that can be thought of as a single bit of memory that can store either a 0 or a 1. It has two stable states, which are typically referred to as “SET” and “RESET”. The flip-flop can be set to either of these states using appropriate input signals, and it will remain in that state until it is reset or set again.
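The flip-flop idea from footnote 13 can be sketched in a few lines: two cross-coupled NAND gates form an SR latch, the simplest storage element of this family. This is a behavioral toy model (iterating until the outputs settle), not a timing-accurate circuit simulation:

```python
# SR latch from two cross-coupled NAND gates. Inputs are active-low:
# pulling s_n (or r_n) to 0 sets (or resets) the stored bit.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def sr_latch(s_n: int, r_n: int, q: int) -> int:
    """Iterate the cross-coupled NANDs until the outputs settle; return Q."""
    q_n = nand(r_n, q)
    for _ in range(4):          # a few passes are enough to converge
        q = nand(s_n, q_n)
        q_n = nand(r_n, q)
    return q

q = 0
q = sr_latch(0, 1, q); print("after SET:   Q =", q)  # Q -> 1
q = sr_latch(1, 1, q); print("hold:        Q =", q)  # Q stays 1 (memory!)
q = sr_latch(1, 0, q); print("after RESET: Q =", q)  # Q -> 0
```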


Then, engineers would create a clever digital machine whose behavior could be slightly modified—meaning it would perform different arithmetic operations and data movements between parts of its architecture—by means of binary words called instructions stored in a memory, giving way to machine code and CPU architectures.

Corrado Böhm, in his Ph.D. thesis,14 would conceive the foundations for the first compiler—which still lacked the name, as he called it “automatic programming”, with Böhm being one of the first computer science doctorates awarded anywhere in the world—an invention that would appear as a way of coping with the natural lack of human readability of machine code. The word ‘compiler’ would eventually be coined by Grace Hopper, who would go on to implement the first compiler ever. Compilers would accelerate the development of run-time behavior in CPUs, what we now call software. Not without creating some crisis in the process.15

In our eternal quest for more and more abstraction, and as different CPU architectures proliferated, porting software from architecture to architecture would become more problematic, so we would sort this out by packing layers of standardized software libraries and services that would dramatically ease our way of programming application software on top of dissimilar hardware, giving way to what we now call operating systems—which would, in the process, make some people obnoxiously rich.

And as bipolar integrated circuits passed the baton to more efficient fabrication processes,16 and as the physical lengths of integrated transistors shrank and their density doubled roughly every 2 years,17 their switching speed from cutoff to saturation would continue decreasing. With better integration technologies, more complex architectures became possible, making System-on-Chips, CPLDs and later FPGAs feasible devices and products. Combined with a new breed of spectrum-efficient digital modulation and signal processing techniques, mobile devices would materialize, maturing with them important related domains and technologies like displays, allowing us to create arbitrary arrangements of pixels in screens whose colorful photons would hit our retinas, creating appealing human–machine interfaces in applications that would allow us to, for example, send an emoji to a friend for comedic purposes.

How does space technology relate to all these happenings? In space, microprocessors and solid-state devices are ubiquitous because satellites need software, storage, and digital logic in order to process information on board and act accordingly. Systems on Chip (SoCs), FPGAs and logic gates are heavily used. The software and machine code that spacecraft run on board to manage their resources, their orientation or to control a payload sensor executes on these types of devices, and the space environment is not precisely nice to the microscopic structure that we have just described. Let’s see why.18

14 Translated version of the thesis: http://www.itu.dk/~sestoft/boehmthesis/boehm.pdf.
15 Software crisis is a term used in the early days of computing science to describe the difficulty of writing efficient computer programs in the required time. The software crisis was due to the rapid increases in computer power and the complexity of the problems that could not be tackled. With the increase in the complexity of the software, many software problems arose because existing methods were inadequate.
16 CMOS is actually based on metal–oxide–semiconductor field-effect transistors (MOSFETs), which are a different kind of transistor compared to the Bipolar Junction Transistor (BJT). MOSFETs have an insulated gate, the voltage of which determines the existence and conductivity of a conduction channel used for amplifying or switching electronic signals.
17 The doubling period is often misquoted as 18 months because of a prediction by Moore’s colleague, Intel executive David House. In 1975, House noted that Moore’s revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months (with no increase in power consumption). Mathematically, Moore’s Law predicted that transistor count would double every 2 years due to shrinking transistor dimensions and other improvements. As a consequence of shrinking dimensions, Dennard scaling predicted that power consumption per unit area would remain constant. Combining these effects, David House deduced that computer chip performance would roughly double every 18 months.
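As a quick sanity check of footnote 17's arithmetic, compounding a 24-month doubling of transistor count against an 18-month doubling of performance over a decade looks like this:

```python
# Moore's law vs. House's 18-month performance corollary, as exponentials.

def doublings(months, period_months):
    return 2 ** (months / period_months)

years = 10
m = years * 12
print(f"transistor count after {years} y: x{doublings(m, 24):.0f}")  # x32
print(f"performance after {years} y:      x{doublings(m, 18):.0f}")  # ~x100
```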

3.3 The Space Environment

Although we all are technically in space, traveling across interstellar regions while riding on this geoid we call Earth,19 we tend to live in a sort of crystal bubble given the coziness of this blue dot of ours. Space is a harsh place to be, at least compared to life here on the ground. We happen to be protected by two huge shields: the magnetosphere, which captures and deflects particles of different energies that would otherwise be harmful to us, and a thick layer of gas we call the atmosphere, which captures and neutralizes space debris wanting to hit us in the head. And both shields complement each other well.

Unlike Mercury, Venus, and Mars, Earth is surrounded by an immense magnetic field called the magnetosphere. The Earth has a magnetic field because it has a molten outer core of iron and nickel that is constantly in motion. The motion of the liquid outer core creates electrical currents, which in turn generate a magnetic field, as André-Marie Ampère stated in his eponymous circuital law. Our magnetosphere shields us from erosion of our atmosphere by the solar wind (charged particles the Sun continually spews at us), from erosion and particle radiation from coronal mass ejections (massive clouds of energetic and magnetized solar plasma and radiation), and from cosmic rays from deep space. The magnetosphere plays the role of gatekeeper, repelling this unwanted energy that’s harmful to life on Earth and trapping most of it a safe distance from Earth’s surface in doughnut-shaped zones called the Van Allen Belts.

18 Content for this section has been adapted, and the illustration taken, from the book “Electrónica del Estado Sólido” by Ángel Tremosa.
19 The name Earth is an English/German name which simply means the ground. It comes from the Old English words ‘eor(th)e’ and ‘ertha’. In German it is ‘erde’. The name Earth is at least 1000 years old.


Fig. 3.5 Van Allen radiation belts; crossing them is not the nicest ride for a satellite going somewhere (public domain)

The inner Van Allen belt is typically located between 6000 and 12,000 km (1–2 Earth radii20) above Earth’s surface, although it dips much closer over the South Atlantic Ocean. The outer radiation belt covers altitudes of approximately 25,000–45,000 km (4–7 Earth radii). As you may imagine, any semiconductor on board a satellite crossing these regions will not have the best time ever. Geostationary satellites must pierce through the radiation belts on their way to their final orbits (Fig. 3.5).

Hardware exposed to space must be ready to withstand all aspects of the environment. This includes vacuum, thermal cycling, charged particle radiation, ultraviolet radiation, plasma effects and atomic oxygen. Radiation is generally classified as being either ionizing or non-ionizing. The basic dividing line between the two is the energy levels involved. Ionizing radiation has sufficient energy to strip electrons from atoms, thus creating ions, that is, atoms with charge. Remember the previous section about holes, electrons, impurities, and the delicate mechanism inside the silicon lattice? Now imagine such a fragile microscopic scenery being bombarded by highly energetic particles impacting the crystal structure, knocking electrons out of place and disrupting the charge balance across the place. This is what the electronics on board every satellite flying over our heads is experiencing as we speak. Examples of ionizing radiation are alpha and beta particles, protons, X-rays, and gamma rays. Neutrons are not directly ionizing, but the resulting radiation from their collisions with nuclei is ionizing.

20 The Earth is almost, but not quite, a perfect sphere. Its equatorial radius is 6378 km, whereas its polar radius is 6357 km. A radius value of 6371 km is usually adopted.


In contrast, non-ionizing radiation only has sufficient energy to change the energy state of electrons. Examples of non-ionizing radiation are visible and infrared light, microwaves, and radio waves. Non-ionizing radiation cannot induce upsets in electronic devices but can still create undesired effects. Additionally, galactic cosmic rays (GCR), comprised of high-energy particles—overwhelmingly protons—impact the Earth’s atmosphere constantly. These particles, when they collide with molecules in the Earth’s atmosphere, produce a wide range (and a high number) of particles, primarily neutrons and protons. Neutrons are particularly troublesome because they can penetrate most man-made construction.21

The hard vacuum of space, with its pressures below 10⁻⁴ Pa (0.0001 Pa), causes some materials to outgas, which in turn affects any spacecraft component with a line of sight to the emitting material, principally optics sensitive to impurities on their lenses.

Another effect to suffer in space is thermal cycling. Thermal cycling occurs as the spacecraft moves through sunlight and shadow while in orbit or while maneuvering. Thermal cycling temperatures depend on the spacecraft component’s thermo-optical properties, i.e., solar absorptance, or how much solar energy the material absorbs, and infrared emittance, or how much thermal energy can be emitted to space. The lower the ratio of absorptance to emittance, the cooler the temperature of the spacecraft surface. Thermal cycling can cause cracking, delamination, and other mechanical problems, particularly in assemblies where there is a mismatch in the coefficient of thermal expansion between materials.

Radiation can also affect materials. Charged particle radiation, along with ultraviolet radiation, can cause cross-linking (hardening) and chain scission (weakening) of polymers, as well as darkening and color center formation in windows and optics. If all that was not enough, micrometeoroids and space debris particles may impact at high velocities. All of these may have significant effects on material properties (Fig. 3.6).

Satellites are designed to incorporate mitigation measures for the undesired effects of radiation mentioned: Ionization Dose, which refers to the cumulative effect of the energy deposited in matter by ionizing radiation per unit mass (known as Total Ionization Dose, or TID), and Single Event Effects (SEE), which are related to single, highly energetic particles interacting with the atomic structure of semiconductors and altering its behavior, both destructively and non-destructively.

TID affects semiconductors in several ways, for example by modifying threshold voltages. The mechanics of this is as follows: the trapping of holes in the material may cause a charge buildup, and it occurs in the bulk of the semiconductor oxide. These charges will increase the gate oxide electric fields, leading to a change in the current–voltage (I–V) characteristic of the device. The most prominent change is the shift of the power-on (threshold) voltage, which is negative for NMOS and positive for PMOS. As a result, a device might become unresponsive to some commands, as it might get “stuck” in a specific state.

21 https://www.microsemi.com/document-portal/doc_view/130760-neutron-seu-faq.


Fig. 3.6 Magnetic field strength at Earth’s surface (Creative Commons)

Devices might also see an increase in their leakage current (remember the concept of leakage current from the beginning of this chapter). In NMOS transistors, trapped charges might draw an image charge in the semiconductor, which can invert the interface and open leakage paths. These parasitic leakage currents cause degraded timings and increased power consumption.

In general, BJT transistors are more robust against radiation than MOS (metal–oxide–semiconductor) transistors. This is because the operation of a BJT transistor is based on the physical movement of minority carriers (see Sect. 3.2), which are not as susceptible to radiation-induced damage as the oxide layer in MOS transistors. In contrast, MOS transistors rely on the formation of a thin oxide layer, which can be disrupted by ionizing radiation. Furthermore, MOS transistors are more prone to Single Event Effects (SEEs, see next section), which occur when a charged particle strikes the gate oxide and alters the state of the transistor. BJT transistors are less sensitive to SEEs since the minority carrier transport is not affected by the radiation.

TID can also cause amplifier gain degradation. This usually manifests as a reduction in gain with increasing total dose exposure. To compensate for it, more power needs to be supplied to the device. And TID can also cause dark signals in camera sensors as a direct effect of the charging of gate oxides. This manifests as an increased noise background and is observed in both CCD and CMOS technologies. As a consequence, the dynamic range of the imager is compromised. This is a major problem for star trackers, which could fail to locate reference stars.

For TID mitigation—that is, for the on-board electronics to maintain its electrical performance (timing, current consumption) throughout the mission lifetime while subject to ionizing radiation energy deposition—a typical measure is to add shielding, which basically consists of adding barriers of certain materials. The effectiveness of a material in shielding radiation is determined by its half-value thickness, that is, the thickness of material that reduces the radiation by half. This value is a function of the material itself and of the type and energy of the ionizing radiation.

As for single event effects, the on-board electronics and on-board software shall be designed in a way that SEEs disrupt the nominal operations of the subsystem the least. Given that SEEs can be of destructive and non-destructive nature, different strategies will be defined for each case. Let’s unpack SEEs in the next section.
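Before unpacking SEEs, the half-value-thickness idea is worth a quick numeric sketch: each half-value layer of material cuts the transmitted radiation in half, so attenuation falls off exponentially with thickness. The HVL below is a made-up placeholder, since real values depend strongly on material and particle energy:

```python
# Exponential attenuation expressed via half-value layers (HVL).

def transmitted_fraction(thickness_mm, hvl_mm):
    return 0.5 ** (thickness_mm / hvl_mm)

hvl = 12.0  # hypothetical half-value thickness in mm, for illustration only
for t in (0, 12, 24, 48):
    print(f"{t:>3} mm of shielding -> {transmitted_fraction(t, hvl):.3f} of dose")
# 0 mm -> 1.000, 12 mm -> 0.500, 24 mm -> 0.250, 48 mm -> 0.062
# Shielding mass grows linearly while the benefit shrinks geometrically,
# which is why designers budget it carefully.
```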

3.3.1 Unpacking Single Event Effects (SEEs)

There are different kinds of single event effects, and different types of electronic devices are susceptible to SEEs in different ways. The table below summarizes how SEEs impact different types of devices, for both non-destructive and destructive single event effects (Fig. 3.7).22 It can be seen that, for instance, analog and mixed-signal circuits tend to be more robust (immune to different types of SEEs), as opposed to memories and Field-Programmable Gate Arrays (FPGAs), which tend to be highly susceptible to several kinds of SEEs. The SEEs listed are split in two halves: non-destructive (that is, unable to cause permanent damage) and destructive (able to create permanent damage). Let’s unpack the acronyms.

Fig. 3.7 Applicability of SEE to different device types

22 FAA, “Single Event Effects Mitigation Techniques Report”, https://www.faa.gov/aircraft/air_cert/design_approvals/air_software/media/TC-15-62.pdf.


3.3.1.1 Non-destructive Effects

- SEU: Single event upset
- MBU: Multiple bit upset
- MCU: Multiple cell upset
- SEFI: Single event functional interrupt
- SET: Single event transient
- SEC-DED: Single error correction and double error detection
- SED: Single event disturb.

Single Event Upset (SEU)

An SEU causes a change of state in a storage cell. The SEU affects memory devices, latches, registers, and sequential logic. Depending on the size of the deposition region and the amount of charge deposited, a single event can upset more than one storage cell, in which case the effect is called a multiple cell upset (MCU).

Multiple Bit Upset (MBU)

An MBU is defined as a single event that causes more than one bit to be upset during a single measurement. During an MBU, multiple bit errors in a single word can be introduced, as well as single bit errors in multiple adjacent words.

Single Event Functional Interrupt (SEFI)

The loss of functionality (or interruption of normal operation) in complex integrated circuits due to perturbation of control registers or clocks is called a single event functional interrupt (SEFI). An SEFI can generate a burst of errors or a long-duration loss of functionality (e.g., lockup). The functionality may be recovered either by cycling the power, resetting, or reloading a configuration register.

Single Event Transient (SET)

A single event transient (SET) is a short impulse generated in a gate, resulting in the wrong logic state at the combinatorial logic output. The wrong logic state will propagate if it appears during the active clock edge.

Single Event Disturb (SED)

The transient unstable state of a static random-access memory (SRAM) cell is described as resulting from a single event disturb (SED). This unstable SRAM state will eventually reach a stable state, at which point the characterization falls under SEU. Because the unstable state of the cell can last long enough that read instructions can be performed and soft errors generated, SEDs are identified separately.
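To make the SEU/MBU distinction concrete, here is a toy model of bit flips in a stored word. It also shows why a single parity bit catches an SEU but can miss a two-bit MBU—which is exactly the gap SEC-DED codes close:

```python
# Model a particle strike as XOR-ing one or more bit positions of a word.

def parity(word: int) -> int:
    return bin(word).count("1") % 2

def strike(word: int, *bit_positions: int) -> int:
    for b in bit_positions:
        word ^= 1 << b   # flip the struck bit
    return word

stored = 0b10110100
p = parity(stored)

seu = strike(stored, 3)     # single event upset: one bit flipped
mbu = strike(stored, 3, 4)  # multiple bit upset: two adjacent bits flipped

print("SEU detected by parity:", parity(seu) != p)  # True
print("MBU detected by parity:", parity(mbu) != p)  # False -> needs SEC-DED
```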


3.3.1.2 Destructive Effects

Destructive Single Event Effects are:

- SHE: Single Event Hard Error
- SEL: Single event latch-up
- SESB: Single Event Snap-Back
- SEB: Single Event Burnout
- SEGR: Single event gate rupture
- SEDR: Single event dielectric rupture.

Single Event Hard Error (SHE)

A single event hard error (SHE) is used to highlight the fact that a neutron-induced upset (e.g., SEU, MBU) is not recoverable. For example, when a particle hit causes damage to the device substrate in addition to the flipped bit, an SHE is declared in lieu of an SEU.

Single Event Latch-up (SEL)

In a four-layer semiconductor device, an SEL occurs when the energized particle activates one of a pair of parasitic transistors, which combine into a circuit with large positive feedback. As a result, the circuit turns fully on and causes a short across the device until it burns up or the power is cycled. The effect of an electric short is potentially destructive when it results in overheating of the structure and localized metal fusion.

Single Event Snap-Back (SESB)

SESBs are a subtype of SEL and, like SELs, they exhibit a high-current-consuming condition in the affected device. When the energized particle hits near the drain, an avalanche multiplication of the charge carriers is created. The transistor is open and remains so (hence the reference to a latch-up condition) until the power is cycled (the device snaps back).

Single Event Burnout (SEB)

A single event burnout (SEB) is a condition that can cause device destruction due to a high-current state in a power transistor, and the resulting failure is permanent. SEB susceptibility has been shown to decrease with increasing temperature. SEBs include burnout of power metal–oxide–semiconductor field-effect transistors (MOSFETs), gate rupture, frozen bits, and noise in charge-coupled devices.

Single Event Gate Rupture (SEGR)

An SEGR is caused by particle bombardment that creates a damaging ionization column between the gate oxide and drain in power components. It typically results in leakage currents at the gate and drain that exceed the normal leakage current of a non-exposed device. SEGRs may have destructive consequences.


Single Event Dielectric Rupture (SEDR)

The single event dielectric rupture (SEDR) has been observed in testing but not in space-flight data. Therefore, it is currently considered mostly an academic curiosity. An SEDR is identified from a small permanent jump in the core power supply current.

3.3.1.3 Mitigation

For the typical mitigation techniques against SEEs, two distinctive approaches are frequently used: internal and external. Internal here means intra-integrated circuit (inside the chip). For example, certain mitigation techniques require increasing distances in the semiconductors, or adding capacitive hardening to dynamic random-access memories by inserting trench capacitors23 and transmission gates. All these methods need to be implemented by the chip designers and carry the penalty of requiring larger semiconductor area, with all the related costs. Hardened chips may be manufactured on insulating substrates instead of the usual semiconductor wafers; silicon on insulator (SOI) and silicon on sapphire (SOS) are commonly used. Another approach is shielding the package against radiation to reduce exposure of the die, or shielding the chips themselves (from neutrons) by using special elements like depleted boron—consisting of the isotope boron-11—in the glass passivation layer protecting the chips.

External mitigation techniques are those which can be added at the board or subsystem level, i.e., outside the chip. Current limiters, scheduled power cycling, memory scrubbing, generous design margins: these are all techniques which do not require tampering with semiconductors and can be added by the system designers, with the penalty of increasing the overall complexity of the design, especially mass and power.

23 A deep trench capacitor (DTC) is a three-dimensional vertical capacitor formed by etching a deep trench into a silicon substrate.
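Two of those external techniques—redundancy with majority voting and memory scrubbing—are easy to picture in software. The sketch below is purely illustrative, not flight code:

```python
# Triple modular redundancy (TMR) with bitwise majority voting, plus a
# scrubbing pass that rewrites every copy with the voted value.

def majority(a: int, b: int, c: int) -> int:
    # Bitwise 2-out-of-3 vote: a bit is 1 if at least two copies agree on 1.
    return (a & b) | (a & c) | (b & c)

copies = [0b1011, 0b1011, 0b1011]  # three redundant copies of one word
copies[1] ^= 0b0100                # an SEU flips a bit in copy 1

voted = majority(*copies)
print(f"voted value: {voted:#06b}")  # still 0b1011 despite the upset

# Scrubbing: periodically repair all copies so a later, second upset
# cannot pair up with the first one and outvote the truth.
copies = [voted] * 3
```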

4 The Hectic Ride to Space

It can’t be that hard, it’s just lift versus drag and rotation. —Jamaal (Me, Myself and Irene, 2000) while they try to figure out how to fly a helicopter

Abstract

Satellites are given the necessary velocity to achieve orbit by means of a launch vehicle that extracts chemical energy stored in the fuel and transforms it into heat and work to increase its own positional and motion energy, and with that the energy of the occasional passengers. Rockets are way more than just heat engines: they need structures, instrumentation, data links, piping, guidance, navigation, control, computers, and software. The road to orbit is a wild ride, full of shocks, accelerations and vibrations spacecraft must survive before they even have a chance to provide a service in orbit. This chapter describes the hectic journey and how it impacts the satellite’s design process.

The most typical use case of launchers is a vertical rocket on a launchpad at ground level, in which the vertical speed gained from its engines’ thrust ensures departure from the thickest atmosphere layers, only to then maneuver to imprint the right tangential velocity in the higher atmosphere and achieve the required orbital velocity. It sounds simple, but it is extremely complicated: rockets have hectic dynamics and engines are complex, so an incredible variety of situations is experienced by a spacecraft while attached to a rocket on its way to space. Other launcher technologies besides vertical launchers exist, although they are less popular and less cost-effective: aircraft-based, balloon-based, and even slingshot-based launchers exist at disparate levels of maturity. We will refer to vertical launch as the baseline for this text, due to its popularity among all available options as of today.



A launch generates high stresses on spacecraft structures. From a satellite perspective, launch starts when the boosters’ engines ignite (the launcher lifts off from the pad) and ends with the spacecraft separation, when it is finally allowed to float alone in space. Mind you, it is a wild ride. Launchers are generally designed in stages, which discretizes a typical flight into a set of events where different parts of the rocket are separated and jettisoned in a complex sequence that involves stopping and starting rocket engines, creating a number of accelerations and shocks in the process. Rockets undergo different aerodynamic scenarios and can suffer from turbulence just as commercial airliners do, adding to the overall hecticness of the ride. In vehicles such as the Ariane 5 launcher, peak axial load factors of 4.2 g are expected (a load factor is a dimensionless multiple of g that represents the inertia force acting on the spacecraft or unit).

There are many loads present during rocket flight, both in the lateral and longitudinal (i.e., axial, or along the rocket’s long body) directions. Some loads are predicted as a function of time and others can only be estimated statistically as random loads. In the mechanical environment, we find:

- Steady state accelerations
- Low frequency vibrations
- Broadband vibrations, such as:
  – Random vibrations
  – Acoustic vibrations
- Shock vibrations.

Loads are transmitted from the launch vehicle to the payload—the satellite—through the mechanical interface which bonds the rocket and spacecraft together, usually a ring with bolts. During lift-off and the early phases of the launch, a high level of vibrations transfers from the vehicle to the payload through their common interface. The principal sources of vibrations are the rocket engines’ thrusting, and aerodynamic turbulence and acoustic noise in the form of pressure waves impinging on lightweight appendages, which can be excited beyond safe margins. Everything is so hectic that bolts may simply come off, as well as cables, boards, or brackets on board the satellite if not properly secured. As you may imagine, assembling a spacecraft requires adding a good amount of glue.

In summary, the design process must consider all these factors and include the necessary measures to ensure launch survivability, considering that the satellite will be violently shaken and hit several times before it can even get to start thinking of providing its service.
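As a back-of-the-envelope illustration of what a 4.2 g axial load factor implies, the inertial force grows linearly with mass (the spacecraft mass below is an assumption chosen for illustration):

```python
# Inertial force under a steady load factor: F = n * m * g0.

G0 = 9.80665  # standard gravity, m/s^2

def inertial_force_newton(mass_kg, load_factor_g):
    return load_factor_g * mass_kg * G0

sat_mass = 200.0  # hypothetical smallsat-class spacecraft, kg
f_axial = inertial_force_newton(sat_mass, 4.2)
print(f"axial inertial load: {f_axial/1e3:.1f} kN")  # ~8.2 kN
# Every bracket, bolt and board in the load path must carry its share of
# this (plus margins) -- on top of the vibration and shock environments.
```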


Tip: Launch vehicle manuals are great sources of information about loads, sequences, and cool infographics.1,2

4.1 Rideshares, Dispensers, and Orbital Transfer Vehicles

NewSpace companies, in general, cannot afford first-class tickets on rockets—that is, having a comfy space for them alone to stretch their legs and sleep like a baby. Such tickets are worth tens of millions, and because small satellites coming from NewSpace are generally made on a budget, they can only afford what is called a ‘rideshare’ in space jargon. Rideshares are ways the launch providers can make better use of their available launch volume and make a profit in the process. For this, rocket companies provide adapters such as the popular EELV Secondary Payload Adapter (ESPA3), which can accommodate several secondary payloads attached to the available rings. Launch vehicles can stack several ESPA rings together. Once they deliver the primary payload, launch providers begin to deliver the different secondary payloads sequentially. One rocket may contain hundreds of secondary payloads.4

Broker companies can also make a buck or two by attaching dispensers to ESPA ports and exploiting the space for dispensing smaller satellites. Even further, companies may attach Orbital Transfer Vehicles (OTVs), which are satellite dispensers with their own propulsion, computers, and attitude control—motherships of sorts—that can carry several payloads and insert them into customer-specific target orbits that rocket launchers cannot (or do not want to) go for. A breed of new actors offering these services has bloomed in the market in the last 5 years or so, and many of them are still maturing their technologies and trying to reach commercial capacity, not without problems.

The fundamental challenge for OTVs is to convince customers that it is worth the risk of riding on board another full-blown spacecraft in order to reach the destination orbit. Constellation operators face a dilemma: acquiescing to the non-ideal orbits rocket launchers drop them into with high reliability and proven heritage, versus reaching a customized orbit (within certain limits) on board a low-maturity, comparatively low-reliability mothership? At the time of writing—April 2023—the market appears to prefer the former, but it is true that OTVs are still new, and the trend may eventually change if they prove to be a reliable option, not only cost-wise but also technically speaking.

1 https://www.arianespace.com/wp-content/uploads/2011/07/Ariane5_Users-Manual_October2016.pdf.
2 https://www.spacex.com/media/falcon-users-guide-2021-09.pdf.
3 The EELV Secondary Payload Adapter (ESPA) is an adapter for launching secondary payloads on orbital launch vehicles. Originally developed for US launch vehicles in the 2000s to launch secondary payloads on space missions of the United States Department of Defense that used the Atlas V and Delta IV, the adapter design has become a de facto standard and is now also used on non-governmental private spacecraft missions as well.
4 The Transporter-6 mission from SpaceX launched 114 secondary payloads in early January 2023.

5 Configuring Spacecraft

Make it simple, but significant. —Don Draper, fictional character on Mad Men

Abstract

Configuring a spacecraft means finding an optimal geometric and physical shape that can satisfy requirements within cost and schedule. And it also means defining the spatial and functional arrangement of the on-board equipment. Satellites are like Russian dolls: boxes inside boxes. Some of those ‘boxes’ are required to be at specific locations and looking in specific directions. This chapter dives into the complex, multidisciplinary challenge of configuring spacecraft where the main goal is to make everybody as happy as possible, only to end up making almost everybody unhappy.

Satellites are highly intertwined collections of subsystems (see the next chapter for more details about these subsystems), and the main outcome of the configuration analysis phase is to define a geometric baseline of the placement and orientation of the most relevant domains (namely structure, power, attitude control and payload, which concentrate a high percentage of the spacecraft’s mass and complexity) while observing the high-level requirements, including the envelope constraint, which is the volume available to occupy inside the selected launch vehicle. If the spacecraft flies in a rideshare (see Sect. 4.1), the envelope is typically a tight squeeze, which makes the configuration process more interesting, to use a polite term.

The configuration process starts at the beginning of the project, and it is periodically reviewed throughout the system lifecycle. In short, configuration analysis is practically a permanent, ongoing activity. During configuration, each subsystem (including payload) contributes to a tree-like decomposition called the product breakdown structure (PBS), including details such as:

- List of units, with unique designators
- Environmental limits (allowable flight temperatures, accelerations, radiation)
- Power, size and mass of subsystems
- Interfaces.

Configuring spacecraft requires analyzing the information provided by the selected launch vehicle thoroughly to understand the allowed envelope inside the fairing, the mechanical interfaces, and the relevant requirements in terms of structural frequencies and loads, as we saw in the previous chapter. Configuring spacecraft must also include a good amount of technical bookkeeping and budgeting, which entails analyzing the stocks and flows of on-board resources such as power, pointing, data links, propellant, thermal and data, in order to understand and ensure their generation, consumption, and storage. In an iterative capacity, and as more information flows in, the configuration analysis shall:

- Refine the detailed Bill of Materials (also called Required Equipment List, or REL)
- Refine the mass of each unit + contingency
- Refine the power consumption of each unit + contingency
- Refine the mass properties along the body axes for control purposes.
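This kind of bookkeeping is easy to picture as a table of units with contingency margins rolled up against available generation; a toy power-budget sketch, with all unit names and numbers invented for illustration:

```python
# Roll up unit power draws (each with its own contingency) against the
# power the arrays can generate. Purely illustrative values.

UNITS = {              # unit: (power draw in W, contingency fraction)
    "OBC":     (10.0, 0.10),
    "radio":   (25.0, 0.20),
    "ADCS":    (15.0, 0.15),
    "payload": (40.0, 0.25),
}

def power_budget(units, generation_w):
    total = sum(p * (1 + c) for p, c in units.values())
    return total, generation_w - total

total, margin = power_budget(UNITS, generation_w=120.0)
print(f"consumption incl. contingency: {total:.1f} W, margin: {margin:.1f} W")
# 11 + 30 + 17.25 + 50 = 108.25 W -> 11.75 W left for growth; every change
# elsewhere (mass, propulsion, eclipse duration) feeds back into this table.
```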

While configuring spacecraft, the feedback loops are everywhere. For instance, the power budget feeds solar array sizing, which feeds the mass budget, which impacts the propulsion design, which feeds the power budget (if electric propulsion is chosen). The efficiency of the power subsystem design feeds battery dimensioning activities, which are also shaped by the orbital period and the eclipse time (during eclipses there is no power generation, so satellites survive on battery power). Propulsion analysis feeds the propellant budget, which in turn feeds the thrusters’ definition (thrust, orientation, type, and number of thrusters), which also feeds the mass budget and the power budget, because thrusters are electrically actuated. Attitude control configuration activities require analyzing star-tracker orientation to avoid sun intrusion, which may require moving equipment out of the way; but as more elements come into the satellite body, there’s less and less space for playing this “Tetris” of sorts. Analysis of shadows must be performed to avoid a deployable appendage affecting optics or solar panels, same as plume impingement analysis—plume impingement being a phenomenon that occurs in propulsion subsystems where the exhaust plume impinges on a surface such as the spacecraft structure and causes damage or degradation. In time, the mass budget starts to provide an inertia matrix—which indicates how mass is distributed along the spacecraft axes—that the actuators must be able to slew for maneuvers, which feeds the reaction wheel sizing, which in turn depends on orbital disturbances, and so on. There is an unwritten axiom that silently rules space engineers: you move a screw in a satellite, and you unleash hell.

For other subsystems, things don’t get any easier. For the payload: size, mass, mounting requirements, deployment sequence (if any), fields of view, power required, electrical interfaces, data transfer requirements, electromagnetic interference requirements. All this with the added complexity that payloads are usually supplied by a different organization than the bus designer’s, so add on top of that all the complexities involved in two dissimilar companies—with different cultures, possibly different languages, and different processes and methodologies—interacting together. For communications, the process includes antenna dimensioning and orientation, uplink/downlink requirements in terms of signal-to-noise ratio (or its more specialized derivatives like Eb/N0 1), power output of transmitters, sensitivity of receivers, antenna apertures, attenuators, weather attenuation, and so forth. For data links, ground stations are rarely owned by the spacecraft operators, so bus providers must ensure compatibility with the equipment of third-party ground stations, which may vary from provider to provider.

As subsystem design matures, light appears at the end of the tunnel, and the configuration starts to feed from better and better data, taking in more detailed information stemming from all the subdomains and allowing certain modifications to be made. The earlier the changes are made, the cheaper and simpler they are to execute. Configuration analysis is a collaborative activity between systems engineers, domain experts and project managers. Change is, ironically, a constant in space design, and fluid communication within the team is key to catch the changes as they inevitably happen. Big changes happening too late in the schedule are usually bad news in space missions: the bigger the change, the bigger the ripple waves, and the broader the analysis that must be performed to assess that everything is still compliant with requirements.

Once the configuration process gains enough maturity, it provides the confidence to procure or manufacture the different elements and components, and soon the design starts to be ready for Assembly, Integration and Test (AIT, see Sect. 6.8 for more details). This involves handling each component—either built in-house or supplied by a third party—and assembling them into the final configuration, without forgetting the glue, of course (see Chap. 4). With the spacecraft assembled, the next step is to test and verify it: a series of comprehensive tests to ensure that the spacecraft can withstand the stresses of the launch and operate effectively in the intended environment.

1 In digital communications, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the “SNR per bit”. It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.
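A hedged back-of-the-envelope calculation shows how Eb/N0 falls out of a link budget; every input below is an illustrative placeholder, not a real mission figure:

```python
import math

def ebn0_db(received_power_dbw, noise_density_dbw_hz, bit_rate_bps):
    # Eb/N0 [dB] = Pr [dBW] - N0 [dBW/Hz] - 10*log10(Rb)
    return received_power_dbw - noise_density_dbw_hz - 10 * math.log10(bit_rate_bps)

pr = -130.0  # received carrier power, dBW (hypothetical)
n0 = -200.0  # noise power spectral density, dBW/Hz (hypothetical)
rb = 1e6     # 1 Mbit/s downlink

required = 9.6  # e.g. uncoded BPSK at a bit error rate of 1e-5
ebn0 = ebn0_db(pr, n0, rb)
print(f"Eb/N0 = {ebn0:.1f} dB, margin = {ebn0 - required:.1f} dB")
# -130 + 200 - 60 = 10.0 dB -> only 0.4 dB of margin: too thin to fly,
# so the designer would add antenna gain, power, coding, or cut the rate.
```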


This stage involves verifying that each subsystem is functioning correctly, and that the spacecraft design is, a priori, able to meet the mission objectives, which largely drive the design process. We’ve talked about verification thus far. In terms of validation, the mission objectives will only be validated with the satellite placed in the target environment. The literature loves to confuse these two terms.

As we hinted a few paragraphs ago, some spacecraft designs are conceived as “buses” or “platforms”; that is, they are generic designs which are not strictly mapped to one particular mission but aim to be “multi-mission”. Multi-mission design requires a different, more modular approach, where the architecture must be broken down into building blocks that can be reused from mission to mission. This is far beyond the scope of this chapter, but we will say here that multi-mission spacecraft design requires applying product variant management techniques to ensure that the “core” design and the specific mission designs map consistently, without jeopardizing reliability while at the same time maximizing reuse and commonality. I wrote extensively about modularity in spacecraft in some other text;2 feel free to go and take a deeper look.

2 https://link.springer.com/chapter/10.1007/978-3-030-66898-3_5.

6 A Peek Under the Hood

A fist is more than the sum of its fingers. —Margaret Atwood

Abstract

To have a working satellite in orbit, you need a set of “building blocks” in place. As a bare minimum, all satellites need housing, need a way of generating and storing power, need radio links, orientation control, computers, and industrial amounts of software, and glue. This chapter describes these building blocks and how their interconnection creates working spacecraft.

6.1 The Skeleton: Structures and Mechanisms

The history of aerospace is also the history of materials. In 1915, just 12 years after the Wright Brothers had made the first ever powered flight, this was a hot topic for discussion among aviation experts: how could metal fly? A century later, we are flying rockets made of sophisticated composite materials carrying satellites made of aluminum alloys, carbon fiber, titanium, and many other materials, including some 3D-printed parts. But a hundred years ago, things were a tad different. Aircraft were made to be as light as possible, often using wood, steel wires and canvas; the idea of a plane made entirely of metal seemed technically infeasible.

One man knew differently, though. German pioneer Hugo Junkers saw the future of aviation not only in aerial battles and flying competitions but in large-scale transport of goods and passengers. That would require a major change to the way aircraft were made. His revolutionary J 1 (the space character between the J and the 1 is intentional) was the world’s first all-metal aircraft, as well as the first to use a single monoplane wing. The J 1 was nicknamed the Blechesel (“Tin Donkey” or “Sheet Metal Donkey”).

Fig. 6.1 Junkers J 1 (public domain)

Junkers found that steel made the J 1 tough and durable, but heavy and sluggish to handle. He turned his attention to aluminum, which had emerged at the start of the twentieth century as a viable manufacturing material. Lightweight and strong, it is a third the weight of steel, making it ideal for aircraft. He used it to develop the world’s first civil airliners, such as the F13 and G24. Junkers’ work was noticed by Henry Ford, who borrowed heavily from it (too heavily, said Junkers, who decided to take him to court1) to make the Ford Trimotor in 1925. These aircraft welcomed the age of long-distance passenger aviation, although it wasn’t until the early 1930s that metal aircraft could be manufactured cost-effectively (Fig. 6.1).

In space, materials are incredibly important, although not from an aerodynamic perspective. Aerodynamics does not play such a relevant part, even though there is, to be technically accurate, aerodynamic drag in space. Such small drag—due to the thin remnants of the atmosphere’s upper layers—manifests as a disturbance force that alters satellites’ orbits and as a torque acting on the spacecraft body which affects the satellite’s orientation. Spacecraft actuators must be sized to account for this disturbance, and for other disturbances such as gravity, radiation pressure, etc. (see Sect. 6.4 for more details on disturbances).

Mass reduction is a strong driver for material research in space because launch cost is related to the amount of mass to be launched. But mass cannot be optimized or minimized in an isolated manner, because of the complex tradeoffs mass reduction may cause on stiffness, thermal conductivity, and structural response. Schematically, spacecraft metallic structures can be modeled as a distributed network of springs, masses, and dampers.

1 http://www.blancorincon.com/Fragatas/The%20Story%20of%20the%20Ford%20Trimotor.htm.


The “spring” part in a structure comes from the incapacity of materials to be perfectly rigid, as all materials distort under load. Up to certain loads, materials show a linear relationship between load and displacement (just like a linear spring). Within this region, the material or component will return to its original shape should the load be removed. If we continue loading it, the material will no longer behave linearly, showing permanent deformation until rupture is reached. Stiffness and strength depend on cross-sectional area, length, and material. Stiffness is defined as a measure of the load required to cause a certain amount of deflection in the material. Strength-to-weight ratio is a critical factor in choosing structural materials for spacecraft.

Static and dynamic loads must be considered, along with thermal performance, corrosion protection, manufacturability, reparability, and cost. Static loads are those which remain constant in time, whereas dynamic loads vary with time. An example of a static load is the weight of computers and units on board (the load is present when a steady acceleration is applied). Examples of dynamic loads are engine thrust, sound pressure and gusts of wind during launch.

A structure’s stiffness and mass are distributed as a result of the material’s elasticity and density. When subjected to time-varying forces, a structure’s total response is the sum of the responses of its modes of vibration. Fortunately, only a small number of modes (typically those with the lowest frequencies) are of interest. The displacement associated with higher-order modes isn’t enough to produce significant stress. The frequency band over which vibration is a concern depends on the structure’s parameters, such as size, material and geometry, and on the excitation environment. Every structure has an infinite number of natural frequencies corresponding to different mode shapes of vibration. For a long highway bridge, for example, excited by wind gusts, only the modes with frequencies less than one or two Hz might be significant. For a spacecraft’s primary structure during launch, one might need to consider modes up to 50 Hz. For smaller structures like unit boxes and electronic components, the frequency responses can be at several hundred Hertz (see the numeric sketch after the list below).

One of the main requirements driving spacecraft structures is to withstand the wild ride on top of the rockets (see Chap. 4) without degradation, to ensure the correct performance of the mission once in orbit, because, well, that’s when the actual work starts. Structures provide housing to the spacecraft’s key components in specific locations and orientations, considering thermal management, field of view, orientation, etc. But structures also:

- Protect the spacecraft’s components from the environment during ground operations, launch, deployment, and mission operations.
- Ensure that structural vibration will not interfere with the launch vehicle’s control system. Similarly, the spacecraft’s structural vibration in its deployed configuration must not interfere with its own control system.
- Ensure on-board vibrations will not affect payload performance.
- Ensure that the materials used will survive ground, launch and on-orbit environments (pressure, humidity, radiation, contamination, thermal cycling, and atomic particles) without rupturing, collapsing, excessively distorting or contaminating critical components.
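As promised, here is the spring-mass picture in numbers: an idealized single-degree-of-freedom structure has a natural frequency f = (1/2π)·√(k/m). The stiffness and mass values below are illustrative assumptions:

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    # f = (1 / 2*pi) * sqrt(k / m) for a single spring-mass oscillator
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# A hypothetical 200 kg spacecraft on a mount with an effective axial
# stiffness of 2e7 N/m:
f0 = natural_frequency_hz(2e7, 200.0)
print(f"first axial mode = {f0:.0f} Hz")  # ~50 Hz, in the range the text
# mentions for primary structures during launch; stiffer mounts or lighter
# spacecraft push this frequency up.
```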

The primary structure is the skeleton of the spacecraft, and the main vessel of the loads transferred between the spacecraft’s components and the launch vehicle. The primary structure provides the mechanical interface not only with the launch vehicle but also with all the mechanical ground support equipment (MGSE) needed during the design and development phase, notably during AIT activities. One of the most important design decisions in space projects is the definition of the primary structure. But secondary structures are crucial as well. Secondary structures include support beams, booms, trusses, and solar panels. Smaller structures, such as boxes and brackets that support harnesses, are called tertiary structures.

The primary structure is designed for stiffness (or natural frequency) and to survive steady-state accelerations and transient loading. Secondary structures are also designed for stiffness, but they are also affected by on-orbit thermal cycling, acoustic pressure, and the high-frequency environment. Structural design complexity is constrained by cost and by the requirement that all its elements should be producible and testable. A producible design is one that the engineers can build from affordable raw materials, using established and simple processes and available equipment and tooling. A testable product is one that can be handled and verified on the ground without the need for overly complex infrastructure and procedures, and with measurable success criteria.

Most materials expand when heated and contract when cooled. In space, the Sun ensures that an orbiting spacecraft’s temperatures are neither uniform nor constant. As a result, spacecraft structures distort. The various materials that make up a spacecraft expand or contract by different amounts as temperatures change. Thus, they push and pull on each other, resulting in stresses that can cause them to yield or rupture. Most spacecraft require accurate predictions of thermal deformations to verify pointing and alignment requirements for sensors or communication antennas. Thermo-elastic forces usually drive the joint design in structures combining materials with dissimilar CTEs (coefficients of thermal expansion), since they cause high shear loads in the fasteners that join such materials. There are guidelines and practices that help reduce the effects of thermal deformation and stresses:

- Contain critical members and assemblies in multilayer insulation (MLI) to keep temperature changes and gradients as low as possible.
- When possible, design structures and their interfaces with isostatic mount designs in order to avoid thermally induced boundary loads.
- Design structural members from materials with low CTE to minimize thermal deformations.
- For structures with large temperature excursions, it’s recommended to use materials with similar CTEs. When attaching members of different materials, design the joints to allow for the expected differences in thermal expansion.
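To see why CTE mismatch deserves its own guidelines, here is a rough order-of-magnitude sketch of the stress in a fully constrained bi-material joint (σ ≈ E·Δα·ΔT), with textbook-ish material values assumed for illustration:

```python
# Thermo-elastic stress for a fully constrained aluminum/CFRP joint.
# Values are rounded textbook approximations, not design data.

E_AL = 70e9       # Young's modulus of aluminum, Pa
CTE_AL = 23e-6    # aluminum, 1/K
CTE_CFRP = 2e-6   # a carbon-fiber laminate (varies widely by layup), 1/K

def thermal_stress_pa(e_modulus, d_alpha, d_temp):
    return e_modulus * d_alpha * d_temp

dT = 100.0  # a plausible on-orbit temperature swing, K
sigma = thermal_stress_pa(E_AL, CTE_AL - CTE_CFRP, dT)
print(f"constrained thermal stress = {sigma/1e6:.0f} MPa")  # ~147 MPa
# A sizable fraction of common aluminum alloys' yield strength -- which is
# why fasteners joining dissimilar materials get special attention.
```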


The structure is an essential subsystem in any spacecraft. There is little point in having the most sophisticated computers or the fanciest software-defined avionics if the whole thing falls apart as soon as the rocket lifts off.

6.2 The Data Links: From Sparks to Mobile Networks, Lasers, and In-Orbit Networks

German physicist Heinrich Hertz was among the first to experiment with the generation of radio waves, creating high-voltage sparks between two conductors and pioneering the investigation of electromagnetics and propagation. In reality, there were vast numbers of radio waves in the universe long before Hertz performed his famous experiments. Electromagnetic waves are generated naturally whenever electric charges are accelerated or decelerated. All hot objects, in which charged particles are in rapid random motion, radiate electromagnetic energy at various frequencies. The stars are potent sources of electromagnetic energy, which is the basis of radio astronomy. On our own planet, atmospheric events such as lightning strikes produce showers of radio energy, noticeable as the background crashes and crackles heard on broadcast receivers during thunderstorms. In most of these cases of natural generation, the radio energy is incoherent: a jumble of photons of disparate energies. The same is true of many human-made sources of electromagnetic disturbance, such as electrical machinery [2].

Italian inventor and electrical engineer Guglielmo Marconi successfully transmitted radio energy over a bit more than a kilometer, generating his own "invisible waves" in Pontecchio, Italy. After failing to impress the Italian government, Marconi traveled to England at the age of 22, where he found financial backers for his work. By 1897, Marconi was broadcasting radio waves up to 20 km away and was commissioned to set up a wireless station on the Isle of Wight so that Queen Victoria could send Morse code messages to her son while he was aboard his yacht. In 1899, Marconi began work on a transatlantic broadcast, believing (unlike the leading physicists of the day) that the signals would follow the curvature of the earth. In 1901, he achieved a 3430 km transmission of three Ss (three sets of three dots in Morse code) from Poldhu, Cornwall to Signal Hill in St. John's. Needless to say, his work led to a communications revolution. But why exactly?

[2] Adapted from Radio Antennas and Propagation: Radio Engineering Fundamentals (1st Edition), William Gosling.


Marconi will not be remembered for generating precise radio waves (he was, technically speaking, spouting noise in a more or less controlled manner), but he will be remembered for leading the work to convey information wirelessly by manipulating electrical signals with the help of electric circuitry. In more modern terms, he fathered radio access technology [3].

In the late 1890s, Canadian-American inventor Reginald Fessenden started to realize that spark transmitters were not so elegant, and that he could develop a far more efficient transmitter and coherent receiver combination. To this end, he worked on a high-speed alternator that generated pure sine waves and produced a continuous train of propagating waves of substantially uniform nature: in modern terminology, a continuous-wave (CW) transmitter. This still consisted of turning a sine wave of fixed characteristics on and off in a rather crude way. In time, the idea of switching a signal on and off would give way to a more subtle approach: keeping the carrier on and altering the attributes of that carrier at the rhythm of another signal containing the information to be transmitted. In other words, modulation.

Fessenden would at some point introduce the concept of signal heterodyning, where a receiver has a local oscillator producing a radio signal adjusted to be close in frequency to the signal being received. When the two signals are mixed, a "beat" frequency equal to the difference between the two frequencies is created. Adjusting the local oscillator frequency puts the beat frequency in the audio range, where it is heard as a tone in the receiver's speaker whenever the transmitter signal is present (a small numerical sketch follows the list below). Now we could hear tones, and thus the Morse code "dots" and "dashes" became audible as beeping sounds. Nice. Soon, we would realize that nothing prevented us from opportunistically altering a carrier's amplitude with something richer than mere tones (a tone is a signal composed of a single frequency), using spectrally richer signals such as audio [4]. Consequently, voice communication and broadcasting would become very popular.

With all this, it soon became clear that:

- The bandwidth of the modulating signal (its spectral richness of frequencies) would turn out to be of great importance, since it would demand certain capabilities from transmitters and receivers to protect that bandwidth across their signal chains if one intended to ensure audio quality.
- The rapid proliferation of radio transmitters would make evident the need for spectrum coordination. The experimental early phase of radio communications was coming to an end, and transmitters' activity had to be regulated to reduce unwanted interference and aim for a more responsible use of a key natural resource: the radio spectrum.
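Here is a minimal numerical sketch of heterodyning; the sample rate and frequencies are invented. Multiplying (mixing) two sinusoids produces components at the sum and difference frequencies, and the difference ("beat") can land in the audio range.

    import numpy as np

    fs = 1_000_000                          # sample rate, Hz (assumed)
    t = np.arange(0, 0.01, 1 / fs)          # 10 ms of signal
    rx = np.sin(2 * np.pi * 100_000 * t)    # incoming 100 kHz carrier (hypothetical)
    lo = np.sin(2 * np.pi * 101_000 * t)    # local oscillator at 101 kHz
    mixed = rx * lo                         # mixing = multiplication

    # The spectrum of `mixed` has peaks near 1 kHz (difference) and 201 kHz (sum)
    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    low_band = spectrum[freqs < 50_000]
    print(freqs[low_band.argmax()])         # ~1000 Hz: the audible beat component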

[3] A Radio Access Technology (RAT) is the underlying physical connection method for a radio-based communication network.
[4] Audio is the conversion of sound waves into electric signals by means of a transducer such as a microphone.


Additionally, as other technologies like transistors, integrated circuits, and computers matured fast, we would realize that the modulating signals did not need to be analog. Digital signals could also be used to alter a carrier's attributes. In the meantime, we would also figure out ways of altering other attributes of sinusoidal signals, namely frequency and phase, which showed better bandwidth efficiency: a change in the carrier's phase could be coded to represent several data bits. Digital modulation would make a glorious appearance, giving us the possibility of sending digital data between a transmitter and a receiver.

A small practical problem showed up: Fourier analysis [5] taught us that a perfectly square wave is equivalent to a sine wave at the fundamental frequency summed with an infinite series of odd-multiple harmonics at decreasing amplitudes (describing a sinc function in the frequency domain). This meant a quasi-infinite bandwidth to handle! So, inevitably, a breadth of techniques for limiting the bandwidth of digital signals prior to transmission materialized, ensuring that practical bandwidths could be achieved while guaranteeing the ones and zeros in the data stream could still be discerned at the receiving end.

A metric started to surface as important: data rate, that is, the number of bits of information that can be transmitted or received per unit of time (usually per second). We wanted it higher and higher (and we still do). Shannon and Hartley, with their homonymous theorem [6], would come to tell us to hold our horses: the theoretical upper bound on the rate at which data can be communicated at an arbitrarily low error rate through an analog channel subject to noise is constrained by the bandwidth of the channel, the average received signal power over that bandwidth, and the average power of the noise and interference over that same bandwidth. The theorem makes it very visible that such rates are painfully governed by the signal-to-noise ratio (SNR). In time, our engineering predecessors would burn their brains devising techniques to squeeze more and more out of a now contested and noisy radio spectrum. This included techniques to improve the SNR (pumping out power is not always a practical solution) with the aim of achieving lower and lower bit error rates (BER). Antenna technology improvements combined with the appearance of a multitude of coding techniques would allow radio engineers to reach the same BERs at lower SNRs, making their designs simpler.
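To put rough numbers on the Shannon-Hartley bound, here is a back-of-the-envelope sketch; the bandwidth and SNR below are invented for illustration.

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_linear):
        # Shannon-Hartley channel capacity: C = B * log2(1 + S/N)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Hypothetical downlink: 5 MHz of bandwidth at 10 dB SNR
    snr = 10 ** (10 / 10)   # 10 dB expressed as a linear power ratio (10x)
    print(round(shannon_capacity_bps(5e6, snr) / 1e6, 1), "Mbps")  # ~17.3 Mbps upper bound

No modulation or coding scheme, however clever, can push past this bound; better codes only let real links get closer to it.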

[5] In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions.
[6] https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem.


Spectrum use sophistication wildly kicked in. It became understood that we could also partition the spectrum of a communications channel into a series of non-overlapping bands called subcarriers (a technique called frequency-division multiplexing, or FDM). And now the concept of orthogonality would find a great application: remember we said before that a digital signal's spectrum is described by a sinc function? Well, if we choose the frequencies of contiguous sub-bands wisely, the individual peaks of the subcarriers' sincs line up with the nulls of the other subcarriers. A clever, constructive use of interference. Such overlap of spectral energy still gives us the ability to recover the original signal, which is the foundation of Orthogonal Frequency-Division Multiplexing (OFDM), a method to convey data using a set of orthogonal subcarriers at lower data rates, as opposed to modulating a single carrier at a very high data rate. OFDM would become the foundation of Wi-Fi (IEEE 802.11a/g/n/ac), 4G LTE, and 5G mobile communication technologies.

Fast-forwarding to the present day, wireless networks are ubiquitous. The mobile phones we carry in our pockets continuously exchange data with cell towers at high data rates, and the towers operate antenna arrays equipped with techniques to track users spatially, serving a large number of moving targets such as smartphones and connected devices as they move around cities, across the globe.

Note that, thus far, we have not talked at all about what is in the data being transmitted. This is intentional. Wireless links can be thought of as onions of sorts, in the sense that they are designed in layers. So far, we have been strictly talking about the lowermost layer: the physical layer. This is about electric signals traversing a transmitter, through an antenna that emits them as a shower of radio quanta propagating towards a receiver, without caring about what the transmitted bits mean. Engineers would spend a long time defining different formats and protocols for the data to follow. Such protocols address important needs such as addressing, error handling, automatic retransmission requests, forward error correction, fragmentation, virtual channel multiplexing, and many other features. Still, at the end of the day, the fact is that only bits are communicated. We will expand on this in Sect. 6.2.3.

But how does space fit into all this? Space hasn't grown unaware of the technological progress of radio engineering over the last century. Radio links have been, for obvious reasons, the weapon of choice for transferring data from satellites to the ground. But the space industry seems to show, in terms of the state of the art, two different faces: SATCOM and Earth Observation.

On the SATCOM face, the level of sophistication in spectrum exploitation is very high, using advanced frequency-division or time-division multiplexing techniques to squeeze as many customers as possible into a single communications satellite. Called multiple access, this allows several carriers from several earth stations to access a SATCOM's antenna. To serve its customers, the payload of a SATCOM satellite incorporates a repeater consisting of one or more channels, called transponders, operating in parallel at all times on different sub-bands of the total bandwidth used. SATCOM networks can be based on a single-beam or a multibeam antenna payload. With a single-beam payload, the carriers transmitted by earth stations access the same satellite receiving antenna beam, and these same earth stations can receive all the carriers retransmitted by the same antenna.


The number of users that can be served with a single SATCOM satellite, and with only one parabolic antenna, is mesmerizing.

On the Earth Observation face, the level of sophistication in spectrum use decreases a bit. Earth observation satellites still predominantly use single-carrier channels and techniques, somewhat "classic" modulation schemes and protocols, and limited or no beam-forming or beam-scanning capabilities. In Earth Observation missions in LEO, the wireless links are mechanically steered: the satellite moves its body-mounted antenna to track a large ground-based antenna, and the ground station antenna, in turn, tracks the satellite.

Satellites, irrespective of SATCOM, EO, pure science, or any other application, are about data. No satellite exists for the sake of being up there isolated, floating alone in space. Satellites are always part of a mission, and missions are about acquiring, transporting, forwarding, or downlinking various kinds of digital data, be it health status telemetry, internet traffic, IoT transactions, deep space astronomy, or a hyperspectral image of some location on the ground. Such bits and bytes need to move fast, because someone, somewhere is eagerly waiting to make sense of them. Augmented with other technologies like lasers, satellites are becoming nodes in a global-area wireless network. Nothing really stops us from integrating satellites into our ground-based wireless networks, including our modern mobile networks.

6.2.1 Mobile Networks and Satellites

Mobile communications have evolved from generation to generation, adding better and better capabilities, and the trend is far from over. The first two mobile generations aimed to ensure the efficient transmission of voice. Newer generations added more digital technologies, bandwidth-efficient modulation schemes, and a smarter use of an increasingly contested spectrum, allowing faster data rates, more secure radio access technology, and a breadth of protocol layers ready to carry Internet Protocol (IP) datagrams. This, in turn, unlocked the possibility of merging mobile and global networks seamlessly, giving end users the possibility of enjoying the same services and applications they used on their computers, now from their smartphones, without compromising on quality.

The latest deployed generation of mobile communications, the fifth generation or 5G, incorporates capabilities that were unheard of in previous generations. To exploit that potential, a new radio access technology known as NR (New Radio) was devised.


The key innovative factor of defining a brand-new radio access technology meant that NR, unlike previous evolutions, was not restricted by a need to retain backwards compatibility. This permitted rolling out a set of key use cases:

- Enhanced mobile broadband (eMBB), the most straightforward evolution from previous generations, enabling larger data volumes and a further enhanced user experience.
- Massive machine-type communication (mMTC), providing services characterized by a massive number of devices, such as remote sensors, actuators, and monitoring equipment. Key requirements for such services include ultra-low device cost and ultra-low power consumption, allowing for extended device battery life of at least several years. Typically, each device consumes and generates only a relatively small amount of data, so support for high data rates is of less importance.
- Ultra-reliable and low-latency communication (URLLC), with services targeted at very low latency and extremely high reliability. Examples are traffic safety, smart cities, automatic control, power grids, and factory automation.

Although 5G was deployed several years ago (Release 15 [7]), it keeps evolving. The newest evolution of 5G (Release 18), called 5G Advanced, will add support for new applications and use cases. 5G Advanced is expected to bring significant enhancements around smarter network management by incorporating artificial intelligence and machine learning techniques for beam management, load balancing, channel state information feedback, improvements in positioning accuracy, and user equipment network slicing. 5G Advanced also envisions low-latency audio and video streaming services aimed at extended reality (XR), along with a more energy-efficient use of network resources, and Deterministic Networking (DetNet [8]) capabilities to ensure deterministic data paths for real-time applications with extremely low data loss rates and packet delay variation.

More importantly, recent releases of 5G have been progressing on integrating satellite communications with 5G NR techniques, under the name "non-terrestrial networks", or NTN. The study of non-terrestrial networks includes identifying NTN scenarios, architectures, and use cases by considering the integration of satellite access into the 5G network, including roaming, broadcast/multicast, secure private networks, etc.

[7] 3GPP uses a system of "Releases" which provide developers with a stable platform for the implementation of features at a given point and then allow for the addition of new functionality in subsequent Releases.
[8] Deterministic Networking (DetNet) is an effort by the IETF DetNet Working Group to study implementation of deterministic data paths for real-time applications with extremely low data loss rates, packet delay variation (jitter), and bounded latency, such as audio and video streaming, industrial automation, and vehicle control.


The synergy between 5G and satellites is neither mere speculation nor hype, but tangible technical traction towards integrating space technology and mobile communications. While 5G Advanced is about adapting the established generation to new incremental use cases, 6G (Release 20) is natively designed for the human digital needs of the next decade. The sixth generation is already in the making, coordinated by the 3rd Generation Partnership Project (3GPP [9]), the standards development organization behind 6G's initial research into enabling technologies, the definition of requirements, the technical steering, and the identification of use cases: an activity which is already ongoing and will span the next half decade or so, refining the architecture and commencing implementation. The core driving factors for 6G will revolve around enhancing human communication, including immersive experiences, telepresence, and multimodal collaboration and interaction. 6G will also aim to enhance machine communication, focusing on autonomous machines and vehicles capable of sensing their surrounding environment in real time (the network as a sensor). And 6G will provide key enabling services, such as hyper-precise positioning, mapping, and smart health.

Satellites and humans carrying smartphones have more in common than meets the eye. Both are moving nodes in adaptive, time-variant networks. Unlike us humans, who roam around cities following rather complex adaptive patterns, satellites describe more deterministic paths in orbit, and by choosing their orbital geometry carefully, connected constellations in Low Earth Orbit (LEO) can be deployed to achieve global coverage with low latency and smaller propagation losses. Today, almost half of the world's population still lives in areas that do not have basic connectivity services. Non-terrestrial networks can provide affordable and reliable broadband services for areas where mobile operators find no commercial feasibility in building terrestrial networks. By integrating different non-terrestrial network systems together, such as LEO satellites, unmanned aerial vehicles, and high-altitude platforms, non-terrestrial networks can be easily rolled out, connecting people through various devices such as smartphones and laptops, and helping sense and monitor critical infrastructure in a secure and power-efficient manner.
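A back-of-the-envelope sketch shows why LEO helps with latency. The altitudes are typical values, and the calculation deliberately ignores processing, queuing, and any slant-range geometry.

    C_LIGHT_KM_S = 299_792  # speed of light in vacuum, km/s

    def one_way_delay_ms(altitude_km):
        # Idealized nadir pass: the signal travels straight up (or down) once
        return altitude_km / C_LIGHT_KM_S * 1000

    print(round(one_way_delay_ms(550), 1))     # LEO at ~550 km: ~1.8 ms
    print(round(one_way_delay_ms(35_786), 1))  # GEO at ~35,786 km: ~119.4 ms

Two orders of magnitude separate the propagation delays, which is why conversational, latency-sensitive services favor LEO constellations over geostationary satellites.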

6.2.2 Lasers in Orbit

A reality most Earth Observation companies face every day is data bottlenecks. Satellites generate lots of data: a single acquisition from a camera or a radar can account for several gigabytes of raw data, creating terabytes of payload data per day. According to the European Space Agency (ESA), the Sentinel-1 radar satellite produces approximately 10 terabytes (TB) of data per day, while the Sentinel-2 multispectral instrument generates around 20 TB per day [10].

[9] See more here: https://www.3gpp.org/.


Another example is Landsat 8, a joint mission of NASA and the United States Geological Survey (USGS), which captures images of the Earth's surface at a spatial resolution of 30 m. Landsat 8 generates approximately 700 scenes per day, each covering an area of about 185 km by 185 km. Each scene contains approximately 700 megabytes (MB) of data, which adds up to roughly 500 gigabytes (GB) per day [11]. But the real deal is how to make sure this amount of data gets into the right hands as fast as possible. On-board storage is not infinite, and more importantly, customers have problems waiting to be solved with the help of geospatial data (a rough feel for the downlink arithmetic follows at the end of this passage).

As we have described above, space systems have relied on radio links to transfer payload and housekeeping (that is, health status) data to ground stations and end users. And we also saw the dark side of radio links: interference, noise, and spectrum sharing. These are all somewhat intertwined, so let's unpack this a bit more. There is no such thing as a true point-to-point radio link, meaning one ideal transmitting antenna sending electromagnetic photons to be exclusively captured by a distant receiving entity. In reality, there are scattered photons all over the place. Statistically, a substantial number of photons will arrive at the intended receiving entity and the signal will hopefully be reconstructed (if the receiving entity is active and capable of doing so), but what happens with those nomad radio photons that went elsewhere? That is the key here. We humans have managed to create billions of radiating artifacts around the world, all of them scattering photons outside their ideal boundaries, creating a soup of radio waves wandering around, getting where nobody called them and knocking on the doors of the wrong antennas. As we also described before, there are human-made transmitting devices which are not purposely meant to be radio transmitters: a car engine starting spreads radio energy across many different bands with various energies; such radio energy is a by-product of the workings of the engine's internal electric devices, such as spark plugs and electric motors.

Transmitting and receiving antennas have their own spatial preference when it comes to collecting radio quanta (they collect more from some directions than others), and such preference is never "laser focused" (pun intended, I guess?). For any receiving antenna collecting "right" and "wrong" electromagnetic energy, we can call the former signal and the latter noise. The proportion between the two is the already familiar signal-to-noise ratio.

Due to the spectrum coordination needs we talked about in the previous section, every satellite carrying a radio transmitter requires approval from the International Telecommunication Union, or ITU. Such coordination does not come without its dose of bureaucracy. As more and more actors come into the space industry, with shorter timelines and the determination to get to market rapidly, the radio licensing process can be slow and cumbersome.
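Picking up the storage-versus-downlink point from above, here is a minimal sketch of the bottleneck; every number is invented but plausible for a LEO mission with a single ground station.

    daily_data_tb = 10          # payload data generated per day (assumed)
    downlink_rate_mbps = 500    # X-band downlink rate (assumed)
    passes_per_day = 6          # contacts with one ground station (assumed)
    minutes_per_pass = 8        # usable contact time per pass (assumed)

    # Mb per day downlinked, converted to TB (divide by 8 for bytes, 1e6 for MB->TB)
    daily_capacity_tb = downlink_rate_mbps * 60 * minutes_per_pass * passes_per_day / 8 / 1e6
    print(daily_capacity_tb)    # ~0.18 TB/day downlinked vs 10 TB/day generated

With numbers like these, the satellite generates data dozens of times faster than it can drain it, which is exactly the pressure pushing operators towards more ground stations, smarter on-board processing, and higher-rate links such as lasers.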

[10] Retrieved from https://sentinel.esa.int/web/sentinel/sentinel-data-access.
[11] Retrieved from https://www.usgs.gov/media/files/landsat-8-data-users-handbook.


In summary, radio links are great and keep getting better as technology matures, but their main downsides, notably interference and compliance/regulation, are still there and will remain. The alternative? Laser communications. But why lasers? Laser communication is not regulated by the International Telecommunication Union, which means it can be used without restrictions and does not require licensing. The reason should be clear by now: laser-based links are far more focused than radio links. They have small beamwidths and incredibly high bandwidths, which means that between transmitter and receiver fewer photons go astray. How? Lasers produce a narrow beam of light in which all the composing waves have very similar wavelengths and travel together in phase (the emitted photons are "in step" and have a definite phase relation to each other), concentrating a lot of energy on a very small area.

As there is no such thing as a free lunch, adding laser communications to satellites does not come without challenges. Because laser beam widths are so narrow, the pointing capabilities of satellites carrying laser terminals must be top-notch. What is more, satellites have to be structurally optimized to carry the highly sensitive optical equipment, reducing and filtering mechanical jitter and unwanted vibrations, which calls for solid structural analysis and design to understand precisely how vibrations find their paths across the satellite's structure. Last but not least, the on-board data handling architecture must be up to the task, allowing high-data-rate frames to flow seamlessly from the optical terminal to on-board resources such as processing units and data recorders, and vice versa.

Another important property of laser communications is that the beams cannot pass through solid or opaque objects. Hence, man-made structures on Earth as well as cloud cover can cause some attenuation in Earth-to-space communications. This is not a major deterrent, however, and can be overcome with careful planning and consideration. More importantly, there are no such hurdles in space-to-space communications, and hence laser technology is ideal for inter-satellite links and other space-based communication.

There is quite some activity in the laser comms sector happening at the moment [12]. NASA demonstrated an optical link from the Moon in 2013 with the Lunar Laser Communications Demonstration (LLCD). The LLCD demonstration consisted of a space terminal on the LADEE spacecraft [13] and three ground terminals on Earth. Together, they demonstrated that it was possible to transfer up to 622 Mbps (megabits per second) of data from the Moon with a space terminal that weighs less, uses less power, and occupies less space than a comparable RF system [14].

[12] https://www.nasa.gov/feature/goddard/2021/laser-communications-empowering-more-data-than-ever-before.
[13] https://www.nasa.gov/mission_pages/ladee/main/index.html.


Commercial actors in the market are increasingly incorporating laser terminals into their constellations [15].

6.2.3 Connectivity

We discussed spacecraft configuration in Chap. 5, where we said that configuring a satellite is about defining the subsystems that will sit on board. Ultimately, combining subsystems from disparate origins inside a technical artifact like a spacecraft means connecting them and making them exchange information.

The act of communicating is remarkable. Two people can hold small talk about the weather in an elevator by making air vibrate in a layered manner: the vocal cords of one create airwaves that are caught by the ears of the other, where different frequencies and pitches, pauses, and timbres make up phonemes which eventually become syllables, words, and ultimately intelligible speech about how awfully humid the morning is going to be again. Why layered? Because you can dissect the act of communication into different parts and skins, like an onion. For starters, the air vibrations mean nothing by themselves (a glass falling to the floor and crashing to pieces creates a lot of air vibrations too, but that forms no speech); it is the collection of those vibrating patterns into coherent "packets", eventually interpreted by a human brain in context, that defines the messages being exchanged and, by stitching them together with some criterion, the act of communicating. Contracts and rules take place in the process of discerning what means what, and there are contracts at each and every skin of the onion: phonemes, syllables, words, phrases, and even semantic rules at the topmost layer, for example, being polite while the trivial conversation progresses and avoiding showing how much you want to leave the elevator.

Imagine now that the two elevator passengers spoke different languages and used a smartphone for translation: in this case, the phone acts as a passive "bridge" between them, converting from one protocol (language) to another without needing to interpret or store what is being said. The phone could also store a recorded message in one language and send it to another actor later on, something usually called store and forward. It could also happen that a message is not well understood, or gets corrupted by a sudden noise in the elevator, requiring one of the parties to ask for it to be re-sent. When we communicate as humans, there is error control at play, and we subconsciously try to maintain a signal-to-noise ratio: think about how you automatically raise your voice when talking to a friend on the metro as the train goes faster and the noise gets louder. And there is flow control: we don't convey more information than our receivers are able to take in. Ever asked a professor in university to go slower while writing something on the whiteboard?

[14] https://www.nasa.gov/directorates/heo/scan/opticalcommunications/llcd/.
[15] https://spacenews.com/all-future-starlink-satellites-will-have-laser-crosslinks/.


Too much data can easily clog channels, and throttling down the rates is a way to go.

Elements inside a technical artifact like a spacecraft or a car imitate human communication and have all the elements of the elevator small talk, only with a more steampunk touch, if you want. Elements on board modern vehicles communicate by opportunistically manipulating signals through different transmission media (air, vacuum, copper wires, or light, among others), packing things into meaningful "chunks" and defining a "dialect", a contract on how the chunks are supposed to be collected, packed, and unpacked by every actor who attends the data party. What is more, computer communication defines what to do in the presence of errors, and how to split things that are in principle too large to fit through some particular channel into smaller chunks, only to be reassembled in the right order at the receiving end.

It should be no surprise, then, that data communication is also layered. As it happens, defining the layers is somewhat arbitrary, although there is one de facto definition which includes seven layers [16], but the concept can largely be explained without rigidly labeling things one way or another. The lowest layer is called the physical layer, just like the airwaves in the elevator example. These are perturbations in some transmission medium that can eventually be converted into ones and zeros a computer can manipulate: electric pulses on a copper wire, electromagnetic waves in optical fiber or vacuum, or smoke signals, for that matter. These media perturbations are converted into digital symbols and collected by the next layer, which probably needs some mechanism to discern where a message starts and ends (in case you were distracted looking at a bird when the smoke signals kicked in) and some extra information added beforehand to ensure eventual errors are corrected, if any. But this layer is strictly a local messenger and brings no value if two networks are supposed to collaborate, or if data needs to be retransmitted; that's someone else's job [17]. This layer strictly delivers a collection of ones and zeros ("frames", in this layer's jargon) from one point to another without asking further questions. Examples in space are SpaceWire, SpaceFibre, Ethernet, MIL-STD-1553, RS-422, AX.25, CCSDS AOS, etc. Many of these also specify the physical layer.

The next wrapper, the following layer of the onion (the network layer), actually takes a peek into the bunch of ones and zeros, because it knows that some of them might be a destination address, which might mean the thing (a "packet", in this layer's jargon) is targeted at another, neighboring network. The network layer may do clever things such as routing packets through different paths according to routing rules, and fancier operations such as ensuring

[16] https://en.wikipedia.org/wiki/OSI_model.
[17] For completeness, some data link layers may include features such as Automatic Retransmission Request (ARQ) mechanisms and commutation, that is, how to use one single physical channel to convey data from multiple "virtual" channels.


packets are in the right order or fragmenting them if they are too large for what the network can handle. An interesting fact starts to appear: the more ones and zeros a layer needs to spy on (these are called headers in the jargon), the less of the channel remains available for user payload data. There are metrics which capture this ratio of useful data versus header data, for instance goodput and protocol efficiency. Examples of layer 3 in space are the Space Packet and the good old Internet Protocol (IP), with the particularity that if satellites want to talk IP, the IP datagrams most likely need to be encapsulated inside more space-specific packets such as Space Packets; but that's a technicality we'll leave out of the scope of this chapter (a toy packing example follows below).

Now the fourth layer kicks in and says: "alright, it seems we have two entities here trying to have a conversation, how can I help?". It helps by adding everything a proper conversation needs: introductions, nods of the head, yeses, ahas, okays, i-don't-understands, tell-me-mores, please-repeats. In short, all the things any normal chat has, but in a more robotic way. Mind you, adding all these facilities creates still more overhead: the need to transmit information which is not strictly used for the conversation itself. Classical examples of layer 4 protocols are TCP and UDP. From this point on, further layers distill the information in a less "machine-like" manner the higher they go, making it increasingly human-readable, often paying the penalty of losing goodput and performance in exchange for readability. For example, HTTP requests sending JSON objects back and forth are nothing but a cry for human readability kicking in at the higher layers, whereas the underlying layers could not (and should not) care less: they are comfortable working with obscure, meaningless ones and zeros. By the way, the idea that boundaries between layers are strict and well-defined is a naive utopia: there is a good deal of overlap, which creates a massive amount of confusion in the telecommunications engineering community. What is more, lower or higher layers may reappear down the road to encapsulate other layers if need be, which is the governing principle of Virtual Private Networks, or VPNs, where IP datagrams are encapsulated and encrypted in higher-layer packets.

There are also crucial matters such as authentication (ensuring you are talking to the entity you think you are talking to) and encryption (making sure data is not directly understood if it lands in unintended hands) which must be considered when establishing a communication channel that might be subject to spoofing or sniffing, as radio links between ground stations and satellites are, given that their physical layer is open for tampering [18]. It is a dangerous assumption to believe only the expected, well-behaved parties are taking part in any exchange of data. Every data communication channel must include the right amount of cyber protective measures, with "right amount" being key here, since overshooting will create too much overhead and cost.
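As the toy packing example promised above, here is a simplified sketch of building a CCSDS Space Packet primary header, following the publicly documented 6-byte layout; the APID, sequence count, and payload are invented.

    import struct

    def space_packet(apid, seq_count, payload, telemetry=True):
        # CCSDS Space Packet primary header (6 bytes):
        #   version (3 bits) | type (1) | secondary header flag (1) | APID (11)
        #   sequence flags (2) | sequence count (14)
        #   data length (16): number of payload bytes minus one
        version = 0
        pkt_type = 0 if telemetry else 1      # 0 = telemetry, 1 = telecommand
        sec_hdr = 0                           # no secondary header in this toy example
        word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
        word2 = (0b11 << 14) | (seq_count & 0x3FFF)   # 0b11 = unsegmented data
        word3 = len(payload) - 1
        return struct.pack(">HHH", word1, word2, word3) + payload

    pkt = space_packet(apid=0x42, seq_count=1, payload=b"hello, world")
    print(pkt.hex())  # 6-byte header followed by the payload bytes

Note how 6 of the 18 bytes on the wire are header, a concrete taste of the goodput-versus-overhead trade-off described above.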

[18] See how amateur radio enthusiasts decode telemetry data from SpaceX rockets: https://www.youtube.com/watch?v=74_N163HyhA&ab_channel=ScottManley.


All the satellites flying over your head as you read these lines are very talkative. Myriads of frames, packets, and data conversations are happening inside and outside spacecraft. On the inside, from computer to computer: health status data, commands, files; a lot is being sent back and forth. As we speak, bit errors are happening and, hopefully, getting corrected; but also, potentially, uncorrectable data corruptions that will make operators on the ground grab their heads. It's been said: space is hard, and up there ones and zeros can randomly swap places without notice, due to the harsh environment we described in Sect. 3.3 and the delicate nature of the semiconductors we spoke about at the beginning of Chap. 3. There is a whole lot of talking from satellites to the ground and vice versa, by means of electromagnetic waves in different bands of the radio spectrum, with antennas obsessively stalking each other. And last but not least, there are increasingly more conversations happening between fellow satellites in orbit, typically by means of lasers, as discussed in Sect. 6.2.2.

Data communication is what makes satellites the indispensable tools they are. By means of data channels of different speeds and through different media, satellites can not only access their on-board resources to keep themselves safe and operative, but also convey information to the ground and to neighboring satellites, leveraging the augmentation that network effects bring, compared to what they could achieve working in isolation. No one expects a computer not to be connected to some network these days. Satellites are following suit.

6.3 The Software: Hello World in Space

Software enjoys a strange reputation in the space industry. On one hand (call it the hero side), software tends to be the lifeline when it comes to solving problems on a troubled flying satellite, given that there is basically nothing else you can do after launch other than fiddle with the on-board software and see if you can bring things back to normal. On the other hand (the villain side), software is considered the evil of all evils by non-software people. In any troubled space mission, fingers naturally tend to point at software as the probable cause of failure, usually without tangible evidence. Any conversation next to a coffee machine between two non-software engineers (say, a mechanical and a thermal engineer) will blame either software or radiation.

What does the evidence say? In Fault-Tolerant Attitude Control of Spacecraft (Qinglei Hu, Bing Xiao, Bo Li, Youmin Zhang, Elsevier), the authors collected and analyzed spacecraft failure data from databases such as the Satellite Encyclopedia (TSE), Satellite News Digest, the Mission and Spacecraft Library, Seradata SpaceTrak, the Space Systems Engineering Database (SSED), and the Mission Failure Analysis for NASA Ames Research Center. As for the percentage of failures per subsystem type, the collected data shows that the ACS (Attitude Control Subsystem, the system in charge of controlling the orientation of the spacecraft in space) accounts for 32% of mission failures, followed by the power subsystem with 27% (Fig. 6.2, left-hand side). This should not come as a surprise [19], since the attitude control subsystem is the most complex subsystem on a spacecraft (to find out why it's so complex, see Sect. 6.4).


Fig. 6.2 Subsystem faults and types of faults. Source: Fault-Tolerant Attitude Control of Spacecraft (Qinglei Hu, Bing Xiao, Bo Li, Youmin Zhang)
Fig. 6.3 Types of faults in the attitude control subsystem. Source: Fault-Tolerant Attitude Control of Spacecraft (Qinglei Hu, Bing Xiao, Bo Li, Youmin Zhang)

When it comes to the different types of faults, they can be further classified into mechanical failures, electrical failures, software failures, and other unknown failures. Mechanical failure mainly refers to mechanical structure deformation caused by temperature changes, external forces, friction, and pressure. Electrical failure accounts for power overloads, overcurrents, short circuits, and abnormal battery performance. Software failure mainly consists of incorrect computer instructions and on-board software anomalies. As you can see in Fig. 6.2 (right-hand side), software faults account for only 6% of the total, whereas electrical faults account for 45%. In Fig. 6.3, you can see the types of faults for the attitude control subsystem alone. According to this data, there is no strong evidence that software faults are the main cause of mission losses. Or, put more bluntly: if anything, the evidence collected in that work indicates software is seldom the problem. There have been, though, software failures which have gone down in history as very resonant, such as the Ariane 5 failure [20], the Mars Climate Orbiter [21], and others [22].

[19] A veteran space engineer once said: "there are two main reasons why space systems fail: one is attitude control, the other one is funding".
[20] http://sunnyday.mit.edu/nasa-class/Ariane5-report.html.
[21] https://en.wikipedia.org/wiki/Mars_Climate_Orbiter#Cause_of_failure.
[22] https://en.wikipedia.org/wiki/List_of_software_bugs#Space.


Even if software were the problem, there is no chance a spacecraft can be launched without any software on it. Satellites are becoming more and more software-intensive, and nothing indicates this trend will stop anytime soon. So let's discuss what's so special about making and running software for space missions. For example:

- What does it take to run software on a spacecraft? How does it differ from running software on a laptop sitting in a coffee shop?
- How is software updated or changed in orbit?
- What type of languages are used for coding flight software?
- Can I run Linux on a satellite? What about Windows?
- Can I host a website on a satellite?
- What kind of skills are required for doing flight software?
- How is flight software designed?
- What does "software-defined satellites" mean?

Let's go one by one.

6.3.1 What Does It Take to Run Software on a Spacecraft?

Well, for sure you need a microprocessor and some memory. These can be discrete (individual devices) or part of the same integrated circuit (usually called a microcontroller unit, or MCU). In any case, what we call software is basically a set of instructions which are fetched from memory and put in a pipeline to eventually be executed by the microprocessor. Such instructions are typically mathematical operations of different kinds and data movements from one place to another. This doesn't really differ from the software which runs on your laptop. The main difference is that the computing resources spacecraft have tend to be more specialized compared to those of consumer electronics such as laptops. For example, spacecraft may use microprocessor architectures which consider reliability by design, adding voting capabilities to their internal registers. Because energetic particles can interact with the semiconductors processors are made of (as we saw in Sect. 3.3), a particle may alter the content of a memory cell or a status register at any given time. One single bit flipped by radiation can have serious consequences for the execution of software; for instance, an if statement may suddenly evaluate true and execute a routine which makes the satellite change orientation. Hence, adding voting capabilities ensures that, upon the upset of a memory location or internal register, the value finally used for evaluation requires three identical registers to show the same reading (a toy majority voter is sketched below). This comes with the penalty of a larger semiconductor area to implement the voting scheme.

Because bits can be upset at any time while in orbit, several techniques to mitigate this issue have been devised. For example, memory scrubbing, which consists of reading each memory location, correcting bit errors with


an error-correcting code (ECC), and writing the corrected data back to the same location. Parity bits, checksums, cyclic redundancy checks (CRCs), and other error detection and correction schemes are also used.

Computing resources on spacecraft have historically been more modest than those of ground-based computers, although this gap has been shrinking fast in recent years. On-board computers are advancing rapidly, with more resources than ever before: more storage, more floating-point operations per second (FLOPS), more millions of instructions per second (MIPS) executed. Still, on-board computers remain closer to what are called embedded systems: computers with specific roles and functionalities. Unlike a laptop, which is sold without really knowing or caring what it will finally be used for (accounting, graphic design, or music production), spacecraft computers tend to be optimized for the specific function they are to perform: attitude control, thermal control, command and data handling, star tracking, protocol encoding and decoding, payload data acquisition, etc. Each of these requires specific interfaces, specific libraries, sensors, actuators, and so on. For example, attitude control needs to perform intricate calculations involving rotations, matrix operations, sensor data fusion, and a lot more, and it has to talk to a variety of sensors with different characteristics, such as sun sensors, star trackers, gyros, and magnetometers. Microprocessors used in attitude control are typically equipped with several ALUs and data paths to perform many of these computations in a few CPU cycles.
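As the toy majority voter promised above: real designs implement this in silicon, not in software, and the register values here are arbitrary, but the 2-out-of-3 logic is the same.

    def vote(a, b, c):
        # Bitwise 2-out-of-3 majority: each output bit matches at least two inputs
        return (a & b) | (a & c) | (b & c)

    reg = 0b1011_0010
    upset = reg ^ 0b0000_1000            # one copy suffers a single-bit flip
    print(bin(vote(reg, reg, upset)))    # 0b10110010: the flipped bit is outvoted

As long as no more than one of the three copies is corrupted at a time, the voted value is always the intended one; scrubbing then repairs the corrupted copy before a second upset can accumulate.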

6.3.2 How Is Software Updated or Changed in Orbit?

By means of… software. Yes, software can be designed to help modify software, including modifying itself. Spacecraft computers are equipped with what is called a bootloader: a program whose sole function is to fetch a piece of executable code (typically application code) from some location or through some data interface, for example a serial interface, and place it where the microprocessor can find it and run it. The bootloader is invoked during the power-on sequence of the processor, and usually loads and executes the same application code, over and over. Under special circumstances, the bootloader might be commanded to load another image from some other location or take a new image through some interface.

Mind that bootloaders can also create some headaches. If something odd happens while a bootloader is loading an application image and the image gets corrupted, the application program will never execute, because it is only partially present. This is typically and somewhat colloquially called "bricking" the thing, because the device becomes as useful as a brick. And it's not exclusive to space: you can brick your smart TV, your Wi-Fi router, or your phone while updating firmware. The key is to ensure the bootloader can always recover from a failed flashing procedure.


It goes without saying that bootloaders should not be able to overwrite themselves, at least not their most critical part (there can be more than one bootloader, each calling the next).
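A highly simplified sketch of the defensive pattern described above, assuming a hypothetical two-slot flash layout; the slot names, contents, and fallback policy are invented for illustration.

    import zlib

    def select_image(slots):
        # Try the newest slot first; fall back if its CRC does not match
        for image, expected_crc in slots:
            if zlib.crc32(image) == expected_crc:
                return image   # boot this one
        return None            # stay in the bootloader and wait for a new upload

    primary = b"\x7fAPP...new build..."
    golden = b"\x7fAPP...known-good build..."
    slots = [(primary, zlib.crc32(primary) ^ 1),   # simulate a corrupted upload
             (golden, zlib.crc32(golden))]
    print(select_image(slots) == golden)           # True: falls back to the golden image

Keeping a write-protected "golden" image alongside the updatable one is a common way to guarantee there is always something valid to boot, no matter how badly an upload goes.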

6.3.3 What Type of Languages Are Used for Coding Flight Software?

In theory, any language can be used for coding flight software. Languages never make it to orbit; only machine code does (unless compilation happens in orbit, which is somewhat rare, although possible). So, if the processor being used has a compiler which can translate whatever obscure language of your choice into its machine code, you're good to go. Do you want to code your flight software in brainf*ck [23]? Go ahead. Anything is possible as long as you have a compiler. This applies, of course, to compiled languages. Because flight software has historically stayed quite "embedded" and close to the hardware, compiled languages have been more popular, notably assembly and C. These languages offer the programmer the tightest control of what happens as the software runs, while paying the penalty of lower code readability and reuse. Another compiled alternative for improving reusability is to employ Object-Oriented Programming (OOP) techniques and languages, although this claim tends to be disputed. For example, in the book Coders at Work, Joe Armstrong, creator of the Erlang programming language, says of software reusability and OOP:

I think the lack of reusability comes in object-oriented languages, not functional languages. Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

On the other hand, interpreted languages such as Python are totally possible on spacecraft. Provided the on-board computer is capable of running an interpreter, the rest comes reasonably easy. Interpreted languages tend to be relatively slower and sit at higher abstraction layers; don't ask an interpreted language to handle hardware interrupts or hard real-time deadlines. But interpreted languages are highly flexible for scripting and automating non-critical things on board, like daily maintenance tasks.

[23] Brainfuck is an esoteric programming language created in 1993 by Urban Müller. Notable for its extreme minimalism, the language consists of only eight simple commands, a data pointer, and an instruction pointer. While it is fully Turing complete, it is not intended for practical use, but to challenge and amuse programmers. Brainfuck requires one to break commands into microscopic steps.


6.3.4 Can I Run Linux on a Satellite? What About Windows?

If you can run Linux, then you must. If the spacecraft computer is capable of handling Linux, meaning it has sufficient resources and a Memory Management Unit (MMU), then Linux is your friend. Even without an MMU, Linux can be your friend. But with great power comes great responsibility: a full-blown Linux OS needs to be administered accordingly, and this may require certain capabilities for accessing and monitoring the operating system's resources. Because satellites are remote systems connected through a lossy radio link, that noisy link will be the only way to shell into the remote operating system (more on this in Chap. 8). Therefore, remote shells such as SSH (encrypted) or Telnet (unencrypted and very basic) have to run on top of radio, which requires IP stacks on top of space data link protocols such as CCSDS. Not such a big deal if the link budget is generous and the latency is small; that is, if the energy per bit sent back and forth is reasonably higher than the overall noise, and the round trip from satellite to ground is short enough for conversational protocols to work well. On high-latency links, for example in distant orbits, protocols such as TCP can be problematic, and shell access to a remote Linux-based spacecraft can be prohibitive. With a proper link established, handling the remote Linux system running on board the satellite becomes more of a sysadmin [24] task. Using Linux and IP-based links blurs the line between operating a satellite and operating a server on a network.

As for Windows: technically speaking, you could run it on a satellite. But because Windows is an operating system which heavily relies on graphical user interfaces, the only way you could possibly use it is to either remote-desktop into it with something like VNC (although, considering radio link speeds and the amount of data VNC would generate, that would be very challenging, to say the least), or put a screen on board (which needs to withstand launch, vacuum, and radiation) and point a camera at the screen while you send video frames to the ground at a low rate. I am just being polite: just don't do it. I recommend you discard the idea of running Windows on a satellite for now, but if you insist, don't go without tools like Cygwin [25] if you want to live.

[24] A sysadmin, or admin, is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially servers.
[25] https://www.cygwin.com/.

6.3.5 Can I Host a Website on a Satellite?

Of course, you can. It is pretty straightforward, provided you have a web server like nginx or apache2 installed on board, PHP installed (optional, but recommended), and some index.html file sitting somewhere with basic code like this:


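    <html>
      <head><title>Hello from orbit</title></head>
      <body>
        <!-- a minimal illustrative page; any valid HTML would do -->
        <p>Hello, World! Greetings from low Earth orbit.</p>
      </body>
    </html>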

Then, you can cURL the satellite, and the webpage should reply back. Again, the question remains how you will connect to the remote spacecraft's web server. Since web pages rely on HTTP, which in turn relies on TCP and IP, your radio link will have to be comfortable with all these protocols for the website to work and be responsive.
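For instance, assuming the on-board server were reachable at some routable address (the IP below is a hypothetical placeholder):

    curl http://192.0.2.42/index.html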

6.3.6 What Kind of Skills Are Required for Doing Flight Software?

It does require a certain level of awareness of typical satellite architectures, interfaces, and the like, but all of that can be learned. Flight software is nothing fundamentally different from embedded software. Because a satellite is a physical, dynamical system, the software developed for it must be designed to be aware of those dynamics. In short, the minimal complexity of software which controls a physical system is given by the complexity of the underlying physical system. Minimal, because software engineers can always over-engineer it. And they will, if given the chance.

6.3.7 How Is Flight Software Designed?

It's incremental work. But if you took two satellites from two absolutely different operators and somehow reverse-engineered the machine code into source code, the two flight softwares would show similarities. Every flight software needs some of these modules or building blocks: telemetry handling, telecommand handling, FDIR (failure detection, isolation, and recovery), thermal control, power control, scheduling, etc. (a toy telecommand dispatcher is sketched below). Flight software can be designed and developed from scratch (and software engineers will always try to convince


you it's the only way to go), or it can rely on existing software frameworks which already contain some of the typical building blocks. Some of these frameworks are even flight-proven and open source, which makes them very attractive for low-budget NewSpace missions or university projects with very tight schedules.
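As the toy telecommand dispatcher promised above: a minimal sketch of one such building block, with invented opcodes and handlers. Real dispatchers add authentication, argument validation, and logging on top of this skeleton.

    # Toy telecommand dispatcher: maps command opcodes to handler functions
    def reboot(_args):        return "rebooting payload computer"
    def set_mode(args):       return f"switching to mode {args[0]}"
    def downlink_file(args):  return f"queueing {args[0]} for downlink"

    HANDLERS = {0x01: reboot, 0x02: set_mode, 0x03: downlink_file}

    def dispatch(opcode, args):
        handler = HANDLERS.get(opcode)
        if handler is None:
            return "rejected: unknown opcode"   # never execute unrecognized commands
        return handler(args)

    print(dispatch(0x02, ["SAFE"]))   # switching to mode SAFE
    print(dispatch(0x7F, []))         # rejected: unknown opcode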

6.3.8 What Does "Software-Defined Satellites" Mean?

The software-defined fad has gotten a bit out of hand lately. The term is used too loosely and gets stretched to levels where it stops having meaning. But in the strict sense of the term, when we say software-defined anything, we mean "something which has traditionally been done by means of mechanical or electrical devices but is now done by means of software". Sounds obvious, but it's not. For example, take software-defined radio [26]. Historically, modulating and demodulating, mixing, and filtering radio signals has been performed using discrete (active and passive) devices. With the progress of semiconductor technology in areas such as analog-to-digital converters and digital signal processing, it is now possible to move such operations into software. How so? Well, manipulating electric signals is ultimately a mathematical problem. Filtering a signal, mathematically speaking, is done by computing the convolution of said signal (or, in the DSP [27] jargon, a sequence) with the impulse response of the filter we want to use; or, equivalently, by multiplying their spectra (the signal's and the filter's frequency response) in the frequency domain. A superheterodyne receiver, then, is nothing but an arrangement of mathematical manipulations of the input signal. So, if one manages to digitize the signal as early as possible in the signal chain (that is, as close to the antenna as you can), one can perform the math operations fully in the digital domain and still obtain the same results (a tiny numerical example follows below).

Along the same lines, spacecraft have historically had certain things done by hardware elements, for example communication protocols like CCSDS [28]. Traditionally hardwired into highly specialized chips like ASICs and FPGAs, these stacks can now perfectly well sit in software, which adds a good deal of flexibility when it comes to adding security layers and applying patches in the case of cybersecurity threats. Software-defined protocols allow resources to scale up or down depending on mission requirements. For data-intensive space applications, software-defined techniques allow mission configurations to be readjusted post-launch, providing a more cost-effective way to optimize operations once the spacecraft faces the elements.

Although there are some particularities about running software inside a metallic box orbiting at hundreds of kilometers above the ground while flying at thousands of meters per second, software is still software. Software
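As the tiny numerical example promised above: "filtering as math" with a crude moving-average low-pass FIR filter. The sample rate, tone frequencies, and tap count are invented for illustration.

    import numpy as np

    fs = 48_000                            # sample rate, Hz (assumed)
    t = np.arange(0, 0.05, 1 / fs)
    signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 9_000 * t)

    # A 16-tap moving average: a crude FIR low-pass filter
    h = np.ones(16) / 16                   # the filter's impulse response
    filtered = np.convolve(signal, h, mode="same")  # filtering = convolution

    # The 9 kHz component is strongly attenuated; the 440 Hz tone survives
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
    print(freqs[spectrum.argmax()])        # ~440 Hz dominates after filtering

No capacitors, no inductors: the same filtering effect a hardware network would produce, obtained purely by arithmetic on samples. That is the essence of the software-defined approach.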

26. https://en.wikipedia.org/wiki/Software-defined_radio.
27. DSP = Digital Signal Processing.
28. CCSDS is a collection of recommended standards issued by the Consultative Committee for Space Data Systems which specify several layers of a protocol to command and process telemetry data in space missions.


Software itself remains unaware of the altitude at which it's executing. Variables are still variables, and if statements work the same way as they do in office space. The software engineers are the ones in charge of adding the correct safeguards so the flight software executes seamlessly while flying alone in space, while the underlying hardware is being chastised by the space environment (see Sect. 3.3). Irrespective of where the software runs, the work follows the same flow: defining data structures and their relationships at runtime. You can over-engineer it by adding fancy methodologies and piling up dependencies, but flight software, at the end of the day, still boils down to data and how it behaves over time. There's a phrase attributed to Linus Torvalds which goes: Bad programmers worry about the code. Good programmers worry about data structures and their relationships.

6.3.9 Bugs and Glitches in Orbit

Recently, I was reading an article from one major manufacturer of high-end cars in which the company was humblebragging about their “zero-bug policy” in their software development process. A claim that captured my attention. While the article started bold and promising, it quickly mutated into some realistic optimism:

With 'zero bug', they actually meant zero bugs that *are known*. Well, following that reasoning, all software that we develop has zero bugs…if we ignore them. One thing in the article was spot on, though: bug-free code and software production is impossible. Firstly, because we are fallible. And secondly, because, as software developers and testers, we can never foresee all the scenarios and situations our software will encounter once released. And some of those unforeseen scenarios might29 (will) cause bugs to reveal themselves. Therefore, as much as we spend on complex testing suites or the best testing talent possible (which are good investments, by the way), we can never fully cover all potential situations.

29. "Anything that can go wrong, will, and at the worst possible moment."—Finagle's Law of Dynamic Negatives.


Take commercial aircraft, with millions of lines of code running on board. Airliners are tested very thoroughly. Still, nasty bugs happen. See the case of Flight 2904,30 where an A320 overran the runway at Okęcie International Airport in Warsaw, Poland, on 14 September 1993. Although the aircraft's failure to brake in time was due to a combination of factors—including human factors—the accident involved certain design features of the aircraft. On-board software prevented the activation of both ground spoilers and thrust reversers until a minimum compression load of at least 6.3 tons was sensed on each main landing gear strut, preventing the crew from achieving any braking action from those two systems before this condition was met. But because of the wind conditions present that day—which affected the way the A320 approached the runway—the aircraft stood for a few seconds on only one side of the main landing gear; precious seconds of braking time lost, which directly contributed to the incident. A scenario that clearly falls way outside the typical testing strategy for this kind of system. No one knew that scenario existed, until it did.

In fact, we are not strictly talking here about these kinds of very visible software bugs. So far we spoke about the kind of software bug that always takes the spotlight because it causes considerable losses and visible, sustained malfunction; colloquially called hindenbugs in some of the literature.31 But there is a more special and elusive kind of software bug: the glitch. A glitch is a slight and often temporary fault. Hard to reproduce, at times almost imperceptible from the operator's point of view, glitches are the quintessential headache of any programmer. Now you see it, now you don't. Let alone if the glitch turns out to be a heisenglitch,32 a type of glitch where observation changes the behavior: looking for the glitch changes the results of the program, making the glitch not reveal itself.

Software glitches are often opportunistically invoked by engineers as the potential cause of any mysterious occurrence when something happens—or someone claims that something has happened—which cannot be easily explained.

—Weird. Maybe it was just a glitch.

"Just" a glitch? Story time. Eons ago, while observing the telemetry dashboard of a satellite—a screen full of time-series data represented in a myriad of colorful line plots depicting the trends of all relevant variables monitored on board—I noticed something odd. Some plots were showing brief "jumps" in their values, only to come back

30. More details about this incident: https://en.wikipedia.org/wiki/Lufthansa_Flight_2904.
31. A Hindenbug is a catastrophic bug that destroys data and may also shut down systems or cause other major problems. It is general IT slang for a major bug that does more than just create a nuisance or an annoyance for users.
32. In computer programming jargon, a heisenbug is a software bug that seems to disappear or alter its behavior when one attempts to study it. The term is a pun on the name of Werner Heisenberg, the physicist who first asserted the observer effect of quantum mechanics, which states that the act of observing a system inevitably alters its state. In electronics, the traditional term is 'probe effect', where attaching a test probe to a device changes its behavior.


to the trend it was following right before the jump. It was most visible on plots that were smooth sine waves, for example quaternion elements, where the curve could clearly be seen being very briefly interrupted by these abnormal hiccups. The glitch was in some way "harmless" (only a visual artifact), which contributed to the fact that no one had complained about it, but it clearly indicated something was wrong. Which is the very nature of many software glitches: they might not be causing a major problem, but their presence indicates something is not nominal. Glitches must be treated seriously, ruthlessly chased down and sorted out.

Anyway, I collected a good amount of telemetry data and went home to analyze it in depth. The hiccups were seen on most values coming from one particular subsystem connected to the on-board computer by means of a serial port. So the problem was more or less circumscribed to that particular subsystem, which was good news. Said subsystem packed all its telemetry into a fixed-length binary chunk of something like 4 kilobytes, and the unpacking of the frames was carefully done by the telemetry processing on the ground, converting all values into "human readable" units. Then, while mindlessly browsing through the binary data, I do not really know how, I noticed that one of the 256 possible byte values sent over said serial channel was oddly missing. There was no occurrence whatsoever of the byte valued 13 in decimal (0x0D in hex) in the several gigabytes of telemetry I had taken home with me. Definitely strange. I had a lead. What is more, the missing byte was an important value used in serial terminals as carriage return (CR). Carriage return is the ASCII character sent to the terminal when you press the enter key on your keyboard while connected to a serial line, tracing back to the times when typewriters roamed the earth. It looked as if something was replacing the missing carriage return character with a newline character (0x0A). Then it all came together rather quickly by checking the configuration of the serial port attributes (Fig. 6.4). Bingo. It turned out the UART peripheral inside the on-board computer CPU was doing some very inconvenient character translations by default, and disabling this "feature" made the glitches disappear forever.

Fig. 6.4 Funny little configuration to have by default
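For the curious: on a Linux-style on-board computer, this kind of translation typically lives in the terminal line discipline of the serial driver. A minimal sketch of the fix in Python—the device path and the exact flag set of the original system are assumptions, since the actual configuration is not reproduced here:

```python
import termios

# Hypothetical serial device; the real port name depends on the hardware.
fd = open("/dev/ttyS0", "rb", buffering=0).fileno()

attrs = termios.tcgetattr(fd)
# attrs[0] holds the input-mode flags (iflag). ICRNL maps every incoming
# carriage return (0x0D) to a newline (0x0A) -- exactly the kind of silent
# translation that erased all 0x0D bytes from the telemetry stream.
# INLCR and IGNCR are related translations worth disabling for binary data.
attrs[0] &= ~(termios.ICRNL | termios.INLCR | termios.IGNCR)
termios.tcsetattr(fd, termios.TCSANOW, attrs)
```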


It's hard to explain how ecstatic one feels when catching one of these things. This story in particular had the advantage that the glitch was quite visual, and it appeared repeatedly thanks to the "persistence" and trail the line plots provided. Some other glitches are not as simple to spot or to reproduce, and they might also be what some of the literature calls a mandelbug,33 a bug so complex that it occurs in a chaotic or nondeterministic way. Consequently, a mandelglitch is the hardest of them all: hard to detect, brief, chaotic, quasi-random. At times, mandelbugs may show fractal behavior by revealing more bugs: analyzing the code to fix a bug only uncovers more bugs.

All in all, software glitches are a serious problem, and they deserve the same attention as their more famous cousins, the bugs. In fact, there should not be separate terminology: a glitch is a bug, regardless of how brief or harmless it might appear to be. In general, software does exactly what it is told to do. The reason software fails is that it is frequently told to do the wrong thing. That is not because of programmers' incompetence but because of one of the biggest, if not the biggest, challenges in software development: the barrier between the way we create software and the way it behaves. Creators like carpenters or sculptors benefit from an immediate connection to what they're creating. Programming violates this principle; the programmer, staring at a page of text, is abstracted from whatever it is they are actually making. That's why software systems are hard, and becoming increasingly harder, to understand.

6.4 The Orientation: Attitude Control

Imagine yourself in an empty room. Now picture you are somehow able to stay suspended in the air, magically floating. Imagine that someone exerts a gentle torque on you, and this torque makes you start spinning. This mysterious person leaves the room right after that, turning off the light. You can't see or hear anything as you spin in this dark, weird place. How long would it take for you to lose track of your orientation? Which way is up and which is down? Right or left? How quickly would you get absolutely disoriented about where the door, the ceiling or the floor are? Most probably, not too long. We constantly rely on cues around us to situate ourselves in the world we inhabit. Provided all our senses are available and healthy, we use them heavily, on a constant basis, to navigate our environment. Visual, olfactory, acoustic, and even thermal hints are used to understand where we are located and in what direction our heads are oriented. Another exercise, this time a bit less eccentric than the dark room. Pick a chair in your office, cover your eyes and sit, not before asking a colleague to

33. https://ieeexplore.ieee.org/document/4085640.


give you a nice spin. But before the spinning kicks in, carefully observe where exactly you are looking in the office: pick some feature from the landscape—a column, a fern, a poster on the wall, whatever. As you spin—remember, your eyes are covered—try to keep track of the chosen object or spot and how it is supposed to move around as you go, and try to estimate where your stopping position will be with respect to that reference point. Chances are you will not be able to precisely predict how the chosen spot has moved during your spinning trip, and when your eyes are uncovered, you'll be more than surprised at how different the final orientation turned out to be.

Well, all these somewhat bizarre examples illustrate what spacecraft must deal with in terms of orientation while in space. The orientation of a spacecraft in three-dimensional space is more specifically called attitude. While in space, spacecraft are floating in a "dark room" (which is not totally dark, as we shall see). Satellites are spinning one way or another while up there. Spinning is a very natural motion in space. Here on the surface of the earth we deal with friction forces, which dramatically change the way we handle and experience changes in orientation. Here, it somehow feels that motion is the exception, not the rule: you have to do some work to move, as if the "normal" state were to be motionless. Well, that's a bit of an illusion that friction forces create. The philosopher Aristotle was confused by this as well, and it heavily influenced his perspective on physics. Aristotle's view of motion34 was that it required a force to make an object move or, more simply, "motion requires force". After all, if you push a book, it moves. When you stop pushing, the book stops moving. But Aristotle was wrong. Very wrong. Still, the reign of Aristotelian physics lasted almost two millennia. Upon the work of many pioneers such as Copernicus, Tycho Brahe, Galileo, Descartes and principally Newton, it became generally accepted that Aristotelian physics was neither correct nor viable. Perhaps somewhat counterintuitively, in the way physics actually works, if you push a book, it will keep on moving, and moving. And if you were set to spin on an office chair without friction forces, you would spin there in aeternum. The guiding principle here is the conservation of momentum, linear and angular respectively, as described beautifully by Newton's First Law of Motion. Ubiquitous external friction forces—the product of surfaces coming in contact with each other and dissipating energy in the process—are the reason why everything feels different from that, and feels closer to what Aristotle believed. We cannot blame the guy. You almost don't think about these friction forces unless they are absent: for instance, during some particular days of the beautiful Finnish winter, when it is so icy everywhere that you almost don't get to choose where to go anymore; the slopes along the way decide for you (Fig. 6.5). In space, such friction forces almost do not exist or, more accurately, they are considerably weaker than here on the surface of the earth. Therefore, spacecraft

34. More about Aristotle's flawed take on physics: https://en.wikipedia.org/wiki/Aristotelian_physics.


Fig. 6.5 Good luck running for the tram in Helsinki without friction forces (Creative Commons)

must perform a set of, from a terrestrial perspective, weird operations to keep control of their orientation in space. We stated above that spinning happens very naturally in space. How so? Well, what does it mean to spin in the first place? Spinning means rotating around an axis passing through some internal point of interest of the body. For generic bodies—let's assume rigid bodies for simplicity, as opposed to flexible, wobbly things—this point is called the center of mass. It should be quickly clarified that spinning is a subset of a more general motion called rotational motion. Rotational motion is a circular motion around some arbitrary point which can perfectly well be located outside of the body, for example the earth rotating around the Sun. In short, a body will spin around its center of mass either due to an external force applied at a non-zero distance from that center, or if its original state of rotation (or the lack thereof) is changed internally in some way (Fig. 6.6). Spacecraft are bodies with arbitrary form factors and uneven mass distributions; they are not perfect, ideal spheres like cows are.35

35. The spherical cow is a humorous metaphor for highly simplified scientific models of complex phenomena. Originating in theoretical physics, the metaphor refers to physicists' tendency to reduce a problem to the simplest form imaginable in order to make calculations more feasible, even if the simplification hinders the model's application to reality.


Fig. 6.6 FA and FB are forces applied at different distances from the center O, creating different torques. Credit public domain

As they float (spacecraft, not cows) in the "dark room" in the presence of very weak friction forces, there are different torques exerted on them which make them change their orientation over time. Such torques come from various sources, both internal and external. Let's first check the external ones.

One external torque which comes up early in a spacecraft's lifetime is the torque applied by the rocket delivering it to space. In separation systems, the push typically comes from springs loaded specifically for that purpose. If the push (force) happens at a non-zero distance from the spacecraft's center of mass—which means a torque, if you remember the definition above—there will be spinning. This spin is usually called the tip-off rate. Other external sources of torque that can make a spacecraft spin are atmospheric drag (thin remnants of the atmosphere can persist up to several hundreds of kilometers in orbit, creating an aerodynamic torque which depends on the object's geometry and size/area) and the gravity gradient. The gravity gradient disturbance is caused by variations in the gravitational force experienced by the spacecraft due to the non-uniform distribution of mass in the nearby environment, and it can vary along the flight path. As the spacecraft moves through space, its distance and position relative to other massive objects—planets, moons, other spacecraft—change, and with them the gravitational force it experiences. The spacecraft's orientation also matters: the gravity gradient disturbance is strongest when the spacecraft's longest axis points towards or away from the massive object causing it, and it changes as the spacecraft changes orientation relative to that object. In short, the gravity gradient disturbance varies with the spacecraft's position and orientation relative to the massive objects in its environment.

There can also be radiation pressure associated with the interaction of electromagnetic radiation with extended body surfaces, which of course includes light pressure, associated with the impact of light photons against surfaces. And for planets where there is a magnetic field, such a field can also interact with electric


currents circulating through the wires and devices of the spacecraft's electronics, creating a disturbance torque in a manner similar to how a compass works. As can be seen, there are multiple external sources of torque trying to make our poor spacecraft spin round like a record, and if we want it to keep looking in a particular direction—for example, to take a photo of a specific location or to orient an antenna for communication purposes—something has to be done. This yields the floor to internal torque sources, used to counteract the external ones.

The way a spacecraft copes with the fact that the universe seems to be conspiring to make it spin out of place is by counteracting or balancing those external effects with equal but opposite effects. Just like a book stays still on a table because the gravitational pull downwards—its weight—is perfectly balanced by the reaction force provided by the table, a spacecraft can be made to stay still, pointing to a place of interest, if all its torques are balanced; this means there is net zero torque acting on the body in some specific frame of reference. But before talking about actions, more specifically about actuators (the devices on board responsible for creating internal torques), how do spacecraft know where they are looking, so they can correct their orientation if they happen to be looking in the wrong direction? By means of sensors, and some math.

We said before that space is similar to the "dark room" in the example which opened this section. This is not entirely true. There is—luckily—light in space, and such light comes from stars. Stars are, simplifying a bit, nuclear fusion reactors fusing hydrogen. In that process, a good deal of electromagnetic radiation is generated and radiated, and some of that radiation can be captured by sensors—for example our eyes, which are like passive antennas—making it possible for us to enjoy a starry night. Ancient navigators realized that, by eyeballing the stars plus some measurements and calculations, they could work out where they were. The space industry took note of this and developed a set of very smart "eyes": cameras which observe the stars and, by contrasting the current observation with a stored internal catalog of stars and their possible rotations, can infer where the lens axis is pointing with respect to a frame of reference. And if you happen to know how the camera is oriented with respect to the spacecraft body, you can then understand where the spacecraft is looking. These cameras are called star sensors, or star trackers. Star sensors usually run fancy algorithms to compute the multiple orientations of the star images and to rule out unwanted noise sources which could be misinterpreted as stars. In short, star trackers tend to carry computers and run software algorithms to accomplish such tasks. A problem quickly pops up: star sensors must be able to see a clear sky to work out the orientation. If the spacecraft happens to be spinning somewhat out of control and the star sensor cannot see a proper star field, it might not be possible to use it to correct the situation. Additionally, star trackers tend to be susceptible to SEEs, more specifically SEFIs (see Sect. 3.3.1.1 to know more). So, there are other types of sensors which can come to help when star sensors go blind or hang.
Spacecraft can measure the surrounding magnetic field—if there is any field present in the first place—and use that information to assess how their orientation relates to said magnetic field, which can be used to


infer rates of rotation. The downside of magnetometers (the name of this sensor) is that they are highly sensitive to magnetic fields created by the spacecraft itself: how can a magnetometer discern between "good" and "bad" magnetic fields? It can't. There are other ways of measuring spin rates, for example gyroscopes. There are, interestingly, very different gyroscope technologies: from those exploiting the Coriolis force with very small micro-mechanical vibrating devices, up to more sophisticated ones using interferometry and lasers, splitting a beam of light and making the two beams follow the same path in opposite directions, in a ring. On return to the point of entry, the two light beams are allowed to exit the ring and undergo interference. The relative phase of the two exiting beams is shifted according to the angular velocity of the apparatus, which can be sensed and reported. The cost difference between gyro technologies can be abysmal, and a key factor during the development stage.

Spacecraft are also very interested in knowing where the Sun is, for both orientation and, ultimately, survival purposes. The reader might quickly guess why: power generation. Solar power remains the most popular energy source for spacecraft. Remember the exercise with the spinning office chair and how easily you can lose track of where everything is as you spin? The same happens to spacecraft. Too much spinning and they may lose track of where the Sun or the Earth is, and that is definitely a problem for keeping batteries charged. So, how to know where the Sun is? Readers may say: well, there are usually solar panels on board any spacecraft, so why not use them to guess where the Sun is, since they are, by design, sensitive to solar light? Fair. But wrong. That would mean mixing cause and effect. Solar panels are the ones to be guided to look at the Sun, not the ones guiding—mainly because solar panels look in very specific directions, hence from the whole sphere of possible rotations of a spinning spacecraft, the direction in which the panels are oriented is very particular, so to speak. But, in fact, the reader was not entirely wrong. Solar cells are indeed involved in searching for the Sun, just not the ones in the solar panels. Typically, spacecraft are equipped with tiny solar cells spread all across the body, oriented in such a way that their sensitive axes, combined, create a "sensitive sphere" of sorts. This way, the spacecraft is given "eyes all around its head", greatly simplifying the Sun-search challenge: the cells showing the highest current readings are most likely the ones looking closest to the Sun, so the spacecraft knows where to turn. This collective of tiny solar cells is called a coarse Sun sensor—coarse because the method does not provide a terribly accurate position of the Sun, but a "good enough" one, which is sufficient during contingencies. Once the panels start pointing perpendicular to the Sun's rays—as they should in order to maximize power generation—survivability is ensured and operators on the ground breathe in relief. Of course, this is a reasonable simplification of the overall process. There can be complications such as shadowing from deployable appendages or booms, albedo (reflection of sunlight off the earth's surface), etc. There are other means of sensing the Sun's presence, but those are for bigger, fancier missions.
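A toy sketch of the coarse Sun sensor idea, assuming ideal cells whose current follows the cosine of the angle to the Sun; the cell layout and Sun vector are invented:

```python
import numpy as np

# Six cells looking along the body axes; normals are illustrative.
normals = np.array([
    [ 1, 0, 0], [-1, 0, 0],
    [ 0, 1, 0], [ 0, -1, 0],
    [ 0, 0, 1], [ 0, 0, -1],
], dtype=float)

sun_true = np.array([0.6, 0.3, 0.74])   # unknown in reality; to be estimated
sun_true /= np.linalg.norm(sun_true)

# Ideal cosine-law readings: proportional to cos(angle), zero if facing away.
currents = np.clip(normals @ sun_true, 0.0, None)

# Crude estimate: current-weighted sum of the cell normals.
est = normals.T @ currents
est /= np.linalg.norm(est)

err_deg = np.degrees(np.arccos(np.clip(est @ sun_true, -1.0, 1.0)))
print(f"estimation error: {err_deg:.2f} deg")  # ~0 here; real cells are noisy
```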


All in all, spacecraft have a rich set of sensors at hand for assessing where the heck they are pointing. But there's some computing to be done. All sensors come with their own particularities, pros, and cons. Therefore, the software running on the spacecraft must use them as smartly as possible, depending on the situation on board—in other words, perform a sort of data fusion. When everything is great and nominal, some sensors might make more sense (pun not intended) than others; when things go ugly, more basic and less precise yet more reliable sensors are preferred. The software on board must handle this trade-off continuously. Mind that the positions of stars and planets can also be obtained purely computationally—the ephemerides of celestial objects can be calculated by means of reasonably simple formulas. Why take the pain of adding expensive sensors if all you need is a computer and some trigonometry? Basically, because all those computations are theoretical and require knowledge of time on board as well as orbital geometry, and that's a luxury when things go south. Space engineers design the control laws assuming that, under the worst conditions possible—for instance, tumbling randomly with no timekeeping on board and no knowledge of position—the spacecraft shall still be able to charge its batteries, detumble, and go back to business.

Now it's time to act. Once the software on board performs the computations needed to obtain a numeric representation which condenses where the spacecraft is looking—this representation tends to take the form of an attitude quaternion36—the control software can compute the amount of torque required to steer the current orientation towards the desired one, if there is a significant discrepancy between the two. The total torque computed is distributed among the available actuators on board. Let's dive into what these actuators are and what they do.

Actuators are devices which can interact with the physical environment on demand. That is, we can command an actuator to exert a force, torque or equivalent on our plant (the plant being the physical system we want to control). In spacecraft, actuators are typically reaction wheels, control moment gyroscopes, magnetorquers and thrusters—plus other, less frequently used types, like yo-yos.37 Reaction wheels are basically flywheels whose rotational speed is opportunistically changed to make the spacecraft counter-rotate proportionately through conservation of angular momentum (the space equivalent of your home drill exerting a torque on your wrist as it starts spinning). As wheels spin, they also store angular momentum, which can be used to stabilize rotations around the spin axis. A somewhat related actuator is the control moment gyroscope (CMG). A CMG consists of a spinning rotor and one or more motorized gimbals that tilt the rotor's

36. Quaternions are a type of mathematical object that extends complex numbers to four dimensions. Quaternions can be thought of as a way of representing rotations in 3D space, where the scalar part represents the rotation angle and the vector part represents the rotation axis.
37. https://ntrs.nasa.gov/api/citations/19620006811/downloads/19620006811.pdf.


angular momentum. As the rotor tilts, the changing angular momentum causes a gyroscopic torque that rotates the spacecraft. A magnetorquer is a glorified solenoid, and its principle of operation is extremely simple: it creates a magnetic dipole directly proportional to the electric current flowing through it and to the number of turns of its coil, and this dipole interacts with the surrounding magnetic field, creating a pure torque. Last but not least, thrusters. Thrusters are like small rocket engines. Rocket engines produce thrust by the expulsion of an exhaust fluid that has been accelerated to high speed through a propelling nozzle. The exhaust fluid is usually a gas created by high-pressure combustion of solid or liquid propellants, consisting of fuel and oxidizer components, within a combustion chamber. As the gases expand through the nozzle, they are accelerated to very high speed, and the reaction to this pushes the engine in the opposite direction. Thrusters are technically force-creating devices, but if conveniently placed at an offset from the center of mass they can create torques as needed.

All in all, to control the orientation of a spacecraft in space, we require a combination of devices: actuators, sensors, a computer running software that can determine the current attitude and compute the corrections needed to reach the targeted orientation (usually called the controller), and, well, the plant itself, which is the dynamical system under control. Spacecraft are, fundamentally, cyber-physical systems—a glorified term for a combination of computerized elements mixed with physical elements, with some of those elements acting as bridges between the two domains (see the dotted line in Fig. 6.7 marking the boundary between the cyber and physical domains). This also means that, because controlling a spacecraft's orientation requires a fair deal of software and computer-machine interfaces, bugs and human errors can make spacecraft go out of control: there is no way an actuator can discern whether something or someone is asking it to act legitimately or not. An actuator will act whenever requested. For instance, the ISS was recently unintentionally steered due to a software bug in a docked module.38 Sorry, a glitch!

There are myriads of books written about attitude control, and hundreds of equations related to the topic. But a good, bird's-eye grasp of the underlying principles is of paramount importance before diving into the details. Attitude control is one of the most critical subsystems on board a spacecraft—it directly impacts power, thermal, structures and mechanisms, and what have you. It almost goes without saying that, during the development phase of space systems, it is crucial to count on a variety of simulation resources to create synthetic "dark room" environments for testing and verification of the control software before launch. This is important

38. https://www.theverge.com/2021/7/30/22601752/russia-nauka-module-software-glitch-iss-space-station-nasa.


Fig. 6.7 Building blocks of a computer-based control system

because we can't easily reproduce on the ground the low-friction physics spacecraft experience in orbit, yet one must ensure the control algorithm design is sound before launch.
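To fix the idea of "computing the torque required to correct the current orientation", here is a minimal sketch of the classic quaternion-feedback proportional-derivative law. Gains, attitudes and rates are invented, and real implementations add actuator limits, momentum management and much more:

```python
import numpy as np

def quat_mult(q, p):
    # Hamilton product, scalar-first convention [w, x, y, z].
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def control_torque(q_now, q_tgt, omega, kp=0.02, kd=0.1):
    # Error quaternion: attitude of the body relative to the target.
    q_err = quat_mult(q_now, q_tgt * np.array([1, -1, -1, -1]))
    if q_err[0] < 0:
        q_err = -q_err  # take the short way around
    # PD law: push against the attitude error, damp the body rates.
    return -kp * q_err[1:] - kd * omega

q_now = np.array([0.9239, 0.3827, 0.0, 0.0])  # ~45 deg off about x
q_tgt = np.array([1.0, 0.0, 0.0, 0.0])        # target: identity attitude
omega = np.array([0.01, 0.0, 0.0])            # body rates [rad/s]
print(control_torque(q_now, q_tgt, omega))    # restoring torque demand about x
```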

6.5 The Space Sauna: Thermal Control

We are very familiar with using electronics in our everyday lives. Laptops, smartphones, TVs are all around us. We pay little attention to the fact that electronics are designed to work within narrow thermal ranges. For example, Helsinki winters remind me of this every year, when my phone (not from the highest end, but not the cheapest either) starts rebooting or turns off while I am outside walking the dog at −18 °C. Electronics need a specific thermal environment to work as intended, and spacecraft electronics are no exception. It is the role of the thermal control subsystem to make sure all subsystems operate within their allowable flight temperatures (AFT). The thermal design engineer needs to consider all the heat sources the spacecraft will be exposed to, typically the Sun, the Earth, and the on-board electronics and subsystems. These sources or inputs are not steady but vary with the position in the orbit or seasonally. Thermal control manages the spacecraft's temperatures by thermally isolating the spacecraft from the environment, except in specific parts where radiators are purposely placed. Thermal control engineering relies on specialized software tools that provide simulations and numerical computations to build a good understanding of the thermal scenarios the satellite will experience during its mission, long before the spacecraft is brought to environmental testing to verify the thermal design. In space projects, thermal control engineering runs for quite a long time based solely on theoretical models and calculations, until the chance to verify them in a controlled environment comes.

As expected, the thermal subsystem interacts with many other subsystems, particularly with the power subsystem. This results from the need to account for all the dissipated electrical energy from the surrounding equipment, transferring this


energy to a radiator for rejection into space. Also, batteries generally have a narrow operating temperature range and thus require special attention from the thermal control engineers. The thermal subsystem also interacts closely with the on-board software, since the software measuring and controlling zones and rooms typically executes on one of the main computers. Thermal engineers must work closely with their mechanical and structures colleagues, since the fixation of heaters, thermostats, and insulation blankets must be agreed, which also impacts the mass and power budgets. Multidisciplinary engineering at its best.

Heat in space does not transfer by convection but only by conduction and radiation. This means that heat produced by on-board electronics needs to be guided internally, by conduction through the proper physical channels, toward radiators so that it can be radiated away to space. In the same way, spacecraft can (and will) absorb radiation from the external environment, namely from the Sun and the Earth. This radiation can be either absorbed for practical purposes (heating things up) or reflected, to avoid overheating critical components. Thermal control design internally discretizes the spacecraft volume into 'zones' or 'rooms' and makes sure all the equipment inside such areas stays within its allowable flight temperature (AFT) margins, which are specified by the equipment suppliers. Heaters are located in places where the thermal balance changes over the mission lifetime. The causes of such changes are:

- Unit dissipation changes: for example, an important heat load being turned on or off varies greatly over time.
- External heat flux changes: spacecraft attitude changes, eclipses.
- Radiator efficiency changes: changes in the optical properties of radiators due to radiation, etc.

Thermal control techniques can be:

- Passive: supported by fixed-area radiators, thermal blankets, etc.
- Active: supported by electrical heaters controlled by thermostats and software.

Thermal control typically relies on the following set of devices to accomplish its task:

- Electrical heaters
- Thermistors
- Bimetallic thermostats
- Radiator surfaces
- Thermal blankets (MLI)
- Insulation materials
- Thermal fillers
- Paints
- Heat pipes.


Fig. 6.8 A generic avionics block diagram

Desired orbits and payloads define which approach (passive, active) is more feasible, and what type of hardware and devices to use to thermally control the spacecraft. Active thermal control strategies increase the overall complexity of the architecture, as they require more hardware to collect the necessary thermal telemetry and run the software routines executing the bang-bang control39 that keeps the thermal zones within safe margins. Said software executes either in a dedicated CPU for thermal control—which is rarely the case—or in the main on-board computer. An active and redundant thermal control system may severely impact the overall power consumption of the bus and increase its mass.
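As a flavor of what that software routine does, here is a minimal sketch of bang-bang heater control with hysteresis for a single thermal zone. The setpoints and the telemetry samples are invented:

```python
LOW_SETPOINT_C = -5.0   # switch the heater on at or below this temperature
HIGH_SETPOINT_C = 5.0   # switch it off at or above this temperature

def heater_command(temperature_c: float, heater_on: bool) -> bool:
    """Return the new heater state for the zone.

    The gap between the two setpoints (the hysteresis band) keeps the
    heater from chattering on and off around a single threshold.
    """
    if temperature_c <= LOW_SETPOINT_C:
        return True
    if temperature_c >= HIGH_SETPOINT_C:
        return False
    return heater_on  # inside the band: keep the current state

# One pass of the control loop over a few telemetry samples:
state = False
for t in [2.0, -1.0, -6.0, -2.0, 4.0, 6.0]:
    state = heater_command(t, state)
    print(f"{t:+5.1f} degC -> heater {'ON' if state else 'OFF'}")
```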

6.6 The Avionics

In terms of spacecraft avionics architecture, there are many ways to skin the same cat. Different mission requirements will drive the need for certain data interfaces, computing power and data storage, and the avionics must be designed in line with those needs. When it comes to spacecraft avionics, engineers are very good at reinventing the carburetor. They—alas, we—tend to believe that everything is better if designed from scratch; a practice that is slowly phasing out, thanks to the relatively recent appearance of standard form factors and a slowly growing ecosystem of commercial off-the-shelf bits and parts for satellites adopting those form factors. A very generic avionics architecture for a spacecraft is depicted in Fig. 6.8. The green boxes are different functional chains or blocks that provide some essential capability for the spacecraft. Irrespective of what type of application or mission the spacecraft is supposed to perform, those functional blocks are always present; in other words, you cannot go to space without those functions on board. The yellow payload box encloses the functionality that gives a purpose to the mission.

39. In control theory, a bang–bang controller (also called an on–off controller) is a feedback controller that switches abruptly between two states.


Fig. 6.9 AOCS functional chain as a member of the avionics architecture

Some of the typical avionics functional chains or blocks do not need to contain a computer; they can be passive. For example, as we just saw in the previous section, thermal control can be passive, in which case no computing is present there, although it still exists as a function. What stands out from the figure above, and was discussed at length in Sect. 6.2.3, is that spacecraft avionics needs a great deal of interconnection. This means the different functional chains must exchange data with each other to couple their functionalities, so that the global function of the spacecraft can emerge. That data takes the form of commands, telemetry, or generic data streams such as files, firmware binaries, payload data, and so forth. The architecture is recursive in the sense that the functional chains have their own internal composition as well, which will, in most cases, also require interconnection. For example, the internal composition of the attitude control subsystem is exploded and depicted in Fig. 6.9. As the picture shows, spacecraft avionics is an aggregation of data buses.40 A spacecraft with poor interconnection between functional chains will see its performance and operations greatly affected. This is a factor usually overlooked by small, inexperienced space companies: low-speed, low-bandwidth data buses are prematurely chosen at early stages of the project, only to find out later that the

40. A data bus is a communication pathway that allows data to be transmitted between different components of a computer or electronic system. It consists of a set of parallel wires that connect the various components, such as the central processing unit (CPU), memory, and input/output (I/O) devices.


throughputs are insufficient for what the mission needs. Changing interconnection buses at late stages can be costly, both in money and time. With high-speed serial buses and peripheral-rich processors becoming more accessible and gaining flight heritage, there is no real reason not to design the avionics as a big interconnection matrix exploiting high-speed serial point-to-point connections. Historically, spacecraft avionics has used hybrid interconnect approaches; most typically, ad-hoc, daisy-chained topologies, where cables come out of a box and go into the next one, only to come out again toward the neighbor. Legacy spacecraft avionics features a fair deal of "private" buses, i.e., buses that are only accessible to some subsystems and not from the rest of the architecture. When discussing avionics interconnection, there are two different levels to consider:

- Subsystem level: how a subsystem chooses to connect with its own internal components. For example, how an attitude control subsystem talks to its sensors and actuators.
- System level: how different subsystems connect with each other to materialize the "global" function.

At either level, the approach chosen has historically been hybrid. Typically:

- Star topology: the subsystem's main unit (which usually hosts its main CPU) resides in a box of customized form factor, and this box is the central "concentrator" of the subsystem. Everything flows towards it. The box exposes a set of connectors, and different harnesses and cables come in and out of those connectors, towards external peripherals. These peripherals can be point-to-point, connected through a bus, or both (Fig. 6.10). In this type of design, the mechanical coupling between different parts of the subsystem is likely different for the different peripherals, i.e., different types of connectors, pinouts, harness, etc.
- Backplane: in this approach, the computing unit and the peripherals share a mechanical interface which allows them to connect to a board (called the backplane) acting as the mechanical foundation (see Fig. 6.11). The peripherals connect by sliding in through slot connectors and mating orthogonally to the backplane. The backplane not only provides the mechanical coupling but also routes signal and power lines between all the different modules connected to it. How to route the signals in the backplane is a design decision, given that the backplane is basically a printed circuit board like any other.

When the backplane concept appeared, it quickly gained popularity among system designers. The advantage of routing signals in a standardized way quickly became attractive, as it made it possible for multiple vendors to interconnect their products in backplanes and achieve interoperability. Several backplane standards proliferated, but one stood out as the most popular of those years: VME (Versa Module Europe), still in use today in some legacy applications. VME is one of the early open-standard backplane architectures. It was


Fig. 6.10 Subsystem federated architecture with a star topology

Fig. 6.11 A backplane connecting 1 CPU unit and 2 peripheral boards

created to enable different companies to create interoperable computing systems, following standard form factors and signal routing. Among the typical components in the VME ecosystem, you can find processors, analog/digital boards, and


the like, as well as chassis (housings), backplanes of course, power supplies, and other subcomponents. System integrators benefited from VME because:

- It provided multiple vendors to choose from (supply-chain de-risking)
- It provided a standard architecture versus costly proprietary solutions, lowering switching barriers
- It gave a tech platform with a known evolution plan
- It provided shorter development times
- It lowered non-recurring costs.

Over the years, as avionics architectures continued pushing the boundaries in terms of bandwidth and performance, parallel-bus-based architectures started to become obsolete. In fact, what became obsolete was the design philosophy: the idea of a single bus with slave boards connected to the same lines, waiting for the master to dictate the overall operations, does not hold up to what modern applications need, which is more parallel processing with multi-master architectures. Today, the approach has shifted to using point-to-point, high-speed serial links and fiber optics to ensure the data rates needed for transferring and processing payload information. Examples of modern standard backplane technologies are SpaceVPX (VITA 78) and CompactPCI Serial for Space.

SpaceVPX is a high-speed, ruggedized, and modular open standard for space applications. It incorporates architectural measures to meet the requirements of the harsh environment of space, including radiation, temperature extremes, and high-vibration environments. SpaceVPX is based on the VPX (VITA 46) and OpenVPX (VITA 65) standards, which are widely used in the defense and aerospace industries. SpaceVPX is intended to provide a common architecture for space-based electronic systems, allowing for easier integration, maintenance, and upgrades. It is also intended to promote interoperability and reusability of components and subsystems, reducing development costs and improving time-to-market for space-based systems. The SpaceVPX standard is maintained by the VITA Standards Organization, a non-profit trade association that develops open technology standards for critical embedded systems. In the SpaceVPX approach, the signal routing in the backplane is defined by the system designer and is therefore mission dependent.

CompactPCI Serial Space (CPCI-S.0) is another open standard for high-performance, ruggedized computing systems designed for space applications. A tad more popular in Europe, it is based on the CompactPCI Serial standard (PICMG CPCI-S.0) and incorporates additional requirements to meet the demands of space environments, including radiation tolerance, thermal management, and specifications for low outgassing. The CompactPCI Serial Space standard defines a modular architecture that allows for easy integration of custom and off-the-shelf components, including CPU boards, memory modules, storage devices, and input/output (I/O) modules. The standard supports high-speed serial interconnects, including PCI Express, Serial RapidIO, and Gigabit Ethernet, providing high-bandwidth connectivity between system components. CompactPCI Serial


Space systems are designed to operate in harsh environments, including high-vibration, shock, and temperature extremes. The standard specifies requirements for radiation-hardened components and materials, as well as for thermal management and EMI/EMC shielding. In CPCI Serial Space, the backplanes have fixed topologies and signal routings, and therefore stay the same from mission to mission. The CompactPCI Serial Space standard is maintained by the PCI Industrial Computer Manufacturers Group (PICMG), a non-profit consortium that develops open standards for industrial computing and communication systems. It is intended to provide a common architecture for space-based electronic systems, allowing for easier integration, maintenance, and upgrades, as well as promoting interoperability and reusability of components and subsystems.

6.7 The Payload

All the complexity we explored in the previous sections meets its main purpose here. The payload is the component on board that gives meaning to it all; it is the VIP passenger of the satellite. But it all starts with the mission. The mission means: what is the spacecraft intended for? It is the reason why you happen to need a metallic box flying at unholy velocities, far from the ground. There are simpler ways of doing things than launching something into space, so there must be a good reason to choose to go for it, and that's what's called the mission. The mission defines the application: be it communications, earth observation, surveillance, science, astronomy, or deep space exploration. There are many different ways to use satellites, and the mission must clearly state what it aims for. With this information in the bag—which tends to take time to figure out and refine—comes the part of understanding which sensor or equipment can materialize the freshly captured mission requirements. Zooming out, the mapping appears rather simple: the mission drives the payload and the orbit, which in turn drive the overall spacecraft design, or the tailoring of the design in the case of a multi-mission bus. Here is a slightly expanded list of spacecraft payload types (the list is only notional and not exhaustive):

- Earth Observation Payloads: used to monitor, track, and study the Earth's surface, atmosphere, and oceans. They typically include sensors, cameras, radars, and other instruments that can capture data in different wavelengths of the electromagnetic spectrum. Earth observation payloads can be used for a wide range of applications, including surveillance, weather forecasting, environmental monitoring, and disaster response.
- Communication Payloads: used to ensure the flow of data between the spacecraft and ground stations (including end-user terminals) or other spacecraft. They typically include antennas, transmitters, processors, and receivers that can manipulate and process signals in different frequency bands (see Sect. 6.2 for more details about how telecommunications work).


- Scientific Payloads: used to conduct scientific experiments and research in space. They can include telescopes, spectrometers, particle detectors, and other instruments that can measure and analyze various phenomena in space. Scientific payloads can be used for a wide range of research areas, including astronomy, astrophysics, and planetary science.
- Navigation Payloads: used for the precise determination of the position of assets. They typically include hardware that can determine the spacecraft's position and velocity relative to other objects in space, which in turn may be transmitted to the ground for user terminals to determine their own position.
- Technology Demonstration Payloads: used to test new technologies in space. They can include new materials, subsystems, biological experiments, and other technologies that can be tested in the space environment. Technology demonstration payloads are essential for advancing the state of the art in space technology and enabling future space missions.

6.8 Putting It Together: Assembly, Integration and Test

As space projects mature, a situation starts to build up as the different elements, equipment and subassemblies begin to materialize: it must all be put together. This activity is called assembly, integration, and test (AIT), or assembly, integration, and verification (AIV).41 Acronyms vary across the literature and from organization to organization, but they tend to refer to the same thing: the act of putting the system together carefully, step by step, verifying that everything works according to plan while stressing the system under controlled test conditions. To verify that the system is able to withstand the space environment, a set of system-level tests shall be performed which replicate, to a certain extent, the conditions the spacecraft will face during launch and operations. The facilities needed for these tests are somewhat special—test rigs, shakers, vacuum chambers—and seldom owned by the system designers.

6.8.1 Mechanical Tests

For mechanical tests, bus designers must specify a set of verifications by test to demonstrate and assure compliance of the strength and static stiffness of the primary and secondary structures throughout the design lifetime. For elements supplied by third-party providers, subsystem-level compliance with the environmental requirements must be ensured; otherwise, the system designer shall test

41. The word 'verification' is more general than 'test'. According to the literature, there are several ways of verifying things, and testing is just one of them.


and qualify these items separately in case insufficient evidence from suppliers is provided. The mechanical verifications performed at the system level are briefly described in this section.

6.8.1.1 Sine Sweep Test

A sine sweep test is a type of dynamic testing commonly used to evaluate the structural integrity and dynamic response of spacecraft components, such as antennas and solar arrays. The test involves subjecting the system to a series of vibrations that increase in frequency over a specified range, typically from a few hertz to several hundred hertz. The typical steps in a sine sweep test are:

- Set up the Test Equipment and Environment: this involves using a shaker table or other vibration equipment to apply controlled vibrations to the spacecraft.
- Define the Test Parameters: the test parameters are defined based on the specifications and requirements for the system. These parameters typically include the frequency range, amplitude, and duration of the vibrations that will be applied.
- Apply the Sine Sweep: once the test parameters are defined, the sine sweep test is performed by applying a series of vibrations to the system. The amplitude of the vibrations is typically held constant throughout the test.
- Monitor the System's Response: during the test, the system's response to the stimulus is monitored using accelerometers or other sensors. This data is used to evaluate the system's dynamic response and to ensure that it meets the required specifications.
- Analyze the Test Results: after the test is complete, the data collected from the sensors is analyzed to evaluate the component's structural integrity and dynamic response.

A sine sweep test is an important step in the development and testing of spacecraft. By subjecting the system to controlled vibrations over a controlled range of frequencies, its structural integrity and dynamic response can be better understood.
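A small sketch of what the drive signal for such a test can look like: a constant-amplitude logarithmic sweep, the shape commonly specified in octaves per minute. All numbers are illustrative; real parameters come from the launcher's environmental specification:

```python
import numpy as np
from scipy.signal import chirp

F0, F1 = 5.0, 100.0    # sweep from 5 Hz to 100 Hz (illustrative)
DURATION_S = 60.0      # one-minute sweep
FS = 1000.0            # sample rate of the controller [Hz]
AMPLITUDE_G = 0.5      # held constant, as in a typical sine sweep

t = np.arange(0.0, DURATION_S, 1.0 / FS)
# Logarithmic sweep: equal time spent per octave of frequency.
drive = AMPLITUDE_G * chirp(t, f0=F0, t1=DURATION_S, f1=F1, method="logarithmic")

octaves = np.log2(F1 / F0)
print(f"{octaves:.2f} octaves in {DURATION_S / 60:.0f} min "
      f"-> {octaves / (DURATION_S / 60):.2f} oct/min")
```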

6.8.1.2 Random Vibration Test

This test shall verify the strength and structural response under loading by introducing random vibration through the mechanical interface. For the best outcome of this test, the random spectrum shall be notched accordingly to limit vibration levels at certain frequencies which may otherwise impose excessive stress on selected units. As we saw in Chap. 4, the launch vehicle generates significant levels of random vibration during the launch phase. These vibrations can potentially damage sensitive components and subsystems on the spacecraft. Therefore, it is essential to perform a random vibration test on the spacecraft to ensure that it can withstand such vibrations.


The launch vehicle defines the random vibration test by specifying the vibration levels and frequencies that the spacecraft will experience during launch. These levels are typically determined through extensive testing of the launch vehicle and analysis of the data collected during previous launches. The launch vehicle manufacturer provides the spacecraft manufacturer with a vibration specification document that includes the vibration levels and frequencies the spacecraft will experience during launch. The spacecraft manufacturer then uses this specification to design and test the spacecraft to ensure that it can withstand the vibration profiles; this is a critical part of the structural design process. During the random vibration test, the spacecraft is mounted on a shaker table that generates the vibrations simulating the launch environment. The vibration levels and frequencies are gradually increased until the spacecraft has been exposed to the maximum levels specified in the vibration specification document. The spacecraft is then inspected for any damage or anomalies.
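The severity of such a random profile is commonly summarized as an overall grms value: the square root of the area under the acceleration power spectral density (PSD) curve. A toy sketch with an invented profile (simple trapezoidal integration on linear axes, which glosses over the dB/octave slope handling used by the standards):

```python
import numpy as np

# Invented acceleration PSD breakpoints: (frequency [Hz], PSD [g^2/Hz]).
freqs = np.array([20.0, 100.0, 700.0, 2000.0])
psd   = np.array([0.01, 0.04,  0.04,  0.01])

# Overall grms = sqrt of the area under the PSD curve.
grms = np.sqrt(np.trapz(psd, freqs))
print(f"overall level: {grms:.2f} grms")
```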

6.8.1.3 Shock Test

A shock test is an important part of the environmental verification of a spacecraft that will ride on a rocket through atmospheric flight. During launch, the spacecraft experiences sudden, high-intensity shocks due to events such as stage separation, ignition of rocket engines, and other abrupt changes in the launch environment, such as the "max q" or maximum dynamic pressure condition, which is the point in atmospheric flight where the difference between the fluid dynamic total pressure and the ambient static pressure reaches its maximum. These shocks can potentially damage the spacecraft and its components, so it is important to ensure that the spacecraft can withstand them.

The launch vehicle defines the shock test on a spacecraft by specifying the magnitude, duration, and frequency content of the shock loads that the spacecraft will experience during launch. The launch vehicle manufacturer also provides the spacecraft manufacturer with a shock specification document that includes these loads. During the shock test, the spacecraft is subjected to mechanical shocks simulating the launch environment. The shocks are typically delivered using a shock table, which is designed to reproduce the shock loads specified in the shock specification document. In more NewSpace approaches, the structure might be literally hit with a hammer. The spacecraft is then inspected and tested for any damage or anomalies.

6.8.1.4 Extras

There might be some other tests as well, depending on the schedule and the budget. "Classic" space companies may execute acoustic tests, which involve putting the satellite in front of a wall of speakers just like the ones used in concerts and blasting a few "songs" which tend to sweep different frequencies to simulate the acoustic environment during launch. You can hear and feel these tests even if you are miles away from the facilities performing them. Deployment tests simulating zero gravity conditions are also done in classic space using highly sophisticated rigs or even helium-filled balloons.

6.8.2 Thermal Vacuum Test (TVAC)

Spacecraft designers must perform a Thermal Vacuum (TVAC) test on the spacecraft. This type of testing simulates the space environment by subjecting the satellite to extreme temperatures and vacuum conditions. Here's a brief, step-by-step guide on how the TVAC test on a satellite is typically executed:

. Set up the Vacuum Chamber: The first step in performing a TVAC test is to set up the chamber. The vacuum chamber is a large, sealed container in which a vacuum is created by removing the air and other gases. The satellite is placed inside the vacuum chamber, and the chamber is then sealed.
. Install and Verify Thermal Control Systems: The thermal control systems are responsible for maintaining the satellite's temperature during the TVAC test. These systems include heaters, radiators, and thermal shrouds that help regulate the satellite's temperature. They are installed before the TVAC test to ensure that the satellite's temperature is properly maintained during the test.
. Begin Thermal Cycling: The thermal cycling process involves subjecting the satellite to a range of extreme temperatures, from extreme positive (hot case) to extreme negative (cold case). The temperature is varied gradually, and the satellite is held at each temperature for a set period according to a predefined profile defined during the critical design stage.
. Begin Vacuum Exposure: Once the thermal cycling process is complete, the vacuum chamber is evacuated to create a vacuum similar to that found in space. The vacuum exposure phase is designed to simulate the vacuum conditions that the satellite will experience in space. During this phase, the satellite's thermal control systems are used to maintain its temperature.
. Conduct Performance Testing: Once the satellite is in a vacuum, the performance testing phase begins. This phase is designed to evaluate the satellite's performance under simulated space conditions and includes a range of electrical, mechanical, and thermal tests.
. Monitor the Satellite: Throughout the TVAC test, the satellite is closely monitored to ensure that it is functioning properly. The thermal control systems are monitored to ensure that the satellite's temperature stays within the specified range, and the performance tests are monitored to ensure that the satellite is performing as expected.
. End the Test: Once the TVAC test is complete, the vacuum chamber is repressurized, and the satellite is removed. The satellite is inspected for any damage or other issues that may have arisen during the test.

The TVAC test is an essential step in the development and testing of the spacecraft. As the project progresses into the critical stage, and as more information about equipment allowable flight temperatures (AFTs) becomes available and the thermal control subsystem design matures, a more detailed TVAC profile will be devised.
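As a toy illustration of such a predefined profile, the sketch below generates a simple hot/cold cycling sequence in Python. Every number (limits, dwell time, ramp rate, cycle count) is an invented placeholder; real values are derived from the AFTs and the thermal design.

# Minimal sketch of a TVAC cycling profile generator. All numbers are
# illustrative; real limits and dwell times come from the thermal control
# design and the equipment's allowable flight temperatures (AFTs).
def tvac_profile(hot_c=60.0, cold_c=-20.0, cycles=4, dwell_h=2.0, ramp_c_per_h=20.0):
    """Return a list of (phase, target_temp_C, duration_h) steps."""
    steps = [("ambient", 20.0, 0.0)]
    temp = 20.0
    for n in range(1, cycles + 1):
        for target in (hot_c, cold_c):
            ramp_time = abs(target - temp) / ramp_c_per_h
            steps.append((f"cycle {n}: ramp to {target:+.0f} C", target, ramp_time))
            steps.append((f"cycle {n}: dwell at {target:+.0f} C", target, dwell_h))
            temp = target
    steps.append(("return to ambient", 20.0, abs(20.0 - temp) / ramp_c_per_h))
    return steps

for phase, target, hours in tvac_profile():
    print(f"{phase:<28} target {target:+6.1f} C for {hours:4.1f} h")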

6.8.3 Software Verification

Besides shaking and cooking the satellite like there's no tomorrow, the on-board software must be thoroughly verified before launch. For verifying the software, the satellite is connected to a set of simulators which feed the software with the right stimuli to make it believe it's flying. A variety of procedures will be run to ensure the attitude control, the command and data handling, and other parts of the software are working as designed. More about simulating spacecraft can be found in the bibliography.42,43 The test environments used to verify software might be purely digital, or a combination of real on-board equipment connected to simulators and stimulators. These setups are usually called "flatsats", engineering models, or hardware-in-the-loop, depending on who you ask.
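As a highly simplified illustration of the closed-loop idea, the Python sketch below stands in for a simulator feeding a fake Sun vector to a piece of software under test. Everything here (the one-function "flight software", the 90-minute orbit, the 10-minute sampling) is invented for illustration; a real flatsat runs the actual flight binary on real or emulated avionics.

# Toy sketch of a simulator feeding stimulus to software under test.
# The "flight software" here is a single function; in reality it would be the
# real binary running on engineering-model hardware (hardware-in-the-loop).
import math

def simulated_sun_sensor(t_s):
    """Fake Sun vector rotating in the XY plane, one revolution per 'orbit'."""
    orbit_period_s = 5400.0
    angle = 2.0 * math.pi * t_s / orbit_period_s
    return (math.cos(angle), math.sin(angle), 0.0)

def flight_software_step(sun_vec):
    """Stand-in for the code being verified: report Sun angle from +X axis."""
    return math.degrees(math.atan2(sun_vec[1], sun_vec[0]))

for t in range(0, 5400, 600):          # one simulated orbit, 10-minute steps
    measurement = simulated_sun_sensor(t)
    result = flight_software_step(measurement)
    print(f"t={t:5d} s  sun angle = {result:7.1f} deg")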

6.8.4 Concluding and Shipping

The AIT phase ends with a pre-ship review (PSR), which assesses the acceptability of the spacecraft in light of the test data. A largely overlooked area in space projects is shipping the satellite to the launch site. Shipping complex goods, and goods which may contain hazardous materials (such as large lithium-ion batteries), tends to be more than a bit of a headache. There is a fair amount of paperwork and compliance involved, and the regulations for moving such things around the world are strict and complex. It is very easy to overlook shipment matters when you are minding some other "more technical" business, but it is equally, or even more, important than anything else. There is little point working hard for years on software, structures, and attitude control systems if, at the end of the day, the satellite cannot go through the door and get its passport stamped due to shipment or export regulations.

42 https://link.springer.com/chapter/10.1007/978-3-030-66898-3_6.
43 https://link.springer.com/book/10.1007/978-3-642-01276-1.

7 Satellites and Machine Learning

In God we trust. All others must bring data. —W. Edwards Deming

Abstract

Spacecraft are not only eye-catching because they go up riding loud rockets spitting fire. Spacecraft are also remarkable data sources. The sea of data they generate is what essentially allows them to be reliably teleoperated. Spacecraft are time-series data goldmines thanks to a network of on-board computers and sensors that create a tide of multivariate information to be consumed. The question is: to be consumed by whom? Only by humans with a functioning brain capable of "connecting the dots"? This has been the approach for the last 64 years since Sputnik 1. Can't algorithms connect those dots? This chapter dives into how algorithms must be equipped with the nuances needed to understand the dynamics of the processes hidden behind the numbers, and how those numbers and figures relate to each other.

We wouldn't be very bold if we said satellites are data sources. For sure they are, and that is perhaps one—if not the—reason why we use them in the first place. But the term data can have different meanings. Or, in some other words, spacecraft can generate data of different flavors and for different uses. On the one hand, spacecraft such as observation satellites generate data which is marketable to customers. Depending on the type of mission, this can be a depiction of an area of the earth in the format of an image, or a scientific monitoring of the atmosphere, or the sea. But there is some other type of data generated on board spacecraft which is not directly consumed by customers, although this does not make it any less relevant:


Fig. 7.1 An unannotated graph

Fig. 7.2 Google searches over time about something unspecified

spacecraft health data, also known as housekeeping data. This data may be as simple as one bit indicating the open/close status of a valve or as complex as biometric data of an astronaut.

Now, imagine that someone comes and puts this plot right in front of you (Fig. 7.1). And imagine the person showing you this does not say a single word about what the plot represents. Well, that curve could be anything: a measurement of temperature from an industrial process, the inflation rate of a country, the price of some stock, the blood pressure of a panda bear, or the deformation of a beam in a certain axis. As you look at it, you can start to see some features. You can see a change of trend at some point—it was more or less flat in the first third, then started to ramp up. You can observe some spikes here and there. Still, this plot says very little: we don't even know what the axes represent.

Let's add more information to the mysterious plot (Fig. 7.2). Now, we see the x axis appears to be time. By now, you also most likely recognize it comes from Google Trends, which means this plot represents searches over time—in Google terms, "interest over time"—about something. This plot now starts to say a bit more: it represents how much Google users have been googling about something, in a time range. But we remain largely ignorant, as we do not know how many people (is it one person or millions?), and we do not know the topic searched. But we do know more about the axes now: the x axis is clearly time, and the range of this plot goes from 1 Jan 2004 until somewhere in 2018.

This is, then, time-series data. Time-series data is a collection of discrete observations, each accompanied by a timestamp which clearly indicates when that sample was obtained. It is very natural to plot this kind of data versus time, that is, with the y axis being the value of the variable of interest, and the x axis being the time at which the sample of the variable was obtained.


Fig. 7.3 The plot is about people searching for dogs

Let's complete the plot now for clarity (Fig. 7.3). The plot is about worldwide Google searches for the animal "Dog", from 2004 to 2018. Now you could start to believe that you know everything you needed to know about the data which originated the plot. But do you? Well, no. The data is still leaving many questions unanswered. All you know is that people had been searching for the animal "dog" with more or less the same "intensity" from 2004 to somewhere in late 2009, until the trend mysteriously changes. Why did the trend change? Here lies the core of the challenge with time-series data analysis, whether the plot is about Google searches on pets or astronaut biometric telemetry. One thing is to see that a trend has changed; a different story is to know why it changed, i.e., the dynamics behind it. Imagine you feel ill, you grab a thermometer, and you can see that you have a fever. But that's not enough to solve the root issue; the key is to know why your body temperature has changed. Is it a flu?

As we discussed before, spacecraft, internally, are networks of computers and electronic devices hooked together. And because satellites are teleoperated artifacts, once launched, you can only get to know their status by means of measurement data sent to ground. Operators rely entirely on time-series data collected as the object flies: currents, voltages, temperatures, pressures, different types of counters, Boolean status bits. And here, it is important to observe the following: measurements represent discrete "snapshots" of physical variables that evolve in a continuous fashion. But the actual physical variables themselves we shall never be able to directly observe. It is only through transducers (aka sensors), which are devices sensitive one way or another to the physical variables of interest, that we can obtain indirect observations of such variables. For example, a temperature sensor does not output a temperature but a voltage, a current, or a digital word which represents the temperature. When you check the liquid-in-glass thermometer outside your window to see how you need to dress on a cold morning, what you are reading is the expansion/contraction of a liquid in a capillary, put in a convenient scale which makes you believe you are reading a temperature.


Fig. 7.4 Google searches about Christmas are obviously seasonal

But you are not, nor will you ever be able to read a temperature directly. You can only "feel" temperatures directly through your body, but not in a quantifiable way. And machines, at least for now, are unable to "feel" like we do. A problem starts to reveal itself: because between our control needs and the real physical variables we need to gauge there will always be something in between, measurement devices such as sensors can—and will—introduce noise and artifacts which are not present in the physical variable being monitored. But how to know? A faulty sensor may interfere with the time-series data by introducing a trend which is not present in the variable it is sensing, or add out-of-family values which might look like serious failures.

Is time-series data on board a spacecraft generated from stochastic processes? Well, most physical processes in the real world involve a random element in their structure. A stochastic process can be described as 'a statistical phenomenon that evolves in time according to probabilistic laws'. Time-series data generated on board is a combination of deterministic processes (a power system, batteries, thermal control, star tracker sensors observing the sky) combined with a random part which stems from the contribution of the non-ideal nature of sensing devices, thermal noise, etc.

Time-series data can show different types of variations: seasonality, trends, and other cyclic variations. Let's use Google Trends again to exemplify seasonality with an obvious one: Christmas (Fig. 7.4). With little surprise, we see that the term "Christmas" is very popular once per year, unsurprisingly peaking at the end of December. On board satellites, there are plenty of variables which are 'Christmas-like' variables, that is, variables with obvious seasonality. What marks a season on a satellite? It depends on the orbit, but typically the sunlight/eclipse cycle.1


Fig. 7.5 A bit of less obvious seasonal spikes in data

As the satellite goes from being under full sunshine to the darkest shadow, many variables on board will reflect this, notably temperatures, as the system may experience extreme thermal swings due to this cycle, given that the Sun's radiated heat is absorbed by the structure and then spread across the body through its different conductive surfaces. Areas looking straight at the Sun can experience wild swings of several tens of degrees Celsius in a very short time.

Let's see now a somewhat less obvious example of seasonality, using the term "Football" (Fig. 7.5). There are spikes all over the place, but those in June 2006, June 2010, June 2014, and June 2018 are noticeably higher. What is the reason behind those? You guessed it right: World Cups. Now, we should note that people around the world may like different types of football: American football, Australian football, Gaelic football, etc. All those searches are most likely also part of this plot. If we were strictly searching for what is sometimes miscalled "soccer", all those other "footballs" can be considered noise, but they can't be ignored.2 A man's signal is another man's noise.

Satellite time-series data can show plenty of 'world-cup-like' spikes. For instance, when loads are turned on or off, there can be peaks associated with capacitors suddenly charging (inrush currents), inductive loads, etc. If needed, spikes or other artifacts can be smoothed out from the data with different signal processing techniques such as filtering, moving averages, etc.
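As a toy illustration of such smoothing, the Python sketch below applies a centered moving average to an invented current telemetry channel containing one inrush-like spike; all values are made up, and (as the text warns just below) smoothing can itself create artifacts.

# Minimal sketch: smoothing a spiky telemetry channel with a centered moving
# average. Caveat from the text: smoothing can also create apparent features
# (e.g., periodicity) that are not in the underlying process.
import random

random.seed(1)
current_mA = [100.0 + random.gauss(0.0, 2.0) for _ in range(60)]
current_mA[25] += 40.0                      # an inrush-like spike

def moving_average(samples, window=5):
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

smoothed = moving_average(current_mA)
print(f"raw peak: {max(current_mA):.1f} mA, smoothed peak: {max(smoothed):.1f} mA")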

1 Certain orbits may not experience eclipses at all or may face eclipses only during certain times of the year.
2 The plot shows an interesting "negative" spike around April 2020, most likely related to the COVID-19 pandemic, as most football leagues around the world went to a full stop during lockdown.


Fig. 7.6 Pythagorean theorem searches versus time

But watch out: processing time-series data may also create mirages.3 A proper data scientist must be able to look at the numbers with a good dose of skepticism while supporting her work with the necessary domain knowledge.

One thing might be very clear by now: the multivariate nature of satellite on-board telemetry. Very rarely is a variable on a satellite absolutely "self-contained", that is, its behavior depending entirely on itself, with all the information about it strictly packed in its value and its trends, and nothing else. Self-contained variables are—by far—the exception. Then, understanding data correlation on board spacecraft is paramount. Let's see an illustrative example, using the data about Google searches on "Pythagorean Theorem" versus time, worldwide, 2004–2021 (Fig. 7.6). It shows a clear yearly seasonality, where every July of every year since 2004 the "interest" in the Pythagorean Theorem drops substantially. If this plot were to hold absolutely all the information needed, we could conclude that when it gets hot, everyone stops caring about the Pythagorean Theorem. Clearly this is absurd, and the seasonality obviously responds to the school season being over at that time of the year (Fig. 7.7).

On board a spacecraft, time-series data variables depend on a variety of other variables which in turn depend on other variables, and the analysis shall be done taking this dependence into account, otherwise the conclusions drawn might be dangerously wrong.

3 Eugen Slutsky showed that by operating on a completely random series with both averaging and differencing procedures one could induce sinusoidal variation in the data. Slutsky went on to suggest that apparently periodic behavior in some economic time series might be accounted for by the smoothing procedures used to form the data. More info: https://mathworld.wolfram.com/SlutzkyYuleEffect.html.


Fig. 7.7 A probable correlation: Pythagorean theorem searches and school season (note the interesting noise during the COVID-19 pandemic)

Drawing wrong conclusions on top of bad measurements, or misidentifying correlations and covariances as causation, can make the situation spiral down into a disaster, at times with terrible consequences.4 Often, humans in the loop can make the situation worse by adding risky time delays before acting; we are very smart, but we take our time to think in the face of ambiguity. Although spacecraft operation remains a very human-centered activity, equipping machines with decision-making power during emergency situations does not come without its challenges as well. If anything, machines can be equipped with failure isolation capabilities which, at least, will minimize the probability of the situation worsening or snowballing.

Time-series data may show slow, long-term trends which may be difficult to identify. These kinds of trends are particularly tricky in spacecraft telemetry. Such trends can indicate a slow "build-up" of a situation leading to a failure. For example, a steadily increasing current consumption of some equipment on board, for instance a radio, could indicate the device is requiring more and more power to function due to degradation produced by TID, which may eventually put too much stress on the power regulators feeding the device if the power drain crosses certain boundaries. Long-term trend analysis may be hard to perform as it may require processing very large amounts of data, and it can be deprioritized in favor of more immediate, shorter-term, and easier to identify data features. Slow trends are perhaps the most unsettling type of anomaly to track: hopelessly watching a telemetry variable consistently approaching a critical threshold is not precisely an amusing experience.

Now, the big question: can we predict time-series data? The quick answer is: it depends. The somewhat longer answer is that, by knowing the features of a variable (seasonal, cyclic, and other trends) and by having good domain knowledge (see the next section for more on this) about the process which generates such a variable, we can assess with some level of confidence that the variable might continue evolving one way or another, but we shall never be able to fully predict—this means, have 100% confidence in—its behavior in the long run, because there can always be "black swans".
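As a toy illustration of slow-trend tracking, the following sketch fits a straight line (ordinary least squares) to an invented, slowly drifting current channel and projects when it would cross a critical threshold. Channel, numbers, and threshold are all made up.

# Toy sketch: estimate a slow linear drift in a telemetry channel with a
# least-squares fit and project the time at which it crosses a threshold.
# Channel, values, and threshold are illustrative, not from any real mission.
t_days = list(range(30))
radio_current_mA = [500.0 + 0.8 * t + ((-1) ** t) * 1.5 for t in t_days]  # drift + noise
threshold_mA = 560.0

n = len(t_days)
mean_t = sum(t_days) / n
mean_i = sum(radio_current_mA) / n
slope = (sum((t - mean_t) * (i - mean_i) for t, i in zip(t_days, radio_current_mA))
         / sum((t - mean_t) ** 2 for t in t_days))
intercept = mean_i - slope * mean_t

if slope > 0:
    t_cross = (threshold_mA - intercept) / slope
    print(f"Drift: {slope:.2f} mA/day; threshold crossed around day {t_cross:.0f}")
else:
    print("No upward trend detected")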

4 See a tragic case of actions taken on top of bad measurements: https://pubsonline.informs.org/doi/10.1287/orsc.2017.1138.


Fig. 7.8 Life of a Turkey: all is great, and nothing indicates the trend will change; until it does. Source The Black Swan, by Nassim Nicholas Taleb; Wikimedia Commons

Let's use a famous example. Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race. This feeding process goes on for days, weeks, months, and years. If a remote operator were able to monitor the turkey's weight or quantify its happiness as a telemetry variable, it would be a very boring variable, showing a very "predictable" increasing trend. The operator could even do some hand calculations and assess her predictive skills by estimating how much weight the bird will gain next week. Absolutely nothing indicates the bird's situation will change anytime soon, and the chubby feathered fella believes life couldn't get any better. Until some Wednesday before Thanksgiving. The history of a variable or a process tells only a partial story about what is going to happen next. Naive projections of the future from the past can be very misleading (Fig. 7.8).

Perhaps somewhat less dramatically than what happened to the poor turkey, spacecraft can grow statistical models where initial "black swans" become "whiter swans" as more data is gathered. For example, any satellite in a Low-Earth, Sun-synchronous orbit will periodically visit a region called the South Atlantic Anomaly,5 which is a region where the magnetosphere allows for an increased flux of high-energy particles. As is known (and we spoke at length in Sect. 3.3), such particles affect the performance of on-board electronics, creating unwanted resets, bit-flips, and other nasty effects (see Sect. 3.3.1 in particular). Any untrained algorithm, as well as an inexperienced human operator, might be puzzled the first time the spacecraft is affected by SEFIs while flying over there.

5 The South Atlantic Anomaly (SAA) is an area where Earth's inner Van Allen radiation belt comes closest to Earth's surface, dipping down to an altitude of 200 km (120 mi). This leads to an increased flux of energetic particles in this region and exposes orbiting satellites to higher-than-usual levels of ionizing radiation. The exact extension of the SAA varies with magnetosphere conditions. An algorithm meant to prevent failures during SAA flyovers shall be able to ingest this variable geometry as a configuration input.


It could still be somewhat perplexing by the second and the third time. By the fourth time, and because orbits can be predicted with reasonable accuracy, no one should be surprised, and both an algorithm and an operator should be able to assess whether the current geolocation of the satellite is approaching this critical area and take proper precautions, for example avoiding critical operations or going into safe mode. Fool me once…

In conclusion, spacecraft are not only eye-catching because they go up riding loud rockets spitting fire. Spacecraft are also remarkable sources of data about their own status in the form of a legion of sampled variables sent to ground. This sea of data is what essentially allows spacecraft to be reliably teleoperated. Spacecraft are time-series data goldmines, continuously monitored by a myriad of computers and sensors creating a tide of multivariate information to be consumed. The question is: to be consumed by whom? Only by humans with a functioning brain capable of "connecting the dots"? This has been the approach for the last 64 years since Sputnik-1. Can't algorithms connect those dots? They can, but such algorithms shall be equipped with the nuances needed to understand the dynamics of the processes hidden behind the numbers, and how those numbers and figures relate to each other.
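Returning to the SAA example, a crude sketch of such a precaution check follows; the rectangular bounding box is a rough invention (the real SAA has an irregular, shifting shape, which is why footnote 5 suggests ingesting its geometry as a configuration input).

# Crude sketch of an SAA flyover precaution check. The rectangular box is a
# rough illustration only; a real implementation would load the region's
# geometry (and margins) as configuration.
SAA_BOX = {"lat_min": -50.0, "lat_max": 0.0, "lon_min": -90.0, "lon_max": 40.0}

def inside_saa(lat_deg, lon_deg, box=SAA_BOX):
    return (box["lat_min"] <= lat_deg <= box["lat_max"]
            and box["lon_min"] <= lon_deg <= box["lon_max"])

def plan_action(lat_deg, lon_deg):
    if inside_saa(lat_deg, lon_deg):
        return "inhibit critical operations; tighten SEE monitoring"
    return "nominal operations"

print(plan_action(-25.0, -45.0))   # over the South Atlantic -> precautions
print(plan_action(60.0, 25.0))     # elsewhere -> nominal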

7.1 Can There Be Too Much Data?

As we speak, petabytes of data are stored only to be ignored forever. It may be telemetry from a gas turbine, sales data from an e-commerce platform, or video footage from a camera pointing at a bucolic street. Neglecting data is a growing problem: we appear to be generating more and more data, but we humans are still instrumental in performing all sorts of data cleansing, wrangling, and tidying for the algorithms meant to work with such data to perform better. No matter how sophisticated or complicated our machine learning algorithms might be, they still require a human brain equipped with good domain knowledge to make the algorithm aware of the nuances it cannot parse by itself. In the words of Robert Monarch6: no algorithm survives bad data. But data wranglers and feature engineering experts are not growing at the same rate as data is growing. Therefore, just like an overflowing sink, raw data is filling hard disk drives everywhere.

This data surplus appears to be a 'good problem' to have: it is better to have more data than less data. But is it? Data seems to exhibit what economics calls diminishing marginal utility. This has been illustrated by the famous diamond-water paradox, where an element essential for life like water is comparatively cheaper than an object with less practical use like diamonds. The paradox is explained around the concepts of marginal utility and total utility.

6 Robert Monarch is an expert in combining Human and Machine Intelligence, working with Machine Learning approaches to Text, Speech, Image, and Video Processing.


Marginal utility refers to the additional satisfaction or usefulness a person derives from consuming one more unit of a good, while total utility refers to the overall satisfaction or usefulness derived from consuming all units of a good. Water has a high total utility because it is essential to human life. However, its marginal utility is low because people have access to enough water to meet their basic needs. Therefore, the additional satisfaction from consuming an additional unit of water is low, and its price is correspondingly low. On the other hand, diamonds have a low total utility because they are not essential to human life. However, their marginal utility is high because they are scarce, and people value them highly for their beauty and rarity. Therefore, the additional satisfaction from consuming an additional diamond is high, and its price is correspondingly high. In other words, the value of a good is determined not only by its usefulness, but also by its scarcity and the additional satisfaction it provides. The diamond-water paradox is a reminder that prices are determined not only by the intrinsic value of a good, but also by the subjective values and preferences of the people who consume it.

Data follows a similar pattern. The more data there is, the less the satisfaction of a single 'unit' of extra data generated. A century ago, the satisfaction of taking a photo must have been something. Nowadays, we take 5 photos of the same thing, just to be sure. You feel no extra satisfaction as you check those 5 same pictures, beyond the preference of choosing the one where your hair looks best. During a data drought—in satellites, for instance during failures—every single bit of information becomes a matter of life or death. You would think data is always valuable, as everyone appears to make it look. But data can also show effects that intangible assets show, such as sunkenness. That means that certain types of data are difficult to sell, because such data is only worth something, if worth anything at all, to certain audiences. Think of the temperature data of an IoT-equipped fridge stored in the cloud. Would anyone be willing to pay for it?

Also, research7 has provided some proof that as data sets grow larger, they tend to contain arbitrary correlations. These correlations appear simply due to the size of the data, which indicates that many of the correlations will be spurious. Too much information tends to behave like very little information. And, last but not least, data gets old. For instance, earth observation data rapidly loses value as time passes and as the scenery under observation changes. Try using Google Street View when the city imagery available is 10 years old: only good for nostalgia, but not for navigation, as the scenery has most likely changed a lot.

In times where Machine Learning and Artificial Intelligence are the buzzwords everyone wants to spout in their marketing materials, it is good to think about the practical limits behind data accumulation and the effort—human effort—required to improve data quality for algorithms to perform better.

7 "The deluge of spurious correlations in Big Data": https://www.di.ens.fr/users/longo/files/BigData-Calude-LongoAug21.pdf.


Satellites create invaluable data about their own performance which cannot go ignored, and with more processing power on board, such data can be used to empower the spacecraft to take more autonomous actions and let the human operators on the ground mind more intellectually challenging tasks than monitoring a battery charge on every pass.

8 Operating Distant Machines Floating in Space

The difference between a developer and a sysadmin is that the developer wants the system to work, whereas the sysadmin wants the system to keep working. —Unknown

Abstract

Although most computer-controlled cyber-physical systems share a lot in common—after all, they all carry CPUs, memories, sensors, actuators, operating systems, and application software—the fact that their underlying behavior is largely architecture and mission dependent requires creating unique sets of commands and responses to manage this unique behavior. This chapter delves into the challenges present while operating bad-tempered systems over unreliable, noisy channels, and how designers' choices on commands and responses become critical to ensure a sound operation.

Picture yourself sitting in front of a computer system. And imagine you want to interact with it. You may want to know its status, or command it to do something you need. What's next? Well, we'll have to assume the system offers a User Interface (UI) which includes a mechanism for inputting and expressing what you want to do (a keyboard, a touchscreen, etc.), and a way of displaying outputs (a screen, some LED lights, an audible buzzer). Nothing too complicated or unfamiliar thus far. Now imagine that, in turn, the system is remotely located. You are at a specific longitude and latitude, and the system is on the diametrically opposite side of the globe. Then, you have no keyboard, no screen, no possibility of eyeballing any LED lights whatsoever, let alone hearing a beep.


What now? We may need to rule out telepathy or the possibility of hiring a psychic. This is, in a nutshell, the challenge of administering remote systems that contain computers. It could be a server, a satellite orbiting Venus, a power plant in a distant location, an ultra-low-power IoT sensor gauging the humidity of a crop in a distant field, or a drone patrolling a border. How do we interact with and manage distant, bad-tempered systems when all we have is a noisy, unreliable channel that sends information as a stream of faint electromagnetic energy? The types of tasks we would like to perform on this remote system are:

. Analyzing system logs and telemetry, identifying potential issues.
. Applying updates, patches, and configuration changes.
. Adding, removing, or updating passwords, encryption keys, etc.
. Tuning system performance.
. Configuring, adding, and managing memory contents and file systems.
. Performing tests, commissioning procedures, etc.

And here lies an interesting issue. We've talked about "computer systems" so far. But there are different kinds of them. On one hand, a remote server is basically pure software running either a web server, a database, a backend, or similar, without much interaction with physical reality other than needing a power input and perhaps sensing the surrounding temperature to inform air conditioning requirements. On the other hand, we may have what we have called cyber-physical systems, which we said are a combination of software plus an underlying collection of special equipment to gauge the environment—sensors—and act upon said environment by means of actuators. Then, when it comes to administering these special kinds of computer systems, we need to interact with their special commands, telemetry, and internal state machines.

What is a state machine? Shortly put, a state machine is a computational abstraction which defines different modes, and transitions between those modes, that together define the system's behavior in time. For example, a fuel pump has a reasonably simple state machine (Fig. 8.1). Now, as a cyber-physical system's complexity grows, it becomes a composition of a myriad of state machines running in parallel, although there is always a "system level" state machine which aggregates those smaller, local ones and defines the behavior of the system from an external observer's perspective.

When cyber-physical systems adopt operating systems, part of the problem is (kind of) solved. Operating systems come with a rich set of pre-loaded commands and responses (the usual ls, nc, pwd, dd, chmod, or similar), so system developers and operators do not have to reinvent the wheel and can focus on application behavior. Operating systems have undergone a good deal of standardization, with notable efforts such as the Portable Operating System Interface (POSIX). But—there's always a but—most operating systems expect commands and responses to be sent and received by means of user interfaces of the style mentioned at the beginning of the chapter. Then, when teleoperating a system that contains an OS, we must replicate the "keyboard + screen" scenario somehow, even if we are remotely located.


Fig. 8.1 Fuel pump state machine (you probably don't think of this while you're topping up your car, but it's what the pump needs to deal with)
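A minimal Python sketch of such a state machine follows. The states and events below are a plausible simplification invented for illustration, not a transcription of the figure.

# Minimal state machine sketch for a fuel pump. States and events are a
# plausible simplification; unknown events are ignored rather than crashing.
TRANSITIONS = {
    ("IDLE",       "card_authorized"): "READY",
    ("READY",      "nozzle_lifted"):   "DISPENSING",
    ("DISPENSING", "nozzle_returned"): "SETTLING",
    ("SETTLING",   "payment_settled"): "IDLE",
}

class FuelPump:
    def __init__(self):
        self.state = "IDLE"

    def handle(self, event):
        next_state = TRANSITIONS.get((self.state, event))
        if next_state is None:
            print(f"{event!r} ignored in state {self.state}")
        else:
            print(f"{self.state} --{event}--> {next_state}")
            self.state = next_state

pump = FuelPump()
for ev in ["nozzle_lifted", "card_authorized", "nozzle_lifted",
           "nozzle_returned", "payment_settled"]:
    pump.handle(ev)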

If you happen to be on a good network, this is just a remote shell. Nothing too fancy, and it works well. The fun kicks in when you are NOT on a good network, and you need to type human-readable text and read text back from the OS. When the channel is bad, shells are a suboptimal way of administering remote operating systems. One solution for this is to define very efficiently packed commands with carefully defined responses. In short, human-readability is abandoned in favor of reducing overhead and optimizing the use of channel bandwidth. But, because the standard commands in operating systems are designed for humans, you still need to parse their standard outputs to understand the status of their execution. There is a reality: making things more "human-eye friendly" tends to bring a good dose of informational redundancy. For example, informing about the quantity zero over a serial line for a human to interpret takes 8 bits (ASCII code 0x30), whereas a computer can manage with just 1 bit.1 Operating systems and their interfaces to the outside world still remain highly human-centered.

1 Actually, with 1 bit, more than a "quantity" you can represent a status (true/false, hi/low, on/off).
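As a toy illustration of trading human-readability for channel efficiency, the sketch below packs a command into five bytes using a made-up format (a 1-byte command ID, a 2-byte parameter, and a 2-byte CRC-16 for integrity); real missions typically follow standards such as CCSDS framing rather than this invented layout.

# Toy sketch: a compactly packed telecommand in a made-up 5-byte format.
# Compare with a human-readable "SET_PARAM 500\n", which costs ~3x the bytes.
import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT over the command body, for integrity checking."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def pack_command(cmd_id: int, param: int) -> bytes:
    body = struct.pack(">BH", cmd_id, param)     # big-endian: u8 ID, u16 parameter
    return body + struct.pack(">H", crc16_ccitt(body))

frame = pack_command(cmd_id=0x2A, param=500)
print(frame.hex(), f"({len(frame)} bytes on the wire)")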


Needless to say, an operating system cannot predict the behavior that underlies the cyber-physical system it's installed on. A fuel pump, an electric scooter, a drone, a rover exploring Mars: they all have very different architectures and purposes, and therefore they interact very differently with the environment. Even if all those systems run the exact same operating system distribution and version, the system designers must create their own application software and system-specific commands to alter and gauge the execution of the underlying state machine. How to define all these application-specific things? There is no recipe for this; it's up to the designers and largely system dependent. This also means there is very little standardization when it comes to defining commands and responses to handle remote systems' time evolution.

To summarize, although most cyber-physical systems share a lot in common—after all, they all carry CPUs, memories, sensors, actuators, operating systems, and application software—the fact that their underlying behavior is largely architecture and mission dependent requires creating unique sets of commands and responses to manage unique behavior. What's more, when teleoperating these somewhat bad-tempered systems over unreliable, noisy channels, designers' choices on command and response structure become critical to ensure a reliable operation. Choices that tend to be underestimated until too late in the system life cycle.

9 Making Reliable and Dependable Spacecraft

Reliability is not something you can add later. —SpaceX Founder and CEO

Abstract

System dependability is a design decision, and it requires architectural flexibility while equipping the flight software with the proper decision-making power to act, rather than depending on a human operator who must decide what to do and may make the situation worse. This chapter describes the difference between reliability, availability, and dependability, and how all that applies to satellites.

In high school, during the first classes of programming, it is quite typical to discuss "infinite" loops. These are the classic sequences of instructions which are repeatedly executed as long as a condition remains true. Regardless of whether you finished school 25 years ago or last week, if those loops were truly infinite, they should still be running by now. Are they still looping by any chance? No, hence the "infinite" label was a tad too much. A case of fake marketing? In computing—and in general—nothing "loops" forever. Things eventually stop running. First and foremost, because machines require energy from finite sources which will eventually be exhausted. There are pretty good reasons why machines don't work perpetually. But it can also be that someone just pulls the plug or presses ctrl+C1—which is probably what happened to all our "infinite" loops back in school.

1 In many command-line interface environments, control+C is used to abort the current task and regain user control. It is a special sequence that causes the operating system to send a signal to the active process. Usually the signal causes it to end, but the program may "catch" it and do something else, typically returning control to the user.


Or it can be an internal failure, like a power supply short-circuiting, or an external factor, like an electricity blackout. Computers and systems eventually give up. Bad use, defective materials, flawed designs, or simply because we flip the switch off. Chances are that next time we need to use a system, the system will disown us and refuse to do anything useful. We saw in Chap. 3 how susceptible to upsets and disruptions the chips we put in orbit are, so we cannot hope for a chip in space to work forever. This opens up the discussion for a set of concepts which are related to the probability of artifacts operating as intended under certain circumstances. Concepts that tend to be used interchangeably: dependability, reliability, and availability. Do they all mean the same? Let's see.

In the literature—for example in Dependable computing and fault-tolerance2—dependability is defined as an umbrella concept that encompasses reliability, availability, safety, integrity, and maintainability. The IEC international vocabulary defines the dependability (of an item) as the "ability to perform as and when required". Something dependable is reliable AND of high availability; but also maintainable and safe.

What is reliability, more specifically? Reliability is the ability to perform a required function under given conditions for a given time interval; the continuity of correct service. There are some relevant reliability metrics such as MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair) which describe the reliability of a system in figures, as well as the FIT (Failures In Time) number, which is the number of failures that can be expected in one billion (10^9) device-hours of operation.

On the other hand, availability is the ability to be in a state to perform a required function, under given conditions, at a given instant of time, or after enough time has elapsed, assuming that the required external resources are provided. In short, availability is readiness for correct service. Another take states that availability is the degree to which a system is in an operable and committable state when it is called for at an unknown—i.e., a random—time. Availability tends to be also called uptime, although it is very easy to see uptime mixed up with reliability.

High availability tends to be specified and quantified in an unusual unit called nines. Percentages of a particular order of magnitude are sometimes referred to by the number of nines or "class of nines" in the digits. For example, electricity that is delivered without interruptions 99.999% of the time would have 5 nines availability, or class 5. This unit finds application in enterprise computing and telecommunication equipment, often as part of a service-level agreement. Mind that availability and the number of nines a service complies with is a somewhat slippery metric which can be hard to gauge, depending on the scope of the service and how the downtimes are—sometimes opportunistically—counted.

2 https://ieeexplore.ieee.org/document/532603.
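A quick numeric sketch ties these definitions together: a common estimate of steady-state availability is MTBF / (MTBF + MTTR). The MTBF and MTTR values below are invented for illustration.

# Steady-state availability from MTBF and MTTR, plus the implied yearly
# downtime. The figures are illustrative, not from any real spacecraft.
HOURS_PER_YEAR = 8766.0   # average year, matching the table's 36.53-day "one nine"

def availability(mtbf_h: float, mttr_h: float) -> float:
    return mtbf_h / (mtbf_h + mttr_h)

for mtbf, mttr in [(1000.0, 10.0), (5000.0, 2.0), (20000.0, 0.5)]:
    a = availability(mtbf, mttr)
    downtime_h = (1.0 - a) * HOURS_PER_YEAR
    print(f"MTBF={mtbf:>7.0f} h, MTTR={mttr:>4.1f} h -> "
          f"availability {a * 100:.3f}%, ~{downtime_h:.1f} h down/year")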


Table 9.1 Availability and downtime as a function of "class of nines"

Availability (%)              Downtime per year   Downtime per month   Downtime per week   Downtime per day (24 h)
90% ("one nine")              36.53 days          73.05 h              16.80 h             2.40 h
99% ("two nines")             3.65 days           7.31 h               1.68 h              14.40 min
99.9% ("three nines")         8.77 h              43.83 min            10.08 min           1.44 min
99.99% ("four nines")         52.60 min           4.38 min             1.01 min            8.64 s
99.999% ("five nines")        5.26 min            26.30 s              6.05 s              864.00 ms
99.9999% ("six nines")        31.56 s             2.63 s               604.80 ms           86.40 ms
99.99999% ("seven nines")     3.16 s              262.98 ms            60.48 ms            8.64 ms
99.999999% ("eight nines")    315.58 ms           26.30 ms             6.05 ms             864.00 µs
99.9999999% ("nine nines")    31.56 ms            2.63 ms              604.80 µs           86.40 µs

Table 9.1 shows some examples of high availability for different numbers of nines, with an indication of the downtime per relevant unit of time (year, month, week, day) which the number of nines represents.

A fundamental difference between reliability and availability is that reliability refers to failure-free operation during an interval, while availability refers to failure-free operation at a given instant of time, usually the time when a device or system is requested to provide a required function or service. Reliability is a measure that characterizes the failure process of an item and the probability of failure, while availability combines the failure process with the restoration or recovery process, and looks at the probability that at a given time instant the item is operational, independently of the number of failure/repair cycles already undergone by the item. This is interesting because it means that something can be considered reliable and yet of low availability.

Let's see an example using one of the most dreadful machines ever conceived: printers. Are printers available or reliable? You would probably say: neither. But, more accurately, provided we patiently supply them with all the elements they need (toner, paper, electricity, love, and comprehension) and we ensure the right configuration (IP address, drivers, etc.), they may work for a given amount of time, at least as long as all those conditions are met. This is somehow enough to call them "reliable"; remember that reliability is the ability to perform a function for a given period of time. But are printers available? Definitely not. Most of the time when you randomly need them, and you happen to be in a terrible urgency to get something printed—which the machine happens to be able to sense somehow—they'll turn their back on you.


Therefore, printers cannot be called dependable devices, for they are not reliable *and* available.

Designing for dependability requires a set of engineering measures which not only include using the right materials to ensure constructive reliability, but also ensuring that failures, should they happen, will be sorted out in good time in order to always keep the system in the right state to respond if needed. What are typical examples of dependable systems? A few very quickly pop up: life support equipment, surgery rooms, power grids, aircraft. The implications of these systems being undependable are intolerable. In order to make them dependable, a set of technologies and architectural decisions must be combined: alternative power sources, hot/cold redundancies, cross-strapping circuitry, self-diagnostics, failure detection and handling, etc.

But what about spacecraft? Is dependability a typical design driver? The answer is (no pun intended): it depends. Small satellites, for example for LEO missions, tend to be designed for cost-effectiveness, and it is because of this—among other factors—that their dependability performance can be, to say the least, mediocre. Small satellites may be equipped with redundancies on board, but this still does not imply high availability. Why? Principally because they typically require the intervention of a human operator to assess on-board anomalies and command the satellite back to its nominal state, which can only be done during the discrete windows of opportunity when the satellite is passing over its ground station. In short, LEO satellites' uptime—this means, being in a state of readiness to generate revenue when requested—is not precisely impressive, and rather a function of the independent probabilities of failure and contact with ground occurring around the same time: the only way to minimize downtime would be if the failure happens right at the time when a human eye is looking. Unlikely. Customers of LEO constellations must usually deal with low availability figures for constellation services: it can take several tries for a customer to obtain an acquisition of a location of interest because of satellites not being ready when the acquisition request arrives, missing good opportunities to capture relevant, time-sensitive information. No one would accept having to try several times to take a picture with their phones: a good reason why we take pictures is to capture something which is unlikely to happen again, at least in the same way. Of course, except for people who like to take pictures of their breakfast.

On the other hand, telecommunications satellites tend to be designed for high availability and high reliability, typically in the range of a "two and a half nines" figure (~99.5%3), which means having only around 2 days per year maximum of allowed downtime. SATCOM spacecraft are mandated to be dependable due to the fact that they typically serve many customers at once, given the multiple-access nature of their payloads. For SATCOM missions, failure can still be an option—space is hard after all—but their architectures allocate resources to ensure the system will maximize availability and be able to quickly go back to providing service, should internal subsystems act up.

3 https://artes.esa.int/european-data-relay-satellite-system-edrs-overview.


This is done by having automatic FDIR (failure detection, isolation, and recovery) software on board, combined with extensive redundancy in the avionics architecture.

In conclusion, system dependability is largely a design decision, and it requires the right combination of architectural flexibility and equipping the flight software with the proper decision-making power, to avoid depending on a human operator to decide what to do.
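As a cartoon of the FDIR idea, the sketch below limit-checks a few telemetry channels and maps violations to isolation and recovery actions without waiting for ground; all channel names, limits, and actions are invented for illustration.

# Cartoon of on-board FDIR: limit-check telemetry, then isolate and recover
# autonomously instead of waiting for ground. Channels, limits, and recovery
# actions are invented for illustration.
LIMITS = {
    "battery_voltage_V":  (22.0, 34.0),
    "obc_temp_C":         (-20.0, 60.0),
    "reaction_wheel_rpm": (-6000.0, 6000.0),
}

RECOVERY = {
    "battery_voltage_V":  "shed non-essential loads, enter safe mode",
    "obc_temp_C":         "switch to redundant on-board computer",
    "reaction_wheel_rpm": "disable wheel, reconfigure control to remaining wheels",
}

def fdir_step(telemetry: dict) -> list:
    actions = []
    for channel, value in telemetry.items():
        lo, hi = LIMITS[channel]
        if not lo <= value <= hi:
            actions.append(f"{channel}={value}: {RECOVERY[channel]}")
    return actions

sample = {"battery_voltage_V": 21.4, "obc_temp_C": 35.0, "reaction_wheel_rpm": 1200.0}
for action in fdir_step(sample) or ["all nominal"]:
    print(action)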

10 TL:DR; Frequently Asked Questions About Space

10.1 Q0: Why Launch a Metal Box into Space?

Answer: Satellites have the high ground and the advantage of being able to sweep and periodically revisit large areas of the globe compared to, say, aerial platforms. This finds multiple uses, notably Earth Observation (i.e., taking images of the planet using cameras or radars), or communications, where on-board antennas can have a footprint covering a vast area underneath. To take full advantage of their flight paths, satellites must be able to control their orientation in space to point these payloads—cameras and antennas—at areas of interest. The type of payload used is defined by the mission, with a mission being the reason why you are taking on the burden of launching a satellite.

10.2 Q1: What Are the Rules for Launching Something into Space?

Answer: There is a legal framework for doing so. In summary, states are responsible for their space activities and for the international registration of their space objects, and must accept liability for any damages they may cause. International space law also requires that each state ensure that its national space activities are conducted in accordance with international law, even when these national activities are conducted by non-state actors such as corporations, institutions, universities, and amateurs. Also, the use of the radio spectrum has to be authorized and coordinated in order to avoid interference.


10.3 Q2: How Are Satellites Designed and Developed? Also, Is It Done Differently in NewSpace Versus Classic Space?

Answer: Satellites are developed more or less like any other vehicle (like a car, a drone, an aircraft, or a truck). The design process is separated into stages. During early stages, requirements are collected, candidate architectures are evaluated, and make-versus-buy evaluations are performed where designers choose what to procure and what to make in-house. As the design matures, some prototypes are put together and tested under simulated conditions, which can lead to a general rethinking of the architecture if results are unsatisfactory, resetting the process back to a previous stage. The design process is highly iterative, and once "production-grade" maturity is reached, the more fluid stages are fully left behind. The cost of developing a satellite is usually split between "recurrent" (things that have to be done every time we want to manufacture a satellite) and "non-recurrent" (things done only during conceptual design and development, never to be done again).

The design process is not dramatically different between NewSpace and "classic" space. The difference lies in the overall budgets for the materials, the intensity and time spent in the development stages, and the resources available for testing, simulating, and verifying the designs. NewSpace makes extensive use of non-radiation-hardened, Commercial-off-the-Shelf (COTS) devices and components. Fundamentally, NewSpace and classic space differ in the risks taken.

10.4 Q3: What's Typically Under the Hood of a Satellite?

Answer: Mostly computers and cables connecting those computers together. But also solar panels, batteries, radios, antennas, fuel tanks, pipes, valves, cameras, sensors, and actuators, and at times some other more specific or eccentric equipment like pressure vessels or radioisotope thermoelectric generators (RTGs). And of course, a structure which provides the housing that keeps everything together during launch and operations and at the right temperatures throughout the lifetime of the satellite. And lots of software to make the thing perform as a functional, integral unit. See Chap. 6 for details.

10.5 Q4: How Are Satellites Launched?

Answer: By means of rocket launchers, which provide the velocity required to reach a stable orbit. It is quite a rough ride. Rockets vibrate profusely and expose the satellites they carry to high shocks and accelerations; satellites must be designed to survive these harsh conditions during launch. Rockets are split into stages, and the satellites sit on the last stage. This stage eventually gently pushes the satellites away so they can become independent and fly on their own. That gentle push can make the satellite spin, which is an interesting way of starting your first day at work in space: tumbling randomly, all alone. It is important to clarify that rockets do not always take off from the ground (even though it's the most common approach).


There are air-based launchers, sea-based launchers, and submarine-based launchers. Even balloon-based launchers. See Chap. 4 for more details.

10.6 Q5: Ok the Thing Is up in Space, Now What?

Answer: Now comes the part where the satellite needs to be powered on, stabilized (detumbled), and brought up to its fully functional state where it can deliver a service or a product as soon as possible, because time is money. This includes booting computers up, deploying solar panels (if needed), pointing to the Sun to charge the battery, setting up the propulsion system, checking that thermal ranges are nominal, calibrating instruments, and more. All in all, making sure everything is functional after the launch. This may also include injecting the satellite into a different orbit compared to the orbit the launcher delivered it in, which is a complex thing because high amounts of propellant are used if the target orbit is too different from the current one, which makes the satellite experience big changes in its dynamic properties (center of gravity, tank fill factors, etc.). All these activities are called commissioning or LEOP (Launch and Early Orbit Phase). After LEOP is done, the satellite can start providing its nominal service. There are always surprises during LEOPs that engineers have to sort out.

10.7 Q6: How Do Satellites Orient Themselves in Space?

Answer: They determine their orientation (called attitude) by observing a certain set of variables of interest, such as the position of stars, the magnetic field, the angular speed, and the position of the Sun, all measured by a set of sensors. Then, after determining their pointing situation, satellites compute the amount of torque needed to minimize the difference between the current orientation and the desired one, and apply the torques accordingly using actuators. To define the amount and type of correcting action, a control "law" is coded somewhere in the on-board software. Typical sensors used are star trackers (cameras facing the sky programmed to identify star patterns, i.e., celestial navigation), gyroscopes (to gauge angular velocity), sun sensors (which help to assess where the Sun is), and magnetometers, which measure the Earth's magnetic field, among others. The typical actuators used in Low Earth Orbit are reaction wheels and magnetorquers. The former applies a reaction torque to the satellite body by means of accelerating a rotating mass (a flywheel). The latter exploits the interaction (a torque) between a magnetic field created by an electric current applied through a coil and the Earth's magnetic field. Thrusters are also sometimes used as actuators. The control law is in charge of computing the amount of torque needed to achieve a certain desired orientation, and it applies such torque by distributing it to the available actuators. More in Sect. 6.4.
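A toy, one-axis illustration of such a control law is sketched below as a simple proportional-derivative (PD) controller; the gains, inertia, and timestep are arbitrary, and a real attitude control system works in three axes with quaternions, actuator saturation, and torque allocation.

# Toy one-axis PD attitude control law: torque from pointing error and rate.
# Gains, inertia, and timestep are arbitrary illustrations.
KP, KD = 0.02, 0.2          # controller gains
INERTIA = 4.0               # kg*m^2 about the controlled axis
DT = 0.5                    # s

angle, rate = 0.20, 0.0     # rad, rad/s: start 0.2 rad off target (target = 0)
for step in range(200):
    error = -angle                        # desired attitude is 0 rad
    torque = KP * error - KD * rate       # PD law: correct error, damp rate
    rate += torque / INERTIA * DT         # integrate rigid-body dynamics
    angle += rate * DT
    if step % 40 == 0:
        print(f"t={step * DT:5.1f} s  angle={angle:+.4f} rad")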

10.8 Q7: How Are Satellites Operated?

Answer: There’s a data link between the satellite and the ground, where a radio signal is manipulated (modulated) by a data signal in order to transfer bits of said data across the channel. Links can be bidirectional, with an uplink (from the ground to the satellite) and a downlink (from the satellite to the ground). The uplink is typically used to send commands (orders or actions) to the satellite, whereas the downlink is typically used to receive telemetry (health data) and payload data (the data which pays the bills). Links can also be unidirectional, as is typically the case for payload data links, where the satellite broadcasts data to a ground receiver without any feedback. Space radio links can work at different “speeds” (as in, amount of data transferred per unit of time, or bit rate), but higher data rates do not come without penalties: higher bit rates require higher bandwidths, and higher bandwidths can push the envelope of the overall radio system design. Commercial satellites establish their radio links towards third-party ground stations, operated by companies offering the infrastructure (big antennas with tracking capabilities, and modems), so that satellite operators do not have to build this heavy infrastructure themselves. Schematically, satellite operators connect to the ground stations’ modems (usually by means of a private network over the Internet), where they receive the bytes coming from the satellite and send their bytes up to the satellite. The modulation and demodulation are handled by the ground station supplier; this includes tracking the satellite as it flies over the antenna, sweeping the frequency to lock onto the satellite carrier, synchronizing and reconstructing the data from the radio channel, and storing the payload data in local data recorders or, more recently, in cloud-based servers. The way bits are formatted into commands and telemetry tends to be standardized, so that satellites and ground stations can exchange data reliably despite the inherently noisy nature of radio links. The true semantic “meaning” of the data remains proprietary, and every operator parses/decodes the data according to their own format using their private tools. A considerable share of routine satellite operations is still manual; that is, it involves large teams of human operators working 24 × 7 (Fig. 10.1).

Fig. 10.1 A ground antenna (photo by Donald Giannatti on Unsplash)
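As an illustration of what “formatting bits” means in practice, here is a hypothetical sketch of framing a telecommand in Python. Real missions typically follow CCSDS standards; the exact field layout below (sync word, APID, sequence count, length, CRC) is invented for the example:

```python
import struct

# A hypothetical, simplified telecommand frame: a sync marker so the
# receiver can find the frame start, addressing and sequencing fields,
# and a CRC so corrupted frames can be rejected on the noisy radio link.

SYNC = 0x1ACFFC1D  # attached sync marker, assumed for this example

def crc16(data: bytes) -> int:
    """CRC-16/CCITT-FALSE, a checksum commonly used on space links."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def frame_command(apid: int, seq: int, payload: bytes) -> bytes:
    # Big-endian: 32-bit sync, 16-bit APID, 16-bit sequence, 16-bit length.
    header = struct.pack(">IHHH", SYNC, apid, seq, len(payload))
    body = header + payload
    return body + struct.pack(">H", crc16(body))

# Example: command code 0x42 (hypothetical) sent to application ID 100.
frame = frame_command(apid=100, seq=1, payload=bytes([0x42]))
print(frame.hex())
```

The semantics of that payload byte (what command 0x42 actually does) would be exactly the proprietary part each operator keeps in their private tools.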

10.9 Q8: What Does the Software on Board of a Satellite Do Exactly?

Answer: Multiple things. Mainly, software acts as an on-board “manager”: it collects, stores, and retrieves health data from all the equipment; it processes the commands sent from the ground; and it flags, logs, and handles faults while preventing them from spreading. Perhaps most importantly, the on-board software implements on-board behavior, or “business logic”, by running a set of state machines.¹ Behavior depends heavily on the architecture, the type of mission, and the equipment present, although a great deal of software can be reused from mission to mission, differing only in the uppermost “application” layer.

¹ A state machine is an abstract computational machine that can be in exactly one of a finite number of states at any given time. The FSM changes from one state to another in response to inputs; such a change is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition.
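To make the footnote concrete, here is a toy sketch of the kind of mode state machine flight software runs. The states, events, and transitions are invented for illustration and do not come from any real mission:

```python
from enum import Enum, auto

class Mode(Enum):
    SAFE = auto()
    NOMINAL = auto()
    PAYLOAD_OPS = auto()

# (current state, event) -> next state; any pair not listed is ignored.
TRANSITIONS = {
    (Mode.SAFE, "ground_cmd_recover"): Mode.NOMINAL,
    (Mode.NOMINAL, "ground_cmd_start_payload"): Mode.PAYLOAD_OPS,
    (Mode.NOMINAL, "fault_detected"): Mode.SAFE,
    (Mode.PAYLOAD_OPS, "fault_detected"): Mode.SAFE,
}

class ModeManager:
    def __init__(self):
        self.mode = Mode.SAFE  # satellites typically boot into a safe mode

    def handle(self, event: str) -> Mode:
        # Look up the transition; stay in the current mode if none matches.
        self.mode = TRANSITIONS.get((self.mode, event), self.mode)
        return self.mode

mm = ModeManager()
print(mm.handle("ground_cmd_recover"))        # Mode.NOMINAL
print(mm.handle("ground_cmd_start_payload"))  # Mode.PAYLOAD_OPS
print(mm.handle("fault_detected"))            # back to Mode.SAFE
```

Note how any fault event funnels the machine back to a safe mode: this is the “preventing faults from spreading” behavior mentioned above, expressed as transitions.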

10.10 Q9: How Do Satellites Generate Power?

Answer: There are several ways, but by far the most popular is solar power: solar panels capture photons from the Sun, convert this electromagnetic energy into electrical power, and store some of that power in batteries. Solar panels are tricky devices. They are largely inefficient, which means large areas are needed to generate power for even small satellites. Since the area needed usually exceeds the area available on the satellite body, and since rockets constrain the volume a satellite can occupy during launch, engineers need to “pack” (fold, stow) the panels and make them “unpack” (unfold, deploy) in orbit, which is a risky operation: if the panels fail to deploy, the mission can be severely affected, since not enough power will be generated. Solar panels also degrade substantially throughout the mission lifetime, so their ability to generate power at beginning of life is not the same as at end of life, and design engineers must take this aging effect into account when dimensioning the power subsystem (Fig. 10.2).

Fig. 10.2 A sketch illustrating deployable solar panels
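As a back-of-the-envelope illustration of that dimensioning exercise, here is a small sketch. The cell efficiency, degradation rate, lifetime, and power demand are assumed, plausible numbers, not values from any specific satellite:

```python
# Sizing a solar array for end-of-life (EOL) power, with invented numbers.

SOLAR_FLUX = 1361.0      # W/m^2, solar constant near Earth
EFFICIENCY = 0.30        # assumed cell efficiency at beginning of life
DEGRADATION = 0.025      # assumed 2.5% power loss per year of mission
LIFETIME_YEARS = 7
POWER_REQUIRED = 500.0   # W the satellite must still have at end of life

# Power per square meter at beginning of life (BOL)...
p_bol = SOLAR_FLUX * EFFICIENCY
# ...and what remains after years of radiation and thermal cycling.
p_eol = p_bol * (1 - DEGRADATION) ** LIFETIME_YEARS

area = POWER_REQUIRED / p_eol
print(f"BOL: {p_bol:.0f} W/m^2, EOL: {p_eol:.0f} W/m^2")
print(f"Array area for {POWER_REQUIRED:.0f} W at EOL: {area:.2f} m^2")
```

The point of the exercise: sizing the array for beginning-of-life output would leave the satellite power-starved in its final years, so the EOL figure is the one that drives the design.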

10.11 Q10: How Big and Heavy Are Current Commercial Satellites?

Answer: It is largely mission dependent. They can be as small as a cube of a few kilograms, or as large as 6000 to 8000 kg and several meters wide (note that I am not counting here the ISS, Hubble, or big scientific spacecraft). Currently, the payload heavily influences the on-board resources needed, impacting SWaP (Size, Weight, and Power), but also dictating the overall physical configuration: for example, where solar panels, sensors, and thermal radiators need to be situated. This is the more “classic” rationale of space system configuration, where payload and bus are tightly coupled and the bus is designed as a one-off. More modern approaches include designing multi-mission buses which can accommodate different payloads.

10.12 Q11: What Is the Lifetime of a Satellite?

Answer: Lifetime is a design subject, a requirement, and therefore mission dependent. Satellite lifetime is more about probabilities than certainties. Satellite designers make decisions during the design process to ensure that the probability of the satellite surviving the required amount of time in the specified orbital environment reaches a certain confidence level. Things can still fail due to early failures, which are statistically possible.² Lifetime design decisions may involve duplicating elements on board (adding redundancies), adding shielding, thermally isolating and coupling, and making the software capable of keeping the satellite flying within a safe envelope at all times and of handling faults in a way that they do not propagate. Extensive testing and verification while the satellite is still sitting on the ground are key. Single-string (that is, non-redundant) architectures can be used in space with effective risk management, architectural flexibility, extensive testing, use of proven designs, and a rigorous approach to fault protection.

² The bathtub curve is a failure rate graph used in reliability engineering and deterioration modeling which quantitatively depicts the initial non-negligible probability of “infant mortality” of components and parts of a system.
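The arithmetic behind those redundancy decisions is simple probability. A minimal sketch, with invented reliability figures:

```python
# Basic reliability arithmetic behind redundancy trade-offs.
# The reliability values below are invented for illustration.

def series(*r):
    """All units must work: reliabilities multiply."""
    out = 1.0
    for x in r:
        out *= x
    return out

def parallel(r, n=2):
    """n identical redundant units; the function fails only if all fail."""
    return 1 - (1 - r) ** n

r_single = 0.95                  # one on-board computer surviving the mission
r_dual = parallel(r_single, 2)   # a redundant pair of the same computer
print(f"Single string: {r_single:.4f}, duplicated: {r_dual:.4f}")

# Redundancy helps one weak unit, but a chain is only as strong as its
# product: even three quite reliable units in series lose ground.
print(f"Three 0.99 units in series: {series(0.99, 0.99, 0.99):.4f}")
```

This is why duplication is applied selectively: each added unit buys survival probability in one place while adding mass, cost, and complexity everywhere else.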

10.13 Q12: What Can Affect the Lifetime of a Satellite?

Answer: Notably the space environment, which is wild (see Chap. 3). As we said there, there are high concentrations of high-energy particles moving around orbits. We are fairly (and luckily) protected at the surface of the Earth by the atmosphere and the magnetosphere, but satellites do not have this luxury. The semiconductors we fly on board satellites are delicate microscopic structures prone to disruption from the environment. Radiation can affect them in different ways, potentially inflicting permanent damage but also causing momentary upsets which, if not properly handled, may evolve into dangerous situations. Sudden software reboots and flipped memory bits are frequent, and more frequent around some specific orbital regions. But radiation is not the only factor that can affect lifetime. Thermal conditions in orbit change rapidly, because most satellites experience an aggressive, repetitive cycle of sunlight and shadow (eclipse). Such a cycle can create ample temperature swings on board that can damage equipment if unhandled, and since materials expand and contract with temperature, the structure and equipment on board can experience thermoelastic stresses which can degrade, impair, or even kill a mission. Last but not least, software bugs may affect mission success.
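One classic software defense against flipped bits, shown here purely as an illustration (the text above does not prescribe any particular technique), is triple modular redundancy (TMR): keep three copies of a critical value and let the majority win on every read:

```python
# A minimal sketch of triple modular redundancy (TMR) voting against
# radiation-induced bit flips in memory.

def tmr_read(copies):
    """Return the bitwise majority of three stored copies of a word."""
    a, b, c = copies
    return (a & b) | (a & c) | (b & c)

copies = [0b10110100] * 3
copies[1] ^= 0b00001000   # a single event upset flips one bit in one copy

value = tmr_read(copies)
print(f"voted value: {value:08b}")  # 10110100: the upset is outvoted

# A "scrubber" task would then rewrite all three copies with the voted
# value, so that single-bit errors do not accumulate over time.
copies = [value] * 3
```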

10.14 Q13: What Is an Orbit Exactly?

Answer: It is the trajectory the satellite follows as it flies around the globe. It is predominantly defined by the launch, although Earth orbits change and drift substantially as time goes by post-launch. They drift due to perturbations from the environment (Earth’s oblateness, atmospheric drag, the gravitational pull of other celestial bodies, solar radiation, etc.), but satellites may also purposely modify their orbits on demand. They do so by applying forces in specific directions, around points of interest of the orbit, using the thrusters they carry. Note that not all satellites are equipped with the capability of changing orbits. For those without propulsion, their orbits are just what they are: they go with the flow as perturbations run the show and their orbits happily drift. Note also that the chosen orbit may greatly affect an Earth observation mission, since it defines the way the satellite will revisit certain areas of the globe.
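For a feel of the numbers, the period of a circular orbit follows directly from Kepler’s third law, T = 2π√(a³/μ). A minimal sketch using standard Earth constants:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def period_minutes(altitude_km: float) -> float:
    """Orbital period of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_km * 1000.0   # semi-major axis = orbit radius
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60

for h in (550, 1200, 20_200, 35_786):
    print(f"{h:>6} km -> {period_minutes(h):7.1f} min")
# ~95 min at 550 km (typical LEO); ~24 h at the geostationary altitude.
```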


10.15 Q14: Why Are Orbits Crowded and How Is This an Issue?

Answer: Not all orbits are crowded, and luckily the Universe is vast. There are, though, some particular Earth orbits which are quite popular and, commercially speaking, more interesting. For example, polar low Earth orbits are chosen by the majority of Earth observation organizations because a satellite in such an orbit can sweep the globe in a short time by exploiting Earth’s rotation to its advantage. As satellite operators fill the most interesting orbits with their assets, the probability of conjunction (i.e., crashing into or coming uncomfortably close to another object) increases. There is no worldwide space traffic control authority as of today, so each satellite operator is on its own when it comes to securing its property in orbit. If the conjunction risk involves another functional satellite, companies can coordinate a joint course of action privately. Satellites are launched under the promise that they will deorbit (as in, reenter the atmosphere) within a maximum specified timeframe. The constantly increasing number of satellites being launched is raising concerns about the risk of a cascade effect called the Kessler Syndrome, which could render space activities in some specific orbits very difficult, including human spaceflight.

10.16 Q15: Why Are Satellites Assembled in Clean Rooms?

Answer: First, to keep particles in the air under control. Dust is everywhere; you just cannot get a dust-free room on Earth, but you can keep the particle count per cubic centimeter comparatively low, and that is what clean rooms are about. Why do dust and satellites not get along? First, optics are very sensitive to dust. Payload cameras, star trackers: you don’t want their lenses affected by unwanted micrometer-sized particles, putting the whole mission at risk; a star tracker with a contaminated lens could be unable to track real stars in orbit. Mechanisms and electronics can also be affected by airborne impurities. Second, ESD (electrostatic discharge) may damage satellite electronics beyond repair. What is more, ESD may not break electronics right away but instead degrade their performance, only for them to finally give up in orbit. Therefore, clean rooms are strictly ESD-controlled environments. They maintain a constant temperature and humidity, which helps to minimize ESD risks, and their floors and equipment are specially constructed and chosen so that static electricity has a hard time building up. Clean rooms do not keep the air clean with sophisticated air-renovation systems alone; they also need the collaboration of the workers. Everyone has to move around clean rooms wearing special clothing, masks, hairnets, and shoe protection (Fig. 10.3).


Fig. 10.3 A team in action in a cleanroom (photo by Laurel and Michael Evans on Unsplash)

10.17 Q16: How Are Satellites Currently Distributed Across Different Orbits?

Answer: Let’s first categorize the orbits. In general, orbits are categorized as Low Earth Orbits, Medium Earth Orbits, and Geostationary Orbits. There are other, less common orbits as well, which are out of the scope of this text.

• LEO (Low Earth Orbit): The term “LEO region” is used for the area of space below an altitude of 2000 km (1243 mi), approximately one-third of the radius of Earth. Objects in orbits which pass through this area, even if they have an apogee further out or are sub-orbital, are carefully tracked because they present a collision risk to the many satellites in LEO. All crewed space stations to date have been in LEO. From 1968 to 1972, the Apollo program’s lunar missions sent humans beyond LEO; since the end of the Apollo program, there have been no human spaceflights beyond LEO.
• MEO (Medium Earth Orbit): MEO is an orbit with an altitude between 2000 km (1243 mi) and 35,786 km (22,236 mi) above sea level. The boundary between MEO and LEO is an arbitrary altitude chosen by accepted convention. All satellites in MEO have an orbital period of less than 24 h, with the minimum period (for a circular orbit at the lowest MEO altitude) about 2 h.


• GEO (Geostationary Orbit): A geostationary orbit, also referred to as a geosynchronous equatorial orbit (GEO), is a circular orbit 35,786 km (22,236 mi) in altitude above Earth’s equator (42,164 km from Earth’s center), following the direction of Earth’s rotation. An object in such an orbit has an orbital period equal to Earth’s rotational period, so to ground observers it appears motionless, in a fixed position in the sky. The first satellite to be placed in this kind of orbit was launched in 1963. Communications satellites are often placed in geostationary orbit so that ground-based satellite antennas do not have to track them but can be pointed permanently at the position in the sky where the satellites are located. Note that geostationary satellites’ orbits drift and must be periodically adjusted, which consumes considerable amounts of propellant and affects their lifetime. Weather satellites are also placed in this orbit for real-time monitoring and data collection.

Orbit distribution:³ ⁴ The following figures show the greatly uneven distribution of satellites across the orbits described above. Fig. 10.4 shows the distribution of satellites over a wide range, from very low Earth orbits up to 50,000 km apogee. There is a big concentration of satellites up to roughly 1300 km apogee, then very few, until a spike appears above 35,000 km apogee: these are the geostationary satellites used for telecommunications. Fig. 10.5 shows the distribution between very low Earth orbits and 2000 km. We observe a crowd around 550 km (basically Earth observation satellites and Starlink), and then a spike around 1200 km, belonging to the OneWeb constellation. Finally, Fig. 10.6 gives a zoomed-in perspective on LEO, between very low Earth orbits and 700 km, showing again the spike formed by Sun-synchronous Earth observation (EO) satellites and the Starlink constellation.

³ Source: https://www.ucsusa.org/resources/satellite-database.
⁴ The number of satellites in LEO is an approximation and is continuously increasing.
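As a quick sanity check of the geostationary figures quoted above (42,164 km from Earth’s center, 35,786 km altitude), one can invert Kepler’s third law for an orbital period of one sidereal day, a = (μT²/4π²)^(1/3). A minimal sketch using standard constants:

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86_164.0905   # sidereal day, s (Earth's rotation period)
R_EQUATOR = 6_378_137.0    # Earth's equatorial radius, m

# Semi-major axis of the circular orbit whose period is one sidereal day.
a = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"orbit radius: {a / 1000:,.0f} km")               # ~42,164 km
print(f"altitude:     {(a - R_EQUATOR) / 1000:,.0f} km") # ~35,786 km
```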


Fig. 10.4 Distribution of satellites for altitudes between 0 and 50,000 km

Fig. 10.5 Distribution of satellites for altitudes between 0 and 2000 km (LEO)


Fig. 10.6 Distribution of satellites for altitudes between 400 and 700 km (LEO)