Accelerator Technology: Applications in Science, Medicine, and Industry 3030623076, 9783030623074

This book explores the physics, technology and applications of particle accelerators. It illustrates the interconnection


English Pages 378 [387] Year 2020



Table of contents :
Preface
Acknowledgements
Contents
1 Introduction
2 Technology
2.1 Vacuum
2.1.1 Pumping Technologies for the UHV
2.1.2 Pumping Systems and Vacuum Vessels
2.2 Accelerators
2.2.1 Direct-Current Driven
2.2.2 Alternating-Current Driven
2.2.3 Laser and Plasma
2.2.4 Electric and Spatial Efficiency of Accelerators
2.3 Ion- and Electron Beam Optics
2.3.1 Emittance and Betatron Function
2.3.2 Beam Optical Elements
2.3.3 Diagnostic Elements
2.4 Electron and Ion Sources
2.5 Charged and Neutral Particle Detectors
2.6 Targets
2.7 Radiation Protection
2.7.1 Hazards for Man (and Machine)
2.7.2 Avoidance Strategies in Plant Conception
2.7.3 Shielding
References
3 Interaction of Particle Beams and Matter
3.1 Absorption and Reactions of Photons
3.2 Range and Stopping of Charged Particles
3.3 Nuclear Reactions
3.3.1 Cross-Sections
3.3.2 Kinematics
3.4 Depth- and Stopping Dependent Reactions
3.5 Computer Modelling
References
4 Secondary Particle Generation with Accelerators
4.1 Electrons, Atoms, Molecules, and Ions
4.2 Neutrons
4.2.1 Accelerator Neutron Sources
4.2.2 The Specific Energy Efficiency
4.3 Photons
4.3.1 X-ray and Bound Electron Sources
4.3.2 Synchrotron Sources
4.3.3 Free-Electron Laser
4.4 Particles of the Standard Model and Anti-matter
References
5 Technical Applications
5.1 Generation of α-β-γ-n Sources and Activation
5.1.1 Pathways on the Nuclide Chart
5.1.2 Examples of α, γ, β−, β+,n Sources from Accelerators
5.1.3 Comparison to Production by Neutrons
5.1.4 Optimization of Production and Cost Efficiency
5.2 Rare Isotope and Radioactive Decay Tracers
5.3 Material Modification
5.3.1 Doping by Implantation and Activation
5.3.2 Welding, Cutting, and Additive Manufacturing
5.3.3 Surface Modifications and Nano-Machining
References
6 Nuclear Medicine
6.1 Imaging Diagnostics and Their Information Properties
6.1.1 X-ray Absorption Imaging and Tomography
6.1.2 Emission-Computed-Tomography
6.2 Radiation Therapy
6.2.1 External Beam Therapy
6.2.2 Internal Metabolic Radionuclide Therapy
6.2.3 Selectivity from a Physical Perspective
References
7 Material Analysis and Testing
7.1 Ion-, Electron-, and Photon Beam Analysis
7.1.1 Physical Concepts, Detection Limits, Accuracy and Uncertainties
7.1.2 X-ray Absorption and Scattering Analysis
7.1.3 Electron Beam Microscopy
7.1.4 Secondary Ion Mass-Spectrometry
7.1.5 MeV Ion-Beam Analysis
7.1.6 Accelerator Mass Spectrometry
7.2 Neutron Based Analysis
7.3 Mobile Systems
7.4 Radiation Damage
References
8 Energy Production and Storage
8.1 Spallation Fission Reactors
8.2 Nuclear Batteries
8.3 Accelerator Based Nuclear Fusion
8.3.1 Neutral Target Reactors
8.3.2 Ion-Beam Collider Reactors
References
Index


Particle Acceleration and Detection

Sören Möller

Accelerator Technology Applications in Science, Medicine, and Industry

Particle Acceleration and Detection Series Editors Alexander Chao, SLAC, Stanford University, Menlo Park, CA, USA Frank Zimmermann, BE Department, ABP Group, CERN, Genève, Switzerland Katsunobu Oide, KEK, High Energy Accelerator Research Organization, Tsukuba, Japan Werner Riegler, Detector group, CERN, Genève, Switzerland Vladimir Shiltsev, Accelerator Physics Center, Fermi National Accelerator Lab, Batavia, IL, USA Kenzo Nakamura, Kavli IPMU, University of Tokyo, Kashiwa, Chiba, Japan

The series “Particle Acceleration and Detection” is devoted to monograph texts dealing with all aspects of particle acceleration and detection research and advanced teaching. The scope also includes topics such as beam physics and instrumentation as well as applications. Presentations should strongly emphasize the underlying physical and engineering sciences. Of particular interest are – contributions which relate fundamental research to new applications beyond the immediate realm of the original field of research – contributions which connect fundamental research in the aforementioned fields to fundamental research in related physical or engineering sciences – concise accounts of newly emerging important topics that are embedded in a broader framework in order to provide quick but readable access of very new material to a larger audience The books forming this collection will be of importance to graduate students and active researchers alike.

More information about this series at http://www.springer.com/series/5267

Sören Möller

Accelerator Technology Applications in Science, Medicine, and Industry


Sören Möller Forschungszentrum Jülich GmbH Jülich, Germany

ISSN 1611-1052 ISSN 2365-0877 (electronic) Particle Acceleration and Detection ISBN 978-3-030-62307-4 ISBN 978-3-030-62308-1 (eBook) https://doi.org/10.1007/978-3-030-62308-1 © Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Or why you have to read this book: Accelerators often seem like an abstract idea, a toy of scientists that grows bigger and bigger with every billion euros invested, a technology as far away from our lives as quantum physics. This book shows you the secret players in the accelerator field that you might not have noticed so far. The workhorses in science, industry, and medicine do not toil away in distant fundamental research; they improve our lives on a daily basis. The concept of particle accelerators is now about 120 years old. The technology began in 1897 with the cathode ray tube, later used in TV sets. The technical climax is currently the Large Hadron Collider (LHC) at CERN, a device of 26.7 km length. Within these 120 years, the maximum energy of the accelerated particles was increased from some keV to several TeV. This incredible increase by a factor of more than a billion compares well to the advances made in the computing power of supercomputers. In between these extremes of particle energy nowadays lies a manifold of devices used in science, industry, and medical applications that is worth learning more about. Efficient and powerful accelerators have become a key technology for a multitude of applications, although often unnoticed. Silicon wafers, the basis for modern micro-chips, are usually prepared by accelerators, called implanters in this context. Cancer can be treated more effectively, yet with fewer side effects than ever before, by proton beams or accelerator-produced isotopes. In the development of new combustion engines, accelerator-induced radioactivity visualizes wear processes. In material science, accelerators open a view to the atomic scale in the form of electron microscopes. Accelerator development and applications have therefore become an important industrial and development factor, exploited by numerous companies worldwide. All of this connects in accelerator technology. Accelerator technology is in many aspects a substitute for classical fission technology. For example, in scientific applications of neutrons in Europe, accelerator-based neutron sources will soon be more common than fission reactor sources. In nuclear medicine, radioactive isotopes produced by accelerators are already today administered to patients more frequently than those produced in fission reactors. The advantages responsible for this success are in particular related to the low secondary costs of accelerators: little or even no side activity and nuclear waste are produced, and no fissile fuel is required. Therefore, the laboratories can be compact, close to the application, and de-centralized, e.g. directly in a hospital. In conclusion, accelerators have the potential to grant nuclear industry and nuclear applications new acceptance after the extensive discussions on their safety, especially in Europe.

The knowledge of accelerator technology is taught exclusively in the context of physics-related studies, while the knowledge of accelerator applications is part of mechanical, electrical, material, and nuclear engineering, medicine, and biology studies. This book intends to give an overview of all of these aspects to students, scientists, engineers, and users. In contrast to a physics book focusing on a specific topic, this book discusses accelerator technology in a broad view, allowing the reader to see the connections between the different aspects and applications. It combines the technological and physical basics of accelerator applications, covering the accelerator, the beam–matter physics, and the application side. This is a book for physics-interested people with a strong tendency towards hands-on work, device layout, and application layout. Of course it is still a book, so expect some equations and fundamental considerations; even when working hands-on you have to know what you are doing. Technical layouts, technological limits, economic aspects, and radiation protection will be covered even more intensely than physics. This book will present traditional and emerging fields of accelerator applications and, hopefully, inspire the reader to totally new developments. The basics of accelerator and elementary particle physics will be briefly discussed, but we will not go deeply into collider physics or the standard model. There are enough dedicated books on the market for these topics, to which the reader is sincerely referred. Instead of focusing on the development of accelerators for fundamental research with ever-growing beam energy and luminosity, we will take a look at robust and industrially established technology. These devices cover beam energies from some keV to about 250 MeV and beam powers from µW to MW, with alternating-current (AC) and direct-current (DC) driving for accelerating ions or electrons, and they combine with clever end-stations to form valuable applications. The terms capabilities, efficiency, productivity, and costs dominate these applications; in contrast to other accelerator-related literature, they will be discussed in this book and identified as positive drivers of innovation and progress. By far most of these applications work on the basis of the stopping and attenuation of charged particle beams in matter, their nuclear reaction processes with matter, and nuclear inventories. By combining these physical approaches with the chain of technologies involved in accelerator applications up to the final products, the book can provide the reader a unique insight one cannot get from reading a dozen specific books on these topics. The introduction of modern computer codes for describing the individual processes, alongside application examples, will enable the reader to develop their own ideas. Basics of radiation protection help to keep these ideas safe.
Finally, the overview of applications in science, industry, and medicine allows integrating the knowledge to see the big picture beyond the individual fields.


So if you want to become an accelerator engineer, or you are just curious to understand how your latest PET scan was performed and what it took to apply it, continue and explore this book! Jülich, Germany

Sören Möller

Acknowledgements

I want to thank some of my colleagues for the interesting and inspiring discussions on the different topics. I want to thank Rahul Rayaprolu for his knowledge of nuclear reactions and applications and his support in literature research. Thanks to Tobias Wegener for his expertise in nuclear and chemistry matters. Thanks to Marcin Rasinski for giving me a deeper insight into electron microscopy and providing numerous excellent images. Thanks to Timo Dittmar and Christian Scholtysik for our time in the ion-beam analysis laboratory. Thanks to Daniel Höschen for his inspiring engineering qualities. Thanks to Ulrich Rücker and Johannes Baggemann for our joint developments towards neutron sources and their input regarding neutron science. Thanks to Tom Wauters for sharing his knowledge on RF technology. Thanks to Antje Vollmer for the interesting and fruitful discussion of synchrotron light sources. Special thanks to Christian Linsmeier for his extensive support of my scientific career and accelerator competences. Without more words required: thanks to my oldest and best friends Giovanni Nocera, Florian Schmitz, and Simon Vogt. Thanks to my soccer group around Bernd Düppmann and Markus Towara for maintaining a healthy body for my brain and for our good time off the soccer pitch. I thank my mother for her general and unconditional support. Thanks to my father for passing on his talents and making me strive for even more. Finally yet most importantly, I thank Kathrin Pytel and our families for bearing the weight of my world during the exhaustive write-up work and for giving me Max. Jülich, Germany

Sören Möller


Contents

1 Introduction 1
2 Technology 5
  2.1 Vacuum 6
    2.1.1 Pumping Technologies for the UHV 9
    2.1.2 Pumping Systems and Vacuum Vessels 12
  2.2 Accelerators 15
    2.2.1 Direct-Current Driven 18
    2.2.2 Alternating-Current Driven 24
    2.2.3 Laser and Plasma 38
    2.2.4 Electric and Spatial Efficiency of Accelerators 44
  2.3 Ion- and Electron Beam Optics 46
    2.3.1 Emittance and Betatron Function 49
    2.3.2 Beam Optical Elements 56
    2.3.3 Diagnostic Elements 64
  2.4 Electron and Ion Sources 71
  2.5 Charged and Neutral Particle Detectors 81
  2.6 Targets 88
  2.7 Radiation Protection 95
    2.7.1 Hazards for Man (and Machine) 100
    2.7.2 Avoidance Strategies in Plant Conception 107
    2.7.3 Shielding 113
  References 121
3 Interaction of Particle Beams and Matter 123
  3.1 Absorption and Reactions of Photons 129
  3.2 Range and Stopping of Charged Particles 133
  3.3 Nuclear Reactions 138
    3.3.1 Cross-Sections 142
    3.3.2 Kinematics 147
  3.4 Depth- and Stopping Dependent Reactions 150
  3.5 Computer Modelling 156
  References 160
4 Secondary Particle Generation with Accelerators 163
  4.1 Electrons, Atoms, Molecules, and Ions 165
  4.2 Neutrons 170
    4.2.1 Accelerator Neutron Sources 172
    4.2.2 The Specific Energy Efficiency 177
  4.3 Photons 179
    4.3.1 X-ray and Bound Electron Sources 181
    4.3.2 Synchrotron Sources 184
    4.3.3 Free-Electron Laser 191
  4.4 Particles of the Standard Model and Anti-matter 198
  References 203
5 Technical Applications 205
  5.1 Generation of α-β-γ-n Sources and Activation 206
    5.1.1 Pathways on the Nuclide Chart 209
    5.1.2 Examples of α, γ, β−, β+, n Sources from Accelerators 213
    5.1.3 Comparison to Production by Neutrons 215
    5.1.4 Optimization of Production and Cost Efficiency 218
  5.2 Rare Isotope and Radioactive Decay Tracers 221
  5.3 Material Modification 225
    5.3.1 Doping by Implantation and Activation 226
    5.3.2 Welding, Cutting, and Additive Manufacturing 229
    5.3.3 Surface Modifications and Nano-Machining 231
  References 236
6 Nuclear Medicine 237
  6.1 Imaging Diagnostics and Their Information Properties 241
    6.1.1 X-ray Absorption Imaging and Tomography 243
    6.1.2 Emission-Computed-Tomography 249
  6.2 Radiation Therapy 254
    6.2.1 External Beam Therapy 256
    6.2.2 Internal Metabolic Radionuclide Therapy 263
    6.2.3 Selectivity from a Physical Perspective 268
  References 268
7 Material Analysis and Testing 271
  7.1 Ion-, Electron-, and Photon Beam Analysis 276
    7.1.1 Physical Concepts, Detection Limits, Accuracy and Uncertainties 278
    7.1.2 X-ray Absorption and Scattering Analysis 284
    7.1.3 Electron Beam Microscopy 295
    7.1.4 Secondary Ion Mass-Spectrometry 302
    7.1.5 MeV Ion-Beam Analysis 306
    7.1.6 Accelerator Mass Spectrometry 319
  7.2 Neutron Based Analysis 323
  7.3 Mobile Systems 327
  7.4 Radiation Damage 330
  References 340
8 Energy Production and Storage 343
  8.1 Spallation Fission Reactors 345
  8.2 Nuclear Batteries 351
  8.3 Accelerator Based Nuclear Fusion 354
    8.3.1 Neutral Target Reactors 360
    8.3.2 Ion-Beam Collider Reactors 361
  References 371
Index 373

Chapter 1

Introduction

Abstract The atomic nucleus and the electrons around it form the basis of matter. The acceleration of particles in natural processes such as supernovae and the hot plasmas of stars represents the origin of all elements except for hydrogen. Here on earth, particle accelerators enable a controlled reproduction of these natural processes in order to control and modify elementary particles and atomic nuclei. The introduction starts by asking central questions on the physics of these processes, the required technology, and the use of it.

We work to better ourselves and the rest of humanity (Picard, Star Trek).

The first steps towards our world were made shortly after the big bang, when the hot and dense matter cooled down to a level where the hydrogen nucleus could survive. All elements heavier than helium were afterwards generated in stars. About half of the elements heavier than iron are nowadays believed to be synthesized in supernovae. These giant explosions first accelerate several solar masses of particles towards their inside, where electrons and protons fuse to neutrons, forming an incompressible object. This object reflects the impinging matter, leading to the giant supernova explosion. Neutrons follow the explosion front and react with the elements bred by nuclear fusion, forming all the elements and isotopes we have in our nuclide chart displayed in Fig. 1.1. From this extensive set of isotopes, most decay according to their nuclear properties. Only the stable nuclei survive in the end, in a mixture characteristic for this chain of fusion, nuclear reaction, and decay. This nucleosynthesis produced all the atomic nuclei we know. After many years of decay, we end up with the isotopes we find on our earth, but still the numerous particles and radiations emitted from the explosion allow obtaining a thorough understanding of the supernova process from extreme distance. Supernovae work as giant gravitational accelerators, which are capable of producing basically any isotope and elementary particle, but we are left with the thin black line of boxes in the centre of the known nuclide chart in Fig. 1.1. What if we want to make use of the isotopes beside this line, or use their information properties here on earth (at a safe distance from a supernova)? So far, we have mastered the manipulation of macroscopic matter, for example by metallurgy and machining. We were able to separate the elements from ores and other compounds, leading to the


Fig. 1.1 The nuclide chart displays all known nuclear isotopes in the space of protons and neutrons in the nucleus. The highlighted numbers are so-called magic numbers, representing particularly stable configurations. Modified version of Table_isotopes.svg: Napy1kenobi, CC BY-SA 3.0, via Wikimedia Commons

formulation of the periodic table of elements. Chemistry allowed us to manipulate the atomic bindings and rearrange them. All of these controls over matter already became part of our everyday life. Now we want to take control of the elementary particles and nuclei by analysing them, controlling their internal state, and finally synthesising any particle in any state to make use of its specific properties. Here on earth we cannot exploit the power of gravity as the supernova does; instead, we need to apply technology. Accelerators open this door by breaking down the extreme conditions present in supernovae using electro-magnetic technology. Our modern society is based on technologies that make our lives sustainable, long, and prosperous. As such, we have a strong demand for new possibilities in medicine, industry, and science. Accelerators provide the additional degree of freedom of


Fig. 1.2 The basic situation considered in applications of accelerator technology. Accelerated projectiles hit atoms in a target leading to possible emission of photons, electrons, and heavy particles depending on projectile energy and species. The application defines the focus

controlling elementary particles and nuclei and interacting with them, allowing us to find better and more specific technologies. Accelerators, or more precisely particle accelerators, exploit electric and magnetic fields for separating, accelerating, and controlling atomic nuclei and electrons. How exactly does this work, and how does it allow us to access elementary particles and the nuclide chart in an industrial fashion? Physical interactions facilitate the control over the constituents of matter. Many different interactions are possible, as depicted in Fig. 1.2, but their number is finite. A large fraction of these interactions was already unravelled by science. We learned that most of them follow, at least partially, deterministic rules or at least statistical distributions we can exploit in accelerator technologies. The theoretical part of the book will demonstrate and discuss these physics, teaching how to exploit and combine them for mobilizing the full potential of accelerator technologies. What can we do with this new degree of control over matter and particles? The main focus of this book will be on practical aspects and apparatuses. Hands-on work and physics may appear contradictory, but there are fields where they become two sides of a coin. Accelerators exhibit this two-sided coin aspect. The complex apparatuses feature many technical aspects, starting with vacuum, electrical engineering, and safety, discussed in Chap. 2. The second part of this book, starting from Chap. 5, forges theory and apparatuses into useful methods. Applications were developed all over our modern society. This book's subtitle, applications in science, industry and medicine, describes this in three buzzwords. The three disciplines will be roughly separated, but remember: one of the main messages of this book is to visualize the strong connections between the applications in these three fields through physical and technological overlaps. Every good product starts with an idea, but requires lots of qualified feedback in its growth process. Without feedback, be it positive or negative, the product will never reach an outstanding quality, whether in science, medicine, industry, or personal life. Therefore, feel free to write down your own ideas, criticism, and corrections into this book and submit them to the author, but also accept feedback


yourself. The final sentence of this introduction might be one of the very reasons accelerator scientists found and established such a vast range of applications. This search continues, and we have to know our individual motivation to become part of this continuing success story.

Chapter 2

Technology

Abstract Technology is the engineering realisation of theoretical knowledge. Technical implementations of a technology allow optimizing it towards certain application directions. Think of a house heated by gas or oil: both are combustion technologies with different technical realisations. Accelerator applications require combining multiple complex technologies into a new one. This chapter discusses these supporting technologies, their limits, and safety aspects. Understanding the concept of technology enables the reader to select suitable technologies and to identify the point where the technical possibilities of a certain technology end and new ways have to be found.

A man and a donkey are always smarter than a man.

Technology describes the fundamental way to solve a problem. Let us assume, for example, that a quantity E of electrical energy is to be generated. For this purpose, a solar cell with the output power P could be exposed to the sun for t hours. A combustion engine consuming an amount P of gasoline energy per time with the efficiency μ could also be operated with a generator for t = E/(μ ∗ P) hours. Furthermore, a nuclear reactor with a steam generator could be run for t = E/P hours. We see that all of these different technologies enable reaching the same goal in completely different ways. Each technology has its own unique features and problems. In order to clarify the reasoning behind this, let us look at the three examples in detail: The solar cell has an output power P, but it requires solar hours as input. At the same time, this power P is limited to a value between 0 and the power of the solar radiation (in practice even less). A combustion engine also achieves an output power P, but there is in principle no upper limit. In contrast to the solar cell, which requires solar hours, the internal combustion engine requires fuel and air as inputs, and only achieves an efficiency μ < 1 with these inputs due to its thermal process. The third and final example in this list is the nuclear reactor. Its advantage: it only requires nuclear fuel as input, allowing it to be operated in any surrounding. However, due to physical properties, its power has a certain minimum value P > 0. Its fuel requirements are small, so efficiency plays a negligible role, but it creates nuclear waste as a disadvantage compared to the other two technologies.


The example of power generation shows that every technology has its justification and its technical limitations. On earth we will never be able to build a solar cell that generates the power density of an internal combustion engine, and at the same time we will never be able to develop an internal combustion engine that can be used in space like a nuclear reactor or a solar cell. We will, however, be able to develop a combustion engine with P = 1 kW and one with P = 1 MW, but presumably we will be able to reach P = 1 GW only with a nuclear reactor. Furthermore, we will be able to develop a solar cell based on crystalline and one based on amorphous silicon, or a combustion engine that burns diesel and one burning gasoline. These are the technical features that are possible within the framework given by the technology. The technical solution describes the unique approach within the technology. When solving a problem, it is important to know where the technical and the technological limits are. An equivalent statement would be to know where we can reach a solution by improving an existing approach (technically) and where we require a completely new approach (technology). This book will address accelerator technology in terms of accelerator-based technologies. The aside on power generation was intended to clarify that this technology also has its limitations and unique features. As we shall see, these boundaries are exceptionally broad, so there is great flexibility for the use of accelerator technology. Moreover, in many cases the technology can be well connected to other technologies, forming a large set of complex applications.

2.1 Vacuum

Accelerated particles will interact with any form of matter. Although this is the basis of all accelerator applications, it also disturbs the preparation of the particle beams in an accelerator like viscous engine oil. In this analogy, it is therefore necessary to dilute this oil in practically all accelerators, that is, to create a vacuum within the accelerator. A good reading for vacuum physics is a free book from the company Pfeiffer Vacuum (Pfeiffer vacuum GmbH 2013), although the reader should be careful regarding its sales aspect. For the physical understanding and classification of the necessary technical equipment, we first define the concept of pressure p with its common and equivalent units of hecto-pascal (hPa) and milli-bar (mbar). This pressure p connects via the ideal gas law

p = (N / V) ∗ kB ∗ T (2.1)

with the (chamber) volume V, the gas particle count N in the volume V, the Boltzmann constant kB, and the absolute temperature of the gas T. The viscous effect for accelerated particles connects to the gas particle count they pass, as we will discuss later in more detail. Equation (2.1) does not refer to the nature of the particles; hence it is valid for individual components of a gas (for example, nitrogen and oxygen in air) and their partial pressures, but also for all components together and the total pressure. From the equation, we can see how the pressure is related to the amount of particles N in a volume: linearly! Table 2.1 shows the relationship between pressure, particle density, and other variables relevant for accelerator technology for various pressures from atmospheric pressure to the lowest pressures, such as those in space (extreme ultra-high vacuum). The relevant pressure range spans over 15 orders of magnitude, which are usually divided into 6 pressure ranges. The division of these ranges roughly follows the limits implied by current vacuum technologies. The mean free path, i.e. the average distance between collisions of two gas particles, is the main quantity for understanding the technical aspects of a vacuum. It increases with decreasing pressure and reaches a technically relevant dimension in the range of 10 mm in the high vacuum range (≈10⁻³ mbar). Starting from atmospheric pressure, gas has to be removed from a closed volume, the so-called vacuum vessel, by pumps in order to reach the high vacuum pressure range. The rate of this pumping is referred to as throughput. Practically, the fine vacuum range is always reached by compression pumps. Compression pumps enclose the gas in a sealed displacement capacity inside the pump. This capacity of volume V is alternately reduced (compression phase), expanded into an exhaust gas line, and increased again (suction phase). This process chain can take place in various technical implementations, for example in the form of reciprocating pistons, rotary vanes, or screws. The pressure changes result in vibrations in a wide spectral range with amplitudes of a few 10 μm up to a few mm. The amplitude depends strongly on the design type of the pump and on the pressure conditions. Often vibration transfer to the vacuum vessel needs to be mitigated by special damping elements or by shutting down the pumps in critical operational phases. If the mean free path (Table 2.1) is greater than the dimension of the vacuum vessel, the compression process reaches a critical limit. If the pressure is below this limit, compression procedures are ineffective, as shown by the curve of the compressor throughput in Fig. 2.1. In this case, a pressure gradient in the gas no longer drives a flow, since a gas flow is induced by collisions between gas particles. By definition, collisions between the gas particles and the wall dominate the gas behaviour when the mean free path λ is significantly greater than the smallest dimension d of the vacuum vessel (for example a pipe diameter or the capacity diameter of a pump). Collisions between gas particles become decreasingly probable with decreasing pressure. This transition in the ratio of length scales is described by the Knudsen number Kn = λ/d, with a transition region, the Knudsen flow, around Kn = 1. In most accelerator applications, significantly lower pressures than those at Kn ≤ 1 are required. The regime Kn < 1 is referred to as viscous flow (right half of Fig. 2.1), the regime Kn > 1 as molecular flow (left half of Fig. 2.1). In molecular flow, pumps therefore require other technological approaches than in the viscous range.
Hence, in order to reach molecular flow (HV and lower), starting from atmospheric pressures, a multi-stage pumping system consisting of viscous
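A minimal Python sketch (not from the book) makes these relations tangible: it evaluates Eq. (2.1) and the Knudsen number Kn = λ/d for an assumed vessel dimension of d = 0.1 m. The hard-sphere mean-free-path formula and the effective N2 molecule diameter are standard literature values, not numbers given in this text.

```python
# Sketch: particle density from Eq. (2.1) and the Knudsen number for d = 0.1 m.
import math

K_B = 1.380649e-23      # Boltzmann constant (J/K)
D_MOLECULE = 3.7e-10    # effective diameter of N2 (m), standard literature value

def particle_density(p_mbar: float, temperature: float = 293.0) -> float:
    """Particle density N/V in m^-3 from Eq. (2.1): p = (N/V)*kB*T."""
    p_pa = p_mbar * 100.0                   # 1 mbar = 100 Pa
    return p_pa / (K_B * temperature)

def mean_free_path(p_mbar: float, temperature: float = 293.0) -> float:
    """Hard-sphere mean free path: lambda = kB*T / (sqrt(2)*pi*d^2*p)."""
    p_pa = p_mbar * 100.0
    return K_B * temperature / (math.sqrt(2) * math.pi * D_MOLECULE**2 * p_pa)

# At 1e-3 mbar the mean free path reaches the several-cm scale, so a
# d = 0.1 m vessel already sits near the viscous/molecular transition (Kn ~ 1).
for p in (1013.0, 1.0, 1e-3, 1e-7):
    lam = mean_free_path(p)
    print(f"p = {p:9.1e} mbar: n = {particle_density(p):9.2e} m^-3, "
          f"lambda = {lam:9.2e} m, Kn = {lam / 0.1:9.2e}")
```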


2.2 Accelerators

keV), and atomic nuclei (>MeV), see Fig. 2.4. Imagine the quantity of atoms in one gram of 18F used for PET analysis (Sect. 6.1.2), 3.35 ∗ 10²², and the corresponding amounts of current (A), time (s), and power (W) required from an accelerator to generate this many particles! Consequently, lower beam energies with higher beam currents are demanded from these application accelerators. Accelerators for applications accelerate ions and electrons to induce these interactions. Physically, these particle types are very different. The electron is an impartible object, with a low rest mass of 511 keV/c² and a fixed negative elementary charge −e. In contrast, ions are a group of objects, consisting of protons and neutrons and therefore also of quarks and gluons, as depicted in Fig. 2.5. All of the isotopes listed in Fig. 1.1 could be accelerated as ions. This divisibility allows mixing of their constituents with target nuclei, generating other particles with the same basic constituents (the quarks and gluons), but in a different composition. Practically, we call this fusion and fission reactions, or in general nuclear reactions. Ions feature a mass at least 1836 times (for protons) higher than that of electrons, and charge states from several negative elementary charges up to a positive charge equal to their individual number of protons. Electrons, being impartible, always bear the exact same charge quantity and polarity. Ion charge, on the other hand, can vary due to the number of electrons attached to them. The close binding of electrons and the atomic nucleus makes them appear as a single object (the ion or the atom) with a summed charge q in most technical applications. From the outside, the object appears uncharged if the amount of protons equals the amount of electrons, although on the nuclear zoom level it is not. Ion charges can change by removing or adding electrons. Due to these differences in particle structure, electron and ion beams find different applications.
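As a quick worked example of the quantities just mentioned (a sketch under illustrative assumptions: singly charged ions and a 1 mA beam current, neither of which is specified in the text):

```python
# Check of the numbers quoted above for 1 g of 18F.
AVOGADRO = 6.02214076e23
E_CHARGE = 1.602176634e-19   # elementary charge (C)

n_atoms = AVOGADRO * 1.0 / 18.0      # atoms in 1 g of 18F -> ~3.35e22 (as in the text)
charge = n_atoms * E_CHARGE          # total charge if each atom passes as a 1+ ion
t_at_1mA = charge / 1e-3             # seconds of continuous 1 mA beam current

print(f"atoms:  {n_atoms:.3e}")                                  # 3.346e+22
print(f"charge: {charge:.3e} C")                                 # ~5.4e3 C
print(f"time:   {t_at_1mA:.3e} s = {t_at_1mA/86400:.0f} days")   # ~62 days at 1 mA
```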


Fig. 2.5 The atom, ion, nucleus, and electrons of helium. Electrons are impartible; ions and nuclei are not. The atomic nucleus is an ion of maximum charge, but its charge can be changed by the binding (orbits) of electrons to the nucleus, leading to different charge states. The picture shows a 4He nucleus consisting of two neutrons and two protons with 0 (doubly charged ion), 1 (positive ion), 2 (neutral), or 3 (negative ion) electrons

Despite these differences, the acceleration and control of electron and ion beams is quite similar. Similar types of accelerators are used for both, yet taking into account the differences in mass, velocity, and charge for the technical details. Hence the physics and technological aspects of ion and electron accelerators are largely identical. Strong differences arise when Einstein's relativity enters the game. Einstein defined the speed of light as the absolute maximum relative velocity between two physical objects. The constant c = 299,792,458 m/s quantifies this speed of light. Accelerators honour Einstein by easily accelerating particles to velocities close to this speed c. Due to their lower mass, electrons reach relativistic speeds quickly; e.g. 3 keV (a typical electron energy in an electron microscope) is enough for 10% of c (=0.1c). For ions, relativistic physics remains negligible until about 10 MeV due to their higher mass. We will see in Sect. 2.2.2 how relativistic physics affects the design and construction of accelerators, in particular of the accelerator types reaching the highest beam energies. Throughout this book, the mass of particles will be given by m and their charge by q. The ratio of particle speed v to the speed of light c plays an important role for the relevance of relativistic effects. The ratio v/c usually goes by the name β. From this, we can derive the Lorentz factor γ as depicted in (2.5). The Lorentz factor can be understood as the value of relativistic length contraction or time dilatation, but it also increases the particle mass from its rest value m to the relativistic mass mrel:

mrel(v) = 1 / √(1 − (v/c)²) ∗ m = γ ∗ m (2.5)

This relativistic mass increase is the limiting mechanism for the particle velocity, since the faster a particle becomes, the higher its mass and consequently the more difficult it is to accelerate it further. In theoretical physics, c is often taken as 1 to shorten equations. This book will avoid such abbreviations and try to use only regular and practically measurable quantities. This velocity ratio β connects to the kinetic energy E of the particles in the relativistic case via

E = m ∗ c² / √(1 − (v/c)²) − m ∗ c² (2.6)

The first term on the right-hand side represents the total particle energy and the second the energy equivalent of the particle rest mass, according to Einstein's famous equation E = mc². The kinetic energy is the determining value for an accelerator, as it is on the one hand easily measured via the applied voltages and the energy unit electron volt (eV), and on the other hand the relevant parameter for physical interactions. The particle mass connects energy and velocity. The reference value for different particles is their rest mass, the value without any relativistic corrections according to (2.5). The main difference occurs between electrons and ions, with a minimum mass ratio of 1836 (proton to electron).
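A short Python sketch of Eqs. (2.5) and (2.6); the rest energies are standard values, and the example reproduces the statement above that roughly 3 keV electrons already travel at about 10% of c:

```python
# Invert Eq. (2.6) to get the velocity ratio beta = v/c from a kinetic energy.
import math

REST_ENERGY_KEV = {"electron": 511.0, "proton": 938272.0}   # m*c^2 in keV

def beta_from_kinetic_energy(e_kin_kev: float, particle: str) -> float:
    """gamma = 1 + E_kin/(m*c^2) per Eqs. (2.5)-(2.6), then beta from gamma."""
    gamma = 1.0 + e_kin_kev / REST_ENERGY_KEV[particle]
    return math.sqrt(1.0 - 1.0 / gamma**2)

print(beta_from_kinetic_energy(3.0, "electron"))     # ~0.108: 3 keV electrons reach ~0.1c
print(beta_from_kinetic_energy(10000.0, "proton"))   # ~0.145: 10 MeV protons
```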

2.2.1 Direct-Current Driven The era of accelerators started with the application of direct-current voltages (DC) on conductive plates. Every charged particle positioned in between two conductive plates will get accelerated by a DC potential towards one of these plates, depending on the voltage polarity and particle charge sign. Cathode ray tubes, as the first accelerators, work on this approach by applying a voltage between an electron emitter and a plate anode with an aperture hole (Fig. 2.6). This simple approach quickly reaches its limits in the achievable beam energy and hence new technologies were required. The main technological challenge of electro-static DC accelerators is the generation of the high electro-static potential and its isolation from the surrounding structures. Remember, every eV of beam energy requires 1 V of potential difference in the device, with applications requiring keV to about 250 MeV. Free electrons, naturally present in all gases and on all surfaces due to the thermal distribution functions and

2.2 Accelerators

19

Particle source

Target

Acceleration Acceleration anodes Deflecting coils

Control Grid Vacuum vessel

UHV Target

Heater

Cathode

Electron beam path

Focusing coil

Fig. 2.6 Basic principle of a DC accelerator realised in the form of a cathode ray tube. The electron source is a hot filament, from there on the electrons are accelerated by a cathode-anode DC potential and directed by a magnetic deflection system onto a target. A similar setup, but with reduced beam diameters is the basis for electron microscopes

cosmic radiation, will be accelerated by these voltages unintendedly. These accelerated rogue electrons quickly gain enough energy to release new charged particles (requiring about 10 eV/particle) from gases and surfaces, an exponential avalanche effect can initiate. This effect depends on the voltage and the distance over which it is applied due to its connection to the mean free path. The plasma created by this effect has a low resistance over which the DC potential will discharge, preventing the efficient build-up of high voltages. The characteristics and resistance of this plasma can vary over orders of magnitude, depending on the combination of voltage gradient and outer conditions, in particular pressure, as demonstrated in Fig. 2.7. The first visible discharge type, carrying also relevant amounts of current is the corona discharge. This type of discharge can be observed on high-voltage land lines and generates the crackling noise often audible around these lines. The resistance of these discharges is still high, hence inducing acceptable loads on the high voltage power supply, but it indicates the onset of a problem for DC accelerators. Exceeding the highest tolerable voltage results in the breakdown of an arc discharge. The resistance of the arc plasma discharges is in the order of metallic conductors, leading to significant power consumption and heat dissipation on the surrounding elements, limiting the achievable voltages of DC accelerators. The voltage gradient limit V Breakdown /d of the arc transition, with its sudden decrease in discharge resistance demonstrated in Fig. 2.7, is described by the Paschen-curve: VBreakdown ( p, d) =

Bpd

ln(Apd) − ln ln 1 +

1 γS E



(2.7)

20

2 Technology

Fig. 2.7 The different regimes of plasma discharges (in 1 mbar Neon), limiting the acceleration potential. After the dark discharge region, two regions of negative differential resistance (higher current with lower voltage/gradient) follow. Several types of DC accelerators make use of this negative differential resistance to filter voltage ripple. Original work by Wikigan, CC-BY-SA-3.0, via Wikimedia Commons

With the gas species dependent parameters A and B, the gas pressure p, the distance between the two charged plates d and the number of secondary electrons (see Sect. 4.1) generated per electron and ion impact on the plate surfaces γ SE . The arcbreakdown voltage-limit of (2.7) strongly depends on the involved transport medium (e.g. gas) between the voltage poles. Tabulated values of the breakdown voltages for different solid, liquid, and gaseous isolation materials exist in handbooks. Table 2.3 illustrates these values are in the order of 10 kV/mm for typical insulators. Plastics and ceramics reach similar values as a vacuum. The problem of vacuum-based isolation lies in the strong dependence of its breakdown limit on the surface conditions of the parts isolated against each other. Roughness results in strong local voltage gradients at the roughness peaks and surface absorbed gases potentially form a relevant gas pressure in the sense of (2.7) upon release. In applications these properties often remain hidden. Gases achieve slightly lower values at standard pressures, but the breakdown voltage in gases is proportional to the gas pressure. At a pressure of 10 bar, the special isolation gas sulphur-hexafluoride (SF6 ) provides breakdown Table 2.3 Breakdown voltages of selected materials Material

PE

PTFE

Al2 O3

Demineralised water

Oil

Air

SF6 @1 bar

Vacuum

Breakdown-voltage (kV/mm)

20

24

17

65

90% electrical efficiency. In conclusion, DC accelerators are energy efficient accelerator types with high industrialisation and numerous products on the market.

2.2.2 Alternating-Current Driven At this point we have to realize it will be technically very challenging to reach beam energies above a few 10 MeV using DC accelerators, therefore other concepts are required to surpass this technical barrier in a more cost efficient way. DC accelerators make use of the accelerating field only once, at maximum twice with the tandem’s charge exchange. If the accelerating voltage could be used multiple times, much higher energies could be reached without extreme voltage levels, therefore circumventing this technological difficulty of DC accelerators, but how to make multiple use of the same structure? According to Coulomb’s law, a particle cannot gain more energy by moving through an electro-static field several times. So how to circumvent fundamental physics? The tandem accelerator provides the basic idea, as it breaks the law by charge exchange. Alternating-current (AC) driven accelerators apply the same trick, but instead of changing the particle charge polarity, they work by changing the acceleration voltage sign/polarity. AC’s change their voltage polarity infinitely often with their given frequency, avoiding the technological limit of voltage isolation. For passing the AC voltage multiple times, the AC accelerators grow larger making the highest beam energy AC accelerators the largest single technical devices build by humanity. We start the conception of an AC accelerator by simply reusing the concept of DC accelerators with its acceleration chamber, the fundamental building block where any accelerator applies the accelerating potential to the charged particle beam. The chamber now converts an AC input power with typical frequencies of 10 MHz to 10 GHz and peak voltages in the order of 100 kV to a directed acceleration (see Fig. 2.9).

2.2 Accelerators

25

Additional technical degrees of freedom arise for AC in the method of coupling power to the chambers, compared to applying a DC voltage between two plates. Depending on the accelerator layout and in particular the AC frequency we can either apply the AC directly to plates or, for higher frequencies, drive resonant modes in a closed resonance cavity (Fig. 2.10) with an antenna or even by transferring energy from a resonant mode in another cavity or a waveguide. The resonant mode inside the cavity works like a vibration on a guitar string with a resonance frequency according to length and properties of the cavity/string. In contrast to acoustic waves, electromagnetic waves can also propagate in vacuum. Usually the so-called transversal magnetic 010 (TM010 ) mode is exploited as it has the lowest amount of nodes in the different axis. For understanding what this means imagine a straight conducting wire with an electric potential/voltage applied between both ends. A current will flow along the wire and induce a circular magnetic field around it. Placing the wire along the beam direction and removing it yields the field distribution of the TM010 mode. Physical language would describe it as a flat longitudinal electric field with a maximum in the centre and zero field at its outer radius (1 node) in combination with a transversal magnetic field, since Maxwell’s laws of electro-magnetism dictates electric and magnetic fields to be always perpendicular to each other. The beam draws the power required for acceleration from this cavity wave. Delivering this power into the cavity requires either a current flow in wires or, in particular for higher frequencies, the power coupling by waveguides. Figure 2.9 shows an example of a TM010 cavity powered by a magnetic antenna and a matching network. The current flowing through the antenna ring generates a magnetic field field with a vector directed through the ring. This matches to the local magnetic field vector of the

L C

accelerat ion Fig. 2.9 Sketch of a TM010 mode with its longitudinal electric field (arrows) and a coupling antenna connected to a power source through an impedance matching network. The matching the power source impedance (e.g. 50 ) to the TM010 coupling antenna using a coil (L) and a capacitor (C) of variable inductivity and capacity, respectively. RF impedance matching calculators can be found on the web for determining L and C depending on source and load impedance. The field oscillates with time but can be considered constant for sufficiently fast (relativistic) particles passing through

26

2 Technology v0

d1= v 1/f

d2= v2/f

d3= v 3/f

drift

accelerat ion

Beam part icle accelerat ion

Fig. 2.10 Schematic of the 2π-mode AC accelerator design with a beam particle in phase with the accelerating voltage (arrows). This voltage originates either from electrostatic plates or from field coils magnetically driving e.g. the TM010 mode with its central voltage maximum. The beam particle first enters an accelerating structure, then passes a drift part until the voltage sign reaches its original polarity at the moment the particle enters the second structure. In the case of constant speed (e.g. close to speed of light) all vi and hence all d i are equal, otherwise the d i have to increase with index. In the π mode both cavities have opposite polarity and d 2 = 0.

TM010 mode as discussed above, enabling coupling of the power to the cavity mode. The L-C network matches the impedance of the input lead to the cavity impedance. High beam energies require frequencies in the GHz range. In this range, semiconductor power supplies reach their limits due to their inherent capacity and finite electron mobility, respectively, limiting their output power at higher frequencies. Vacuum electron resonators, mostly Klystrons, provide high output power up to several 100 GHz with conversion efficiencies of 50–75%. These electron resonators have their own cavity wave mode (not necessarily the TM010 ) requiring a matching connection to transfer power to the acceleration cavity. The connection between the power source and the cavity depends on the modes and the power transmission method. A Klystron actually consists of a DC electron beam flowing through a series of chambers similar to the cavity depicted in Fig. 2.9. The electron beam couples to an input AC wave amplifying it since the electrons bunch according to the wave field. This electromagnetic wave travels through vacuum/air just like the power delivered in a household microwave. Instead of cables, hollow waveguide tubes allow for conducting the power to the accelerator. This conduction in not loss-free, but losses occur on the waveguide walls. A technical advantage lies in the coupling of the power and its impedance matching via geometrical modifications (tuners) in the waveguide instead of an L-C matching network. A series of the acceleration structures depicted in Fig. 2.9 provides the repetitive use of the driving voltage we were aiming at. A series of acceleration cavities form a so-called linear accelerator (LinAC). Of course, DC accelerators are also linear accelerators, but the term LinAC by convention describes the AC type. The alternating voltages in the cavities constantly change amplitude and polarity, as they are literally alternating currents. In order to expose the beam particles to the same voltagepolarity, and hence acceleration direction, in each cavity, beam and AC voltage in every cavity have to be synchronised or in phase, respectively, with the propagating beam. In other words, each particle has to pass each cavity within equal half-periods of the AC.

2.2 Accelerators

27

The space in between two cavities has to equal the time required for the other half-period of the AC to finish, as depicted in Fig. 2.10. The higher the frequency, the shorter this time and the smaller the accelerator, making higher frequencies desirable for AC accelerator design as they allow for smaller structures and higher specific acceleration (MeV/m). The dimension d i indicates this in Fig. 2.10. Ultimately, the skin effect limits the conduction of AC power, since it restricts the current flow to a shallow surface layer, may it be over a wire or the walls of a waveguide. The skin effect reduces the effective conductor thickness with the square-root of the frequency, reaching values of about 10 μm at 50 MHz. Correspondingly the resistivity increases with frequency, representing a technological limitation. Transferring digital information over an USB3 cable might be possible with such thin conductors, but the power requirements of an accelerator require a different solution in order to keep power losses and voltage damping within tolerable limits. A solution would be the use of superconductors since they feature infinite conductivity. In reality this is only half-true, surface resistance Rsurf from adsorbents (see Eq. 2.1) present at their cryogenic operating temperatures and the alternating nature of AC currents lead to small, yet relevant power losses Ploss per cavity area Acav (2.8) on superconductors at high AC frequencies ω and magnetic fields BHF . These different conduction physics in AC accelerating structures lead to a technological disadvantage and generally smaller energy efficiency of AC compared to DC accelerators. 2 Rsurf (ω)BHF Ploss = Acav 2

(2.8)
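To get a feeling for the magnitude of these residual losses, the following minimal sketch evaluates (2.8). The surface resistance and field amplitude are assumed example values (typical orders of magnitude for superconducting cavities, not figures from this book), and the conversion of B_HF to a surface field via division by μ0 follows the usual convention of surface-resistance definitions.

import math

MU0 = 4e-7 * math.pi  # vacuum permeability, V*s/(A*m)

def cavity_loss_per_area(r_surf_ohm, b_hf_tesla):
    """Eq. (2.8): P_loss/A_cav = R_surf * (B_HF/mu0)^2 / 2, in W/m^2."""
    h_hf = b_hf_tesla / MU0            # surface magnetic field in A/m
    return 0.5 * r_surf_ohm * h_hf**2

# Assumed example: R_surf = 10 nOhm, B_HF = 50 mT
print(cavity_loss_per_area(10e-9, 50e-3))  # ~8 W/m^2 - small, but costly to remove at cryogenic temperature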

Being in phase with the AC allows a single ideal particle to travel through multiple acceleration cavities. This particle defines our ideal situation, but a real beam consists of numerous particles spread across a certain volume, given by the beam size and the spread around this central particle. Just like in a society, it is important for the beam performance not only to take along the individuals with ideal starting conditions, but as many as possible, by finding means to support the less privileged in reaching the common goals.

Due to the alternating nature of the accelerating voltage, the beam needs to adopt a similar structure, see Fig. 2.11. The beam arranges in bunches which move in phase with the AC, trying to catch the maximum benefit from every cavity passage. In contrast to a DC beam, particles can only survive the acceleration if they stay in a certain phase window of the sine wave of the given forward-directed polarity. Intuitively, we would propose the peak of the wave in order to exploit the maximum accelerating field. Unfortunately, some of the underprivileged particles will be slightly slower, have an angular deviation in their direction of movement (= longer path), or just be displaced compared to others; hence the beam will have a certain spatial and momentum extent. These beam aspects will be discussed in more detail in Sect. 2.3.1. The non-ideal, underprivileged particles will reach the cavity slightly off timing, and it is the task of the accelerator design to allow them to participate in the beam in spite of their deviations from the ideal. Slower particles reach the cavity after the AC peak and therefore see a smaller amplitude. A smaller amplitude means less acceleration, and hence the particles become even slower compared to the rest of the bunch.

[Fig. 2.11 panels, top to bottom: accelerator driving AC voltage (vs. time); constant wave accelerator beam; 50% duty cycle pulsed beam; DC accelerator beam (each vs. time/space)]

Fig. 2.11 The AC accelerator voltage is given by a sine wave (1st row). Correspondingly, the beam particles arrange in the wells of equal voltage sign (2nd row). A DC beam (bottom) in contrast remains unchanged over time and along the acceleration direction. The Y-axis represents voltage amplitude for the sine (top) and the transversal direction for the beams, respectively. Darker colours represent higher beam density

Quite quickly, they will be left behind and the beam current drops. Considering how often a bunch passes the cavities with acceleration voltages of ≈100 kV until it reaches some 100 MeV, it is easy to imagine that the whole bunch will disintegrate prematurely. Mathematically speaking, the bunch-averaged phase deviation from the ideal phase increases if we aim at the voltage peak.

In order to avoid this issue and focus the phase, we have to choose a slightly earlier phase, see Fig. 2.12. On the positive slope of the amplitude, faster particles receive less acceleration, since they reach the cavity early, equal to a lower voltage, and slower particles receive more acceleration by reaching the cavity later. This focusses the bunch onto its central phase. The acceptance (maximum phase mismatch still focussed) of this phase focussing effect marks an important accelerator design parameter: it states how supportive the accelerator design is to underprivileged particles. The acceptance depends on the position of the ideal phase on the positive slope, since significant deviations would bring particles into the bad phase regions. Besides this phase focussing, the AC beam also requires transversal focussing, exactly the same way as in DC accelerators.

In order to synchronise the phase position of the individual bunch's cavity passages, the beam has several options (modes). The basic choice: it could drift/wait for another half-period through a field-free drift space in a setup where subsequent cavities feature opposite polarity.


Fig. 2.12 Principle of phase focussing. The circles mark different options for the central phase position of the particle beam in the sine wave. A position at zero voltage amplitude allows no acceleration (bad). At the peak of the wave, slower particles defocus (critical phase limit). At the negative slope, all particles defocus (very bad); only at the positive slope does phase focussing work to confine the beam in phase space (good)

Besides this so-called 2π-mode, also π, π/2, or 2/3π resonance conditions come into application, depending on the technical layout of the accelerator. In these modes, the lengths d_i of cavities and drift parts are adjusted to fit the resonance condition. As Fig. 2.10 shows, these dimensions d_i depend on the particle velocity, a quantity changing upon acceleration. Electrons quickly approach the speed of light, reaching e.g. 94.2% of the light speed c at 1 MeV. Close to the speed of light, the particle mass increases, but the increase in velocity levels out (2.5); hence the electron speed remains practically constant above a few MeV. Ions, on the other hand, require at least 1836 times (protons) higher kinetic energy (1836 MeV for 94.2% of c) due to their mass, making their velocity highly variable in our 250 MeV region of interest.

Technical realisations of AC accelerators usually try to reduce complexity by using either the same frequency or a similar or even the same acceleration structure throughout the accelerator. Consequently, the acceptable range of particle velocities becomes limited, restricting the dynamic range of serviceable beam energies to a window around the design value. At relatively high energies (see above), relativity limits the velocity gain close to c, expanding this beam energy window compared to the low energy range. The particles increase their velocity upon passing a cavity, and this has to be compensated to maintain the resonance condition when entering the next cavity. The solution is either to adapt the length of the drift parts in between the individual cavities or to use different frequencies (given a fixed phase relation between them). Several different types of these so-called high-frequency systems (sometimes also RF systems) were developed, with solutions based on waveguide or cable + antenna coupling and different solutions for the compensation of variable particle velocities. When using a single frequency, the structure with the optimal spacing ensuring the resonance condition is called the Alvarez structure, see Fig. 2.13. In this structure, drift and cavity length increase proportionally to the particle velocity. LinACs for higher energies (see e.g. Sect. 4.3.3) apply numerous of these structures, while the high energy simplifies the design as soon as the velocity increase levels off in the highly relativistic regime.


Fig. 2.13 The Alvarez LinAC structure works with a single AC frequency and variable drift and cavity lengths D_i to compensate for the increasing velocity
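As a rough numerical illustration of this scaling, the following sketch computes how the cell lengths D_i grow with proton velocity. The RF frequency, injection energy, and energy gain per gap are assumed example values, and the spacing rule assumes the common convention that an Alvarez structure places one cell per full RF period (length βλ).

import math

C = 299_792_458.0        # speed of light, m/s
E0_PROTON = 938.272e6    # proton rest energy, eV

def beta(t_ev, e0_ev=E0_PROTON):
    """Relativistic velocity fraction v/c from kinetic energy."""
    gamma = 1.0 + t_ev / e0_ev
    return math.sqrt(1.0 - 1.0 / gamma**2)

f_rf = 200e6             # RF frequency, Hz (assumed)
lam = C / f_rf           # RF wavelength, ~1.5 m
t = 1e6                  # injection at 1 MeV (assumed)
for i in range(1, 6):
    t += 1e6             # assumed 1 MeV energy gain per gap
    print(f"cell {i}: T = {t/1e6:.0f} MeV, D_i = {beta(t) * lam * 100:.1f} cm")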

Besides the geometrical solution of the Alvarez structure, a LinAC construction can also exploit physical solutions. So far, this section has discussed standing waves: it was never mentioned directly, but the hills and valleys of the AC were always assumed to rest inside the cavities, and jumping one frequency period further will always result in the same voltage at a given position. With the aspects of bunching and particle velocities close to the speed of light, particles can in principle also surf on the AC wave peak. These travelling wave acceleration structures require the phase velocity (the movement speed of the hills and valleys) to be equal to the beam particle velocity. The phase velocity is not equivalent to the photon velocity (= c), but it can be influenced/reduced by the propagation geometry. Figure 2.14 shows a schematic example of such an acceleration structure with four cavities.

We started by making multiple use of a single voltage, but also a single acceleration cavity can be used multiple times for accelerating a single beam particle. The technical realisation of this physical concept comes by the name cyclotron. In the cyclotron, particles start from a source in the centre with basically zero energy. An AC voltage applied between two neighbouring sectors of the circle accelerates the particles over the radius in the form of spirals, as depicted in the left image of Fig. 2.15. While particles are inside one of the sectors, a vertical magnetic field forces them onto circles. Due to the freedom of orbit in the cyclotron and the centripetal force, a faster particle will choose a larger orbit.


Fig. 2.14 A travelling wave acceleration structure. A wave is coupled in via waveguides; it travels along the individual chambers, which are separated by apertures, and exits at the last cavity. The size of the cavities and apertures determines the phase velocity of the travelling accelerating wave


Fig. 2.15 Schematic top-view of a classical two-magnet cyclotron (left) and a modern isochronous cyclotron with 3 magnets (right). The modern magnets have a spiral structure to increase the edge angles a with radius. For particles approaching the speed of light, the axial field strength has to increase (Fig. 2.16) to compensate for the relativistic mass increase. The defocussing effect increases with radial field gradients, requiring stronger edge focussing at higher radii. In this example, the magnets constitute the hill parts and the valleys are field-free, leading to the dashed particle orbit. This cyclotron operates in 4/3π-mode with the opposite voltages across the valley parts forming the accelerating cavities

The gap between two sectors/magnets forms the accelerating cavity, where the voltage between the sectors accelerates the particles. The movement time inside the sector corresponds to the phase synchronisation discussed above: faster particles have to take a longer path, hence particles always pass the gap within the forward acceleration phase. Particles are extracted to the application at the outer radius with their final energy. The name cyclotron derives from the frequency of revolution of charged particles in a homogeneous magnetic field (inside the sectors). This cyclotron frequency f_cyclotron given in (2.9) is independent of the particle energy, as faster particles move in larger circles, compensating their increased velocity by a longer path. It only depends on the vertical magnetic field strength B and the particle's charge q to mass m ratio. Cyclotrons exploit magnetic fields to guide the beam in a circle, allowing exploitation of the same acceleration cavity over and over again, while integrating drift and acceleration cavity into the same structure.

$$f_{cyclotron} = \frac{qB}{2\pi m} \quad (2.9)$$
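A minimal sketch of (2.9) below evaluates the cyclotron frequency for a few ions; the 1.5 T field is an assumed example value. It also anticipates the q/m argument made later in this section: ²H⁺ and ⁴He⁺⁺ end up with almost identical frequencies.

import math

E = 1.602176634e-19     # elementary charge, C
U = 1.66053906660e-27   # atomic mass unit, kg

def f_cyclotron(q_c, m_kg, b_t):
    """Eq. (2.9): revolution frequency in a homogeneous field, in Hz."""
    return q_c * b_t / (2.0 * math.pi * m_kg)

B = 1.5  # tesla, assumed example field
for name, q, m_u in (("1H+", 1, 1.0073), ("2H+", 1, 2.0141), ("4He++", 2, 4.0026)):
    print(f"{name:6s}: {f_cyclotron(q * E, m_u * U, B) / 1e6:6.2f} MHz")
# -> ~22.9 MHz for protons; 2H+ and 4He++ nearly coincide (same q/m)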

The isochronous cyclotron shown in Fig. 2.15 (right) represents a technical layout variant of the general cyclotron concept. Among the various cyclotron types, it has established itself as the state-of-the-art of cyclotron design. The term isochronous takes the constant frequency of the cyclotron idea to the next level by including also corrections for the relativistic mass increase. Independent of its energy, isochronism requires a particle to always need the same time for one revolution.


Fig. 2.16 Side-view of an isochronous cyclotron. The beam moves in (x) and out (.) of the paper plane (here a positive ion). The magnet shaping densifies the magnetic field lines towards the outside (= higher magnetic field B), compensating for the relativistic mass increase. As the Lorentz force acts perpendicular to movement and B-field direction (right-hand rule), the bent field lines lead to a defocussing effect/force as depicted by the arrows with dotted lines for positive ions

This sounds obvious given the independence of the cyclotron frequency from energy or velocity of the particle (2.9), but accelerators bring the particles to extreme speeds, easily reaching significant fractions of the speed of light, in a fragile system of, for example, phase focussing. Approaching the speed of light, the mass of the particles becomes velocity dependent as noted in (2.5). The trick of isochronism is to compensate for this mass increase by increasing the magnetic field B accordingly in the radial direction via a shaping of the magnetic sectors, as depicted in Fig. 2.16. Now, not only the centripetal force defines the faster particles' orbits, but also the radius-dependent magnetic field strength. This radial increase in magnetic field leads to a problem of beam divergence. Vertically straight field lines leave the beam shape unaffected, but bent field lines induce force components in the vertical direction, see Fig. 2.16. The right-hand rule (3 perpendicular fingers) directly visualizes the situation for a single particle. The field index n quantifies this effect (2.10) through the change of the axial (vertical) magnetic field B_A over the radial direction r, multiplied with the radial position R over the absolute local field strength B.

$$n_R = -\frac{\delta B_A}{\delta r} \frac{R}{B} \quad (2.10)$$
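The following minimal sketch evaluates (2.10) for an assumed linear radial field rise; all field values are illustrative only. The sign shows the dilemma discussed next: a radially increasing field, as needed for isochronism, yields a negative field index, i.e. axial defocussing.

def field_index(db_dr, r, b_local):
    """Eq. (2.10): n_R = -(dB_A/dr) * R / B."""
    return -db_dr * r / b_local

B0, k = 1.5, 0.02          # 1.5 T central field, rising 2% per metre (assumed)
r = 2.0                    # radial position, m
b = B0 * (1.0 + k * r)     # local field at radius r
print(field_index(B0 * k, r, b))   # ~ -0.04: negative -> axial defocussing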

Without compensation of this defocussing, the beam will become too large and hit the vessel boundaries, which removes the particles from the beam. The highly integrated design of cyclotrons prevents the installation of any additional elements for focussing the beam; we have to find a solution with the parts already present. Let us continue the thinking of the radial field index above. The effect vertically defocusses the beam, but what happens radially? An underprivileged particle with a position slightly outwards from the ideal position in the bunch will experience a stronger magnetic field compared to a particle closer to the cyclotron centre, deflecting it more to the inside. On the other hand, a particle displaced slightly to the inside will experience a weaker magnetic field, deflecting it less and bringing it closer to the bunch centre.


All in all, this describes a radial focussing! Apparently, focussing and defocussing go hand in hand: we have to find an option with axial focussing and radial defocussing. Earnshaw's theorem describes the physical basis for this impossibility to focus in two directions at once. The technology required is the so-called edge focussing. This focussing method was an important aspect for the development of the isochronous cyclotron, but understanding it still remains difficult, even after decades. Edge focussing requires an angle a ≠ 0° between the normal of the magnet edge and the beam forward direction at the magnet edge. The bent magnetic field lines connecting the poles of the magnet outside of its gap have a field component perpendicular to the beam direction if a particle enters this outside region displaced from the central axis and at an angle to the edge normal. With the right-hand rule, a vertical Lorentz force towards the ideal beam track arises, as explained in Fig. 2.17. The combination of radial focussing by field gradients and vertical/axial edge focussing leads to an overall focussing in both axes, as required for stable beam operation.

This kind of magnetic field layout represents an integral part of accelerator design and has a particular importance for AC accelerators, since these reach higher energies compared to DC accelerators. The greater principle of combining different field gradients, each having focussing and defocussing aspects, to generate an overall focussing effect is called strong focussing. Manufacturers deliver 250 MeV isochronous cyclotrons on this basis with superconducting wiring on a conventional ferromagnetic iron core, reducing the cyclotron diameter to ≈5 m, orders of magnitude smaller than a 250 MeV DC accelerator would be. The design of these devices follows the spiral structure depicted in Fig. 2.15 (right). The pathways travelled by the particles in circular accelerators are quite long.

Fig. 2.17 In the depicted situation, two particles enter a sector magnet (poles N and S) in x direction (particle coordinate system) at zero edge angle. The ideal beam sees only the By field. The displaced beam sees the two magnetic field vector components of the bent outside field, Bx (no effect since parallel to the charge movement) and By (deflects in the same direction as in the magnet centre). Only for a non-zero edge angle (magnet N and S change thickness in the direction of the paper plane) an additional Bz component (bending of field lines into the paper plane) is present, inducing the vertical (axial) force required for edge focussing


Still, in this example of a 250 MeV beam, accelerated by a 100 kV cavity potential in a 3-fold cyclotron (Fig. 2.15) with 5 m diameter, the beam travels about 6.5 km to reach its final energy. Accordingly, adjusting focussing and axial magnetic field in these devices represents a challenging, yet important part of the setup. As a consequence of this peculiar combination of forces, frequencies, and travelling times, the cyclotron, and AC accelerators in general, are fixed to a certain beam energy and particle type in order to fulfil the resonance and stability conditions. For ions, a limited flexibility exists if the ratio of charge over mass (q/m) of the ions is maintained (e.g. ²H⁺ and ⁴He⁺⁺). The same voltage will accelerate the particle with higher charge proportionally, keeping the cyclotron frequency constant (2.9). Small differences of this ratio to the design value allow for acceleration with other driving frequencies in the same device; for example, ¹H⁺ and ²H⁺ are possible in many commercial cyclotrons.

Electrons differ drastically in mass from ions, leading to significant differences not only in magnetic deflection, but also in the relevance of relativistic effects and Bremsstrahlung. Although electrons could be accelerated in a cyclotron, their low mass would quickly lead to speeds close to c and the requirement of extreme radial field gradients n_R for compensation. Furthermore, power losses by Bremsstrahlung (2.11) limit the technically reasonable beam energies of electrons in accelerators. In the accelerator context, the Bremsstrahlung emitted upon deflection of beams is called synchrotron radiation, distinguishing it from other origins and spectra of Bremsstrahlung, as we will see later.

$$P = \frac{c\, q^2}{6\pi \varepsilon_0 r^2} \left(\frac{E}{mc^2}\right)^4 \quad (2.11)$$
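A short sketch of (2.11) makes the mass argument quantitative; beam energy and bending radius are assumed example values, and the printed ratio anticipates the factor (m_p/m_e)⁴ ≈ 10¹³ discussed next.

import math

C = 299_792_458.0
Q = 1.602176634e-19          # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
ME_C2, MP_C2 = 0.511e6, 938.272e6   # electron/proton rest energies, eV

def sync_power(e_ev, rest_ev, r_m):
    """Eq. (2.11): radiated power of one particle on a circular track, in W."""
    return C * Q**2 / (6.0 * math.pi * EPS0 * r_m**2) * (e_ev / rest_ev)**4

E, r = 1e9, 10.0             # 1 GeV particle on a 10 m radius (assumed)
print(sync_power(E, ME_C2, r) / sync_power(E, MP_C2, r))  # ~1.1e13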

All charged particles emit Bremsstrahlung upon acceleration and deceleration. The forward acceleration in the cavities can be neglected compared to the acceleration resulting from a curved particle trajectory of radius r in a circular accelerator. The emitted power P of a single particle depends on the particle energy E, the speed of light c, the particle charge q (= e for electrons), the vacuum permittivity ε0, and the particle's rest mass mc². The strong inverse mass dependence in (2.11) highlights the difference between the Bremsstrahlung losses of ions and electrons: a factor 10¹³ for protons. The power loss scales equally strongly with beam energy, resulting in a hard limit of maximum electron energy for a given device size.

Thinking about other designs of circular accelerators different from cyclotrons makes us appreciate the extremely practical, because highly integrated, design of the cyclotron accelerator type. For reaching energies above some hundred MeV, cyclotrons reach their limits even for ions due to the increasing relativistic effects. The amount of material and the size of a cyclotron would become technically unfeasible; other designs are required. For different circular accelerator concepts, we have to start a new way of thinking and go back to the basic cavity concept of Fig. 2.10. Equation (2.11) also highlights the importance of making the system larger, to increase the curvature radii. Let us think about the Large Hadron Collider (LHC) at CERN with its 26.7 km long acceleration ring.


Accelerating protons to 13 TeV (and lead ions to 1148 TeV) certainly required a different way of thinking, besides a large bag of money. At a certain dimension (at the very most the LHC dimension) it becomes unfeasible to construct a single-block type accelerator such as the cyclotron. Separation of the functions integrated into the cyclotron block opens up extra technical design freedom. We discussed the different ingredients of focussing, acceleration, particle source, compensation of relativistic velocity effects, drifts, and trajectory deflection. Accelerator language names these separate ingredients functions. Each function can be fulfilled by a particular apparatus: a dipole magnet induces deflection, say, or a cavity induces acceleration. Combining several such functions into a team/group represents the unit cell concept. Each unit cell receives a beam, does something with it, and releases it to the next unit cell. The design problem reduces to developing the functional elements and tuning them together to form a working unit cell, similar to object-oriented programming. If we tune the unit cell in a way that the incoming beam matches the outgoing beam, all unit cells constituting an accelerator can be identical. This symmetry further reduces the accelerator complexity, since only a fraction (say 1/8) of the accelerator needs to be designed, while the remaining parts are just copy and paste. The same way of thinking works not only for circular accelerators, but represents a fundamental concept for all technical designs. The minimum unit cell we could think of is the combination of deflection, horizontal focussing, and vertical focussing. This technological approach leads to the synchrotron accelerator type.

This separate-function accelerator depicted in Fig. 2.18 requires, besides the deflection/focussing/defocussing unit cell, additional functional modules. Acceleration cavities are required, but a synchrotron covers only a part of the total acceleration, not from zero energy at the particle source to the final energy as in a cyclotron. The compensation/limitation of the velocity span requires a functional element in the form of sets of different types of accelerators feeding each other with increasing beam energies. In a DC accelerator, the beam extracts directly, but in an AC accelerator with a more or less closed path (see Sect. 2.3.1) and a resonance condition, the extraction becomes a technical challenge. Cyclotrons with their open, spiral-shaped beam path can make use of internal targets, i.e. targets at a certain radius, but external targets offer practical advantages and flexibility. In a synchrotron, we cannot extract the beam at a certain radius, because there is only a single radius: the rules of the closed path have to be broken. Changing the charge state of ions using electron stripping (moving towards the lower right corner in Fig. 2.5 by means discussed in Sect. 3.2) in foils or gases is very efficient and allows for the highest beam currents, but requires negative ion beams. If the charge state has to be maintained, so-called kickers (functional element) or septa (thin metal sheets, mostly in cyclotrons) periodically change the beam track with a sudden field ramp using fast magnets or electro-static deflection plates to selectively extract particle bunches. Figure 2.18 shows a unit cell comprised of a deflecting and a focussing module.
These functions can also be fulfilled by a single device we already discussed: the “cyclotron magnet” depicted in Fig. 2.16 with its field index n ≠ 0 focusses radially and defocusses axially. In the isochronous cyclotron, a positive radial field gradient was required for compensation of relativistic effects; in a synchrotron, the separation of functions eliminates this restriction and we could invert the field gradient (radially decreasing field).

Fig. 2.18 The synchrotron is made from several functional elements and unit cells. The ring is feed by a charged particle source and a pre-accelerator such as a LinAC. For connecting the injected beam to the circulating beam an accurately timed dynamic magnet applies a track switching, like connecting rail tracks. The beam circulates by deflection magnets and quadrupole focus magnets. An acceleration cavity ramps up the beam energy while the magnets strength follows to keep the orbit. Finally the accelerated beam extracts to a target or stays in the ring for particle production

The magnet would still deflect in the same direction, but an outside-oriented field curvature leads to arrows pointing towards the central axis, equal to axial focussing (compare Fig. 2.16). Alternating field gradient signs enable a net focussing only by dipole magnets. These alternating-gradient focussing dipole magnet arrangements represent a technological option for systems with separated functions. The field gradient n_R determines the focussing strength, with |n_R| >> 1 (typically ≈±20) considered as so-called strong focussing. Calculating n_R with real values and (2.10) demonstrates that strong focussing with alternating-gradient dipoles works only for large accelerators; otherwise, technically unfeasible magnetic fields would be required. The technical exploitation of the alternating-gradient technology allows for significantly reduced beam sizes and corresponding reductions in technical investments for large accelerators. Nevertheless, large synchrotrons mostly use alternating quadrupole magnets instead of the combined-function dipole magnet. The LHC for example maintains a beam diameter of typically …

… GeV/m become possible, exceeding the current technological limits of AC accelerators by at least an order of magnitude. The AWAKE device (see related publications under https://awake.web.cern.ch/publications), as a pilot project of plasma acceleration technology, uses a 400 GeV proton driver beam in a 10 m long Rb plasma heated by a co-axial laser beam for accelerating a 15–20 MeV electron witness beam to a few GeV. By using even higher energy protons (Bracco et al. 2014), this technology opens up an option for producing electron beams in the order of 500 GeV, effectively mitigating the physical (Bremsstrahlung) and technical (acceleration gradient) problems of accelerating electrons to high energies by connecting them with proton acceleration technology. The witness beam currents remain small (nA in the above example), since only a small part of the driver beam energy can be converted to witness beam energy: the load induced by the witness beam onto the wake-field would continuously reduce its velocity, which in turn broadens the witness beam energy spectrum via dephasing. The whole setup, with its two supporting accelerators, the Laser beam, and the limited efficiency, may be a tool only for reaching higher electron beam energies for fundamental particle research, but it is also only a pilot device.

Besides acting as an all-in-one accelerator or as an acceleration cavity, Lasers can also be used for other functions in AC or DC accelerators. Electron accelerators, for example FEL accelerators (Sect. 4.3.3), already make use of laser-based electron emission in ultra-short pulse electron sources, combining the advantages of both technologies. The compact source region defined by a focussed laser beam leads to high beam quality (low emittance), and the Laser pulses result in bunches shorter than possible with any other method. The release of particles via the Laser-surface interaction process connects the charged particle pulse length to the Laser pulse length. Electrical switching and bunch compression in classical accelerators have typical pulse length limits in the range of ≥100 ps; Laser sources deliver orders of magnitude shorter pulse lengths.
Short pulses are of particular interest for time-resolved investigation of very fast processes with the help of accelerator-based analytics, see Sect. 7.1.


In conclusion, Laser accelerators and plasma cavities could develop into valuable tools of accelerator technology, and in a few special scientific cases they already are, as discussed above. Extensive developments are still required towards application as established tools; in particular for the Laser accelerators, it appears the optimal physical scheme is yet to be found. Whether these tools have the potential to substantially reduce accelerator sizes depends also on the supporting aggregates, e.g. Lasers, and their efficiency. In the next section, we will discuss more details on the technological differences between the three accelerator classes discussed. Expecting the same beam “product” from a Laser accelerator as from a classical accelerator is like expecting an electric car to be identical to a combustion car. The Laser accelerator technology is simply a different technology, and niches which tolerate its deficiencies (e.g. the broader energy spectra) and value its advantages (e.g. the short pulses) will be found.

2.2.4 Electric and Spatial Efficiency of Accelerators

The efficiency of a process is, in particular in a production environment, the determining factor for its usefulness and practicability. As such, the tool accelerator needs to efficiently deliver charged particle beams. The measure of efficiency strongly depends on the particular application and its boundary conditions. Besides electrical/energy efficiency, also dimensional compactness, or specific acceleration, respectively, is a decisive factor. Larger accelerators require larger buildings and vacuum systems, inducing secondary problems. Both efficiencies also directly couple to the cost of an accelerator system and its operation, and therefore also to the specific costs of its service or product. The three main groups of accelerators (DC, AC, Laser) presented in this section, each with a set of sub-groups of devices, are very diverse in their different efficiency aspects. Let us take a quantitative look at these efficiencies.

DC accelerators offer the highest electrical efficiency, with typical application values between 70 and 98%. Behind the scenes, DC accelerators also rely on AC frequencies for transformation and electrical/galvanic separation in their initial stages, but the technological restrictions in this part are significantly less relevant than for pure AC accelerators. On the other hand, these accelerators suffer from the technological difficulty of discharge breakdowns, limiting their specific acceleration. Table 2.3 demonstrated theoretical limits of 80 kV/mm = 80 MV/m, but discharge avoidance means the device grows with beam energy not only in length but also in diameter. Practical difficulties result in only a few MV/m for real devices. Electrical discharge phenomena, most prominently the corona discharge, contribute a base level of electric losses dependent only on the acceleration voltage. For low beam currents at the MV level, e.g. in analytics, this effect dramatically limits the efficiency. For applications below MV, the losses become negligible and only the voltage transformation and rectification limit the efficiency.

AC accelerators are a compromise between electrical and spatial efficiency. Modern solid-state generators for the MHz range achieve electrical efficiencies in


the order of 25–70%, already substantially better than old vacuum tube amplifiers with values of about 10%. This comparably lower energy efficiency is contrasted by extremely improved spatial efficiency. A modern, superconducting cyclotron can deliver 250 MeV ion beams with a device of about 5 m diameter. In this example, we achieve an effective specific acceleration of 50 MV/m (although of course the protons travelled a much longer path), but values in the order of 10 MV/m are becoming standard for constant wave devices, and up to 100 MV/m become possible with pulsed operation.

The technological limitation for AC accelerators lies in the electrical resistance and quality factor, rather than in discharges as for DC accelerators. Power losses inherent to the transport of AC at high frequencies, due to the skin effect and other AC physics, represent a technological disadvantage of this accelerator type. For a given conductor, the effectively conducting skin layer thickness reduces with the square-root of the AC wavelength. At the same time, higher frequencies (= shorter wavelengths) reduce the spatial dimension of an AC accelerator due to a proportional reduction of the resonance length (Fig. 2.10). This reduction in resonance length increases the specific acceleration. Higher specific acceleration leads to secondary benefits for the size and cost of beamline, vacuum, and beam focussing. The competition of these scalings in the space between performance and costs could be worse. In science, where only specific acceleration (= performance) is relevant, due to its impact on the maximum achievable beam energy, the compromise is easy to resolve (reaching e.g. 1 GeV with 10 MeV/m = 100 m length). In applications, not only the performance but also the costs and competing technological options have to be considered before deciding for an AC accelerator. Unfortunately, not even superconductors can solve this technological issue completely, as finite conductivity effects occur. A large interest in efficient AC generation at high powers and frequencies for wireless communication will further improve the energy efficiency of AC accelerators, potentially lifting them to the values of DC accelerators, but due to the different loss mechanisms (corona vs. skin effect), energies up to some MeV will probably always be the domain of DC and energies >20 MeV the domain of AC accelerators.
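The skin-effect scaling mentioned above is easy to check numerically. The short sketch below evaluates the standard skin-depth formula δ = √(2ρ/(ωμ0)) for copper (the resistivity is an assumed textbook value) and reproduces the ≈10 μm figure at 50 MHz quoted in Sect. 2.2.2.

import math

MU0 = 4e-7 * math.pi     # vacuum permeability
RHO_CU = 1.68e-8         # copper resistivity, Ohm*m (assumed room-temperature value)

def skin_depth(f_hz, rho=RHO_CU):
    """Effective conducting layer thickness at frequency f, in metres."""
    return math.sqrt(2.0 * rho / (2.0 * math.pi * f_hz * MU0))

for f in (50.0, 50e6, 3e9):          # mains, 50 MHz, 3 GHz
    print(f"{f:>12.0f} Hz: {skin_depth(f) * 1e6:10.1f} um")
# -> ~9 mm at 50 Hz, ~9 um at 50 MHz, ~1 um at 3 GHz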
Laser and plasma accelerators represent the other end of the efficiency scale. The technological limits of AC and DC accelerators do not apply to them, but other limits arise. The devices are extremely small, with field strengths of up to some 100 GeV/m. The technology offers completely new applications with a potential for table-top sized accelerators, not considering the extensive aggregates required, though. This compactness comes at the cost of a low electrical efficiency. Already the initiating laser beams are produced with a maximum of ≈30% energy efficiency. On top of that, a relevant amount of energy is lost to thermalisation in the beam interaction zone. In Laser accelerators, about 10% of the Laser energy contributes to the generation of the required fast electron population. Another large factor is lost to unusable parts of the charged particle energy spectra, which are, so far, orders of magnitude broader than in AC and DC accelerators. In the end, some 10⁻³% of electrical efficiency remains for producing a beam similar to what is known from AC or DC accelerators. It has to be admitted that this technology is still under evaluation and development, with high potential in new physical acceleration schemes and increased Laser powers, but the technological aspects of its efficiency losses will probably only be weakened.


The physical processes behind Laser and plasma accelerators are not fully understood; therefore, their specific technological limits might change in the course of development. With the low efficiency, also only lower average beam powers are reached compared to AC and DC accelerators.

2.3 Ion- and Electron Beam Optics

Charged particle beams, i.e. ion and electron beams, are ensembles of individually moving and interacting objects. These ensembles can be a few or very many individual particles, each having its own properties, trajectory, and energy. A certain similarity or range in particle trajectories and kinetic energies allows us to call them a beam, yet there is no clear limitation on these ranges. Similar to a group of people bunched together in a tram (Fig. 2.22), the individuals in this beam ensemble still slightly differ in their properties from the ensemble-averaged values, which we actually call the beam properties. To a certain extent, this individuality is tolerable, yet there is no tram without a solid cabin giving technical limits to it. The cabin is required to restrict the individual movement, keeping the members of the ensemble bunched together. While it would be a catastrophe to lose some people travelling in a tram, certain losses are not critical in particle beams; actually, losses are completely unavoidable as a result of the fundamentals of statistical distributions of particle properties. Nevertheless, particle beams need a confinement; otherwise, too many particles are lost unused, introducing secondary problems. Since charged particles will neutralise and disappear upon contact with solid walls, charged particle beams require a contact-free confinement by electro-magnetic fields and a vacuum beam tube diameter significantly larger than the beam diameter. The description and layout of this confinement is the subject of beam optics.

The description of the beam movement considers either the movement of single particles with individual properties or the beam collective with certain distribution functions. In both cases, longitudinal motion describes the components in the beam or acceleration direction, respectively, and transversal motion depicts the perpendicular plane (towards the vacuum vessel walls). A beam is naturally not a point-like entity, but for designing the beam path we require a track. Similar to a tram, we need a closed track connecting start and destination in an ideal fashion. This track defines the beam centre and, by definition, the ideal particle having ideal starting conditions follows this ideal path. A description of beam optics firstly covers this ideal particle and then considers if and how the non-ideal particles follow the ideal path and how many of them will be lost.

Electric and magnetic fields induced by so-called beam optics such as magnets and deflectors confine the particles in the transversal direction. The longitudinal direction is not always relevant, in particular for DC accelerators, or it is mostly confined by the accelerator itself, as discussed for AC accelerators in Sect. 2.2.2. The confining fields keep the overall energy stored in the beam constant.


If a charged particle enters a magnetic field, its kinetic energy remains constant; it is only redistributed to different directions. Static electrical fields work in a similar fashion: as we already learned at the beginning of Sect. 2.2.2, Coulomb's law tells us a repetitive passage through electro-static fields cannot change the beam energy more than a single passage would. For these reasons, the forces behind the confining fields/devices are called conservative. The beam particles oscillate in the transversal directions around the ideal track within these confining fields, since conservative forces cannot change or remove the transversal kinetic energy, and typically no other strong friction or damping mechanism exists. Therefore, sine/cosine-like equations describe the single particle trajectories in an accelerator. The respective equation of motion in the transversal directions x and y is a function of the position s in the accelerator (particle source: s = 0) as described by (2.15):

$$x, y(s) = A \sqrt{\beta(s)} \cos(\varphi(s) + \varphi_0) \quad (2.15)$$

Here φ is the phase, φ0 the phase-shift, A an amplitude factor, and β a function describing the beam optics acting throughout the beam path. The coordinate s is the relative longitudinal coordinate along the ideal track of the beam in the accelerator system, starting from any point defined as s = 0, for example the charged particle source. For straight tracks, this equals a Cartesian coordinate system, but for curved tracks this coordinate transformation reduces the mathematical complexity, along with introducing a truly straightforward quantity. With sufficient understanding and adequate equipment, the accelerator design limits β to avoid that the values of x and y become larger than the vacuum vessel dimension at any point s. The β function condenses the combined effect of all beam optical elements into a single mathematical function. A physical connection of charged particle beam optics to geometrical light optics will ease understanding the mathematical concepts, but also allow for an improved practical understanding of how to construct an accelerator system. Designing β is the field of beam optical elements. Measuring it requires analytical elements, which will be discussed as the last point of this section. These analytical elements actually determine x(s) and y(s); hence also the amplitude factor A can be determined. The above statements on conservative forces leave us basically no freedom for influencing a given A, making it a beam quality factor. In the next section, we will come to know it as the emittance.

In the end, the difference between the ensemble properties and those of the individual particles is just a matter of perspective. The ensemble bears the power, making it more relevant for applications. The single particle motions are still buried behind the ensemble values we are interested in. The superposition of all single particle motions forms the beam envelope, its outer boundary. Consequently, the general motion described by (2.15) applies equally to a single particle and the ensemble. The ensemble represents the integral over all individual properties, namely the individual amplitudes A and phase shifts φ0, averaging out the individual oscillations. While an AC accelerator allows only for limited ranges of phase shifts due to the resonance conditions, a DC accelerator implies no phase restrictions, leading to a complete smoothing of the oscillations in the ensemble view.

48

2 Technology 0.4

σ=1 Probability

0.3

σ=3 0.2 0.1 0 -5 -4 -3 -2 -1 0

1

2

3

4

5

Dimension Fig. 2.21 The normal distribution with two different standard deviations of σ = 1 and σ = 3. The standard deviation describes how broad the distribution is. For example in a particle beam with larger standard deviation less particles will be found in the centre and more towards the outside. 68.27% of all particles can be found within –σ to +σ, 95.45% within −2σ to +2σ, 99.73% within −3σ to +3σ …

complete smoothing of the oscillations in the ensemble view. Consequently, the technical freedom we bear for forming the beam to our applications needs relates to constructing the β function and finding a way to start with minimal amplitudes α (see Sect. 2.4). A beam comprises a large amount of particles and if few of these are lost it will be negligible for the application. Naturally, the large numbers (1 A equals 6.25 * 1018 particles/s) make beam optics calculations also a statistical problem. Hence, we can never describe all single particles with an ensemble description, but only a certain share of them. The normal distribution describes the statistical distributions of charged particle beams in most cases. Figure 2.21 plots two different instances of this statistical function. The dimension axis could be any properties, for example the transversal dimension of the beam, its energy distribution, or the distribution of starting points of individual particles with respect to the beam centre. It is important to note: The normal distribution is based on exponential functions, therefore we can always find particles outside a certain boundary. When describing beam properties we usually refer to the so-called 1σ ensemble. This refers to all ensemble particles having a property within a range of ±1σ of the central/ideal value. In other words, only 68.27% of the beam particles are within the given value. If we want to describe a larger fraction (more σ ’s) of the beam, the range of the value will increase accordingly with a mathematical connection given (ideally) by the normal distribution (see Fig. 2.21).

2.3 Ion- and Electron Beam Optics

49

Fig. 2.22 Central beam optics terminology in a train. The train/beam moves along the longitudinal axis on a track defined by the beam optics. Its volume or passenger capacity is fixed and represents the phase space occupied by the beam. The density of seats in this volume represents the inverse beam emittance

2.3.1 Emittance and Betatron Function

The particles travelling together in a beam all occupy their individual position in the so-called phase space. The degrees of freedom relevant for the movement of the beam in the electro-magnetic environment of the accelerator define the dimensions of the phase space. The three positional and the three momentum vector components relative to the ideal beam particle span the phase space. The position in the phase space completely describes a single particle's movement in a given accelerator with known β-function. The superposition of all individual particles in the phase space yields the beam ensemble phase space, corresponding to its size in real and momentum space. The phase space can be understood as a tram moving along the ideal track, see Fig. 2.22. The tram moves mostly in the forward direction, but the passengers inside can still move in all directions and stand or sit wherever they want during the ride, within the given boundaries of the tram cabin size. The combination of this information with the knowledge of the layout of the beam track allows calculating the complete evolution of the particle movement and beam properties throughout the accelerator system. The problem of beam behaviour reduces to knowing two quantities we will discuss in the following.

The phase space cannot be filled to an infinite extent by the particle beam; all particles have to sit in a certain volume of the phase space tram, the confined volume. This is easy to understand with a limiting case example: a particle can sit in the spatial centre of the tram (ideal) at the starting point, but if it has a transversal momentum component, it will diverge from the ideal path, increasing its distance to the beam centre. For large transversal momenta, it will leave the beam by annihilating with the surrounding vessel walls before reaching the final target. A limited transversal momentum has to be acceptable (less than it takes to leave the tram cabin before reaching the target), otherwise the beam would be unable to contain the (statistically distributed) particles. The same holds true for a particle with zero momentum difference sitting outside the confined phase space. The accelerator system has to be able to cover deviations from the ideal particle for all quantities within its technical limits.


The size of the confined phase space volume (i.e. the tram cabin size), e.g. for the transversal momentum, will be defined by the properties of the beam optical system. Before the technical aspect, we first take a look at what we are talking about. Phase space can be defined in real space quantities by the position offset (x, y) and divergence (x', y') of an individual beam particle in both transversal directions (perpendicular to the ideal direction of motion s). In the case of bunched AC beams, the coordinate relative to the bunch centre and the deviation from the ideal momentum (given by the beam energy and species) have to be considered additionally. For DC beams, this coordinate has no meaning, since DC beams are by definition longitudinally constant. We could also define the phase space along other quantities, such as transversal momentum instead of divergence, but our choice should be oriented along measurable quantities. Our phase space now becomes a 6-dimensional space consisting of 3 space and 3 momentum-like dimensions, forming a vector as given in (2.16).

$$X = \begin{pmatrix} \text{Offset in } x \\ \text{Divergence in } x = x' \\ \text{Offset in } y \\ \text{Divergence in } y = y' \\ \text{Offset in } s \\ \text{Relative momentum offset} \end{pmatrix} \quad (2.16)$$
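In the linear approximation discussed at the end of this section, beam optical elements act on such vectors as matrices (the transfer functions mentioned later). The following minimal sketch, restricted to the (x, x') sub-space and using the textbook drift and thin-lens matrices with assumed element parameters, illustrates the coupling of divergence and displacement described below.

def drift(length_m):
    """Field-free drift: divergence accumulates into offset."""
    return [[1.0, length_m], [0.0, 1.0]]

def thin_lens(f_m):
    """Thin focussing lens of focal length f: offset causes an inward kick."""
    return [[1.0, 0.0], [-1.0 / f_m, 1.0]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

x = [0.001, 0.0005]           # 1 mm offset, 0.5 mrad divergence (assumed)
x = apply(drift(2.0), x)      # after a 2 m drift the offset grew to 2 mm
x = apply(thin_lens(1.0), x)  # an f = 1 m lens converts offset into an inward kick
print(x)                      # [0.002, -0.0015]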

Each beam particle has an individual vector X, leading to an individual track. Figure 2.23 demonstrates this with the example of a deflecting dipole magnet passed by an ideal and an offset particle with divergence x'. This individual X represents a single point in the 6D phase space. The beam optical and acceleration devices guide all particles of a beam along the accelerator system. This guidance depends on X: the smaller the values of X compared to the beam energy and device size, the more similar the individual tracks will be and, consequently, the smaller the beam diameter. The individual rows of X can change along the particle track, but since the rows are coupled (divergence leads to displacement and vice-versa) via beam optical elements, the length of the vector X is characteristic for a particle and conserved by beam optical elements, if normalised properly.


Fig. 2.23 Sketch of the difference of the ideal particle to other beam particles with non-zero X. Non-ideal particles start at an arbitrary point with displacement x and angle x' to the ideal particle. In an optical element such as a dipole, this leads to a different deflection.


For example, a particle with a high transversal momentum will necessarily also reach higher transversal displacements in its oscillating movement, similar to how a pendulum's oscillation amplitude is connected to its initial displacement. At the maximum displacement, both beam particle and pendulum have the smallest (transversal) momentum, due to the conservative nature of the movement-controlling fields in both cases. Mathematically speaking: the beam optical system lets the particles rotate in phase space around the X = 0 vector during their movement through the accelerator system. The superposition of all individual tracks equals the beam envelope, the quantity of interest for the application.

Up to now we have no idea why and how the individual particles start with different X, or in other words, we have no idea of the distribution function of X. The large number of particles in a beam and the central limit theorem allow for an educated guess, though: the normal distribution usually describes the distribution of X very well, leading to a normal distribution of individual points X in phase space.

Coming back to the analogy of the confined phase space and the tram wagon: people can leave and enter the tram at each stop. A particle beam has only two stops: it starts at the source and ends on the target, where it is used for an application. Consequently, no new passengers, or charged particles, can enter the tram after it has left its source. In physics, this is called Liouville's law, stating that the density in momentum space can only be increased if the density in real space is reduced, and vice-versa. Consequently, beam focussing reduces the beam dimension but increases its angular divergence, and vice-versa. Adding new particles to the beam would increase both momentum and real space density and is therefore physically impossible.

Besides the statement that it is impossible to enter or leave the tram during the journey, the tram also has a fixed amount of seats per wagon. A particle beam has a similar quantity called emittance. The emittance states the distribution width of momentum and position (vector X) of the individual particles' difference from the beam average. This equals the phase space volume covered by the beam. A larger difference equals a lower density of particles in the phase space: a given amount of particles distributes over a larger phase space volume. A low emittance corresponds to a high density of particles in the confined phase space, making it generally desirable. High emittances lead to lower beam density, larger devices, and reduced performance. The emittance can also be understood as a beam temperature. Similar to the ideal gas law (2.1), a high temperature corresponds to a lower particle density at a given pressure.

Figure 2.24 illustrates a 2D extract of the phase space representing only the first two rows of X (2.16). A certain amount, say 1σ, of all beam particles can be found within the enclosed area. At the given position s in the accelerator system, this 1σ of particles features a maximum angular deviation x'_max and a maximum displacement x_max from the ideal track. These values define the beam envelope. Focussing the beam would rotate the phase space ellipse, with a minimum size reached with an upright ellipse (a focal point or beam waist).

52

2 Technology

2 x'max

x' [mrad]

1 xmax

α

0 xmax -1

-2

-2

-1

0

1

2

x [mm]

Fig. 2.24 1D phase space volume of a beam with maximum radius x_max = 1.5 mm and maximum divergence x'_max = 1.5 mrad (axes: x in mm, x' in mrad; ellipse tilt angle α). The phase space ellipse form depends on the position in the accelerator system, but its enclosed area remains constant

From the beam ellipse, we can calculate the phase space volume, which corresponds to the emittance ε for a certain share of n σ's (n = 1 equals 68.27% for each axis, see Fig. 2.21) of the particles, via the following equation:

$$\varepsilon_x^{n\sigma} = n^2 \sqrt{x_{max}^2 \, x'^{\,2}_{max} - x_{max}^4 \tan^2(\alpha)} \quad (2.17)$$
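Read with the values of Fig. 2.24, (2.17) gives a number of order mm mrad; the sketch below evaluates it, with the tilt angle α as an assumed example value (measured in the mm/mrad plot plane of the figure).

import math

def emittance(x_max_mm, xp_max_mrad, alpha_rad, n_sigma=1):
    """Eq. (2.17), with x_max in mm, x'_max in mrad -> emittance in mm*mrad."""
    return n_sigma**2 * math.sqrt((x_max_mm * xp_max_mrad)**2
                                  - x_max_mm**4 * math.tan(alpha_rad)**2)

# Fig. 2.24: x_max = 1.5 mm, x'_max = 1.5 mrad; tilt alpha = 30 deg (assumed)
print(emittance(1.5, 1.5, math.radians(30)))   # ~1.84 mm*mrad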

The emittance is a measure for the deviation of the particles' phase vectors from the beam average; hence, increasing the beam average makes the individual deviation less dramatic. It is a bit like a train moving at relativistic speeds in one of these popular science books about Einstein's relativity theory, just that moving at relativistic speeds is very possible for a charged particle beam. From the laboratory point of view, the time dilatation slows the beam particles' relative motion. In the co-moving frame of the beam, nothing changes in between the particles; hence this effect is called adiabatic cooling (no exchange with the surroundings). If our tram in Fig. 2.22 moves faster, it reaches its target faster and we have less time to reach the cabin boundary with a given transversal velocity; a larger transversal velocity becomes acceptable. The normalized emittance cancels this relativistic effect by considering the particle velocity v:

$$\varepsilon_{Norm} = \varepsilon_{x,y} \frac{v/c}{\sqrt{1 - (v/c)^2}} \quad (2.18)$$

In addition to this passive effect, active methods were developed to reduce beam emittance. Usually, the application defines the beam energy; hence adiabatic damping bears no technical freedom. Three main classes of methods were invented, but due to their complexity they are seldom found outside science.


classes of methods were invented, but due to their complexity they are seldom found outside science. In medicine, industry, and most scientific applications, the emittance problem is literally addressed at its source. The concepts of active emittance cooling are specific to the beam species and accelerator type. Good elaborations of the details can be found in beam optics literature, e.g. (Hinterberger 2008).

For the cooling of ion beams, the ions are directed through a bath of cold electrons. When jumping into the bath, the ions have to have a very small relative velocity difference to the bath, otherwise it is water skiing. Therefore, the bath is actually a beam of electrons which is fast, yet of low beam energy (due to the small electron mass). If the temperature of this electron beam is lower than the ion beam temperature, thermal equilibration cools the ion beam. Obviously, this bathing cooling method is not applicable to electron beams, since these are already the lightest charged particles. Electrons, on the other hand, easily reach highly relativistic velocities where they effectively emit bremsstrahlung (=synchrotron radiation) upon acceleration (2.11). Bremsstrahlung occurs naturally in bending magnets, but it can also be deliberately applied as we will see in Sect. 4.3. The emission power of this bremsstrahlung strongly depends on the kinetic energy, hence the faster part of the thermal spectrum releases more energy than the slower part, inducing a cooling effect, the radiation damping. Finally, yet importantly, the phase space nature of emittance enables reducing emittance not only from the velocity side, but also from the spatial deviations from the ideal beam path. The so-called stochastic cooling exploits this by actively kicking particles which are off track back onto their ideal path.

The emittance limits many accelerator applications in minimum device size and maximum beam power density. The conservation of emittance makes it an important yet hard to optimize quantity. Still we can change its technically relevant result, the beam envelope size. This important connection is described by the betatron function β(s). The betatron function does not describe whether the particles have to move left or right; it rather connects the abstract idea of the 6D phase space with beam dimensions in real space. Figure 2.25 displays an exemplary betatron function in a series of beam optical elements and the s-coordinate resolved transversal (β_x and β_y) betatron functions resulting from these optical elements. The beam envelope then derives from the given betatron function, the emittance, and (2.19). Beam optical elements define the betatron function, allowing tuning between size and angular divergence of the beam (smaller size = larger divergence due to conservation of emittance). Hence, a significant aspect of accelerator development relates to the construction of low emittance charged particle sources (see Sect. 2.4) and the conservation of emittance.

$$x_{max}(s) = \sqrt{\varepsilon \cdot \beta(s)} \quad (2.19)$$
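A quick numerical illustration of (2.19); both input values are assumptions:

```python
# Beam envelope from emittance and betatron function, following (2.19).
import math

eps = 5.0e-6      # 1-sigma emittance in m*rad (assumed)
beta_s = 12.0     # betatron function at position s in m (assumed)

x_max = math.sqrt(eps * beta_s)
print(f"1-sigma envelope half-width: {x_max*1e3:.2f} mm")
```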

Finally, we have all concepts at hand to describe and influence the technically relevant quantity, the beam dimension. As demonstrated in Fig. 2.26, a beam tube can only fit a certain beam size as described by (2.19). Considering the beam particle offset in the transversal direction to follow a normal distribution (Fig. 2.21), a share of the beam will always be lost to the vessel walls. The beam tube acts as an aperture


Fig. 2.25 Horizontal and vertical betatron function β and dispersion D in a synchrotron accelerator unit cell. The filled upper rectangles represent horizontally focussing and the lower rectangles horizontally defocussing quadrupole magnets. The open rectangles are dipole magnets. The betatron function clearly shows the connection of focussing in one and defocussing in the other direction already discussed for cyclotrons, but also typical for quadrupole magnets. Reproduced from Hinterberger (2008), page 280 with permission by Springer


Fig. 2.26 Transversal beam dimension in a beam tube of diameter d with a focussing element in the centre (grey rectangles). On the left hand side a significant part of the beam lies outside the chamber limits. This part is lost, reducing the beam emittance to the acceptable limit, but also reducing its current. Focussing reduces the beam diameter, but it increases the divergence requiring another focussing element after 2 times its focal length

cutting down the beam to fit inside. This means a loss of beam current, but a relatively larger tube reduces the losses. The accelerator system acceptance states the maximum emittance the beam system can transport with a given betatron function (solving (2.19) for ε). Usually there are only a few critical points where the betatron function is largest or the beam tube diameter is smallest. A deliberately placed aperture limits the x max , fixing the location of this cut of the phase space to a technically adapted device. For example, a beam tube diameter d = x max will cut everything outside the 1σ range (100% − 68.27% = 31.73%) of the beam distribution, if x max was specified


with a 1σ emittance. Higher emittances or smaller beam tubes will further increase losses to the system boundaries/vacuum vessel walls. Since ε only defines a certain share of the beam particle ensemble, losses are practically unavoidable. This has quite important implications for radiation safety (see Sect. 2.7) and long beam-lines.

The losses also provide us a means to measure the emittance. A variable beam tube diameter, for example with movable aperture sizes or by moving a wire through the beam, allows measuring the beam emittance by recording x_max and the beam current lost to the analytical device. Several points are required to be able to solve the equations, due to the many parameters required for outlining the phase space. Measurements at several points, on the other hand, require knowledge of the transfer functions (the effect of beam optics) connecting the different points. More details will be discussed in the following sections.

In this section, we discussed only the simplified so-called linear approximation of beam optics. For example, we assumed a dipole magnet to feature only dipole effects, neglecting possible multipole components (quadrupole, sextupole, octupole…). Furthermore, space-charge effects originating from beam ensemble self-interaction due to intra-beam scattering were neglected. Finally, yet importantly, practical aspects have an important impact on beam optics. Alignment errors and stray fields induce additional deflecting components and track offsets leading to non-ideal behaviour of the beam optical elements. A beam optical calculation allows for an assessment of the possible magnitude of these effects. Larger scientific accelerators foresee correcting optical elements in the beam path. In accelerator applications the device lengths are often limited, and higher order beam optical aspects only reach a certain relevance for very small or large beam diameters or high beam densities/currents.

The above discussion of emittance and betatron functions shall grant the reader the grace to accept the things that cannot be changed, the basis to change the things that can be changed, and the wisdom to distinguish the one from the other. The beam optical treatment in this book intentionally remains superficial. Only years of studying and practical experience enable one to contribute to the state-of-the-art of beam optics or any other field, but a basic understanding already allows appreciating an expert's work and understanding layout concepts of accelerator systems. The mathematical complexity explodes for beam optics of real accelerator systems, in any case requiring computer codes for evaluation. Numerous codes exist, ranging from older scientific variants such as TRANSPORT to newer versions. Commercial products also exist, for example the specialised code SIMION. Besides these, a few general electro-magnetic simulation tools such as OPERA or COMSOL allow for charged particle electro-magnetic simulations. These finite-element based codes are rather suitable for assessing the true properties of a single optical element than a whole accelerator.
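The share lost at an aperture follows directly from the normal distribution mentioned above; the 1σ case reproduces the 31.73% quoted earlier. A minimal sketch:

```python
# Fraction of a 1D Gaussian beam profile cut away by an aperture at k*sigma,
# matching the 1-sigma example in the text (31.73 % lost).
import math

def lost_fraction(k):
    # share of a normal distribution outside +-k*sigma
    return 1.0 - math.erf(k / math.sqrt(2.0))

for k in (1.0, 2.0, 3.0):
    print(f"aperture at {k:.0f} sigma: {100.0 * lost_fraction(k):5.2f} % lost")
```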


2.3.2 Beam Optical Elements

Having defined the mathematical properties describing a beam, the next task is to establish control of the track and the shape of the beam. Basically, all accelerator applications require the beam to be directed onto a target, changed in diameter, or transported over long distances without relevant beam current losses. Consequently, we require devices for deflecting, de-/focussing, and generally manipulating charged particle beams: the so-called beam optical elements.

Until now no details on the beam particle species were mentioned. The particles can be electrons, protons, heavier or multiply charged ions, or even some less common particles from the standard model. Differences between charged particle species reduce to quantitative differences in the input data required for this formalism. Regarding control of the beam, the main difference between all these charged particle species lies in the particle mass and the mass-to-charge ratio (e.g. about 1:1836 for electrons compared to protons). This factor comes into play when the beam direction or dimension is to be changed by a force, e.g. the Lorentz force. Heavier particles require stronger forces and/or longer application of the forces compared to lighter particles. Physically more complex aspects such as changes in ion charge or bremsstrahlung are strongly connected with the particle species, but are not covered in this purely beam optical treatment.

The name beam optics originates from a similarity of the mathematical formalism with geometrical/ray optics of light. In geometrical optics, rays travel in straight lines through homogeneous media and deflect at optical elements/lenses. The focal length completely describes a lens, making its behaviour concrete. Because of these geometrical aspects, all optical elements can be treated independently. From these conditions a matrix formalism for the description of optical elements derives. The effect of an optical system is completely described by knowledge of the initial beam properties in the form of a vector or matrix, the matrices of the optical elements comprising the system, and a matrix multiplication of the beam vector with all optical element matrices. This formalism was adapted for charged particle beams. Similar to light optics, this formalism only approximates reality, neglecting not only self-interaction effects such as space-charge or intra-beam scattering but also higher order aspects of the optical elements originating from finite length, technical imperfections, and so on. A correct treatment therefore requires computer codes. Nevertheless, geometrical beam optics is a good approximation and allows understanding several important features and physics of charged particle beam optics.

In the last section we defined the particle property vector X with six quantities according to (2.16). Extending this concept to the beam ensemble requires a larger mathematical object. There are two equivalent ways for obtaining this object. Either we extend the 6D single particle vector X to a 6 × 6 matrix M_Beam describing the beam as an ensemble of X. This matrix describes the 6D phase space ellipsoid of the beam, with the determinant of the matrix representing the phase space volume. Every beam optical element changes this matrix, but conserves its determinant, which equals the emittance. Alternatively,


we have to find a function describing how the beam envelope changes throughout the accelerator system, similar to the considerations done with Fig. 2.24. We can find three functions, the so-called Twiss parameters, but for the details of this Ansatz the reader is referred to specific literature. As discussed in the last section, the 6D phase space volume represents an important quantity characterising the population of the individual beam particles. Multiplying the single particle vector X or the beam matrix M_Beam at position s = 0 with optical element matrices R yields the vector or matrix after passing the element at position s, in other words the translated quantity:

$$X(s) = R(s) \cdot X(0) \quad (2.20)$$

$$M_{Beam}(s) = R(s) \, M_{Beam}(0) \, R^T(s) \quad (2.21)$$

with R^T the transposed matrix of R. Many beam optical effects are easier to understand in the single particle picture with its relative coordinates and, due to the mathematical equivalence of treating the whole beam or its independent constituents, we will stick to the single particle view.

The simplest example of an optical element is the drift-tube of length L with the transfer matrix R_Drift. Equation (2.22) tells us the drift-tube effect is given by the off-diagonal elements, which are all proportional to its length L. These elements state the spatial growth/shrinkage of the beam via its divergence and momentum width. The momentum width Δp describes how much the particle momentum differs from the ideal particle momentum corresponding to the accelerator design energy. These size changes are proportional to the tube length L, as geometrical optics dictates by the straight path rule.

$$R_{Drift} = \begin{pmatrix} 1 & L & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & L & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & L\left(1-(v/c)^2\right) \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \quad (2.22)$$

Applying (2.20) and (2.22) to an arbitrary particle yields the particle properties after a drift of length L:

$$R_{Drift} \cdot X = \begin{pmatrix} x + L x' \\ x' \\ y + L y' \\ y' \\ z + \frac{\Delta p}{p_0} L \left(1-(v/c)^2\right) \\ \Delta p / p_0 \end{pmatrix} \quad (2.23)$$
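A minimal numerical sketch of the matrix formalism of (2.20)–(2.23); the beam values are assumptions for illustration:

```python
# Drift transfer matrix (2.22) applied to a particle vector
# X = (x, x', y, y', z, dp/p0), following (2.20) and (2.23).
import numpy as np

L = 2.0        # drift length in m (assumed)
v_c = 0.08     # v/c (assumed, roughly a few-MeV proton)

R_drift = np.identity(6)
R_drift[0, 1] = L                     # x grows with divergence x'
R_drift[2, 3] = L                     # y grows with divergence y'
R_drift[4, 5] = L * (1.0 - v_c**2)    # longitudinal growth with dp/p0

X = np.array([1e-3, 0.5e-3, 0.0, -0.5e-3, 0.0, 1e-3])
print(R_drift @ X)   # e.g. x: 1.0 mm -> 2.0 mm after 2 m of drift
```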


Fig. 2.27 Three construction variants of a dipole magnet. The window-frame magnet (a) offers a compact construction for large beam windows and high magnetic field homogeneity. The H-magnet (b) eases the coil winding compared to the window-frame but requires more magnet iron (grey). The horseshoe magnet (c) eases installation with one side being open, but the open side increases the field inhomogeneity resulting in less ideal behaviour

The multiplication of (2.23) describes a growth of the beam diameter in x and y direction with the drift length L and the divergences x′ and y′. The longitudinal (z direction) length grows with the momentum spread, if the beam is not too relativistic. In the limit case of v = c the contribution to the longitudinal spread of the beam cancels out, because a different momentum only changes the effective mass in the relativistic limit, not the velocity itself.

The first active beam optical element is the steerer, deflection, bending, or dipole element, respectively. Figure 2.27 depicts several technical options for a magnetic dipole element. In its basic functionality this element changes the direction of the beam by a certain angle corresponding to a curvature of radius r₀ inside the element. Technically, an electro-static deflector (a construction equivalent to a parallel-plate capacitor) or a dipole magnet can induce the deflection; hence accelerator physicists often call it just dipole. The deflection depends on all elements of the particle vector X and the construction of the deflection element (Fig. 2.23). Charged particle beams naturally feature certain energy and spatial widths, the phase space volume discussed in Sect. 2.3.1. The deflection element generates different deflections for different X. In other words, the dipole deflects particles, for example, from the left and the right end of the beam to different end-points. Already this simple element demonstrates the importance of looking beyond the "zero-th order" of a single particle. Depending on whether the deflection element works by electro-static or magnetic fields it acts differently on charged particles with different energy and momentum, see the Lorentz force in (2.24).

$$\vec{F}_L(\vec{E}, \vec{B}) = \vec{F}_E + \vec{F}_B = q\left(\vec{E} + \vec{v} \times \vec{B}\right) \quad (2.24)$$
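From (2.24) the bending radius in a pure dipole field follows as r₀ = p/(qB), i.e. the magnetic rigidity Bρ = p/q divided by the field strength. A small sketch with assumed numbers:

```python
# Bending radius of a charged particle in a dipole field, r0 = p / (q*B),
# following the Lorentz force (2.24). Illustrative 3 MeV proton, 1 T field.
import math

q = 1.602e-19          # elementary charge in C
c = 2.998e8            # speed of light in m/s
E0 = 938.272e6 * q     # proton rest energy in J
E_kin = 3.0e6 * q      # kinetic energy in J (assumed)

p = math.sqrt((E_kin + E0)**2 - E0**2) / c   # relativistic momentum
B = 1.0                                      # dipole field in T (assumed)

print(f"rigidity B*rho = {p/q:.3f} T m, bending radius = {p/(q*B):.3f} m")
```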

A homogeneous dipole magnet (no radial field gradient n or edge focussing) for deflecting in the x-axis by an angle α is represented by (2.25). Here r₀ = ρ₀ is the ideal track radius in the element centre. The magnet acts as a drift of length L in the axial/y-direction, hence columns 3, 4, and 5 of the matrix look exactly like the ones


from the drift-tube (2.22). The cos(α) dependence on offset and divergence (first two diagonal elements) tells us this optical element induces different deflections for the ideal particle and particles with non-ideal coordinates (non-zero X). From (2.24) we can directly see that a momentum offset Δp ≠ 0 (= velocity offset Δv ≠ 0) will lead to different magnetic forces F_B and hence different deflection. For offset or divergent particles, longer or shorter paths through the magnet arise, leading to different effective deflection angles α for these particles. The larger the deflection angle α, the stronger the effect.

$$R_{Dipole} = \begin{pmatrix} \cos(\alpha) & \rho_0 \sin(\alpha) & 0 & 0 & 0 & \rho_0(1-\cos(\alpha)) \\ -\sin(\alpha)/\rho_0 & \cos(\alpha) & 0 & 0 & 0 & \sin(\alpha) \\ 0 & 0 & 1 & L & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ -\sin(\alpha) & -\rho_0(1-\cos(\alpha)) & 0 & 0 & 1 & \rho_0\left(\sin(\alpha) - \alpha\,(v/c)^2\right) \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \quad (2.25)$$

The technical freedom in the design of dipole magnets regarding edge angle and field gradient plays an important role for accelerators, in particular for cyclotrons (Sect. 2.2.2). For the functioning of the cyclotron, the dipole elements literally require a cutting edge. The edge focussing was a requirement for the working of the isochronous cyclotron with its radial field gradient. Chamfering the entrance and exit edge in a way that particles with transversal offset see a different path length inside the deflection element leads to a focussing effect. In addition, the distance of the magnet pole shoes can be varied in the direction perpendicular to the beam. This induces a field gradient inside the deflection element, which in turn changes the path dependent deflection. The focussing characteristic is pre-defined by the construction and therefore relatively inflexible, but the dipole magnet is the only single magnetic beam optical element able to focus a beam in both transversal directions.

Quadrupole magnets represent the single function element for beam focussing. A quadrupole features four magnetic poles with the opposing poles having the same field-polarity. Their ideal matrix R_Quad involves the strength factor k of the focussing magnetic field (2.26), with the quadrupole tip field strength B₀, the radius of the open diameter r_ap (=distance centre to quadrupole tip), and the magnetic rigidity Bρ, which is equivalent to the particle momentum p divided by its charge q.

$$k = \frac{B_0}{r_{ap} (B\rho)} = \frac{q}{p} \cdot \frac{B_0}{r_{ap}} \quad (2.26)$$

With (2.26) the transfer matrix of a quadrupole element of length L and strength k reads


$$R_{Quad} = \begin{pmatrix} \cos(\sqrt{k}L) & \frac{\sin(\sqrt{k}L)}{\sqrt{k}} & 0 & 0 & 0 & 0 \\ -\sqrt{k}\sin(\sqrt{k}L) & \cos(\sqrt{k}L) & 0 & 0 & 0 & 0 \\ 0 & 0 & \cosh(\sqrt{k}L) & \frac{\sinh(\sqrt{k}L)}{\sqrt{k}} & 0 & 0 \\ 0 & 0 & \sqrt{k}\sinh(\sqrt{k}L) & \cosh(\sqrt{k}L) & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & L\left(1-(v/c)^2\right) \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \quad (2.27)$$

A single quadrupole magnet focusses only in one direction. In (2.27) the x-axis features a negative contribution for the x′ component (2nd row) and a positive one for the y′ component (4th row), namely it reduces the divergence in x and increases it in y. In order to focus the beam in both transversal directions x and y we require two quadrupole magnets rotated by 90° against each other (=inverted current flow direction). The four non-zero x and y components switch for the 90° rotated element. Each quadrupole will focus in one direction and defocus in the other direction. The net result is a focussing in both transversal directions, since the defocussing effect is slightly smaller than the focussing effect (sin vs. sinh), see (2.27). This focussing system is called a quadrupole doublet or also the FODO (focussing, drift, defocussing, drift) structure as shown in Fig. 2.28.

Thinking in the form of optics we can derive the focal length f in the focussed axis of a quadrupole magnet from (2.26) and (2.27), giving us a practical estimate for layouts of focussing systems. Equation (2.28) tells us the focal length f shrinks with stronger, longer, and smaller open diameter quadrupole magnets and grows for heavier particles.


Fig. 2.28 The quadrupole doublet or FODO structure. The upper line shows the horizontal, the lower the vertical plane. Particle beams naturally feature a non-zero divergence, hence require beam optics for confinement. Focussing, defocussing, and drift parts form the charged particle equivalent of an optical lens. The FODO structure as a part of a unit cell allows for a confined or loss-less transfer of the beam over long distances, e.g. several turns around a synchrotron storage ring


$$f = \frac{1}{\sqrt{k}\sin(\sqrt{k}L)} \approx \frac{1}{kL} = \frac{r_{ap}}{B_0 L} \cdot \frac{p}{q} \quad (2.28)$$
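A small sketch of (2.26) and (2.28) with assumed magnet parameters; the thin-lens approximation 1/(kL) holds for short magnets:

```python
# Quadrupole strength k (2.26) and focal length f (2.28); values assumed.
import math

B0 = 0.3        # pole-tip field in T (assumed)
r_ap = 0.02     # aperture radius in m (assumed)
L = 0.05        # magnet length in m (assumed)
B_rho = 0.25    # rigidity p/q in T m (about a 3 MeV proton, see above)

k = B0 / (r_ap * B_rho)                                 # in 1/m^2
f = 1.0 / (math.sqrt(k) * math.sin(math.sqrt(k) * L))   # focussing plane
print(f"k = {k:.0f} 1/m^2, f = {f:.3f} m (thin lens: {1/(k*L):.3f} m)")
```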

The minimum beam spot radius achievable with a quadrupole and a given beam type depends on the beam emittance ε and the quadrupole focal length. Analytical applications require small spot sizes for scanning a sample surface with high spatial resolution, and production applications reach higher beam power density with smaller beams. As emittance (remember its units: m (length) times rad (angle)) is conserved, the focussing basically squeezes the phase ellipse of Fig. 2.24 as much as possible into the upright position. This decreases its size, but increases its divergence x′ due to conservation of the surrounded area, leading to a natural connection between small spot size and large divergence, which equals a short focal length. To derive the minimum spot size we create a FODO/doublet structure by multiplying the transfer matrices:

$$R_{Doublet} = R_{Quad-Hor} \cdot R_{Drift} \cdot R_{Quad-Ver} \cdot R_{Drift} \quad (2.29)$$

Equation (2.29) assumes we start with a certain beam size x_max and divergence x′_max (according to Fig. 2.24) at the FODO entrance, both identical quadrupole magnets (both of focal length f) focus in different transversal directions and have a distance d between their centres (d ≥ size of quadrupole magnet), and a distance s of the last magnet's centre to the focussing point. We assume a diagonal beam matrix M_Beam, equivalent to a beam waist at the FODO entrance, for simplicity and consider only the x-direction with the first matrix element x²_max and the second diagonal element x′²_max:

$$x_{max}(s) = \sqrt{x_{max}^2 \left(1-\frac{d}{f}\right)^2 + x_{max}'^2 \left(d + s\left(1-\frac{d}{f}\right)\right)^2} \quad (2.30)$$
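Rather than minimising (2.30) analytically, a quick numerical scan over s already shows the behaviour; all input values are illustrative assumptions:

```python
# Spot half-width behind a doublet following (2.30), scanned over the
# distance s to locate the focus; all inputs are assumed values.
import numpy as np

x0, xp0 = 2e-3, 1e-3     # entrance half-width in m and divergence in rad
f, d = 0.30, 0.35        # focal length and quadrupole distance in m

s = np.linspace(0.0, 3.0, 3001)
x_max = np.sqrt((x0 * (1 - d/f))**2 + (xp0 * (d + s * (1 - d/f)))**2)

i = np.argmin(x_max)
print(f"minimum spot half-width {x_max[i]*1e6:.0f} um at s = {s[i]:.2f} m")
```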

We could continue by finding the minimum for s followed by solving for d or f, but this becomes mathematically exhausting. The main conclusions can already be drawn from (2.30): The minimum spot radius x_max(s) is defined by the beam emittance (see Eq. 2.17) and the magnet strength. Stronger magnets will reduce f, but we have to keep the distance d close to f in order to minimize (1 − d/f). Technical reality adds up, since if we decrease f by building longer magnets (L in Eq. 2.28) we will increase the lower limit of d accordingly, but d has to be minimized too. Small spot sizes therefore require high magnetic field strength and small beam apertures (which is equivalent to smaller beam currents). If we entered the FODO setup with a minimal divergence (x′_max ≈ 0), the aperture changes the emittance by defining x_max, but the beam current reduces via the ratio of the areas of initial beam to aperture weighted by the beam shape (e.g. normal distribution). The technological limit of spatial resolution (=small spot sizes) comes down to technological limits of the focussing strength and the beam quality. Additionally, practical devices never feature


only pure quadrupole fields; due to manufacturing and alignment, aberrations increase the real minimum spot size above the one of the ideal lens discussed here.

All lenses steer: a general rule for understanding practical aspects of beam optics. A beam entering a quadrupole lens off-axis will see a dipole field aspect due to its proximity to one of the pole-pairs: the more off-axis, the stronger the dipole aspect. This dipole aspect steers the beam in addition to the intended focussing. In extreme cases, the beam shape becomes distorted, in critical cases resulting in a sickle shape. A slight misalignment remains practically unavoidable; hence the steering effect belongs to a focussing lens like cheese to pizza. The next section will discuss how to become aware of this and how to find information for correcting the problem.

So far we discussed only magnetic optical elements. In applications, these are mostly found with ion beams of at least some MeV, since magnets offer the higher deflection strength required for heavier and faster particles. Electron applications usually feature lower energies, which in combination with the lower particle mass allows for applying electro-static elements. The workings are similar to magnetic elements, but the deflection strength of electro-static elements decreases with particle energy (as electro-static fields change energy but electric breakdown limits their voltage potential), compared to the deflection strength decrease with particle momentum/rigidity p/q (momentum/charge) experienced with magnetic elements (2.24 and 2.26). We will see an example highlighting the difference in the next section. Their relevant technical advantage is their low power consumption, since electro-static fields are generated by static charges (=zero current), while magnetic fields are generated by moving charges (=current). The electro-static equivalent to the dipole magnet is the parallel-plate capacitor like deflection plate. Electro-static focussing requires the so-called Einzellens, see Fig. 2.29. The Einzellens focusses in both transversal directions via a deceleration-acceleration structure. The two outer cylinders are grounded, while the focussing voltage is applied to the central cylinder. The bent electric field lines in combination with the changing particle energy and the energy dependent deflection strength lead to a net focussing of equivalent strength in both transversal directions.

Arranging the many different beam optical elements in a useful manner represents a science on its own, but a set of standard configurations has been established for the common tasks of focussing and beam manipulation. Figure 2.30 depicts an example of a standard configuration of an electron microscope. Modern electron and ion


Fig. 2.29 Sketch of an Einzellens with grounded outer and biased inner electrode rings. The central electric field (lines) first decelerates the beam, broadening it. When leaving the lens, the beam is accelerated and focussed by the same potential


Fig. 2.30 Example of a common beam optics system with two apertures and three lens systems. This system is common for analytical devices, e.g. electron microscopes. Varying the lens properties/settings allows for a high degree of flexibility in the beam properties on the target/sample indicated by the 3 configurations. Towards the right, the focal length of the final lens reduces, decreasing the minimum beam spot size on the sample (2.30) at the expense of requiring a higher focussing strength. The divergence symbols indicate the required electric and/or magnetic field strength in the lens. When focussing with quadrupole magnets, each lens has to be understood as a doublet

microscopes use these systems with electro-static (3 Einzellenses in this example) and magnetic components up to about 1 MeV. At higher energies, magnetic elements are employed, but the technical concept remains the same, just the devices are exchanged. The variability of the system allows for several different operation modes. The left mode starts by forming a parallel beam, from which an aperture cuts out a part entering the lower stage. Here a defocussing followed by a focussing allows for a minimum spot size, since the stronger the focus, the smaller the minimum spot size. The second and third settings produce an intermediate beam-waist, allowing for a variable and potentially higher beam current after the second aperture. The lower part either defocusses and focusses for a maximum resolution or double-focusses for a more parallel beam on the target in order to decrease the angular beam spread, equivalent to an increased depth of field. Depending on the analytical requirements, the electron microscope can be set for maximum spatial resolution, depth of field, or sample current/signal intensity by varying only the focal lengths to one of the presented settings.

Laboratories applying higher energy charged particles often use several end-stations for different purposes sharing a common accelerator and particle source. Each chamber is optimized for a specific question or task. A combination in one lab allows for reduced down-times and a better exploitation of the expensive accelerator, reducing the overall costs. The ion beam analytical laboratory depicted in Fig. 2.31 uses a DC accelerator for material analysis. A central dipole magnet switches the beam to the individual analytical stations on the left hand side by varying


Fig. 2.31 Example of an ion beam analysis laboratory with a Cockcroft-Walton DC accelerator, dipole magnets (blue) and quadrupole magnets (red squares) connected to three different analysis end-stations. The central large dipole magnet can be set to different deflection angles α to supply the beam to the individual experiments/end-stations. Reproduced with permission by IAEA (Mayer 2019)

the magnetic field strength and direction. Each end-station has its own beam optical system optimised for small spot sizes, heavy ions, or high beam current/transmission.

2.3.3 Diagnostic Elements

Accelerators are complex and often rather large devices with tight tolerances. The installation and alignment of beam optics consequently becomes time consuming and delicate. The first tests after start-up aim at fine tuning the alignment of beam optical components to enable beam transport from source to target in the form of a non-zero current on the target. In cases of intense focussing, long beam paths, or high currents, beam optics have to be aligned to the ideal beam path within a few μm over distances of several metres. Forces induced by the magnetic fields, creep of materials over time, or temperature changes can furthermore alter alignments dynamically during operation. An example: laboratory temperatures vary by 1–3 K over the day due to sunlight and working activities, without temperature stabilisation. The thermal expansion of steel of 17 μm/(m K) then results in a height variation of a steel beamline support of 1.5 m height of up to 51 μm, a lot for microscopic applications with sub-μm beam spots. Therefore, the beam position and shape have to be frequently monitored and adjusted, and sufficient amounts of correcting elements have to be foreseen to be able to handle these effects if beam optics, end-station, and accelerator rest on different supports.

Diagnostic/analytic elements provide the eyes and ears to the accelerator operator by measuring the actual beam position and its 6D phase space properties in the form of the beam vector X (2.16), or at least parts of it. The beam energy is usually known from the accelerating voltages, but a more precise determination might be required. Regarding physics, this translates to finding physical mechanisms to make

2.3 Ion- and Electron Beam Optics

65

these quantities accessible for a measurement. Technically, we have to find feasible solutions and align them to an accuracy such that the ideal physical mechanism we thought of is reproduced to a sufficient accuracy. The measurement, in most cases, only indirectly determines the sought-after quantity, therefore a physical model is required to recalculate the measured quantity. Following this, the accuracy in determining the quantity is limited by the accuracy of the model (technical challenge of alignment …) and the measurement accuracy as dictated by the laws of error propagation (discussed in Sect. 7.1.1, e.g. in (7.3)).

An example: we measure the current density ρ_B of a beam using a Faraday cup. Do not worry, its physical concept will be discussed shortly hereafter. We need some kind of ampere-meter for measuring the current I, say for the range of μA to mA, since we expect ρ_B ≈ 1 mA/mm² and we want a 1 mm spatial resolution. Here standard ampere-meters offer an accuracy of 0.1% or 10⁻³, respectively, plus some measurement range dependent fixed uncertainty; say for a range of 0–1 mA we have ΔI = 0.1% ± 0.1 μA. Measuring a current of I = 1 μA therefore leads to an uncertainty of 10%, due to the fixed uncertainty contribution! Accordingly we design the Faraday cup to feature a sensitive area (opening) large enough to collect a current I of 0.1–1 mA (the magnitude of lowest relative uncertainty). Technically, this sensitive area has a manufacturing diameter tolerance of, say, d = 1 ± 0.01 mm (H7 tolerance for drill-holes). In total this (simplified) example allows us to measure the beam current density with an uncertainty Δρ_B given by

$$\Delta\rho_B = \sqrt{\left(\frac{4 \Delta I}{\pi d^2}\right)^2 + \left(\frac{-8 I \Delta d}{\pi d^3}\right)^2} \quad (2.31)$$
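A quick numerical check of (2.31) with the numbers from this example:

```python
# Error propagation for the Faraday-cup current density, following (2.31).
import math

I = 1.0e-3                  # measured current in A
dI = 1e-3 * I + 0.1e-6      # 0.1 % relative plus 0.1 uA fixed uncertainty
d = 1.0e-3                  # opening diameter in m
dd = 0.01e-3                # manufacturing tolerance in m (H7)

rho = 4 * I / (math.pi * d**2)                    # current density in A/m^2
drho = math.sqrt((4 * dI / (math.pi * d**2))**2
                 + (8 * I * dd / (math.pi * d**3))**2)
# relative uncertainty of about 2 %, dominated by the diameter tolerance,
# matching the +-0.02 mA/mm^2 quoted in the text
print(f"rho_B = {rho/1e3:.2f} mA/mm^2 +- {100 * drho / rho:.1f} %")
```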

If the actual ρ_B = 1 mA/mm², the device would measure a ρ_B of 1 ± 0.02 mA/mm² (=0.98–1.02 mA/mm²) according to the error propagation used in (2.31). The manufacturing tolerance of the Faraday cup entrance dominates the uncertainty in this case. Reducing the uncertainty would require better tolerances or a larger sensitive area. We could increase the opening diameter at the expense of spatial resolution, as a cost efficient method to reduce the relative manufacturing uncertainty. As soon as the manufacturing tolerance becomes small against the current measurement accuracy, we will reach a technical limit. Increasing the integration time of the ampere-meter would allow for better noise filtering, reducing its uncertainty at the expense of its time resolution. Buying more precise (and expensive) ampere-meters and manufacturing tools would also improve the device accuracy. In conclusion, the technical design of this Faraday cup, and actually of all diagnostic devices, comes down to finding an optimal point of minimum uncertainty with respect to all uncertainty contributions, the aims of the measurement, and the available resources.

The Faraday cup, named after the concept of the Faraday cage, is a tool for accurate beam current measurement. Its design allows suppressing the emission of secondary


electrons released upon impact of a particle beam on solid matter (Sect. 4.1). For positively charged ions these escaping electrons constitute a current of opposite sign, hence increasing the apparent beam current. For electron and negative ion beams, the effect reduces the apparent beam current. Secondary electrons are released by collisional energy transfer from beam particles to bound electrons at the impact surface. The cup suppresses this emission by catching the electrons in a cage around the impact location, as depicted in Fig. 2.32. The ratio of length to open diameter represents an important layout parameter. The geometrical suppression efficiency derives from the ratio of the half-space solid angle over the opening solid angle weighted with the emission shape. This emission shape follows approximately a cosine of the angle to the surface normal (emission mostly towards the surface normal). An inclined target surface therefore further improves the geometrical efficiency by pointing the surface normal to a side-wall. By applying a positive bias voltage of typically some 100 V on the cup against the vacuum chamber ground or a surrounding cage, the suppression efficiency further increases. The secondary electron energy spectrum is broad, but its maximum lies at a few ten to hundred eV. Only the combination of geometric and electro-static suppression catches close to 100% of the secondary electrons, enabling a true beam current measurement.

Besides the beam current, the Faraday cup yields also positional and dimensional information of the beam via its own position and open diameter. More precise information is the domain of beam profile and beam position monitors (BPM). Their technological goals are determining position, dimension, and divergence (the 6D phase space) of the beam. In large circular accelerators deconfining resonances are avoided by online tuning of the beam according to BPM data. In applications, BPMs visualize the beam, enabling adjustment of the optical elements for maximum beam transmission, in particular for daily start-up or after technical changes. In any case we aim at providing a high level of reproducibility of the beam conditions at a relevant position (where the beam is used) while the conditions change at other non-relevant positions (e.g. by thermal drifts, maintenance …) in the accelerator system. BPMs feature a particularly broad range of technological options. The simplest realisation of a BPM is a white paper which blackens upon beam impact, see Fig. 2.33. The reversible, yet more costly, alternatives are scintillator


Bias and current measurement Fig. 2.32 A Faraday cup, measuring the beam current by geometrical and electrostatic secondary electron suppression. The small arrows represent secondary electrons emitted in the open half-space


Fig. 2.33 Left: 100 μm diameter 3 MeV ³He beam spot visualised by heat induced changes in a millimetre paper. Right: beam spot (≈1 × 0.2 mm²) visualized by scintillating ZnS particles […]

Fig. 2.40 Ionisation cross-section of hydrogen versus electron impact energy. The graph shows the cross-section for the ground state (1s) and excited states (*) of the atom, the molecule and the molecular ion. The ion source receives gas (H2 molecules) and has to convert these stepwise to molecular (H2 + ) and atomic (H+ ) ions. A maximum in splitting cross-section of the H2 molecule lies at about 50–100 eV, corresponding to typical discharge voltages in the ion sources. Data from Hydkin (Reiter 2019)


In hydrogen ion sources, as the most important example, first the H₂ molecule needs to be split by electron impact. A set of different species is generated in this first step. Atoms (H), atomic ions (H⁺, H⁻), molecular ions (H₂⁺, but also H₃⁺), and excited variants of each species populate the plasma along with the free electrons after this step. Figure 2.40 demonstrates the importance of intermediate excited states in reducing the energetic barrier for further ionisation, indicated by the two orders of magnitude difference in ionisation cross-section between the 1s ground state and the 3s excited state of the H atom. Interestingly, in most ion source setups the initial ionisation of the H₂ molecule limits the whole ionisation process, requiring discharge voltages of >50 V (→ >50 eV electrons) for effective ionisation. Noble gases such as helium have simpler ionisation schemes since no molecular configuration is involved, allowing for direct ionisation.

A possibility for reducing the conflict of excitation and ionisation is the spatial separation of the different interaction steps by staging the plasma temperature, for example in an extended high temperature, low density plasma zone with efficient dissociation and a second lower temperature but higher density plasma zone for atomic ionisation and ion extraction. The Duoplasmatron picks up this idea of separate function plasma zones (Fig. 2.41). Its modern derivative, the multi-cusp source (Fig. 2.41 right), represents the state-of-the-art with improved confinement by applying the Halbach array. Permanent magnets outside the whole plasma chamber introduce a close mesh of arcs of magnetic field lines, leading to a drastically improved confinement of electrons as depicted in Fig. 2.38. Additional magnetic filter fields

Fig. 2.41 Basic sketch (left) of the Duoplasmatron ion source. The plasma expands from the filament chamber towards the anode, forming plasmas in two regions. The plasma focusses in the lower region, leading to increased ion density. An extraction voltage draws the ions from the plasma and accelerates them. Basically all charged particle sources whether electron or ion source consist of a production feature (emitter or plasma), an electrostatic beam focus, and an acceleration stage similar to the Duoplasmatron source. Right: the multi-cusp source adds a Halbach array (cusps) around the chamber, reducing transport losses and staging reaction zones (Kuroda 1997), Reprinted with permission by Elsevier


improve the plasma staging for further increased efficiency or specialisation to negative ion extraction.

The Duoplasmatron physics were intensively studied, with some interesting general conclusions for the maximum extracted ion current density ρ_B (Lejeune 1974). The input power in the form of the discharge voltage and current linearly increases ρ_B up to a hydrogen pressure dependent limit. At this point, the ionisation degree reaches a threshold and additional power will be invested rather in the plasma temperature than in its density, resulting in a detrimental scaling with input power. The neutral gas pressure of the plasma discharge represents the main degree of freedom for tuning between plasma temperature and density in a given design, with values usually in the fine vacuum range (Sect. 2.1, Table 2.1). Increasing the pressure/gas input rate will reduce the ionisation degree and allow for a further ρ_B increase up to the power handling limit of the source. The magnetic field strength increases ρ_B with a square-root proportionality due to the reduction of transport losses with increasing magnetic field strength. Even with an optimised source design, the source typically generates lower ρ_B for heavier elements due to their higher mass and correspondingly smaller flux (2.40). These heavier elements can also be extracted as multiply charged ions, but as the higher ionisation steps typically require greater electron impact energies, the conservation of energy demands even lower ρ_B. Last but not least, the dimensioning of the plasma via the source geometry affects ρ_B. A narrower bottom plasma, see Fig. 2.41 left, produces higher ρ_B due to increased compression of the power from the top plasma, but at a certain ratio (typically 1:10) a maximum is reached since shrinking the bottom plasma also increases the transport losses due to the reduced wall-distances.

The injected resource gas flows through the source exit into the accelerator system together with the extracted ions. The gas pressure in the beam line should be as low as possible, with typical values in the UHV range. A recovery of the lost neutral gas is difficult due to the mixing with other pumped species, leading to a gas consumption of the ion source. The recovery represents an interesting aspect when rare isotopes (e.g. ³He or ¹⁸O) are used, considering usually […]

[…] a high transmission (>90%) beam optical system (Sect. 2.3) yields lower activity than a high source output with a low transmission on the high energy side. A high transmission equals only little interaction with apertures and vacuum tubing, resulting in less beam-on and inventory radiation. Nevertheless, the system will not be a straight tube, and the tube diameter in combination with the local acceptance (2.19) will tell us how many σ of the beam (the current to the wall) are lost at a specific position. Take a design as depicted in Sect. 2.3.2, Fig. 2.30 or Fig. 2.31, and think about where particles collide with the walls. Beam optical elements, apertures, dumps, and samples emerge as critical points. The ultimate limit lies in the statistical nature of the beam, resulting in a certain fraction of the beam hitting the vacuum walls as depicted in Fig. 2.26.

Let us consider an actual example. A 16 MeV DC accelerator is set up for continuous deuteron ion beam operation with 1 mA beam current. In the frame of the device planning, the beam tube material needs to be selected. Commercially, either the stainless steel 316L or the aluminium alloy 6082 are available for standard CF type vacuum tubing.
Both alloys contain a set of elements and impurities. From the ion source emittance and beam-optics, we expect a flux of 1 μA (10⁻³ of the beam) lost to the vacuum tubing. A nuclear inventory code (Sect. 3.5) allows for calculating the expected nuclear inventory to optimize the device layout for minimum activity. Both materials reach significantly different activities after 1 year of assumed non-stop device operation at maximum power. 316L reaches an activity of 2.8 × 10¹³ Bq, while Al6082 only reaches 1.4 × 10¹³ Bq. In fact, the iron component in the aluminium alloy provides the largest contribution to the material's activity, as it does


also in 316L, due to the long half-life of ⁵⁵Fe of 2.74 years. The nuclides of lower importance differ in both materials. While for Al6082 the second highest activity relates to tritium with a 31% contribution, tritium contributes only 2.7% to the 316L activity. ⁵⁴Mn originating from activation of Cr represents 27% of the activity in the stainless steel 316L, but only 1% in 6082. Both materials lose about 97–99% of their activity and dose rate after 1 year. The biggest difference is the radiation dose rate level, which lies for 6082 a factor 10 below the level of 316L, due to the different nuclides present in both materials.

The selection of proper materials (and their purity/quality) represents a major point of radiation protection in accelerator applications. The example shows aluminium based vacuum components reduce the dose rates by a factor 10 in the investigated case, extending the safe range of beam energy and power and extending the possibility for workers to enter and maintain the device. Materials are important for the radioactive isotope inventory, where the isotope specific dose rates come into play, but also for beam-on radiation in beam dumps, tubes, and analytical components such as Faraday cups. The emission of bremsstrahlung by electron beams increases for heavier elements, while for ion beams neutron emission, and with that the radioactive isotope producing nuclear reactions, can be avoided completely with heavy elements, at least up to some MeV (e.g. 5 MeV for proton beams on Ta). At higher energies, detailed calculations become necessary, as demonstrated in the example above, since the combination of energy, projectiles, and materials defines the possible products and activity (Sect. 5.1).

Distance

The key approach to increasing the distance between source and staff is remote control. Remote control allows for practically infinite freedom in the position of the operator. With modern computer systems the operator could even be on the other side of the world. Nevertheless, regulations and common sense require staff to be on-site to be able to handle problems. Finding the balance in this interplay of normal and off-normal operation and understanding where and when human interaction and manual work is required is the key to efficient and safe distance concepts.

On the downside, increasing levels of remote control become increasingly costly. Above we discussed the difference between stainless steel and aluminium vacuum tubing. Vacuum technology is not designed for remote installation, but in applications standard parts, designed for manual installation, are used. Standard parts offer lower cost at higher quality than custom parts, a difference in cost that adds to the remote control costs. What if a pump breaks or a vacuum leak appears? These unforeseeable situations have to be foreseen in a plant conception. Paradoxical, but otherwise the whole system becomes, physically speaking, unstable.

Divide and rule: the concept of separating a larger task into independent subtasks helps in this respect. Concentrating different radioactive activities such as a cyclotron, the beam optics, the vacuum pumps, and the patient treatment or target, respectively, in separated rooms (often called bunkers due to the thick walls) of a building represents an ideal situation. Each room/laboratory layout can follow


Fig. 2.55 Tweezers, a simple yet efficient way of increasing distance to radioactive material

the specific requirements, while nuclear inventory present in one room loses its importance for maintenance or operation (e.g. patient treatment) in another room. The same applies to the transport of radioactivity and the disposal of defective parts.

Human handling of nuclear inventory always bears the risk of contamination (sticking of radioactive isotopes to body or equipment) and loss of material. Generally, the legislation rightfully distinguishes between enclosed radioactivity (in a rigid container so it cannot be touched) and open radioactivity (touchable, with the potential for contamination). Depending on the form of the radioactive materials and the tasks, different systems apply. For low levels of activity, grippers and tweezers (Fig. 2.55) allow for increasing the distance to radioactive samples by a few 100 mm, decreasing the radiation dose to the hands by orders of magnitude compared to finger handling. Isotopes with primarily α- and β-activity bear mostly incorporation risks. In this case a fume hood reduces the risk through the constant airflow directing released radioactivity away from the worker. With increasing activity, glove-boxes increase this barrier efficiency via a hermetic sealing between radioactivity and worker. Isotopes with relevant photon emission require additional distance compared to the short-ranged contamination and incorporation risk. In this case, hot-cells with mechanical manipulators and lead-glass windows are required. Rabbit systems and conveyors inside the hot-cell allow dropping solid materials into shielded containers for further transport. In particular for medical applications, piping for gaseous and liquid products (e.g. ¹⁸F for PET) directly connects the isotope production target with the chemical processing plant in the hot-cell, requiring no further human interaction with the accelerator exposed part.

Complex mechanical operations such as tightening a screw or aligning devices require complex robotic arms. Extreme examples of robotic remote handling arms are the systems for nuclear fusion reactors such as JET and the upcoming ITER. The arms feature many joints to move around the donut shaped vessel, with total lengths of several tens of metres, in order to access functional components and replace them, see Fig. 2.56. Operating these large scale arms requires new forms of man-machine interaction such as virtual reality to find, inspect, and hit the tiny parts to be replaced. The extreme costs and the technological difficulties of running such a system in a highly radioactive vacuum environment (considering e.g. greases for the joints) are justified by the reduction in down-time necessary to enter the device, or probably by being the only possibility to enter the devices at all.


Fig. 2.56 3D model of the MASCOT system designed at the JET fusion experiment. Two robotic arms allow replacement of complete wall modules inside the reactor. Similar systems are planned for future nuclear fusion reactors in order to reduce down-times in case of accidents and regular maintenance in spite of the extreme activity of the nuclear inventory. Reproduced from Imperial College London (wwwf.imperial.ac.uk/blog/student-blogs/2016/03/04/jet-is-cool)

Exposure time

Modern accelerator systems allow for complete remote operation, reducing the exposure to maintenance periods with deactivated accelerator (besides the treatment of patients in medical applications). This situation avoids the exposure to beam-on radiation via entrance control and safety interlocks. These systems switch off the accelerator if doors to the accelerator room are opened or critical radiation dose rates are reached in certain locations. The exposure to activation induced nuclear inventory remains. Many of the produced isotopes have only short half-lives, therefore the waiting time before entering a facility with radioactive inventory becomes important. Figure 2.57 illustrates such a situation where already after minutes 90% of the initial activity disappears, while after 1 day 99% of the activity has decayed. Knowledge of the produced isotopes and their properties becomes a critical point in avoiding relevant exposure.

Upon entering a room of relevant nuclear inventory, planning and strict execution of the work reduce the exposure time. A single person executing the work reduces the exposure to others. The German principle of supervising every working person with three non-working persons just multiplies the exposure time by four. Colleagues should be ready to help in case of problems, but wait at a safe distance or in a rotating duty scheme. If waiting is not sufficient for avoiding exposure, or urgent interaction is required, control systems and detectors help identifying critical locations and assessing the possible exposure/working time before reaching critical dose levels at these locations. So-called Electronic Personal Dosimeters (EPD, Fig. 2.58) fit directly on the clothing of exposed staff. These devices measure the dose and dose rate and visualize them to the affected person. Specific dosimeters exist even for fingers and eyes to assess exposure time in manual work. The directional and radiation type sensitivity of these devices is not flat, leading to an uncertainty of the obtained results, but their local


1E+11 C-14 Mg-27 Si-27 Mn-57

Fig. 2.57 Calculated activation and decay of the Nova-ERA neutron-source beryllium target disc (mass = 6.5 g) after 2000 h of irradiation with 1 mA, 10 MeV, and 4% duty cycle. Already after 1 min the activity decreases by a factor 10 due to the decay of ⁶He. Over the years the dominant nuclide changes from ⁶He over ⁵⁶Co to ¹⁰Be. From Mauerhofer et al. (2017), published under CC BY 4.0

Fig. 2.58 An EPD used for direct personal dose rate estimation of photon (γ) and electron (β) doses. Correct placement, technical properties, and orientation towards the radiation source represent the main difficulties in application


information is the only means of knowing the personal exposure time and the related risks.
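The waiting-time argument behind Fig. 2.57 follows directly from the decay law A(t) = A₀ · 2^(−t/T½) summed over the inventory. A minimal sketch with an assumed three-nuclide inventory (illustrative activities, not the published Fig. 2.57 data; the half-lives are the known physical values):

```python
# Decay of a mixed nuclear inventory after shutdown; activities are assumed
# illustrative values, half-lives are known physical constants.
inventory = {               # nuclide: (activity in Bq, half-life in s)
    "He-6":  (9.0e9, 0.807),
    "Mn-56": (5.0e8, 9.28e3),
    "Fe-55": (1.0e8, 8.65e7),
}

def total_activity(t):
    return sum(A0 * 2.0 ** (-t / T12) for A0, T12 in inventory.values())

for label, t in [("1 min", 60.0), ("1 day", 8.64e4), ("1 year", 3.16e7)]:
    print(f"after {label}: {total_activity(t):.2e} Bq")
```

The short-lived nuclides dominate the initial activity but disappear within minutes, exactly the behaviour exploited by a short waiting time before entering.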

2.7.3 Shielding

In the last section, general strategies for avoiding human exposure to radiation were discussed, but the last A was missing so far. Among the four A's, shielding (Abschirmung) represents the passive technical solution, while the others are rather of an organisational nature. Shielding enables the use of standard equipment and non-radiation exposed staff by passively reducing the dose rates, hence the additional costs of shielding pay off even for technical devices. Shielding offers the further advantage of not only protecting humans but also important technical equipment such as electronics or detectors without excessive distance to the point of interest, since it further decreases the radiation dose rate beyond the pure distance scaling of (2.49). Figure 2.59 depicts such a shielding configuration for a sample observation system. The avoidance of a direct line between radioactive source and sensitive subject allows for placing shielding to further reduce the received dose.

The physical basis of shielding is the interaction of radiation with matter. We will discuss this in more detail in Chap. 3, so the reader might want to come back to this section later. The four different types of radiation (neutron, ion, electron, and photon) interact physically very differently with matter, hence we will discuss these four cases separately.

Photons

$$I(x) = I_0 \cdot e^{-\mu x} \quad (2.50)$$

The shielding of photons emitted from beam-on or decay radiation follows simple physics. Photons interact with the electrons in the shielding material. The more electrons between you and the radiation source the stronger the reduction of photon dose rate. More electrons mean more atoms mean more material thickness and density


Fig. 2.59 Principle of a shielding setup for protecting an observation camera. A metal mirror does not suffer from photon radiation, hence it can be placed in high radiation areas. This 90° geometry typically reduces the received radiation dose by >90%. The principle can be stacked for further reduction. The maze-like entrance structures of radiation protected labs follow the same principle


Fig. 2.60 Shielding thickness dependent dose rate of 1 mg of F-18 (medical tracer) and Co-57 (protons on iron) at 300 mm distance to a point source, versus lead shield thickness in mm. 13 mm of lead absorb 90% of the F-18 radiation, while Co-57 requires only 0.1 mm. The Co-57 dose rate (μSv scale) with 0 mm shielding is even 39x higher than the F-18 dose rate (despite the Sv vs. μSv scale). The required shielding for F-18 is rather thick, due to the high energy (511 keV) photons emitted from the positron annihilation of its β+ decay. Co-57 emits mostly photons and electrons below 140 keV, requiring less shielding thickness

equalling more shielding. This leads to the so-called mass attenuation coefficient μ and the exponential decay of photon intensity I with shielding thickness x according to (2.50). The mass attenuation coefficient depends on the shielding material and the photon energy, leading to different situations for different photon spectra as depicted in Fig. 2.60. The figure demonstrates the high importance of the photon energy, as the lower energy radiation from Co-57 requires significantly less shielding thickness compared to the one of F-18.

On the downside, photons can never be absorbed completely, in contrast to all other radiation types. Every electron represents only a certain absorption probability for photons. The shielding cuts down a certain percentage of the dose, but a part of the radiation always remains, since (2.50) reaches zero only asymptotically. Therefore, often the tenth value (90% absorbed = radiation reduced by a factor 10) is given for shielding materials. Lead is the standard material for photon shielding as it features a high density at a relatively low price, but also iron and concrete are applied. Tungsten represents the best shielding per volume, but at much higher costs. We have to consider that the photons discussed here (and generally considered in radiation protection) have at least a few keV of energy and up to about 10 MeV due to limits of nuclear decay physics. The transparency known from visible light has no relevance for these photons, as the absorption/attenuation coefficient strongly depends on photon energy. In the energy range considered here the absorption physics becomes simpler, combining only a few processes as depicted later in Fig. 3.3. Their efficiency drops by a factor 10⁵ from 1 keV until about 1 MeV and stays mostly constant from there on.
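A small sketch of (2.50) and the tenth-value thickness x₁/₁₀ = ln(10)/μ; the attenuation coefficient is an assumed order-of-magnitude value for 511 keV photons in lead, not a tabulated one (exact values can be looked up, e.g. in the NIST XCOM database):

```python
# Photon attenuation following (2.50) and the tenth-value thickness.
import math

mu = 1.75      # effective attenuation coefficient in 1/cm (assumed,
               # order of magnitude for 511 keV photons in lead)
x = 1.3        # lead thickness in cm (the 13 mm quoted for F-18)

transmission = math.exp(-mu * x)
x_tenth = math.log(10.0) / mu
print(f"transmitted: {100*transmission:.0f} %, tenth-value: {x_tenth*10:.0f} mm")
```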


The attenuating processes do not depend on the electronic structure of the material; therefore, a shielding material can be transparent in the visible spectrum while opaque for high-energy photons (and vice-versa). Transparent shieldings are typically oxides of heavy elements such as PbO (lead glass). Lead glass windows allow for manual operations with radioactive products, for example for the preparation of nuclear medicine products or scientific samples in hot-cells.

Electrons

Electrons, as charged massive particles, show a completely different behaviour than photons. Electrons feature a relatively well-defined range in matter. In contrast to photons they do not disappear one after the other, but like a car that has run out of fuel they continuously lose kinetic energy on their way through the shielding until they stop completely. Unfortunately, they raise dust on their way in the form of Bremsstrahlung. This secondary radiation represents the complication of electron shielding, as the photons feature a broad energy spectrum up to the electron energy, with all the implications raised above. The use of light materials with low stopping power (Sect. 3.2) reduces the amount of photons raised. This requires a second shell of heavy elements for absorbing the emitted photons as stated above. Consequently, X-ray sources, as used for example in medical imaging (Sect. 4.3.1), do the exact opposite in order to produce intense radiation.

Ions

Due to the higher mass of ions, Bremsstrahlung hardly reaches relevant values, rendering the shielding of secondary photons unnecessary. Problems with the shielding of ions start above some MeV, when nuclear reactions (Coulomb barrier) and negative Q-value reactions become possible. Starting with (p, n) reactions, this results in the production of neutrons. These reactions produce high beam-on neutron dose rates and nuclear inventory with its photon-dominated dose rates at the same time. The shielding of the ions themselves remains uncritical due to their relatively short range. Protons penetrate more than 1 mm of most materials only above 20 MeV and reach up to 69 mm in iron at 250 MeV. All heavier ions have even shorter ranges. This low range leads rather to high thermal loads, requiring special beam dumps/targets (Sect. 2.6) for handling the deposited power, than to radiation shielding requirements.

Neutrons

Shielding of neutrons represents the most complex task among all particle types. It requires up to several metres of shield for neutrons of some 10 MeV. As an extreme example, the European Spallation Source ESS requires a 5 m concrete shielding for neutron energies up to 2 GeV. Basically, neutrons combine all the above-mentioned shielding issues at any neutron kinetic energy. We start our considerations at the upper range of neutron energies above about 10 MeV and go down from there. The collisional stopping of neutrons is extremely inefficient at these energies compared to charged particles or lower-energy neutrons. Nuclear reactions are relatively important, but their cross-sections are typically lower than for thermal neutrons, see Fig. 4.7 in Sect. 4.2.1. In this range nuclear reactions even


produce additional neutrons via the dominant (n, xn) reactions, further increasing the neutron dose rate. Consequently, we cannot reduce their number, but only their energy via these reactions. Lead represents an efficient element for this task since it has good cross-sections and is still relatively cheap (shielding requires a lot of material). The multiplied MeV neutrons present after this shielding step bear even increased dose rates due to their increased number and higher quality factor (Table 2.5). The lower energy, on the other hand, allows for more efficient neutron stopping (a.k.a. moderation). Hydrogen bound in water, concrete, or polyethylene (PE) is the ideal material, as it is cheap and features a nuclear mass similar to the neutron's, resulting in efficient energy transfer (just like all billiard balls have the same mass). Only when the neutrons leave the MeV region towards slower velocities do capture reactions (n, γ) become efficient for absorbing the neutrons, reducing their quantity. Unfortunately, at these energies only reactions with zero or positive energy output remain possible (positive Q, Sect. 3.3.2). This energy can only be released in the produced γ's, leading to a shower of secondary photons with high energies. The absorption of these photons requires another shielding layer as discussed above. Boron, namely its isotope 10B, is one of the few exceptions from this rule: its high (n, α) cross-section for thermal neutrons with Q = 2.79 MeV emits α's instead of γ's. This special behaviour makes boron a common additive to neutron shielding materials, in particular concretes. In summary, the best neutron shield features several layers of different materials, mostly containing hydrogen and lead, but the complex multi-stage processes make an easy estimation difficult. Only sophisticated transport codes such as GEANT4, FLUKA, or MCNP allow for optimal shielding designs. Even better is avoiding high-energy neutrons from the beginning by choosing lower beam energies. The multitude of radiation types, their combined occurrence, and the problems with high-energy neutrons faced at accelerators above about 10 MeV scream for a unified shielding solution covering all radiation types. Special radiation protection concretes deliver this solution by being a homogeneous mixture of hydrogen, light, and heavy elements. Additionally, they are easily cast into shapes, provide structural functions, and have a good price tag. Their elemental composition combines neutron/particle stopping/moderation and shielding. On the downside, concrete shields require larger thickness and a higher overall weight, as they are not as optimized as specific shields.

Legal Framework

Any radiation legislation has to consider the presence of natural radiation in our environment and enable the application of nuclear technologies. With improving detection technology, many common materials such as tungsten were found over the years to be weakly radioactive with very long half-lives, but a sudden control of these materials just because of technological progress would be impractical and pointless. Furthermore, some landscapes feature higher natural radiation levels, for example due to the evaporation of radon from the soil of the Black Forest in Germany or the increased cosmic radiation in high-elevation mountainous landscapes. Last but not least, humans also carry around radioactivity, especially after nuclear medicine treatments, but you cannot restrict all of them to their private space. Due to personal rights, you cannot even


imprison patients until they have decayed off. A practical radiation protection law has to reconcile all of these aspects with everyday life while still providing safety and freedom in the required situations. For these reasons, and due to the strong impact of physics on the legal implementation, rather similar legal frameworks have developed all over the world. Radiation protection laws are definitely one of the few exceptions where physicists respectfully accept the lawyers' achievements and lawyers respectfully accept the physicists' accuracy of description. In Germany a very strict and quantitative set of rules exists in two forms. The general legislation comes by the names Strahlenschutzverordnung (StrSchV) and Strahlenschutzgesetz (StrSchG), with some details on transportation regulated in the European Agreement concerning the International Carriage of Dangerous Goods by Road (ADR). These laws cover all usage cases and required documents and licenses, except for the operation with nuclear fission fuels, for which a separate set, the Atomgesetz (AtG), exists. An interesting aspect, somehow symptomatic of the understanding of nuclear physics in Germany: the nuclear power legislation and also the nuclear power plants go by the term atom, although the technology relies on the nucleus, not the chemical entity atom. The following will discuss legislation using the example of Germany during the write-up phase of this book in 2017.

The free-handling limit forms the main solution developed to allow for a practical handling of irrelevant radioactive quantities and natural radioactivity. In the language of the physicist it is understood as a background level. In this case, legislation definitely had to adapt to the physical reality of measurements, which will not allow for the detection of arbitrarily small quantities, but only of finite quantities. Free-handling limits also cover the different risk potentials of different isotopes by isotope-specific limits. In the end (in German legislation) a large table for basically all existing isotopes develops, from which Table 2.8 shows a small extract. The first two rows of Table 2.8 contain the activities allowed to be included in materials before a handling of these materials according to the radiation protection legislation becomes mandatory. The numbers in each row vary significantly, accounting for the individual risk and specific dose rate of each isotope. Free handling means a license for handling is not required, but you still have to be aware of the radioactivity. Recycling and release limits are much lower, since the materials enter the regular world, where nobody is aware of the radiation.

Table 2.8 List of selected nuclides with their corresponding free-handling and waste recycling limits within German legislation (Strahlenschutzverordnung 2001)

| Nuclide | Free-handling (Bq) | Free-handling (Bq/g) | Recycling (Bq/g) |
|---|---|---|---|
| 3H | 10^9 | 10^6 | 10^3 |
| 14C | 10^7 | 10^4 | 80 |
| 18F | 10^6 | 10 | 10 |
| 40K | 10^6 | 10^2 | - |
| 55Fe | 10^6 | 10^4 | 10^4 |
| 99mTc | 10^7 | 10^2 | 10^2 |
| 137+Cs | 10^4 | 10 | 0.6 |
| 185W | 10^7 | 10^4 | 700 |
| 235+U | 10^4 | 10 | 0.8 |

The superscript "+" indicates that daughter nuclides are also included. The numbers roughly relate to the human health risks of the emitted radiation, but in fact they also represent a good compromise between practical, analytical, and economic considerations
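Both free-handling criteria have to hold simultaneously. A minimal Python sketch of this check, with the limits hard-coded from Table 2.8 and a purely hypothetical example device:

```python
# Free-handling check following Table 2.8: handling is free only if BOTH
# the total activity and the specific activity stay below the limits.
FREE_LIMITS = {              # nuclide: (total limit [Bq], specific limit [Bq/g])
    "H-3":  (1e9, 1e6),
    "C-14": (1e7, 1e4),
    "F-18": (1e6, 1e1),
}

def free_handling(nuclide: str, activity_bq: float, mass_g: float) -> bool:
    total_limit, specific_limit = FREE_LIMITS[nuclide]
    return activity_bq <= total_limit and activity_bq / mass_g <= specific_limit

# A tritium lamp with 5e8 Bq of H-3 in a 600 g device:
print(free_handling("H-3", 5e8, 600.0))   # True: below 1e9 Bq and ~8.3e5 Bq/g
```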


Let us consider the example of so-called tritium lamps, which contain tritium and a scintillator to produce visible light over many years, e.g. in clock faces. As long as the activity of the contained tritium stays below 10^9 Bq and 10^6 Bq/g (total device), the lamp can be freely handled in any radiation-protected site without notification or license of the authorities. Using more tritium, for example for powering a fusion reactor, requires a specific license given only with reasoning. In order to leave the site with the tritium lamp, its activity has to be proven to be below the much stricter recycling/clearance limits of Table 2.8. German legislation furthermore structures workplaces with radiation into classes of areas according to the possible doses: surveillance areas (Überwachungsbereich), in which persons can receive effective doses above 1 mSv per year, and controlled areas (Kontrollbereich), in which effective doses above 6 mSv per year are possible. Areas where local dose rates >3 mSv/h can occur are called closed/off-limits areas (Sperrbereich). These areas cannot serve as regular working space. Consequently, entrance must be avoided except for special emergencies. Off-limits areas require additional barriers and warnings to separate them from the rest of the controlled area. These areas can be temporary due to beam-on radiation, for example in an X-ray imaging patient room (think of a CT scan applying 25 mSv in 30 min as presented in Fig. 6.7). These three classes of areas should not be confused with the concept of a room. Shielding around an X-ray tube, or a concrete wall around an accelerator target, encapsulates the higher-level area such that it can potentially be situated even in a normal laboratory room with a tape marking its borders. The radiation protection officer (Strahlenschutzbeauftragter) is the captain of these areas. The legislation defines a set of ranks with increasing competence for the handling of radioactive material and radiation-emitting devices, up to a full competence for handling, operation, and installation of devices. Higher ranks require higher levels of education and working experience in respective sites. The license for operation and installation of accelerators represents the highest level in the German system, requiring the highest levels of technical or academic education in addition to two years of practical experience at such devices.

Fig. 2.61 Warning sign found on controlled area entrances with a text stating "controlled area, caution: radiation"


Independent of the rank, the officers always bear personal liability for all incidents under their supervision. Following the management law "No responsibility without competence", the officers also receive an unbreakable right of command (even over their boss) for all devices and persons in their site, and a protection against dismissal. This brings us to the concepts of licensing of devices and laboratories. Licensing of laboratories requires the staff and public doses to be within the limits for the requested type of area as mentioned above. For proving this to the authorities, the device properties, the shielding, the total amount of nuclear inventory, and the operational schedules have to match. Continuous monitoring ensures this during the later operation. Of course, the nuclear inventory changes with decay and possible production via charged particle irradiation, but licensing requires this to be considered in advance, or the device operation needs to stop if the limits are reached by increasing activation. An example: we want to set up a laboratory using X-ray tubes for material analysis. The schedule foresees 50 working weeks with 40 working hours on a 5-days-per-week basis. This results in 2000 h of potential exposure of the employees within the lab. Dividing the 20 mSv annual limit for radiation-exposed workers by these hours results in a maximum permissible local dose rate of 10 μSv/h. At this point the 4 A strategy of Sect. 2.7.2 starts by considering the required source strength of the X-ray devices. This value derives from the discussion of analysis methods in Sect. 7.1. Typically, a competitive analysis will require source strengths exceeding these limits. The next level of thinking considers a shielding around the devices. Practical reasons will limit the shielding thickness. Consequently, our lab layout will also consider keeping the employees at a distance from the sources. A credible distance concept requires electronic interlocks at doors or entrances, ensuring the staff's distance during beam-on as required by the legislation. Many common devices, such as electron microscopes or X-ray scanners, feature only low levels of radiation, not worth the costs of employing a radiation protection officer and maintaining dedicated closed rooms. These kinds of established technologies have to stay within limits of beam energy and power to be operated without these requirements, for example below 30 keV charged particle energy and within strict local dose rate limits.

Nuclear reactions can produce three (above E ≈ 10 MeV) or more products with increasing projectile energy. These reactions add the complication of interconnected spectra for all outgoing particle properties, in contrast to the kinematics of the two-body reaction, which features only a single solution at each product angle. In any case, the whole situation has to fulfil momentum and energy conservation, which allows calculating the respective product parameters with the knowledge of the four masses and, in the case of two-body reactions, any four other parameters of the situation (leaving 1 unknown + 1 equation = unambiguous solution), see Sect. 3.3.2. The mathematical flexibility implied by these equations forms an important aspect of our physical understanding and also of the technological exploitation of beam-matter interactions. As we will see in the course of this chapter, everything interacts with everything, even with the vacuum, but the mathematical formulation allows us to tailor and identify the reactions.
This additional information compensates for the lack of information provided by detectors (Sect. 2.5). The same equations apply for material analysis, isotope production, or patient treatment, just with different unknowns in the equation system. The interaction of beams and matter covers a wide range of specific physics. Not all of them are fully or even partially understood. In view of applications we


divide the level of knowledge into three categories: theoretical, semi-empirical, and empirical understanding. Full theoretical models requiring no external input except for fundamental constants, so-called ab initio models, represent the highest level of understanding. Think of a treasure quest. Theoretical understanding equals a situation where you have a full map containing all the information on what the treasure looks like, how much gold it contains, and the path to it resolved to the millimetre; this would allow walking to the treasure blind, with only your feet (or technology) limiting the amount of success. If you know there is a treasure somewhere, but you only have a plain path drawn on a handkerchief, without coordinates, scales, or the like, you have a semi-empirical understanding. It tells you which turns to take and the dangers lurking on your path, but you do not know where to start or how long the way will be. The same applies to beam-matter interaction. Some cross-sections, such as Rutherford's, were understood down to their fundamental physics, and a theory was found accurately describing them. Others have been investigated deeply and mathematical relations were found empirically, but certain constants, factors, or limiting cases cannot be covered by existing semi-empirical models. A few cases, such as radioactive decay, were broadly investigated experimentally, but due to the lack of understanding no type of extra- or interpolation of the data is possible. In this lowest level of understanding we only have an empirical qualitative estimate of the order of magnitude and the influence factors of the process, but we do not even know if this covers the full space of possible pathways of the process. Rutherford understood the nature of the target structure in his gold foil experiment through the match between the cross-section calculated from his hard-sphere model and the experimental results. Interestingly, most of the α-particles actually passed the gold foil in Rutherford's experiment undisturbed. We can quantify the interaction probability w using (3.2) and the gold atomic density ρ by multiplying with the Rutherford cross-section σ and the foil's thickness d.

w = σ(E) · ρ · d    (3.2)
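A minimal numerical sketch of (3.2): the gold atomic density follows from its bulk density, while the cross-section value is a placeholder assumption, since the Rutherford cross-section depends on energy and on the accepted angular range:

```python
# Interaction probability w = sigma(E) * rho * d, (3.2), for a gold foil.
# The cross-section value is a placeholder; real values follow from (3.1).
N_A = 6.022e23                    # atoms/mol
rho_au = 19.3 / 197.0 * N_A       # atomic density of gold [atoms/cm^3], ~5.9e22

sigma = 1.0e-24                   # assumed cross-section: 1 barn in cm^2
d = 1.0e-4                        # foil thickness: 1 um expressed in cm

w = sigma * rho_au * d
print(f"Interaction probability w = {w:.2e}")   # ~5.9e-6 for these numbers
```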

The interaction probability remains in the percent range, even if we infinitely increase the gold foil thickness beyond Rutherford's thin foil. From common sense, but also from a mathematical limit consideration, it becomes obvious that we have not completely understood the situation: an infinite target thickness should yield a 100% reaction probability for each projectile, otherwise the projectiles would fly through the target, a situation empirically non-existent. By increasing the foil thickness beyond 10 μm we would come to know that the α-particles already get stuck in the foil way before Rutherford's reaction probability even has a chance of reaching 100%. Therefore, at least a second mechanism stopping the α-particles has to exist besides Rutherford scattering. The question arises which interactions were missing in Rutherford's description, since a particle beam will not stop by itself, just like a spaceship will not stop by itself in the vacuum of the universe. So far we skipped considering 98.7% of the particles in the gold foil: the electrons attached to the gold nuclei forming the atom. Gold has 79 times more electrons than nuclei (= nuclear charge Z), which can undergo the same 2-body reactions with the


Fig. 3.2 An α-particle source (≈5 MeV α) placed at the bottom emits α's into a cloud chamber. The bright tracks indicate individual particles. The particle range in the cloud chamber's alcohol mixture differs slightly for each particle due to statistical effects of the stopping; even one track about 50% longer than average is present. Reprinted from physicsopenlab.org, CC-BY 4.0 license

projectiles as the nucleus, although with different kinematic parameters. Everything relies on the ratios of the interaction cross-sections and the results of the interaction. There are actually significant numbers of interactions with the electrons contained in the gold foil, but due to the strong mass difference between electrons and α-particles the energy transfer remains small in each interaction. In a reasonable approximation, the electrons act like an aether, continuously slowing down the α-particles or, in general, charged particles passing through them. Imagine it like walking through IKEA's Småland ball pool, with the force required to push the balls out of your way slowing you down. The ratio of ball to human mass even approximately resembles the electron to α-particle mass ratio. Rutherford's experiment was designed to minimize this slowing effect by staying in the limit of a thin target. Upon increasing the foil thickness we slowly leave the thin target regime and the energy loss of the α-particles becomes visible/measurable. At a gold foil thickness of about 10 μm all α-particles will stop inside the foil, and Rutherford's experiment would not yield any measurable quantity in the forward direction. In the backward direction the situation will also change, as the reactions from different depths add up. The thickness related to this so-called thick target limit strongly depends on the beam and target properties. The situation is nicely visualized in the cloud chamber in Fig. 3.2, with no α-particle reaching the top end of the chamber. Particle beams do not see distance when passing through matter. Of course, the beam-optical aspects of divergence and direction remain relevant, as demonstrated in Fig. 3.2, but here we focus on the beam-matter interaction since the distances are relatively short (e.g. the 10 μm foil). In order to understand the way particle beams see matter, let us consider the following three situations: a beam gets fired onto a solid metal; onto the same metal but as a metal foam with vacuum in the pores; and, last but not least, onto the same metal foam but with air inside the pores. The situations are


Fig. 3.3 Illustration of a beam (black line) in a solid (grey), a porous solid with vacuum inside the pores, and one with gas inside the pores. At the end of its range, the beam spreads out due to the statistical nature of scattering. In its interactions, a beam only sees matter, not distances. In vacuum, the beam travels undisturbed, like a space ship. In matter, the intensity of interaction depends on the matter density, hence gas volumes show lower interaction rates than solids

depicted in Fig. 3.3. In the first case, the beam travels a depth X into the material until its energy is dissipated. In the porous material the beam will lose energy when passing the metal, but in the pores' vacuum no energy is lost, hence the total range extends by the porosity factor. In the third case, the pores also contain matter (gas), but the relatively low density of the gas yields only a very small influence. The range in the pore exceeds the range in the metal, but the gas still contributes to the energy loss. Section 3.2 discusses the quantification of beam stopping and its mathematical treatment. Usually the density of the projectile beam (photons, electrons, ions) is insufficient for simultaneous reactions of several projectiles with a single target, and reactions between projectiles are negligible due to the low relative speeds (= low emittance). Target densities of normal matter are also too low for reactions with multiple targets at once. This allows treating each beam particle individually in the so-called binary collision approximation (BCA). Therefore, the interaction of a beam with a target equals the sum over all the independent individual reactions. The BCA constitutes an important basis for our understanding and quantitative computer modelling of beam-matter interaction. This may sound trivial, but the exchangeability between the individual particle and the ensemble (beam) picture will become an important tool for the understanding and mathematical treatment of beam-matter interactions. In order to understand and work with something you have to give it a name. In addition to these energy-transfer reactions, nuclear reactions become possible at higher projectile energies. The naming of nuclear reactions follows international conventions. The naming needs to include the target, projectile, and products. The reaction of a 12C target with a 3He projectile resulting in a proton and a 14N product reads, for example, 12C(3He,p)14N. For describing a class of reactions or shortening the naming, the same reaction can also be named (3He,p), corresponding to a naming scheme (projectile, light product). (p,n) describes a typical reaction with two products, a 2-body reaction. Correspondingly, (p,2n) and (p,n+4He) describe reactions with three products, (p,3n) with four products, and so on. If we want to describe a class of reactions leading, for example, to the same element, we can introduce a variable x in the form (p,xn) with x = 1 to infinity to discuss all reactions producing only neutrons.


3.1 Absorption and Reactions of Photons

We first take a step back from the massive (mass > 0) charged particles produced in accelerators in order to improve our understanding of the interaction of particle beams with matter. Mass-free (or, more correctly, rest-mass-free) particles like photons necessarily have to travel with the speed of light, hence they cannot be slowed down. Logically, for these particles an energy-loss mechanism by friction is not possible; energy can only be transferred via a reduction in quantity or intensity, respectively. This directly leads to a differential equation with an exponential decay solution for the photon beam intensity I depending on the distance d passed in matter, equivalent to a constant absorption probability for each individual photon per passed matter particle.

I/I0 = e^(−dμ)    (3.3)
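The equivalence between a constant per-photon absorption probability and the exponential law (3.3) can be illustrated with a small Monte Carlo sketch (the value of μ is an arbitrary assumption):

```python
import math
import random

mu = 2.0       # assumed attenuation coefficient [1/cm]
d = 0.5        # slab thickness [cm]
n = 100_000    # number of simulated photons

# A constant absorption probability per path length means the free path
# is exponentially distributed; count photons whose free path exceeds d:
survived = sum(1 for _ in range(n) if random.expovariate(mu) > d)

print(f"Monte Carlo : {survived / n:.4f}")
print(f"Eq. (3.3)   : {math.exp(-d * mu):.4f}")   # ~0.3679 for these numbers
```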

Due to the lack of the friction mechanism, photons typically achieve the longest attenuation length μ and hence range (distance d) for a given kinetic energy among the considered species (e−, ions, neutrons). The attenuation length increases with increasing photon energy. We already saw the technical effect of this fundamental physics in the detector Sect. 2.5 in Fig. 2.44, with the required detector thicknesses being largest for photon absorption. Also in radiation protection, Sect. 2.7.3, this and the exponential decay law lead to thick shielding requirements for photons. Having said photon beams only lose energy by a reduction of intensity is actually not entirely accurate. For lower energies, scattering dominates the interaction of photons and matter. Scattered photons are absorbed and instantly re-emitted in a different direction and hence cannot be considered as the same particle or part of the same beam population. A set of mechanisms exists for the interaction of photons with electrons. Scattering processes (approximately) conserving the photon energy dominate the photon-matter interaction for lower energies up to the binding energies of electrons to atoms (e.g. 13.6 eV for H). This class of elastic process retains coherence (= phase relation) with the original beam. A prominent example is Rayleigh scattering, which leads to the blue sky, since the process's cross-section is inversely proportional to the fourth power of the wavelength, scattering more blue than red or green light into the observer's direction. With increasing photon energies, or shorter wavelengths respectively, the elastic scattering energy transfer increases. With sufficient energy transfer (remember, our scattering partners are bound to atoms), the electrons gain enough energy to leave their binding state. Starting with this energy, the elastic scattering becomes incoherent, since the electron receives part of the energy. This so-called Compton scattering follows a probability distribution given by a cross-section called the Klein-Nishina formula, (3.4).

dσ/dΩ = 1/2 · (Z²e²/(4πε0 m c²))² · (E′/E)² · (E′/E + E/E′ − sin²θ)    (3.4)


This differential cross-section describes photon scattering from free resting point charges (electrons or ions), a situation only approximately true for electrons in atoms, with mass m, charge Ze, and the scattering angle θ between incoming (energy E) and outgoing photon (energy E′). This formula only approximately describes the situation, but allows understanding the basic trends with an analytical description. The scattering cross-section decreases with the photon energy, hence Compton scattering becomes less efficient for high-energy photons. The cross-section also decreases with increasing energy transfer and scattering angle. For photon energies small compared to the electron rest mass me c² only negligible energy transfer occurs; the low-energy limit of Compton scattering yields the coherent elastic scattering process discussed above. In all cases, (3.4) leads to a continuum of scattered photon energies, similar to the Bremsstrahlung of electrons penetrating matter. In parallel to the incoherent scattering, Einstein's photoelectric effect, the ionisation of atoms or the freeing of bound electrons, respectively, takes place. Equation (3.5) describes the cross-section of this inelastic process. It requires photon energies above the binding energy of the electrons to their atoms. Energy in excess of the binding energy ends up as kinetic energy of the released electron. All atoms above hydrogen (H) feature several electrons, each with different binding energies. The ionisation of the electrons from the innermost shell, called the K-shell, requires the highest energy, in the order of a few 10 keV for heavy elements. The higher a binding level lies in the atomic shell, the lower its binding energy, due to the core charge shielding by the inner electrons. Each binding state represents an independent instance of the photoelectric effect, leading to so-called absorption edges at the given binding energies.

dσ/dΩ = const · Z⁵ · E^(−3.5)    (3.5)
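For the electron case (Z = 1, m = me), (3.4) can be evaluated numerically. The sketch below additionally uses the standard Compton relation E′(θ) = E/(1 + (E/me c²)(1 − cos θ)), which is not written out in the text above but fixes E′ for a given scattering angle:

```python
import math

ME_C2 = 511.0        # electron rest energy [keV]
R_E = 2.818e-13      # classical electron radius [cm]

def compton_e_prime(e_kev: float, theta: float) -> float:
    """Scattered photon energy E' from the standard Compton relation."""
    return e_kev / (1.0 + (e_kev / ME_C2) * (1.0 - math.cos(theta)))

def klein_nishina(e_kev: float, theta: float) -> float:
    """Differential cross-section (3.4) for a free electron [cm^2/sr]."""
    r = compton_e_prime(e_kev, theta) / e_kev          # ratio E'/E
    return 0.5 * R_E**2 * r**2 * (r + 1.0 / r - math.sin(theta)**2)

theta = math.pi / 2   # 90 degree scattering of a 511 keV photon
print(f"E' = {compton_e_prime(511.0, theta):.1f} keV")              # 255.5 keV
print(f"dsigma/dOmega = {klein_nishina(511.0, theta):.2e} cm^2/sr")
```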

The free spot of the released electron will quickly be reoccupied by another electron. The involved binding energy remains the same, but now has to be released in the form of a photon. In particular for the inner binding shells, bound electrons from other, higher shells can also reoccupy the free position. These inner conversions emit photons with an energy given by the difference between initial and final binding state. A table of possible conversions arises, from which the innermost shells (K and L) are given in Fig. 3.4. The absorption of photons with an energy equal to the binding energy or higher can only lead to a complete release of the electron. The process can be triggered not only by photons, but also by charged particles, as we will see later. At the highest photon energies, new inelastic scattering processes add to the interaction list. Up to here, all photon interactions involved scattering with more or less free electrons. The quantum nature of the bindings implies certain specific energy limits. With photon energies above 1022 keV a new interaction process with the nucleus becomes possible. This interaction converts the photon energy to matter/mass. All physical processes have to conserve energy, momentum, and quantum numbers. Consequently, only matter-antimatter pairs of massive particles can be produced. With a rest mass of 511 keV/c², 1022 keV and more

Fig. 3.4 Chart of the first four X-ray absorption/emission energies along the periodic table of elements. Generally, emission and absorption of photons are connected in physics, hence all of these energies relate to absorption edges for photons passing through the elements and also to emission lines when exciting the atoms via particle beams


allows for the production of electron-positron pairs. The process requires a nucleus to balance the momentum. Balancing the momentum with an electron requires slightly higher energy due to the decreasing momentum per energy for lighter particles (= the same amount of momentum transfer requires more energy transfer), favouring a nucleus as partner. Positrons, being the anti-matter equivalent of the electron, cannot survive in normal matter, quickly leading to the emission of two 511 keV photons from the annihilation of the positron with a random electron (the reverse process). At even higher energies, a disintegration of nuclei, so-called photo-nuclear reactions, becomes possible, releasing neutrons and other heavy particles. In the accelerator application context, photon energies are typically between 1 keV and 10 MeV. Plotting a graph of all the discussed processes demonstrates the diversity of effects of photons in matter, see Fig. 3.5. The underlying data are well documented for the whole periodic table and are available online, e.g. from the XCOM database (Berger et al. 2010). The sum of all processes leads to a mostly exponential decrease of attenuation with photon energy up to 1 MeV, from where on it stays constant. We have seen that all processes tend to break down photon energy into smaller and smaller chunks. Some of these chunks have discrete energies due to quantum effects and some have continuous distributions. For example, the high-energy processes convert a 2000 keV photon to an electron-positron pair with some kinetic energy. The particles annihilate into two 511 keV photons, which could then most probably Compton scatter to photons

Fig. 3.5 Photon energy dependence of the different reaction channels of photons with iron (attenuation [cm²/g] vs. photon energy [MeV]; curves for coherent scattering, incoherent scattering, photoelectric absorption, pair production in the nuclear and electron fields, and total attenuation). At 7.06 keV the K edge of iron leads to a sudden increase in attenuation, compare Fig. 3.4. The attenuation multiplied with the material density yields the attenuation coefficient μ of (3.3). Data from NIST XCOM database (Berger et al. 2010)

and electrons of even lower energy, which then induce photoelectric electrons and photons, and so on. This kind of salami tactics consumes higher-energy photons in a chain of events involving many secondary particles, finally ending up at particles with negligible energy. Accelerators cannot directly influence or produce photons, but photons arise from several processes of accelerated charged particles as secondary particles. We already came to know Bremsstrahlung and synchrotron radiation, which describe spectra of photons produced by the deceleration of charged particles, e.g. when passing through matter. The decay of radioactive inventory and numerous nuclear reactions represent the second relevant source of high-energy photons. These photons originate from the atomic nucleus and were given the name gamma-ray (γ), in contrast to X-rays originating from the atomic electron shell. Therefore, in the accelerator context photons potentially represent a problem, since they contribute to the radiation dose rates but originate from fundamental processes. On the other hand, many applications rely on the production of photons. For example, the interaction of charged particles with bound electrons produces known photon energies, Fig. 3.4, allowing for elemental identification.

3.2 Range and Stopping of Charged Particles

Beams of massive charged particles (electrons, ions) follow different physics than photons, as their number is conserved by fundamental laws (like the conservation of velocity for photons), but their velocity is variable. For deep insight into the physics and mathematics of particle beam stopping and interaction with matter, the reader is referred in particular to the book (Sigmund 2006) and also to the accompanying book of the famous ion stopping software SRIM (Stopping and Range of Ions in Matter), which includes many examples and numbers of ion stopping (Ziegler et al. 2008). Stopping of charged particles predominantly arises from the interaction of the beam with the electrons in matter (IKEA's Småland ball pool friction effect), similar to photons. The terms stopping, stopping power, specific energy loss, dE/dx, and friction are used more or less as synonyms describing an energy loss per length (e.g. MeV/mm), normalised to mass density (e.g. MeV cm²/g), or per passed atoms [e.g. keV/(10^15 atoms/cm²)]. In fact, since the collisions with electrons induce the stopping, distances cannot play a role for the energy loss (as discussed in Fig. 3.3), making the energy lost per passed atoms/area the most fundamental quantity. The Bethe-Bloch formula (3.6) for the stopping power S, the energy E lost per travelled length x, describes this for ions in matter. Stopping depends on the ion velocity, not its energy, but energy is usually the interesting quantity for other aspects of beam-matter interaction.

S_B(E) ≡ −dE/dx = (n_e z² e⁴)/(4π ε0² m_e v²) · [ln(2 m_e v²/(I (1 − v²/c²))) − v²/c²]    (3.6)


With the electron density n_e of the material, the projectile's elemental charge z (e.g. z = 1 for electrons and hydrogen ions), the projectile velocity v, a mean excitation energy of the target material I (≈10 eV), and the fundamental constants m_e, ε0, c, and e. At energies below a few 100 keV/amu (atomic mass unit), ions additionally lose relevant amounts of energy due to collisions with nuclei (the billiard table effect). This nuclear aspect of stopping transfers energy to the target nuclei, leading to cascades where individual particles hit nuclei and transfer enough energy for the hit nucleus to hit further nuclei, see Sect. 7.4. Electrons also lose energy due to collisions with target electrons, but with their correspondingly higher velocity at a given energy the relevance of several effects changes. Relativistic effects and Bremsstrahlung reach a relevant level at significantly lower energies compared to ions. In particular, ions have to be beyond our considered energy range of 250 MeV for this, while electrons require only about 1 MeV. The Berger-Seltzer formula describes the stopping power of electrons, consisting of the collisional and the Bremsstrahlung part. Bremsstrahlung dominates the energy loss of electrons for example above 10 MeV for Pb or 400 MeV for H targets:

S_E(x) = (e⁴ n_e)/(8π ε0² m_e c²) · 1/(1 − 1/γ²) · [ln(2(γ + 1)/(I/m_e c²)²) + F]    (3.7)

Equation (3.7) follows a different trend with similar input parameters and the Lorentz factor γ as a measure of the electron energy. The function F adds a term which depends on the target material properties and the electron velocity; it is typically small in the energy range considered here. Figure 3.6 compares the total stopping powers of electrons and ions. The behaviour for different ions is similar, with a clear maximum in stopping power around a few 100 keV/amu and a minimum in the GeV range. Electrons, in contrast, have a minimum at around 1 MeV with higher values for small and large energies.


Fig. 3.6 Stopping power [MeV cm²/g] of charged particles (electrons, protons, alphas) in carbon as a function of energy [MeV]. The stopping per length is calculated by multiplying the values with the material density. Up to about 100 MeV the stopping power of electrons is about 100 times smaller compared to ions. Furthermore, electrons show a different behaviour, while different ions have similar but shifted stopping functions. Data from National Institute of Standards and Technology (2019)
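A minimal numerical sketch of (3.6) for protons in carbon; the electron density and mean excitation energy are assumed literature-style values, and none of the low-energy or higher-order corrections discussed below are included:

```python
import math

# Constants (SI units)
E_CHG = 1.602e-19     # elementary charge [C]
EPS0 = 8.854e-12      # vacuum permittivity [F/m]
M_E = 9.109e-31       # electron mass [kg]
C = 2.998e8           # speed of light [m/s]
MEV = 1.602e-13       # J per MeV

def bethe_bloch(t_mev: float, z: int = 1, m_mev: float = 938.27,
                n_e: float = 6.0e29, i_ev: float = 78.0) -> float:
    """Electronic stopping power (3.6) in MeV/cm.

    Defaults sketch a proton in carbon: n_e ~ 6e29 electrons/m^3
    (rho ~ 2.0 g/cm^3) and I ~ 78 eV are assumed values."""
    gamma = 1.0 + t_mev / m_mev           # Lorentz factor from kinetic energy
    beta2 = 1.0 - 1.0 / gamma**2
    v2 = beta2 * C**2                     # squared projectile velocity
    prefac = n_e * z**2 * E_CHG**4 / (4 * math.pi * EPS0**2 * M_E * v2)
    bracket = math.log(2 * M_E * v2 / (i_ev * E_CHG * (1 - beta2))) - beta2
    return prefac * bracket / MEV / 100.0  # J/m -> MeV/cm

print(f"10 MeV protons in carbon: {bethe_bloch(10.0):.0f} MeV/cm")  # ~80 MeV/cm
```

The result of about 80 MeV/cm is consistent with the order of magnitude readable from Fig. 3.6 after multiplying with the carbon density.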


In all cases, we see the graphs do not strictly follow (3.6) and (3.7), in particular in the low-energy half. The equations describe an important part of the physics, but additional higher-order effects add to the collisional stopping in the high- and low-energy limits. Unfortunately, the physics becomes very complicated in particular at lower energies. The electrons in the solid also move, with the so-called Fermi velocity, changing their stopping effect if the projectiles have comparable velocity. Electrons, on the other hand, suffer additional energy losses by Bremsstrahlung emitted due to their strong deceleration in matter above a few MeV. We will not discuss these higher-order corrections in detail, but refer the reader to the literature mentioned earlier. To summarise the findings: in particular at low energies, stopping cannot be completely described analytically, but rather semi-empirically with fits to experimental data. This implies a limited accuracy in these energy ranges, compared to a full theoretical understanding. Uncertainties of the best-known stopping powers range up to 6% (Ziegler et al. 2008), with the accuracy of full stopping models improving towards 1-2% at a few MeV/amu (the sweet spot). For calculating the stopping S_Mix of elemental mixtures such as stainless steel or human skin, Bragg's rule applies; a small numerical sketch follows below. Within this rule, the total stopping power is given by the sum over the individual stopping powers S_i of each pure element, weighted by their atomic fractions ρ_i, (3.8). Deviations from Bragg's rule occur in materials with strong chemical interactions between their constituents, in particular with light elements.

Electron density of silicon in a lattice plane: the silicon atoms are clearly visible, as is the bond between the nearest neighbours. The contour interval is 0.05 e, with contours going from 0.05 to 1.5. The electron density is high near the nuclei and low in between. Reprinted with permission from Elsevier from Sillanpää (2000)
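The sketch of Bragg's rule (3.8) mentioned above; the per-atom stopping values are placeholders for illustration, not tabulated data:

```python
# Bragg's rule (3.8): the mixture stopping is the atomic-fraction-weighted
# sum of the elemental stopping powers. The per-atom stopping values below
# are placeholder numbers, not tabulated data.
def bragg_rule(atomic_fractions: dict, s_elemental: dict) -> float:
    """S_Mix = sum_i rho_i * S_i over all elements i of the mixture."""
    return sum(f * s_elemental[el] for el, f in atomic_fractions.items())

s_elemental = {"H": 3.0, "O": 12.0}   # assumed, in eV/(1e15 atoms/cm^2)
water = {"H": 2 / 3, "O": 1 / 3}      # H2O by atomic fractions

print(f"S_water = {bragg_rule(water, s_elemental):.1f} eV/(1e15 atoms/cm^2)")
```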

3.3 Nuclear Reactions

– Stupidity identifies itself by repeating the same action and expecting different results in each instance. This sentence, as true as it is for real life, completely fails for the sub-atomic level of physics. The basic principle of quantum mechanics is uncertainty. Therefore, it is indeed logical to expect different outcomes for every single beam particle impacting onto the same target. This fact should not be over-interpreted, though. Summing up the outcomes over numerous interactions will lead to statistically solid probabilities, but they remain probabilities, so everything stays possible. Nuclear reaction cross-sections, or just cross-sections, describe the probability for a certain interaction pathway to happen. Graphically, we can imagine them as a target disc which we try to hit with a projectile. In accelerator applications, usually the projectile, as a part of the charged particle beam, moves towards numerous target atoms which are at rest. Every projectile sees the target discs of the many atoms it approaches, but still most of the space is empty, making it improbable to hit a target. As a rule of thumb, usually only up to a few percent of the projectiles undergo nuclear reactions. As with stopping, nuclear reactions probe matter independently of its arrangement in the form of density or porosity; only the amount of passed atoms/area counts. In fact, generally many different nuclear reactions can take place between a given projectile and target, each with its individual cross-section. In order to describe the cross-section of a specific interaction, the unit barn (= 10^−28 m²) was defined. We can think of it as the area of the imaginary target disc of each target particle. Its dimension is extremely small, but of course our projectile


will face many targets on its way, since the atomic density of matter is large. The probability P of interaction is given by the cross-section σ times the number of target particles N_T per target area A:

P = σ · N_T/A    (3.10)

Equation (3.10) applies independently to every constituent (element and isotope) of the target with its individual cross-section. Different types of reactions can occur with the same type of target. We already came to know elastic nucleus scattering as a relevant contribution to stopping at low energies and elastic scattering with the electrons as the primary contribution to stopping at higher energies. In contrast to elastic scattering, which conserves the total kinetic energy E, nuclear reactions are inelastic. Inelastic reactions enable a transfer between mass and energy, generally not conserving the kinetic energy (but of course the total energy). Due to the equivalence of mass and energy [Einstein's famous (2.6)], the physical law of conservation of energy remains intact. The Q-value expresses this energy redistribution in the quantity of energy transferred in the nuclear reaction (usually in keV). Consequently, elastic scattering reactions feature Q = 0, while inelastic reactions are identified by Q > 0 (mass consumption) or Q < 0 (mass generation), defined by (3.11) for a 2-body reaction as depicted in Fig. 3.1.

Q = ΔE = (m1 + m2 − m3 − m4) c²    (3.11)

Physically, the Q-value results from a difference in the sum of the rest masses, related to different nuclear binding strengths, between the particles before and after the nuclear reaction. This so-called mass defect calculates from the difference of the actual nuclear mass from the sum of the masses of an equivalent amount of isolated protons and neutrons. The highest Q-values are found in reactions of light elements, with 6Li(2H,4He)4He marking the summit at Q = 22.38 MeV. Very thorough studies with sub-keV-precise nuclear mass data exist, summarised in the ongoing Atomic Mass Evaluation project (Huang et al. 2017) and made publicly available on numerous websites (e.g. http://oecd-nea.org/dbdata/data/structure.htm or http://nrv.jinr.ru/nrv/webnrv/qcalc). A few examples are compiled in Table 3.1. The particles produced in a nuclear reaction can enter excited nuclear states, similar to excited atomic states, further reducing Q below the ground-state mass difference. The nucleus features a shell structure of protons and neutrons, similar to the atomic shell and its excitations, as shown for three configurations of nucleon number A = 14 (14C, 14N, 14O) in Fig. 3.10 (left). With enough energy provided by Q + E, nuclear reactions can produce 14N in its excited states, as the measurement shown on the right demonstrates. Here three different proton energies corresponding to three different Q-values of the 12C(3He,p)14N reaction are detected. Most excited states quickly decay by emission of a photon with the corresponding energy, but even if the de-excitation is energetically possible, it can be hindered by the conservation of angular momentum. Every nuclear level features not only an energy, but also an angular momentum and parity.
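Using (3.11) with tabulated atomic mass excesses, a Q-value reduces to simple bookkeeping. A sketch for the record reaction quoted above; the mass excess values are rounded AME-style numbers quoted here as assumptions, to be checked against the cited databases for real work:

```python
# Q-value (3.11) from atomic mass excesses (rounded values in keV).
MASS_EXCESS_KEV = {"2H": 13135.7, "4He": 2424.9, "6Li": 14086.9}

def q_value(projectile: str, target: str, products: list) -> float:
    """Q = (sum of ingoing mass excesses) - (sum of outgoing ones), in keV."""
    m_in = MASS_EXCESS_KEV[projectile] + MASS_EXCESS_KEV[target]
    m_out = sum(MASS_EXCESS_KEV[p] for p in products)
    return m_in - m_out

# 6Li(2H,4He)4He, the record holder mentioned above:
print(f"Q = {q_value('2H', '6Li', ['4He', '4He']) / 1000:.2f} MeV")  # ~22.37
```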


Table 3.1 Examples of mass defects (= binding energy), the first three excited states, and Coulomb barriers with protons. Protons naturally do not have excited states due to their single-nucleon nature

| Nucleus | Mass defect (keV) | Mass defect (keV/nucleon) | Nuclear states, keV (spin, parity) | Proton proximity barrier (keV) |
|---|---|---|---|---|
| p | 0 | 0 | 0 (1/2+) | 181 |
| 4He | 28,296 | 7074 | 0 (0+); 20,210 (0+); 21,010 (0−) | 364 |
| 12C | 92,162 | 7680 | 0 (0+); 4438.9 (2+); 7654.2 (0+) | 1027 |
| 18O | 139,808 | 7767 | 0 (0+); 1982.07 (2+); 3554.84 (4+) | 1338 |
| 18F | 137,370 | 7632 | 0 (1+); 937.2 (3+); 1041.5 (0+) | 1523 |
| 180Ta | 1,444,662 | 8026 | 0 (1+); 39.54 (2+); 77.2 (9−) | 9555 |
| 181Ta | 1,452,239 | 8023 | 0 (7/2+); 6.237 (9/2−); 136.26 (9/2+) | 9545 |

State transitions require spin differences of 1; otherwise the transition is forbidden and long-lived, such as the third state of 180Ta, also known as 180mTa. Proximity barriers from (Blocki et al. 1977). Nuclear levels from NuDat 2.7 [https://www.nndc.bnl.gov/nudat2] and Nubase2016 (Audi et al. 2017)


Fig. 3.10 Left: Nuclear levels of the nuclei with 14 protons + neutrons (A = 14). 14N represents the only stable nucleus. All nuclei feature numerous excited states. From every state, different pathways towards the lowest energy state exist via emission of particles and photons. In the ground state 14O decays via β+, while the first few excited states prefer emitting a proton, forming 13N. From (TUNL Nuclear Data Evaluation Project). Right: 12C(3He,p)14N reaction at E = 2.5 MeV and 160° producing 14N in the ground and the first two excited states (intensity vs. proton energy, peaks p0, p1, p2). The emitted proton energy reduces corresponding to the excitation energy, leading to three distinct proton peaks


Physicists call transitions which have a difference of more than 1 unit of spin between initial and final level forbidden, since 1 corresponds to the spin carried (away) by a photon. Nuclear physics apparently makes it increasingly improbable to carry away more spin. The longest-lived example of a forbidden excited-state transition is 180mTa, with a difference of 9 − 1 = 8 spin units between excited and ground state (Table 3.1), resulting in a half-life of 1.8 × 10^15 years, but other isotopes such as 99mTc also feature practically relevant half-lives due to this effect. Having fixed-target situations in accelerator applications also requires a heavy moving particle after the reaction in order to conserve momentum. This prohibits a simple consumption of the projectile with only one product particle, but requires at least two products (2 unknowns need 2 equations). Nuclear physics further limits the possibilities. Nuclear reactions have to follow additional nuclear conservation rules, most importantly the conservation of particle number and electric charge. The amount of protons, neutrons, and leptons (namely electrons + neutrinos, see Sect. 4.4) will not change during the reaction; only a transfer between projectile and target is possible. The situation changes for β decays, since these involve the weak force. In the β decay the conservation of lepton number becomes important. The conversion of a proton to a neutron, or vice versa, changes the nuclear charge, which has to be compensated by emitting a positron or an electron, respectively. This violates the conservation of the lepton count; consequently, a neutrino or anti-neutrino, respectively, has to be emitted additionally. Physicists invented the so-called Feynman diagram to cover all possible reaction and decay routes, but this theoretical construct goes too far for applications. The actual probability of and branching between different possible reactions is described by the individual reaction cross-sections or decay probabilities. Not every reaction allowed by the conservation laws will also occur. Reactions with Q < 0 have a threshold, since the missing energy has to be provided by the kinetic energy of the projectile (conservation of energy). A chemist would call these endothermic, and reactions with Q > 0 exothermic, but of course thermal energies have no meaning for the MeV energies involved in nuclear reactions. Since this book discusses charged particle beams, and target nuclei also consist of similarly charged particles, the electro-magnetic Lorentz force produces a barrier potential against reaching the nuclear proximity required for nuclear reactions. We can understand the nucleus as an armoured tank, with its electrical charge building some kind of Coulomb armour. With projectiles of low kinetic energy, fired for example from a handgun or a rifle, we cannot penetrate its armour, but the projectile will bounce off. The more punch we have, the higher the probability of penetrating the armour instead of bouncing off. The same applies to nuclear interactions: the higher the projectile energy, the lower the cross-section for elastic scattering and the higher the nuclear reaction cross-section. Table 3.1 lists a few examples of barrier potentials towards proton projectiles derived from analytical calculations. These barriers scale roughly with the number of protons involved in the reaction. The barrier energies only indicate at which projectile energies nuclear reactions become possible, but any barrier can be tunnelled in quantum systems.
For this reason nuclear reaction cross-sections usually start with an exponential increase from low energies towards higher energies as the next section will elaborate.
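This exponential rise can be sketched with the textbook Gamow tunnelling factor exp(−2πη), with η the Sommerfeld parameter, an approximation not derived in this book; the constant 31.29 assumes E in keV (centre-of-mass) and the reduced mass in amu:

```python
import math

def gamow_factor(z1: int, z2: int, mu_amu: float, e_kev: float) -> float:
    """Tunnelling probability factor exp(-2*pi*eta).

    Uses the common parameterisation 2*pi*eta = 31.29 * Z1 * Z2 * sqrt(mu/E)
    with E the centre-of-mass energy in keV and mu the reduced mass in amu."""
    return math.exp(-31.29 * z1 * z2 * math.sqrt(mu_amu / e_kev))

# p + 18O: Z1 * Z2 = 8, reduced mass ~0.95 amu
for e_kev in (200.0, 500.0, 1000.0, 2000.0):
    print(f"E = {e_kev:6.0f} keV -> {gamow_factor(1, 8, 0.95, e_kev):.1e}")
```

Between 200 keV and 2 MeV the factor grows by roughly five orders of magnitude, illustrating the steep sub-barrier rise of reaction cross-sections.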


3.3.1 Cross-Sections


In applications, nuclear reactions primarily involve ion beams. Nuclear reactions with electron projectiles require energies similar to those of ions in the 10 MeV region, implying certain application drawbacks connected with the high electron velocity, as discussed in Chap. 2. Furthermore, electron-nucleus reactions are somewhat limited by the fact that, in contrast to the constituents of ions, electrons are not present in the nucleus, limiting the possible reactions and products. For these reasons nuclear reactions with electrons have little application relevance and will not be discussed in this edition. Anyway, many physical aspects are independent of the projectile species. The magnitude of the cross-sections strongly depends on the projectile energy and the projectile-target combination. In some cases the nuclear polarisation state also significantly influences the reaction cross-section (Ciullo et al. 2016). Nuclear reactions require higher energies than the elastic reactions underlying stopping, due to the proximity of projectile and target required by the short-ranged nuclear forces responsible for nuclear reactions (Coulomb barrier effect). The evolution of the cross-section σ with the projectile energy E is described by the total cross-section σ(E) (sometimes also just called cross-section). The example in Fig. 3.11 demonstrates the variation of the total cross-section of the 18O(p,n)18F reaction over five orders of magnitude between 2 and 200 MeV projectile energy. The cross-section first increases, reaches a maximum of about 300 mbarn at ≈6 MeV, and then decreases again by five orders of magnitude towards 200 MeV. Empirically, many cross-sections follow a qualitatively similar behaviour, with varying peak cross-section, peak width, and projectile energy at the maximum.


Fig. 3.11 An extract from the JANIS OECD Nuclear Energy Agency (NEA) (2017) database comparing experimental and theoretical cross-sections [mbarn] for the 18O(p,n)18F reaction versus proton energy [MeV]. Symbols mark experimental data (J. K. Bair 1973, J. P. Blaser 1951, E. Hess 2001, S. Takacs 2003, J. M. Blair 1960, S. W. Kitwanga 1990), while the line shows calculated cross-sections from TENDL-2015 (Koning et al. 2015)


Figure 3.11 compares the semi-empirical cross-section TENDL-2015 (Koning et al. 2015) with various experimental data. All datasets roughly agree within a factor of three with each other, but TENDL-2015 is unable to reproduce some of the features, for example the resonances in the 2-4 MeV region, which are, on the other hand, also ambiguous in the experimental data. The cross-section shows a typical threshold reaction, in this case Q = −2438 keV, as displayed in the mass defect difference between 18O and 18F in Table 3.1. From the threshold on, the cross-section increases exponentially due to tunnelling of the proximity potential. For Q ≥ 0 reactions, the threshold with its exponential increase adapts to the order of the barrier potential. For 18O(p,n)18F the Coulomb barrier of 1338 keV is lower than the energy threshold (see Table 3.1) and therefore not relevant. The existence of resonances arises from the quantum mechanical particle-wave duality. Each particle has its individual wave, and as soon as projectile and target waves come into contact, resonant overlaps, similar to the interference of light waves, can occur. The resonances change the cross-section at specific energies by orders of magnitude, either increasing or decreasing it. Towards higher energies, the cross-sections generally decrease due to the reducing particle wave-function overlap. A moving projectile defines a unique axis with its direction of movement. This vector represents a symmetry axis for the reaction, leading to a non-isotropic emission of the reaction products. In other words, the cross-section changes with the angle towards the direction of movement, leading to differential cross-sections dσ/dΩ depending on the exit angle of the products, also called the reaction angle, and the projectile energy. Figure 3.12 shows an example. The quantity Ω describes the solid angle into which the given cross-section can be measured at the given energy E and reaction angle. Equation (3.12) allows calculating the solid angle from a given area A on a sphere of radius r. We can see it as a detector of area A at a distance r from the point of reaction.


Fig. 3.12 Differential cross-section of the 18O(p,4He)15N reaction for detecting the 4He at the stated laboratory reaction angles (1°, 10°, 45°, 90°, 120°, 150°, 179°). The resonance at 629 keV is weakest at 90° and increases by up to a factor of 4 towards higher and lower angles. At 828 keV the cross-section decreases with increasing angle by about 20%. Data from SigmaCalc (Gurbich 2016) R-matrix fits to experimental values

Ω = A/r²   (3.12)
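As a quick worked example with assumed but typical numbers: a particle detector with an active area of A = 50 mm² placed at r = 10 cm from the beam spot covers Ω = 50 mm²/(100 mm)² = 5 × 10⁻³ sr, i.e. only about 0.04% of the full sphere of 4π sr — one reason why differential cross-section measurements require long counting times.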

Differential cross-sections have a considerably lower level of available data in the literature, since determining them requires measuring the moving products in-situ and not only the amount of present isotopes at some point in time after the process (ex situ). Total cross-sections rather find use in isotope production and activation applications, while differential cross-sections find use in analytical methods due to their connection to detection and reaction kinematics. As in Fig. 3.11, the differential cross-section in Fig. 3.12 features an exponential growth from the low energy side and several resonances. In the differential form, a clear dependence of the resonance height on the reaction angle becomes visible. In σ(E) only the angular average would be visible, but each resonance varies differently with reaction angle. The determination of precise reaction cross-sections represents an integral part of many technological advances in accelerator applications, but also a challenging one. Physicists try to reduce the effort by determining only the data specifically required, leaving many holes in the data landscape. In many application cases discussed later in this book a certain element with several stable isotopes is used to produce a specific isotope via nuclear reactions. In this case, potentially several reaction types, usually sets of multi-product reactions such as (p,xn), can lead from different isotopes to the same nuclide. The individual isotopes' reaction cross-sections are then summed up into a so-called production cross-section. Production cross-sections are a simplification with several drawbacks, but with the advantage of easy experimental determination using the natural isotopic composition. As an example, 182Re could be produced by protons from natural tungsten via the 182W(p,n)182Re, 183W(p,2n)182Re, 184W(p,3n)182Re, or 186W(p,5n)182Re reactions. If we measure only the final 182Re activity we will not be able to distinguish between the isotope specific reactions. In contrast, the individual reaction cross-sections could be determined only with an isotopically purified target of a single (tungsten) isotope. Unfortunately, theoretical physics has not proceeded to a point where mathematical descriptions for all nuclear scattering reactions exist. We already saw the Rutherford cross-section in (3.1) with its accurate theoretical description of elastic scattering up to a few MeV as an example of such a theoretical description, but the underlying reaction does not involve a nuclear interaction in the sense of anything beyond the electrical charge visible to the outside (the billiard ball model). Existing approaches based on quantum calculations on the quark level (quantum chromodynamics) offer the potential for delivering theoretically derived cross-sections, but the computational effort strongly scales with the number of involved quarks and brings current supercomputers to their limits, even for hydrogen isotope reactions. A full solution of the problem probably requires the next level of computer technologies and a more complete physical understanding of the strong nuclear force, or of the four fundamental forces in general. Nuclear physicists therefore developed a set of models describing nuclear reaction cross-sections to inter- and extrapolate from given experimental data. These semi-empirical equations combine an adequate theoretical model description of the overall process and
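Written out, the production cross-section of a natural target is simply the abundance-weighted sum of the contributing isotope-specific cross-sections: σprod(E) = Σi ai σi(E), with ai the natural abundance of isotope i. For the natural tungsten example above, this sums the (p,n), (p,2n), (p,3n), and (p,5n) contributions weighted with the abundances of 182W, 183W, 184W, and 186W.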


require fitting of (theoretically) unknown parameters to the experimental data. This procedure yields a formula for calculating cross-sections over a certain energy and angular range with a statistical uncertainty given by the data. The problem is that the range of validity is not known, and systematic errors in the experimental data or model could be unwittingly absorbed in the result. Hence semi-empirical models are a useful tool for data analysis and computations, but the results must be handled with care, especially in the extrapolation region where the risk of systematic errors of the model increases.

σRes(E) ∼ 1/((E − ERes)² + Γ²/4)   (3.13)
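The width Γ is the full width at half maximum: at E = ERes ± Γ/2 the Lorentzian of (3.13) drops to half of its peak value, which is why resonances are commonly tabulated by just ERes and Γ.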


In general, reaction cross-sections combine resonant and non-resonant regions. We already learned that the onset of nuclear reactions follows an exponential increase due to the tunnelling of the Coulomb barrier. The Breit-Wigner (or Lorentz) resonance function (3.13) describes the probability distribution of resonant interactions. This cross-section depends on the central energy of the resonance ERes and a resonance width Γ. Interferences with the non-resonant part of the reaction can lead to the typical down-up or up-down resonance where both cross-sections subtract and add up (or vice versa) before and after the resonance energy. Figure 3.13 shows three examples of this process, with different Γ in each case. The next level beyond this simple analytical fitting requires more complex models implemented in several codes. Implementing the above idea of resonances and the interaction of particle waves in the sense of the Schrödinger equation leads to the so-called R-matrix algorithms such as the code SigmaCalc (Gurbich 2016). These fitting algorithms combine the features of several states (3.13) known from experimental data into a matrix which then allows solving the Schrödinger equation, reproducing cross-sections in the resonant and non-resonant region over an energy and angle

Fig. 3.13 Differential cross-section of 12C(p,p0)12C at 165°. Already at 360 keV this elastic reaction deviates from the Rutherford behaviour, in spite of the 1027 keV barrier. The data show several types of different resonances in the given energy range. Data from an R-matrix fit of SigmaCalc (Gurbich 2016) to a set of experimental data


range given by the experimental data. Nuclear model codes such as Talys (the code behind the frequently updated TENDL cross-section database) or Empire (Herman et al. 2007) follow a different approach by combining a multitude of limited physical models of specific nuclear interactions. These codes produce reasonable total cross-sections, as we saw in Fig. 3.11, although resonant features might be missing. For differential cross-sections the agreement significantly worsens. Nevertheless, these codes often provide the only available data for new developments, and over the years progress on the underlying nuclear data continuously improves the quality of their results. Knowing what cross-sections are, the practitioner will ask how to determine them experimentally. Cross-sections represent a probability at a given energy. Consequently, we require a defined, known energy for the reaction in the target, an identification of the reaction, and a detector for counting the amount of reaction products in relation to the fluence of projectiles. For detecting the occurrence of a given nuclear reaction we have to detect the products of this reaction. These could be the fast light products (mostly p, α, and n) or the heavy product, which usually remains in the target due to its limited range. Stopping reduces the initial beam energy, leading to a mixing of different energies upon passing a target. This requires either a local measurement, an energy resolved measurement, or a restriction of stopping by using thin targets and large detector solid angles for compensation. Differential cross-sections are mostly measured via thin targets. The exact thickness depends on the element and beam energy, but typically lies in the order of 1 μm. The target thickness induces a trade-off between the better defined energy of thinner samples (lower stopping) and the higher counting statistics of thicker samples (3.2). The detection of the heavy products follows the beam irradiation via ex situ spectroscopy of the decay radiation together with isotope identification via characteristic spectral libraries available for most isotopes, e.g. (Nucleonica GmbH 2014). This pathway yields total cross-sections. For non-monoisotopic target elements, similar reactions potentially lead to the same product isotope. (p,xn) reactions are the prime example, but also other reactions, decays, or impurities lead to the same problem. In this situation only isotopically enriched targets allow unfolding the problem with an ex situ analysis. In-situ detection of the light product via particle detectors suffers from the problem of catching all angles around the target via physical placement of detectors. A differential cross-section results from a single detector angle. The in-situ detection allows for an energy resolved measurement of the products. This additional information in principle allows unfolding the stopping for 2-body reactions and using thick samples to generate complete spectra in one measurement. The backward calculation of the kinematics (see next section) has its drawbacks, but offers the potential for accelerated determination of cross-sections via stopping induced beam energy scanning (Möller 2017).
In the end the best method derives from the energy range, the required accuracy, and the specific reaction to be investigated.


3.3.2 Kinematics

The cross-section describes which reactions occur with which frequency/probability when a given beam impinges on a target. Kinematics describes how the particles move into, and, in particular, out of this reaction. The most important choice for understanding kinematics relates to the inertial system considered. In other words, what do we see as resting and what as moving? As always in this book, we have a moving beam particle and a stationary target defining our initial conditions. This is called the laboratory (Lab) system, the inertial system we also reside in. Nuclear decays feature only a single stationary particle in the beginning, significantly simplifying the situation as no preferential direction or relative velocity exists. Due to this symmetry, decays simply emit their products isotropically (equally probable in all directions) and will not be considered primarily in this section. The centre-of-mass (CM) system represents the true physics of each interaction. In this system both projectile and target move, but with exactly opposite momentum. In collider experiments, two beams move towards each other, making the CM system identical to the Lab system. All other cases require re-calculation of energies, angles, and cross-sections to switch between both. The Lab system represents our, and in the case of stationary targets, the target's point of view. For applications we are interested in the Lab system, since it describes the results we see. Switching from Lab to CM system reduces the kinetic energy available for reactions: in the Lab system, the centre of mass, as a point in between projectile and target, has to move towards the target, carrying a kinetic energy and momentum not present in the CM system. This CM movement could be understood as a virtual particle carrying the remaining kinetic energy. The recalculation between CM and Lab and the view of the CM system belong to fundamental particle and nuclear physics and will therefore be omitted in this book. For more details on the related calculations, the reader is referred to any standard nuclear physics book. The physics of nuclear reactions and decay kinematics depends on the number of involved particles. Each particle existing before and after the reaction has a set of kinematic properties, namely mass (m), kinetic energy (E), and movement vector (v), or in other notation energy and momentum vector. In applications only very few reactions involve more than two particles on the input side, due to the small probability of coincidence for the usual beam densities. This means we consider reactions of a projectile with a target. Already in Sect. 3.3.1 we learned about the existence of reactions with two products such as 181Ta(p,n)181W with a light product, the neutron, and a heavy product. The wording of a light and a heavy product originates from nuclear reactions favouring the formation of the strongest bound products (see Fig. 8.1), which are typically heavy elements. Correspondingly, nuclear reactions tend to release neutrons or protons or, if the Q-values are attractive, 4He. These so-called 2-body reactions yield a unique solution for the kinematics of the products. Figure 3.1 depicts the 2-body situation. The application aspect of the kinematic theory starts when not all kinematic properties are known. Solving 2-body kinematics relies on the mathematical logic of requiring as many equations as we have unknowns, see (Zagrebaev et al. 2019) for an


online tool. The conservation of energy and momentum vector, see (3.14), provides the equations. Particle 1 (bearing E1, p1) represents the projectile, particle 2 the target, and particles 3 and 4 the products. Particle 3 represents the light product and particle 4 the heavy product. Consequently, if we know all kinematic properties of an interaction except for two (remember each particle bears two properties), the kinematic equations will yield unique results for the two unknowns.

p⃗1 + p⃗2 = p⃗3 + p⃗4
E1 + E2 = E3 + E4   (3.14)

Here the momenta are given as vectors. The pure forward momentum of the initial situation can receive an additional transversal momentum component balanced between the two product particles. Since two particles necessarily lie on the same plane, the whole situation becomes rotationally symmetric about the axis defined by the movement vector of the projectile towards the target. This rotational symmetry is the same as discussed with the reaction angle in Sect. 3.3.1. With the target (particle 2) initially at rest, p2 and E2 become zero. Reactions with Q ≠ 0 break the conservation of mass by opening an exchange channel between mass and kinetic energy. We need to take the transfer between mass and energy via the Q-value into account. This modifies the conservation laws of (3.14) to

p1 = p3 ∗ cos(θ) + p4 ∗ cos(φ)
0 = p3 ∗ sin(θ) − p4 ∗ sin(φ)
E1 = E3 + E4 − Q
m1 + m2 = m3 + m4 + Q/c²   (3.15)

In production applications we usually know the projectile and target properties, while in analytical applications the projectile and the properties of one or two products are known from detectors. The solution for a missing quantity, such as a product energy, is anything but straightforward. According to (Nastasi et al. 2014) Appendix 4, the product energies in the Lab system read in the non-relativistic case with E2 = 0:

E3 = (E1 m1 m3)/((m1 + m2)(m3 + m4)) ∗ [cos(θ) ± ((m2 m4)/(m1 m3) ∗ (1 + (m1 + m2)/m2 ∗ Q/E1) − sin²(θ))^(1/2)]²   (3.16)

E4 = (E1 m1 m4)/((m1 + m2)(m3 + m4)) ∗ [cos(φ) ± ((m2 m3)/(m1 m4) ∗ (1 + (m1 + m2)/m2 ∗ Q/E1) − sin²(φ))^(1/2)]²   (3.17)

θmax = arcsin[((m2 m4 (E1 + Q))/(m1 m3 E1) ∗ (1 + (m1 Q)/(m2 (E1 + Q))))^(1/2)]   (3.18)

sin(φ) = ((m3 E3)/(m4 E4))^(1/2) ∗ sin(θ)   (3.19)

The book (Nastasi et al. 2014) provides a comprehensive list of kinematic equations beyond (3.16)–(3.19) in its appendix. The equations include the possibility of elastic (Q = 0) and inelastic (Q ≠ 0) nuclear reactions by allowing for different masses of incoming (m1, m2) and outgoing particles/products (m3, m4) and a kinetic energy production or consumption via the Q-value. Equations (3.16) and (3.17) offer two possible solutions (±) depending on the mass ratio of projectile and target. For a projectile lighter than the target, the products can be scattered into the forward direction (think of a grazing impact) or into opposite directions (think of a frontal impact), and only one sign yields a physical solution for a given angle. If the projectile mass equals the target mass or exceeds it, both products have to move in the forward direction due to conservation of CM momentum; the emission is then restricted to angles up to θmax of (3.18). In other words, for a given set of properties of one product, the properties of the second product are strictly defined by the conservation laws. Practically this means, if we analyse one product for its energy and mass at a certain scattering angle θ, and we know the projectile mass, the 2-body kinematics, e.g. in the form of (3.16), can tell us for example which mass the target had. In this case m2 and m4 are two unknowns, but we can add the conservation of mass from (3.15) as the second equation required for a unique solution with 2 unknowns. In some cases logical exclusion principles yield extra information on the involved particles via conservation of nuclear numbers (proton, neutron, electron count) or known Q-values. This recalculation represents an important fact for accelerator based analytics by making the measurement of different quantities physically equivalent for understanding the whole reaction. 3-body or n-body reactions such as (p,2n) or (p,xn), respectively, follow different rules than 2-body reactions. The definition of the kinematic properties of one product does not define the kinematic properties of the other two products, since these two have no mathematical rule for how to share the remaining energy and momentum. Energy, momentum, and mass are still conserved, but the additional degree of freedom yields distribution functions instead of singular values, as depicted in Fig. 3.14. Imagine dropping a bag of food onto a bunch of dogs: All the food will be consumed for sure, but each time you do it every single dog will receive a different amount of food. This n-body situation is independent of whether a projectile-target situation or a decay, such as the β− decay emitting a heavy decay product, an electron, and an anti-neutrino, is considered. A prominent example of this problem is the KATRIN experiment searching for the neutrino mass via detection of the energy distribution function of the electron emitted in the decay of tritium. The problem of this analytical experiment lies in the decreasing counting statistics towards the high-energy end of the electron energy distribution.


Fig. 3.14 Energy spectrum of the electrons escaping from the β-decay of 210Bi, which is a 3-body decay (heavy nucleus, electron, anti-neutrino). Due to their kinematic freedom, n-body decays always feature product energy spectra. In the example, an electron energy Eel = 0.4 MeV would correspond to an anti-neutrino energy Ev = 763 keV, but all other combinations are also possible

In the distribution function lower energies occur more often, since more allowed combinations of properties for the other particles exist in this region. Physicists call this the density of states. Considering the limiting case of one particle receiving the maximum possible energy (EMax), the other two particles have to feature an exactly opposite momentum, due to the limited available energy, and only one state of the whole system remains possible. For simplifying the situation we can reformulate the problem to a 2-body reaction with an imaginary box combining two of the three products into one. In any case the physics of ambiguous solutions remains the same, making n-body reactions a rather unattractive situation for analytics.
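Returning to the unambiguous 2-body case, (3.16) is straightforward to evaluate numerically. The following Python function is a minimal sketch (not a validated kinematics tool), assuming non-relativistic kinematics, integer mass numbers in place of exact masses, and the positive-sign solution; it reproduces the 18O(p,4He)15N example used with Fig. 3.16 in the next section:

import numpy as np

def e3_light_product(e1, m1, m2, m3, m4, q, theta_deg, plus=True):
    # Light-product energy E3 from (3.16); energies in keV, masses in amu
    # (mass numbers suffice for this sketch), theta_deg = lab reaction angle
    theta = np.radians(theta_deg)
    inner = (m2 * m4) / (m1 * m3) * (1.0 + (m1 + m2) / m2 * q / e1) \
            - np.sin(theta) ** 2  # negative value: kinematically forbidden
    sign = 1.0 if plus else -1.0
    factor = e1 * m1 * m3 / ((m1 + m2) * (m3 + m4))
    return factor * (np.cos(theta) + sign * np.sqrt(inner)) ** 2

# 18O(p,4He)15N with E1 = 988 keV, theta = 150 deg, Q = 3979.8 keV:
print(e3_light_product(988.0, 1.0, 18.0, 4.0, 15.0, 3979.8, 150.0))
# prints roughly 3.5e3, i.e. E3 of about 3.5 MeV as in Fig. 3.16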

3.4 Depth- and Stopping Dependent Reactions

Considering the individual particle picture, a charged particle travelling through matter constantly loses energy, but it also has a probability to scatter depending on energy and projectile-target combination. Combining these effects yields the reaction probability of a nuclear reaction described by the cross-section σ in a depth interval of z to z1 or energy interval E0 (= beam energy) to E1, respectively. In the energy picture, the ratio of the reacting matter density ρR to the stopping matter density ρS comes into play when considering compounds consisting of several elements. In these mixtures, all target particles induce stopping, constituting ρS. Only specific target species interact via σ, constituting ρR.

W ≡ ∫z1→z (ΔW(E)/Δx) dx = ∫z1→z σ(E(x)) ∗ ρR dx = (ρR/ρS) ∗ ∫E1→E0 (σ(E′)/S(E′)) dE′   (3.20)

Equation (3.20) describes the reaction probability W in three different forms: first as the reaction in an infinitesimal slice of the target of thickness Δx integrated over the considered depth, second as the energy (E) dependent cross-section times the reaction partner density integrated over depth, and lastly with the depth substituted by the depth dependent energy by means of the stopping power S. Setting E1 = 0 assumes full stopping of the projectile in the target, a situation typical for accelerator applications. For analytical purposes the reaction depth will be important for resolving further information such as the depth and local concentrations of elements and isotopes ρR from the analysis. For production purposes, the integral over the whole accessible depth, the range, or the integral over the energy from the initial accelerator energy to zero, respectively, is more relevant. In either case, the relevant quantities change over depth, and the location/depth at which reactions happen forms a quantity of interest. Figure 3.15 explains the different parts where stopping and depth define the reactions and whether products remain in the sample or leave it. Projectiles entering the target at a shallow angle, i.e. α close to 90°, induce reactions closer to the surface. The depth of the reaction scales according to (3.21). Variation of the impact angle enables virtually increasing the stopping power of the material for the projectile. The same scaling exists for the exit angle of the products, enabling differentially changing the effective stopping of projectile and products, a trick often used to increase for example the depth resolution of analytical methods. Unintended impact and exit angle variations relate to surface roughness and porosity.
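In the opposite limit of a very thin target with negligible energy loss, (3.20) collapses to the handy estimate W ≈ σ(E0) ∗ ρR ∗ Δz — the form underlying the thin-target cross-section measurements discussed in Sect. 3.3.1.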

Fig. 3.15 An exemplary depth dependent scattering situation. The ways in and out of the sample differ depending on impact and exit angles. Leaving the sample requires sufficient product energy, depending on depth and exit angle of the products

Fig. 3.16 Depth calculation of the 18O(p,4He)15N reaction of 988 keV protons in SiO2 showing the depth profile of the reaction probability together with the light product (4He) energy E3 at 150° scattering angle. The reaction has a range of about 6.2 × 10²³ atoms/m² (recalculate with density to have it per length) with a lower energy limit of 520 keV due to the onset of the reaction barrier, see Fig. 3.12

Depth = Range ∗ cos(α)   (3.21)
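For example, tilting the target to α = 60° halves the reaction depth according to (3.21), acting like a doubling of the effective stopping power along the surface normal.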

Considering a perpendicular incidence, α = 0°, of a proton probing for 18O in SiO2 (silicon dioxide) and combining Fig. 3.15 with (3.16), (3.19), and (3.20) yields Fig. 3.16. Here we calculated the depth in the sample, in units of atoms passed, versus the light product energy E3 at 150° scattering angle for the reaction 18O(p,4He)15N (cross-section in Fig. 3.12) with 988 keV projectile energy E1. For recalculation of this two-body reaction kinematics take care of the units and magnitudes. Note that the calculated E3 is given at the depth of the reaction; when leaving the sample, additional energy will be lost on the way out. Neither was straggling included. The figure demonstrates several aspects of depth dependent reactions. The light product energy changes with depth, but in this case only by about 243 keV, while the projectile energy changes by 468 keV. The origin of this discrepancy lies in the reaction Q-value of 3979.8 keV, adding additional energy to the kinematics which dominates the momentum conservation in the two-body kinematics. In the reaction probability, the cross-section clearly becomes visible, as stated by (3.20). If this reaction was used for analytical purposes, the behaviour of the cross-section leads to a low sensitivity to 18O close to the surface (200 structural models with physical relevance and separable impact on the scattering spectra in connection with a fitting algorithm adapted to consider the relevant parts of the spectra.

The MeV ion-beam analysis codes SimNRA (Mayer 1997) and NDF (www.surrey.ac.uk/ion-beam-centre/research-areas/ion-beam-analysis), on the other hand, apply a layered sample structure with elemental concentrations individual to each layer, since the method features a depth resolution with elemental sensitivity. For methods with industrial maturity such as electron induced X-ray emission (a.k.a. energy-dispersive X-ray spectroscopy) instrument manufacturers provide their own codes. All of these codes and packages represent application specific compendia selected and optimized for a more or less specific task. Adding a source term to the equation system or simulating a nuclear reaction spectrum or whatever these code packages do relies on codes describing the basic physics of ion-matter interactions.

This chapter demonstrated the complex physics behind beam-matter interactions, namely stopping, reactions/interactions, and kinematics. Starting already at the stopping power, complex physics and numerous experimental data have to come together. For this topic several codes exist, such as A-star and P-star for protons and α-particles and


e-star for electrons (National Institute of Standards and Technology 2019). The most common code for arbitrary ions is the program SRIM (Ziegler et al. 2008), with the latest version of 2013 being freely available on www.srim.org. SRIM also allows for integration into user programs with the help of its SR module, outputting stopping powers for given projectile and target properties. The SRIM software features a large set of examples and input data comparisons proving its accuracy in the few percent range down to a few keV/amu. For electrons the program CASP (http://www.casp-program.org/) provides stopping powers, while CASINO (http://www.gel.usherbrooke.ca/casino) already extends beyond pure stopping and straggling. The Nucleonica database (Nucleonica GmbH 2014) provides a compendium of codes for electrons, ions, and also positrons and muons. Calculation of nuclear reaction cross-sections remains a major challenge to date. It has to be distinguished between ab initio codes calculating the nuclei on the quark level and nuclear model codes based on a semi-empirical approach. The former include extreme amounts of interactions, since all quarks of a nucleus interact with all other quarks and gluons via the physics of quantum chromodynamics (QCD). These ab initio codes do not require input other than fundamental physical constants. Unfortunately, current computational technology limits the capabilities of QCD calculations, rendering them currently irrelevant for the complicated resonance cross-sections of elements such as lithium and beyond. In contrast, nuclear model codes allow for calculating reaction cross-sections for any projectile-target combination, but they require input about the structure of the involved nuclei and physical models of the involved interactions. These structural data include experimentally determined energy level schemes and state lifetimes, peppered with the typical experimental limits and accuracies. The most famous codes are Talys (www.talys.eu) and EMPIRE (www.nds.iaea.org/empire). The Talys code supplies the TENDL database, often cited here, which contains total and differential cross-sections for nearly all possible reactions. Many of the above mentioned software packages do not feature full graphical user interfaces (GUIs), but rather rely on a scripting language for user input. Programming becomes a central skill at least of higher level accelerator application experts and researchers. This relates not only to the operation of codes, but also to post-processing of data such as fitting and extraction of results. The simple adjustment of a beam position requires only three points for a skilled operator able to apply a 2nd order polynomial fit to the data, while the non-programming approach might require acquiring a dozen data points, stepwise approaching a minimum value. Programming or data science, respectively, nowadays often termed artificial intelligence "AI" in business presentations, becomes the accelerator expert's sixth sense. The author recommends to any reader from personal experience to acquire solid programming skills when working in this field. Even human resources departments value the programming skills of accelerator physicists due to this natural connection. The programming language Python has proven to be maybe the most valuable option in recent years. Not only is the syntax quite flexible and straightforward and the whole ecosystem based on open source, but most importantly it features numerous packages


with pre-programmed complex functions for mathematics, data handling, and so on. Some of the graphs presented in this book are also based on Python code.
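A minimal sketch of what such a script can look like is given below; the file names, column layout, units, and plotting details are illustrative assumptions, while the SciPy/NumPy functionality is the one described in the following paragraph:

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import quad
import matplotlib.pyplot as plt

# Two-column plain-text inputs: projectile energy [MeV] vs. cross-section,
# and projectile energy [MeV] vs. stopping power (file names assumed)
e_cs, cs = np.loadtxt("cross_section.txt", unpack=True)
e_sp, sp = np.loadtxt("stopping_power.txt", unpack=True)

# Interpolate the discrete lists into integrable functions
sigma = interp1d(e_cs, cs, bounds_error=False, fill_value=0.0)
stopping = interp1d(e_sp, sp, bounds_error=False, fill_value=np.inf)

def integrand(e):
    # sigma/S is the energy-space integrand of (3.20), up to rho_R/rho_S
    return sigma(e) / stopping(e)

# Evaluate the thick-target integral for beam energies of 1-60 MeV
e0_values = np.linspace(1.0, 60.0, 120)
w = [quad(integrand, e_cs.min(), e0, limit=200)[0] for e0 in e0_values]

plt.plot(e0_values, w)
plt.xlabel("Projectile energy [MeV]")
plt.ylabel("Reaction probability W")
plt.show()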

The above code generates Fig. 3.18 from two files containing plain text lists of discrete points of the projectile energy dependent reaction cross-section and stopping power. The program interpolates these discrete lists, forming integrable functions. The integrals are evaluated in a range of 1–60 MeV and plotted. Packages such as SciPy (Jones et al. 2001) and NumPy (Walt et al. 2011) ease this task by providing ready to use functions for most mathematical tasks. Packages for the export and complex plotting of the data exist, allowing for fully automated parameter studies and acceleration of repeated analysis tasks. Even nuclear data packages exist for Python integration.

References

G. Audi et al., The NUBASE2016 evaluation of nuclear properties. Chin. Phys. C 41, 030001 (2017). https://doi.org/10.1088/1674-1137/41/3/030001
M. Berger, J. Hubbell, S. Seltzer, J. Chang, J. Coursey, R. Sukumar, D.S. Zucker, K. Olsen, NIST XCOM: Photon Cross Sections Database (2010). https://www.nist.gov/pml/xcom-photon-cross-sections-database
J. Blocki, J. Randrup, W. Swiatecki, C. Tsang, Proximity forces. Ann. Phys. 105, 427–462 (1977). https://doi.org/10.1016/0003-4916(77)90249-4
G. Ciullo, R. Engels, M. Büscher, A. Vasilyev, Nuclear Fusion with Polarized Fuel (Springer, 2016)
Culham Centre for Fusion Energy, FISPACT-II Material Handbooks (2018). https://www.ccfe.ac.uk/fispact_handbooks.aspx
A.F. Gurbich, SigmaCalc recent development and present status of the evaluated cross-sections for IBA. Nucl. Instrum. Methods Phys. Res. Sect. B 371, 27–32 (2016). https://doi.org/10.1016/j.nimb.2015.09.035
M. Herman, R. Capote, B. Carlson, P. Oblozinsky, M. Sin, A. Trkov, V. Zerkin, EMPIRE: nuclear reaction model code system for data evaluation. Nucl. Data Sheets 108, 2655–2715 (2007)
W. Huang et al., The AME2016 atomic mass evaluation (I). Evaluation of input data and adjustment procedures. Chin. Phys. C 41, 030002 (2017). https://doi.org/10.1088/1674-1137/41/3/030002
E. Jones, T. Oliphant, P. Peterson et al., SciPy: Open Source Scientific Tools for Python (2001). http://www.scipy.org/
A. Koning et al., TENDL-2015: TALYS-Based Evaluated Nuclear Data Library (2015). https://tendl.web.psi.ch/tendl_2015/tendl2015.html
M. Mayer, SimNRA User's Guide, IPP Report 9/113 (Max-Planck-Institut für Plasmaphysik, Garching, 1997). www.simnra.com
S. Möller, Analytical continuous slowing down model for nuclear reaction cross-section measurements by exploitation of stopping for projectile energy scanning and results for 13C(3He,α)12C and 13C(3He,p)15N. Nucl. Instrum. Methods Phys. Res. Sect. B 394, 134–140 (2017). https://doi.org/10.1016/j.nimb.2017.01.017
National Institute of Standards and Technology, NIST: Introduction of e-star, p-star, and a-star (2019). https://physics.nist.gov/PhysRefData/Star/Text/intro.html
M. Nastasi, J.W. Mayer, Y. Wang, Ion Beam Analysis: Fundamentals and Applications (CRC Press, 2014)
Nucleonica GmbH, Nucleonica Nuclear Science Portal, Version 3.0.49 (Nucleonica GmbH, Karlsruhe, Germany, 2014). www.nucleonica.com
OECD Nuclear Energy Agency (NEA), JANIS (2017). http://www.oecd-nea.org/janis/
P. Sigmund, Particle Penetration and Radiation Effects: General Aspects and Stopping of Swift Point Charges (Springer, Berlin, Heidelberg, New York, 2006). ISBN 978-3-540-31713-5
J. Sillanpää, Electronic stopping of silicon from a 3D charge distribution. Nucl. Instrum. Methods Phys. Res. Sect. B 164–165, 302–309 (2000)
S. van der Walt, S.C. Colbert, G. Varoquaux, The NumPy array: a structure for efficient numerical computation. Comput. Sci. Eng. 13, 22–30 (2011). https://doi.org/10.1109/mcse.2011.37
V. Zagrebaev, A. Denikin, A. Karpov, A. Alekseev, M. Naumenko, V. Rachkov, V. Saiko, NRV Web Knowledge Base on Low-Energy Nuclear Physics—2 Body Kinematics (2019). http://nrv.jinr.ru/nrv/mobilenrv/kinematics/Kinematics2Body.html
J.F. Ziegler, J.P. Biersack, M.D. Ziegler, SRIM—The Stopping and Range of Ions in Matter (Chester, 2008)

Chapter 4

Secondary Particle Generation with Accelerators

Abstract The beam-matter interaction results in a multitude of secondary particles. These particles bear information useful for analytical methods and the potential for applications through their controlled generation. The knowledge of their emission physics results in technological options for the control of their properties. The production of secondary ions, electrons, photons, neutrons, and exotic particles will be discussed based on relevant application examples and connections to future chapters. In particular, neutron and photon sources based on accelerator-induced secondary particle physics already have numerous applications. Different source options will be discussed together with layout optimisations.

You know the best thing about a foursome? If one quits you still have a threesome (House M.D.)

In the last chapter, we investigated the basic physics of particle-beam matter interactions, and we already noticed that the physical aspects have different relevance for the applications. The terms production and analytical application form a rough frame of the directions of accelerator applications. There is an overlap extending beyond both being accelerator applications, but also several distinctions. Production applications rely on terms such as energy efficiency, rates, and reaction pathways, while analytical applications focus more on separation/resolution of particles, ratios of processes, and energy dependencies. What they have in common is a need to control the secondary particles released during the interaction. The scientific method requires three steps for controlling a process: First describe it to be sure what exactly you are talking about; second analyse it to reveal its patterns; third rehearse your analysis via feedback in order to verify it to a maximum level of confidence (the reason why physicists determine fundamental constants with 18 digit accuracy). In general, accelerators allow for the production/release of all types of fundamental particles from the beam targets. Usually it is even unavoidable to produce or release certain particle types due to the physics of the interaction processes. Firing a particle beam onto a target yields a situation similar to throwing a stone into water, see Fig. 4.1: Upon contact the water deforms, some of it is released in all directions, backwards we see splashes and in forward direction bubbles of air. Just like this, the


Fig. 4.1 A stone thrown into water releases a burst of matter from its target similar to a particle hitting a target

projectile impact on target atoms will release splashes of matter and energy in all directions (see Fig. 1.2). Water, being a liquid, will quickly recover to its original state, just as matter will. The analogy falls short regarding the interaction of the stone with the water on the nuclear level. Furthermore, it is a deterministic situation, while on the atomic scale processes follow statistical rules. So what are we talking about with the term secondary particles? It denotes all particles (ions, electrons, neutrons, photons …) produced in the reaction, or accelerated by a reaction collision, leaving the reaction zone/target. This definition is in fact a bit vague, since for example the same product can leave the target or not, depending on it being directed into the target or directed outwards. The particle leaving the target will be detectable, though, a major difference for the application. Applications can make use of secondary particles. The energy/energy distribution of the product particles becomes a crucial part, since low energy particles hardly leave the target. Generally, the higher the reaction energy, the more secondary particles are emitted. In the case of surface-near low energy products this logic can flip over due to a local reduction of the interaction probability with increasing energy. Why should we want to produce particles from a particle beam? Ion and electron beams are easily produced, see Sect. 2.4, and these beams can be tailored due to their charged nature by beam optics, while in particular uncharged particles (photons and neutrons) or even strange fundamental particles are not easily emitted or influenced by our electro-magnetic technology. Secondary particles extend the range of accessible particles, if we can transfer our knowledge and ability to control the charged


particles to the secondary particles. Furthermore, the secondary particles carry information about target and beam-matter interaction due to connecting physics such as kinematics and energy-loss. In spite of their useful aspects, secondary particles also represent a burden for radiation safety. The origin of secondary particles lies in the non-deterministic nature of nuclear physics. It behaves a bit like the British people in the 2019 Brexit discussion: It's unclear what it wants, but it's totally clear what it does not want. The conservation laws (momentum, energy …) make clear statements of what is not possible. Anything possible will also happen with a certain probability. The following sections will explain our understanding of the underlying processes generating the different particle types.

E = mc²   (4.1)

Last but not least, Einstein's famous (4.1) connects to secondary particle generation with accelerators. It connects the electrical energy we put into the accelerator with the particles we release. Although in the application range of energies secondary particles are mostly released from bound states, the E in Einstein's equation can stand for the electrical energy from a power plug (with correct handling of units) which the beam-matter interaction converts into free particles.

4.1 Electrons, Atoms, Molecules, and Ions

Generating secondary electrons and ions requires the transfer of a sufficient amount of energy from impacting projectiles to the target atoms to break their bonds with the surrounding matter and release target particles as free secondary particles, see e.g. (Gnaser 1999) as a more detailed reference. In this section we will consider secondary particles as particles already present in the target before the reaction, not particles generated for example in a nuclear reaction or decay, due to the different physics connected with their emission. The equations of energy transfer based on conservation of momentum were discussed in the last chapter. An important physical conclusion from kinematics for secondary particles lies in the mechanism generating backward oriented secondary particles. Secondary particles typically feature low kinetic energies due to the kinematic contradiction of high energy transfer and momentum reversal: a high energy transfer from projectile to target implies a head-on collision, which conserves the forward momentum. The low energy limits the secondary particle range to a few tens of nm. This limits the escape of the secondary particles to surface-near depths. The closest surface is the backward surface, making this the direction relevant for applications. From kinematics, the primary particles (projectiles) can be deflected backwards in a single collision if they are lighter than the target. A target particle, in contrast, requires several subsequent interactions with other target particles in order to be redirected to the backscattering direction. These multiple scattering events depicted in Fig. 4.2 reverse the initial momentum directed into the target towards the backscattering direction.


Fig. 4.2 The chain of events of secondary particle release. The red incident projectile hits a surface-near atom (1). This primary knock-on atom (PKA) or electron receives a part of the projectile energy, freeing it from its lattice binding. The primary hits another surface-near particle (2). This secondary particle receives a fraction of the primary's energy, but again changes the direction of momentum (arrows). With sufficient energy it passes the surface barrier (3), being released as a free, sputtered atom/electron

In addition to kinematic aspects, energy-loss plays an important role for both the projectile and the secondary electrons and ions (in contrast to photons). On its way out of the sample, the secondary particle experiences energy-loss. Consequently, only the particles starting close to the surface feature enough energy to actually reach the surface; particles generated deeper will stop within the target. The projectile type, its energy, in this context called the impact energy, and its angle of incidence on the target represent the defining quantities for the generation of secondary particles and their properties from a given target. Heavier projectiles at energies around the stopping power maximum generally produce the highest secondary particle yields (yield = secondaries/projectiles). Shallow impact angles tend to shorten the chain of events shown in Fig. 4.2, shifting it closer to the surface and therefore increasing the yield. The target surface represents an important boundary for secondary electrons and atoms/ions. Electrons and ions constitute matter, therefore the binding energies keeping matter in its state are equivalent to threshold energies for releasing free secondary particles. The surface represents a special situation for this binding, since basically half the binding partners are missing for the surface atoms. The corresponding binding energies are the so-called work function for electrons (known from thermal electron sources) and the surface binding energy for ions (known as sublimation energy). For example, polycrystalline tungsten features a work function of 4.5 eV and a surface binding energy of 8.79 eV. The projectile has to displace the secondary particle from its initial binding, then it moves towards the surface where it finally surpasses the surface barrier. Interestingly, this chain of events results in a secondary particle energy only weakly depending on the projectile impact energy (Behrisch and Eckstein 2007; Seiler 1983).

Fig. 4.3 Apparent beam current versus target bias for two He projectile types (1.4 MeV 4He+ and 3 MeV 3He+) impinging at 0° on a biased polished tungsten sample. Positive biasing draws back secondary electrons to the target, if their kinetic energy is below the biasing voltage. The secondary electron energies are mostly identical for both impact energies. The current at +100 V bias represents the impacting ion current, hence each projectile releases up to 7 electrons at −10 V biasing (14/2)

Figure 4.3 shows an example of the apparent current composed of the impacting ions and the backscattered secondary electrons for two different impacting ion energies. Most of the secondary electrons have energies below 10 eV, in spite of the MeV projectile energy. The kinetic energy (E) distribution function fE of the emitted particles typically follows the Thompson distribution (4.2) (Eksaeva et al. 2017). The only parameters are the distribution parameter a and the surface binding energy ESB. Consequently, the model states an impact energy independent emission energy; only material dependent values influence the emission energy. In reality, minor parts at the high energy fall-off show an impact energy dependence.

fE(E) = a(a − 1) ∗ E ∗ ESB^(a−1)/(E + ESB)^(a+1)   (4.2)
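A small sketch of (4.2) in Python — the exponent a = 2 used below is an illustrative assumption corresponding to the classical Thompson form, not a fitted value:

import numpy as np

def thompson(e, e_sb=8.79, a=2.0):
    # Thompson distribution (4.2); energies in eV. e_sb = 8.79 eV is the
    # tungsten surface binding energy quoted above; a = 2 is assumed
    return a * (a - 1.0) * e * e_sb ** (a - 1.0) / (e + e_sb) ** (a + 1.0)

e = np.linspace(0.0, 50.0, 501)
print(e[np.argmax(thompson(e))])  # maximum at E_SB/a, here about 4.4 eV

The analytic maximum of (4.2) at ESB/a illustrates why most emitted particles carry only a few eV, consistent with Fig. 4.3.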

The angular emission distribution of secondary particles typically follows a cosine function of the angle against the surface normal, see Fig. 4.4. This cosine typically features an exponent different from one, which weakly depends on impact energy for a given target. For secondary electrons emitted by electron impact this holds true independent of the electron impact angle (Seiler 1983), but for secondary ions and atoms, the angle of maximum emission points towards the specular reflection direction of the projectiles, as we would expect from the billiard-like collision kinematics. For very low ion impact energies and materials of special order such as single crystals and certain metals, the maximum deviates from the specular direction, following for example the crystallographic planes or resembling the so-called butterfly shape. Whether a target atom is released as secondary positive ion, negative ion, or neutral depends on the surface. The initial binding energy of an atom with its electrons strongly depends on its chemical surrounding, leading to the so-called matrix effect.


Fig. 4.4 Left: Cosine angular distributions of sputtered Ge particles emitted into the backscattering half-space against the surface normal. Reproduced from (Behrisch and Eckstein 2007). Right: W particles sputtered by low energy Ar ions with 50–150 eV impact energy are emitted in a butterfly shaped distribution. The dashed coloured lines represent experimental data, the black solid line a molecular dynamics simulation. Reproduced from (Eksaeva et al. 2017) published under CC-BY 4.0

Practically, roughness, oxidation, and impurities further change the surface bindings from the bulk values. In any case, the neutral emission mode is energetically favourable, but not inevitable. Sputtering with alkali projectiles, e.g. Cs, strongly enhances the emission of negative ions due to their loosely bound outermost electron, which they readily contribute to the surface upon implantation. Electron attracting elements, in particular oxygen, on the other hand increase the amount of positive ions while suppressing negative ion emission. Consequently, the release type of ions strongly depends on the target elements in the same manner. An oxygen atom would be emitted rather as negative ion than as positive ion, while a Cs atom prefers the positive over the negative state. The process of secondary ion release is called sputtering. Due to their low mass and stopping power, electrons hardly enable the emission of atoms and ions; releasing them requires other, heavy particles. Sputtering consumes the surface via the release of material, making it useful in applications of microscopic drilling/machining. The binary collision approximation in the form of the semi-empirical Eckstein formula (Behrisch and Eckstein 2007), see (4.3), or the SRIM (Ziegler et al. 2008) and SDTrimSP codes describes sputtering with high accuracy. Equation (4.3) describes the total sputtering yield YS as a function of the ion impact energy E0, the stopping power S, and the empirical fitting constants A, B, C, Ethres (= sputtering threshold), which are available in (Behrisch and Eckstein 2007). The binary collision model works well in the typical accelerator context, but reaches limits in the impact energy region below a few 100 eV typical for plasmas. With lower energies, multi-body interactions gain increased importance for sputtering, requiring more complex molecular dynamics simulations. Energies above MeV and heavy projectiles mark the other limit of the binary collision approximation. Here the majority of the energy deposits via electronic losses, and the relevance of nuclear (billiard) collisions decreases. For these so-called swift heavy ions the physics of sputtering changes to a collective process of local thermal spikes and evaporation.

YS(E0) = A ∗ S(λE0) ∗ ((E0/Ethres − 1)^C)/(B + (E0/Ethres − 1)^C)   (4.3)

with λ = (m2/(m1 + m2)) ∗ (9π²/128)^(1/3) ∗ (ε0 h²/(π me e²))/(Z1 Z2 e² ∗ (Z1^(2/3) + Z2^(2/3))^(1/2))
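A minimal sketch of how (4.3) could be evaluated — the function follows the simplified form given above; all constants are placeholders, and real fit values must be taken from (Behrisch and Eckstein 2007):

def sputter_yield(e0, a_fit, b_fit, c_fit, e_thres, lam, s_n):
    # Simplified Eckstein-type total sputtering yield following (4.3).
    # e0: ion impact energy; a_fit, b_fit, c_fit, e_thres: material-specific
    # fit constants; lam: conversion into the reduced energy; s_n: callable
    # returning the stopping power. All parameter values are placeholders.
    if e0 <= e_thres:
        return 0.0  # below the sputtering threshold nothing is eroded
    t = (e0 / e_thres - 1.0) ** c_fit
    return a_fit * s_n(lam * e0) * t / (b_fit + t)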

For special combinations of projectile and target, where volatile molecules can be formed, chemical interactions induce a loss of target material, so-called chemical sputtering. Unlike sputtering, this process is not a direct result of collision cascades, but relates to the accumulation of atoms loosely bound to the surface, so-called ad-atoms, produced by the projectile impact. Chemical reactions on the surface form volatiles, e.g. H ions on C form CH4, which subsequently evaporate. The balance of the surface concentration of projectile ad-atoms and the evaporation induces a temperature-dependence as demonstrated in Fig. 4.5. Higher temperatures open up more chemical sputtering channels; for example, W sputtering by O ions features a chemical component due to the relevant vapour pressure of WO2 above 1100 K.

Fig. 4.5 Sputtering/erosion yield of graphite (carbon) against H impact energy. Open symbols represent surface temperatures of 570–920 K, filled symbols are around room temperature (≈300 K). Collision cascades induce physical sputtering. The formation of volatiles such as CH4 induces chemical sputtering for chemically interacting projectiles and targets only. The chemical sputtering yield mostly depends on the surface temperature, while physical sputtering depends mostly on the impact energy. Consequently, physical sputtering (solid line) dominates at lower temperatures and higher impact energies, while chemical sputtering dominates at low impact energies. Reproduced from (Behrisch and Eckstein 2007) with permission by Springer


4.2 Neutrons I’ve Been Looking for Freedom (David Hasselhoff)

Neutrons are naturally bound to nuclei. Free neutrons have a limited half-life of about 610 s. Their rest mass exceeds the proton mass by 1.3 MeV; consequently they decay to a proton, an electron, and an electron anti-neutrino. Yet their specific properties make it worth setting them free. Neutrons are the only heavy particle without electric charge (except for some special particles). This implies several important differences to charged particles: The electronic energy-loss mechanism does not apply to neutrons. As a consequence, neutrons have orders of magnitude longer ranges than charged particles and they can penetrate matter down to thermal energies in the meV range. In contrast to ions, neutrons cannot be bound to matter, therefore remaining free even at zero kinetic energy. No charge also means no Coulomb barrier hindering nuclear reactions. While ions require several MeV, neutrons react with nuclei at any energy (respecting conservation of energy). This decouples the beam energy and reaction rates from the reaction Q-value. Neutrons enable the production of other isotopes than ions, an important degree of freedom for isotope applications. The absence of nuclear charge implies important application drawbacks. So far we have no means of accelerating neutrons. We can only produce free neutrons via nuclear reactions releasing them from the atomic nucleus. Thereafter we have to live with the release energy. Collisions with light nuclei, in particular the hydrogen bound in water, enable reducing the neutron energy (a.k.a. moderation), but these collisional processes cannot produce sharp beam energies as we know them from charged particle beams, see Fig. 4.6. Collisional processes will produce statistical energy distributions, preventing for example non-ambiguous kinematic relations. Most neutron releasing reactions anyway produce broad energy spectra due to their multi-body decay nature. Even for sharp primary spectra the high range of neutrons enables interactions with the surroundings, generating mixed spectra as displayed in Fig. 4.6. The absence of nuclear charge also prevents the application of beam optical devices. For neutron sources producing an isotropic emission, the flux density will decay with the square of the distance to the neutron source. It will not be possible to focus the complete source flux onto a single target like with a charged particle beam, or to amplify the neutron beam using external energy input as with optical lasers. Neutron guide tubes provide a small relief from this practical problem. Seeing neutrons in the wave picture enables total reflection, as known from light optics, from the guide tube walls. Total reflection prevents absorption and transmission of the incident neutrons when they arrive at a shallow angle onto the wall. The incident angle must be smaller than a critical angle defined by the arcsin of the ratio of the refractive indices of two adjacent materials (e.g. air vs. road for light). For neutrons, with their weak interaction with matter, only minute differences of the refractive index to the vacuum value exist. Traditionally nickel coatings on glass tubes were used, but newly applied layered materials feature slightly higher refractive indices, enabling the reflection of neutrons from larger incident angles, further reducing the losses.
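To put the distance law into numbers (with an assumed, but representative source strength): an isotropic source emitting 10¹⁴ neutrons/s delivers a flux density of 10¹⁴/(4πr²) ≈ 3 × 10¹¹ n/cm²s at r = 5 cm, but only ≈ 8 × 10⁸ n/cm²s at 1 m. Moderation adds to this dilution: slowing a 2 MeV neutron to thermal energy (25 meV) requires on average about 18 elastic collisions with hydrogen, each of which also randomises the direction.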

Fig. 4.6 Binned neutron flux spectrum for sample exposure in the HFR research fission reactor in Petten. DEMO is a planned nuclear fusion power plant, but in spite of it producing only 14.1 MeV and 2.45 MeV neutrons from the D-T and D-D reactions, a broad energy spectrum can be observed inside it. The PGAA analytical instrument at the FRM-2 reactor works with a cooled neutron spectrum. Data for DEMO extracted from https://fispact.ukaea.uk/wp-content/uploads/2016/10/CCFE-R1636.pdf; HFR from https://fispact.ukaea.uk/wp-content/uploads/2016/10/Handbook_HFR_UKAEA1532.pdf; PGAA from (Kudejova et al. 2008)

The neutron guides enable fitting more end-stations to a given neutron source without the excessive flux density losses associated with the distance law. At the same time they can be tailored to add functionality via filtering of energy, direction, or spin of the neutrons. Unfortunately, the effect only works for the relatively long neutron wavelengths connected to cold neutrons (meV). The nuclear reaction cross-sections of neutrons typically feature three different behaviours along the μeV to MeV range of neutron energies. At low energies up to about 1 keV the cross-section decreases exponentially from high values. In the keV to MeV range multiple sharp and strong resonances occur from a relatively constant baseline. Above MeV again a region of decreasing cross-section starts, but at a lower level compared to the low energy range. This led to the definition of three main neutron energy ranges, the thermal, the resonance (also epithermal), and the fast neutron energy range, as depicted in Fig. 4.7. Thermal neutrons originate from collisions with matter at a certain temperature, therefore the region further separates according to the thermal temperature into room temperature (=thermal) and cryogenic temperatures typically in the liquid hydrogen range (=cold). Traditionally, so-called research (fission) reactors produce application relevant fluxes of neutrons from uranium fission. These reactors produce a thermal output power of a few W up to a few 10 MW (e.g. the FRM-II reactor) with maximum flux densities in the order of 10¹⁵ neutrons/cm²s at the core and some 10¹⁰ n/cm²s at far end-stations. The source strength of the best research reactors lies in the order of 10¹⁸ neutrons/s.

Fig. 4.7 Representative neutron reaction cross-section of the 30Si(n,γ)31Si reaction used for neutron doping of semiconductors (cross-section [barn] versus neutron energy [eV]; the cold, thermal, resonance, and fast regions are marked). Three distinct regions of behaviour can be found in the cross-section. Data from TENDL (Koning et al. 2015)

neutrons/s. The radioactive waste and the highly enriched uranium with >20% 235U (weapon usable) required for maximum neutron fluxes started a political debate about possible alternatives for these reactor types. A replacement by the regular 3–5% 235U enriched fuel is possible at the expense of lower neutron fluxes and service life. In most cases a lower neutron flux limits the productivity of the applications (flux and runtime are connected), therefore lower fluxes devalue the expensive reactor and end-stations. Together with the declining acceptance of fission reactors this strengthens the search for possible high flux alternatives not relying on fissile fuels. Neutrons are applied in a wide range of fields with a certain overlap with charged particles, but often in a complementary way. Industrial applications include the doping of silicon via neutron capture of 30Si forming 31Si, which decays to 31P, or the high sensitivity analysis of geological samples for resources. In the scientific context, neutrons have a special meaning for soft and biological matter due to the low energy deposition of thermal neutrons, but also numerous analytical applications in materials research exist. In the medical context neutrons aid in particular the production of certain medically relevant radioisotopes, but also direct treatment of cancer with neutron irradiation is established.
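The acceptance of the neutron guides mentioned above can be put into numbers with a common rule of thumb: for natural nickel coatings the critical angle amounts to roughly 0.1° per Å of neutron wavelength, and supermirror coatings raise this by their m-factor. A minimal sketch under these assumptions (the 0.1°/Å value and the linear m-factor treatment are textbook rules of thumb, not design data):

import math

# Rule-of-thumb sketch for neutron guide acceptance: de Broglie wavelength of
# a moderated neutron and the total-reflection critical angle of the guide wall.
# Assumptions: lambda[A] ~ 9.045/sqrt(E[meV]) and theta_c ~ 0.1 deg/A for
# natural Ni, scaled linearly by the supermirror m-factor.

def wavelength_angstrom(energy_meV):
    """Neutron de Broglie wavelength from its kinetic energy."""
    return 9.045 / math.sqrt(energy_meV)

def critical_angle_deg(energy_meV, m_factor=1.0):
    """Critical grazing angle for total reflection (m=1: natural Ni coating)."""
    return 0.1 * m_factor * wavelength_angstrom(energy_meV)

for label, e in [("cold (3 meV)", 3.0), ("thermal (25 meV)", 25.0)]:
    print(f"{label}: {wavelength_angstrom(e):.1f} A, theta_c = {critical_angle_deg(e):.2f} deg")
# cold: 5.2 A, 0.52 deg; thermal: 1.8 A, 0.18 deg -> guides favour cold neutrons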

4.2.1 Accelerator Neutron Sources

Accelerators with their possibilities for extreme beam power density and flexibility provide the main alternative to fission reactors for producing intense neutron beams. Beam focussing leads to a small neutron source extent, enabling high fluxes even at lower powers. In contrast to fission reactors, beam optics enables close distances between neutron source and application, resulting in increased usage ratios of the produced neutrons. Bringing source and application closer together improves the geometric efficiency, in other words the ratio of flux density to source strength. Some, not all, concepts work completely without heavy metals and result in only


little activation, important aspects for a new decentralised application concept of neutrons. The flexibility of accelerators and the many options, in connection with the complicated technical development of this new type of neutron source, only recently resulted in the first competitive devices. Many nuclear reactions produce neutrons, so let's first sort them to get a start on the technological concepts. Historically the 9Be(4He,n)12C reaction was used for producing neutrons from α-decaying radioactive isotopes, resulting in a radioisotope neutron source. Mixtures of beryllium metal with, for example, 241Am produced in fission reactors are still in use as reference sources with source strengths up to about 10⁶ neutrons/s. Practically we would like to see at least ten orders of magnitude higher fluxes/source strengths. The reaction rate depends on the ratio of reaction cross-section to stopping power (see Sect. 3.4). Consequently, protons and deuterons have advantages over helium projectiles since they offer lower stopping power combined with technically favourable ion source properties (see Sect. 2.4). Independent of the projectiles two classes of reactions exist: 2-body reactions with light isotopes (practically hydrogen isotopes, lithium, and beryllium) such as 2H(d,n)3He or 7Li(p,n)7Be and multi-body (p, xn) reactions with heavy elements such as tungsten. Light isotopes offer small stopping power and low reaction thresholds. For example the reaction 7Li(p,n)7Be has Q = −1.64 MeV. Due to the limited number of not extremely short-lived light isotopes and the requirement of only a few MeV projectile energy, only few problematic radioisotopes are produced with light targets. Every reaction produces a single neutron with fixed energy. In contrast, reactions with heavy targets require at least a few 10 MeV projectile energy for surpassing the Coulomb barrier. A typical target is W due to its good thermo-mechanical properties and acceptable costs. Here natW(p,xn)Re reactions produce several neutrons per reaction, but the multi-body reaction results in broad neutron energy spectra. Above about 100 MeV projectile energy a fission of the target nucleus becomes possible, releasing even more neutrons. The efficiency of an individual reaction decreases above a few 10 MeV, but as more reactions releasing more neutrons per reaction become available, the overall efficiency improves with beam energy. The spallation process can even be boosted with neutron induced fission reactions. These so-called accelerator driven systems multiply the spallation neutron count by a factor 10 or more by using the spallation neutrons for inducing sub-critical fission reactions, e.g. with uranium. This special neutron source type will be discussed in Sect. 8.1. MeV photons allow releasing neutrons from the atomic nucleus via the nuclear variant of the photo-effect. Electron based neutron sources exploit the same mechanism by first producing high energy Bremsstrahlung photons from several 10 MeV electrons which subsequently induce the nuclear photo-effect. In both cases the (γ, n) process becomes efficient in the so-called giant dipole resonance region typically found at 13–25 MeV photon energy (Berman and Fultz 1975). Basically, the photons bring the nucleus to a high energy excited state as depicted in Fig. 3.10. This state then decays via emission of nuclear constituents.
In spite of the resonance being “giant” the overall process is not as efficient as ion-induced processes and generally results in very high photon radiation levels, with the required radiation shielding forming a practical limit.
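The reaction-probability integral from Sect. 3.4 underlying all these source concepts can be sketched numerically. The cross-section and stopping-power functions below are crude placeholders standing in for tabulated data (e.g. TENDL cross-sections and SRIM stopping powers); only the structure of the integration is the point here:

N_TARGET = 1.2e23  # assumed target atom density [atoms/cm^3], illustrative value

def sigma_cm2(e_mev):
    """Placeholder cross-section [cm^2]: flat 50 mbarn above a 2 MeV threshold."""
    return 50e-27 if e_mev > 2.0 else 0.0

def stopping_mev_cm(e_mev):
    """Placeholder stopping power [MeV/cm] with a crude 1/E trend."""
    return 400.0 / max(e_mev, 0.1)

def reaction_probability(e0_mev, steps=2000):
    """Neutrons per projectile: n_target * integral of sigma(E)/S(E) dE
    from zero to the injection energy e0 (midpoint rule)."""
    de = e0_mev / steps
    return sum(N_TARGET * sigma_cm2((i + 0.5) * de) / stopping_mev_cm((i + 0.5) * de) * de
               for i in range(steps))

print(f"P(10 MeV) ~ {reaction_probability(10.0):.1e} neutrons per ion")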


The previous discussion already sorted out a few concepts. Let us start with a small-scale neutron source, a so-called electronic neutron generator (ENG), with ≈100 keV deuterons inducing (d, n) on tritium targets. These are small DC accelerators with metal targets loaded with tritium, for example in the form of titanium hydride, as depicted in Fig. 4.8. Most of the beam energy converts to heat due to energy-loss in matter. For detailed calculations see Sect. 8.3.1. The reader can calculate the required beam current and power for a given neutron output using the reaction probability. The heat load induced in a shallow layer due to the small ion range limits the neutron output. A technical variant uses slanted targets for increasing the receiving target area as depicted in Fig. 4.9, at the trade-off of a larger target. Target size and the minimum distance strongly influence the neutron flux received by an application. Neutrons distribute isotropically, therefore a larger working distance results in lower flux. With a source size in the order of 1 cm³ but a minimum working distance of about 3 cm the source flux distributes over a sphere of 4π(3 cm)² = 113.1 cm². A source strength of a typical commercial product of 10⁹ neutrons/s results in a maximum flux density of 8.8 × 10⁶ neutrons/(cm² s). The high stopping power at the low projectile energy results in a low yield of neutrons per MeV of projectile energy. According to Fig. 8.10 the reaction probability in a titanium hydride target lies around 2 × 10⁻⁵; dividing the 120 keV ion energy by this probability we

Fig. 4.8 An electronic D-T neutron generator encased in black polyethylene blocks. The tube encasing has a diameter of roughly 60 mm and emits up to 10⁹ n/s. The device weighs about 15 kg in total. The output decays over operating time due to the consumption of the tritium, lasting about 10,000 h. Copyright Forschungszentrum Jülich/Rahul Rayaprolu

Fig. 4.9 Technical approach for reducing the areal heat flux in a beam target by increasing the impact area through inclination


end up with 6 GeV of energy invested per emitted neutron. This low efficiency limits the neutron output. Consequently, increasing the projectile energy will extend our technological limit of source strength. Unfortunately the T(d,n)4He reaction cross-section drops towards higher energies, encouraging us to consider other reactions when aiming at higher efficiency in the MeV region. Competitive concepts of such neutron sources appeared together with the shut-down plans for fission based sources, e.g. (Mauerhofer et al. 2017). This concept achieves a high flux density via a 10 MeV proton beam incident on a compact and flexible beryllium neutron source with close positioning of moderators and end-stations. The MeV beam energy leads to a limited, localized nuclear activation of components as depicted in Fig. 2.57; activation starts becoming an issue in this energy range. At the required ion currents of >1 mA on a 40 mm diameter target, implantation of gaseous species poses a threat to the mechanical stability of the target. Bubbles can form at the end of the ion range, rapidly destroying the target via gas damage (Sect. 7.4). Either the target has to be hot enough to outgas the implanted species or it has to be thinner than the ion range in order to implant the ions into the cooling fluid behind the target. In this case, a 0.7 mm thin target was chosen due to the ion range of 0.8 ± 0.04 mm. The target acts as a beam window between vacuum and cooling water, as discussed in Sect. 2.6. Mechanical stability represents an issue, defining a lower beam energy limit connected to a minimum thickness. Due to the bad thermo-mechanical properties of solid lithium, designs with lithium targets rather choose liquid lithium as a flowing curtain, see Fig. 4.10. The lithium flow combines target and coolant functions. Lastly, gas targets, in particular for deuterons, enable the direct use of the lightest target elements. Reasonable ion ranges require pressures in the bar range, too much for the vacuum system of the accelerator. Either a solid window (limiting the beam power) or clever and powerful vacuum systems in combination with beam optics can be applied. With the technical concepts ready we can take a look at whether the MeV region actually brings a gain. Figure 4.11 compares a beryllium with a tungsten target. At 3 MeV the beryllium cross-section reaches relevant values and peaks at 10 MeV. A further increase of beam energy above 10 MeV brings practically no increase in cross-section, only a slight gain due to decreasing stopping power. Heavy elements on the

Fig. 4.10 Sketch of the three basic target options for neutron sources: solid window-like target, liquid flow target, and gas chamber target. The gas target chamber applies a cascade of gas apertures and pumps in order to maintain the vacuum pressure in spite of the high influx from the gas target


Fig. 4.11 Neutron production cross-section for p+natW and p+Be in comparison to an individual (p, n) cross-section and the tritium production cross-section relevant for radiation safety (panel a: up to 50 MeV; panel b: up to 200 MeV). Additional reactions open up at higher proton energies resulting in a monotonic increase of the neutron production cross-section with energy for the heavy tungsten, but a constant value for Be above 50 MeV. At the same time radiation safety issues strongly increase with relevant tritium production above about 13 MeV. Data from TENDL (Koning et al. 2015)

other hand allow for a multitude of (p, xn) reactions, producing several neutrons per reaction, which sum up with increasing energy. The comparison of the (p, n) reaction cross-section with the neutron production cross-section in Fig. 4.11b shows (p, n) dominates the neutron production cross-section up to 9 MeV, but thereafter other reactions take over, resulting in a different behaviour than with the light target. The cross-section monotonically increases and the stopping power decreases; the process becomes increasingly efficient, reaching a value of 120 MeV/neutron at 200 MeV with on average 1.66 neutrons produced per proton. At the end of this book's energy range of 250 MeV the spallation regime starts. Here the proton energy suffices for fission reactions of the target nucleus, breaking the nucleus into two smaller nuclei and a set of neutrons. The neutron production cross-section continues to increase, but the numerous isotopes produced further increase nuclear waste issues. Kinematics connects the maximum neutron energy of accelerator neutron sources to the incident beam energy. In most cases high neutron energies are rather a complication than an advantage, see for example the neutron induced cross-section in Fig. 4.7 screaming for thermal neutrons. The moderation and shielding (see Sect. 2.7.3) grow in size with increasing beam energy. Furthermore, increasing activation and Bremsstrahlung require long distances between source and application in order to preserve practical dose rates and signal-to-noise ratios. The extended neutron source region at increased beam energy hinders a proportional increase in flux density. In conclusion, the beam energy selection always implies a compromise between flux, cost, size, and practical problems. A practical setup requires maintenance and end-of-service provisions. This converts nuclear activation products to radioactive waste. Naturally, activation strongly scales up with beam energy since more reactions open up. For a 10


MeV proton source with beryllium target considerations were already presented in Fig. 2.57. In this case target activation resulted mainly from iron alloyed to the beryllium for processing reasons. Be and other light targets produce short-lived isotopes under proton irradiation. The only critical product from light targets is tritium, a long-lived yet hardly detectable radioactive product, accompanied by secondary products due to the neutron radiation. In spallation targets, the reaction results directly in radioactive isotopes and a broad range of elements, leading to structural activation at spallation energies. Concerning other secondary particles, with increasing energy more species arise as reactions with more negative Q-values become possible. Currently several scientific accelerator neutron sources are in operation or under realisation. The lab-scale NOVA-ERA source was already discussed. Numerous commercial systems based on proton and deuteron ion beams with handheld to laboratory sizes providing source strengths up to several 10¹⁴ neutrons/s are available (IAEA 2000). The currently most extreme example is the European Spallation Source (ESS) under construction in Lund. This device applies a 2 GeV proton beam to a rotating tungsten target producing up to 10¹⁸ neutrons/s (≈FRM-II). It employs a 400 m long proton LinAC with 5 MW beam power operated in pulsed mode for neutron time-of-flight analysis methods. The beam impacts on a 2.6 m diameter rotating tungsten target. Up to 42 beamlines enable connecting a wide range of instruments and experiments.
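The two back-of-envelope numbers from the D-T generator discussion above, the flux density at the minimum working distance and the energy invested per neutron, condense into a few lines; all inputs are the example values quoted in the text:

import math

# Sketch reproducing the two estimates of the D-T generator discussion:
# isotropic flux density at the minimum working distance, and the energy
# invested per emitted neutron.

def flux_density(source_strength, distance_cm):
    """Isotropic emission spreads over a sphere of area 4*pi*r^2."""
    return source_strength / (4.0 * math.pi * distance_cm ** 2)

def gev_per_neutron(ion_energy_kev, reaction_probability):
    """Each ion deposits its full energy, but only a fraction P yields a neutron."""
    return ion_energy_kev / reaction_probability / 1e6

print(f"flux at 3 cm: {flux_density(1e9, 3.0):.1e} n/(cm^2 s)")       # ~8.8e6
print(f"energy cost: {gev_per_neutron(120.0, 2e-5):.0f} GeV/neutron")  # ~6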

4.2.2 The Specific Energy Efficiency

Generally, energy efficiency and costs currently become increasingly important in technical concepts and this also affects accelerator applications. Definitely it would be a waste of ink printing all the zeroes typically present in the value of the energy efficiency of scientific experiments. For applications, the energy efficiency constitutes a central quantity. This is not only about the cost of electricity, as of course consuming less energy to achieve the same goal is always cheaper, but mainly about technical limits as we saw in Sects. 2.2, 2.6 and others. If a system is properly optimized, the energy efficiency starts to limit its output due to thermal, legislative, or other limits. At this point, a target cannot produce more isotopes, a neutron source cannot generate higher brilliance, and the available laboratory cannot fit a larger machine. Therefore, the energy efficiency has to be considered for both production and analytical tasks. As discussed in the last section, the production of neutrons can either exploit light targets with low stopping power S and a single neutron per reaction or heavy targets with high stopping power and several released neutrons per reaction via (p, xn). For the calculation of energy efficiency, we make use of the equations from Sect. 3.4. We select two exemplary reactions, namely the bombardment of beryllium with protons as discussed in (Mauerhofer et al. 2017) and the bombardment of tungsten with protons. Figure 4.12 depicts the calculation results based on integration of (3.20). In the energy region below 21 MeV the beryllium target has significant advantages with a factor 18 higher reaction probability at 10 MeV. Furthermore, the ion range exceeds


Fig. 4.12 Evolution of reaction probability W and energy efficiency H of Be and W reactions with protons. Up to 21 MeV the Be reaction is more effective. The calculation considered neutron production cross-sections from TENDL-2017

the one in tungsten by a factor 4.7, easing thermo-mechanical issues. We can see the energy efficiency at 15 MeV still ranges around 6 GeV/neutron, similar to the small-scale D-T electronic neutron generator type. The reaction produces slightly lower neutron energy and has a different technical basis able to handle more beam power. Significantly better efficiency requires >40 MeV and a heavy metal target. For the application, the delivered flux density is of higher importance than the source strength. The flux density derives from the source strength and the minimum working distance. With increasing beam energy we can roughly estimate a constant source diameter due to the increasing efficiency requiring less current at higher beam energy (Fig. 4.12), a growth of target length according to the beam range, and a growth of shielding thickness required for forcing the outside γ-radiation level to a given constant value. In conclusion, the minimum working distance increases with beam energy, reducing the ratio of maximum flux density to source strength. Nevertheless, the increase in efficiency enables higher source strength and typically results in higher flux density and more space for additional end-stations. The γ flux grows according to the prompt γ-production cross-section resulting in an intensity Iγ. We assume a constant neutron source strength. Taking the γ intensity Iγ0 of 10 MeV protons onto beryllium as reference intensity with a requirement of 0.1 m lead shielding (Mauerhofer et al. 2017) (yes, that is oversimplifying) we can calculate the required thickness d for a lead shield with a constant mass absorption coefficient of 0.1 cm²/g and a lead density of 11.3 g/cm³ (an attenuation coefficient of 113 m⁻¹) according to

d [m] = 0.1 + ln(Iγ/Iγ0)/113    (4.4)
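A quick evaluation of (4.4), a sketch under the stated simplifications (constant mass absorption coefficient, γ-intensity ratio relative to the 10 MeV p+Be reference as the only variable):

import math

# Evaluation sketch of Eq. (4.4). The attenuation coefficient 113/m follows
# from mu/rho = 0.1 cm^2/g and rho = 11.3 g/cm^3 for lead.

MU_PER_M = 0.1 * 11.3 * 100.0  # 1.13/cm expressed per metre
D_REF_M = 0.1                  # reference shield thickness for 10 MeV p+Be

def shield_thickness_m(intensity_ratio):
    """Lead thickness keeping the outside gamma level constant, Eq. (4.4)."""
    return D_REF_M + math.log(intensity_ratio) / MU_PER_M

# e.g. a gamma emission 1e8 times the reference case:
print(f"{shield_thickness_m(1e8):.2f} m of lead")  # ~0.26 m, cf. Fig. 4.13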

From 10 MeV towards 200 MeV we can see an increase of the required shielding thickness from 0.1 to 0.27 m (Fig. 4.13) and a growth of source thickness by energy-

Fig. 4.13 Calculation results of the required lead shield thickness due to proton-induced γ-emission (γ/proton yield for Be and W, and resulting lead shield thickness [m], versus proton energy [MeV])

loss calculations from 0.7 mm (Mauerhofer et al. 2017) to 26 mm (W) and 172 mm (Be) using SRIM (Ziegler et al. 2008), respectively. With this the minimum working distance grows from 0.107 to 0.3 m (W), resulting in a reduction of (accessible) flux density due to source size and shielding by a factor 8 from 10 MeV towards 200 MeV proton energy. In the same range our efficiency improves from 9 GeV/neutron (Be at 10 MeV) to 120 MeV/neutron (W at 200 MeV) by about two orders of magnitude, leaving us a flux density gain of a factor 9.4 when comparing 10 to 200 MeV sources. The calculations presented in this section should be understood as rough estimations. Different reactions, in particular with deuterons, other shielding materials, and other radiation level requirements strongly influence the outcome. The theoretical reaction cross-sections used probably differ significantly from actual cross-sections, but experimental data are so far not available for the full range of values considered here. Nevertheless, we can conclude that a higher beam energy improves the energy efficiency and with this the source strength significantly. For the practically more relevant flux density, the growth of the accelerator neutron source with beam energy eats up a part of the gain of higher beam energies. The increasing costs and radioactive waste issues of increasing beam energy add to the evaluation, opening up application scenarios for all types of sources.
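The net factor between the 10 MeV and 200 MeV options follows from the efficiency gain divided by the 1/r² penalty of the larger minimum working distance; a sketch with the section's rough numbers:

# Net flux-density comparison between the 10 MeV Be and the 200 MeV W source:
# efficiency gain divided by the 1/r^2 penalty of the larger working distance.
# All numbers are the rough estimates quoted in this section.

eff_be_10mev = 9000.0  # MeV invested per neutron, Be at 10 MeV
eff_w_200mev = 120.0   # MeV invested per neutron, W at 200 MeV
r_10mev = 0.107        # minimum working distance [m] at 10 MeV
r_200mev = 0.30        # minimum working distance [m] at 200 MeV

source_gain = eff_be_10mev / eff_w_200mev      # ~75x more neutrons per beam power
distance_penalty = (r_200mev / r_10mev) ** 2   # ~7.9x lower flux per source strength
print(f"net gain: {source_gain / distance_penalty:.1f}")  # ~9.5, cf. the factor 9.4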

4.3 Photons

Photons definitely represent the secondary particle type we have the best control over in our current state of technology. Solid theories for the electron transitions in the atomic shell connected to line emission and photon absorption, models of Bremsstrahlung, and nuclear effects exist and are implemented in common codes such as GEANT4. In contrast to massive particles, kinematics does not strictly define the photon energy


a specific ion beam produces, but we have more physical mechanisms and with this freedom of instrument design. Why produce light in accelerators and not use a lamp?! Accelerators are the only devices capable of providing high-energy particles with very narrow distributions and extreme power density. The particle energy is decoupled from the atomic structure, enabling reaching energies not accessible with thermal (incandescent lamp) or electronic band-gap (LED, plasma) mechanisms. The reasons and technology of this were discussed in Chap. 2. In short: Our technical flexibility and control of the charged particle beam together with the solid understanding of the photon production processes leads to highest photon energies, narrow photon energy distributions, and high intensity light sources. These three perks represent the main aspects of all accelerator based photon sources. In applications of these photons, the energy requirements are typically given by certain processes or required range, therefore physicists condensed the power density and spectral width into the definition of the quantity brilliance:

Brilliance R = photons in a narrow wavelength band / (time × angular spread × area)    (4.5)
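As a worked instance of (4.5), a minimal sketch with purely hypothetical source parameters (rate, spot size, and divergence below are made-up values; real sources quote the rate already filtered to a 0.1% bandwidth, as assumed here):

# Worked instance of the brilliance definition (4.5) with assumed parameters.

photons_per_s = 1e12    # photon rate within a 0.1% bandwidth [1/s] (assumed)
spot_area = 0.01        # source area [mm^2] (assumed)
divergence = 0.1        # angular spread [mrad^2] (assumed)

brilliance = photons_per_s / (spot_area * divergence)
print(f"{brilliance:.1e} photons/(s mm^2 mrad^2 0.1% BW)")  # 1.0e15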

Equation (4.5) typically comes in the units of photons/(mm² mrad² second) for a bandwidth of 0.1% around the central photon energy. A high brilliance source is an intense collimated single wavelength source, comparable to a laser, while a low brilliance source would be a divergent weak source of significant spectral width, comparable to an incandescent lamp. Equation (4.5) enables a quantitative comparison between technologically different setups. In the photon source community, it became the number one quantity of importance, similar to the beam energy in the fundamental particle physics community or the triple product in the nuclear fusion community. Besides this, also the radiation pulse length has an importance for time-resolved analytical processes. The free-electron laser definitely offers the best light properties and output flexibility. Why the hell use any other light source type at all? The other light sources suffice for most applications and the FEL's specific and absolute costs exceed those of the other light sources by orders of magnitude. A medical imaging application would hardly profit from improved x-ray sources in its current form, since other technological aspects limit the spatial and contrast resolution. The economic and practical aspects of x-ray tubes outweigh their inferior light source properties. A statement like this has to be taken with care, since every technological advancement requires investment before it pays off. Nevertheless, the more performant and expensive free electron source types find application practically only in science.


4.3.1 X-ray and Bound Electron Sources

By far the most common accelerator based light sources in application are of the bound electron type. “Bound” depicts electrons which are part of an atom and consequently rest in solid or liquid matter (also gases, but these have practically too low density). The interaction with electrons bound to matter involves the energies of these binding states, in contrast to the free electrons discussed in the following sections. The one example everyone, except for the most lucky, knows are the typical medical x-ray tubes. These tubes consist of an electron source, usually a hot emitter, a target of mostly W, Mo, or Cu due to their good heat conductivity, and a DC voltage source accelerating the electrons from the source to the target with a few 10 keV and mA currents. Figure 4.14 depicts this layout using a thermionic electron source. The compact design consumes only electrical power, enabling enclosing it in a sealed case under vacuum. No additional vacuum or beam optics are required and the device is ready within a few seconds, forming the basis for their practical success. Electron beams offer practical advantages over ion beams by avoiding sputtering and a consumed resource (e.g. H2 gas) in the source, and by having at least two orders of magnitude lower stopping power. Therefore, all devices operate on the technological principle depicted here. Figure 4.15 shows exemplary x-ray emission spectra of such a tube for two different projectile energies. The spectrum combines the continuous Bremsstrahlung emission with the tungsten K-lines as given in Fig. 3.4. Each target element shows its own characteristic lines and the targets have to be selected according to the analytical requirements of x-ray energy. Figure 4.15 shows the four so-called Kα1, Kα2 and Kβ1, Kβ2 emission lines of W. The characteristic lines have a fixed photon energy, only their production cross-section increases with projectile energy. The continuous Bremsstrahlung aspect increases in spectral width and intensity with the projectile

Fig. 4.14 Simplest design of a thermionic emitter (K) and impact anode target (A) x-ray tube with water-cooling (C). A sealed housing eliminates the need for vacuum equipment while still allowing x-rays to leave the vessel. The voltage Ua defines the electron impact energy. Reproduced from Wikimedia, public domain



Fig. 4.15 X-ray tube spectrum with tungsten target using 100 or 80 keV electrons without or with 3 mm Al absorption filter. The spectrum comprises an overlap of Bremsstrahlung with characteristic emission. The two main Kα (≈60 keV) and Kβ (≈70 keV) line pairs dominate the photon spectrum only with an absorber. The maximum energy depends on Bremsstrahlung and is consequently projectile energy dependent. The characteristic lines change in intensity due to the energy dependent emission cross-section. Filtering reduces in particular the low energy part, resulting in a higher average photon energy (=“harder” spectrum). Calculated using SpekCalc 1.1 (spekcalc.weebly.com) (Poludniowski and Evans 2007)

energy. The total photon flux increases with projectile energy, but the optimal energy depends on the optimal photon energy spectrum for a given application. Figure 4.16 explains the physical mechanism behind the characteristic emission. The projectiles hit an electron bound to the atomic nucleus in the anode target. The release of this electron (ionisation) opens a vacant position which wants to be


Fig. 4.16 Emission modes of bound electrons. An external excitation such as an electron beam kicks the system from the equilibrium state to an excited state which decays via emission of secondary particles. Picture adapted from original work of Cepheiden from Wikimedia commons CC-BY-SA3


filled due to the minimisation of energy. Electrons from other binding states of the atom with lower binding energy can transfer to this vacant position. If the released electron originated from the innermost shell of the atom the emitted line is called a K line. The next shell, the L shell, features less binding energy, resulting in lower x-ray energy compared to the K shell. The origin of the transferred electron adds an α to the name (=Kα) for the nearest shell, a β for the 2nd nearest shell and so on. Due to the stronger binding of the inner electrons with increasing proton count Z of the nucleus, the Kα x-ray energy EX scales roughly according to the Moseley scaling given in (4.6)

EX ≈ (3/4) · 13.6 eV · (Z − 1)²    (4.6)
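Equation (4.6) is quickly checked against common anode materials; a minimal sketch (the few-percent deviation from measured line energies reflects the simple screening approximation, not an error):

# Quick check of the Moseley scaling (4.6) for common anode materials.

RYDBERG_EV = 13.6  # hydrogen ground-state binding energy [eV]

def k_alpha_kev(z):
    """Approximate K-alpha photon energy for atomic number z, Eq. (4.6)."""
    return 0.75 * RYDBERG_EV * (z - 1) ** 2 / 1000.0

for element, z in [("Cu", 29), ("Mo", 42), ("W", 74)]:
    print(f"{element} (Z={z}): ~{k_alpha_kev(z):.1f} keV")
# Cu ~8.0 keV, Mo ~17.1 keV, W ~54.4 keV (measured: 8.05, 17.5, 59.3 keV)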

Bremsstrahlung adds a weaker but continuous part to the spectrum, extending from the peak electron energy down to zero or, due to absorption in the exit path, a few keV, respectively. The term “brems” originates from the German word for deceleration and “strahlung” equals radiation. The origin of Bremsstrahlung lies in the deceleration of the projectiles in the target as discussed in Chap. 3. Consequently, Bremsstrahlung strongly scales with particle velocity, a fact favouring electrons over ions due to their lower mass. For x-ray sources the resulting Bremsstrahlung spectrum and intensity matter. Kramers' law describes the continuous target-material dependent photon flux spectrum J per projectile current according to (4.7)

J(λ, EE) = (K · ZTarget/λ²) · (λEE/(hc) − 1)    (4.7)

With Kramers' material constant K, the atomic number of the target ZTarget, the photon wavelength λ, the electron kinetic energy EE given by the acceleration potential, Planck's constant h, and the speed of light c. Many applications, in particular in medicine, consider Bremsstrahlung a spectral impurity, since a large spectral width reduces result quality by folding the contrast aspects of different photon energies into a single result. For example in medicine, wide spectra blur the contrast due to the energy dependent photon absorption length, while increasing the radiation dose to the patient. Since the reasons for the spectral width lie in fundamental physics the only option is spectral filtering. The easiest option for spectral filtering is putting a metal plate on the x-ray source. The metal absorbs photons similar to a high-pass filter by exploiting the spectral “hardening” induced by the naturally stronger absorption of lower energy photons in matter (see Fig. 3.5). Figure 4.15 compares a filtered with an unfiltered spectrum, clearly showing the reduction of mostly the low energy flux. The filter also reduces the intensity of the desired characteristic emission by a smaller factor. Consequently, a filtered x-ray tube has to provide a higher primary photon flux for the same output flux. The resulting spectra still cover a large wavelength range. If sharper spectra are required, band-pass filters based on the reflection of x-rays on the crystallographic plane spacing allow for obtaining single emission line spectra. The crystal's narrow band-pass reduces


Fig. 4.17 The GALAXI x-ray source (left) represents the technical limit of bound x-ray sources. A liquid metal curtain acts as target and efficiently removes the heat to a cooler below the table. The 70 kV, 250 W source produces >10⁹ photons/s. Copyright Forschungszentrum Jülich GmbH

the total photon flux significantly, making it interesting only for special scientific applications. The standard medical x-ray tube used to acquire for example Fig. 6.3 uses about 30 keV with about 3 mA electron current. About 1 s of its photon flux suffices for producing the picture. Special scientific applications require higher fluxes. As with any target, the power load limits the x-ray tube output. Besides inclined geometries as shown in Fig. 4.9 or Fig. 4.14, liquid metals enable extending this technological limitation. The GALAXI device depicted in Fig. 4.17 represents an example of this advanced realisation of a bound electron source with a liquid gallium-indium jet target. The device was developed for material analysis using x-ray scattering. The low vapour pressure of gallium together with the weak vacuum requirements of electron and photon beams make a window between source and target unnecessary. Nevertheless, GALAXI uses a vacuum system for reducing photon scattering in air, improving beam brightness. GALAXI provides a total flux of 10⁹ photons/s.
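A minimal numerical sketch of Kramers' law (4.7) illustrates the short-wavelength cutoff and the rise of the continuum towards longer wavelengths; the scale factor K is left arbitrary (an assumption), and absolute, filtered spectra would need tube-specific tools such as SpekCalc:

# Minimal sketch of Kramers' law, Eq. (4.7), for a tungsten anode.
# Characteristic lines and absorption filtering are not included here.

HC_KEV_NM = 1.23984  # h*c in keV*nm

def kramers_flux(wl_nm, e_kev, z=74, k=1.0):
    """Relative Bremsstrahlung flux per unit wavelength. The spectrum cuts
    off at lambda_min = hc/E_E, where a photon takes the full electron
    energy, and rises towards longer wavelengths."""
    lambda_min = HC_KEV_NM / e_kev
    if wl_nm <= lambda_min:
        return 0.0
    return k * z / wl_nm ** 2 * (wl_nm / lambda_min - 1.0)

# 100 keV electrons: cutoff at 0.0124 nm (=100 keV photons)
for wl in (0.012, 0.013, 0.025, 0.05, 0.1):
    print(f"lambda = {wl:5.3f} nm -> relative flux {kramers_flux(wl, 100.0):10.1f}")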

4.3.2 Synchrotron Sources

The technological limits of bound electron sources bring up the question of how to further increase brightness, enabling for example imaging shorter processes or


for achieving intense photon beams even for monochromatic x-rays. Electrons with their strong Bremsstrahlung are the right Ansatz, but as we learned, the target results in technological limits. Removing the target leaves us only with a beam of free electrons, but how to accelerate/decelerate them sufficiently strongly without beam–matter interaction? Think about driving a car: You feel forces (which equal acceleration via F = m · a) when accelerating, braking, and/or cornering. Accelerating forces in accelerators are too weak with their current limits in the order of MeV/m as discussed in Sect. 2.2.2. Braking equals the bound electron sources, with their energy gradients by stopping in the order of 10 MeV/mm (for a few 10 keV electrons in metals). Consequently, the cornering option remains open, but we will need it in the quality of a race-track, not a city cruise. Technical setups for cornering were discussed in Sects. 2.2.2 and 2.3.2 in the context of beam optics. Think of the particle momentum as a 3-dimensional component vector. Changing the beam particle direction, using for example a dipole magnet, physically equals a forward deceleration and a perpendicular acceleration of the beam particles. Figure 4.18 illustrates the physics behind free-electron light emission by beam deflection. For example, a 90° deflection within 1 m curvature length decelerates a 1 GeV beam by 1 MeV/mm in the forward direction and accelerates it by the same amount in the perpendicular direction. The magnetic field strength and electron beam energy define the curvature radius and with this the magnitude of the acceleration. The curvature radius decreases with increasing magnetic field strength and increases with increasing beam energy, see Sect. 2.3.2, resulting in increased photon energy. These knobs allow varying the light wavelength and photon energy dynamically, in theory over an unlimited but in practice over a limited range. Free electron light sources lack a target for the light production, completely changing the technological basis and limits inherent to bound electron sources. Instead of the target heat load limitation, the magnitude of deflectional acceleration limits the output. Equation 2.11 discussed the photon power emitted by beam deflection as a source of losses to the beam energy, inducing an upper limit for the beam energy of circular

Fig. 4.18 The concept of producing synchrotron light based on charged particle acceleration in magnetic fields. A dipole magnet reduces the particle velocity in the forward direction z by transferring the forward momentum vector component to the perpendicular direction x; it accelerates and decelerates the beam. This produces radiation in the tangential forward direction at every point of the curvature forming a cone with a wavelength distribution defined by the magnetic field strength, particle type, and beam energy. The more kinetic energy the charged particles bear and the higher the photon energy, the narrower the emission cone


electron accelerators. At this energy, the synchrotron radiation emission intensity reaches the limits of input power of the accelerator, exactly the situation we are looking for now. This energy limit lies in the GeV range, slightly above our 250 MeV limit, but the strong scaling of emitted power with beam energy (2.11) demands this energy range. Varying beam energy results in varying photon flux and energy, requiring compensating the Bremsstrahlung losses with acceleration cavities to maintain electron energy and photon source properties. For the GeV range and the requirement of constant beam energy we end up with the synchrotron accelerator type. This special class of Bremsstrahlung connected to free electrons in synchrotrons is usually just termed synchrotron light/radiation, while Bremsstrahlung usually depicts only the emission from bound electron targets. The current technical limits of beam optics result in synchrotrons of a few 100 m circumference. The synchrotron, being an AC accelerator compared to the DC accelerators used for bound electron sources, delivers a pulsed beam current. The photon emission follows this AC electron beam pattern, making the synchrotron light a pulsed light with pulse lengths of a few 100 ps (GHz frequencies) and lower. The constant beam energy defines the synchrotron as a storage ring. For loading this storage ring with an electron beam typically a smaller synchrotron is connected to the storage ring, which itself is fed with a LinAC and/or a DC accelerator. Once the storage ring is filled with electrons, the synchrotron radiation damping (see Sect. 2.3.1) and the accelerator confinement allow for a continuous operation with the same electrons for several hours. Light sources based on ions would require beam energies higher in proportion to the particle mass, see the E/m scaling in (2.11), resulting in significantly increased device costs. For this reason free electron light sources always use electron beams. The cornering force induced by the magnetic deflection results in photon emission into a tangential cone. The opening angle of this cone shrinks inversely proportional to the relativistic γ-factor (the factor between relativistic and rest mass), resulting in an increase of brilliance with beam energy. Since every point along the curved part of the beam path represents an origin of one of these cones, a dipole magnet emits a photon fan with a width increasing with length and decreasing with curvature radius. Apertures allow selecting parts of this radiation fan for (several) specific end-stations. The emitted photon pulses already feature significantly higher brilliance compared to bound electron sources, but the restriction to the beam optical elements discussed in Sect. 2.3.2 so far limited our technological freedom. Continuing the thinking of the race-track, we could include in principle an infinite number of curves in a closed race-track; the beam path can be more complex than an American oval race-track. The so-called wiggler (Clarke 2004) follows this thinking by applying the copy and paste method to the dipole concept. It adds several dipole magnets with alternating polarity in a row, generating a beam path in the shape of multiple S-curves as depicted in Fig. 4.19. The beam wiggles around the undeflected path, emitting radiation at the apex of each wiggle. The apex represents the point of maximum acceleration, since the second derivative of the position yields the acceleration.
In the case of a sinusoidal wiggling, the second derivative is a negative sine; the absolute values of maximum deflection and maximum acceleration lie at the same point. This


Fig. 4.19 Working principle of undulators and wigglers with the S-shaped electron beam path (red line), black arrows indicating the point and direction of maximum acceleration, and yellow arrows the emission of light. The beam actually moves into and out of the paper plane, but for illustration reasons the beam path was rotated by 90°

radiation adds up linearly in the sense of a sum of multiple dipole magnets, resulting in a larger total emission of the wiggler compared to the dipole. The undulator uses the same geometry as the wiggler with, at first sight, only a smaller period length and lower magnetic field than the wiggler. These shorter and weaker magnets result in a smaller maximum deflection (Sect. 2.3.2); the apex points of emission come closer together. In the GeV range, the electron velocity is very close to the speed of light c, resulting in a relativistic length contraction given by the γ value with γ ≈ EBeam/mec² (remember: mec² = 511 keV). In the GeV range γ ≈ 10³–10⁴, therefore a 10 mm period length appears contracted to the μm scale as seen in the rest frame of the electron; the relativistic Doppler shift of the light emitted towards the observer contributes another factor of about 2γ, bringing the observed wavelength λ ≈ λPeriod/(2γ²) down into the x-ray range. Consequently, the emitted photons in an undulator have a spatial overlap and a fixed phase relation for a certain wavelength, therefore fulfilling the prerequisites for light interference. Constructive interference increases the photon brightness and brilliance for a certain wavelength and its multiples compared to a wiggler, while destructive interference weakens the parts of the photon spectrum not fulfilling the propagation conditions. This spectral selection increases the peak brightness for certain wavelengths in the form of a quadratic scaling of brightness with the number of periods, but it also strongly restricts the instrument tuning by coupling the field strength, period length, and photon wavelength. Furthermore, the length contraction dictates small technical dimensions of the undulator parts for emitting x-rays and requires GeV electron beam energies in order to have a length contraction large enough to allow the parts to be made and aligned. These dimensional restrictions of the interference condition result in a technological limit of undulators for generating high photon energies/short wavelengths. Wigglers and dipoles offer practical advantages for achieving higher and broader photon energies at a given beam energy. The undulator physics will be discussed in more detail in Sect. 4.3.3 where its idea is extended further for the free-electron laser. Figure 4.20 compares the emission spectra of these three so-called insertion devices. Wigglers and dipole/bending magnets generate broad spectra in a wide energy range with a clear upper energy limit not shown in Fig. 4.20. The radiation spectrum of the technically simple and anyway present dipole magnet can be derived from theoretical models to a high accuracy, making it ideal for calibration and less brilliance-demanding applications. Still its brilliance exceeds the values of bound electron sources by orders of magnitude. The wiggler spectrum is basically


Fig. 4.20 Comparison of the photon spectra delivered by dipole magnets, Wigglers as the improved variant, and the sharp periodic structure of undulators. All spectral distributions have maximum photon energies not shown in the figure. The three insertion devices differ in peak brilliance (logscale) by orders of magnitude with the undulator providing the highest brilliance. Translated and reprinted from (Möller and Falta 2010) with permission by Springer

identical to the dipole, but with increased brilliance. The undulator generates a resonance pattern at the fundamental wavelength and its multiples due to its interference physics, with a peak width in the order of a few 10⁻³ of the peak energy. Its brilliance exceeds the values of the other two types by orders of magnitude. While the undulator generates significantly higher brilliance, it also has disadvantages for applications due to its limited wavelength flexibility and maximum photon energy. Each of the three options has its individual strengths, consequently synchrotron facilities make use of all three options in specifically optimized end-stations. The brilliance of synchrotron light receives increased importance when using monochromators, further reducing the spectral width down to the meV scale. While monochromators can also be connected to bound electron sources, their low brightness demands long integration times and mm-sized spot sizes for reasonable detector signal intensity. With synchrotron light, these limits disappear, opening up applications with highly resolved imaging or time-resolved in-situ experiments using single wavelength photons. Compared to bound electron sources, which have a peak energy defined by the beam impact energy, free electron light sources have a technological disadvantage for producing photons above about 100 keV due to the required high beam energies and magnetic field strengths in the superconductor range. The example of the device BESSY-II with its vast amount of end-stations as depicted in Fig. 4.21 demonstrates the synchrotron device layout and its connection


Fig. 4.21 BESSY II synchrotron lab layout. In the centre an accelerator and the storage ring provide the 1.7 GeV electron beam. On 45 tangential ports different experiments/end-stations exploit the photon beam generated by Bremsstrahlung sources in the storage ring. The size and concept of synchrotrons allow serving more end-stations than any other concept. Credit HZB/Ela Strickert


with numerous applications. An inner 1.7 GeV electron synchrotron feeds the 240 m long outer storage ring with electron pulses up to an average beam current of 300 mA. The pulse structure can be adapted via the choice of bunch length in the source region to 2 or 17 ps. Attached to this storage ring, multiple dipole magnets, one wiggler (7 T), and 12 undulators generate the light for about 45 end-stations. BESSY-II uses the high degree of instrumental flexibility of insertion devices for generating photon energies from 0.4 meV (THz radiation) up to 90 keV (x-ray) with a range of intensities, polarisations, pulse structures, spectral widths and so on. The insertion devices generate the light used for about 180 different analytical methods. The multitude of end-stations attached to synchrotrons enables combining a multitude of different methods within the same laboratory, sometimes even within a single end-station. More methods add more complementary information by showing a sample from different perspectives, an additional strength of synchrotron based analysis. The book (Mobilio et al. 2015) provides a comprehensive review of free-electron light sources and their applications in analytics. Figure 4.22 shows such a multi-method example of an archaeological analysis using four different complementary synchrotron light-based analysis methods (Arlt et al. 2019). More details on the individual methods (except Fourier transform infrared spectroscopy FT-IR) will be presented in Sect. 7.1.2. In this example, historical Egyptian papyri were analysed at BESSY II to reveal hidden text fragments and to enable a text analysis without unfolding of the fragile artefacts. The method allows for visualising nanometre-thin remainders of certain inks by their absorption properties for x-rays. The high brilliance allowed for a lateral resolution […]

[…] Transitions with a spin mismatch >1 are considered forbidden transitions and the larger the mismatch the longer the half-life of the given transition. The mismatch of 3 results in the relatively long half-life of 99mTc, catalysed by the intermediate 7/2 state. Without this state the half-life would be even longer. The ground state of 99Mo has a spin of 1/2+, a good match to the 99mTc state (1/2−), explaining why 99Mo actually decays preferably to the excited state. 99mTc forms a good example since it can be produced in both fission reactors and accelerators (Guérin et al. 2010) with pathways depicted in Fig. 5.3. For reproducing it via protons we can apply the 100Mo(p,2n)99mTc reaction. Protons and neutrons feature a spin of 1/2 and the ground state of 100Mo has a spin of 0. Spins can be added or subtracted, only the absolute value has to be conserved. In summary the projectile and target have a spin of 1/2 + 0 = 1/2 and the products of 1/2 + 1/2 − 1/2 = 1/2, favouring the production of the same excited state as with the decay of 99Mo. The last example implicitly assumed a 100Mo target, but if we take a look at the isotopic composition of natural Mo in Fig. 5.3, only 9.63% of natural Mo consists of the 100Mo isotope. This small content of the desired target isotope reduces the production rate of 99mTc proportionally (3.20) and results in the production of numerous other Tc isotopes with different half-lives and emission modes. This product impurity potentially reduces the result quality of medical diagnostics applying 99mTc by mixing of signals. Enriched targets reduce this isotopic impurity contribution at the drawback of strongly increased target costs.
Targets made from mono-isotopic elements (Bi, Ta, Na …) avoid this issue, if available. Impurities exist in any material due to the nature of technical processing. Typically, the elements from the same column in the periodic table represent the strongest impurity, e.g. W in Mo and Mo in W. Purified target


Fig. 5.3 99mTc, a typical isotope used in medicine produced from the decay of 99Mo in fission reactors, can also be produced via proton or deuteron irradiation. The decay of 99Mo (1/2+) results to 88.1% in 99mTc (1/2−) due to the conservation of nuclear angular momentum; ion reactions such as 100Mo(p,2n)99mTc have different pathways

elements help reducing this contribution to product quality, but impurities can never be fully avoided. In summary, the purity of the final product depends on the initial target purity, the isotopic purity, and the folding of these with the production cross-sections and decays. The isotope half-life provides an additional natural selection mechanism. Significantly shorter half-lives enable an elimination of the undesired isotope by waiting a few half-lives for it to decay. This requires producing an excess of the actual product, the more the closer the half-lives are. Isotopes with significantly longer half-lives contribute less to the total activity, which may be tolerable for the application. A 10 times larger half-life results in a 10 times lower activity for the same amount of produced isotopes, see (5.1). The decays also produce daughter isotopes which might be of relevance as a product or impurity. Typically, the connections are short, since light projectiles only allow for single-field steps on the nuclide chart, but in some elements (e.g. Fe or Mo) with unstable isotopes in between, small decay chains can develop. Sometimes several choices of projectiles are possible for reaching the same isotope. Figure 5.4 demonstrates this with the example of accelerator production of 64Cu. Let us discuss the four cases of interest to test our knowledge gained so far: The production via 64Ni(p,n)64Cu features a high cross-section of up to 800 mbarn at 10 MeV (Sayed et al. 2015) and a low threshold of 2.5 MeV, actually the lowest among all reactions starting from 64Ni. Its selection has the advantage of resulting in only few side reactions with short-lived products (61Co), but the low isotopic abundance of 64Ni results in low reaction rates. The 65Cu(p,p+n)64Cu reaction has a cross-section of only 400 mbarn at 23 MeV. Its threshold at 10 MeV exceeds the one of 65Cu(p,n)65Zn, resulting in a strong production of 65Zn in this pathway, but the 500 times longer half-life of 65Zn results in only a little contribution to the total activity. The deuteron reaction 66Zn(d,α)64Cu has a positive Q-value but only a negligible cross-section of about 50 mbarn at 10 MeV. The many Zn isotopes complicate the situation


Fig. 5.4 The isotopes around 64Cu allow for different pathways to be chosen. p and d projectiles allow for 4 different target isotopes. The cross-sections and the generation of unwanted isotopes (also in view of target impurities) dictate the selection of the projectile

by producing a zoo of Ga isotopes with half-lives comparable to 64Cu. The reaction 63Cu(d,p)64Cu also features a positive Q-value. Its cross-section reaches a maximum of 300 mbarn at only 5.5 MeV, opening up the production of Zn isotopes via reactions of 65Cu in a natural Cu target. Anyway, producing 64Cu in a Cu target complicates the separation of the active from the stable isotopes, favouring a chemical contrast between target and product. Comparing the four options, 64Ni(p,n)64Cu appears to be the most attractive, in spite of the low isotopic abundance of the target. Coming back to the example of tantalum (Ta) used for beam optical apertures as discussed in Sect. 2.7, we can combine all our obtained knowledge. The technical requirements on apertures demand maximal heat dissipation capability (conductivity + melting point) at minimal material costs and (long-term) radioactive inventory. We would like to select an element for an aperture operated at a 15 MeV proton accelerator with kW beam power. The importance of the Coulomb barrier for nuclear reactions was mentioned before and in the MeV range we are still below the maximum barriers of the heaviest elements. Consequently, we start our search with heavier elements since these feature larger barrier potentials. Au would be a good choice due to its good thermo-mechanical properties and its high barrier with only a single isotope and many stable isotopes around. Tungsten (W) as a significantly cheaper alternative with equally good thermo-mechanical properties has the disadvantage of several low-threshold (p,n) reactions producing rather long-lived Re isotopes. The next idea is Ta, due to its acceptable price tag at reasonable thermo-mechanical properties. Table 3.1 states a barrier of 9.545 MeV for a proton beam on Ta, but in contrast the (p,n) reaction has a threshold of Q = −0.976 MeV, reaching 1 mbarn at 5.45 MeV. Figure 5.5 demonstrates the possible pathways. The given barrier potential corresponds to the cross-section maximum of about 100


Fig. 5.5 Graph of the Ta activation-decay scheme for E0 = 15 MeV proton irradiation. The graph shows reaction pathways with Q + E > 0 below the respective product. Greyed isotopes and anything above or to the right of the pane is not reachable. Black isotopes are stable, red and blue decay via β+ and β−, respectively. Ta can be considered a low-activation element, since only 181W contributes to the long-term radioactive inventory, with most paths ending on stable nuclides

mbarn. Ta features practically only the 181Ta isotope, limiting the nuclear activation in quantity and diversity as given in the example of Fig. 5.5. The comparably low (p,n) cross-section produces 181W, an unstable isotope with an inconvenient half-life of 121 days. 181W decays by electron capture with Q = 186 keV back to 181Ta, leading to low dose rates and shielding requirements. Try and compare the situation to other elements, for example Fe or Cu, and list their strengths and weaknesses for specific applications.

5.1.2 Examples of α, γ, β−, β+, n Sources from Accelerators

Having discussed the principal paths accessible on the nuclide chart, this section will discuss a few selected examples. In general, the application defines the requirement of chemical element and minimum half-life, e.g. due to isotopic separation and preparation of the products or due to economic minimum service times of devices using a radioactive source. For applications, generally shorter half-lives are desirable since this results in higher activity per produced isotope. The main question to be answered for selecting an isotope is the type and energy of radiation the source has to emit.
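The half-life argument follows directly from the decay law A = λN = ln(2)·N/T½ referenced as (5.1); a short sketch with an assumed, purely illustrative number of produced atoms:

import math

# Activity per produced atom: A = lambda*N with lambda = ln(2)/T_half,
# so a 10x longer half-life yields a 10x lower activity for the same N.

N_PRODUCED = 1e12  # assumed number of produced isotope atoms (illustrative)

def activity_bq(n_atoms, half_life_s):
    """Decay activity [Bq] of n_atoms with the given half-life."""
    return math.log(2.0) / half_life_s * n_atoms

for label, t_half_s in [("18F, 110 min", 110 * 60),
                        ("99mTc, 6 h", 6 * 3600),
                        ("181W, 121 d", 121 * 86400)]:
    print(f"{label}: {activity_bq(N_PRODUCED, t_half_s):.1e} Bq")
# 18F ~1.1e8 Bq, 99mTc ~3.2e7 Bq, 181W ~6.6e4 Bq for the same 1e12 atoms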


Practically every decay emits photons. This makes all isotope sources γ-sources; the details make the difference. Emitters with rather low energies in the 100 keV range, e.g. 55Fe produced by 55Mn(p,n)55Fe, and emitters with MeV-scale energies, e.g. 22Na produced by 22Ne(p,n)22Na, exist. For both aforementioned examples, a single line dominates the emitted photon spectrum, a situation generally desirable. In particular heavier isotopes such as 160Tb (160Gd(p,n)160Tb) feature several intense photon lines and a strong Bremsstrahlung contribution from emitted electrons.

Medical diagnostics applies β+ sources for Positron Emission Tomography (PET). Besides this, a few scientific applications of positrons exist, for example in positron lifetime spectroscopy. The β+ decay follows either the Electron Capture (EC) and/or the positron (e+) emission decay route. Equation (5.3) depicts the difference between both for the prominent PET isotope 18F. With a certain probability, 3% for 18F, the nucleus absorbs a shell electron and decays to 18O without positron emission. Radiopharmaceuticals generally require short half-lives for diagnostic signal strength as well as for effective irradiation of tumours, see Chap. 6. Very short half-lives lead to cost explosions for transportation, which will be calculated in detail in Sect. 5.1.4. This aspect results in a technically induced lower limit for the half-life. With a half-life of 110 min, 18F perfectly fits into this application window. It releases 634 keV in the positron route (3-body decay = broad spectrum). The production usually works via the 18O(p,n)18F reaction with several 10 MeV protons from cyclotrons.

18F + e− → 18O + νe (EC: 3%)
18F → 18O + e+ + νe (positron: 97%)    (5.3)

The positron emission route of β+ subtracts 1022 keV from the EC decay energy for producing the positron and not consuming the electron. Consequently, not all isotopes marked in red (β+) on the nuclide chart can emit positrons. For example, the decay energy of 55Fe of 231 keV is insufficient for positron emission, resulting in 100% EC decay. 64Cu features both routes and also the β− decay route (Fig. 5.6). This multitude of decays limits the radiation purity of 64Cu as a positron emitter when compared to 18F with its positron emission dominated decay. In the medical context, the mixture of radiations increases the patient dose for a given PET signal, but the chemical aspect of copper and its metabolism nevertheless adds an interesting aspect compared to 18F. Section 5.1.1 discussed its production options.

Fig. 5.6 Decay scheme of 64Cu. Only 17.8% of the decays emit a positron. EC dominates the overall decay. The path over an intermediate 2+ state of 64Ni takes place in only 0.54% of the decays

Among the accelerator-produced isotopes, β− decay is rather rare, since the pathway options lead mostly towards the proton-rich half of the nuclide chart (e.g. (p,xn)). Only a few cases of neutron-rich stable isotopes in advantageous positions have β− decaying products on their proton-rich side of the nuclide chart. Electron emitters have an application field in medicine, where low-energy electrons enable a local tumour irradiation by body-internal sources. Section 5.1.1 already discussed 99mTc as an example of a rather pure photon emitter.

Only a few accelerator-produced α-decaying isotopes exist, since these generally lie on the upper edge of the nuclide chart, the fission domain. Among the stable elements, only bismuth has a path leading to the α-emitting polonium isotopes 206Po to 210Po via proton or deuteron reactions. Reactions of lead with helium ions potentially also lead to these polonium isotopes, but due to the high stopping power of helium ions in matter, these reactions are generally less attractive than the proton reactions. Pathways also exist from uranium and thorium with (p,n) and (d,2n) reactions and subsequent β− decay, resulting in the relatively efficient α-sources 232U and 238Pu. These release several α's per produced isotope due to long decay chains through the valley of α-emitters down to polonium. These elements have the drawback of complicated handling due to their classification as nuclear fuel. The use of α-emitters will be discussed further in the frame of nuclear batteries in Sect. 8.2. No direct neutron emitters such as the fission-produced 252Cf can be generated with accelerators. The only option is using an α-emitter such as 210Po or 232U mixed with beryllium, exploiting the 9Be(α,n)12C reaction. So-called AmBe sources exploit the same reaction with beryllium using the α-emission of 241Am. Due to the widespread availability of fission-produced isotopes, the accelerator route is so far unattractive.

Recently, the nuclear data section of the International Atomic Energy Agency released a software tool enabling the screening of arbitrary isotope production with any light ion projectile (nds.iaea.org/mib). The software's name, Medical Isotope Browser, suggests a certain application, but it can also be used for assessing the production of technical sources. The provided results extend from production amounts up to radiation safety aspects, the input data can be of different types, and the projectile energy can be scanned over a range for finding optimized quantities. These features make this tool a versatile and convenient option for finding new interesting products and for investigating and optimizing details of existing products. Since this software uses only semi-empirical cross-sections, the results require experimental verification.
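The positron-emission criterion discussed above can be condensed into a one-line check: only if the EC decay energy exceeds 2mec² = 1022 keV can the e+ route contribute, and the surplus sets the maximum positron energy. A hedged sketch with the decay energies of 55Fe and 18F as quoted above (the 18F value of 1656 keV is inferred from the 634 keV endpoint):

```python
E_PAIR_KEV = 1022.0  # 2 * m_e * c^2, the cost of creating the positron

def positron_endpoint_kev(q_ec_kev: float):
    """Maximum positron kinetic energy in keV, or None if only EC is possible."""
    surplus = q_ec_kev - E_PAIR_KEV
    return surplus if surplus > 0 else None

print(positron_endpoint_kev(231))   # 55Fe: None -> 100% EC decay
print(positron_endpoint_kev(1656))  # 18F: ~634 keV positron endpoint
```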

5.1.3 Comparison to Production by Neutrons

A large part of the nuclides associated with nuclear technology originates from fission reactors, in particular the pressurized water uranium reactors (PWR). This reactor type dominates nuclear power generation and scientific applications. Although variants and vastly different sizes exist, the physics of the nuclear reaction in these reactors has remained the same for about 60 years. These reactors generate a typical set of nuclides, for example 137Cs, 131I, and 90Sr. For example, 90Sr could be produced by the four-product reaction 235U(n,2n)144Xe 90Sr with Q = 175.7 MeV. Figure 5.7 depicts the two regions of produced isotopes in this reactor type. The origin of this isotopic signature lies in the neutron energies and the fissile fuel uranium.

Fig. 5.7 The masses of nuclides and their part in the total produced isotopes by fission of the three nuclear fuels 233U, 235U, and 239Pu (coloured lines) and the fission reactor mean (black line). The fission reaction breaks the fissile isotope into two parts of mostly 80–110 and 125–150 amu. Published by JWB at en.wikipedia under CC BY 3.0

The fission reactions require emitting several neutrons per reaction to be self-sustained, resulting in broad primary neutron energy spectra due to the n-body reaction nature (Sect. 3.3.2). The water moderator surrounding the fissile fuel further thermalizes this spectrum due to the increase of fission cross-sections towards lower neutron energies. This results in typical neutron spectra extracted from the reaction zone as depicted in Fig. 5.8. PWRs emit neutrons from thermal energies (meV) to several MeV, but thermal neutron energies dominate the flux. Accelerator-based neutron sources (see Sect. 4.2) also produce broad spectra, but with more weight on the fast part of the spectrum. The additional conversion step from ions to neutrons of accelerator neutron sources in general results in a reduction of overall efficiency for isotope production compared to direct ion irradiation, favouring the direct application of ions. Currently, this leaves the field of neutron irradiation to fission sources.

The emitted neutrons enable external production of additional isotopes not directly generated by the fission reaction as fission fragments. Since neutron reactions are not limited by the Coulomb barrier, neutrons can interact with any target at any energy. The thermal and mostly also the epithermal neutrons cannot induce negative Q-value

Fig. 5.8 Qualitative sketch of a uranium pressurized water fission reactor neutron spectrum. Most of the neutrons, initially carrying up to ≈10 MeV, are moderated to the thermal energy range in the order of a few 10 meV


reactions. Besides fission reactions, thermal neutrons can mostly induce only the (n,γ) reaction, also called neutron capture. For some isotopes, (n,p), (n,d), and (n,α) also feature positive Q-values, but even in these cases usually the (n,γ) cross-section dominates at thermal energies. On the nuclide chart, the (n,γ) reaction represents a move straight to the right, towards the β− decaying half (Fig. 1.1). Fast neutrons can induce negative Q-value reactions. For stable isotopes this opens up the (n,2n) and (n,xn) reactions, which for any stable isotope target have to have negative Q-values in the order of MeV (Fig. 8.1). Since the second emitted neutron is initially bound in the stable nucleus, it logically requires energy to remove it from there. The (n,2n) reaction moves us one step straight to the left on the isotope chart.

Ion irradiation offers a clear cut-off in the particle energy defined by the set acceleration voltage, resulting in well-defined reaction conditions. The broad neutron spectra make the production of isotopes from fission (and other) neutrons less controlled than those induced by mono-energetic ion beams, resulting in a larger variety of unintentionally produced isotopes. The long range of neutrons and their interaction with the surroundings in any case result in broad neutron spectra, even in the theoretical case of a mono-energetic neutron beam as provided for example by a D-T neutron generator (Sect. 4.2.1). Therefore, isotopes, e.g. the medically relevant 131I, have to be acquired by chemical and physical separation from the fuel rods or neutron targets.

The accelerator equivalent of fission is spallation (Sects. 4.2 and 8.1). Spallation also produces a double peak of products, but in principle from an arbitrary starting nucleus instead of only uranium. The resulting product requires a chemical separation of the interesting isotopes from the target material, since numerous elements and isotopes are present in a mostly homogeneous mixture.

Several currently relevant isotopes cannot be produced by accelerators, for example 90Y, which lies outside the possible pathways of H and D on the nuclide chart. Some isotopes such as 131I can be produced via both neutrons and accelerators using e.g. 130Te(d,n)131I, but the ground state spins of this ion reaction mismatch, with 1 on the input side and 7/2 + 1/2 on the product side, potentially favouring a different reaction. The numerous other iodine isotopes accessible via (d,xn) from stable tellurium isotopes would result in a rather low purity, while (n,γ) provides


less access to detrimental isotopes. Here, neutrons offer a qualitative advantage. The isotope 64Cu has a half-life of 12.7 h; consequently, its production profits from a high local production rate and fast chemical separation of the 64Cu from the host material (Cu or Ni), yielding a potential advantage for accelerator production in a device near the application. The lower costs of accelerators compared to fission devices will generally enable maintaining a denser grid of accelerator production facilities than of neutron facilities.

With all the criticism of neutrons and fission, their strengths and their complementarity with ions have to be admitted. The difficulty of producing neutron-rich, namely β− decaying, isotopes using ions can hardly be expected to change due to the fundamental movement options on the nuclide chart as discussed above. Neutrons, in turn, hardly produce proton-rich (β+ decaying) isotopes, since this would require a strong high-energy flux inducing (n,2n) reactions, which would require completely different reactor types such as fast breeders or generation 4 concepts. In some cases, e.g. for positron emitters produced by ion irradiation, the pathways are clearly limited. In a few cases the neutron and proton induced isotopes compete regarding their costs and properties in the sense of a commercial competition, for example 64Cu, whose production is possible using both neutrons and ions.
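The movement options discussed here can be made explicit in a few lines of code. The sketch below encodes some common production reactions as (ΔZ, ΔN) moves on the nuclide chart; it is only a bookkeeping aid, not nuclear data.

```python
# Reactions as moves (dZ, dN) on the nuclide chart: (n,gamma) one step right
# (towards the beta- side), (n,2n) one step left, (p,n) towards the
# proton-rich (beta+) side, as described in the text.
REACTION_MOVES = {
    "(n,g)":  (0, +1),
    "(n,2n)": (0, -1),
    "(p,n)":  (+1, -1),
    "(p,g)":  (+1, 0),
    "(d,p)":  (0, +1),   # same end point as neutron capture
    "(a,n)":  (+2, +1),
}

def product(z: int, n: int, reaction: str) -> tuple[int, int]:
    dz, dn = REACTION_MOVES[reaction]
    return z + dz, n + dn

# 64Ni(p,n): Z=28, N=36 -> (29, 35), i.e. 64Cu
print(product(28, 36, "(p,n)"))
```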

5.1.4 Optimization of Production and Cost Efficiency

Scientists are proud of good ideas and clever solutions, but to bring these to application, it has to be possible to make a profit with them. In this section we will work towards a case study for the cost calculation of producing isotopes with accelerators. The application/usage defines the value of a product. The (preparation) costs form from numerous aspects. For example, we already discussed the 1 €/eV rule (now rather 25 ct/eV) for accelerator beam energy as the main contributor to the investment cost of the accelerator itself. On top of this base, cost levels due to vacuum components, radiation safety, maintenance, and personnel add up. A higher degree of integration and automation saves investment and running costs, but not every aspect scales identically with the production goals of a certain application. The price (which should not be confused with the value) of a product, a market property, derives from the balance of supply and demand and the competition for the product. These aspects belong to economics; we as the accelerator applicants focus on optimizing the technology, dictating the product costs.

First of all, we should get an overview of the contributors to our product costs and decide on a common denominator to make them comparable. Here we choose the hour as the common denominator, yielding the following equation for our product costs:

Costs = (Operational costs/hour) / (Isotopes produced/hour) × (isotopes/product) / (e^(−decay constant · lead time) · handling losses)    (5.4)


Equation (5.4) shows a possible cost calculation basis with several contributors. A similar term could be found for other accelerator products such as analysis services or patient irradiation. The last term might be the easiest one, since it describes the physical decay of the produced isotopes on the basis of the time required between the production of the isotopes and their final use in the form of a product (lead time).

The operational costs summarise everything consumed by the production and the investment for all devices and personnel required for conducting the production, including the accelerator and, if required, post-processing facilities. First of all, they contain the salaries of personnel, power consumption, and maintenance. The investments required for buying the accelerator and preparing a laboratory (building, shielding), or a corresponding rent for the laboratory, have to be distributed over a reasonable lifetime. These properties depreciate over time by wear and technological advancement, reducing their value over time, often independent of usage (this part belongs to maintenance). Lastly, the operational costs contain the costs for targets, namely the mother isotope and its isotopic enrichment. The isotopic enrichment strongly scales the price, for example by a factor 4 between 97 and 99% enrichment of 18O in H2O. From 97 to 99%, the production rate will only increase by 2%, but the amount of impurities reduces by a factor 3. Only if the impurities significantly reduce the product quality or induce radiation safety problems can such a price increase pay off. Usually only minute quantities of radioactive isotopes are required; therefore, the most important factor for isotopes is the usage ratio, the amount of mother isotopes lost or remaining unused during the production process.

The production rate of isotopes depends on the choice of beam energy, current, and target. The operational costs, and in particular the investment part, scale with these technical properties. The physics behind it was discussed in Chap. 3. To summarise: The beam energy E defines a reaction probability which monotonically increases with E (for thick targets). Multiplication of this number with the flux of incident projectiles and the enrichment of target isotopes yields the number of nuclear reactions per time. For example, 64Cu could be produced using either 3 MeV protons on natural nickel, 64Ni being the only stable isotope with a (p,n) reaction barrier below 3 MeV, or a beam energy >10 MeV on enriched 64Ni, avoiding the production of unwanted isotopes from the four other nickel isotopes. Enriched 64Ni represents an extra cost factor, but the efficiency factor H is considerably lower at 3 MeV, see the considerations in Sect. 3.4. How will these aspects weigh against each other, do they even matter in the final costs, or do other quantities dominate?

Lastly, the demand of isotopes per final product (e.g. an injection dose) and the losses of material upon conversion of the (raw material) isotopes to the product influence the costs. In the example of 18F FDG (C6H11FO5) for PET, every FDG molecule contains only one 18F isotope. The final product, an injection dose of FDG in this example, contains many of these molecules in a quantity defined by the application requirements. Multiplying the costs per isotope derived from the first ratio in (5.4) with the corresponding amount of molecules in each product yields the cost per product.
Not all product ends up in syringes: in this example, due to purification and handling, a part of the isotopes is lost or retained in the apparatus.


Fig. 5.9 Cost estimate curves of the specific cost of a single 18F FDG dose for PET, depending on accelerator beam current and the lead time from irradiation till injection. Higher beam currents and shorter lead times clearly reduce the specific costs. Already 2 h more triple the costs due to the 1.83 h half-life


Let us now apply these theoretical considerations to a specific case study. Our example calculates the costs for an accelerator facility with estimates for synthesis capacity and the impact of transportation distance for an 18F FDG factory. A neat aspect of this example is the exponential increase of cost with transportation distance due to the decay of the relatively short-lived 18F. We derived the optimal beam energy regarding energy efficiency from depth-dependent stopping calculations in Sect. 3.4 to 12 MeV for the 18O(p,n)18F reaction, with a reaction probability of 0.33%. 18O enriched water costs about 120 € per gram when ordering larger quantities with 97% enrichment from major isotope/chemicals suppliers. We assume the 18O not reacting will be completely recycled into the process. The conversion and separation consume about 20% of the produced isotopes. Trained personnel for nuclear environments and accelerator operation cost about 60 €/h in the public sector in Germany in 2019. Operation and production control require at least 3 people, accounting for 180 €/h. The cyclotron itself accounts for 12 M€ considering the 1 €/eV rule. The shielded laboratory building including a chemical processing plant for FDG synthesis accounts for 8 M€. We assume a lifetime of this complex of 30 years over which we distribute these costs (depreciation model). This cyclotron type delivers up to several mA beam current with a down-time of 10%, in particular for ion source maintenance. The electric power requirements of the base device including ion source, magnets, vacuum, and controls account for about 80 kW plus the beam power, which is generated with an efficiency of 1/3. Therefore, a 12 MeV proton beam at, say, a realistic 400 μA requires about 95 kW electrical power. The total operational costs account for 453 €/h, with the device and building accounting for 55% of this. Per mA and hour the accelerator produces 6 × 10¹⁶ 18F atoms at 12 MeV, from which the FDG will be formed. Every application of 18F FDG for PET requires up to 10 pmol = 6 × 10¹² FDG molecules (= 600 MBq activity) at the patient location. Figure 5.9 compares different lead times as the sum of chemical processing, transport, and application. 2 h more for transport represent the difference in time required to transport the FDG



from Forschungszentrum Jülich to Cologne or to Frankfurt. Consequently, in Frankfurt the same product would cost 3 times more than in Cologne, in Berlin (15 h) even 16 times more. These cost calculations justify a rather dense grid of 18F sources, with strong advantages for smaller and cheaper sources installed at the location of application. Please note these calculations represent rough estimates based on experience and a few numbers from relevant manufacturers. Nevertheless, the case study highlights the importance of powerful accelerators, since a large part of the costs (e.g. personnel) does not scale at all with the accelerator power. Beam current is the cheaper option compared to beam energy, since the accelerator size scales only weakly with beam current but linearly with beam energy (constant specific acceleration). Furthermore, the cost structure calculated here dictates the technological development goals to be pursued by revealing the major contributors to the product costs. In this case, due to the short half-life of 18F, transport time is the major contributor. Consequently, smaller accelerators with higher current and lower beam energy, inducing fewer radiation safety issues and building requirements, would be a possible solution to improve the production site density in a country.
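A minimal numerical sketch of the cost model (5.4), using the rough case-study figures above, is given below. The decay constant is taken as ln 2/T½; since the curves of Fig. 5.9 rest on further assumptions not fully specified here, the sketch reproduces the qualitative trends (exponential growth with lead time, weak scaling of the specific cost with current) rather than the exact values.

```python
import math

HALF_LIFE_H = 110 / 60           # 18F half-life in hours
OPERATIONAL_COSTS_PER_H = 453.0  # EUR/h from the case study
ATOMS_PER_MA_H = 6e16            # produced 18F atoms per mA and hour
MOLECULES_PER_DOSE = 6e12        # 10 pmol FDG per injection
HANDLING_EFFICIENCY = 0.8        # 20% lost in conversion and separation

def price_per_dose_eur(beam_current_ma: float, lead_time_h: float) -> float:
    lam = math.log(2) / HALF_LIFE_H
    cost_per_isotope = OPERATIONAL_COSTS_PER_H / (ATOMS_PER_MA_H * beam_current_ma)
    # isotopes decaying during the lead time must be produced in excess
    return (cost_per_isotope * MOLECULES_PER_DOSE / HANDLING_EFFICIENCY
            * math.exp(lam * lead_time_h))

for lead in (10, 12, 15):
    print(f"{lead} h lead time: {price_per_dose_eur(0.4, lead):8.1f} EUR/dose")
```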

5.2 Rare Isotope and Radioactive Decay Tracers

From our daily life we know: a process is easiest to understand, and the result becomes most credible, if we see it happening live. Two reasons are responsible for this: Firstly, the live observation actually contains the most complete set of information; we are sure nothing was missing or overlooked. Secondly, we are certain that a straight path between start and end state exists. This may sound trivial, but imagine we left our pizza on a table with three dogs in the room. After we come back from the restroom, the pizza is gone, but there are at least five possible event chains connecting the same starting and end point. The dogs may even select one among them to blame, implanting an expectation of what happened in us, but building up experience or claiming someone guilty from incomplete information remains unsatisfying. For these reasons, scientists developed the concept of an in-situ experiment. Simply spoken, we are looking inside the situation; more accurately spoken, the observed process remains (mostly) unaffected by our measurement and the time resolution of the measurement is faster than the characteristic progression time of the process. For watching dogs eating pizza, the human eye is (just) fast enough, but in technical and natural processes already placing a proper probe often constitutes a major challenge.

Imagine a bearing working for example in a combustion engine or an electric motor. We want to analyse the rate of abrasion of material from the bearing in a car engine while it runs through different scenarios, for example city traffic and autobahn. We cannot measure it directly by mass loss or thickness changes, since the bearing is hidden in the engine. The contrast of measuring the whole engine is too small, since we compare μg of wear particles to 100 kg of engine weight. We could remove the material from the lubricant, but separation takes time and still we could not derive the


Fig. 5.10 Scheme of a radiotribological experiment. A system transports particles released by wear towards a photon detector. The detected signal enables wear quantification

exact location the material originated from. For solving this ambiguity, the material needs to be marked and traced. Upon erosion of the marked material, it can be traced like a child dropping breadcrumbs in the forest. Scientifically, the breadcrumbs are named tracers, with the advantage of nuclear breadcrumbs being inedible, ensuring a known amount reaches our measurement. In Sect. 5.1 we received the tool for marking materials with tracers on the atomic scale through nuclear reactions. In Chap. 6 we will learn that medicine applies this technology quite intensively for diagnostics, but a few examples also exist in the technical context. These technical applications consider processes of material migration, for example corrosion, evaporation, or plasma sputtering. This section focusses on nuclear activation based tribology, also called radionuclide tribology (RNT), but many aspects are shared among the different fields.

Tribology is the science of wear and surfaces in relative motion. Figure 5.10 depicts the basic setup of an RNT experiment. Wear as a factor of component lifetime and production costs affects all applications of moving parts. The friction between surfaces in contact removes material over time with a rate measured in thickness or weight per area and time, depending most importantly on the contact pressure, lubricants, relative velocity, the materials, and the surface roughness. In particular bearings, with their contact surfaces hidden in a larger part and lifetimes often in the order of 10,000 h, complicate a precise and direct measurement of erosion rates. An in-situ analysis of the erosion would enable accelerating the testing and parameter studies, yielding information on static and dynamic processes. Parameter studies could for example include varying lubricant properties, the friction velocity, or start-stop scenarios.

Radioactive sources provide a solution, since modern detectors can quantify them down to quantities of single atoms. Radioactive materials can be (electro-)plated onto the affected surfaces, but this potentially changes the surface properties. Accelerators enable placing these probes directly into the affected surfaces by activation of the investigated materials, without relevant changes of the material and its wear properties. Depth-dependent activation (Sect. 3.4) enables a depth localisation and quantification of the activation, and focussing of charged particle beams (Sect. 2.3) allows for a lateral localisation down to μm scales. In contrast to neutron activation, ion activation usually results in fewer different products with shorter half-lives, easing


detection and increasing measurement contrast. This so-called Thin-Layer Activation (TLA) achieves sensitivities down to a few 10 ng (Racolta 1995). The surface-near activation by ions results in a reduced radioactive inventory compared to a bulk activation, reducing overall costs. Activation reactions exist for practically all technically interesting materials using p, d, or 3He ions in the order of 10 MeV (Racolta 1995).

The wear removes small amounts of material from the contact surfaces. In order to detect these, they have to be transported away from the wear zone to a detector. Measuring the amount of activity still present in the part would yield a much lower signal-to-background ratio than measuring the removed part in a remote location starting with zero activity. Typically, a lubricant transports the eroded activity. A γ-detector placed in the lubricant system then allows quantification of the activity. The time-dependent signal increase of the detector corresponds to the amount of wear. Assuming a constant depth distribution of the activation/isotopes yields a proportional/linear increase of the signal with wear. The detection accuracy and the integration time required per wear parameter set depend on the activity in the sample. Higher activity increases the signal level, but safety and cost aspects, together with possible material property changes due to the radiation damage, demand as low an activity as possible.

In the case of a fluid lubricant transporting the wear, the fluid integrates the released activity. The nuclides released by wear add up to the existing inventory in the oil; their release rate represents the slope of the actually measured signal, which is proportional to the total activity in the oil. The sought quantity is thus a derivative of the experimentally measured quantity. Measuring derivatives is always a bad situation, since differentiating the measured values exaggerates the statistical noise, reducing the signal-to-noise ratio of the sought value.

In order to keep the activity close to the detector and allow for a credible calibration of the detection efficiency, the floating wear debris has to be reproducibly located close to the detector. Additionally, the closer the activity to the detector, the better the geometrical detection efficiency and the better the detection limits. Filters or magnets can aid in fixing the radiation to a defined point in the lubricant system, but their efficiency strongly depends on the wear particle properties, for example the ratio of particle size to filter pore size. Loss of lubricant, remote filters, or sedimentation potentially reduce the detection efficiency. Clogging of filters can also increase the geometric efficiency, faking an increase of the wear rate over time.

Since the application defines the materials of interest, the possible tracer isotopes are also given. Only the beam energy allows a slight selection freedom by enabling more Q < 0 reactions with increasing beam energy. Let us consider the example of automotive engines by an analysis of the specific activity and its differential evolution in the circulating lubricant oil. In this case, typically steels will be eroded. The activation, as calculated via depth-dependent reactions, see (3.20) in Sect. 3.4, varies with stopping power and reaction cross-section. In (Rayaprolu et al. 2016) it was shown how higher beam energies result in more uniform depth profiles of activation in tungsten due to the flattening of cross-section and stopping.
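Before turning to the iron case study below, the noise-amplification problem of measuring derivatives can be illustrated with a short simulation, a sketch under the assumptions of a constant true wear rate, Poisson counting statistics, and no decay:

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.arange(1, 3601)              # 1 h of measurement in 1-s bins
mean_signal = 100.0 * t             # activity signal grows with the inventory
signal = rng.poisson(mean_signal)   # measured counts, with Poisson noise
rate = np.diff(signal)              # sought wear rate: slope of the signal

print(rate.mean())  # ~100: the true slope is recovered on average ...
print(rate.std())   # ~600: ... but single-bin estimates scatter widely
```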


For iron, as the main constituent of steel, the activation by a few MeV protons yields predominantly 57Co and 58Co, both with strong γ-lines of 122 and 811 keV, respectively, and half-lives of a few months. These isotopes originate from (p,n) reactions with the rarest iron isotopes, since these feature the lowest reaction thresholds (Q-value < 0). Depending on the alloying elements present in the steel, additional isotopes occur via corresponding reactions. To name only a few: chromium leads to 54Mn, vanadium to 51Cr, and tungsten to 184Re, enabling the separation of different steels by the different activation products connected to the composition. An energy-resolved detection of the photons enables quantification of selective processes preferentially removing certain elements.

Let us consider the method in the frame of a case study of the radiotribology of an iron material. This could be pure iron, a steel, or any other material with a correspondingly lower iron percentage. Other constituents will not be considered, since their nuclear reactions are independent of the iron reactions. For simplicity, the calculation assumes pure iron, which naturally contains 2.119% 57Fe and 0.282% 58Fe. The activation using cyclotrons in the 10–30 MeV range yields activation thicknesses in the order of 1 mm. Figure 5.11 shows the amount of 57Co (half-life = 271.8 d) and 58Co (half-life = 70.9 d) isotopes produced and their distribution throughout the sample depth for a 12 MeV proton irradiation. The production strongly varies with depth. The calculation allows for a direct connection of depth and eroded isotopes. A constant distribution, and with this a linear connection of wear rate and counting rate increase,


Fig. 5.11 Depth-dependent reaction probability of the 57Fe(p,n)57Co and 58Fe(p,n)58Co reactions in pure iron irradiated with 12 MeV protons. The calculation considers the isotopic ratios in natural Fe. The irradiation produces isotopes down to 0.3 mm with a total range of 0.35 mm. Variations in cross-section and stopping power vary the isotope production density. Stopping from SRIM-2013 and cross-sections from TENDL-2015


can only be assumed for 58Co down to about 50 μm, where increasing cross-section and stopping power cancel each other. A varying depth distribution would require using Fig. 5.11 as a depth scale for the later erosion. Considering Fig. 5.11, the selection of 12 MeV appears actually quite optimal. A lower beam energy would reduce the total amount of activity, but the plateau of 58Co eases the result interpretation, and 12 MeV lies close to the cross-section peak of 58Fe(p,n)58Co, resulting in a maximum sensitivity. For the sake of simplicity of a constant production rate, we focus on 58Co, although 57Co offers 5 times higher signal levels. The photons emitted by 58Co result in a certain detector efficiency of, say, 10% (only 1/10 of the decays result in a signal). The efficiency generally decreases with photon energy, also favouring the lower energy photons of 57Co. Furthermore, we assume a geometrical efficiency of 50%, equivalent to all wear particles coming to rest on one side of the detector. 58Co emits an 811 keV photon with 99.5% probability per decay. If we irradiate the material with 1 mC of protons = 6.25 × 10¹⁵ protons, this produces 1.1 × 10⁸ 58Co/μm with the reaction probability of 1.8 × 10⁻⁸. In total, we would receive 1.1 × 10⁸ × 0.5 × 0.1 × 0.995 = 5.5 × 10⁶ counts/μm of wear. The assumptions represent a rather optimistic case, but the numbers would allow resolving a loss of 0.1 nm with 550 counts/second, resulting in a statistical uncertainty of 4.2% [see Sect. 2.5 in (2.43)] for a wear rate of 0.1 nm/s.

Instead of activating the sample, we can also build the sample itself from traceable metastable and rare stable isotopes. This could be 13C enriched molecules consumed by a plant or deuterated water participating in a chemical process instead of hydrogen. Naturally, only 1.1% of all carbon nuclei are the 13C isotope and only 1.5 × 10⁻⁴ of all hydrogen nuclei are the deuterium isotope. Rare stable isotopes have the advantage of being relatively cheap compared to accelerator-produced isotopes (e.g. 99% enriched D2O has a price tag of a few 100 € per litre), but since they occur naturally, a certain background exists, reducing the possible contrast compared to radioactive tracers. Being stable, these isotopes do not emit radiation, and hence their detection requires active methods. The chemistry of isotopes differs only insignificantly; therefore, the actual process to be investigated remains unaffected by the isotopic changes, as it should be. At the same time, this prohibits a detection contrast using chemical analysis methods. The isotopes have to be followed by methods with a nuclear or an atomic mass contrast. Chapter 7 will discuss several options accelerators offer for tracer analysis, but numerous conventional methods also exist, such as mass spectrometry or chromatography.
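The counting estimate for 58Co above can be condensed into a few lines, a sketch using exactly the assumptions stated in the text:

```python
import math

protons = 1e-3 / 1.602e-19          # 1 mC of singly charged ions
co58_per_um = protons * 1.8e-8      # reaction probability per micron of depth
counts_per_um = co58_per_um * 0.5 * 0.1 * 0.995  # geometry, detector, yield

wear_um = 1e-4                      # 0.1 nm of eroded depth, in microns
counts = counts_per_um * wear_um
print(counts)                       # ~550 counts
print(1 / math.sqrt(counts))        # ~4.2% statistical uncertainty
```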

5.3 Material Modification

Humanity is addicted to forms. We shape nature to make it more beautiful or functional, and we want it to stay that way. Our need for more, better, and cheaper requires


testing and extending the technological limits. Unfortunately, approaching technological limits counteracts the wish list by melting, creep, fatigue, bending, dimensions, and so on. For this reason, central development challenges for technical products include bringing the material into shape and developing materials which stay in shape in spite of the harshest treatment and environment. With the ongoing climate change, another factor adds itself to the top of the wish list uninvited. Sustainability and its equivalent, resource efficiency, require making things smaller, increasingly specialised, and longer lasting. Optimized shapes enable reducing resource usage without decreasing component performance.

Technical limitations of mechanical shaping, casting, and joining techniques limit the freedom of shape. New technologies such as 3D printing extend the shaping options. 3D printing starts filling application niches by offering advantages in certain applications. Ongoing progress reduces costs and extends the application range. Accelerators with their possibility for extreme power load densities contribute to these advancements. The potential for reducing waste by printing a material directly in shape, instead of milling it down, sounds appealing, but as always in technology the details are important. Technological disadvantages of 3D printing include the increased energy usage for melting a material instead of milling it and the post-processing of 3D printed objects required due to the different surface and bulk properties. In the end, the application decides on the balance of benefits and disadvantages.

The same methods continue to be relevant on the micro-scale. Bringing the material into shape means sputtering, heating, radiation damage, and implantation on this scale. Radiation-induced changes also alter a material's interaction with chemicals, further extending the processing capabilities. The development of microchips to a large extent relies on increasing flexibility and decreasing structure sizes on the micro-scale. These manufacturing technologies rely on accelerators in many different aspects. The impact of microchips and their performance on our society and industry emphasizes the importance of micro-machining technology. One of the aspects of semiconductor technology as a basis for microchips is the tailoring and synthesis of specific material properties by accelerators. Without these processing steps, silicon is just a badly conducting metalloid.

5.3.1 Doping by Implantation and Activation

Doping changes the properties of silicon from being a useless metalloid to being a semiconductor. A band-gap between conduction and valence band is responsible for the low electrical conductivity of pure silicon. Electrical conduction requires electrons in the conduction band, but the band-gap energy (1.12 eV for silicon) prevents electrons from leaving the valence band. Only at higher temperatures does the high-energy tail of the thermal energy distribution allow a few electrons to pass the band-gap. Doping the silicon introduces new intermediate energy levels into the band gap. The exact position in the band gap and whether these levels contain extra


electrons or free electron states (holes) depends on the dopant element. This intermediate level shortens the energetic distances, significantly increasing the electrical conduction via these electrons or holes. Doping with elements providing excess electrons produces a negative (n-type) conduction, while excess holes result in positive (p-type) conduction. So far so boring. The difference to a normal conductor lies in the possibility to control the amount of these excess electrons and holes via the dopant concentration. With small dopant concentrations in the order of 10 ppm, applying a voltage allows for a complete depletion of these small amounts of charges. Without its extra charges, the depleted layer goes back to the originally low conductivity. The directionality/polarity of this depletion depends on whether it is a p- or n-type semiconductor. Combining a p-type and an n-type semiconductor results in a transfer of charges, since the n-type can provide electrons to fill the holes in the p-type semiconductor. This transfer results in a depletion layer. Applying a voltage over this p–n junction results either in a stronger depletion of the charges and a further reduction of conductivity, by applying a positive voltage to the n-type side (reverse biasing), or in a refilling of the free charges and an increased conduction, by applying a negative voltage to the n-side. This unidirectional isolator feature is responsible for the unique possibilities for electronics offered by semiconductors.

In Sect. 2.5, we discussed several different types of radiation detectors, in particular silicon and germanium based ones. Production of a semiconductor radiation detector starts with the respective single crystals. The detection exploits exactly this depletion zone effect of a p–n junction operated in reverse biasing. Charged particles again separate the combined electron-hole pairs, resulting in a current through the otherwise isolating depletion zone. For radiation detection, usually very thick depletion zones are required, since the depletion zone represents the effective detection volume and effective particle energy absorption requires thick zones.

The small dopant quantities would not even be considered as a part of the composition in most other materials, but with accelerators they can be easily controlled when measured as an ion current. For a well-controlled doping, the original material has to be extremely pure and crystalline. Practically, single crystals represent the ideal basis. Nowadays silicon single-crystal disks, so-called wafers, can be made with several 100 mm diameter and sub-mm thickness for only a few 10 €. These single crystals are then doped with known amounts of, for example, phosphorus (n-type) or boron (p-type) in so-called implanters. Figure 5.12 shows such a silicon wafer preparation and implantation facility. Typically, acceleration voltages of 100 kV DC implant the dopants about 100 nm deep into the silicon, see Fig. 5.13. Post-implantation annealing removes the defects induced by displacement collisions (Sect. 7.4) of the ions entering the solid and increases the dopant distribution homogeneity.

The strength of ion doping lies in the accurate control of ion dose, species, and range defined by acceleration voltage, ion optics, and sample current. The spatial homogeneity is the challenging part of doping. Different dopant concentrations will result in different conductivities, increasing for example the electrical losses in regions of lower dopant concentration.
The depth homogeneity suffers from straggling as depicted in Fig. 5.13. A peak forms in depth using a single


Fig. 5.12 A 70 kV implanter with attached clean room for 300 mm wafer doping (left). In the clean room the operator attaches wafer boxes to the automatic manipulator of the implanter (right). Its ion source produces a multitude of species from gases and vapours. A magnet selects the ion species and directs the beam (arrow) onto the wafer. Copyright Forschungszentrum Jülich/Sören Möller


Fig. 5.13 SRIM-2013 calculation of 77,000 phosphorus ions implanted with 70 keV into silicon. The implantation shows a maximum of the deposition probability at 97.5 nm depth and extends down to 200 nm due to straggling. According to SRIM, every projectile induces 960 displacements in the silicon lattice, using a displacement threshold of 15 eV

beam energy, but ideally the dopant should reach a constant concentration throughout a relevant depth. The typically Gaussian beam profiles (see Fig. 2.21) limit the lateral implantation homogeneity. Beam optical shaping can only partially compensate for this. Scanning the ion beam using deflection beam optical elements over the wafer provides a fast option for redistribution of the implantation dose, but for large wafers, required for industrial scales, the deflection and with this also the impact angles become significant. Variations of the impact angle reduce the implantation depth through the geometrical path effect. A translational or rotational movement of the target itself avoids this, but this movement is generally slower and technically more challenging in the vacuum environment of an implanter.
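A common first-order description of such an implantation depth profile is a Gaussian around the mean range, with the straggling as its width. The sketch below uses the 97.5 nm peak of Fig. 5.13; the 30 nm straggling width and the conversion to a concentration are illustrative assumptions (SRIM provides the real values).

```python
import math

RANGE_NM = 97.5        # mean depth of 70 keV P in Si, from Fig. 5.13
STRAGGLING_NM = 30.0   # assumed sigma; take the real value from SRIM

def dopant_density_cm3(depth_nm: float, fluence_cm2: float) -> float:
    """Phosphorus density at a given depth for a Gaussian depth profile."""
    sigma_cm = STRAGGLING_NM * 1e-7
    gauss = math.exp(-0.5 * ((depth_nm - RANGE_NM) / STRAGGLING_NM) ** 2)
    return fluence_cm2 / (sigma_cm * math.sqrt(2 * math.pi)) * gauss

# A 1e14 cm^-2 implant peaks near 1.3e19 cm^-3, i.e. roughly 260 ppm
# of the ~5e22 cm^-3 silicon atoms.
print(f"{dopant_density_cm3(97.5, 1e14):.2e}")
```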


The range of doping represents an economic limitation of ion implantation. The high stopping power of heavy ions such as phosphorus at a few 100 keV allows doping only the near-surface of a wafer. Doping the whole silicon ingot (from which the individual wafers are cut) in one step would significantly reduce the required effort and improve the dopant homogeneity, at least within a single ingot. Unfortunately, the beam energies required to implant boron or even phosphorus several ten millimetres deep into silicon are not feasible. The economic need for large wafers is in conflict with this idea, at least for ion beams. Doping by nuclear reactions/activation represents a feasible alternative due to the low amounts of dopants required. Currently, only fission reactors are used for this. Here the 31Si produced via 30Si(n,γ)31Si decays to stable 31P. This method represents a prime standard for doping, and several patents were issued in the 1970s and 80s, but the reality of the limited availability and the costs of the required large thermal neutron sources limits the economic relevance. Doping by light ion beam irradiation reduces the required beam energy for reaching larger depths. A patent (Patentnr. US09192452, 1998) was filed for phosphorus doping using the 30Si(d,p)31Si reaction with a subsequent decay of 31Si to the stable 31P, similar to the neutron reaction. Unfortunately, the required isotope 30Si is the rarest among the silicon isotopes, leading to a poor reaction probability in natural silicon and correspondingly high ion dose requirements. For this doping process, standard cyclotrons would be sufficient for the activation of a few mm of silicon, but the economic gain of treating large volumes vanishes with these physical limitations. The typical problems of inhomogeneous depth profiles of ion interactions also remain, as discussed for the case of iron activation in Sect. 5.2. Doping via implanters on the wafer level requires less investment due to the lower beam energy and the 100% implantation efficiency, compared to the sub-% efficiency of nuclear reactions.

Accelerators also provide options for technical developments and quality control in the doping and semiconductor context. Ion beams, in particular Rutherford Backscattering Spectrometry (RBS), enable analysis of dopant concentrations on the ppm level with depth resolutions down to nanometres (see Sect. 7.1.5). Deficiencies in the ion stopping models and also the spatial aspects of homogeneity can be measured for doping analysis and optimisation after implantation (ex-situ). These methods allow for the quantification of the implanted element and additionally of the remaining crystallinity via ion channelling.

5.3.2 Welding, Cutting, and Additive Manufacturing

Welding and cutting as classical manufacturing technologies and the still new field of additive manufacturing all require introducing a high power density into the material to remove or melt it. On several occasions in this book, the extreme power densities connected to accelerated beams impacting targets were mentioned. Consequently, charged particle beams can be considered to take over manufacturing duties. For the introduction of power/heat into a material, no difference exists between ions


and electrons, except for the higher range of electrons. For manufacturing it makes no difference whether the beam deposits the power within 1, 10, or 100 μm. Technically, the easier handling of electron beams and electron sources favours electrons, and this is also what is applied in the accelerator-based manufacturing field. Considering additive manufacturing, also called 3D printing, laser-based methods are currently more common. In contrast to lasers, electron beams require the manufacturing to take place in vacuum due to the energy loss of electrons in atmosphere, and the interaction with the targets potentially releases significant radiation dose rates via Bremsstrahlung. These aspects make the laser-based technology generally the cheaper option. Applying electron beams has to offer quality or processing speed advantages to balance this. Lasers have the fundamental disadvantage of high and variable reflection coefficients, in particular for metals. Charged particle beam technologies feature 100% energy absorption, independent of incident angle or surface conditions.

Typically, the manufacturing methods apply electron energies of some 10 keV, equal to a range in the order of 10 μm, with beam powers up to 100 kW in a high vacuum chamber. Focussing the electron beams results in local power densities easily above 10 GW/m². Beam optics enables scanning speeds in the order of μm/s to km/s, depending on the setup details. Varying the length of exposure t of a certain position enables fine-tuning the heat penetration depth. Equation (5.5) states that this penetration depth depends on the square-root of the thermal diffusivity κ of the material and the exposure duration t. The beam range defines a minimum heat penetration depth, but according to (5.5) only heating pulses shorter than 10 μs actually allow keeping the heat within this depth (for a typical steel κ). Beam scanning of pulsed beams enables setting the exposure time as discussed in Sect. 2.2.1.

Thermal diffusion length = √(2κt)    (5.5)
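A quick check of this estimate, a sketch assuming a typical thermal diffusivity of steel of κ ≈ 4 × 10⁻⁶ m²/s:

```python
import math

KAPPA_STEEL = 4e-6  # m^2/s, assumed typical thermal diffusivity of steel

def thermal_diffusion_length_um(exposure_s: float) -> float:
    """Heat penetration depth from (5.5), returned in microns."""
    return math.sqrt(2 * KAPPA_STEEL * exposure_s) * 1e6

print(thermal_diffusion_length_um(10e-6))  # ~9 um for a 10 us pulse
print(thermal_diffusion_length_um(1e-3))   # ~90 um for a 1 ms exposure
```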

Firing such an intense electron beam onto a material easily melts it. Electron Beam Additive Manufacturing (EBAM), also called Electron Beam Melting (EBM), represents the accelerator variant of additive manufacturing based on melting powders. The reproducible and surface-morphology-independent power absorption of 100%, together with the control of heat penetration optimized with respect to powders and manufactured part, provides control over the material micro-structure. This allows for improving the mechanical properties and the surface quality. The high beam power density enables a high productivity, since the processing of a certain powder volume equals a certain amount of energy invested for melting it.

The capability for melting also provides an option for high-accuracy welding with very small welding width and depth. This localisation of the affected zone enables joining materials with incompatible thermal expansion coefficients, which would crack when using larger welds. Vacuum parts are one of the application fields, but also tiny components such as μm thick membranes or foils profit from the spatial resolution of the electron beam. The localized energy deposition enables cleaning, heating, annealing, or melting the surface. The choice of temperature via adjusting the beam power density selects


between the different functions within one device. Moving the beam over the surface enables texturing the target surface in an arbitrary pattern. This can increase for example the effective surface area for functional purposes such as joining with another material or it can harden the surface via the rapid cooling inherent to a thin heat affected surface layer.

5.3.3 Surface Modifications and Nano-Machining

The last section discussed the preparation of materials on the macro-scale; in this section the focus shifts to the micro-scale. The short length scale implies modifications of the surface, or at least surface-near modifications down to a few μm, in contrast to the bulk-related methods discussed above. The importance of the surface for technical applications extends beyond thin film technology. Complete technical setups, so-called Micro-Electro-Mechanical Systems (MEMS), imprinted on the surface enable extremely compact and resource-efficient sensors and actuators as depicted in Fig. 5.14 (Huang 2019). These systems combine electronics with mechanical systems. The inertial measurement unit installed in smartphones represents a widespread application of MEMS. The wide range of sensors fitted into modern smartphones, ranging from acceleration over orientation up to atmospheric pressure, requires small and highly integrated systems and contributes important features to the device, impossible with equivalent conventional-scale devices.

Fig. 5.14 Left: 1 mm sized MEMS system chain drive-train example made from silicon. Courtesy of Sandia National Laboratories. Right: Inertial switch prototype. Reprinted from (Huang 2019) with permission by Springer


Fig. 5.15 Formation of so-called fuzz induced by implantation of He into W. The structure develops naturally due to the formation of sub-surface gas bubbles and their subsequent bursting in W and many other materials. The fuzz nano-structure forms an open porosity with extremely high surface area, low heat conductivity, and a porosity of up to 98%

Furthermore, the surface represents the first contact point for the interaction of a material with any external process. The absorption of light depends on the surface structure and composition, condensed into the surface emissivity ε. Figure 5.15 shows an example of a self-arranged surface structuring induced by helium ion implantation, increasing for example ε from about 0.1 in the regular surface state to ≈1. Chemical interactions, for example reactions in a catalyst, scale with the available microscopic surface area, which can be orders of magnitude larger than the macroscopic surface area, given a smart microstructure surface morphology. Surface hardening and structuring reduce mechanical friction and increase wear resistance, resulting in improved component lifetime and reduced costs by requiring only a modification of the surface instead of more expensive wear-resistant bulk materials. Several accelerator-driven technologies exist for these surface modifications.

An important aspect for the manufacturing of complex structures is the aspect ratio of the manufactured structure, e.g. a hole or trench. Figure 5.16 depicts four ideal and real examples. A beam inducing sputtering will start drilling a rectangular trench. As the sputtering reaches higher depths, the angular emission distribution of the sputtered particles (cosine-like emission, Sect. 4.1) results in a certain amount of prompt re-deposition at the trench side-walls. Consequently, a non-rectangular shape develops. The manufacturing speed typically scales with the achieved beam current. In particular for small structures, the beam emittance and current density limit this value, as discussed in Sect. 2.3.

The established technology for the manufacturing of nano-structures in semiconductors is lithography with photons in the order of 10 eV, depending on the structure size. Lithography combines the physical marking/modification of a photoresist by beam bombardment with subsequent chemical etching steps removing the marked or the non-marked part of the photoresist, respectively, depending on the selected


Fig. 5.16 Four options for micro-machining aspect ratio and the problem of actual component separation and distance. a The ideal rectangular trench. b A real drilling hole produced by keV FIB with a characteristic form given by trench-side interactions. c An ideal 3D machining modification leaving material above the part removed at a designated depth. d A real 3D machining feature with an entrance channel for the beam

working mode and resist. The part of the surface still covered by photoresist will not be removed by the subsequent etching step with a different etching agent, due to the protective function of the photoresist. Typically, first a mask is produced using a CAD model and a small beam inscribing the nano-structure onto a photoresist-covered metal (e.g. Cr) layer deposited onto a transparent material (e.g. quartz glass). The masks imprinted with the foreseen structure enable a quick irradiation of wafers in the production stage, in the order of minutes, using a broad photon beam which illuminates the wafer where the metal was removed in the mask.

The photon wavelength limits the minimum structure size possible to manufacture using this method. Ion and electron beams offer the potential for strongly reduced wavelengths, see the de-Broglie wavelength (7.1). This would further decrease the possible structure sizes, enabling the manufacturing of more complex and energy efficient microchips. All types of charged particles are applicable, offering specific advantages in terms of range and trench aspect ratio as shown in Fig. 5.17. So far, accelerators have only little relevance in industrial mass fabrication, since photo-lithography provides significantly higher production speed due to its large irradiation area enabled by the masking technology. This compares to the relatively low speed of accelerator beam scanning for the interesting small structures and the limits of beam current delivered to nano-metre sized spots (nA range). Masks are difficult to apply for charged particle beams due to the high penetration depth of electrons and the sputtering induced by ion impact. Moreover, the energy invested per incident particle clearly favours 10 eV range photons over several 10 keV range ions or electrons.
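The wavelength argument can be quantified with the relativistic de Broglie relation λ = h/p, sketched below (this is a generic formulation, not a reproduction of the book's Eq. (7.1)):

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # J per eV

def de_broglie_m(kinetic_ev: float, rest_mass_kg: float) -> float:
    """de Broglie wavelength with relativistic momentum p = sqrt(T(T+2E0))/c."""
    t = kinetic_ev * EV
    e0 = rest_mass_kg * C**2
    return H * C / math.sqrt(t * (t + 2 * e0))

print(de_broglie_m(50e3, 9.109e-31))  # 50 keV electron: ~5.4e-12 m
print(de_broglie_m(2e6, 1.673e-27))   # 2 MeV proton:   ~2.0e-14 m
print(H * C / (10 * EV))              # 10 eV photon:   ~1.2e-7 m
```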


Fig. 5.17 The lithographic aspects of different beam types (panels: p-beam writing with 2 MeV protons, e−-beam writing with 50 keV electrons, and FIB with 50 keV Ga-ions, each into a 20 μm PMMA polymer layer; scale marks 60 μm, 40 μm, 2 μm, and the beam spot). Protons with their low straggling allow for a damage/developing profile located at the track end (Bragg peak), deep in the material, with only little lateral extent and a high trench aspect ratio. 10 keV range heavy focussed ion beams (FIB) sputter only at the surface, effectively drilling a hole instead of modifying the resist. Electrons create a plume of damage due to their large straggling, resulting in comparable depth and lateral extent of the developing effect

Bridging this large gap via technical improvements appears challenging and would require orders of magnitude improvements in all involved fields, namely beam optics, particle sources, resists, and instrument layout. The discussed charged particle lithography still relies on the same technological approach as photolithography, but in principle the whole technology could also look different. Removing target material directly, without a photoresist, via ion-induced sputtering (Sect. 4.1) represents an alternative route. Analysis methods already make use of this so-called focussed ion beam (FIB) approach for local 3D analysis, see Sects. 7.1.3 and 7.1.4. While heavy ions are established, technical solutions for light ions were developed only in the last 10 years, e.g. the Zeiss Orion NanoFab device. Furthermore, the ion-matter interaction changes materials via induced heat and radiation damage (Sect. 7.4). Any of these contrasts can be exploited by chemical and physical removal methods. Following this idea, MeV protons, preferably applied due to their high charge-to-mass ratio and the high beam currents generated in accelerator sources (Sect. 2.4), have the potential of becoming a next-generation tool for nano-scale 3D machining. The main advantages over electrons are the smaller amount of large-angle scattering events in matter, leading to high aspect ratios of cuts, and the higher stopping power. In contrast to keV ions, which allow only surface-near 2D machining, MeV level


Fig. 5.18 SEM pictures of 2 MeV proton beam nano-machined 3D structures in silicon. a Freestanding silicon wires at 6.95 μm depth. b Two levels of photonic crystal slabs at 2.4 and 7 μm depth. Reproduced from (Dang et al. 2013) with permission by Springer

ions offer increased penetration depth, allowing for 3D machining by shifting the Bragg peak to different depths via a modification of the ion impact energy (Fig. 5.17). These are actually the same advantages relevant for proton therapy of cancer, see Sect. 6.2.1, although on a smaller depth scale. The combination of both allows for producing complex, 3-dimensional structures as shown in Fig. 5.18 by variation of the beam energy and modification of, in this case, the porosity of silicon. The limit of MeV ion-beam machining so far lies in the achievable lateral resolution and the processing speed. While focussing of keV ions and electrons to sub-nm beam spots became state-of-the-art, MeV proton beams required for relevant 3D machining depths reach only spot sizes of the order of some 10 nm with < nA currents. The main obstacles related to the beam spot size lie in the beam emittance, the size and type of focussing optics, and the constructive challenges of compact arrangements of ion optical elements (see Sect. 2.3.2). The 3D nano-machining potential of this technology is driving the development, potentially creating a new market if the technological limits can be significantly extended. Besides these designed surface modifications, nature also develops spontaneous patterning, nano-structuring, and ripple formation under ion-beam and plasma impact through natural self-organised processes (Bradley and Harper 1988; Bernas 2010). These have the advantage of requiring neither scanning nor masks. Plasmas provide an equal flux of ions and electrons to surfaces, but since electrons hardly influence the material structure, plasmas practically represent a technical way of providing low-energy ion beams, reducing instrument costs. Ion impact-energies in plasmas range from a few eV up to the 100 eV range.

7 Material Analysis and Testing

Accelerator-based material analysis comprises a large set of individual methods. All of them exploit similar physics and devices discussed earlier in this book. This chapter discusses about 30 methods, highlighting their information properties to enable the reader to understand and select a complementary set of methods for a given analysis problem.

Progress is only possible through death and retirement (H. W. Möller)

This chapter concentrates on applying accelerators to gain deeper insight into materials and components, in order to prove this saying of my father wrong. The analysed samples can originate from an industrial context for quality assurance, from scientific developments, or wherever else imaginable. The electron microscope represents a straightforward example of the similarity of these two worlds. Originally developed at a university, its importance and value soon became apparent in fundamental research. The extreme magnifications possible due to the short wavelength of electrons compared to visible light opened up the nano-scale world. Not only physics became interested in the technique for studying phenomena in solid-state and material physics, but also biology and other disciplines applied it. Finally, when the devices became more mature and cheaper, they found their way into the quality assurance and development laboratories of the metal, electronics, and many other industries. The three main tasks of material analysis are the identification of chemical and isotopic constituents, their quantification, and their localisation in a spatial and structural respect. These three analysis qualities translate to a decision diagram when seeking a suitable analysis for an application. Figure 7.1 aids in identifying the actual requirements when thinking about an analytical method. Not every problem requires localising, identifying, and quantifying all elements with isotopic resolution on a nm scale throughout m³ volumes with ppm accuracy. Strength in one aspect usually leads to drawbacks in other aspects. For example, nm resolution will hardly be possible in a m³ sized bulk sample, but rather on a mm sized surface-layer sample. The strength


Fig. 7.1 Decision diagram for comparison of photon, electron, ion, and neutron beams in analytics, spanning the three categories imaging/localisation, quantification, and identification, with decision criteria such as: resolution (nm or mm), probing volume (mm³ or m³), surface layer, thin film, or bulk, tomography or 2D map, detection limits (ppm or %), absolute or relative, light or heavy elements, elements or isotopes, chemical or elemental. All methods have their specific strengths and weaknesses given by the “or”. No single method can satisfy all optima of each of the three categories

of each method will be given in the following sections. Combining several methods cancels the weaknesses of the individual methods at the expense of additional time and costs. An example situation: Synthesis of materials by physical (e.g. sputtering) or chemical (e.g. solid-state reactions) means is a delicate business. The input materials or sputtering targets can be easily characterised and their composition is usually known with sub-percent accuracy, but the final product does not necessarily follow this composition. Besides these raw materials, the injection of reactive process gases such as air may be required for certain products. Vessel materials tend to mix with the reagents. Several reaction pathways or steps connect the initial raw materials with the final product. In the case of sputter-deposition, these are the sputtering process, a gas phase transfer, the sticking onto the substrate, and finally a solid-state reaction on the substrate. In all steps non-ideal processes take place, and the actual skill of producing the targeted material lies in the smart optimisation of the input parameters. For example, synthesizing an NCM lithium cathode material with this technique involves mixing Li, Co, Mn, Ni, and O. A set of different stable compounds exists with the given ingredients, and the targeted material even has, by intention of charging and discharging it, the possibility of varying lithium stoichiometry or content. Only a single compound is of interest; the other possible outcomes have no use for a battery. Careful optimisation of reaction temperature and time, gas composition, and non-stoichiometric mixture of the ingredients (e.g. a bit of excess lithium) will lead to a good product. This optimisation, or calibration of the production method, required for obtaining an optimal and reproducible material cannot be performed blind. Tomographic information on the elements distributed over the some 100 μm thick cell with sub-% absolute accuracy is required. Relevant methods will be discussed in the following sections, but a full solution to this requirement is actually not available so far. In conclusion, analytics play an important role in scientific research and industrial production by being the eye of development. After production, the role of analysis continues in the use of products. Over the lifetime of a product, deterioration and fatigue take place. These abstract terms connect directly to physical and chemical changes of the product. This could be oxidation of an electric contact, the formation of inactive phases inside a battery, or lattice-defects and micro-fractures in a reactor vessel. A sensitive detection of these initially minute


changes and extrapolation of the trends represents a means for shortening testing times and understanding failure mechanisms, improving component lifetime in an economical fashion. Precise analytics allow identifying problems and limits before they become practically relevant. The more precisely and completely the analytical methods reveal these changes, the better and the more economical the final product will become, and the less probable unforeseen situations and recourse claims will occur. Today's analytical portfolio includes methods for detecting single atoms (usually nuclear methods as in medical diagnostics) up to macroscopic structures and quantities in regions deep in a material bulk. For the quantification, counting statistics, signal noise, and background represent the main limitations, while for the spatial resolution of the localisation, the wavelength of the probing particle constitutes a natural limit. As we know from quantum physics, everything in nature, and in particular sub-atomic particles, can be considered as either a wave or a particle. The de-Broglie wavelength λ of a particle beam of momentum p is given with Planck's constant h by

\lambda_{\text{particle}} = \frac{h}{p} \approx \frac{h}{mv} \qquad (7.1)
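As a quick plausibility check of (7.1), the sketch below evaluates the relativistic de-Broglie wavelength for some of the projectiles discussed in this chapter. The rest energies and h·c are standard CODATA values; the relativistic momentum is used so the result also holds at MeV energies.

```python
import math

HC_EV_NM = 1239.84198       # h*c in eV*nm (CODATA)
REST_ENERGY_EV = {          # particle rest energies m*c^2 in eV (CODATA)
    "electron": 0.51099895e6,
    "proton": 938.2720882e6,
}

def de_broglie_nm(particle: str, kinetic_energy_ev: float) -> float:
    """Relativistic de-Broglie wavelength (7.1): lambda = h/p = h*c/(p*c)."""
    mc2 = REST_ENERGY_EV[particle]
    pc = math.sqrt(kinetic_energy_ev**2 + 2.0 * kinetic_energy_ev * mc2)
    return HC_EV_NM / pc

print(de_broglie_nm("electron", 10e3))  # ~1.22e-2 nm = 12.2 pm, as quoted below
print(de_broglie_nm("proton", 2e6))     # ~2.0e-5 nm for 2 MeV protons (Fig. 5.17)
```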

The wavelength describes the so-called diffraction resolution limit; with visible light wavelengths in the range of 380–700 nm, the possible resolution of optical analytics is limited to rather large structures. Physicists have found tricks to circumvent these optical limits, but these tricks lack universal applicability, see for example the 2014 Nobel prize on super-resolved fluorescence microscopy. To achieve better resolution the electron microscope was invented, leading to a reduced wavelength/diffraction limit of e.g. 12.2 pm for a 10 keV electron beam. Even shorter wavelengths can be reached with ions and neutrons at high energies, which is the physical basis of probing quarks and their substructure using particle physics accelerators such as the LHC at CERN. This extension of physical limits beyond optical and chemical analytics represents the fundamental strength of accelerator-based analytical technologies. Practical solutions or devices, respectively, hardly reach these ideal limits, but approaching them constitutes an ongoing challenge. The extent to which these technical challenges were solved defines the quality of an instrument. We will discuss these technical solutions and the advantages of different methods for different applications. Some challenges are more difficult to solve for certain probing particles, and hence for each analysis task a different particle type might be advantageous. Commonly, specific tasks require specific optimisations, leading to a diverse landscape of similar, but in detail different, analytical devices for a single method. Some technological challenges reoccur throughout single methods or even groups of methods, which is the very reason Chap. 2 belongs in this book. The detection of the products and secondary particles emitted in the beam-matter interactions exploited by the analytical method represents one of these challenges, where the limits of integral technical components restrict the analytical technology.


Fig. 7.2 3 kV scanning electron microscopy image of a polished stainless steel surface after 50 min of hydrogen plasma exposure. The imaging reveals the development of a nano-structure with precipitates and grain boundary rifts. Courtesy of Marcin Rasinski

The detection of electrons and ions with silicon-based electronic detectors reaches an efficiency of 100% as soon as the sensitive/active detector thickness reaches the stopping range of typically 100 μm to a few mm, see Fig. 2.43. For photons and neutrons, detection efficiencies quickly drop for energies above a few keV. This degrades the detection limits and requires frequent detector calibration for an absolute quantification of the results. The calibration transforms a qualitative into a quantitative measurement. For ion based analysis this step usually drops out due to the typically 100% detection efficiency, leading to a so-called calibration-free method. Imagine an electron microscopy image showing for example a metal surface such as the one in Fig. 7.2. Imaging represents a qualitative analysis, meaning the image shows us something structured. It is impossible to derive dimensions, particle thickness, composition of the structures, or for example the form of cracks from the image alone, although this information is encoded in the image. The information remains hidden. Quantification of such an image requires many steps, from the physical connection between a certain material difference and the measured signal, the incoming flux and fluence, over the analysis geometry and optics, up to the detectors and statistical image analysis. Although the acquisition of quantified data requires significantly higher effort, the quantified results provide more information, often crucial for finding trends in datasets or excluding false conclusions based on too many assumptions on what should be seen in the image compared to what the image actually shows. An analysis apparatus needs certain equipment and design aspects in order to allow for a calibration or even a calibration-free analysis, but also the understanding of the underlying physics requires a certain maturity in the form of theoretical models. In the end, even the best calibration-free method relies on a calibration/measurement of angles, sizes, and distances of the actual apparatus with all the tolerances and alignment involved.


Table 7.1 Cost comparison for analysing samples for their elemental composition. Costs were estimated with 2018 prices, without claiming knowledge of the absolute truth or of exact device prices

Method | Accuracy (%) | Detection limit (ppm) | Resolution (nm) | Upkeep (k€/year) | Invest (k€) | Cost/sample (€)
XRF    | 1            | 10                    | 10⁶             | 115              | 300         | 30
EDX    | 10           | 1000                  | 10              | 120              | 300         | 45
PIXE   | 1            | 1                     | 1000            | 150              | 1500        | 75

Investments assumed to depreciate over 20 years. 3000 (EDX, PIXE) or 4000 (XRF) samples per year, assuming 200 days with 8 working hours and one expert working full time on the device. Upkeep includes room, power, maintenance, and 1 expert (100 k€/year labour costs)

Technology is always advancing in our modern society (which is definitely not self-evident). As a result, new devices come onto the market tackling the challenges of the different methods and extending the range of the technically feasible. This should encourage the reader to consider new technologies and combinations of these, if existing components are insufficient. The term technical realisation describes the location of a certain device/instrument/apparatus in this multi-dimensional space of methodical and technological challenges. The German language has a very precise and convenient word for this, "apparative Umsetzung", which intends to describe how a machine implements a certain technical challenge within the technological limits. The apparatus represents one of the main building blocks of any analytics. Having a good idea is another building block, but it must be possible to implement it technically. Furthermore, having a first implementation, a prototype, often leads to the insight of how to improve the apparatus. The following generations of the device will then implement better resolution, economics, precision, and so on, gradually approaching the technological limits. This prototyping approach is typical and very practical for complex problems and hence found its way not only into analytics, but also into many organisational and business processes such as agile management. Picking out an example allows for a calculation of analysis costs for three different methods available for laboratory applications. The task: analyse the elemental composition of a material. The methods: photons using the x-ray fluorescence spectroscopy method (XRF), electrons using energy-dispersive x-ray analysis (EDX), or protons using particle-induced x-ray emission analysis (PIXE). These methods are all based on the emission of characteristic x-rays but exploit different projectile types. Table 7.1 compares a few important aspects. Personnel costs dominate the upkeep for all devices. The large device required for PIXE consumes more power and space, leading to higher costs compared to the table-sized devices for XRF and EDX. The high investment costs of accelerator-based analytical tools lead to a strong scaling of the specific costs with the degree of capacity utilisation. The assumption of full utilisation leads to comparable costs for all three methods, but with lower utilisation the specific costs, of PIXE in particular, will increase rapidly due to the higher investment costs. On the positive side, PIXE offers the best detection properties.
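The arithmetic behind Table 7.1 condenses into a few lines. The sketch below reproduces the per-sample costs from the stated assumptions (20 year depreciation, the quoted throughputs and upkeep); it illustrates the scaling with utilisation rather than exact accounting.

```python
# Minimal sketch of the cost model behind Table 7.1: annualised investment
# plus upkeep, divided by the yearly sample throughput.

def cost_per_sample_eur(invest_keur: float, upkeep_keur_per_year: float,
                        samples_per_year: int, depreciation_years: int = 20) -> float:
    annual_keur = invest_keur / depreciation_years + upkeep_keur_per_year
    return annual_keur * 1000.0 / samples_per_year

for method, invest, upkeep, samples in [("XRF", 300, 115, 4000),
                                        ("EDX", 300, 120, 3000),
                                        ("PIXE", 1500, 150, 3000)]:
    print(method, round(cost_per_sample_eur(invest, upkeep, samples)))
# -> XRF ~33, EDX 45, PIXE 75 EUR/sample, matching Table 7.1 within rounding
```

Halving the throughput doubles the per-sample cost, hitting PIXE hardest due to its large annualised investment share.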


This chapter lists numerous examples of accelerator-based methods, condensed as far as possible without loss of generality. Numerous further methods exist that are not even mentioned here, but listing all methods would go beyond the scope, at least of this edition. For all this, there's only one thing you should know: in the end, it doesn't even matter how many methods we list. It is important to understand the basics, then we get them all. Look down the list and what's there marks examples of what ought to be. (freely from: Linkin Park, "In the End")

7.1 Ion-, Electron-, and Photon Beam Analysis

• You will never become one of the best without accepting your feedback, even the worst of it

Just like your personal development, technical developments require feedback. Imagine for example the task of synthesizing a glass, but it comes out dull. What to do? Trying again until it works is an option, but if it is not working after the third approach you scientifically qualify as stupid. Analysis methods will provide an insight into the reasons and the points for optimisation of the production process. The analysis using accelerated ion- and electron-beams and secondary photons produced from accelerators features several distinct qualities for material analysis. Numerous competing methods based on light, chemical, or physical mechanisms exist. Usually these are even less expensive, but accelerator-based methods offer the strong combination of quantified accuracy, spatial resolution, and atomic or nuclear sensitivity. Figure 7.3 depicts the geometry of the interaction underlying an analysis. Geometry represents the main aspect and also the main challenge of the corresponding analysis devices. In the following, the angle between the incident beam and the normal of the target surface will be denoted by α. The reaction angle θ denotes the angle between the detector and the beam direction. By this definition, backscattering denotes particles

Fig. 7.3 The fundamental geometry of surface analysis methods


scattered towards the half-space of the incident beam (θ > 90°), while forward scattering denotes particles scattered in beam direction (θ < 90°). Due to the limited range of charged particles in matter, described in Chap. 3, forward scattering is restricted to targets of μm thickness or grazing incidence geometries (45° < α < 90°). Since the beam carries a forward momentum, its direction has a special meaning for the reaction geometry, in particular via the conservation of momentum before and after the reaction. The angle θ represents a circle around the incident beam. The detector position on this circle is arbitrary with respect to the reaction, but for α ≠ 0° the path length inside the sample of the particles exiting the sample depends on the angle β between sample surface normal and detector. The angle β becomes relevant as soon as stopping and absorption are considered, owing to the significantly larger stopping and absorption in solids.

\rho = \frac{I}{D \cdot \sigma} \qquad (7.2)

Equation (7.2) gives the quantification of a target material per investigated surface area ρ using massive particles (not photons) from an infinitely thin target slice, with the amount of incident projectiles D on the investigated surface and the amount of received events I of an interaction given by a cross-section σ. Equation (7.2) states a proportionality between the concentration of the sought material and the detector signal I. Together with counting statistics (Sect. 7.1.1), this represents a technological limitation for detecting minute quantities, a detection limit. D represents the integrated beam current received by the sample. The more beam current and the more time spent for each measurement, the smaller the concentration ρ the method can detect. Thinking of a matrix of measurement pixels acquired for a 2D sample mapping such as Fig. 7.2, this directly defines the amount of points we can measure on a given sample area in a given analysis time. Therefore (7.2) also connects lateral resolution, beam current, detector properties, detection limits, and accuracy. In this section we discuss several methods based on photon (the types originating from accelerators), electron, and ion beams. These three types have distinct properties regarding technological availability, such as the outstanding quality of electron sources (Sect. 2.4), their continuous matter interaction (stopping or loss of quantity), and their nuclear properties regarding nuclear and electronic interactions. Furthermore, the particle energy distinguishes the analytical methods, with a particular difference between energies below and above about 1 MeV. Below this energy, mostly interactions with the electrons of the sample take place, while for higher energies the relevance of the electrons reduces and the importance of nuclear interactions increases. These six general accelerator-based analysis regimes translate into the methods discussed here. Due to technical feasibility, the high-energy photon and electron regimes are hardly commercially available, while high-energy ion analysis is just on the edge of becoming commercially available. As this book focusses on established technology, the following sections will be restricted to the commercially available subset.
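A minimal numerical sketch of (7.2) for a singly charged ion beam follows. Detector solid angle and efficiency are folded into an effective σ, as (7.2) does; all numbers are purely illustrative.

```python
# Minimal sketch of (7.2): rho = I / (D * sigma) for a singly charged ion beam.
E_CHARGE_C = 1.602176634e-19   # elementary charge in C

def areal_density_cm2(counts: int, beam_charge_uc: float, sigma_cm2: float) -> float:
    """Atoms per cm^2 of the sought species from counts, dose, and cross-section."""
    projectiles = beam_charge_uc * 1e-6 / E_CHARGE_C   # D, number of incident ions
    return counts / (projectiles * sigma_cm2)

# 500 counts after 10 uC with an effective cross-section of 1 mb = 1e-27 cm^2:
print(areal_density_cm2(500, 10.0, 1e-27))   # ~8e15 atoms/cm^2
```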


7.1.1 Physical Concepts, Detection Limits, Accuracy and Uncertainties

• Results wouldn't be nothing, nothing without an error or uncertainty (It's a results world)

Material analysis by accelerators relies on the interaction of photon, electron, ion, or neutron beams with matter (see Chap. 3). The only other degree of freedom is the particle energy (which is limited by technical and economic constraints). These statements make the physical world of the analysis smaller and easier to understand. The analytical information itself relies on the change of the properties of these projectile beams and/or the emission of secondary particles originating from interactions with the target/sample, and their subsequent detection. These interactions take place on different physical levels, depending on projectile type and energy. Three distinct levels describe these interactions, with higher energies probing smaller structures. In the low energy range up to some keV, chemical bindings are probed. Increasing energy opens access to the electronic structure of atoms connected to the elements, but also structures such as crystalline order become visible. In the MeV range, the Coulomb barriers begin to fall and nuclear information becomes accessible, adding isotopic sensitivity. Usually, technical limits of energy resolution blur the information of lower energy levels to invisibility for higher energy analysis beams. Consequently, we have to decide which type of information (elemental, isotopic, chemical) the analysis needs to yield in order to decide on its details. Neutrons differ from the other projectiles due to their extremely weak electro-magnetic interaction with targets, making them more universal but also the technically most challenging projectiles.

Not all data interpretation is as easy as imaging. Even imaging often contains layers of information; think of a map displaying not only land mass outlines but also altitude, ground type, or borders. With some physical understanding we could even derive the regions of intense rainfall or the movement of continental plates via their connection to mountains and altitude variations. Many of the methods discussed in this chapter are indirect methods. Unlike for example a voltage measurement with a multimeter, the sought-after value is not directly measured, but only its impact on a detected secondary particle spectrum. This type of measurement requires a model for data analysis, see Sect. 3.5. Models such as stopping power and kinematics allow for a more or less unambiguous evaluation of the measured spectra, extracting the analysis result from the raw data. For this interpretation, a model can employ a forward or a backward calculation ansatz. A forward ansatz varies the input parameters (e.g. the sample structure) of a given model until the resulting spectra match the measured ones. A backward ansatz applies mathematical inversion of given analytical models. Material analysis requires those physical and mathematical models for understanding the processes responsible for the obtained data. As such, the model also introduces additional systematic uncertainties through the limited amount of cases and physics considered for constructing the model. A simple example: the face


Fig. 7.4 The face-swap app as a model for the limited understanding of an algorithm. The limited knowledge and situations implemented in the algorithm let it identify the author's face and a drawing of Kofi Annan's face, although the latter is just a good painting, not a real face (the model yields a false positive)

recognition software of a face-swap app tries to find human faces in a picture via the typical eye-nose-mouth combination. Figure 7.4 shows an example where this model leads to a misinterpretation of data, since not only the human face features this structure. Data analysis by models potentially induces such ambiguities into results, therefore stay sceptical about modelling outcomes. Detectors, see Sect. 2.5, form an important technical aspect in this context, as any result can only be as good as the data used for it. The detector dynamic range, the ratio of minimum to maximum detected signal level, typically lies in the order of some thousands. In the digital world this originates from typically 8 (2⁸ = 256 levels) to 16 bits (2¹⁶ = 65,536 levels) of signal sampling. Even if the dynamic range is not limited by digital sampling accuracy, it remains difficult to reach relative accuracies below 10⁻⁵. In fact, many of the methods presented here feature about 1% = 10⁻² of dynamic range. Most physical measurements suffer from the same technological limits, leading to a stacking of problems. This starts at fundamental constants or cross-sections required for the data analysis, which are hardly known with better accuracy than the method applying them (since it is the same type of method they were determined with). Technically, a shift of sensitivity enables circumventing the limitations of dynamic range. In photography, modern cameras do this automatically by adjusting exposure time, aperture, and amplification (the ISO value). All analysis methods have mostly identical means for manoeuvring between large and small signal intensities. Unfortunately, this also has limits, resulting in the detection limit of the method, the smallest detectable signal. The elemental composition analysis described in Sect. 7.1.5, for example, typically allows for detecting parts-per-million (ppm = 10⁻⁶) of a single


element in a sample. Background effects, but also simple statistical limits of count rates, induce the detection limit. Due to these difficulties, physicists and mathematicians developed mathematical formalisms for dealing with errors/uncertainties. Both terms, error and uncertainty, are often used as synonyms. The word "error" sounds like a problem of the data, therefore today the more precise term "uncertainty" is common, as it describes a natural part of the measurement world. In this formalism, uncertainties are categorized into two groups: systematic and statistical (or random). This distinction is very important for comparing data to each other. Data acquired on the same device may all show the same systematic deviation from the same data acquired on a different device. Device A always delivers "systematically" say 20% higher values than device B, but both could still show the identical trend in their data. When measuring a certain value under the same conditions repetitively, the systematic uncertainty will not appear in between the data points; here the statistical uncertainty remains. This difference relates to the absolute uncertainty of measuring the true value and the relative uncertainty between two measurements following a certain trend. Statistical means the results will distribute around the true expected value of the measurement. An example: the dice-throw counting statistics experiment. Every number on the dice should appear equally often, but after say 6 throws, 2 numbers probably did not even appear once. The more often you throw, the more clearly the measurement indicates the equal probability of each number. Statistical uncertainties shrink with every repeated measurement and mathematically disappear in the limit of infinite measurements. Systematic uncertainties exist independent of the data statistics (loaded dice remain loaded no matter how often you throw them). In any case, the uncertainty represents a confidence interval in the sense that statistically not all results must lie within the specified uncertainty corridor. For example, specifying a 3σ uncertainty says 99.73% of the data points lie within the uncertainty corridor. The remaining data points may be well outside this range without disproving the result. Practically, several uncertainties occur simultaneously, such as counting uncertainties, device drifts, input data uncertainty, and so on. Summation rules allow for calculating the resulting total uncertainty. The result strongly depends on the connection between the uncertainties. Independent uncertainties originate from independent sources, for example the counting errors of two detectors. Dependent uncertainties are related to common effects, for example temperature drifts of two devices next to each other. Both devices see the same temperature drift, therefore a correlation term has to be subtracted from the sum of uncertainties in this case. Dependent uncertainties are practically often negligible, so they will not be discussed here. For a quantity A derived from several independently measured values x_i, the error propagation rule for independent x_i reads:

\Delta A = \sqrt{\sum_i \left(\frac{dA}{dx_i}\,\Delta x_i\right)^2} = \sqrt{\left(\frac{dA}{dx_1}\,\Delta x_1\right)^2 + \left(\frac{dA}{dx_2}\,\Delta x_2\right)^2 + \left(\frac{dA}{dx_3}\,\Delta x_3\right)^2 + \cdots} \qquad (7.3)


Calculating ΔA requires the equation deriving A from the x_i. The derivatives of this equation enable calculating the individual error contributions via the change (= first derivative) of A with respect to the variations in x_i. Multiplying this sensitivity towards x_i with the uncertainty range Δx_i yields the contribution to ΔA. Δx_i/x_i has to be small, or higher order derivatives are required for an exact solution, but if this is violated the measurement has anyway more severe problems than correct uncertainty propagation. We evaluate this for the example of (7.2) with A ≡ ρ, where we assume I, D, and σ to be uncertain.

\Delta \rho = \sqrt{\left(\frac{d\rho}{dI}\,\Delta I\right)^2 + \left(\frac{d\rho}{dD}\,\Delta D\right)^2 + \left(\frac{d\rho}{d\sigma}\,\Delta \sigma\right)^2} = \sqrt{\left(\frac{\Delta I}{D\sigma}\right)^2 + \left(\frac{-I\,\Delta D}{D^2\sigma}\right)^2 + \left(\frac{-I\,\Delta \sigma}{D\sigma^2}\right)^2} \qquad (7.4)

Equation (7.4) demonstrates the calculation of the total uncertainty of ρ. The reader may insert some exemplary numbers to check how the relative uncertainty Δρ/ρ depends on the uncertainties of the other values. Figure 7.5 provides an example of two different experimental counting situations of I as an inspiration. Counting statistical uncertainties are the prime example of uncertainties in analytical methods
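The exemplary numbers suggested above can be inserted with a few lines of code. The sketch below evaluates (7.4) in relative form; for the power-law model (7.2) the partial derivatives reduce to the relative input uncertainties. All input values are illustrative.

```python
# Minimal sketch of (7.4): relative uncertainty of rho = I/(D*sigma) from
# independent uncertainties of I, D, and sigma (illustrative values).
import math

def rho_relative_uncertainty(i, di, d, dd, sigma, dsigma):
    return math.sqrt((di / i) ** 2 + (dd / d) ** 2 + (dsigma / sigma) ** 2)

i, di = 400, math.sqrt(400)         # counting statistics (7.5): 20 counts, i.e. 5%
d, dd = 6.2e13, 0.01 * 6.2e13       # beam dose known to 1%
sigma, dsigma = 1e-27, 0.05e-27     # cross-section known to 5%
print(rho_relative_uncertainty(i, di, d, dd, sigma, dsigma))  # ~0.072, i.e. ~7%
```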

Fig. 7.5 NRA measurement (counts vs. channel number) with small and large integral signal level of the ¹⁰B(α, p₀)¹³C reaction on the same NCM811 battery material sample after 12 and 70 μC of α-ion dose. The number of counts increases proportionally. Evaluation of the boron content becomes increasingly accurate with more counts. In the 12 μC case a part of the interpretation becomes questionable due to the detection limit with 1 ± 1 counts, while the 70 μC case offers a large signal-to-noise ratio around channel 540


as many of them rely on counting particles. Derived from the Poisson distribution, (7.5) describes the uncertainty of the detected events/counts I as its square root.

\Delta I = \sqrt{I} \qquad (7.5)
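A short sketch of the 1/√I scaling of (7.5), in the spirit of the two dose situations of Fig. 7.5; the count numbers below are made up for illustration and are not read from the figure.

```python
# Minimal sketch: relative counting uncertainty dI/I = 1/sqrt(I) per (7.5).
import math

def rel_uncertainty(counts: int) -> float:
    return 1.0 / math.sqrt(counts)

for dose_uc, counts in [(12, 10), (70, 58)]:   # counts scale ~linearly with dose
    print(f"{dose_uc} uC: {counts} counts -> {100 * rel_uncertainty(counts):.0f}%")
# -> ~32% at the small dose, ~13% at the large dose.
# Reaching 1% counting uncertainty requires I = (1/0.01)^2 = 10,000 counts.
```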

The largest partial uncertainty dominates the total uncertainty of ρ, while three identical uncertainties result in an increase by a factor of √3 over a single one. In this special example, ρ depends on all three input quantities only with the power ±1, but in cases where higher-order powers or exponential relations connect the derived with the measured quantities, some quantities affect the result more strongly than others, and small uncertainties in a quantity with high scaling result in large uncertainties of the derived value. Complex models potentially even feature several incompatible outcomes within larger uncertainties if the dependence is not strictly monotonic (e.g. a sine function). Understanding errors allows improving an analysis technique on the technical side by identifying the critical values yielding the highest benefit for the cost of reducing their uncertainties. Finally, uncertainties have a certain notation standard:

A = 5.130^{+0}_{-0.15}
B = (1.23 \pm 0.1) \times 10^8;\; B = 1.23(10) \times 10^8
C = 1050 \pm 65\,(\text{sys}) \pm 123\,(\text{stat}) \qquad (7.6)

In the example (7.6) we have three uncertain quantities A, B, and C. A has a value of 5.130 without a positive uncertainty (it is an upper limit), but only a negative uncertainty of 0.15, a notation often seen in technical drawings where this type of uncertainty is called tolerance, ensuring e.g. that a pin fits into a hole. Quantity B has a symmetric uncertainty of 0.1 × 10⁸. Both notations of B have the same meaning, with the bracket version giving the uncertainty of the last digits of the preceding number. Quantity C features two separable symmetric errors, the systematic error of 65 and the statistical error of 123. Besides the accuracy on the single data-point level, many applications require mapping information in 2D or 3D. Accelerator-based methods implement the lateral resolution via the beam spot size. A smaller spot size leads to smaller/better lateral resolution, but in general also reduces the beam current (= signal intensity). Beam optics and special particle sources reduce the spot sizes with typical demagnifications in the order of 100. The limits of spot size are mostly of technical nature, with currently achievable spot sizes down to about 1 nm. Typically, the methods are prefixed with μ- or nano- in order to indicate a small spot size version of the regular method (e.g. SIMS vs. nano-SIMS). Apart from the spot size, these methods show no difference to the un-prefixed versions. Smaller spot sizes are not always desirable, not because it is good to hide information (it's not!), but because it depends on the sought information. Smaller spots are prone to irregularities such as dust or remainders of mechanical polishing on the sample, while larger spots average out these aspects, resulting in reduced uncertainties. Both types, local and mean result, contain different kinds of information. Last but not least, the selection of spot size, mapping area,


and point density defines the amount of required measurement time and processing resources. The holy grail of analytics, a full tomographic (3D) mapping, requires additional depth information in every measured point of the 2D map. Essentially, four ways towards depth resolved information exist. The energy loss of charged particles provides a connection between projectile energy, product energy, and depth (the stopping power S in units of dE/dx), as sketched below. Consequently, energy resolved detectors provide depth information (see Sect. 3.4), but straggling and energy resolution effects limit the depth resolution. This type of depth resolution represents the strong point of ions due to their lower straggling compared to electrons. The second option exploits sample rotation in transmission mode, similar to medical computer tomography. This mode requires beam transmission through the sample, generally requiring the long range inherent to photons. A more general option exploits the variation of range and reaction cross-sections with beam energy. The change (d/dE) of these parameters with energy defines the resulting depth resolution. Increasing projectile energy increases the range. Adding up results at several energies allows for deconvolving the results according to the depth dependent sensitivity at each energy. Lastly, the sample can also be mechanically sectioned and analysed in a cross-cut geometry. In addition to classical mechanical methods, sputtering based methods enable a removal of surface layers, revealing deeper layers with the progress of removal, see Sect. 7.1.4. Sputtering can also remove slices on the nm scale, as described for example in Sect. 7.1.3. Inhomogeneous sputtering and the surface modifications induced by the sputtering introduce uncertainties and roughness to the surface. Roughness generally challenges material analysis methods, in particular the ones on the micro- and nano-scale. Technical surfaces are rough. This has several implications, in particular for inhomogeneous media and layered structure analysis, as depicted in Fig. 7.6. An analysis beam larger than the characteristic lateral roughness scale will yield a different result than a beam smaller than the lateral roughness.
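As a sketch of the first of these four routes, the mean-energy approximation maps a measured energy difference to a depth via a constant stopping power. Real evaluations integrate S(E) along the path (Sect. 3.4); both numbers below are purely illustrative.

```python
# Minimal sketch of depth encoding via energy loss: depth x = dE / S in the
# mean-energy approximation with a constant stopping power S (illustrative).

def probed_depth_nm(energy_loss_kev: float, stopping_kev_per_nm: float) -> float:
    return energy_loss_kev / stopping_kev_per_nm

# 100 keV observed energy loss at an assumed stopping power of 0.05 keV/nm:
print(probed_depth_nm(100.0, 0.05))   # -> 2000 nm, i.e. a 2 um probing depth
```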

Fig. 7.6 Rough surface can mean a lot of things. The left part shows an example of a rough layer on a smooth surface, while in the right part the layer is smooth and the surface rough. Correspondingly, impact and exit angle and the definition of depth can change. The mean level indicates the surface as assumed for Ra. It has to be clear what the terms surface and depth mean. Is the surface the flat projection area or the true local inclined surface? How do a large and a small beam-spot change the result?


Angular variations of the local impact angle from the macroscopic impact angle alter the sample description as seen along the projectile path compared to what we would expect from a flat surface, in particular for grazing incidence methods. Exact surface profiles determined via profilometers or optical methods are often impractical as input for result interpretation. Several technical parameters exist to describe roughness in a reduced way based on statistical distributions. The value Ra states the average deviation of the local surface from a mean level. Milling, drilling, and turning typically result in Ra in the order of μm, while fine grinding and polishing reach values down to Ra ≈ 10 nm. The smaller Ra is compared to the range and the spatial resolution, the less impact the roughness will have. Consequently, microscopic analysis often requires mirror-finished samples for scientific result quality.

7.1.2 X-ray Absorption and Scattering Analysis

X-rays in the range of keV to about 100 keV offer a large variety of analytical methods, categorized into three methodical groups: imaging, spectroscopy, and scattering. Among the numerous methods in these categories, this book can only discuss a few selected examples. The x-ray production for these applications relies to the largest extent on bound electron type sources, namely x-ray tubes (Sect. 4.3.1). Above a certain voltage threshold (in Germany 30 kV) many countries require special radiation protection measures and/or trained personnel, due to the increasing penetration depth of the x-rays resulting in relevant outside doses. Devices with lower voltages and appropriate shielding can usually operate without these costly additions and therefore provide the workhorses of analytical x-ray applications. A few applications make use of free electron sources due to their higher brightness and wavelength variability, opening up additional experimental optimisation dimensions, e.g. a kind of x-ray "colour", and improved result quality. For mobile and handheld applications, also radioisotope x-ray and γ-ray sources are applied. The physics of the analytical methods is independent of the x-ray source, but the bound electron source variants have to deal with reduced signal intensity, broad x-ray spectra, and worse spatial resolution compared to the free electron source variants. Imaging exploits the photons for projecting a sample in real space onto a 2D detector. These detectors are usually special types of digital camera chips, but also imaging plates, arrays of individual silicon x-ray detectors, or scintillators can be applied. Analytical imaging probes all scales from macro- to micro- to nano-scale. For imaging, x-rays offer a different contrast than visible light due to the different absorption mechanisms in matter. More importantly, x-rays penetrate material, enabling a 2D see-through and 3D tomographic imaging impossible with visible light. The physical difficulties of x-ray optics prevent using classical imaging techniques for most applications. Instead, imaging offers a purely geometric option for magnifying objects, as depicted in Fig. 7.7. The spatial resolution using this technique mostly depends on the detector pixel pitch and the x-ray source dimension.


Fig. 7.7 Geometry results in a larger projected size of objects close to the x-ray source and distant from the detector. Via these distances a form of magnification can be set. Larger magnification goes hand in hand with an increase in image blurring, but smaller x-ray sources reduce this effect
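The geometry of Fig. 7.7 reduces to the intercept theorem. A minimal sketch with illustrative distances and an assumed focal spot size follows; the penumbra formula f·(M − 1) quantifies the blurring mentioned in the caption.

```python
# Minimal sketch of geometric x-ray magnification (Fig. 7.7) and the edge
# blurring from a finite source size. All numbers are illustrative.

def magnification(src_to_object_mm: float, src_to_detector_mm: float) -> float:
    return src_to_detector_mm / src_to_object_mm

def penumbra_blur_um(focal_spot_um: float, m: float) -> float:
    # Edge blurring (penumbra) caused by the finite source size: f * (M - 1)
    return focal_spot_um * (m - 1.0)

m = magnification(50.0, 500.0)        # object 50 mm, detector 500 mm -> M = 10
print(m, penumbra_blur_um(5.0, m))    # 5 um focal spot -> 45 um edge blur
```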

The method of x-ray absorption imaging, regularly applied not only in medical diagnostics, works in a similar fashion also in industry and material analysis. In this context it is called radiography and is part of several industrial norms (e.g. DIN EN ISO 19232, 17636, 5579, 11699). The contrast derives from the contrast of different materials (e.g. plastic vs. metal) and from the material thickness between source and detector. Defects, cracks, and porosity in a material reduce the effective thickness, inducing a contrast for x-ray inspection, even when they are hidden inside the material or when they are smaller than the spatial resolution. The penetration property of x-rays allows seeing into hollow objects such as tubes or chip packages (see Fig. 7.8) or compounds for locating problems without dismantling the objects. Practically the same method can deliver tomographic information by acquiring several images while rotating detector and source around the sample, similar to medical tomography (Sect. 6.1.1). In the technical context, smaller scales are of interest compared to the medical application. Consequently, x-ray micro-tomography in its latest form reaches spatial resolutions down to a few 10 nm (Maire and Withers 2014). Improving the spatial resolution below 1 μm requires x-ray focussing optics. These optics are not as compact as visible light optics, but several options such as Fresnel zone plates, gratings, or so-called K-B mirrors exist (Maire and Withers 2014). Computerized post-processing for extracting size distributions of for example inclusions, or volume fractions of different contrast zones (≈ materials), produces also quantitative information in addition to the qualitative imaging. Photon beams with narrow spectral distributions, such as monochromated synchrotron light, enable high contrasts for specific elements when spectrally placed on the K-absorption edge of the corresponding element. Using energy-resolving pixel detectors adds compositional information to the image using the spectroscopic methods discussed below. X-ray imaging allows for fast image acquisition in the order of several 10 ms to hours, depending on required contrast, spatial resolution, and imaged volume. This time-scale enables in-situ analysis of processes such as crack formation in mechanical testing. Figure 7.9 shows an example from biological science of analysing the effect of drugs on a parasite infection using a combination of x-ray imaging and spectroscopy


Fig. 7.8 Example of x-ray imaging enabling the inspection of the wiring and components in an integrated circuit chip without opening the structure. Reproduced from X-Ray_Circuit_Board_Zoom.jpg: SecretDisc derivative work: Emdee under CC-BY-SA-3.0

Fig. 7.9 This x-ray image shows details such as the vacuole of the two Plasmodium parasites (artificially coloured in blue and green) inside an infected blood cell (red) acquired using multiple wavelength synchrotron x-ray tomography with sub-μm resolution and a combination of imaging and XRF. Reproduced from (Kapishnikov et al. 2019) published under CC-BY 4.0

methods (Kapishnikov et al. 2019). Mapping of certain elemental components such as Fe or Br contained in the cells and the drug in combination with highly resolved imaging enabled following the action of the drug and its metabolisation. This brings us directly to the spectroscopic methods. In addition to the straightforward analysis of x-ray absorption, spectroscopy analyses the energy spectra of


secondary particles emitted upon x-ray absorption, providing additional information on the chemical composition of the sample. These spectroscopic methods exploit the energy of the photons and of the secondary particles released by them, requiring energy resolving detectors. These energies and their changes connect to chemical and elemental composition via the interaction of photons and bound electrons, resulting in an elemental contrast. The x-ray fluorescence (XRF) method sends x-rays into the sample to produce secondary x-rays of lower energy. The primary x-rays ionise and excite the atoms as depicted in Fig. 4.16, similar to the working principle of EDX (Sect. 7.1.3) and PIXE (Sect. 7.1.5) mentioned later on. Among this triumvirate of secondary x-ray based elemental analysis methods, XRF represents the compromise in terms of spatial resolution and accuracy, but with the lowest overall costs. EDX offers the best lateral resolution. PIXE provides the lowest detection limits and best accuracy. The excited electrons recombine, emitting again x-rays of an energy characteristic to each element. Free electron light sources enable improved contrast and lateral resolution compared to bound electron sources through their higher brilliance, allowing for smaller beam spots and exact excitation wavelength positioning on the elemental absorption edges (Fig. 3.4), resulting in a relatively stronger contribution of the absorption edge to the total signal than with broad excitation spectra. Figure 7.10 shows an XRF spectrum of a stainless steel. Each peak represents a certain atomic transition specific to the elements present in the target. The signal integrates over the penetration depth of the x-rays in the sample, hence depth information can only be acquired using tomographic methods. The typically good energy resolution of about 130 eV of modern silicon drift detectors enables separate quantification even of neighbouring elements. The secondary x-ray yields monotonically increase with incident x-ray energy, requiring a certain minimum x-ray energy for practically relevant detection limits.

Fig. 7.10 XRF spectrum (intensity in counts vs. x-ray energy from 4 to 9 keV) of a 1.4301 stainless steel using the handheld device depicted in Fig. 7.43 with 40 kV, 10 μA, and a Rhodium x-ray source target. XRF detects the main elements Fe, Cr, Mn, Ni via their Kα1 lines. Can you identify the unknown (???) peak using Fig. 3.4? (Answer: Fe Kβ1). Permission granted by Bruker Nano GmbH


X-ray photo-electron spectroscopy (XPS) exploits the information stored in the secondary electrons emitted upon x-ray absorption, the so-called photo-electrons. The x-ray energies of typically around 10 keV emitted from K-α line emission, see Fig. 3.4, allow releasing electrons from the atoms. The photons transfer their energy to the electrons bound in the target atoms. The resulting electron energies are characteristic for the specific atomic energy levels. The electron kinetic energy results from the photon energy minus the electron's atomic binding energy. This results in keV range electron kinetic energies. This electron energy restricts the information depth to a few nanometres below the surface. Figure 3.4 shows us some of these binding energies, but it displays only part of the truth. The displayed values represent the ideal situation of an isolated atom, but in molecules and compounds, e.g. H2O, the values change by some eV due to the binding situation. This chemical binding energy shift is the very reason we have stable molecules and compounds, and it provides the energy released for example by burning hydrocarbons. XPS allows determining these values. Different binding types with different energy shifts exist for the different molecules, see Fig. 7.12. Sometimes, a single stoichiometric composition can arrange in different binding configurations with different chemical and physical properties, enabling an identification and quantification of the chemical state via XPS. The technical realisation of XPS, with a crystal monochromator analyser attached to a bound electron x-ray source, and sample handling and manipulation, is shown in Fig. 7.11. The figure shows an XPS device with a hemispherical electron energy analyser for increasing the secondary electron energy resolution to sub-eV. The high energy resolution in combination with the finite dynamic range of the detector requires setting it to a certain atomic binding level for each measurement, but the close energies of chemical binding states require this energy resolution level.

Fig. 7.11 Technical implementation of XPS with the typical hemispherical electron energy analyser attached to a larger system with sample loading and other techniques. Copyright Forschungszentrum Jülich/Tobias Wegener
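A minimal sketch of the photoelectron energy balance described above, E_kin = E_photon − E_binding (neglecting the spectrometer work function). The Al Kα line is a common laboratory XPS excitation; the C 1s binding energies are rounded literature values, used here only to illustrate how a chemical shift of a few eV appears in the spectrum.

```python
# Minimal sketch of the XPS energy balance: E_kin = E_photon - E_binding.
# Al K-alpha is a standard lab excitation; binding energies are rounded
# literature values for illustration, not taken from this book.

AL_K_ALPHA_EV = 1486.6

def kinetic_energy_ev(photon_ev: float, binding_ev: float) -> float:
    return photon_ev - binding_ev

for label, e_b in [("C 1s in C-C", 284.8), ("C 1s in C=O", 288.0)]:
    print(label, kinetic_energy_ev(AL_K_ALPHA_EV, e_b))  # shift appears 1:1 in E_kin
```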


Fig. 7.12 Molecular bindings change the electron orbitals or electronic structures, respectively. The total amount of electrons and quasi-neutrality remain conserved, but the binding energy changes from the mono-atomic value. This is the very basis for the binding and chemistry. The exact differences depend on the binding type and the binding partners. The differences are accessible by XPS

A background signal level arises in the detector due to reactions of the ejected secondary electrons with the sample on their way towards the detector. Due to the numerous required calibrations and the background, XPS usually achieves a statistical uncertainty in quantification in the order of 15%. XPS easily integrates into in-situ experiments for parameter studies. The method can even run at a few 10 mbar of ambient pressure for studying the interaction of surfaces with an atmosphere. Accessing this information and testing the influence factors for the formation of a certain binding configuration, or its impact on processes, reveals important details of the underlying chemical reactions. Figure 7.13 demonstrates such a study investigating the formation of hydrocarbons in the Fischer–Tropsch process on a cobalt surface. XPS reveals not only the presence of carbon, but it can quantify the occurrence of distinct binding states in relation to process parameters.

Scattering based methods represent a type of imaging accessing the reciprocal space via diffraction of x-rays on periodic structures in the sample material. These periodic structures have to be in the order of the x-ray wavelength, namely in the order of nano-metres. The reciprocal space working of scattering methods originates from larger x-ray scattering angles being induced by smaller sample structures, a reciprocal connection. These structures can be crystal lattice spacings, atomic positions in molecules, or microscopic precipitates/inhomogeneities of materials (e.g. pores in a solid). Figure 7.14 demonstrates the physics of diffraction. In this case, a lattice spacing d results in constructive interference of a parallel wave front if the path length difference 2δ equals the photon wavelength multiplied by a positive integer. This so-called Bragg condition connects the reflected intensity with the discrete structural features in the sample. Tuning of wavelength and incident angle θ allows scanning for these features. With this said, the method of X-Ray Diffraction analysis (XRD) is defined. XRD enables the investigation of mostly crystallographic properties. Each element, compound, and phase (e.g. α vs. γ iron) features a specific set of distances in its


Fig. 7.13 Carbon in-situ XPS spectra acquired with synchrotron light on a cobalt foil, first loaded with CO gas, then heated in vacuum (a, b) and in 0.133 mbar CO (c, d). a Shows the electron binding in three different states ascribed to adsorbed CO molecules, C impurities, and CHx molecules. b Summarises the signal levels, showing a reduction of adsorbed CO with temperature. c Demonstrates the separated quantification of adsorbed and gaseous CO and C impurities due to the state dependent electron binding to the C atom. d Shows the decrease of adsorbed CO with temperature and a corresponding increase of CHx and C. Reprinted from (Salmeron 2018) with permission by Springer

Fig. 7.14 A typical diffraction geometry on a regular lattice of spacing d results in constructive and destructive interference of reflections from different layers depending on impact and exit angle θ

lattice due to the specific atomic arrangements. With a given wavelength, typically generated with an x-ray tube and a crystal monochromator, the Bragg peaks can be scanned by mechanically rotating the sample against the x-ray beam and the detector via the angle θ. These so-called 2θ spectra show several peaks, as demonstrated in Fig. 7.15. Knowing θ and the x-ray wavelength, Bragg's condition allows calculating the lattice spacing d. Not only the fundamental distances, as depicted in Fig. 7.14, matter in a crystal. Think of the crystal as an artificially planted forest where the trees have fixed distances. If you walk along the forest you will see all trees lined up behind each other from several positions. For crystals this works similarly, and a pattern of Bragg peaks shows up in the 2θ spectrum.
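A minimal sketch of this evaluation follows: the Bragg condition nλ = 2d sin θ solved for d. The Cu Kα wavelength matches the copper source of Fig. 7.15, but the peak position 2θ = 44.7° is a hypothetical reading, not a value taken from the figure.

```python
# Minimal sketch: lattice spacing d from a 2-theta peak position via the
# Bragg condition n*lambda = 2*d*sin(theta). Peak position is hypothetical.
import math

CU_K_ALPHA_NM = 0.15406   # Cu K-alpha wavelength in nm

def lattice_spacing_nm(two_theta_deg: float,
                       wavelength_nm: float = CU_K_ALPHA_NM, order: int = 1) -> float:
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

print(lattice_spacing_nm(44.7))   # ~0.203 nm, close to the alpha-Fe (110) spacing
```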



Fig. 7.15 XRD spectrum of Eurofer-97 stainless steel coated with a thin Y2O3 layer using a copper x-ray spectrum (intensity versus 2θ). The lines below the spectrum indicate where the reflexes of each compound can be found. From the steel only Fe contributions are visible. The blue line at the bottom represents the difference between fit and experimental data. Courtesy of Anne Houben

These peaks allow identifying the contributing structures in mixed compounds by fitting spacing databases for elemental and compound materials to the measured data. When walking through a natural forest where all trees grew randomly, we cannot see such a long-distance order; at best a limited order exists on short distances, induced by the minimum distance two trees require to grow. In XRD this situation equals analysing an amorphous material. The disorder of amorphous materials makes them invisible to XRD; the method works only for ordered structures. XRD distinguishes whether a compound's constituents are present as separate phases (e.g. precipitates in a matrix) or as a single mixed/solute phase (e.g. intermetallic phases). Separate phases show the individual reflexes of both constituents at reduced intensities. Mixed phases typically show lattice spacings in between the pure elemental values of their components. The semi-empirical Vegard's law states a linear scaling of the resulting spacing with the mixture ratio, as sketched below. Vegard's law allows for a rough quantification of mixed phases, but generally technical limitations prevent a clear quantification of compositional aspects using XRD. Inhomogeneity, roughness, and surface layers (e.g. oxides) influence the XRD result. Together with the varying beam spot size and penetration depth when varying θ, a sound quantification becomes nearly impossible. Some compounds such as CrO2 and FeO2 feature very similar Bragg peak patterns, preventing a separation of both.
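A minimal sketch of Vegard's law follows; the pure-phase spacings below are illustrative placeholders, not measured values.

```python
# Minimal sketch of Vegard's law: the mixed-phase lattice spacing
# interpolates linearly with the atomic fraction x of component A.
# The pure-phase spacings below are illustrative placeholders.
def vegard_spacing(d_a: float, d_b: float, x_a: float) -> float:
    """Estimated lattice spacing for atomic fraction x_a of A in B."""
    return x_a * d_a + (1.0 - x_a) * d_b

# Inverting the law gives a rough composition from a measured spacing
d_a, d_b, d_measured = 0.2866, 0.3165, 0.2950  # nm, hypothetical
print(f"x_A = {(d_measured - d_b) / (d_a - d_b):.2f}")
```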


For the analysis of non-crystallographic structures, Small-Angle X-ray Scattering (SAXS) can be applied. Specialised literature covers the application of small-angle scattering to different tasks in materials science and biology (Chaudhuri et al. 2017; Schnablegger and Singh 2017). The website (Brookhaven National Laboratory 2018) provides some basic information and evaluation software links. SAXS detects nm-sized features through scattering of x-rays at electron density contrasts. For example, a pore in a metal represents a maximum reduction of the electron charge density to zero. This changes the local refractive index for the x-rays. SAXS allows measuring, for example, porosity or precipitate size distributions, their shape, and orientation (for example elongated cylinders or spheres). Biological small-angle scattering, a variant of the materials science SAXS, extends this towards structural analysis of biological macro-molecules, polymers, and nano-composites. Figure 7.16 shows a synthetic SAXS result comparing three different sizes of pores present in a metal. The q coordinate represents the scattering angle. Due to the reciprocal nature of the measurement, larger q equals smaller structure sizes. In other words, small objects induce large-angle deflections while large objects induce small-angle deflections. The graph has three distinct regions. At low reciprocal dimension q a more or less constant intensity limit occurs. This is followed by an exponential decay in the so-called Guinier region. The slope of this decay depends on the particle dimensions. We can see the Guinier region shifting towards higher q for smaller porosity scales. Towards higher q the Porod region with its hill and valley structure follows. The Porod region represents x-rays scattering on the inclusion-to-matrix interface. The slope of this region has a fixed value of −4 for smooth particle-to-matrix interfaces, but differs from this value for irregular or fractal inclusions. The absolute intensity in this region depends on the surface area per unit scattering volume of the inclusions. A small pore has less surface than a larger pore, resulting in an increase of the signal level with pore size for a constant pore number density. Together with information on the inclusion shape, this enables determining their size and form.
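The two limiting regions can be written down compactly; the sketch below implements the standard Guinier and Porod expressions with illustrative parameters (not the SASfit settings of Fig. 7.16).

```python
# Minimal sketch of the SAXS limit laws discussed above:
# Guinier region: I(q) ~ I0*exp(-(q*Rg)^2/3) at low q;
# Porod region:   I(q) ~ K/q^4 for smooth interfaces at high q.
# All parameter values are illustrative placeholders.
import numpy as np

def guinier(q_per_nm: np.ndarray, i0: float, rg_nm: float) -> np.ndarray:
    """Low-q intensity for particles with radius of gyration rg_nm."""
    return i0 * np.exp(-(q_per_nm * rg_nm) ** 2 / 3.0)

def porod(q_per_nm: np.ndarray, k: float) -> np.ndarray:
    """High-q intensity; the fixed -4 slope on a log-log plot."""
    return k / q_per_nm ** 4

q = np.logspace(-2, 0, 5)   # 1/nm
print(guinier(q, i0=1e14, rg_nm=10.0))
print(porod(q, k=1e6))
```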


Fig. 7.16 SASfit 0.94.11 simulation of a SAXS spectrum for three different pore sizes (1, 10, and 100 nm) with Gaussian distribution functions and constant number density, plotted as intensity versus q (1/nm). The graphs show a clear difference in the scattering pattern allowing analysing measured data by fitting. The left box marks the Guinier region, the right box the Porod region


Fig. 7.17 GI-SAXS measurement principle. The x-ray beam reflects on the surface (green arrow), resulting in the specular peak. Scattering at electron density fluctuations in the material diverts the photons by a few degrees (red arrow), resulting in a pattern representative of the size and property distribution of the scattering centres. Larger q equals larger scattering angle. From Smilgies, Public domain, via Wikimedia Commons

At even higher q than displayed, the Bragg region starts, as discussed above. SAXS has several technical variants depending on the details of the geometry. SAXS requires a 2D detector behind the sample in beam direction. Figure 7.17 depicts the geometry of the grazing-incidence (GI-SAXS) variant, which enables analysis of thick samples (i.e. non-transparent for the x-rays). Classical SAXS works in transmission, but the transmission geometry requires thin samples since the sample has to transmit a certain fraction of the photons. In Wide-Angle X-ray Scattering (WAXS) the detector is positioned very close to the sample for measuring larger scattering angles, equal to smaller structures. Increasing the detector-to-sample distance reduces the maximum detected scattering angle due to the geometrical reduction of solid angle. Detectors mounted on rotating and translating stages extend the range of detectable scattering angles, enabling selection between the different variants in a single setup. In the biological sciences, SAXS can provide insight into structural information on molecular assemblies. The structure and assembly of proteins strongly influence their function. The large number of existing proteins and the development of new forms and applications make structural analysis, and with it the understanding of a protein's working and function, an ongoing and extensive task. The study (Hura et al. 2009) determined several protein structures using a high-throughput SAXS setup with samples consisting of the proteins of interest in liquid solutions of only 12 μl. The measurements used synchrotron light due to its higher brilliance, requiring less sample material yet providing more photon-energy flexibility. Figure 7.18 shows


Fig. 7.18 SAXS provides information on shape and assembly of proteins in solution. a Experimental scattering data (coloured lines) compare well to SAXS theoretical calculations (black) of the known structures of 10 proteins. b The measured protein envelopes derived from the SAXS data overlaid with the existing molecular structures (ribbons) from two perspectives with 90° molecule rotations as indicated by the arrows. Monodisperse samples were used. All monomeric units had a 9-amino-acid His tag attached. Protein colouring according to curve colouring in (a). Reprinted from (Hura et al. 2009) with permission by Springer

the comparison of the structures of 9 reference proteins analysed by SAXS in this study. Data analysis and molecular structure codes allow defolding the SAXS data to obtain structural envelopes of unknown molecules to a spatial resolution (…).
In practice, different count rates due to the largely different cross-sections and significantly different product energies potentially require two specialised detectors for detecting the full energy of all particles. Figure 7.31 demonstrates the main problem of RBS: The signals of bulk elements overlap. In the measurement we obtain only a single value per product energy, therefore we cannot disentangle the individual elements contributing to it. Every element starts with an edge, the so-called surface peak, with a slope given by the energy resolution of the detector setup. This edge represents the sample surface. From here on the projectiles enter the sample and continuously lose energy by stopping. If we consider the sample as a sandwich of depth slices, every slice features its own projectile energy according to the stopping of the beam. Lower beam energies also result in lower product energies, consequently extending the RBS signal to lower product energy with every layer of the sandwich. The heavier an element, the higher its Rutherford cross-section and the higher its surface peak energy (due to the backscattering nature of the measurement). Consequently, lighter elements are buried in the statistical counting uncertainties of heavier elements, making it impossible to detect for example C in W.
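The mass dependence of the surface-peak energy follows from two-body elastic kinematics; the sketch below evaluates the standard kinematic factor for a few targets (the 165° detector angle is an assumed, typical RBS geometry).

```python
# Minimal sketch of the elastic kinematic factor K = E_out/E_in for
# backscattering: heavier target atoms return more energy, which is
# why their surface peaks sit at higher product energies.
import math

def kinematic_factor(m1: float, m2: float, theta_deg: float) -> float:
    """K for projectile mass m1 on target mass m2 (amu), lab angle theta."""
    t = math.radians(theta_deg)
    root = math.sqrt(m2 ** 2 - (m1 * math.sin(t)) ** 2)
    return ((root + m1 * math.cos(t)) / (m1 + m2)) ** 2

for target, m2 in (("C", 12.0), ("Fe", 56.0), ("W", 184.0)):
    print(f"4He on {target:2s}: K = {kinematic_factor(4.0, m2, 165.0):.3f}")
```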


Elastic Recoil Detection (ERD), a variant of RBS in grazing incidence, improves mass and depth resolution to resolve this. Both the Rutherford cross-section and the stopping power increase with projectile proton number, resulting in improved detection limits (cross-section) and depth resolution (stopping power) when using heavy projectile ions. By choosing incidence angles above 45°, the transmission direction moves to the sample front, allowing a detection of the light and heavy product particles which are otherwise stuck in thick samples due to the conservation of forward momentum. The heavy product represents the target recoil in elastic scattering, hence ERD yields the best resolved compositional information. In contrast to RBS, where the detector only receives particles of the projectile type, and to NRA, where mostly protons and α's are received, ERD releases product particles of all elements and isotopes present in the sample. Identifying not only the energy but also the mass (or momentum p = m·v) of the product therefore yields additional information. Time-of-flight detector tubes with a thin transmission (timing) and a thick energy detector in series produce the required energy versus velocity maps as depicted in Fig. 7.32. Each element/isotope has a typical curve in this picture, originating from the connection of velocity and kinetic energy through the mass (E = mv²/2). The top left represents the surface and the lower right edge the maximum analysis depth. The intensity is connected to the Rutherford elastic scattering, allowing for an absolute quantification similar to RBS.
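A minimal sketch of how such an energy-velocity map separates masses: each coincidence of energy and time of flight yields the product mass via E = mv²/2. The flight path and the event values below are hypothetical assumptions; real setups add calibration offsets.

```python
# Minimal sketch: recoil mass from a ToF-ERDA coincidence event,
# m = 2*E/v^2 with v = L/t. Flight path and event values are
# hypothetical placeholders.
FLIGHT_PATH_M = 0.5       # assumed timing-detector separation
KEV_J = 1.60218e-16       # J per keV
AMU_KG = 1.66054e-27      # kg per amu

def recoil_mass_amu(energy_kev: float, tof_ns: float) -> float:
    """Product mass in amu from detected energy and time of flight."""
    v = FLIGHT_PATH_M / (tof_ns * 1e-9)          # m/s
    return 2.0 * energy_kev * KEV_J / v ** 2 / AMU_KG

print(f"{recoil_mass_amu(4000.0, 62.3):.1f} amu")  # ~12: a carbon recoil
```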

Fig. 7.32 Time-of-flight elastic recoil detection analysis (ERDA) with an iodine beam. The Y-axis shows the detected particle energy and the X-axis its time-of-flight, together spanning the whole energy–momentum space of the detected particles. Reprinted from (Siketić et al. 2018) licensed under CC-BY 4.0


The elastic scattering techniques are particularly good in depth resolution, but several factors limit it. The technological limit is given by the maximum stopping power (at the Bragg peak) and the energy-loss straggling. Technical limits arise from the dependence of the scattering energy on the scattering angle. The finite spatial extent of the beam spot on the sample and the finite detector aperture diameter directly lead to acceptance-angle windows for the scattered particles. The wider the acceptance-angle window, the wider the detected particle energy width, since projectiles with different scattering angles (see the kinematics in equations (3.16) and (3.17)) and path lengths through the sample mix in the detector signal. Beam intensity and detector counting statistics imply lower limits for these technical dimensions, since a smaller beam diameter results in a smaller beam current for a given beam current density, and smaller detector apertures reduce the detector solid angle. Steels are a good example of the limits of RBS. All steels contain C and H as critical components for their mechanical properties, but RBS will hardly be able to quantify these light elements. A better contrast, in particular for light elements, is achieved by Nuclear Reaction Analysis (NRA) of inelastic nuclear fusion reactions. The fusion reactions (Q > 0) introduce extra energy into the reaction, allowing a separation of NRA products from the elastically scattered particles, which are limited in their kinetic energy by the beam energy and the elastic kinematics. Figure 7.31 demonstrates the proton peak of the D(3He, p)4He reaction lying isolated at ≈12 MeV in a basically background-free region, while RBS ends already at 3 MeV with many different reactions overlapping each other. Light elements such as hydrogen and lithium often occur in energy-related materials. Figure 7.33 shows a combined NRA+RBS analysis example of a Li-battery related material using a single passivated implanted silicon detector. The measurement contains quantity and depth distribution of all elements.


Fig. 7.33 3 MeV Proton beam based NRA and RBS analysis of a Li–Co–Mn–O ceramic thin film deposited on Ti with SimNRA 7.02 interpretation. The reduced slope of the Ti edge at 2700 keV and the width of the O peak at 2300 keV demonstrate the formation of a 150 nm TiO2 film between substrate and the 500 nm thick ceramic layer. The element lines correspond to the perfect stoichiometry which slightly differs from the actual layer composition


The peak heights mostly yield the element concentrations, the peak widths contain information on layer thickness (=energy loss), and peak height variations (also of the rising and falling edges) contain information on the intermixing of layers and elements (e.g. by diffusion). A critical factor for this high information content is the depth resolution R(x), measured in units of atoms/m². For charged-particle products it depends on the scaling of the product energy width ΔE_product with depth in relation to the stopping power S:

R(x) = ΔE_product / S    (7.7)

Equation (3.6) quantifies the decrease of stopping power S with projectile energy/velocity, or in other words the degradation of depth resolution with increasing projectile energy. The product energy width ΔE_product in (7.7) depends on several factors. Ideally, mono-energetic projectiles result in a single product energy (Sect. 3.4), but the detector resolution, energy-loss straggling, and geometrical aspects always lead to a window of possible product energies even for a mono-energetic beam. The depth resolution at the very surface depends on the ratio of stopping power to detector resolution plus geometrical broadening effects (here the energy-loss straggling is still zero). This so-called geometrical straggling originates from finite beam and detector sizes resulting in distributions of scattering angles and outgoing path lengths accepted by the detector, see Fig. 7.34. In Sect. 3.3.2 we discussed the impact of the reaction angle on the product energies (there E3 and E4). With increasing depth in the sample, energy-loss straggling induces a distribution of projectile energies, further limiting the depth resolution, see the right picture in Fig. 7.34. With increasing projectile impact angle against the surface normal, the depth resolution improves due to the sine-like increase of the projectile path length for reaching a certain depth (Fig. 3.15 in Sect. 3.4).
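A minimal numerical sketch of (7.7) near the surface, where only the detector resolution contributes; the stopping-power value is an illustrative placeholder, not a SRIM table entry.

```python
# Minimal sketch of (7.7): near-surface depth resolution from the
# product-energy width and the stopping power. The 15 keV FWHM
# follows Fig. 7.34; the stopping value is an assumed placeholder.
DETECTOR_FWHM_KEV = 15.0
S_KEV_PER_1E19_ATOMS_M2 = 0.02   # assumed effective stopping power

def depth_resolution(de_product_kev: float, stopping: float) -> float:
    """R(x) = dE_product/S, here in units of 1e19 atoms/m^2."""
    return de_product_kev / stopping

print(f"{depth_resolution(DETECTOR_FWHM_KEV, S_KEV_PER_1E19_ATOMS_M2):.0f} "
      "x 1e19 atoms/m^2 near the surface")
```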


Fig. 7.34 Left: Geometrical straggling/broadening arises from finite spot size connected to different product origins and from finite detector apertures accepting a range of reaction angles around the design angle. Right: At the surface the detector resolution (here 15 keV FWHM) limits the depth resolution, but with increasing depth (>1.2 × 10²³ atoms/m²) the energy-loss straggling dominates the product energy width


Fig. 7.35 The depth resolution changes with energy, impact angle, and depth, at the example of the 12C(3He, p0)14N reaction. Left: the best resolutions in units of 10¹⁹ atoms/m² are attained at low energies and large angles. Right: The same analysis from different sides of the sample or on cross-cuts (white line) extends the tomographic range, covering potentially the whole block

With impact angle the beam spot size increases elliptically. Figure 7.35 (left) summarises the impact of projectile energy and impact angle on the depth resolution. In the given example, the depth resolution improves by a factor of 10 when changing from normal incidence with 5 MeV ions to 80° incidence with 1 MeV ions. Analysing the sample from different sides increases the accessible volume. Short pathways are desirable for the analysis, since straggling continuously destroys the information retained in the products by broadening their energy distribution. Investigating the sample from the side or on a cross-cut allows probing the whole sample volume (Fig. 7.35, right), but so far the technical limits of focussing render this alternative the option with the worse spatial resolution. The analysis requires the projectile to reach the analysed depth, but also the products have to leave the sample to be detected. On the lower end (…)

References

(…) in gaseous and solid media. Atomic Data 5(2) (1973)
A. Xu, D.E. Armstrong, C. Beck, M.P. Moody, G.D. Smith, P.A. Bagot, S.G. Roberts, Ion-irradiation induced clustering in W-Re-Ta, W-Re and W-Ta alloys: an atom probe tomography and nanoindentation study. Acta Mater. 124, 71–78 (2017)
J.F. Ziegler, J.P. Biersack, M.D. Ziegler, SRIM—The Stopping and Range of Ions in Matter (Chester, 2008)

Chapter 8

Energy Production and Storage

Abstract Nuclear fusion with a magnetically confined plasma is a candidate method to produce vast amounts of energy, neutrons, and rare isotopes such as T and 3He. Accelerator-based realisations can be simpler compared to plasma devices, but they are considered unsuitable due to the dominance of elastic scattering over the relevant inelastic reactions. This chapter discusses three aspects where accelerators potentially contribute to the energy source landscape: spallation, nuclear batteries, and nuclear fusion. All of them use depth-integrated nuclear reactions with hydrogen isotope ions. Their basic working and development pathways will be derived from this.

Nature is fair, for every good you also receive a bad

All energy applications of accelerators are, so far, theoretical or in early concept stages, but the ideas form an interesting testing ground for the technical and physical knowledge presented in this book. The energy production concepts presented here rely on efficiency: optimising the ratios of different cross-sections, stopping power, radiation protection, and nuclide selection. In Chaps. 4 and 5 we learned the physics of nuclear reactions, the Q-value (which is the essence of Fig. 8.1), and the decay of unstable nuclides. These three aspects form the core of the nuclear energy applications of accelerators. In fact, stopping power represents a fundamental limit, but technical concepts for mitigating it exist, as the reader will learn in the following sections. The interesting aspect of nuclear power in general, not limited to accelerator reactors, lies in the large amounts of energy connected with nuclear bonds, compared to chemical bonds. Figure 8.1 depicts the situation along the nuclide table. By fusion of light nuclei or fission of heavy nuclei, the binding energy increases up to the natural maximum of 8.79 MeV/nucleon in 56Fe. For comparison, the formation of CO2 from C and O2 releases about 4.1 eV, more than 10⁶ times less energy per reaction. This enormous difference in the energy content of nuclear and fossil fuels makes nuclear power attractive. The masses of fuel required decrease accordingly, reducing environmental impact and costs. This high specific energy content makes nuclear energy particularly interesting for space applications, but even more importantly


Fig. 8.1 Average binding energy of nucleons. A maximum binding occurs at 56 nucleons (56Fe). The energy released upon separation of heavy nuclei or combination/fusion of light nuclei is in the ten to hundred MeV range, making nuclear fuels a million times denser than chemical fuels with their eV binding energies

nuclear power also promises to be an ideal energy source with the perspective of saving humanity from one of its worst threats: climate change. Climate change threatens our planet's nature and our society's economy. Fatalities related to fossil fuel consumption and climate change were estimated to be already in the order of 315,000 per year (Annan et al. 2009), with numbers increasing steadily. Nuclear power can provide a carbon-free alternative, but nature is fair and this significant advantage comes with a price. The main risks associated with nuclear power relate to nuclear proliferation (the use of products for weapon or terror purposes) and the risks of nuclear accidents releasing radioactivity. The efforts invested into nuclear safety constitute a major, if not the main, part of the costs of nuclear devices and reactors, with significant progress in operational safety until now. The perception of these risks constitutes the biggest threat to the commercial success of the nuclear industry, yet certainly mistakes were made and remain possible in the future. The only nuclear power source successful until now is the fission of 235U. With a binding energy of 7.59 MeV/nucleon, it offers a potential of 235 × (8.79 − 7.59) ≈ 282 MeV per 235U nucleus for a complete fission down to the strongest binding (56Fe). Typical reactions split the 235U into two fragments around mass 90 and 140, releasing about 200 MeV and a few neutrons. These neutrons continue the reaction by splitting the next 235U they find. If one or more neutrons per reaction survive the environment to induce the next fission reaction, the reactor is called critical; the reaction is self-sustaining. Unfortunately, this depends on the composition of the fission fuel, which changes over time with progressing burn-up. In the end, in all current reactor designs (the so-called generations 1–3) the reactor fuel loses its criticality long before consuming the fissile fuel, and we end up with tons of radioactive waste.
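A short numerical check of the energy-content comparison made above (the chemical reference value is the text's 4.1 eV per CO2 molecule):

```python
# Sketch of the fission vs. chemistry energy comparison: complete
# fission of 235U down to the 56Fe binding maximum vs. one CO2 bond.
E_FE56 = 8.79            # MeV per nucleon
E_U235 = 7.59            # MeV per nucleon
e_fission = 235 * (E_FE56 - E_U235)   # ~282 MeV per 235U nucleus
e_co2 = 4.1e-6                        # MeV per CO2 formation
print(f"{e_fission:.0f} MeV vs {e_co2:.1e} MeV, "
      f"ratio {e_fission / e_co2:.1e}")
```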


Future so-called generation 4 fission reactors aim at using a larger part of the fuel, improving economy and safety at the same time. The concepts are interesting, but technically more challenging compared to current designs. Social and economic acceptance is particularly challenging for nuclear power plants. The contradiction between the actual risks and the publicly perceived risks of fission power, compared to the widely accepted fossil power, is astonishing for an expert. It has to be admitted, though, that this imbalance strongly influenced the head-to-head race of renewables, fossil, and fission power in favour of the renewables. Regarding the costs per kWh, renewables probably wouldn't have reached their economic breakthrough without this public support. Consequently, the fundamental ideas of the next generation of nuclear power plants must combine improved economics, low waste, proliferation safety, and public safety to bring nuclear power ahead in this competition. Currently operated fission reactor technology compares in its technological complexity to other nuclear technologies such as nuclear fusion or accelerator-based technologies as cuneiform inscription does to computer writing. In this sense not only uranium and thorium fission reactors, but also fusion reactors (burning mostly deuterium) follow the generation 4 concepts to offer the prospect of a new acceptance of nuclear technologies through significant technological advances. Nuclear fusion research marks an outstanding example of the constraints social acceptance puts on the development and structure of technological thinking. Current technology would allow building a fusion power plant with positive output based on the tokamak or stellarator concept, but no one would do so. Fusion sounds like a good thing, so why not do it? The answer is simple: Why risk billions of euros without chances for a return of investment? Fusion science keeps this question in mind, focussing the research on pathways fulfilling this necessary economic condition, e.g. on reduced-activation materials, longer component service life, or reactor down-sizing. Several technological options such as the use of carbon plasma-facing components, the p-11B fusion, or a primary helium cooling cycle in a power reactor were already disregarded or at least postponed for this reason. Economic success needs a clear perspective for involving industry. This perspective has to be provided by public research in the form of a so-called DEMO fusion reactor. Scientists have to see these additional constraints as a chance to discover a clear path through the infinite amount of possible solutions for a technological problem. For the following accelerator-based power technologies, this path has still to be revealed, in contrast to the established technologies discussed in the last chapters. Keep that in mind.

8.1 Spallation Fission Reactors

Is it possible to bring a subcritical fission reactor into criticality by particle beams? Definitely yes, but can we realise it with positive energy output, considering stopping power (Sect. 3.2), and why should we do it if a fission reactor also runs fine without an accelerator?


Pressurized water fission reactors (PWR) produce vast amounts of isotopes throughout the nuclide chart. A particularly nasty subset of isotopes comes by the name of minor actinides. Minor actinides is actually an unphysical term, because these actinides are in no way minor to the major (U and Pu) or the normal actinides. The term originates from the fission industry and the fact that these actinides are present in spent fuel from uranium-fuelled and water-moderated reactors in smaller quantities than U and Pu. Physically some are even major to U and Pu in terms of energy content, see Fig. 8.1. The minor and even the major actinides (Pu and U) will not be completely burnt by the PWR reactor type, since it has certain design limits for its criticality and power output. The criticality represents the number of neutrons produced from the fuel per consumed neutron and has to stay >1 in order to keep the chain reaction running (Fig. 8.2). At the same time, these unused fissile nuclides contribute significantly to the radioactivity of the PWR waste. Consequently, a lot of radioactive and energy-rich waste, and furthermore uranium fuel, remains to be dumped or chemically separated. Here the idea of a spallation-assisted fission reactor offers a solution. Figure 8.2 shows the decrease of criticality over the operational time of a PWR. After time step 26 in Fig. 8.2 the criticality falls below the required value of 1. In other words, the reaction produces less than 1 neutron per consumed neutron (=criticality < 1). An accelerator could fill this criticality gap of the fissile fuel by adding neutrons from proton-induced reactions to the equation (red area). The accelerator, not control rods, then defines the reaction rate. The combined fission device is often called an Accelerator Driven System (ADS). The ADS makes use of ion-induced fission and neutron-emitting


Fig. 8.2 Sketch of the fundamental technological problem of PWRs (potential criticality, uranium concentration, and Pu + minor actinides/waste versus time). The criticality decreases with decreasing fissile U concentration and increasing fission products. Due to the production of non-fissile neutron-absorbing isotopes, criticality even decreases faster than the U content. The control rods keep the criticality slightly below 1 until this becomes impossible due to insufficient fission neutrons; the fuel is spent for the PWR. A PWR has to stop operation at the latest at time step 26


reactions summarized under the term spallation reactions. By becoming independent of critical fission fuels (criticality > 1), the technological limit of PWRs, which restricts fuelling to enriched 235U, vanishes, and substantial amounts of new isotopes become available as nuclear fuel. A particularly nice aspect of these fuels lies in the fact that PWRs and fission explosives rely on fission of the same isotopes with criticality > 1. Consequently, spallation fuels avoid the risk of nuclear proliferation by using fuels with criticality < 1. Technically, the ADS will be a different reactor, for which the suitable fuel will be extracted from PWRs and optimised/enriched for the spallation process, but the principle of using the waste of PWRs remains untouched, as depicted in Fig. 8.3. Similar to existing nuclear reprocessing, the minor actinides (elements 93–100, except for 94, which is "major" in the PWR speak) will be separated chemically from the spent fission fuel with up to 99.99% separation efficiency, significantly reducing the nuclear waste. These nuclides contribute most of the long-term activity of PWR waste with half-lives of up to 16 million years. In contrast to existing nuclear reprocessing in combination with PWRs, the ADS enables using a significantly larger portion of the reprocessed material. The spallation technology also works with uranium or thorium fission reactors, but the expensive accelerator technology would negate the economic condition for this approach compared to a critical reactor. Therefore, considerations of spallation currently involve a connection with nuclear fission power by using fission waste as spallation fuel. In addition to power production it also offers the potential for an intense scientific neutron source as discussed in Sect. 4.2. This connection generates additional synergistic value by providing an attractive option for waste treatment in addition to the energy production. Partitioning and transmutation technology of

Fig. 8.3 Fuel cycle and nuclear waste with fission and spallation. Fission reactors produce subcritical nuclear waste. A reprocessing and separation plant makes spallation fuel from this. Only a small amount has to be stored and additional energy is generated from the existing nuclear fuel


the fission waste therefore represents a major technological ingredient of spallation reactors. High-energy ion beams, usually protons of some hundred MeV, induce (p, xn) and fission reactions in the ADS as discussed in Sect. 4.2. Fission reactions of the released neutrons act as a multiplier, increasing the energy release per incident proton. The criticality remains < 1, for example 0.95. The spallation reaction itself requires a different "criticality" limit: For a positive energy output, the energy multiplication factor of the overall system needs to be large enough to compensate the energy invested by the accelerator. In other words, the number of induced fission reactions per projectile times the released energy per fission event times the energy conversion efficiency has to be larger than the beam energy divided by the accelerator efficiency. For spallation-based neutron sources for scientific or isotope production, the multiplication only defines the beam power of the accelerator part required for a specific neutron flux, since a higher multiplication factor reduces the number of protons required for emitting one neutron. The nuclear reactions induced by the beam itself cannot provide a positive energy output since, as we learned earlier (Chap. 3), only a few percent of the projectiles induce nuclear reactions by beam-matter interactions, and not all of them feature Q > 0. The higher the beam energy, the more energy is lost per projectile, but also the better the ratio of nuclear reaction cross-sections to stopping power, similar to what was shown in Figs. 3.17 and 3.18. This results in a higher reaction probability, or in other words more neutrons per projectile (see Sect. 3.4). An optimum needs to be found. Many of the numerous reaction cross-sections of the target's heavy elements are not experimentally known. The reaction p+209Bi alone already lists about 90 reactions possibly contributing to the spallation process in (Koning et al. 2015). All of these reactions can hardly be measured for all the involved target isotopes. The uncertainties of theoretical models demand experimental verification for a solid device layout, complicating finding this optimum. Three major demonstration projects for this technology are currently being implemented. The MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) project with about 30 MWth aims at providing a research neutron source located at SCK-CEN in Belgium (SCK · CEN Studiecentrum voor Kernenergie 2017). The project China Initial Accelerator Driven System (CiADS) aims at a 10 MWth reactor in a first stage with a potential upgrade to 1 GW by a later addition of acceleration structures. The larger 800 MWth Japanese facility named J-PARC Transmutation Experimental Facility (TEF) intends to demonstrate the power reactor and waste reduction aspects. More details can be found for example in (Nakajima 2014). All facilities rely on linear AC accelerators delivering 500–1000 MeV protons with beam currents from a few mA up to several tens of mA onto lead-bismuth eutectic liquid spallation targets. Figure 8.4 depicts the complex accelerator design of the MYRRHA facility. The TEF intends to demonstrate nuclear fission waste transmutation covering the output of up to 10 running pressurized water reactors. CiADS aims at a technology demonstrator for later commercialisation. MYRRHA will potentially substitute the BR-II research reactor as a new type of research neutron source not relying on problematic highly enriched uranium.


Fig. 8.4 Schematic of the MYRRHA accelerator. The system features two alternative ion sources and about 240 m of RF acceleration cavities reaching 600 MeV with 3.5 mA. The required reliability necessitates redundancy and the use of semiconductor generated AC input power. Source SCK · CEN Studiecentrum voor Kernenergie (2017). Reprinted with permission by SCK

Besides the nuclear cross-sections for the different transmutation elements, numerous technical developments are necessary on the way to commercial ADS. In principle all technical components (accelerator, target, and reactor core) differ significantly from existing designs. The application of accelerators in the production and safety-oriented technological concepts induces additional constraints on the availability of the accelerator. In typical high-energy accelerators, irregular malfunctions such as high-voltage trips and regular events such as ion source maintenance easily take up to 10% of the possible yearly operational time. The MYRRHA accelerator is designed to have on average one incident with more than 3 s of downtime per 250 operational hours. This still sounds like a lot considering, for example, the availability of internet connections or the power grid, but for an accelerator complex of this size it represents a technical milestone which has yet to be proven possible. The strategy followed uses moderate specific acceleration (2.4 MeV/m), redundancy in the ion source and the acceleration part, and the application of semiconductor generators for the AC driving power instead of the typical klystrons (Fig. 8.4). A beam window separates this accelerator part from the target and reactor core as depicted in Fig. 8.5. Technical difficulties of this ansatz were discussed in Sect. 7.4 and limit the ion beam power density and the beam window radiation damage. The target itself consists of a liquid lead-bismuth neutron source located directly behind the window. The heavy metals generate neutrons from the beam and allow for an


Fig. 8.5 Proposed JAEA ADS design with 800 MWth . The beam enters the reactor from the top centre. A volume of liquid lead-bismuth generates neutrons through spallation. The fissile (MA) fuel in the core barrel around this central spallation target reacts with these neutrons. The heat generated in the core is dissipated via a flow of the liquid lead-bismuth indicated by the arrows. Reprinted with permission from JAEA-Review 2017-003

efficient cooling via pumping. An assembly of fuel rods similar to a PWR will surround this central target. For the TEF design, mixed minor actinide and Pu nitride pellets are envisaged. Finally, the hot liquid metal will power a steam generator. The development of the chemistry and corrosion features of the many different materials and the hot and reactive liquid metal represents a major materials challenge. The fuel also loses criticality during use, requiring compensation with higher beam current. A reduction of criticality from, say, 0.97 (33 neutrons per spallation neutron) to 0.93 requires increasing the beam current by a factor of 2.3 to reach the same power output. Consequently, the flexibility in beam current limits the fuel cycle time.
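A minimal sketch of this compensation, using the 1/(1 − k) neutron multiplication implied above:

```python
# Sketch: beam-current compensation for fuel burn-up. The subcritical
# multiplication scales as 1/(1-k), so constant power requires the
# beam current to rise as (1-k_end)/(1-k_start).
def current_scale(k_start: float, k_end: float) -> float:
    """Beam-current factor keeping the power output constant."""
    return (1.0 - k_end) / (1.0 - k_start)

print(f"{current_scale(0.97, 0.93):.1f}x")  # ~2.3, as stated above
```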


The energy production by the neutron-producing reactions alone is insufficient for a positive outcome, due to the high energy invested into the projectiles. For an ADS intending to produce energy, the energy efficiency and energy balance of the spallation process are critical parameters. The neutrons emitted in the spallation reaction react with the fissile actinides, generating additional fission reactions in an independent process chain. Neutron sources with heavy-element targets produce about 10 neutrons per ion via spallation (MYRRHA: 6 n/p), see Sect. 4.2. With a criticality of the fissile materials of, for example, 0.95, these neutrons will be multiplied by a factor of 20. Each neutron induces fission reactions with outputs in the 100 MeV per neutron range, releasing about 12 GeV per projectile in the MYRRHA example (similar for the JAEA design). Assuming an overall proton acceleration efficiency of 1/3, the 600 MeV projectiles will cost 1.8 GeV of electricity. From the 12 GeV, only 4 GeV will become available as electrical power with a typical steam cycle. This results in a multiplication factor of 2.2, or in other words, the facility consumes 45% of its produced electricity only for the accelerator. Considering the above-mentioned decay of criticality, 2.2 represents an alarmingly small margin. Coal or PWR plants consume about 10% of the generated power, indicating the technological challenge of an economical ADS power plant. The actual power output of the ADS will be defined by the beam current, and high currents are required for relevant power output (here 4 GW per 1 A beam current). The spallation and neutron reactions themselves produce a vast range of daughter nuclides, but the integrated decay time to natural uranium toxicity levels of the ADS waste potentially lies around only 300 years, 3 orders of magnitude less than PWR waste (Nakajima 2014).
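The whole chain condenses into a few lines; the sketch below reproduces the rough MYRRHA-style numbers used above (all inputs are the text's round figures, not design data).

```python
# Sketch of the ADS energy balance: spallation neutrons per proton,
# subcritical multiplication 1/(1-k), fission energy per neutron,
# and the two conversion efficiencies quoted above.
def ads_gain(e_beam_gev=0.6, eta_acc=1/3, n_per_p=6.0, k=0.95,
             e_per_neutron_gev=0.1, eta_steam=1/3):
    """Electric output per unit of electricity fed to the accelerator."""
    multiplication = 1.0 / (1.0 - k)                          # ~20 for k = 0.95
    e_thermal = n_per_p * multiplication * e_per_neutron_gev  # ~12 GeV/proton
    return (e_thermal * eta_steam) / (e_beam_gev / eta_acc)   # ~2.2

print(f"gain factor: {ads_gain():.1f}")
```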

8.2 Nuclear Batteries

Radioactive isotopes contain vast amounts of energy, which is constantly released during their radioactive decay by the emission of energetic particles. This physical fact allows seeing them as batteries. Nuclear batteries are primary batteries, meaning they cannot be recharged by an inversion of the discharging process. In contrast to electro-chemical batteries, they feature the major disadvantage of a fixed, load-independent power output. An electro-chemical battery, on the other hand, adapts to the power requirement and delivers, for example, also zero output power, equivalent to a maximum usage factor. The big advantage of nuclear batteries, on the other hand, is their extreme specific energy density. The currently most common nuclear battery isotope, 238Pu, has a theoretical energy density of 632 kWh/g (5.593 MeV α's and 2.53 × 10²¹ atoms/g), while an electro-chemical cell based on LiCoO2 (also known as a Li-ion battery) can theoretically store 0.55 Wh/g. These distinct differences open up the application space for nuclear batteries. In contrast to electro-chemical batteries, nuclear batteries produce fast particles instead of electrical potentials. Technical applications require electrical potentials up to some hundred volts, hence the nuclear battery requires a particle-to-voltage converter. In addition, most nuclear decay products feature energies of at least a


Fig. 8.6 A thermally glowing pellet of about 150 g of 238Pu oxide emitting 62 W of α-particle power. The oxide form stabilizes the material at the high temperatures required for efficient energy conversion. These types of pellets were used in NASA's Cassini and Galileo missions. Unless otherwise indicated, this information has been authored by an employee or employees of the Los Alamos National Security, LLC (LANS), operator of the Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396 with the U.S. Department of Energy. The U.S. Government has rights to use, reproduce, and distribute this information. The public may copy and use this information without charge, provided that this Notice and any statement of authorship are reproduced on all copies. Neither the Government nor LANS makes any warranty, express or implied, or assumes any liability or responsibility for the use of this information

few keV up to the above-mentioned about 5.6 MeV; in other words, they provide extremely low currents at extremely high voltages. A direct conversion by using the particles to charge a capacitor is therefore technically infeasible. First converting the particle energy via stopping and absorption into heat enables the use of thermal energy conversion technology. The Carnot limit defines the maximum efficiency of the thermal approach, requiring high temperatures. 238Pu easily reaches > 800 K, as depicted in Fig. 8.6 (Draper point of incandescence). Thermoelectric generators such as BiTe offer a reliable and compact option for energy conversion at the drawback of typical efficiencies < 10%. Heat engines, in particular the Stirling type, offer higher efficiency close to the Carnot limit at the expense of limited lifetime and larger setups with moving parts, but an application-ready setup has so far not been developed. The thermal conversion route comes into application for isotopes emitting most power via α-decay, such as 238Pu. The short range of a few μm hardly allows these decay products to leave the emitting material. On the other hand, the small outside dose rates enable a use near humans without excessive shielding. For β- and γ-emitters, the range is long enough to additionally consider non-thermal conversion techniques. Just like solar radiation, the emitted particles also separate charges in solar cells, producing electrical power. These so-called betavoltaic cells consist of μm-thin films of radioactive material evaporated on solar cells (or p-n junctions). The limited amount of radioactive material present in the thin films limits the output to the μW range. This is sufficient to power electronics and MEMS, but hardly scalable for larger applications. While thermal conversion allows


for a certain distance between emitter and converter, the unavoidable vicinity of emitter and generator introduces problems of radiation damage of the generator for betavoltaics. The most prominent use of nuclear batteries is NASA's deep-space missions. The satellite generators usually feature an output of some hundred watts. Due to the exponential nature of radioactive decay, the generator first provides excess power, which decreases over the years. In consequence, systems of the satellite/spacecraft have to be deactivated to reduce its power consumption. At some point in time, the generator output, in connection with the minimal power requirement of the spacecraft, defines the end of the design lifetime. A few niche applications for powering pacemakers or remote lighthouses (yes…) existed before the development of powerful electro-chemical batteries. Traditionally, that means for the main applicant NASA, nuclear batteries are based on 238Pu extracted from fission reactors. Its breeding requires special reactors, which were typically also required for nuclear weapon production. Its properties as a nuclear battery are tempting, with low external dose rates, high specific power output, and a technically relevant half-life of 87.7 years. Figure 8.7 displays its exponential decay of power output together with the radiation protection aspect. With the decline of the nuclear weapon activities, Pu-238 also became rare (Witze 2014), due to the technical connection of both, requiring alternatives. One of the main approaches of this book is: Things that can be done using a fission reactor are also possible using an accelerator. Now, having the basic knowledge of the functioning of a nuclear battery, and (hopefully) also of accelerators and beam-matter interaction, we can design an accelerator-based version. Possible nuclides have to be selected with respect to radiation hazards and power output. Additionally, the input vs. output energy efficiency gains increased importance in accelerators, since it determines the economy and output power for a given accelerator. Decay details and the nuclear production cross-sections define the energy efficiency. In particular, (p, xn) reactions are most relevant, due to the good performance of proton accelerators. Deuterons and helium ions only slightly extend the set of reachable isotopes.
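The 238Pu figures quoted here and in Fig. 8.7 follow directly from the decay law; a minimal sketch using the atom density and α energy given at the start of this section:

```python
# Sketch reproducing the 238Pu numbers above: specific power from
# activity times alpha energy, and its exponential decay with time.
import math

HALF_LIFE_Y = 87.7
ATOMS_PER_G = 2.53e21
E_ALPHA_MEV = 5.593
MEV_J = 1.602e-13
YEAR_S = 3.156e7

lam = math.log(2.0) / (HALF_LIFE_Y * YEAR_S)       # decay constant, 1/s
p0 = ATOMS_PER_G * lam * E_ALPHA_MEV * MEV_J       # ~0.57 W/g
p14 = p0 * 2.0 ** (-14.0 / HALF_LIFE_Y)            # ~10% lower, cf. Fig. 8.7
print(f"P(0) = {p0:.2f} W/g, P(14 y) = {p14:.2f} W/g")
```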


Fig. 8.7 Output of 1 g of 238 Pu. The dose rate is given for 1 m distance. The initial output of 0.57 W/g falls by 10% within the first 14 years, but the decay is exponential (line in this logarithmic graph). Only after 800 years, the decay products (mostly 234 U) start contributing to the overall activity


In any case, stable isotopes (including thorium and uranium isotopes) will be the basis of the concept. Accelerators can produce α, β, and γ emitters. In particular, the α emitters are interesting for nuclear batteries, but α-emitting isotopes sitting at the uppermost edge of the nuclide chart can only be reached by reactions with bismuth, thorium, or uranium. If we consider the requirement of about 100 years of half-life, since electro-chemical cells cover a few years and longer half-lives just reduce the activity of a given amount of produced isotopes, we end up with 209Po, 232U, and 238Pu as reachable and interesting candidates. The latter two are reached indirectly via a short-lived β-decay of an intermediate isotope. Table 8.1 lists these and a few other examples of possible nuclides. Below 1% of the projectiles actually produce a target nuclide in all cases. Figure 8.8 explains the problem of the production. At low energies the production probability strongly increases, but above a few tens of MeV the cross-sections decrease again, and the slightly stronger decrease of stopping power can hardly compensate this. Dividing the production probability by the input power, in order to consider the production per invested energy, leaves us with optima in the region < 100 MeV. Moreover, physics limits the energy released per decay to a few MeV. Only the decay of 232U grants a higher value due to a chain of 8 relatively short-lived decays following the primary decay, adding up the released energy. The production of a quantity of 232U releasing initially 1 W of radiation output power requires the impact of 0.149 MC = 9.3 × 10²³ protons on a pure 232Th target. This enormous number of protons equals 420 h of irradiation with 100 mA beam current, as the sketch below illustrates. These calculations intend to demonstrate the fundamental considerations of energy efficiency, optimisation of beam properties, and isotope selection required for an early assessment of the chances of such a technology. Nobody has so far developed accelerator-based nuclear batteries, but this case study demonstrates a topic requiring the combination of all the fundamentals discussed in this book.
P.S.: A charming, but physically questionable variant of nuclear batteries are beam-triggered de-excitation isotopes. Long-lived excited-state isotopes, most importantly the long-lived 180mTa, of which potentially 10 GWh are mined every year with 600 t of natural Ta, shall be de-excited by a LASER-like process started e.g. by collisions with beam particles. The long lifetime of these excited nuclei originates from forbidden transitions from the excited to the ground state due to a mismatch in spin quantum number. The state inversion exists, but how to fulfil the other requirements of a LASER, such as low absorption of the emitted photons?
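A minimal check of the 232U production example above, converting the quoted proton dose per watt into particle number and irradiation time:

```python
# Sketch of the 232U worked example: proton dose per watt of initial
# decay power (Table 8.1) converted to particle count and beam time.
E_CHARGE = 1.602e-19             # C per proton
DOSE_C_PER_W = 1.49e5            # 0.149 MC, from Table 8.1
BEAM_A = 0.1                     # 100 mA

protons = DOSE_C_PER_W / E_CHARGE          # ~9.3e23 protons
hours = DOSE_C_PER_W / BEAM_A / 3600.0     # ~414 h, i.e. the ~420 h quoted
print(f"{protons:.1e} protons, {hours:.0f} h at 100 mA")
```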

8.3 Accelerator Based Nuclear Fusion

The development of fusion power to give an unlimited supply of clean energy, and a switch to electric cars (Stephen Hawking)

Rising energy consumption, energy costs, and climate change are the most urgent problems of current energy production, endangering the industrialisation itself.

Table 8.1 Estimated properties of a few selected nuclear battery products generated by proton irradiation of bulk primary nuclide materials

Primary    Product    Half-life   Mean γ-energy   Energy per      Specific output    Production prob.   Proton dose per   Efficiency at 30,
nuclide    nuclide    (years)     (MeV)           decay (MeV)     power (atoms/W)    ∫σ/S dE (%)        output (C/W)      20* or 6.8** (MeV)
Ne-22      Na-22      2.60        2.20            2.80 (β)        2.64E+20           0.33               1.28E+04          3.08E−04
Cr-54      Mn-54      0.85        0.84            1.38 (β)        1.75E+20           0.33               8.49E+03          1.52E−04
Cu-65      Zn-65      0.67        0.58            1.35 (β)        1.41E+20           0.23               9.82E+03          1.04E−04
Zr-91,92   Nb-91      680.00      0.01            1.25 (β)        1.55E+23           0.20               1.24E+07          8.33E−05
Bi-209     Po-209     102.00      0.005           5.00 (α)        5.80E+21           0.05               1.86E+06          1.25E−04*
Bi-209     Po-208     2.93        2E−5            5.20 (α)        1.59E+20           0.18               1.41E+04          4.68E−04*
Th-232     U-232      69.8        Chain           Σ = 41.5 (α)    4.94E+20           0.14               1.49E+05          8.54E−03**
U-238      Pu-238     87.7        Chain           Σ = 50.5 (α)    ≤1.3E+21           0.12               6.58E+04          2.02E−03

The evaluation considers (p, n) and (p, 2n) reactions producing ground state nuclei, since these offer the highest cross-sections in the 30 MeV range of protons. In case several primary isotopes can be used for production, the one with the highest cross-section is taken. Nuclear data are extracted from ENDF/B-VII.1; cross-sections are taken from (Koning et al. 2015). All isotopes are assumed to be produced instantly (with infinite beam current). Calculations for Bi-209 are based on 20 MeV and for Th-232 on 6.8 MeV protons. Note: The high TENDL-2015 cross-sections of the Th-232 reaction are questionable and not verified




Fig. 8.8 The production probability of Po-208 from irradiation of Bi-209 by protons and its relation to the proton beam energy. At around 20 MeV the production efficiency starts to saturate at some 0.1%

Nuclear fusion has become the metaphor for a clean and sustainable, yet prosperous energy future. Fusion would enable us to use water as a primary source of energy by "burning", in particular, the deuterium isotope present in water (and other hydrogen-containing molecules) in a nuclear fusion reaction. This opens up deuterium as a new, environmentally and politically uncritical primary energy source. Making a cheap and easily available resource such as water the main input for our economy would end conflicts and solve pollution and climate change problems. Unfortunately, the technology required for an economical and safe usage of nuclear fusion is as far away from current energy technologies as an abacus is from a supercomputer. On the journey of finding the easiest technology for nuclear fusion power plants, only a few options have remained realistic. The most promising technologies are the stellarators and the tokamaks (Ongena et al. 2016). These devices confine hot plasmas through the use of complex closed magnetic fields. The fundamental concept of these reactors works similarly to the stars, which confine their plasma through gravity. The technical concepts feature the technological disadvantage of requiring a wall and a vacuum vessel around the plasma and mechanisms other than light emission to remove the released fusion energy from the vessel. The plasma production and heating require an initial energy input (gravity for stars and electricity in fusion reactors), but once the temperature reaches a few tens of keV (1 keV = 11,600,000 K), the technical fusion can also be self-sustaining. The most promising reaction is the D-T reaction D(T, n)4He, as it offers a high energy output together with a low reaction barrier (=required plasma temperature).


The D-D reaction would remove the problem of breeding tritium, which is not naturally available, but on the other hand it features an about 7 times smaller energy release per reaction. The D-D fusion actually consists of two reactions, namely D(d, n)3He and D(d, p)T. In D-D and D-T reactors, problems arise from the significant transport of energy and particles perpendicular to the magnetic field lines, leading to heat loads in the order of MW/m² and plasma particle fluxes > 10²⁴/m²s in the most loaded areas, together with about 10¹⁹ n/m²s. The necessity for fragile technology such as superconductors, together with the materials suffering from the intense neutron radiation from the fusion reactions, further complicates the situation. Accelerators are nowadays applied in a few different roles in nuclear fusion. Direct applications involve the heating of the fusion plasma via neutralised accelerated beams using large DC accelerators as depicted in Fig. 8.9. The process of stopping transfers the beam energy to the plasma. Besides heating via high-frequency absorption in the plasma, this neutral beam heating represents a fundamental technology for reaching the required plasma temperatures. Indirect applications include the application of the analysis methods discussed in Chap. 7 for quantifying the plasma-surface interaction and studying radiation damage and the behaviour of materials under the involved loading conditions.

Fig. 8.9 The ELISE neutral beam injector source. It delivers 23 A of negative hydrogen ions with a beam area of 1 m², which will be neutralised by a stripping gas before injection into the ITER tokamak. Courtesy of Robert Haas, Max-Planck-Institut für Plasmaphysik, Germany


In contrast to the plasma reactors, we have already seen nuclear fusion to be easily induced with particle accelerators. Unfortunately, in accelerators elastic nuclear reactions, which do not produce energy, impose a problem to the particle confinement. For plasma devices elastic collisions have less importance, since the collision partners and their energy remains in the plasma. These elastic reaction cross-sections dominate over the inelastic fusion reactions by orders of magnitude, therefore prohibiting a direct positive energy output. However, this numeric comparison fails to take the energy and particle confinement properties of accelerators, the so-called acceptance (Sect. 2.3), into account. This section will apply the knowledge gained throughout this book in order to make back-of-an-envelope level calculations for the feasibility of accelerator fusion. Different approaches and possible reactions for accelerator reactors will be considered in this case study. The production of rare isotopes, neutrons and energy are addressed as possible alternative applications. Finally, physical parameters and ratios are developed for a reactor approach to quantify the qualitative understanding of energy confinement and check whether current technology could provide an accelerator reactor operating with positive energy output and if not which technology might be sufficient. The potpourri of complex technologies involved in the confinement plasma reactor represents the major drawback of these technologies. In the accelerator language, these devices could be described as multi-species storage rings with zero beam energy and an extremely high emittance. If nuclear fusion works on this end of the accelerator parameters, why should it not work on the other end, namely a storage ring with low emittance and finite beam energy? Well, let us assume this as the basis of a technological concept. The technology of an accelerator reactor features some advantages and disadvantages over the plasma reactor. The main problem induced by the finite beam energy is the relevance of elastic scattering events trying to sabotage it. As we learned earlier, elastic collisions transfer energy from the directed (beam energy) to the directionless (emittance) component, forming a major difference between both concepts. While for zero beam energy the confined energy lies solely in the emittance, for finite beam energy the confined energy mostly lies in the beam energy. So far so good, but this technological disadvantage of accelerators is countered by a technological advantage of an easier confinement of the directed beam energy compared to the chaotic energy of the emittance. Physically it is impossible to separate elastic and inelastic (fusion) reactions of the beam particles, hence we have to find a workaround for the elastic part. The main problem of accelerator fusion therefore reduces to the competition of elastic and inelastic reactions and their cross-sections. In a zeroth order, this is the ratio of fusion to Rutherford cross-section. In reality this simplifies the physics too extensively. As we learned in the ion-beam analysis Sect. 7.1.4, the elastic cross-sections starts to deviate from the Rutherford value at about 50% of the coulomb barrier of 389 keV for H to H nuclei. In this case it apparently deviates at even lower energy, otherwise the D-T fusion would be impossible with its cross-section maximum at 64 keV. 
In order to optimize a setup and investigate its potential, the basic nuclear reactions, their cross-sections for the energy-producing fusion reactions, and the counteracting elastic energy-loss reactions are required. The energy-loss reactions are, depending on the reactor concept, electronic stopping and elastic scattering. Therefore, reactions of two light ions are in principle best suited. They require lower initial energies (Coulomb barrier) and suffer lower energy-loss cross-sections (mostly proportional to nuclear charge) compared to heavier reaction partners. An accelerator reactor can be used to produce neutrons, isotopes, or energy. All of these products are basically proportional in production rate and rely on the same reactions, but energy is the only product with a threshold (energy production minus device upkeep). Therefore, the following calculations focus on energy production, without loss of relevance for the other products. Table 8.2 investigates a few light-particle reactions by comparing their energy release Q with the approximately required beam energy, based on the lowest-energy cross-section maximum, and a figure of merit relating the Q-value to the invested beam energy E and the charges Zi responsible for elastic loss reactions. Among the feasible reactions depicted in Table 8.2, the D+T fusion promises by far the highest output efficiency, similar to the situation in plasma reactors. The table illustrates the principal limitation of an accelerator reactor with its different confinement compared to a plasma reactor: for each ion, elastic scattering can lead to a total loss of the invested projectile energy by removing the particle from the beam; therefore each fusion reaction needs to yield a multiple of the projectile energy in order to allow for a positive output with finite efficiencies. In other words, the projectile energy scales the loss related to the elastic scattering and the Q-value the gain of the fusion cross-section. In a simple estimation, this factor is 275 for D+T, due to this reaction's high Q-value and low required beam energy. The D-T, D-D, and D-³He reaction cross-sections, in the centre-of-mass (CM) system, were parametrized in (Bosch and Hale 1992) with a precision of 2% up to 1 MeV as

σF(E) = [A1 + E ∗ (A2 + E ∗ (A3 + E ∗ (A4 + E ∗ A5)))] / [1 + E ∗ (B1 + E ∗ (B2 + E ∗ (B3 + E ∗ B4)))] ∗ 1/(E ∗ e^(BG/√E))    (8.1)

Table 8.2 Compilation of feasible reactions for an accelerator fusion reactor

Reaction                         D+D     D+³He    D+T      T+T      p+¹¹B
Energy gain Q (keV)              3650    18,353   17,589   11,332   8682
1st cross-section peak (keV)     3000    270      64       2100     625
Q/(E(σmax) ∗ Z₁Z₂)               1.2     34       275      5.4      2.8

The rows depict the reaction energy gain Q, the first maximum of the fusion reaction cross-section, and a ratio demonstrating the principal feasibility based on a simplified view of the competition between fusion and elastic reactions in relation to the invested projectile energy. From this ratio it becomes clear that only D+T and T+T are feasible from a practical point of view with typical technical energy-conversion efficiencies of up to 50% and a closed fuel cycle. D+³He offers an attractive option if the ³He supply can be provided from other sources
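The figure of merit in the last row follows directly from the first two rows. A minimal check in Python, using only the values of Table 8.2:

    # Reproduce the feasibility ratio Q/(E(sigma_max) * Z1*Z2) of Table 8.2.
    # Q and the first cross-section peak are in keV; Z1, Z2 are the nuclear
    # charges entering the elastic (Rutherford) loss reactions.
    reactions = {
        # name: (Q_keV, E_peak_keV, Z1, Z2)
        "D+D":   (3650,  3000, 1, 1),
        "D+3He": (18353,  270, 1, 2),
        "D+T":   (17589,   64, 1, 1),
        "T+T":   (11332, 2100, 1, 1),
        "p+11B": (8682,   625, 1, 5),
    }

    for name, (Q, E_peak, Z1, Z2) in reactions.items():
        ratio = Q / (E_peak * Z1 * Z2)
        print(f"{name:6s}  Q/(E*Z1*Z2) = {ratio:5.1f}")
    # D+T stands out with ~275; only D+T and T+T exceed the factor of ~2-10
    # needed once realistic conversion efficiencies are taken into account.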


In (8.1), E is given in keV and σ in barn; all other variables are fitting parameters of the individual reactions to the cross-section data. The following calculations are based on these semi-empirical cross-sections.
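Equation (8.1) is straightforward to evaluate numerically. The sketch below uses the coefficient set commonly quoted from Bosch and Hale (1992) for the D-T reaction, with the S-factor in keV mb and a validity range of roughly 0.5-550 keV (CM); these coefficient values are reproduced here from the published tables as an illustration and should be verified against the original paper before quantitative use:

    import math

    # Bosch-Hale (1992) parametrization of the T(d,n)4He cross-section,
    # Eq. (8.1). E is the centre-of-mass energy in keV; the S-factor comes
    # out in keV*mb, so sigma is converted from millibarn to barn below.
    BG = 34.3827  # Gamow constant, sqrt(keV)
    A = [6.927e4, 7.454e8, 2.050e6, 5.2002e4, 0.0]
    B = [6.38e1, -9.95e-1, 6.981e-5, 1.728e-4]

    def sigma_dt(E_keV):
        """D-T fusion cross-section in barn at CM energy E_keV."""
        num = A[0] + E_keV*(A[1] + E_keV*(A[2] + E_keV*(A[3] + E_keV*A[4])))
        den = 1.0 + E_keV*(B[0] + E_keV*(B[1] + E_keV*(B[2] + E_keV*B[3])))
        S = num / den                                  # keV*mb
        sigma_mb = S / (E_keV * math.exp(BG / math.sqrt(E_keV)))
        return sigma_mb * 1e-3                         # mb -> barn

    print(sigma_dt(64.0))   # ~5 barn near the D-T cross-section maximum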

8.3.1 Neutral Target Reactors

Accelerator reactors with neutral targets, i.e. gas, liquid, or solid targets bombarded by accelerated ions, are the first considered option. These neutral target reactors are state-of-the-art in many industrial, medical, and scientific applications, as discussed above. In Sect. 8.2, we assumed exactly this situation for the production of nuclear batteries. D+T reactors (Sect. 4.2.1) of this type often apply a Ti or TiD₂ target, which is bombarded by 10-120 keV T⁺ and D⁺ ions, yielding outputs of 10⁵-10¹² neutrons/s. Most of the energy input to the ions is lost via the electronic stopping in the target, which can be calculated by SRIM (Ziegler et al. 2008) for different target situations. Along with the stopping, the fusion reaction cross-sections from (8.1) are applied, after transformation to the lab system. Due to momentum conservation it is always more efficient to accelerate the heavier reaction partner (i.e. T in D+T), thus the calculations are based on this consideration. For the D-T reaction, Fig. 8.10 compares three different target options to investigate the influence of the stopping power on the reactor performance. A composition with relatively high stopping power is TiD₂, a medium case is given by LiD, and the optimum, but gaseous, case is D₂ gas.

Fig. 8.10 Calculated D+T → ⁴He + n reaction probability for three different neutral targets (TiD₂, LiD, D₂ gas). The smaller the stopping power, the higher the reaction probability. The optimum energy efficiency always lies in the region of 170-180 keV T ion energy, since the stopping decreases more slowly than the projectile energy rises


Figure 8.10 demonstrates that higher stopping powers lead to a lower reaction probability, since fewer nuclei are passed with the same starting energy. For obtaining a high energy efficiency, a high reaction probability per invested T ion energy is required; therefore the reaction probability has to be divided by the projectile energy E to obtain the point of maximum efficiency. This point lies at 170 keV (TiD₂), 177 keV (LiD), and 180 keV (D₂). The optimal case (D₂) yields a reaction probability of 8.9 ∗ 10⁻⁵. Multiplying this by the D+T Q-value of 17,589 keV, each incident T ion releases on average an energy of 1.6 keV, while at least 180 keV are invested for the acceleration of the T projectile. The numbers demonstrate clearly: even with the optimal target, the reactor is not very efficient at neutron production and over a factor of 100 away from multiplying the invested energy by a factor of 2. Considering typical conversion efficiencies, a factor of 2 represents the absolute minimum; realistically, rather a factor of 10 would be required. The presented calculation of the reaction efficiency allows an optimisation via the choice of projectile energy and target (= giving the maximum number of reactions per input power), but fundamental physics will always prevent a positive energy output in this situation. Nevertheless, D+T neutron sources can be optimized significantly in output and efficiency using the presented considerations. At 180 keV, the nuclear stopping (elastic collisions) contributes only 0.13% to the total stopping in a D₂ target (Ziegler et al. 2008). Therefore, removing the electrons from the system would provide about two orders of magnitude gain in reactor efficiency. Another factor ≈ 5/3 can be gained by accelerating both particles in a collider instead of using a fixed target, avoiding excess centre-of-mass (CM) energy in the laboratory frame. These two factors can only bring us somewhere near breakeven. For a significant positive output it is not efficient enough to run the projectile energy from its initial value down to zero, as happens in a neutral target reactor; instead, we have to keep the projectiles close to the optimal ratio of fusion to elastic scattering cross-section by re-accelerating the particles after non-fusion interactions. With this, the case study of the neutral target reactor quite directly points out the technological choices necessary for a chance of a positive output.
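The thick-target estimate behind Fig. 8.10 can be reproduced with a short numerical integral over the slowing-down path, P(E₀) = ∫₀^E₀ σF(E) nD/S(E) dE, with nD the deuteron number density and S(E) = −dE/dx the stopping power. A minimal sketch in Python, reusing sigma_dt() from the listing above; the deuteron density and the √E stopping model are placeholder assumptions chosen only to keep the sketch self-contained, so the absolute numbers differ from Fig. 8.10, and real work would tabulate the stopping from SRIM:

    import numpy as np

    # Thick-target D-T reaction probability per incident T ion:
    #   P(E0) = integral_0^E0  sigma_F(E_cm) * n_D / S(E_lab)  dE_lab
    # A T ion slowing down from E0 can fuse at every energy along its path.
    N_D = 4.0e28                       # assumed deuteron density, 1/m^3

    def stopping(E_keV):
        """Toy stopping power in keV/m (placeholder for SRIM tables)."""
        return 3.0e9 * np.sqrt(E_keV)

    def reaction_probability(E0_keV):
        E_lab = np.linspace(1.0, E0_keV, 2000)          # keV; avoid E = 0
        E_cm = E_lab * 2.014 / (2.014 + 3.016)          # T ion on resting D
        sigma = np.array([sigma_dt(E) for E in E_cm])   # barn
        return np.trapz(sigma * 1e-28 * N_D / stopping(E_lab), E_lab)

    # efficiency figure of merit: reactions per invested keV, cf. the
    # 170-180 keV optimum discussed in the text
    for E0 in (100.0, 180.0, 400.0, 800.0):
        P = reaction_probability(E0)
        print(f"E0 = {E0:5.0f} keV  P = {P:.2e}  P/E0 = {P / E0:.2e}")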

8.3.2 Ion-Beam Collider Reactors

Concluding from the last section, accelerator reactors require removing the electrons from the equation and having both projectile and target moving. Ion-beam colliders, cyclotrons, or electrostatic ion-beam traps (EIBT) constitute technological examples implementing these requirements. To provide a constant energy, the accelerator has to be designed such that particle energies are refilled after a certain amount of non-fusion interactions. These features are usually called phase-focussing when the ion beam is operated in AC mode (bunches), confining it in a certain phase relation to the driving wave. Because of this, the targeted energy and the technical details differ from the concepts above. The targeted energy resembles the maximum ratio of fusion to elastic scattering cross-section, but not exactly. For keeping the longitudinal energy constant, a DC


beam is not possible, but a bunched beam with AC drivers has to be applied. The longitudinal phase coherence of the ion beam adds additional complexity to the whole accelerator design. In the following, the elastic scattering is calculated based on Rutherford scattering cross-sections, since elastic scattering poses the main scientific challenge for accelerator reactors. This time, the calculations go beyond simply comparing the Rutherford to the fusion reaction cross-section for analysing accelerator-based fusion. The mistake connected with that simple view is that the projectile is not necessarily lost after a Rutherford scattering event: small angular deviations are still confined by the accelerator, and certain emittance values are uncritical depending on the accelerator acceptance. Most Rutherford collisions are small-angle scattering collisions, which induce small momentum and energy transfers, see the scaling with scattering angle ϕ in the Rutherford cross-section (3.1). Therefore, the projectile still has sufficient energy for a fusion reaction after most Rutherford scattering events, and it can stay confined in the beam. In conclusion, similar to a tokamak or stellarator, an accelerator reactor also provides an energy and particle confinement, which is quantitatively compared in the following to the requirements of positive energy production. This confinement depends on the acceptance of the ion-optical system in real and, for AC systems, phase space. For a maximum efficiency of an accelerator reactor, maximal energy and particle confinement/acceptance are required, allowing to tolerate plural scattering and larger ϕ. The confinement is defined by the acceptance A (Hinterberger 2008):

A = π ∗ εmax = π ∗ y²max/βmax    (8.2)

The acceptance defines a maximum emittance in the reactor, which is determined by the minimum extent of the beam-tube diameter ymax over the betatron function βmax at that position. The equation implies that the reactor design should feature a minimal betatron function at a maximal tube diameter throughout the beamline in order to achieve minimal losses. Obvious, but now we have it quantified. An accelerator reactor can be built short and with a large diameter compared to typical particle-physics accelerators, since the required kinetic energy is small (64 keV for D+T). In an example of ymax = 3 m and βmax = 10 m, the 1σ acceptance is about 10⁶ π mm mrad, but which acceptance is required to reach a positive energy output?

Acceptance and Confinement of Accelerator Reactors

The increase in emittance, or transversal particle energy respectively, challenges the acceptance. Conserving the small-angle part of the scattering events increases the beam emittance, as this transfers energy from the longitudinal to the transversal movement. The transversal energy is in fact lost for the system: it cannot be recovered to the longitudinal direction, but has to be removed by beam cooling or particle losses to avoid de-confining the beam. The beam emittance increase Δε per elastic scattering event is given by (Hinterberger 2008):

Δε = ½ ∗ β ∗ θ²rms    (8.3)

Here β is the betatron function at the position of the event and θ²rms is the mean square (Rutherford) scattering angle. Equation (8.3) illustrates that large-angle events contribute over-proportionally to the emittance growth compared to small-angle events. For calculating Δε we have to integrate the Rutherford cross-section, which requires a lower integration limit due to its divergent nature towards small angles. Here a minimum angle of 1π mrad is chosen, since this is in the order of the divergence of a beam coming from an ion source. Particles within this divergence are thus assumed to have a negligible influence on the overall beam divergence. Increasing this minimum angle leads to a higher Δε with roughly a square-root dependence. Assuming that all ions with a Rutherford scattering angle < ϕmax are confined by the acceptance (thus neglecting the Gaussian shape of the beam), we can determine the smallest ϕmax with a potential positive energy output (Yield > 0). Using the fusion cross-section σF, the Rutherford loss cross-section σRL (scattering angle > ϕmax), and the invested projectile centre-of-mass energy E, the average excess energy yield normalised to the elastic reactions is given by

Yield = (E + Q) ∗ η − E ∗ σRL/σF    (8.4)

In this calculation, the energy conversion efficiency η of fusion heat to electricity becomes relevant. A conservative assumption is η = 1/3, an optimistic estimate is η = 2/3. To keep the dimensional freedom within 2D limits, the energy efficiency of producing the ion beams is assumed to be 1, as it can also be absorbed in η. The Rutherford loss cross-section is given by the integral of equation (3.1) from the given ϕmax to 360° − ϕmax. Figure 8.11 plots the resulting yield for three different η. For η = 0.33 the turnover from negative to positive energy output happens at ϕmax = 7.7°, and the yield levels off at about 6 MeV per reaction at 20° acceptance. The energy multiplication, given by the yield divided by the input energy, reaches a value > 2 at angles > 7.8°. The higher η, the fewer large-angle scattering events have to be confined. We can use this acceptance angle to determine the average increase in emittance per elastic reaction Δε using (8.5):

Δε = ½ ∗ β ∗ θ²rms = (β/2) ∗ [∫ from 1π mrad to ϕmax of ϕ² (dσR/dϕ) dϕ / ∫ from 1π mrad to ϕmax of (dσR/dϕ) dϕ] = (β/2) ∗ (75 mrad)²    (8.5)

In this calculation, the beam energy and the particle charges drop out, as these are only relevant for the probability scaling, not for its distribution. We obtain an RMS scattering angle of 75 mrad = 4.3°.


Fig. 8.11 Mean energy yield per D+T reaction versus maximum confined scattering angle ϕmax at optimal CM energy for three different energy efficiencies. At least 6.2° are required for a positive energy output. Higher angles lead to a higher mean energy gain, but the benefit levels off at ≈ 15°. For D+³He at least 17° are required with 470 keV CM energy. Higher accepted angles lead to increased beam emittance, requiring increased beam-tube diameters or beam cooling for compensation

As an example, β = 10 m produces an emittance increase of 9 ∗ 10³ π mm mrad on average per elastic scattering of the beam particles. Having an acceptance of 10⁶ π mm mrad (see the example above) and a negligible initial emittance, which is typical for ion sources (Lejeune 1974), would allow for A/Δε = 110 scattering events on average before the beam emittance exceeds the acceptance and a relevant portion of the ions gets lost to the walls. The ratio of the confined Rutherford scattering (scattering angle ≤ ϕmax) to the fusion cross-section, σR/σF, on the other hand indicates how many times particles scatter elastically on average before they undergo a fusion reaction. As the first ratio is independent of the particle energy and the second depends on it, we now optimize the ion energy in order to determine the relative importance of these two associated particle-loss mechanisms. We can maximize the yield by choosing an optimal centre-of-mass energy at the given ϕmax, see Fig. 8.12. A minimum ratio of σR/σF ≤ 20 can only be reached with CM energies > 500 keV, but at these energies the energy multiplication becomes too small, see Fig. 8.12. At 81 keV a maximum in the energy multiplication is located. Between the D+T reaction cross-section peak and 81 keV, the Rutherford cross-section decays faster than the fusion cross-section, while the CM energy multiplied by the reacting beam fraction σF/(σF + σR) is still small compared to the Q-value. In a small window from 64 to 120 keV this leads to a positive energy output.
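The numbers in this paragraph follow directly from (8.2) and (8.3). A minimal sketch in Python; the "π mm mrad" bookkeeping (quoted number = emittance/π) is an interpretation of the text's unit convention:

    import math

    # Worked numbers for Eqs. (8.2), (8.3) and (8.5): acceptance of the
    # example reactor and average emittance growth per confined elastic
    # scattering event.
    y_max     = 3.0      # m,   minimal beam-tube extent
    beta_max  = 10.0     # m,   betatron function at that position
    beta      = 10.0     # m,   betatron function at the scattering events
    theta_rms = 75e-3    # rad, RMS confined scattering angle, Eq. (8.5)

    A     = math.pi * y_max**2 / beta_max   # Eq. (8.2), m rad
    d_eps = 0.5 * beta * theta_rms**2       # Eq. (8.3), m rad

    num = 1e6 / math.pi                     # m rad -> number of "pi mm mrad"
    print(f"A     ~ {A * num:.1e} pi mm mrad")      # ~9e5, quoted as ~1e6
    print(f"d_eps ~ {d_eps * num:.1e} pi mm mrad")  # ~9e3
    print(f"confined scatterings: {A / d_eps:.0f}") # ~100; the text quotes
                                                    # 110 using the rounded
                                                    # 1e6 acceptance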


Fig. 8.12 The parameters relevant for the energy production in a D-T reactor with an acceptance of 8.4° and η = 1/3 for centre-of-mass energies up to 500 keV. After reaching the D-T cross-section maximum at 64 keV, the excess energy per reaction (Yield/E) also reaches a maximum of 1.4 MeV at 81 keV, with an only 56 keV wide window of positive output. At energies above this, too much energy is invested in acceleration, as still > 95% of the particles are lost before reacting. At energies below the maximum, too many particles are lost before reaction due to the low fusion cross-section

This small energy window and the sharp energy multiplication maximum of Fig. 8.12 demonstrate the importance of a fixed CM energy for an efficient accelerator reactor; otherwise the projectiles quickly leave this window towards regions of insufficient multiplication factors. The energy of both beams can be calculated by assuming equal momenta in the CM system. For D+T, the optimum centre-of-mass energy was determined to be W = 81 keV. Hence we can calculate the deuteron energy ED and the triton energy ET from

mT ∗ vT = mD ∗ vD ⇒ mT ∗ √(2ET/mT) = mD ∗ √(2ED/mD) ⇒ mT ∗ ET = mD ∗ ED, with ET + ED = W
⇒ ED = W/(mD/mT + 1) and ET = W − ED ⇒ ED = 48.6 keV; ET = 32.4 keV    (8.6)
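The beam-energy split of (8.6) is a two-line computation. A quick check in Python (the nuclide masses in u are approximate):

    # Beam-energy split for equal momenta in the CM system, Eq. (8.6):
    # m_T*E_T = m_D*E_D with E_D + E_T = W.
    m_D, m_T = 2.014, 3.016      # approximate nuclide masses in u
    W = 81.0                     # optimal centre-of-mass energy, keV

    E_D = W / (m_D / m_T + 1.0)  # deuteron beam energy, keV
    E_T = W - E_D                # triton beam energy, keV
    print(f"E_D = {E_D:.1f} keV, E_T = {E_T:.1f} keV")  # 48.6 and 32.4 keV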


This multivariate optimisation leads to a working space in ymax and E. D+³He and T+T also have a small working space, but they require much larger ϕmax. A ϕmax > 7.7° already represents a substantial number imposing a significant technical challenge, rendering the other reactions completely unrealistic. The real ϕmax will be slightly lower, since we assumed every elastically scattered particle to be lost by using σR/σF. An approximate wall-loss term Δε/A can now be added as a negative term to the energy balance of (8.4):

Yield = (E + Q) ∗ η − E ∗ (σRL/σF + (σR/σF) ∗ Δε/A)
      = (E + Q) ∗ η − E ∗ σRL/σF − σR ∗ E ∗ β ∗ βmax ∗ θ²rms/(2π ∗ σF ∗ y²max)    (8.7)

Equation (8.7) defines the minimum device size ymax for a positive energy yield. Devices with a size < ymax cannot produce net power, but can e.g. generate neutrons. Insertion of ϕmax from (8.5) into (8.7) yields a convergent solution and the modified minimum angle. These parameters strongly depend on the overall reactor properties. The betatron function now enters with a square relation, making the reactor's beam-optical design a key point for a technical realisation. Reducing βmax with better beam-optical technology brings the situation closer to the fundamental physical limit discussed before. Adding emittance cooling could substantially improve the situation, since it counteracts the emittance increase by elastic collisions.

Luminosity Runaway

Above we verified that an accelerator can provide sufficient confinement against the elastic losses, but it still needs to be clarified whether a significant power output can be generated under those boundary conditions. This question of power output is equivalent to calculating which fraction of the injected ions is actually involved in a fusion reaction before it is lost to de-confining events other than elastic beam scattering. We term this issue luminosity runaway, because a lower luminosity directly leads to a longer dwell time (lower beam density = lower interaction probability) for the beam particles, which in turn leads to a higher probability of undergoing these other loss mechanisms. Therefore, we need to calculate the number of reactions per time dNR/dt:

dNR/dt = L ∗ σ    (8.8)

with the reaction cross-section σ and the luminosity L of the accelerator reactor for AC and DC accelerators:

LAC = NB ∗ f ∗ ND ∗ NT/(4π ∗ rx ∗ ry)
LDC = ID ∗ IT/(e² ∗ π ∗ rx ∗ ry)    (8.9)


Here NB is the number of bunches in the accelerator, f the particle circulation frequency, ND and NT the average numbers of particles in the D and T bunches (D+T reactor), I the intensities in particles per time, and rx, ry the 1σ beam dimensions in the x and y directions, assuming a Gaussian beam profile. The dimensions of the beam are connected to the beam emittance ε and the betatron function β by

r(x,y) = √(ε(x,y) ∗ β(x,y))    (8.10)
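For a feeling of the magnitudes, (8.8)-(8.10) can be evaluated for an assumed 1 A + 1 A D-T collider. The sketch below uses σF(80 keV) = 4.49 barn as quoted below, the acceptance-limit emittance, and β = 1 m, with the π bookkeeping simplified to ε ≈ 1 m rad; the tiny fusion power at this huge beam cross-section is exactly the luminosity-runaway problem discussed next:

    import math

    # DC luminosity and reaction rate, Eqs. (8.8)-(8.10), for an assumed
    # 1 A / 1 A D-T collider at the acceptance-limit beam size.
    e = 1.602e-19                   # elementary charge, C
    sigma_F = 4.49e-28              # D-T cross-section at 80 keV CM, m^2
    I_D = I_T = 1.0                 # beam currents, A
    eps = 1.0                       # emittance, m rad (acceptance limit)
    beta = 1.0                      # betatron function, m

    r_x = r_y = math.sqrt(eps * beta)                 # Eq. (8.10), m
    L_dc = I_D * I_T / (e**2 * math.pi * r_x * r_y)   # Eq. (8.9), 1/(m^2 s)
    rate = L_dc * sigma_F                             # Eq. (8.8), 1/s

    P_fus = rate * 17.589e6 * e     # fusion power at Q = 17.589 MeV, W
    print(f"L = {L_dc:.2e} 1/m^2/s, rate = {rate:.2e} 1/s, "
          f"P = {P_fus:.3f} W")     # ~0.02 W despite two 1 A beams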

The luminosity allows us to calculate the extinction length of the ion beam by combining the integral of equation (8.8) with (8.9) and (8.10):

N(x) = N₀ ∗ e^(−σF ∗ x ∗ I/(v ∗ π ∗ rx ∗ ry)) ⇒ x = ln(N₀/N(x)) ∗ π ∗ v ∗ rx ∗ ry/(σF ∗ I) ≈ ln(N₀/N(x)) ∗ π ∗ v ∗ β ∗ (ε₀ + Δε)/(σF ∗ I)    (8.11)

with the D+T cross-section σF(80 keV) = 4.49 barn, the passed distance x, and the total velocity v(80 keV) = 2164 km/s (D ions) + 1442 km/s (T ions) = 3606 km/s.
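Equation (8.11) with these numbers gives the order of magnitude directly. A minimal sketch in Python (same simplified π bookkeeping as above; it reproduces the ~10¹⁶ m scale quoted in the following paragraph):

    import math

    # Order-of-magnitude evaluation of Eq. (8.11) for the acceptance-limit
    # beam: sigma_F(80 keV) = 4.49 barn, v = 3606 km/s, 1 A intensity,
    # beta = 1 m and an emittance of ~1e6 pi mm mrad (taken as ~1 m rad).
    e       = 1.602e-19          # C
    sigma_F = 4.49e-28           # m^2
    v       = 3.606e6            # m/s, v_D + v_T
    I       = 1.0 / e            # particles/s for a 1 A beam
    beta    = 1.0                # m
    eps     = 1.0                # m rad

    x90 = math.log(10.0) * math.pi * v * beta * eps / (sigma_F * I)
    print(f"90% extinction length: {x90:.1e} m")   # ~1e16 m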

Assuming an emittance of 10⁶ π mm mrad (the acceptance limit discussed above), an intensity equal to 1 A in each beam, and β = 1 m, (8.11) yields a reaction probability of 8 ∗ 10⁻¹⁸ per metre, equal to a range for 90% of the beam to react of 1.3 ∗ 10¹⁶ m, corresponding to about one year of necessary confinement time at the given velocities. The emittance increase thus prevents a reaction of the injected ions within their confinement time, because the elastic collisions thin out the beam and reaction-partner density too strongly. A higher acceptance can still confine these ions, but they will no longer induce fusion reactions. On the other end of the technically possible emittance/beam density we have the initial beam produced by the ion source. Developments of neutral-beam injectors for tokamaks report emittances of ε₀ = 0.2 π mm mrad and current densities in excess of 1 A/cm². With these numbers, a 90% reaction probability is reached after 160 km, keeping the other input parameters as stated above and neglecting the increase of emittance. Therefore, a feasible confinement time results (160 km at 3606 km/s correspond to about 44 ms). This consideration leads to three design consequences:

1. High beam currents (> 1 A) are required
2. The beam interaction zone can be a large part up to the whole circumference, since no detectors or the like are required in this reactor type
3. A maximum luminosity is not of prime importance, as the circumference can be small (1 m) without risking unfeasible device cost.

The beam confinement can be realised in a ring structure, a mirror setup, or a combination of both. The collider ring reactor could be realized by two accelerator storage rings in counter-current operation, similar to the LHC design with its two proton beams. The two rings could be connected, providing the central reaction chamber where the fusion reactions take place, surrounded by breeding modules for tritium generation similar to a tokamak reactor. The fusion reaction consumes tritium, therefore it has to be produced via nuclear reactions of the emitted fusion neutrons with e.g. lithium. The remaining parts of the two rings offer space for acceleration and emittance cooling. A single magnetic dipole element can combine and separate the counter-propagating beams from their respective rings, since it deflects both beams in the same direction as seen along the direction of motion. Beam focussing using e.g. multiple quadrupole magnets also acts similarly, regardless of the direction in which the beam enters. For electrostatic beam optics the different beam energies prevent a dual use of optical elements. Acceleration facilities, e.g. RF cavities, required to maintain the design energy rely on a fixed frequency. With two rings, the same rotation frequency


can be achieved for both species in spite of their different velocities by adjusting the individual ring lengths accordingly. A physical feasibility of a positive energy output is not necessarily a technical feasibility, since conversion efficiencies are below 100% and the aggregates running the reactor also consume power. Typically, aggregates consume ≈ 10% of the power output, but as we saw in Sect. 8.1 this strongly depends on the overall conception. Four main aggregates are required to run the accelerator reactor: the ion sources, the gas/vacuum system, the beam optics, and the energy conversion and T-breeding. This section provides a short overview of the associated details to point out that these aggregates do not prohibit an efficient technical realisation, but they raise the required energy multiplication factor. For simplicity, the numbers required for this consideration are based on a 1 A ion-current system. At many points the technical details are based on specifications of state-of-the-art industrial products, at others they are only estimates; therefore the numbers must be handled with care. The D and T ion sources are required to provide maximum beam density at minimum emittance. Currently, hot-cathode multi-cusp, duoplasmatron, or ECR ion sources provide feasible options. A single ion source can be used to produce D and T ions together with a mass-separating beam-optical element in order to avoid the need for D-T gas separation in the gas system. According to (Hinterberger 2008), a duoplasmatron source providing 1 A of positive ion current consumes about 1 kW of electrical power. The initial acceleration can be realized by ion-source biasing with the required DC potential, since in the D+T case only