Stochastic Approaches to Electron Transport in Micro- and Nanostructures [1st ed. 2021] 9783030679170, 9789814774222

The book serves as a synergistic link between the development of mathematical models and the emergence of stochastic (Monte Carlo) methods for their simulation.


English · 217 pages · 2021


Table of contents :
Preface
Introduction to the Parts
Contents
Part I Aspects of Electron Transport Modeling
1 Concepts of Device Modeling
1.1 About Microelectronics
1.2 The Role of Modeling
1.3 Modeling of Semiconductor Devices
1.3.1 Basic Modules
1.3.2 Transport Models
1.3.3 Device Modeling: Aspects
2 The Semiconductor Model: Fundamentals
2.1 Crystal Lattice Electrons
2.1.1 Band Structure
2.1.2 Carrier Dynamics
2.1.3 Charge Transport
2.2 Lattice Imperfections
2.2.1 Phonons
2.2.2 Phonon Scattering
3 Transport Theories in Phase Space
3.1 Classical Transport: Boltzmann Equation
3.1.1 Phenomenological Derivation
3.1.2 Parametrization
3.1.3 Classical Distribution Function
3.2 Quantum Transport: Wigner Equation
3.2.1 Operator Mechanics
3.2.2 Quantum Mechanics in Phase Space
3.2.3 Derivation of the Wigner Equation
3.2.4 Properties of the Wigner Equation
3.2.5 Classical Limit of the Wigner Equation
4 Monte Carlo Computing
4.1 The Monte Carlo Method for Solving Integrals
4.2 The Monte Carlo Method for Solving Integral Equations
4.3 Monte Carlo Integration and Variance Analysis
Part II Stochastic Algorithms for Boltzmann Transport
5 Homogeneous Transport: Empirical Approach
5.1 Single-Particle Algorithm
5.1.1 Single-Particle Trajectory
5.1.2 Mean Values
5.1.3 Concept of Self-Scattering
5.1.4 Boundary Conditions
5.2 Ensemble Algorithm
5.3 Algorithms for Statistical Enhancement
6 Homogeneous Transport: Stochastic Approach
6.1 Trajectory Integral Algorithm
6.2 Backward Algorithm
6.3 Iteration Approach
6.3.1 Derivation of the Backward Algorithm
6.3.2 Derivation of Empirical Algorithms
6.3.3 Featured Applications
7 Small Signal Analysis
7.1 Empirical Approach
7.1.1 Stationary Algorithms
7.1.2 Time Dependent Algorithms
7.2 Iteration Approach: Stochastic Model
7.3 Iteration Approach: Generalizing the Empirical Algorithms
7.3.1 Derivation of Finite Difference Algorithms
7.3.2 Derivation of Collinear Perturbation Algorithms
8 Inhomogeneous Stationary Transport
8.1 Stationary Conditions
8.2 Iteration Approach: Forward Stochastic Model
8.2.1 Adjoint Equation
8.2.2 Boundary Conditions
8.3 Iteration Approach: Single-Particle Algorithm and Ergodicity
8.3.1 Averaging on Before-Scattering States
8.3.2 Averaging in Time: Ergodicity
8.3.3 The Choice of Boundary
8.4 Iteration Approach: Trajectory Splitting Algorithm
8.5 Iteration Approach: Modified Backward Algorithm
8.6 A Comparison of Forward and Backward Approaches
9 General Transport: Self-Consistent Mixed Problem
9.1 Formulation of the Problem
9.2 The Adjoint Equation
9.3 Initial and Boundary Conditions
9.3.1 Initial Condition
9.3.2 Boundary Conditions
9.3.3 Carrier Number Fluctuations
9.4 Stochastic Device Modeling: Features
10 Event Biasing
10.1 Biasing of Initial and Boundary Conditions
10.1.1 Initial Condition
10.1.2 Boundary Conditions
10.2 Biasing of the Natural Evolution
10.2.1 Free Flight
10.2.2 Phonon Scattering
10.3 Self-Consistent Event Biasing
Part III Stochastic Algorithms for Quantum Transport
11 Wigner Function Modeling
12 Evolution in a Quantum Wire
12.1 Formulation of the Problem
12.2 Generalized Wigner Equation
12.3 Equation of Motion of the Diagonal Elements
12.4 Closure at First-Off-Diagonal Level
12.5 Closure at Second-Off-Diagonal Level
12.5.1 Approximation of the fFOD+ Equation
12.5.1.1 Contribution from fSOD++,
12.5.1.2 Contribution from fSOD+,-
12.5.1.3 Correction from fSOD+-,
12.5.1.4 Correction from fSOD+,+
12.5.2 Approximation of the fFOD- Equation
12.5.3 Closure of the Equation System
12.6 Physical Aspects
12.6.1 Heuristic Analysis
12.6.2 Phonon Subsystem
13 Hierarchy of Kinetic Models
13.1 Reduced Wigner Function
13.2 Evolution Equation of the Reduced Wigner Function
13.3 Classical Limit: The Wigner-Boltzmann Equation
14 Stationary Quantum Particle Attributes
14.1 Formulation of the Stationary Problem
14.1.1 The Stationary Wigner-Boltzmann Equation
14.1.2 Integral Form
14.2 Adjoint Equation
14.3 Iterative Presentation of the Mean Quantities
14.4 Monte Carlo Analysis
14.4.1 Injection at Boundaries
14.4.2 Probability Interpretation of K̃
14.4.3 Analysis of Ã
14.5 Stochastic Algorithms
14.5.1 Stationary Wigner Weighted Algorithm
14.5.1.1 Weight Generation
14.5.1.2 Weight Accumulation
14.5.2 Stationary Wigner Generation Algorithm
14.5.2.1 Particle Generation
14.5.2.2 Particle Annihilation
14.5.2.3 Inclusion of Scattering
14.5.3 Asymptotic Accumulation Algorithm
14.5.4 Physical and Numerical Aspects
15 Transient Quantum Particle Attributes
15.1 Bounded Domain and Discrete Wigner Formulation
15.1.1 Semi-Discrete Wigner Equation
15.1.2 Semi-Discrete Wigner Potential
15.1.2.1 Sine Function Representation
15.1.2.2 Single-Dimensional Linear Potential
15.1.2.3 Two-Dimensional Linear Potential
15.1.3 Semi-Discrete Evolution
15.1.4 Signed-Particle Algorithm
15.2 Simulation of the Evolution Duality
15.3 Iteration Approach: Signed Particles
Appendix A
A.1 Correspondence Relations
A.2 Physical Averages and the Wigner Function
A.3 Concepts of Probability Theory
A.4 Generating Random Variables
A.5 Classical Limit of the Phonon Interaction
A.6 Phonon Modes
A.7 Forward Semi-Discrete Evolution
References

Modeling and Simulation in Science, Engineering and Technology

Mihail Nedjalkov Ivan Dimov Siegfried Selberherr

Stochastic Approaches to Electron Transport in Micro- and Nanostructures

Modeling and Simulation in Science, Engineering and Technology

Series Editors
Nicola Bellomo, Department of Mathematical Sciences, Politecnico di Torino, Torino, Italy
Tayfun E. Tezduyar, Department of Mechanical Engineering, Rice University, Houston, TX, USA

Editorial Board Members
Kazuo Aoki, National Taiwan University, Taipei, Taiwan
Yuri Bazilevs, School of Engineering, Brown University, Providence, RI, USA
Mark Chaplain, School of Mathematics and Statistics, University of St. Andrews, St. Andrews, UK
Pierre Degond, Department of Mathematics, Imperial College London, London, UK
Andreas Deutsch, Center for Information Services and High-Performance Computing, Technische Universität Dresden, Dresden, Sachsen, Germany
Livio Gibelli, Institute for Multiscale Thermofluids, University of Edinburgh, Edinburgh, UK
Miguel Ángel Herrero, Departamento de Matemática Aplicada, Universidad Complutense de Madrid, Madrid, Spain
Thomas J. R. Hughes, Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, USA
Petros Koumoutsakos, Computational Science and Engineering Laboratory, ETH Zürich, Zürich, Switzerland
Andrea Prosperetti, Cullen School of Engineering, University of Houston, Houston, TX, USA
K. R. Rajagopal, Department of Mechanical Engineering, Texas A&M University, College Station, TX, USA
Kenji Takizawa, Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
Youshan Tao, Department of Applied Mathematics, Donghua University, Shanghai, China
Harald van Brummelen, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, Noord-Brabant, The Netherlands

More information about this series at http://www.springer.com/series/4960

Mihail Nedjalkov • Ivan Dimov • Siegfried Selberherr

Stochastic Approaches to Electron Transport in Micro- and Nanostructures

Mihail Nedjalkov
Institute for Microelectronics, Faculty of Electrical Engineering and Information Technology, Technische Universität Wien, Wien, Austria
Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria

Ivan Dimov
Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria

Siegfried Selberherr
Institute for Microelectronics, Faculty of Electrical Engineering and Information Technology, Technische Universität Wien, Wien, Austria

ISSN 2164-3679    ISSN 2164-3725 (electronic)
Modeling and Simulation in Science, Engineering and Technology
ISBN 978-3-030-67916-3    ISBN 978-3-030-67917-0 (eBook)
https://doi.org/10.1007/978-3-030-67917-0
Mathematics Subject Classification: 45B05, 45D05, 37M05, 81-08, 60-08, 60J85, 60J35, 65Z05, 65C05, 65C35, 65C40

© Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

Computational modeling is an important subject in microelectronics: it is the only alternative to expensive experiments for the design and characterization of the basic integrated circuit elements, the semiconductor devices. Modeling comprises the mathematical, physical, and electrical engineering approaches needed to compute the electrical behavior of these structures. The role of the computational approach is twofold: (1) to derive models describing the current transport processes in a given structure in terms of governing equations, initial and boundary conditions, and relevant physical quantities, and (2) to derive efficient numerical approaches for their evaluation. These are the two sides of the same problem, since physical comprehension is usually gained at the expense of increased numerical complexity. Stochastic approaches are particularly widely used in the field, because they reduce memory requirements at the expense of longer simulation times and avoid approximation procedures that would require additional regularity.

The primary motivation of our book is to present the synergistic link between the development of mathematical models of current transport in semiconductor devices and the emergence of stochastic methods for their simulation. The book fills the gap between other monographs, which focus on the development of the theory and the physical aspects of the models [1, 2], on their application [3, 4], or on the purely theoretical Monte Carlo approaches for solving Fredholm integral equations [5]. Specific details about this book are given in the following.

The golden era of classical microelectronics is characterized by models based on the Boltzmann transport equation. Their physical transparency in terms of particles fostered the widespread development of Monte Carlo methods in the field.
At the beginning, almost 50 years ago, a variety of phenomenological algorithms were derived using the probabilistic interpretation of the processes described: the Monte Carlo Single-Particle algorithm for stationary transport in the presence of boundary conditions, the Monte Carlo Ensemble algorithm for transient processes with initial or boundary conditions, and Monte Carlo algorithms for small signal analysis. The stochastic method is thus viewed as a simulated experiment, which emulates the elementary processes of the electron evolution. The fact that these algorithms solve the transport equation was proved afterward.

The inverse perspective, deriving algorithms from the transport model, developed during the last 15 years of the twentieth century, gave rise to a universal approach based on the formal application of the numerical Monte Carlo theory to the integral form of the transport model. This is the Iteration Approach, which unifies the existing algorithms as particular cases and allows the derivation of novel algorithms with refined properties. These are the high-precision algorithms based on backward evolution in time and the algorithms with improved statistics based on event biasing, which stimulates the generation of rare events. Applied to the problem of self-consistent coupling with the Poisson equation, the approach gave rise to self-consistent event biasing and the concept of time-dependent particle weights. An important feature of the Iteration Approach is that the original model can be reformulated within the context of a Monte Carlo analysis in a way that allows for a novel, improved model of the underlying physics.

The era of nanoelectronics involves novel quantum phenomena, involving quantities with phases and amplitudes that give rise to resonance and interference effects. This dramatically changes the computational picture:

• Quantum phenomena cannot be described as a cumulative sum of probabilities and thus not by phenomenological particle models.
• They demand an enormous increase of the computational resources and need efficient algorithms: (1) a small difference in the input settings can lead to very different solutions, and (2) the so-called sign problem of quantum computations means that the evaluated values are often the result of cancellation between large numbers of opposite sign.
• The involved physical phenomena, resulting from a complicated interplay between quantum coherence and processes of decoherence due to interaction with the environment, are not well understood.
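The sign problem mentioned above can be illustrated with a toy Monte Carlo computation (our own sketch, not taken from the book; the function `mc_oscillatory` and the integrand are hypothetical choices): for I = ∫₀¹ cos(ωx) dx the sampled values stay of order one while the exact value shrinks like 1/ω, so the estimate survives only through cancellation, and the relative statistical error grows with ω.

```python
import math
import random

def mc_oscillatory(omega, n, seed=1):
    """Plain Monte Carlo estimate of I = ∫_0^1 cos(omega*x) dx.

    The samples cos(omega*x) are O(1) with alternating signs, while the
    exact value sin(omega)/omega is O(1/omega): the result emerges from
    massive cancellation -- a toy version of the sign problem.
    """
    rng = random.Random(seed)
    samples = [math.cos(omega * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)   # estimate and its standard error

for omega in (1.0, 100.0):
    est, err = mc_oscillatory(omega, 100_000)
    exact = math.sin(omega) / omega
    # the relative error explodes with omega although the absolute error does not
    print(f"omega={omega:6.1f}  exact={exact:+.5f}  estimate={est:+.5f}  "
          f"rel.err ~ {err / abs(exact):.2f}")
```

The same sample size gives an excellent estimate for ω = 1 and a result dominated by statistical noise for ω = 100, which is exactly the behavior described in the second bullet point.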
As a consequence, the mathematical models are still in the process of development, and so are the corresponding algorithms. A synergistic relationship between model and method is therefore of crucial importance. A promising strategy is the application of the Iteration Approach in conjunction with the Wigner formulation of quantum mechanics. This is the formalism of choice, since many concepts and notions of classical mechanics, like phase space and distribution function, are retained. A seamless transition between quantum and classical descriptions is provided, which allows an easy identification of coherence effects.

The considered system of electrons interacting with the lattice vibrations (phonons) can be formally described by an infinite set of linked equations. A hierarchy of assumptions and approximations is necessary to close the system and to make it numerically accessible. Depending on these assumptions, different quantum effects are retained in the derived hierarchy of mathematical models. The homogeneous Levinson and Barker-Ferry equations have been generalized to account for the spatial electron evolution in quantum wires. A Monte Carlo Backward algorithm has been derived and implemented to reveal a variety of quantum effects, like the lack of energy conservation during the electron-phonon interaction, the intra-collisional field effect, and the ultra-fast spatial transfer. Numerical analysis shows an exponential increase of the variance of the method with the evolution time, associated with the non-Markovian character of the physical evolution.

Further approximations give rise to the Wigner-Boltzmann equation, where the spatial evolution is entirely quantum mechanical, while the electron-phonon interaction is classical. The application of the Iteration Approach to the adjoint equation leads to a fundamental particle picture. Quantum particles retain their classical attributes, like position and velocity, but are associated with novel attributes, like a weight that carries the quantum information. The weight changes its magnitude and sign during the evolution according to certain rules, and two particles meeting in the phase space can merge into one by combining their weights. Two algorithms, the Wigner Weighted and the Wigner Generation algorithms, are derived for the case of stationary problems determined by boundary conditions. The latter algorithm refines the particle attributes by replacing the weight with a particle sign, so that particles are now generated according to certain rules and can also annihilate each other. These concepts have been generalized during the last decade for the transient Wigner-Boltzmann equation, where also a discrete momentum is introduced. The corresponding Signed-Particle algorithm has allowed the computation of multi-dimensional problems. It has been shown that the particle generation and annihilation process in the discrete momentum space is an alternative to Newtonian acceleration. Furthermore, the concept of signed particles allows for an equivalent formulation of Wigner quantum mechanics and thus makes it possible to interpret and understand the involved quantum processes. Recently, the signed-particle approach has been adapted to study entangled systems, many-body effects in atomic systems, neural networks, and problems involving density functional theory. However, such application aspects are beyond the scope of this book.
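The annihilation step of the signed-particle picture can be sketched in a few lines (a schematic illustration under simplified assumptions; the cell sizes and the function `annihilate` are hypothetical, not the book's implementation): particles of opposite sign falling into the same phase-space cell cancel, which bounds the ensemble size without changing any signed mean value.

```python
from collections import defaultdict

def annihilate(particles, dx=0.1, dk=0.1):
    """Cancel +/- particle pairs sharing a phase-space cell.

    particles: list of (x, k, sign) tuples with sign = +1 or -1.
    Returns the surviving particles; the net signed occupation of every
    cell -- and hence every estimated mean value -- is unchanged.
    """
    cells = defaultdict(int)
    for x, k, s in particles:
        cells[(int(x // dx), int(k // dk))] += s
    survivors = []
    for (ix, ik), net in cells.items():
        sign = 1 if net > 0 else -1
        # re-emit |net| particles of the majority sign at the cell center
        cx, ck = (ix + 0.5) * dx, (ik + 0.5) * dk
        survivors += [(cx, ck, sign)] * abs(net)
    return survivors

ensemble = [(0.05, 0.05, +1), (0.06, 0.04, -1), (0.07, 0.01, +1),
            (0.95, 0.25, +1)]
print(annihilate(ensemble))  # 2 survivors: one in each occupied cell
```

The design point is that only the net signed occupation of each cell is physically relevant, so the cancellation is exact with respect to the estimated averages at the resolution of the chosen grid.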
Instead, we focus on important computational aspects such as the convergence of the Neumann series expansion of the Wigner equation, the existence and uniqueness of the solution, numerical efficiency, and scalability. A key message of the book is that the enormous success of quantum particle algorithms is based on the computational experience accumulated in the field for more than 50 years and rooted in the classical Monte Carlo algorithms.

This book is divided into three parts. The introductory Part I is intended to establish the concepts from statistical mechanics, solid-state physics, and quantum mechanics in phase space that are necessary for the formulation of mathematical models. It discusses the role and problems of modeling semiconductor devices, introduces the semiconductor properties, phase space and trajectories, and the Boltzmann and Wigner equations. Finally, it presents the foundations of Monte Carlo methods for the evaluation of integrals, for solving integral equations, and for reformulating the problem with the use of the adjoint equation.

Part II considers the development of Monte Carlo algorithms for classical transport. It follows the historical layout, starting with the Monte Carlo Single-Particle and Ensemble algorithms. Their generalization under the Iteration Approach, its application to small signal analysis, and the derivation of a general self-consistent Monte Carlo algorithm with weights for the mixed problem with initial and boundary conditions are discussed in detail. The development and application of the classical Monte Carlo algorithms is formulated with the help of seven assertions, ten theorems, and thirteen algorithms.

Part III is dedicated to quantum transport modeling. The derivation of a hierarchy of models, ranging from the generalized Wigner equation for the coupled electron-phonon system to the classical Boltzmann equation, is presented. The development of the respective algorithms on the basis of the Iteration Approach gives rise to an interpretation of quantum mechanics in terms of particles that carry a sign. Stationary and transient algorithms are presented, which are unified by the concepts of the signed-particle approach. Particularly interesting is the Signed-Particle algorithm suitable for transient transport simulation. This part is based on five theorems and three algorithms, which shows in particular that stochastic modeling of quantum transport is still at an early stage of development as compared to its classical counterpart. Auxiliary material and details concerning the three parts are given in the Appendix.

The targeted readers come from the full range of professionals and students with a pertinent interest in the field: engineers, computer scientists, and mathematicians. The needed concepts and notions of solid-state physics and probability theory are carefully introduced with the aim of a self-contained presentation. To ensure a didactic perspective, the complexity of the algorithms rises progressively. However, certain parts of specific interest to experts are prepared to provide a standalone read.

Vienna, Austria
Sofia, Bulgaria
Vienna, Austria
April 2020

Mihail Nedjalkov
Ivan Dimov
Siegfried Selberherr

Introduction to the Parts

The introductory Part I aims to present the engineering, physical, and mathematical concepts and notions needed for modeling the classical and quantum transport of current in semiconductor devices. It contains four chapters. "Concepts of Device Modeling" considers the role of modeling, the basic modules of device modeling, and the hierarchy of transport models, which describe at different levels of physical complexity the electron evolution or, equivalently, the electron transport process. This process is based on the fundamental characteristics of an electron, imposed by the periodic crystal lattice in which it exists. The lattice also affects the electron evolution through different violations of the periodicity, such as impurity atoms and atom vibrations (phonons). Basic features of crystal lattice electrons and their interaction with phonons are presented in the next chapter, "The Semiconductor Model: Fundamentals". The third chapter, "Transport Theories in Phase Space", focuses on the classical and quantum transport theory, deriving the corresponding equations of motion that determine the physical observables. In Chapter 4, we introduce the basic notions of the numerical Monte Carlo theory, which utilizes random variables and processes for the evaluation of sums, integrals, and integral equations. The needed concepts of probability theory are presented in the Appendix. We keep the description at top level in order to introduce nonexperts to the field and to underline the mutual interdependence of the physical and mathematical aspects. Aiming for a self-contained presentation, we sometimes need to refer to text placed further in the sequel. In our opinion, this is a better alternative to frequent references to the specialized literature of the involved fields.
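As a minimal reminder of the Monte Carlo idea introduced in Chapter 4 (the integrand below is our own illustrative choice, not an example from the book): an integral is rewritten as an expected value of a random variable and estimated by a sample mean, with a statistical error that decreases as N^(-1/2).

```python
import math
import random

def mc_integral(f, n, seed=7):
    """Estimate I = ∫_0^1 f(x) dx as the mean of f over uniform samples,
    together with the standard error sqrt(Var/N)."""
    rng = random.Random(seed)
    vals = [f(rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

# ∫_0^1 4/(1+x^2) dx = pi, a standard test integrand
est, err = mc_integral(lambda x: 4.0 / (1.0 + x * x), 200_000)
print(f"estimate = {est:.4f} ± {err:.4f}  (exact: {math.pi:.4f})")
```

The same sample-mean construction, applied iteratively to the terms of a Neumann series, underlies the Monte Carlo solution of integral equations discussed in the book.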
Part II considers the Monte Carlo algorithms for classical carrier transport described by the Boltzmann equation, beginning with the first approaches for modeling stationary or time-dependent problems in homogeneous semiconductors. The homogeneous phase space is represented by the components of the momentum variable, which adds to the physical transparency of the transport process. The first algorithms, the Monte Carlo Single-Particle and Ensemble algorithms, are derived by phenomenological considerations and thus are perceived as an emulation of the natural processes of drift and scattering that determine the carrier distribution in the momentum space. Later, these algorithms are generalized for the inhomogeneous task, where the phase space is extended by the spatial variable. In parallel, certain works evolve the initially intuitive link between the mathematical and physical aspects of these stochastic algorithms by proving that they provide solutions of the corresponding Boltzmann equations.

The next-generation algorithms are already devised with the help of alternative formulations of the transport equation. These are algorithms for statistical enhancement based on trajectory splitting, on backward (back-in-time) evolution, and on a trajectory integral. It appears that these algorithms have a common foundation: They are generalized by the Iteration Approach, where a formal application of the numerical Monte Carlo theory to integral forms of the Boltzmann equation allows any of these particular algorithms to be devised. This approach gives rise to a novel class of algorithms based on event biasing and can be applied to novel transport problems, such as the small signal analysis discussed in Chap. 7.

The computation of important operational device characteristics poses the problem of stationary inhomogeneous transport with boundary conditions. The Monte Carlo Single-Particle algorithm used for this problem, initially devised by phenomenological considerations and an assumption of ergodicity of the system, is now obtained by the Iteration Approach, and it is shown that the ergodicity follows from the stationary physical conditions: Namely, the average over a nonequilibrium, but macroscopically stationary, ensemble of carriers can be replaced by a time average over a single carrier trajectory. From a physical point of view, the place of the boundaries is well defined by the device geometry. From a numerical point of view, an iteration algorithm (but not its performance) should be independent of the choice of the place and shape of the boundaries.
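The ergodicity statement above, replacing the ensemble average of a macroscopically stationary system by a time average over a single trajectory, can be illustrated on the simplest stationary stochastic system, a two-state Markov chain (the chain and its transition probabilities are our own hypothetical stand-in for the carrier system, not a model from the book):

```python
import random

# Two-state Markov chain: P(0->1) = A, P(1->0) = B per step.
# Detailed balance gives a stationary occupation of state 1 of A / (A + B).
A, B = 0.3, 0.1

def step(state, rng):
    if state == 0:
        return 1 if rng.random() < A else 0
    return 0 if rng.random() < B else 1

def time_average(n_steps, rng):
    """Fraction of time a SINGLE trajectory spends in state 1."""
    state, occupied = 0, 0
    for _ in range(n_steps):
        state = step(state, rng)
        occupied += state
    return occupied / n_steps

def ensemble_average(n_particles, burn_in, rng):
    """Fraction of MANY independent particles in state 1 after relaxation."""
    count = 0
    for _ in range(n_particles):
        state = 0
        for _ in range(burn_in):
            state = step(state, rng)
        count += state
    return count / n_particles

rng = random.Random(3)
exact = A / (A + B)   # = 0.75
print(time_average(200_000, rng), ensemble_average(20_000, 100, rng), exact)
```

Both estimates converge to the stationary occupation A/(A+B) = 0.75, mirroring the replacement of the ensemble average by the time average along one long trajectory.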
An analysis is presented showing that the simulation in a given domain provides such boundary conditions in an arbitrary subdomain, and that, if the latter are used in a new simulation in the subdomain, the obtained results coincide with those of the primary simulation. The most general transport problem, determined by initial and boundary conditions, is considered at the end of Part II. The corresponding random variable is analyzed and the variance is evaluated. Iteration algorithms for statistical enhancement based on event biasing are derived. Finally, the self-consistent coupling scheme of these algorithms with the Poisson equation is derived.

Part III is devoted to the development of stochastic algorithms for quantum transport, which is facilitated by the experience accumulated from the classical counterpart. The same formal rules for the application of the Monte Carlo theory hold in the quantum case; in particular, an integral form of the transport equation is needed. Furthermore, a maximal part of the kernel components must be included in the transition probabilities needed for the construction of the trajectories. The peculiarity now is that there is no probabilistic interpretation of the kernel components, inherent to classical transport models: quantum kernels are in principle oscillatory. The way to treat them is to decompose the kernel into a linear combination of positive functions and then to associate the corresponding signs with the random variable. This is facilitated by the analogy between the classical and the quantum (Wigner) theories and the fact that most of the concepts and notions remain valid in both formalisms.

The level of abstraction of the forward-in-time algorithms corresponds to that of the classical Weighted Ensemble algorithm, while the backward algorithms are formal enough not to discriminate classical from quantum tasks. In this respect, Part III focuses on the derivation of quantum transport models that account, at different levels of approximation, for the important physical phenomena composing the transport process. Backward algorithms used for the quantum carrier-phonon models are not discussed, in favor of the forward approach to the Wigner-Boltzmann equation. The forward approach allows for alternative kernel decompositions and their stochastic interpretation. The existence of trajectories is associated with particles, whose properties are dictated by the particular decomposition. Quantum particle concepts and notions are consecutively introduced and unified into particle models, composing the Wigner signed-particle approach. Synergistically, Wigner signed particles provide a heuristic interpretation and explanation of the physical processes behind complicated mathematical expressions. In contrast to the classical algorithms, which provide a mathematical abstraction of the involved physics, here we observe the inverse situation: Quantum algorithms give insight into the highly counterintuitive quantum physics. The signed-particle approach has been used with alternative computational models successfully applied to analyze transport in modern multi-dimensional nanostructures.
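The sign treatment described above, decomposing an oscillatory kernel into positive parts and carrying the sign in the random variable, can be sketched for a single integral (the concrete integrand is our own toy choice, not one of the book's kernels): write K = K⁺ − K⁻, sample x from the normalized density |K|/‖K‖₁, and score sign(K(x))·g(x)·‖K‖₁.

```python
import math
import random

def sample_abs_cos(rng):
    """Draw x with density |cos(2*pi*x)| / L1 on [0,1] by rejection
    from the uniform proposal (|cos| is bounded by 1)."""
    while True:
        x = rng.random()
        if rng.random() < abs(math.cos(2 * math.pi * x)):
            return x

def signed_estimate(g, n, seed=5):
    """Estimate I = ∫_0^1 cos(2*pi*x) g(x) dx with a signed random variable:
    I = L1 * E[ sign(cos(2*pi*X)) * g(X) ],  X ~ |cos(2*pi*X)| / L1."""
    rng = random.Random(seed)
    l1 = 2.0 / math.pi                # L1 = ∫_0^1 |cos(2*pi*x)| dx
    acc = 0.0
    for _ in range(n):
        x = sample_abs_cos(rng)
        sign = 1.0 if math.cos(2 * math.pi * x) >= 0 else -1.0
        acc += sign * g(x)
    return l1 * acc / n

# Exact value of ∫_0^1 x^2 cos(2*pi*x) dx is 1/(2*pi^2) ≈ 0.0507
print(signed_estimate(lambda x: x * x, 200_000))
```

The positive parts supply a legitimate transition density for the trajectories, while the sign travels with the random variable, which is the essence of the signed-particle interpretation.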


63 64 64 65 66 69 69 70

8

Inhomogeneous Stationary Transport. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.1 Stationary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2 Iteration Approach: Forward Stochastic Model. . . . . . . . . . . . . . . . . . . . . 8.2.1 Adjoint Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3 Iteration Approach: Single-Particle Algorithm and Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.1 Averaging on Before-Scattering States . . . . . . . . . . . . . . . . . . . . 8.3.2 Averaging in Time: Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.3 The Choice of Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4 Iteration Approach: Trajectory Splitting Algorithm . . . . 8.5 Iteration Approach: Modified Backward Algorithm . . . . . . . . . 8.6 A Comparison of Forward and Backward Approaches. . . . . . . . . . . . .

73 74 76 77 78 81 82 83 85 89 89 91

Contents

xv

9

General Transport: Self-Consistent Mixed Problem . . . . . . . . . . . . . . . . . . . 9.1 Formulation of the Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 The Adjoint Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Initial and Boundary Conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.1 Initial Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.3 Carrier Number Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4 Stochastic Device Modeling: Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

93 94 95 100 100 101 102 102

10

Event Biasing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.1 Biasing of Initial and Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . 10.1.1 Initial Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.1.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2 Biasing of the Natural Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.1 Free Flight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.2 Phonon Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3 Self-Consistent Event Biasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

107 108 108 108 110 111 111 112

Part III Stochastic Algorithms for Quantum Transport 11

Wigner Function Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

12

Evolution in a Quantum Wire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.1 Formulation of the Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2 Generalized Wigner Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.3 Equation of Motion of the Diagonal Elements. . . . . . . . . . . . . . . . . . . . . . 12.4 Closure at First-Off-Diagonal Level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.5 Closure at Second-Off-Diagonal Level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.5.1 Approximation of the fF+OD Equation . . . . . . . . . . . . . . . . . . . . 12.5.2 Approximation of the fF−OD Equation . . . . . . . . . . . . . . . . . . . . 12.5.3 Closure of the Equation System . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.6 Physical Aspects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.6.1 Heuristic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.6.2 Phonon Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

123 123 124 127 128 131 132 141 142 142 142 144

13

Hierarchy of Kinetic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.1 Reduced Wigner Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.2 Evolution Equation of the Reduced Wigner Function . . . . . . . . . . . . . . 13.3 Classical Limit: The Wigner-Boltzmann Equation . . . . . . . . . . . . . . . . .

147 148 150 151

14

Stationary Quantum Particle Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.1 Formulation of the Stationary Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.1.1 The Stationary Wigner-Boltzmann Equation . . . . . . . . . . . . . . 14.1.2 Integral Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2 Adjoint Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.3 Iterative Presentation of the Mean Quantities . . . . . . . . . . . . . . . . . . . . . . .

153 155 156 157 158 159

xvi

Contents

14.4

Monte Carlo Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.1 Injection at Boundaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.2 Probability Interpretation of K˜ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.3 Analysis of A˜ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stochastic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.5.1 Stationary Wigner Weighted Algorithm . . . . . . . . . . . . . 14.5.2 Stationary Wigner Generation Algorithm . . . . . . . . . . 14.5.3 Asymptotic Accumulation Algorithm . . . . . . . . . . . . 14.5.4 Physical and Numerical Aspects . . . . . . . . . . . . . . . . . . . . . . . . . . .

160 160 161 162 163 163 165 167 172

Transient Quantum Particle Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1 Bounded Domain and Discrete Wigner Formulation . . . . . . . . . . . . . . . 15.1.1 Semi-Discrete Wigner Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1.2 Semi-Discrete Wigner Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1.3 Semi-Discrete Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1.4 Signed-Particle Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 15.2 Simulation of the Evolution Duality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3 Iteration Approach: Signed Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

175 176 177 178 182 183 186 188

Appendix A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.1 Correspondence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.2 Physical Averages and the Wigner Function . . . . . . . . . . . . . . . . . . . . . . . . A.3 Concepts of Probability Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.4 Generating Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.5 Classical Limit of the Phonon Interaction. . . . . . . . . . . . . . . . . . . . . . . . . . . A.6 Phonon Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.7 Forward Semi-Discrete Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

191 191 192 193 200 201 202 204

14.5

15

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

Part I

Aspects of Electron Transport Modeling

Chapter 1

Concepts of Device Modeling

The historical development of microelectronics is characterized by the growing importance of modeling, which has gradually embraced all stages of the preparation and operation of an integrated circuit (IC). Device modeling focuses on the processes determining the electrical properties and behavior of the individual IC elements. The trend towards miniaturization demands the development of refined physical models, aware of a growing variety of processes occurring on spatial scales ranging from microscopic to nanoscopic and on time scales from picoseconds to femtoseconds. Device modeling comprises two fundamental modules of coupled electromagnetic and transport equations. The former provides the governing forces, which arise primarily due to the differences of the potentials applied to the contacts. The latter links the material properties of the semiconductor with the carrier dynamics caused by these forces, depending on the chosen transport description [2]. The material properties, such as band structure and effective mass, characteristic vibrations of the lattice, dielectric and other characteristic coefficients, are specified by the semiconductor model. A good understanding of semiconductor physics is needed to synthesize the semiconductor model, which is then further used to formulate a hierarchy of transport models. This hierarchy unifies fundamental concepts and notions of solid-state physics, statistical mechanics, and quantum mechanics in phase space, introduced in this section.

1.1 About Microelectronics

The development of the semiconductor electronic industry can be traced back to 1947, to the invention of the transistor by a group of Bell Telephone Labs scientists, jointly awarded the 1956 Nobel Prize in Physics for their achievement [6]. The initial subjects of interest were bipolar devices based on poly-crystalline germanium, but the focus soon migrated towards mono-crystalline materials, especially silicon. The MOSFET (Metal Oxide Semiconductor Field Effect Transistor) has been established as the basic circuit element, and the field effect as the fundamental principle of operation of electronic structures [7]. Since the very beginning, circuit manufacturers have been able to pack more and more transistors onto a single silicon chip. The following facts illustrate the revolutionary onset of the process of integration: (1) a chip developed in 1962 had 8 transistors; (2) the chips developed in each consecutive year from 1963 to 1965 had 16, 32, and 64 transistors, respectively [8]. MOS memories appeared after 1968, the first microprocessor in 1971, and 10 years later computers based on large scale integration (LSI) circuits were already available. Gordon Moore [9], one of the founders of Intel, observed in accordance with (1) and (2) that the number of transistors was doubling every year, and anticipated in 1965 that this rate of growth would continue for at least a decade. Ten years later, looking at the next decade, he revised his prediction to a doubling every two years. This tendency continued for more than 40 years, so that today the most complex silicon chips have 10 billion transistors, a billion-fold increase of the transistor density over the period. The dimensions of the transistors shrink accordingly. Advances in lithography, that is to say in the accuracy of patterning, evolve in steps called technology nodes. The latter term refers to the size of the transistors in a chip, or to half of the typical distance between two identical circuit elements. Technology nodes of 100 nm and below became accessible to the industry around 2005. The Intel Pentium D processor, one of the early workhorses for desktop computers, is based on a 90 nm technology, while during the last decade lithographic processes at 45, 22, and 11 nm became established [10].

© Springer Nature Switzerland AG 2021
M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_1
The widely accepted definition of nanotechnology involves critical dimensions below 100 nm, so that we entered the nanoera of semiconductor technology during the first decade of the twenty-first century. New architectures are needed as the dimensions of transistors are scaled down, in order to keep the same conventional functions and to compensate for the new phenomena appearing as the thickness of the gate decreases. Accordingly, the structure of the IC elements becomes more complicated, aiming to maintain performance and functionality. The planar complementary metal-oxide-semiconductor (CMOS) device was suggested in 1963 as an ultimate solution for ICs [11]. By the early 1970s CMOS transistors gradually became the dominant device type for many integrated electronic applications [8]. The third dimension of device design was added to the planar technology in 1979 with the three-dimensional (3D) CMOS transistor pair, also called the CMOS hamburger [11]. Shrinking the sizes raises the influence of certain physical phenomena on the operational characteristics of the transistors. Among them are the so-called 'short channel' effects, which negatively impact the electrical control. Novel 3D structures, FinFETs, also known as Tri-gate transistors, have been developed at the end of the century to reduce short channel effects. The active region of current flow, called the channel, is a thin semiconductor (Si, or a high mobility material like Ge or a III-V compound) "fin" surrounded by the gate electrode. This allows both a better electrostatic control and a higher drive current. The FinFET architecture became the primary transistor design at the 22 nm technology node. Here, however,


quantum processes begin to modify fundamental classical transport notions. This is related to the transition from delocalized carriers, which feature in the transport through a bulk semiconductor, to carriers confined in one or more directions. Energy quantization of the confined carrier states gives rise to complicated energy selection relations. Furthermore, the carrier is localized in the directions of confinement, which leads to a momentum uncertainty and thus to a lack of momentum conservation [12]. The latter characterizes the next promising design of quantum electronic devices, with prototypical implementations based on nanosheets, nanowires, and quantum dots [13]. An important negative effect which appears on the nanometer scale is related to heat generation, which poses a further limitation to packing more transistors onto a chip. While transistors get smaller, their power must remain constant, which poses a barrier to the clock speed. The frequency has been limited to around 4 GHz since 2005 because of power density and heat issues. Furthermore, the granularity of matter begins to influence the transistor functionality. The radius of a silicon atom is around 0.2 nm, so that the active regions of current devices are composed of a few tens of atomic layers. The statistical and lithography-induced imperfections of the crystal lattice at this scale cause a significant variability of the operational characteristics of the transistors [12]. It is thus not surprising that GlobalFoundries, a company which, like Intel, AMD, Qualcomm, and others, produces very advanced silicon chips, is reluctant to invest in further shrinking of the transistors below the 7 nm node [14]. The combination of these physical and technological limitations and the related economic aspects, such as exploding fabrication costs, indicates the end of computer power growth by means of miniaturization.
Other types of innovation are needed to replace the 60-year tradition of growing the number and speed of transistors. The number of the latter is only 10 times smaller than the number of neurons which have featured human brains for 'at least 35,000 years' [14]. Yet the evolution of civilization suggests that innovations should be expected in the first place from new applications and new features due to software and interfaces in terms of displays and sensors. Emerging high performance, cloud computing, recognition, and navigation technologies are the first examples. Furthermore, new chip architectures designed for special purposes and new ways for packaging and interconnecting the chips will be pursued to increase efficiency. Finally, new types of devices and memories based on novel operational principles will be developed. All these ways of innovation rely on detailed knowledge of the processes which govern nanometer and femtosecond physics. Furthermore, these processes depend not only on the agglomeration of the individual phenomena, as is the case with the agglomeration of probabilities. Important now is their particular interrelation: quantum processes are ruled by the superposition of amplitudes. A small change of a characteristic value of one of them can give rise to a very different physical picture. A good example is the superposition of the waves originating from two sources. A small change of the frequency of one of them gives rise to a macroscopically different interference pattern. The experimental approach to the analysis of such processes by repetition of many similar measurements is a challenging task. On the contrary, such knowledge is attainable by approaches based


on simulation and modeling, which shows their importance for global physical analysis. In particular, their growing significance for the semiconductor industry can be traced in the roadmap published since 1992 [15], which traditionally contains a chapter on modeling and simulation. Further details are given in the next section.
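The sensitivity of interference to small parameter changes mentioned above is easy to reproduce numerically. The following toy sketch (illustrative only, not one of the book's models; units are chosen so that the wave speed is 1 and the wavenumber equals the frequency) evaluates the intensity of two superposed unit-amplitude waves at a fixed observation point:

```python
import math

def intensity(x, f1, f2):
    """Intensity of two superposed unit-amplitude coherent waves
    observed at position x; wave speed is normalized to 1, so the
    wavenumber equals the frequency."""
    amplitude = math.cos(2 * math.pi * f1 * x) + math.cos(2 * math.pi * f2 * x)
    return amplitude ** 2

x = 50.0
constructive = intensity(x, 1.00, 1.00)  # equal frequencies: I = 4
destructive = intensity(x, 1.00, 1.01)   # a 1% frequency change: I is nearly 0
```

A one-percent change of one frequency accumulates a phase difference of pi over fifty wavelengths, turning a bright fringe into a dark one: a macroscopic change of the interference pattern caused by a small change of a single characteristic value.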

1.2 The Role of Modeling

We summarize the factors which drive the need for development and refinement of modeling approaches for semiconductor electronics.

• The rising importance for economy and society gives rise to rapid development. Currently, MOSFET based VLSI circuits contribute about 90% of the market of semiconductor electronics. The common tendency of reducing the price/functionality ratio has been attained mainly by reducing the dimensions of the circuit elements.
• The implementation of a smaller technology node requires novel equipment, which entails considerable investments. Typically, the manufacturing plant cost doubles every 3 years or so [10]. Currently the costs exceed 10 billion dollars, which gives one dollar per transistor, a ratio characterizing the novel technologies.
• The cycles of production become shorter with every new IC generation. The transition from design to mass production (time-to-market) must be shortened, under the condition that the time for fabrication of a wafer composed of integrated circuits increases with the complexity of the involved processes imposed by miniaturization. Thus the device specifications must be close to the optimal ones in order to reduce the laboratory phase of preparation. Here modeling is needed to provide an initial optimization.
• The cost of the involved resources increases with the complexity of the ICs, which makes the application of standard trial-and-error experiments impossible or at least very difficult.

Modeling in microelectronics is divided into topics which reflect the steps in the organization of a given IC: (1) simulation of the technological processes for formation and linking of the circuit elements; (2) simulation of the operation of the individual devices; (3) simulation of the operation of the whole circuit, which typically comprises a huge number of devices.
Process simulation (1) treats different physical processes such as material deposition, oxidation, diffusion, ion implantation, and epitaxy, to provide information about the dimensions, material composition, and other physical characteristics, including the deviation and variability from the ideal (targeted) device parameters. Device simulation (2) analyzes the electrical behavior of a given device, determined by the desired operating conditions and the transport of the current through it. This information is needed at level


(3) for the development of compact models which can describe the behavior of a complete circuit within CAD packages such as [16] for IC design. Device modeling plays a central role in this hierarchy by focusing on the physical processes and phenomena which, in accordance with the material and structural characteristics, determine the current transport and thus the electrical behavior of a given device. Device modeling relies on mathematical, physical, and engineering approaches to describe this behavior, which determines the electrical properties of the final circuit. Nowadays semiconductor companies reduce cost and time-to-market by using a methodology called Design Technology Co-Optimization (DTCO), which considers the whole line from the design of the individual device and the involved material, process, and technology specifications to the functionality of the corresponding circuits, systems, and even products. DTCO establishes the link between circuit performance and transistor characteristics by circuit-to-technology computer-aided design simulations, necessary to evaluate the device impact on the specific circuits and systems which are the target of the technology optimization. In particular, modeling of new materials, new patterning techniques, new transistor architectures, and the corresponding compact models is used to consistently link device characteristics with circuit performance. In this way the variability sources which characterize the production flow of advanced technology nodes and impact the final product functionality are consistently taken into account [17]. On the device level, variability originates from lithography and other process imperfections (global variability), and from short range effects such as the discreteness of charges and the granularity of matter (local or statistical variability).
Device simulations which take variability into account are already more than four-dimensional: the degrees of freedom introduced by the statistical distribution of certain device parameters add to the three-dimensional device design. The simulation of a single, nominal device must be replaced by a set of simulations of a number of devices featuring, e.g., geometry variations (line edge roughness) and granularity (e.g., random dopant distributions). This adds a considerable additional simulation burden and raises the importance of refined simulation strategies. Often the choice of the simulation approach is determined by a compromise between physical comprehension and computational efficiency.

1.3 Modeling of Semiconductor Devices

1.3.1 Basic Modules

Device modeling comprises two interdependent modules, which must be considered in a self-consistent way. The electromagnetic module comprises equations for the fields which govern the flux of the current carriers, while the flux itself is governed by the module with the transport equations. The fields are due to external factors characterizing the environment, such as the potentials applied to the contacts, the density of


Fig. 1.1 Transport models

the carriers n and their flux J. The latter are input quantities for the electromagnetic module, which, in accordance with the Maxwell equations, determines the electric field E and the magnetic field H. The corresponding forces accelerate the carriers and thus govern their dynamics. Output quantities are the current-voltage (IV) characteristics of the device, the densities of the carriers and the current, and their mean energy and velocity. For a wide range of physical conditions, especially valid for semiconductors, the Maxwell equations can be considered in the electrostatic limit. In what follows we assume that there is no applied magnetic field, so that, in a scalar potential gauge, the electromagnetic module is reduced to the Poisson equation for the electric potential V. The equation is considered in most textbooks [2], so that we omit any details. For the purposes of this book it is sufficient to consider the module as a black box with input parameters being the carrier concentration and the boundary conditions, given by the terminal potentials and/or their derivatives, and with the spatial distribution of the electric potential as an output quantity. We focus on the transport module and in particular on the transport models, unified in the hierarchical structure shown in Fig. 1.1.
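The black-box view of the electromagnetic module can be sketched in one dimension. The following is a toy illustration under stated assumptions (uniform grid, Dirichlet boundary values at the terminals, normalized permittivity; the function name and interface are invented for this example), solving the finite-difference Poisson system with the Thomas algorithm:

```python
def solve_poisson_1d(rho, dx, v_left, v_right, eps=1.0):
    """Solve d2V/dx2 = -rho/eps on a uniform grid with Dirichlet
    boundary values v_left, v_right. rho holds the charge density at
    the interior grid points; the tridiagonal finite-difference system
    is solved by the Thomas algorithm."""
    n = len(rho)
    a = [1.0] * n   # sub-diagonal of (V[i-1] - 2 V[i] + V[i+1]) / dx^2
    b = [-2.0] * n  # diagonal
    c = [1.0] * n   # super-diagonal
    d = [-rho[i] * dx * dx / eps for i in range(n)]
    d[0] -= v_left          # move known boundary values to the RHS
    d[-1] -= v_right
    for i in range(1, n):   # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    v = [0.0] * n
    v[-1] = d[-1] / b[-1]   # back substitution
    for i in range(n - 2, -1, -1):
        v[i] = (d[i] - c[i] * v[i + 1]) / b[i]
    return [v_left] + v + [v_right]
```

With zero charge density the computed potential is the expected linear ramp between the terminal values; a real device simulator solves the same kind of system in three dimensions and couples it self-consistently to the transport module.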

1.3.2 Transport Models

The hierarchy of transport models reflects the evolution of the field, so that their presentation follows the evolution of microelectronics. Each of these models is relevant for particular physical conditions. In general, the spatial and temporal scales of carrier transport decrease in the vertical direction. At the bottom are the analytical models, valid for large dimensions and low frequencies, characterizing


the infancy of microelectronics.1 For example, the 1971 Intel 4004 processor comprises 2300 transistors based on a 10 µm silicon technology with a frequency of 400 kHz. The increased physical complexity of the next generations of devices became too complicated for an analytical description. Models based on the drift-diffusion equation, which requires a numerical treatment, became relevant. The increase of the operating frequency imposed a system of differential equations known as the hydrodynamic model. These models can be derived from phenomenological considerations, or from the leading transport model, the Boltzmann equation, under the assumption of a local equilibrium of the carriers. Further scaled devices operate at the sub-micrometer scale, where the physical conditions challenge the assumption of locality. Nonequilibrium effects brought by ballistic and hot electrons impose the search for ways of solving the Boltzmann equation itself. The equation represents the most comprehensive model describing carrier transport in terms of classical mechanics. In contrast to the previous models, which utilize macroscopic parameters, the equation describes the carriers on a microscopic level by involving the concept of a distribution function in a phase space. The current carriers are point-like particles, accelerated over Newtonian trajectories by the electromagnetic forces, a process called drift. The drift is interrupted by scattering processes, local in space and time, which change the particle momentum and thus the trajectory. The scattering, caused by lattice imperfections such as vibrations, vacancies, and impurities, is described by functions which, despite being calculated by means of quantum mechanical considerations, have a probabilistic meaning. As a result the equation is very intuitive and physically transparent, so that it can be derived by using phenomenological considerations similarly to its hydrodynamic and drift-diffusion counterparts.
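The drift-and-scatter picture can be caricatured in a few lines. This is a toy one-dimensional sketch, not the book's algorithm (which is developed in Part II): the scattering rate is taken constant, the post-scattering momentum is drawn from a toy Gaussian, and units are normalized so that q/hbar = 1:

```python
import math
import random

def free_flight_and_scatter(k, efield, gamma, dt_total):
    """One illustrative step of Boltzmann-like particle transport:
    Newtonian drift in k-space (hbar dk/dt = qE, with q/hbar = 1),
    interrupted by a scattering event whose free-flight time is drawn
    from an exponential distribution with constant rate gamma."""
    # 1 - random() lies in (0, 1], avoiding log(0)
    t_flight = min(-math.log(1.0 - random.random()) / gamma, dt_total)
    k = k + efield * t_flight        # drift over the Newtonian trajectory
    if t_flight < dt_total:          # a scattering event occurred
        k = random.gauss(0.0, 1.0)   # toy momentum randomization
    return k
```

For a vanishing scattering rate the particle simply drifts, k -> k + E*dt; for a large rate the momentum is repeatedly randomized, which is the qualitative content of the drift-scattering alternation described above.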
The simulation approaches developed to solve the Boltzmann equation distinguish the golden era of device modeling. A stable further reduction of device dimensions has been achieved with their help. For example, the Intel Core2 brand introduced in 2006 contains single-, dual- and quad-core processors which are based on a 45 nm technology and strained silicon, contain on the order of 10⁹ transistors, and reach an operating frequency of 3 GHz. The nanometer scale of the active regions of present-day devices and the terahertz scale of the electromagnetic radiation reached manifest the beginning of the nanoera of semiconductor electronics. The physical limits of the materials, key concepts, and technologies associated with the revolutionary development of the field are now approached. Novel effects and phenomena arise due to the granularity of matter, the finite size of the carriers, the finite time of scattering, ultra-fast evolution, and other phenomena which are beyond the Boltzmann model of classical transport. Quantum processes begin to dominate the carrier kinetics and need a corresponding

1 It should be noted that analytic models are widely used even now, however for the simulation of complete circuits comprising a huge number of transistors. These are the so-called compact models, partially developed by using transistor characteristics provided by device simulations and measurements.


description. This is provided at different levels of complexity by the models in the upper half of Fig. 1.1. Quantum transport models are traditionally apprehended as 'difficult challenges' [15]. Difficulties arise already in the choice between the many formulations of quantum mechanics, where the concepts and notions vary from operators in a Hilbert space to functions in a phase space. There is a considerable lack of physical transparency regarding how to link the formalism with the concrete simulation task. Once the formalism for a given model is chosen, a challenge becomes the level of comprehension, which is inversely proportional to the numerical convenience. Here the choice is between comprehensive physics and efficient numerical behavior. Finally, all quantum-mechanical computations face the so-called 'sign problem', where large numbers with opposite signs cancel to give a small mean value. A typical example is the effort to evaluate the exponential of a large negative number using the Taylor expansion.
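The cancellation in this example is easy to reproduce. The sketch below (a standard numerical-analysis illustration, not specific to this book) sums the Taylor series of e^x in double precision; for x = -30 the partial sums reach magnitudes around 10^11 before cancelling towards a result of order 10^-13, so essentially all significant digits are lost:

```python
import math

def exp_taylor(x, terms=200):
    """Naive partial sum of the Taylor series of exp(x). For large
    negative x the huge alternating terms cancel catastrophically
    in floating-point arithmetic."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n      # x^n / n! built incrementally
        total += term
    return total

exact = math.exp(-30.0)          # about 9.36e-14
naive = exp_taylor(-30.0)        # dominated by rounding noise
stable = 1.0 / exp_taylor(30.0)  # all-positive series: accurate
```

The cure here is trivial: sum the all-positive series for e^{+30} and invert. For quantum transport computations no such simple rearrangement exists, which is why the sign problem is considered a fundamental difficulty.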

1.3.3 Device Modeling: Aspects

The importance of the mathematical methods in the field of device modeling is twofold. The level of complexity of a given model relies on a consistent set of assumptions and approximations which reduce the general physical description to a computationally affordable formulation of the task. Furthermore, efficient computational approaches are needed; in particular, the models in Fig. 1.1 can be evaluated only numerically. Deterministic methods based on finite differences and finite elements have been successfully developed for the drift-diffusion and hydrodynamic models [18]. The development of numerical approaches reflects the increase of the physical level of comprehension, which in particular imposes a transition from a one-dimensional to a three-dimensional description. Furthermore, the progress of computational resources offers novel ways to improve the efficiency, based on high performance computing approaches. Classical transport at the sub-micrometer scale is characterized by a local nonequilibrium and needs a microscopic description via the distribution function in the phase space of particle coordinates and impulses. Thus, when the time variable is included for transient processes, the Boltzmann equation becomes seven-dimensional, which poses an enormous computational burden for deterministic methods.2 This, together with the probabilistic character of the equation itself, stimulated the development of stochastic approaches for finding the distribution function and the physical averages of interest. The ability of the Monte Carlo methods to solve the Boltzmann equation efficiently boosted the development of

2 Deterministic approaches to the problem have been developed recently [19, 20]. They rely on the power of modern computational platforms to compute the distribution function in cases where high precision is needed.

1.3 Modeling of Semiconductor Devices


the field. This provides a powerful tool for the analysis of different processes and phenomena, the development of novel concepts and structures, and the design of novel devices. In parallel, already four decades have been devoted to the refinement of existing algorithms and the development of novel ones which meet the challenges posed by the evolution of the physical models. One of our goals is to follow the thread from the first intuitive algorithms, devised by phenomenological considerations, to the formal iteration approach which unifies the existing algorithms and provides a platform for devising novel ones. The conformity between the physical and numerical aspects characterizing the classical models is lacking for the models in the upper, quantum part of Fig. 1.1. The reasons for this are the relative youth of this area of device modeling, which is still under development, the variety of representations of quantum mechanics, the physical and numerical complexity, and the lack of physical transparency, which prevents the development of intuitive approaches. Thus, a universal quantum transport model is missing, along with a corresponding numerical method for solving it. The NonEquilibrium Green's Functions (NEGF) formalism provides the most comprehensive physical description and thus is widely used in almost all fields of physics, from many-body theory to seismology. Introduced by methods of many-body perturbation theory [21], the theory has been re-introduced starting from the one-electron Schrödinger equation [22] in terms convenient for device modeling, so that it soon became the approach of choice of a vast number of researchers in the nanoelectronics community. The formalism accounts for both spatial and temporal correlations as well as decoherence processes such as the interaction with lattice vibrations (phonons). In general there are two position and two time coordinates, which indicates a non-Markovian evolution.
A deterministic approach to the model, based on a recursive algorithm, allows the simulation of two-dimensional problems (four spatial variables) in the case of ballistic transport, where processes of decoherence are neglected [23]. The numerical burden characterizing two-dimensional problems can be further reduced by a variable decomposition which discriminates a transport direction, assuming a potential which is homogeneous in the direction normal to the transport. The inclusion of decoherence processes such as the interaction with phonons, at different levels of approximation [24, 25], seriously impedes the computational process. The same holds true for the self-consistent coupling with the Poisson equation. In general the NEGF formalism provides a feasible modeling approach for near-equilibrium transport in near-ballistic regimes, when the coupling with the sources of scattering is weak. The next level of description simplifies the physical picture at the expense of the correlations in time. The density matrix and Wigner function formalisms, which are unitarily equivalent and linked by a Fourier transform, maintain a Markovian evolution in time, which makes them convenient for the treatment of time-dependent problems. The former is characterized by two spatial coordinates, while in the latter one of the spatial coordinates is replaced by a momentum; the latter is thus known as 'quantum mechanics in a phase space'.


The density matrix describes mixed states, consisting of statistical ensembles of solutions of the Schrödinger equation, known as pure states, and thus introduces statistics into quantum mechanics. In this way the interaction of a system A with another system B can be associated with a statistical process in A, described in terms of A-states. The density matrix is the preferred approach to many problems, such as the analysis of decoherence, and is favored by the quantum information and quantum optics communities. In particular, the semiconductor Bloch equations describe the evolution of the density matrices corresponding to interacting systems of electrons, holes (introduced in the next section), and polarization, induced by electromagnetic radiation (a laser pulse) under the decoherence processes of interaction with an equilibrium phonon system. The equation set provides the basis for the development of terahertz quantum cascade lasers and the analysis of ultra-fast coherent and decoherent carrier dynamics in photo-excited semiconductors [26–28]. Historically, the Wigner function has been introduced via the density matrix [29]. Actually, Wigner's development [30] was not aimed at a novel formulation of quantum mechanics, but rather at the analysis of quantum corrections to classical statistical mechanics. The goal was to link the Schrödinger state to a probability distribution in phase space. The theoretical notions of the early Wigner theory were derived on top of the operator mechanics [31] and were thus often regarded as an exotic extension of the latter. It took more than two decades to prove the independence of the theory. Due to the works of Moyal and Groenewold, the formalism has been established as a self-contained formulation of quantum mechanics in terms of the Moyal bracket and the star-product [32–34]. In this way the Wigner formalism is fully equivalent to the operator mechanics, which can be derived from the phase space quantum notions [31].
We follow the more intuitive historical way to comment on the basic notions of the phase space theory. In the Wigner picture both observables and states are functions of the position and the momentum, which define the phase space. The main difficulty is to establish a map between the set of wave-mechanical observables, which are functions of the position and momentum operators, and the set of real functions of position and momentum variables. Since the former do not commute, algebraically equivalent functions of these operators give rise to different operators. The correspondence is established by the Weyl transform [35], but it should be noted that alternative ways of mapping can be postulated, giving rise to different phase space theories. The interest in the Wigner function (and also in alternative phase space formulations) stems from the rising importance of problems and phenomena which require a nonequilibrium statistical description. Furthermore, many classical concepts and notions remain valid in the quantum counterpart. In particular, physical observables are represented in both cases by the same phase space functions, which gives a very useful way to outline and analyze quantum effects. It is then surprising that the Wigner formalism has been applied in the field of device modeling relatively late, just in the last decade of the twentieth century [36], especially having in mind that this quantum theory is the logical counterpart of the Boltzmann transport theory. The correspondence is so native that the Wigner function is often called a quasi-distribution function. Especially convenient is the


Fig. 1.2 Hierarchy of transport models based on the Wigner function

ability of the function to analyze collision dynamics in a variety of physical systems [37]. This has been used to associate decoherence with the coherent Wigner equation: a decoherence term is introduced, which moreover can be represented by the same operator used to describe the classical scattering dynamics. Initially it was introduced by virtue of phenomenological considerations [36], while the theoretical derivation of the generalized equation, called the Wigner-Boltzmann equation, was accomplished later as a result of a hierarchy of assumptions and approximations [3, 38]. The corresponding hierarchy of models is presented in Fig. 1.2. These models are ordered according to the complexity of the description of the processes of decoherence, while the operators describing the coherent part of the dynamics remain on a rigorous quantum level. A classical limit of these operators recovers the Boltzmann equation, placed at the bottom of the hierarchy to outline the origin of the Monte Carlo methods for Wigner transport. The first applications of the Wigner formalism to device modeling are characterized by a deterministic approach to coherent transport problems. This approach was successfully applied to the simulation of a typically quantum device, the resonant-tunneling diode [36, 39]. Certain problems related to the boundary conditions and to the stability of the discretization scheme were also resolved. The interest in stochastic approaches to Wigner transport was announced only several years after that [40]. The advantage of such approaches is that all classical approaches and algorithms for scattering from the lattice can be directly adopted in the quantum case. Thus the basic challenge for the stochastic approach is posed by the coherent problem. The work on their development was initiated in collaboration between the Bulgarian Academy of Sciences and the University of Modena [40–42].
Groups from the Technische Universität Wien and Arizona State University also joined this activity [43]. A significant contribution to the development of efficient stochastic Wigner algorithms and their applications came from the University of Paris-Sud [3]. Two Monte Carlo approaches, based on the concepts of affinity and of signed particles, have been developed at the beginning of the twenty-first century and have proved their feasibility for solving actual problems of quantum electronics, see [44] and the references therein. Here we focus on the second concept, which will be derived by a formal application of the Monte Carlo theory to an integral equation adjoint to the integral form of the Wigner equation. Very important for this derivation is the experience with the classical algorithms, which suggests following the historical path of development of those approaches.

Chapter 2

The Semiconductor Model: Fundamentals

We introduce the basic elements of the semiconductor model, which is at the root of the transport models and provides input parameters for any simulation. These elements are the band structure, the mechanisms for interaction with the lattice, and a set of material parameters characterizing the semiconductor. Details about these concepts can be found in the many textbooks on solid-state physics, such as [45].

2.1 Crystal Lattice Electrons

2.1.1 Band Structure

The crystal structure is formed by the ordered arrangement of material points (atoms, ions, or even molecules), which form symmetric, repeating patterns. The pattern with the smallest dimensions (defining the lattice constant) is called the unit cell and completely defines the symmetry of the lattice. The latter is reproduced by consecutive translations by the lattice constant. A unit cell can contain several material points of different types. Since the crystallographic features of crystals are not important for the subject of this book, we focus on the picture of a single atom in a cubic unit cell and often refer to cells as atoms to invoke the intuitive aspects of the considerations. An electron evolving in a crystal lattice is subject to the action of the potentials originating from the atoms of the lattice as well as from the rest of the electrons. The evolution is described exactly by an immensely large many-particle Schrödinger equation, which is reduced to a single-particle problem by a set of assumptions, namely that the lattice atoms are perfectly ordered and stay in fixed positions without 'feeling' the evolution of the surrounding electrons. The many-particle potential is replaced by an effective potential, which depends on the single electron

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_2


position r only. Furthermore, the crystal structure is assumed periodic, with a large but finite size. The length L in a given direction then determines a discrete Fourier transform with a discrete wave space k characterized by a step Δk = 2π/L. Since L is large, Δk is very small and the k-space can be regarded as continuous for all three components of the k vector. The effective potential is then invariant under a translation by the elementary vector a associated with the periodic lattice structure [46] and bears all symmetry features of the latter. Important properties of the eigenvalues ε_l and the eigenfunctions (states) Ψ_l(r) of the reduced Schrödinger equation follow from there. First, the number l is discrete, l = 0, 1, . . ., and both energies and states depend on the wave vector k. Second, because of the translation invariance with respect to the vector a, energies and states are periodic with respect to the elementary vector b = 2π/a, which characterizes the so-called inverse lattice. In this way the whole information about the band structure, the function ε_l(k), is contained in a finite domain of k values, which can be chosen to reflect all symmetry features of the lattice, in particular −b/2 ≤ k ≤ b/2, and is termed the first Brillouin zone. In this way l and k become characteristic quantum numbers. For particular ε_l and Ψ_l(r) we say that the electron is in the (l, k) state, characterized by the lth energy band and the wave vector k. Actually, according to the Pauli exclusion principle, a given state can hold up to two electrons with opposite spin. We say that a system of N electrons is in the ground state if the lowest N/2 states are occupied. In general, electrons occupy states up to a given band l, while states with higher band number remain empty. Under general physical conditions part of the ground state electrons are excited to occupy states with higher energy.
In a typical semiconductor, band l, called the valence band, is 'almost' completely filled with electrons, while the next band l + 1, called the conduction band, is almost empty. The energy gap between them, called the 'band gap', is forbidden for electrons. The width ε_g of the band gap is an important characteristic parameter, which determines many physical properties of the semiconductor. Valence electrons can gain energy from certain physical processes and be transferred to the conduction band. They leave empty states in the valence band, called holes. Under the action of an applied electric field, conduction electrons and holes move in opposite directions and thus the latter are interpreted as positively charged carriers. Impurity atoms break the ideal periodicity of the lattice and create states in the band gap, which can provide electrons to the conduction band (donor states) or accept electrons (provide holes) from the valence band (acceptor states). It can be shown that ε_j(k), j = l, l + 1, is an even analytic function of k [46]. It can be calculated by using different approaches; however, this is necessary only in the cases of high fields, which excite carriers with large wave vectors k. Under quite general conditions it is sufficient to know ε only around the extremum points. The function can be expanded around such a point k_0 as follows:

$$\varepsilon_l(\mathbf{k}) = \varepsilon_l(\mathbf{k}_0) + \sum_{\mu\nu} \frac{\hbar^2 k_\mu k_\nu}{2m^*_{\mu\nu}}, \qquad \frac{1}{m^*_{\mu\nu}} = \frac{\partial^2 \varepsilon_l(\mathbf{k}_0)}{\partial k_\mu \partial k_\nu}, \qquad \mu,\nu = x,y,z \tag{2.1}$$
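The content of the expansion (2.1) can be illustrated with a toy one-dimensional cosine band (our construction, not a real material; the lattice constant and band width below are assumed values). The script checks the periodicity in the inverse lattice vector b = 2π/a and extracts the effective mass at the band minimum by a finite-difference second derivative:

```python
import math

# Toy 1D cosine band (illustration only; parameter values are assumed):
# eps(k) = -2*t*cos(k*a), periodic in the inverse lattice vector b = 2*pi/a.
a = 5e-10               # lattice constant, m
t = 1.0e-19             # half of the band width, J
hbar = 1.054571817e-34

def eps(k):
    return -2.0 * t * math.cos(k * a)

b = 2.0 * math.pi / a   # inverse lattice vector

# Effective mass at the band minimum k0 = 0, from Eq. (2.1):
# m* = hbar^2 / (d^2 eps / dk^2), evaluated by a central difference.
dk = 1e6
m_star = hbar**2 * dk**2 / (eps(dk) - 2.0 * eps(0.0) + eps(-dk))

print(m_star)                        # ~ hbar^2/(2*t*a^2), about 0.24 electron masses
print(abs(eps(1e9 + b) - eps(1e9)))  # periodicity: essentially zero
```

Note that near the band edge the cosine band is parabolic, so (2.1) with the extracted m* reproduces it; away from the edge the deviation is exactly the kind of nonparabolicity discussed next.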


As suggested by the case of spherical symmetry, the quantity m* can be interpreted as a particle mass and ħk as a particle momentum. Thus m* is called the effective mass, which in the general case becomes a tensor. In particular, if the surfaces of constant energy are ellipsoidal, we talk about longitudinal and transverse effective masses. Effects of nonparabolicity appearing away from k_0 are accounted for by a perturbation approach, which provides a correction term α(ε_g, m*)ε_l(k)^2 added to the left hand side of (2.1). The effective mass associated with the holes is negative, which will be discussed shortly. Thus, an important effect of the periodic potential in the crystal is the change of the free electron mass. The quantum states l, k ≈ k_0 are associated with particles having effective mass m* and momentum p = ħk. It should be noted that due to the periodicity this correspondence between momentum and wave vectors holds only within the first Brillouin zone.
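With the correction term the dispersion takes the Kane-type form ε(1 + αε) = ħ²k²/2m*, which can be solved for the physical root of ε. A minimal numerical sketch (the GaAs-like mass and α value below are assumed for illustration only):

```python
import math

hbar = 1.054571817e-34
qe = 1.602176634e-19                 # elementary charge, C (1 eV = qe joules)
m_star = 0.063 * 9.1093837015e-31    # assumed GaAs-like effective mass, kg
alpha = 0.64 / qe                    # assumed nonparabolicity parameter, 1/J

def gamma(k):
    """Parabolic kinetic term hbar^2 k^2 / (2 m*), the sum in Eq. (2.1)."""
    return (hbar * k) ** 2 / (2.0 * m_star)

def eps_np(k):
    """Physical root of eps*(1 + alpha*eps) = gamma(k)."""
    g = gamma(k)
    return (math.sqrt(1.0 + 4.0 * alpha * g) - 1.0) / (2.0 * alpha)

k = 5e8  # 1/m
print(eps_np(k) / qe, gamma(k) / qe)  # nonparabolic energy lies below the parabolic one
```

At small k the two curves coincide; the correction becomes appreciable only for the large wave vectors excited at high fields, in agreement with the remark above.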

2.1.2 Carrier Dynamics

We assume that besides the periodic potential there is an external electric field E = −∇φ(r), which varies slowly on the scale of the lattice constant. More precisely, according to quantum mechanics, an evolving electron is associated with a wave packet Ψ_l localized in space. As will be shown later, if the potential remains at most a quadratic function of position in the region of the wave packet, the evolution is classical. In this case we can use the Hamilton equations

$$\frac{d\mathbf{p}}{dt} = -\nabla_{\mathbf{r}} H(\mathbf{r},\mathbf{p}), \qquad \frac{d\mathbf{r}}{dt} = \nabla_{\mathbf{p}} H(\mathbf{r},\mathbf{p}), \tag{2.2}$$

which describe the motion of a classical particle. The adjoint coordinates r and p define the phase space. The Hamiltonian H of the evolving carrier is given by the sum of the kinetic and potential energies, H = ε(p/ħ) − eφ(r), with −e the electron charge. In this case, with the help of (2.1), one obtains:

$$\frac{d\mathbf{p}}{dt} = \hbar\frac{d\mathbf{k}}{dt} = -e\mathbf{E}, \qquad \mathbf{v} = \frac{d\mathbf{r}}{dt} = \frac{1}{\hbar}\nabla_{\mathbf{k}}\,\varepsilon(\mathbf{k}). \tag{2.3}$$

In the case of spherical symmetry we consistently arrive at the standard relation between velocity, momentum, and mass: v = p/m*. Thus the second equation in (2.3) generalizes this relation to the case of a tensor effective mass. Due to the correspondence ħk = p between wave vector and momentum, we will use both quantities interchangeably. As a rule, band structures and scattering functions are calculated in k coordinates, while the carrier dynamics uses p coordinates.
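For a constant field, Eqs. (2.3) integrate in closed form over a free flight; this exact update is the drift building block of the particle algorithms discussed later in the book. A minimal sketch for a spherical parabolic band (the field and effective mass values are assumed for illustration):

```python
# Exact free-flight update from Eqs. (2.3) for a constant field E along x,
# spherical parabolic band: dp/dt = -e*E, v = p/m*. Parameters are illustrative.
e = 1.602176634e-19
m_star = 0.3 * 9.1093837015e-31   # assumed effective mass, kg
E = -1.0e5                        # field along x, V/m (assumed)

def free_flight(p0, x0, dt):
    """Advance momentum and position exactly over a flight of duration dt."""
    force = -e * E                           # constant force on the electron
    p1 = p0 + force * dt
    x1 = x0 + (p0 / m_star) * dt + 0.5 * (force / m_star) * dt ** 2
    return p1, x1

p, x = free_flight(0.0, 0.0, 1e-13)
print(p, x)  # momentum and displacement gained during one flight
```

Because the update is exact, composing two flights of duration dt/2 reproduces a single flight of duration dt, a property exploited whenever a flight is split at a scattering event.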


2.1.3 Charge Transport

The velocity v is an odd function of k, because the function ε is even. Thus the current in an entirely full band is zero, −Σ_k e v(k) = 0. Hence, if the band is almost full, the current is equal to the current of virtual positively charged particles associated with the empty states:

$$\mathbf{I} = \sum_{\mathbf{k}_1} \left(-e\,\mathbf{v}(\mathbf{k}_1)\right) = \sum_{\mathbf{k}_2} e\,\mathbf{v}(\mathbf{k}_2), \tag{2.4}$$

where k_1 (k_2) runs over the filled (empty) electron states. If, furthermore, the holes are around a maximum of the valence band and we consider for convenience the spherically symmetric case, it holds:

$$\varepsilon_l(\mathbf{k}) = \varepsilon_l(\mathbf{k}_0) + \frac{\hbar^2 k^2}{2m^*} = \varepsilon_l(\mathbf{k}_0) - \frac{\hbar^2 k^2}{2|m^*|}, \tag{2.5}$$

since the effective mass at a band maximum is negative.

This result is directly generalized for a tensor effective mass. From here and (2.3) it follows that:

$$\frac{d\mathbf{v}}{dt} = -\frac{1}{|m^*|}\,\hbar\frac{d\mathbf{k}}{dt} = \frac{e}{|m^*|}\mathbf{E}. \tag{2.6}$$

The obtained equation describes the acceleration of a positively charged particle with a positive effective mass |m*|. These considerations show that we can interpret holes as quasi-particles. The energy of the holes increases in the top-to-bottom direction, i.e., with the distance from the band maximum. If the dominant part of the current in a semiconductor is due to holes, we say that the conductivity is of p type; otherwise it is of n type. When both contributions are equal, the semiconductor is intrinsic. These concepts conveniently allow one to consider a small number of holes instead of a huge number of electrons. Holes are 'physical' particles, as they are introduced by phenomenological considerations. In what follows we will also consider numerical particles, necessary not only for computational purposes, but also for the analysis of certain physical processes. From an algorithmic point of view holes are treated in the same way as electrons. Thus most of the results derived in the sequel hold for both p- and n-type semiconductors, which allows us to speak in what follows simply of 'electrons' or carriers.

2.2 Lattice Imperfections

The carriers in an ideal periodic crystal behave under the action of an electric field similarly to free electrons. According to (2.3) they are accelerated without any resistance, which would imply a permanent growth of the current. In reality


there are variations in the periodic potential, due to charged or neutral impurities, dislocations, and vibrations of the lattice atoms. These control the growth of the current by destroying the streaming motion of the carriers. Different types of interactions with lattice vibrations provide the most important mechanisms limiting the growth of the energy and momentum of the carriers. Such vibrations exist at any temperature and are usually considered as an equilibrium thermostat, which supplies energy to low-energy carriers, reduces the energy of hot carriers, and randomizes the direction of their motion, driving in this way the electron system towards equilibrium. Lattice vibrations can be conveniently presented in terms of elementary collective excitations, called phonons. A review of the derivation of the probability density for interaction, or scattering, with phonons is presented next.

2.2.1 Phonons

The deviation of the atoms from their equilibrium positions (nodes) is assumed small. The potential energy of a given atom is then approximated by the quadratic term of the corresponding Taylor expansion (harmonic approximation). The deviation u of a given atom at a node at position R can then be described as a superposition of the so-called normal vibrational modes

$$\mathbf{u}(\mathbf{R},t) = \sum_{\mathbf{q}} \left(\frac{\hbar}{2V\rho\omega_{\mathbf{q}}}\right)^{1/2} \mathbf{e}_{\mathbf{q}} \left(a_{\mathbf{q}} + a^+_{-\mathbf{q}}\right) e^{i\mathbf{q}\mathbf{R}}, \tag{2.7}$$

where all atoms move with the same frequency. Here V and ρ are the volume and the density of the crystal, q is a wave vector, ω_q is the frequency of vibrations, e_q is the unit vector of polarization, and a_q, a⁺_{−q} are variables which satisfy the following equations:

$$\frac{da_{\mathbf{q}}}{dt} = i\omega_{\mathbf{q}} a_{\mathbf{q}}. \tag{2.8}$$

The Hamilton function H of the oscillations is given by the sum of the kinetic and potential energies of all atoms in terms of the variables u and du/dt. Then, according to the last two equations, H becomes a function of a and a⁺. It can be shown that the latter are canonical variables, satisfying the Hamilton equations1 (2.2) for H(a, a⁺). As discussed in Appendix A.1, the transition to a quantum-mechanical description is established by replacing a and the adjoint variable a⁺ by operators, which satisfy the commutator relations:

$$\hat a_{\mathbf{q}} \hat a^+_{\mathbf{q}'} - \hat a^+_{\mathbf{q}'} \hat a_{\mathbf{q}} = \delta_{\mathbf{q},\mathbf{q}'}. \tag{2.9}$$

1 A constant factor is introduced for convenience.


The Hamiltonian operator H transforms into the compact form

$$H = \sum_{\mathbf{q}} \hbar\omega_{\mathbf{q}} \left(\hat a^+_{\mathbf{q}} \hat a_{\mathbf{q}} + \frac{1}{2}\right), \qquad \hat a^+_{\mathbf{q}} \hat a_{\mathbf{q}} |n_{\mathbf{q}}\rangle = n_{\mathbf{q}} |n_{\mathbf{q}}\rangle. \tag{2.10}$$

The eigenfunctions of H have the general form

$$\prod_{\mathbf{q}} |n_{\mathbf{q}}\rangle, \tag{2.11}$$

where |n_q⟩ are the eigenfunctions of â⁺_q â_q corresponding to eigenvalues n_q, which are natural numbers. The eigenvalues of the energy are then of the form

$$\sum_{\mathbf{q}} \hbar\omega_{\mathbf{q}} \left(n_{\mathbf{q}} + 1/2\right). \tag{2.12}$$

According to these results, the lattice vibrations are decomposed into a sum of elementary excitations given by n_q phonons with a wave vector q. The equilibrium number n̄(q) is evaluated by using the fact that the energy

$$E(n,\mathbf{q}) = \hbar\omega_{\mathbf{q}}(n_{\mathbf{q}} + 1/2) \tag{2.13}$$

is distributed with a probability P(n, q) proportional to exp(−E(n, q)/kT), where k is the Boltzmann constant and T is the temperature:

$$P(n_{\mathbf{q}}) = \frac{e^{-E(n,\mathbf{q})/kT}}{\sum_n e^{-E(n,\mathbf{q})/kT}} = \left(1 - e^{-\hbar\omega_{\mathbf{q}}/kT}\right) e^{-\hbar\omega_{\mathbf{q}} n_{\mathbf{q}}/kT}, \qquad \bar n(\mathbf{q}) = \sum_n n P(n,\mathbf{q}) = \left(e^{\hbar\omega_{\mathbf{q}}/kT} - 1\right)^{-1}. \tag{2.14}$$
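The occupation number (2.14) is straightforward to evaluate; a short numerical check (the optical phonon energy of 63 meV, close to the silicon value, is assumed here for illustration) shows the typical magnitudes:

```python
import math

kB = 1.380649e-23       # Boltzmann constant, J/K
qe = 1.602176634e-19    # J per eV

def phonon_number(hw, T):
    """Equilibrium phonon number (exp(hw/kT) - 1)^(-1) of Eq. (2.14),
    with hw = hbar*omega the phonon energy in joules."""
    return 1.0 / math.expm1(hw / (kB * T))

hw = 0.063 * qe   # assumed optical phonon energy, ~63 meV
print(phonon_number(hw, 300.0))    # sparse occupation at room temperature (~0.1)
print(phonon_number(hw, 3000.0))   # approaches the classical value kT/hw - 1/2
```

The small room-temperature value is why phonon emission, weighted by n̄ + 1 in the scattering rates derived below, dominates over absorption for energetic carriers.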

The function n̄(q) evaluated in this way is called the Bose-Einstein function. The following relations hold:

$$\hat a_{\mathbf{q}} |n_{\mathbf{q}}\rangle = \sqrt{n_{\mathbf{q}}}\,|n_{\mathbf{q}} - 1\rangle, \qquad \hat a^+_{\mathbf{q}} |n_{\mathbf{q}}\rangle = \sqrt{n_{\mathbf{q}} + 1}\,|n_{\mathbf{q}} + 1\rangle. \tag{2.15}$$

These relations reveal the meaning of the operators â and â⁺, which act as phonon annihilation and creation operators, respectively. There are three polarization vectors in a crystal with a single atom in the elementary cell, which in the isotropic case become two transverse and one longitudinal vector with respect to the direction of q. These are the three acoustic phonons, having the property that ω_q → 0 as q → 0. If the cell has N atoms, there are N − 1 optical phonons, characterized by a frequency which depends only weakly on the wave vector q. All atoms in the elementary cell shift equally in the case of
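Relations (2.15), the commutator (2.9), and the number operator of (2.10) can be verified directly in a truncated number basis of a single mode. The following self-contained check (our construction; the truncation size N is arbitrary) represents states as coefficient lists:

```python
import math

N = 12  # truncation of the number basis of one phonon mode (illustration only)

def basis(n):
    return [1.0 if i == n else 0.0 for i in range(N)]

def ann(state):
    """Annihilation: a|n> = sqrt(n)|n-1>, Eq. (2.15)."""
    return [math.sqrt(n + 1) * state[n + 1] for n in range(N - 1)] + [0.0]

def cre(state):
    """Creation: a+|n> = sqrt(n+1)|n+1>, Eq. (2.15)."""
    return [0.0] + [math.sqrt(n + 1) * state[n] for n in range(N - 1)]

# Commutator (2.9): (a a+ - a+ a)|n> = |n>, away from the truncation edge.
for n in range(N - 1):
    e_n = basis(n)
    comm = [u - v for u, v in zip(ann(cre(e_n)), cre(ann(e_n)))]
    assert max(abs(c - b) for c, b in zip(comm, e_n)) < 1e-12

# Number operator (2.10): a+ a |n> = n |n>, up to roundoff.
print([round(cre(ann(basis(n)))[n], 10) for n in range(4)])  # -> [0.0, 1.0, 2.0, 3.0]
```

The truncation only affects the highest state |N−1⟩, which is why the commutator check stops one level below the edge; in the infinite-dimensional space (2.9) holds everywhere.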


acoustic phonons, while in the case of optical phonons the atoms oscillate in opposite directions, so that their center of mass remains fixed. Thus vibrational modes are described by the wave number q and the number j = 1, . . . , N. It is said that the phonons are in mode (q, j). A further distinction between phonon types is introduced by the form of the interaction energy with the carriers. There are optical phonons of deformation potential type, polar optical phonons existing in ionic crystals, and phonons corresponding to piezoelectric forces in crystals whose lattice lacks an inversion center of symmetry.

2.2.2 Phonon Scattering

The vibrational movement of the heavy atoms around the lattice nodes is followed practically immediately by the light-mass carriers. In other words, the change of the energy of the carriers can be accounted for by adding a time-dependent term H_{e−ph} to the Hamiltonian of the ideal lattice. This means that the potential energy of the carriers in the field caused by the lattice vibrations depends on the displacement u and can be expanded in a Taylor series. Since u is small, the first, linear term has the dominant contribution. In this way H_{e−ph} is small and linear with respect to u, which is linear with respect to â and â⁺. After the transition to a quantum-mechanical description, H_{e−ph} and in particular u, (2.7), become operators. Then H_{e−ph} can be treated as a perturbation which causes transitions between the unperturbed (corresponding to an ideal crystal) carrier states. Perturbation theory derives the expression

$$S(i \to f) = \frac{2\pi}{\hbar}\, |\langle f|H_{e-ph}|i\rangle|^2\, \delta(E_f - E_i)\, df \tag{2.16}$$

for the number of transitions per unit time from an initial carrier-phonon state |i⟩ to states in the volume df around a given final state |f⟩. The derivation of (2.16) uses the following limit:

$$\delta(\xi) = \lim_{t\to\infty} \frac{[\sin(\xi t)]^2}{\pi \xi^2 t}. \tag{2.17}$$

Here ξ = ΔE/ħ is the frequency characterizing the energy of the interacting carrier-phonon system, and t is the time from the beginning of the interaction causing the transition. The limit shows that at times much larger than 1/ξ, the probability for a transition becomes independent of time and depends only on the initial and final system states. Equivalently, at this level of physical description, the interaction is instantaneous as compared to the evolution time of the carrier-phonon system. This is a basic assumption in the heuristic picture of classical Boltzmann transport. Certain models where the carrier-phonon interaction has a finite duration will also be considered in this book. It will be shown that the physics beyond the instantaneous-interaction picture is rich in quantum phenomena.
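The limit (2.17) can be probed numerically: for any finite t the function sin²(ξt)/(πξ²t) integrates to one over ξ, while its peak at ξ = 0 grows as t/π, so the transition weight concentrates on energy-conserving final states. A simple midpoint-rule check (our sketch):

```python
import math

def d_t(xi, t):
    """Nascent delta function sin^2(xi*t)/(pi*xi^2*t) of Eq. (2.17)."""
    if xi == 0.0:
        return t / math.pi   # limiting value as xi -> 0
    return math.sin(xi * t) ** 2 / (math.pi * xi ** 2 * t)

def integrate(f, lo, hi, n=100001):
    """Midpoint rule on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

for t in (10.0, 100.0):
    # total weight stays ~1 while the peak height grows linearly with t
    print(t, integrate(lambda xi: d_t(xi, t), -50.0, 50.0), d_t(0.0, t))
```

The small remaining deficit from 1 comes from the tails outside the integration window and shrinks as 1/t, consistent with the statement that the transition probability becomes time-independent at large t.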


The initial and final states of the unperturbed (noninteracting) system are of the form

$$\Psi_{l'}(\mathbf{k}',\mathbf{r}) \prod_{\mathbf{q}',j'} |n'_{\mathbf{q}',j'}\rangle, \qquad \Psi_{l''}(\mathbf{k}'',\mathbf{r}) \prod_{\mathbf{q}',j'} |n''_{\mathbf{q}',j'}\rangle. \tag{2.18}$$

The matrix element in (2.16) can be factorized into a carrier part, containing Ψ, and a phonon part, which is linear with respect to the operators â_{q,j} and â⁺_{q,j}. Then, with the help of (2.15), we can identify the following terms

$$\delta_{j,j'}\delta_{\mathbf{q},\mathbf{q}'} \left( \sqrt{n_{\mathbf{q},j}}\,\delta_{n'',n'-1} + \sqrt{n_{\mathbf{q},j}+1}\,\delta_{n'',n'+1} \right), \tag{2.19}$$

which give rise to selection rules imposed by the interaction. Any process of interaction either annihilates (the first term in the bracket) or creates one (and only one) phonon. Hence, the delta function in (2.16) depends on the initial and the final carrier energies and on the energy ±ħω_{j,q} of the created (destroyed) phonon, and thus establishes the conservation of the energy of the carrier-phonon system. The second important assumption of the Boltzmann theory is that the phonons are not disturbed by the interaction with the carriers and remain in equilibrium. Their number is large, or alternatively there are other processes which allow one to consider the phonon system as a thermostat. After averaging with the help of (2.14), n_{q,j} is replaced by the equilibrium value n̄(q, j). The carrier part of the matrix element is given by an integral over R − r, where R is the coordinate of the lattice node and r is the relative coordinate. The main result of the integration of the functions depending on R, such as the exponential in (2.7), is the conservation of momentum, δ(k'' − k' ± q): a part of the initial carrier momentum ħk' is transferred to (accepted from) the created (annihilated) phonon having a momentum ħq. The integral over the relative coordinates gives rise to further selection rules depending on the symmetry of the interaction and the functions Ψ. If the involved functions depend only weakly on r within a unit cell, which is the case for the long-range forces of polar and piezoelectric phonons, the integral is nonzero only if l' = l'', because of the orthogonality of Ψ. This corresponds to intraband transitions. Interband transitions occur when l' ≠ l'', which is the case, e.g., for deformation potential phonons. These considerations give rise to the following expression for the total rate of (out-)scattering of a carrier in the state (k, l) by a given type of phonons in mode (q, j):

$$\lambda(\mathbf{k},l,j) = \sum_{l'} \int S(\mathbf{k},l,\mathbf{k}',l')\,d\mathbf{k}' = \sum_{\pm,\,l'} \int A(\mathbf{k},l,\mathbf{k}',l',j)\, \delta(\mathbf{k}-\mathbf{k}'\pm\mathbf{q}) \left(\bar n(\mathbf{q},j) + \frac{1}{2} \mp \frac{1}{2}\right) \delta\!\left(\varepsilon(\mathbf{k},l) - \varepsilon(\mathbf{k}',l') \pm \hbar\omega_{\mathbf{q},j}\right) d\mathbf{k}'. \tag{2.20}$$


The function A is determined by the type of interaction. The scattering rate S and the total out-scattering rate λ are basic quantities in the Boltzmann models of interaction with phonons and other sources of scattering. For convenience we omit the band indices in what follows, which does not affect the generality of the considerations.
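For a spherical parabolic band and a dispersionless optical phonon with a constant matrix element, carrying out the k' integration in (2.20) leaves the textbook energy dependence: the absorption channel scales as n̄·√(ε + ħω) and the emission channel as (n̄ + 1)·√(ε − ħω), the latter only above the emission threshold ε > ħω. A sketch with assumed constants (the prefactor C lumps A and the density-of-states factors and is arbitrary here):

```python
import math

kB = 1.380649e-23
qe = 1.602176634e-19

hw = 0.036 * qe        # assumed optical phonon energy, ~36 meV
T = 300.0
n_bar = 1.0 / math.expm1(hw / (kB * T))   # equilibrium number, Eq. (2.14)
C = 1.0                # arbitrary coupling prefactor (units suppressed)

def out_rate(eps):
    """Energy dependence of lambda in Eq. (2.20) for a parabolic band and a
    constant matrix element (sketch): absorption plus emission above threshold."""
    absorption = n_bar * math.sqrt(eps + hw)
    emission = (n_bar + 1.0) * math.sqrt(eps - hw) if eps > hw else 0.0
    return C * (absorption + emission)

for ev in (0.01, 0.05, 0.20):
    print(ev, out_rate(ev * qe))  # the emission channel switches on above 36 meV
```

The step in λ at the emission threshold, weighted by n̄ + 1 versus n̄, is the feature exploited by the single-particle algorithms of Part II when the free-flight duration is sampled from λ.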

Chapter 3

Transport Theories in Phase Space

This chapter introduces the Boltzmann and the Wigner transport equations, which will be approached with the numerical theory of the Monte Carlo method in order to devise algorithms for solving them. For this purpose we focus on the heuristic aspects of the physical assumptions about the transport system, which give rise to clearly formulated mathematical models. The derivation of the classical Boltzmann equation is presented in terms of particles and trajectories. At first glance it seems that these concepts become obsolete in the quantum mechanics in phase space, formally derived from the abstract operator mechanics. However, the Wigner equation has formal structural similarities to its Boltzmann counterpart, which prove to be essential for associating particles with quantum transport in phase space.

3.1 Classical Transport: Boltzmann Equation

3.1.1 Phenomenological Derivation

The phenomenological derivation of the Boltzmann equation describing the evolution of carriers in semiconductors plays a fundamental role for the development of the Monte Carlo methods in the field. The derivation is based on the central assumptions of classical transport: the electric field varies slowly in space, which allows one to consider the carrier states as point-like objects, accelerated by the electric field over Newtonian trajectories (also called free flight or drift process), see Sect. 2.1.2. The free flight is interrupted by processes of scattering, which are local in both space and time, that is, only the momentum of the carrier is changed. The scattering is described by scattering rates, which are nonnegative and have a probabilistic interpretation. The carrier evolution is described by a function f giving the electron distribution in the phase space.

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_3

The equation for f is derived by phenomenological considerations accounting for the balance of the processes governing the carrier evolution. Let f(p, r, t) dp dr be the number of carriers in the phase volume Ω = dp dr around the point p, r at time t. At the next time t + dt these carriers have drifted to a volume Ω(t + dt) around a new point p(t + dt), r(t + dt). According to a fundamental result of classical mechanics, the theorem of Liouville, the phase volume is incompressible, that is, Ω(t + dt) = Ω. Thus other carriers can appear in this volume only by processes of scattering, so that

\frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{dp}{dt}\cdot\nabla_{p} f + \frac{dr}{dt}\cdot\nabla_{r} f = \left(\frac{\partial f}{\partial t}\right)_{col}     (3.1)

The terms containing the ∇ operators are identified as fluxes through the boundaries of the phase space volume. The acceleration due to the electric force gives rise to changes in the momentum subspace, while the velocity determines the flux through the real space boundaries. Newtonian trajectories are characteristics of the left-hand side of the equation. Without scattering there is a conservation of the number of carriers along these trajectories: f(p(t), r(t), t) = f(p(t + dt), r(t + dt), t + dt). The scattering redistributes the carriers between the momentum part of Ω and the rest of the local (having common spatial coordinates with Ω) part of the phase space:

\left(\frac{\partial f}{\partial t}\right)_{col} = \int dp'\, S(p', p)\, f(p', r, t) - \lambda(p)\, f(p, r, t)     (3.2)

The second term accounts for all scattering events out of Ω. Their number is the product of the scattering rate and the number of available carriers inside, which can take part in the scattering process. The first term is responsible for the inverse process: carriers from any point (p', r) ∉ Ω are redistributed to (p, r) ∈ Ω, and their number is proportional to the product of S and f at the initial point p', r. These considerations are based on the important assumption that the final state can accept the carrier. Indeed, the latter remains a quantum object despite the particle approximation, and thus the Pauli exclusion principle must be obeyed. This can be accounted for by a factor 1 − f in the final state, which makes the equation nonlinear. Usually the low density limit f ≪ 1 is assumed, so that this factor can be neglected and the equation remains linear.

3.1.2 Parametrization

A Newtonian trajectory is fixed by an initialization point p, r at a given time t and is parametrized by the evolution time t', which can be smaller or larger than t. The order is irrelevant: r(t'), p(t') and p(t'), r(t') denote the same trajectory. With the help of (2.3), the segment of the trajectory between the times t2 < t1 can be written in two ways.

P(t_e) = P(t_2) + \int_{t_2}^{t_e} F(R(y))\, dy = P(t_1) - \int_{t_e}^{t_1} F(R(y))\, dy

R(t_e) = R(t_2) + \int_{t_2}^{t_e} v(P(y))\, dy = R(t_1) - \int_{t_e}^{t_1} v(P(y))\, dy     (3.8)

Because of the uniqueness of the solution, the above relations show two ways of initialization: at time t2, by using P(t2), R(t2), t2 with te > t2, or at time t1, by using P(t1), R(t1), t1 with te < t1. At the time te both of them give one and the same point P(te), R(te). Accordingly, the first way is called normal initialization, while the second one is called backward initialization. Since the phase space volume is conserved along the trajectories, dp dr = dP(t') dR(t'), it holds that

\int dp\, dr\, \phi(p, r, P(t'), R(t')) = \int dp'\, dr'\, \phi(P'(t), R'(t), p', r')     (3.9)

Here φ is a given function, P(τ), R(τ) is a backward trajectory initialized by p, r at time t, while P'(τ), R'(τ) is a normal trajectory, initialized by p' = P(t'), r' = R(t'). Equation (3.9) is often used in time transforms which conserve the mean values of the physical quantities.

3.1.3 Classical Distribution Function

The Boltzmann equation can alternatively be derived from the principles of statistical mechanics. An important notion, the Poisson bracket, is introduced, which is needed to postulate quantum mechanics. The Poisson bracket describes the time evolution (equation of motion) of the dynamical function of a generic physical quantity A(t). There are two points of view: (1) A(t) = A(r(t), p(t)) is the old function A(r, p) in the novel coordinates r(t), p(t); (2) A(t) = A(t, r, p) is a novel function in the old coordinates. In the first case we postulate that the laws of mechanics do not depend on time: A remains the same function for both old and novel coordinates. Then the equation of motion of A(t) is derived with the help of (2.2):

\frac{dA}{dt} = [A, H]_P = \frac{\partial A(r, p)}{\partial r}\cdot\frac{\partial H(r, p)}{\partial p} - \frac{\partial A(r, p)}{\partial p}\cdot\frac{\partial H(r, p)}{\partial r}, \qquad [r, p]_P = 1     (3.10)

The Poisson bracket [·, ·]_P links the evolution of the dynamical functions with the Hamiltonian H. It can be shown that the bracket conserves the algebraic structure (automorphism) of the dynamical functions. Alternatively, in the second case we need to postulate the law for the evolution of A(t, r, p). If it is chosen in accordance with (3.10), the automorphism leads to conservation of the laws of mechanics: the novel function of the old coordinates is the old function in the novel coordinates!

A(t, r, p) = A(r(t), p(t))     (3.11)

In this way the Poisson bracket guarantees consistency in the description of the evolution of the dynamical functions. A statistical description is introduced when, for example, the coordinates of a mechanical object, such as a particle, are not known exactly, or when a system of particles is considered. The fundamental postulate of classical statistical mechanics asserts that the state of a given system is entirely determined by a function f(r, p) with the following properties:

f(r, p) \ge 0, \qquad \int dr\, dp\, f(r, p) = 1     (3.12)

The mean values of the physical quantities A are obtained by phase space integrals:

\langle A \rangle(t) = \int dr\, dp\, A(t, r, p)\, f(r, p)     (3.13)

According to this equation, the evolution of any particular physical quantity must first be evaluated and then averaged with f. The equation can be conveniently reformulated with the help of (3.11) together with the Liouville theorem:

\langle A \rangle(t) = \int dr\, dp\, A(r, p)\, f(r, p, t)     (3.14)

The time has been associated to the function f, so that now the knowledge of the evolution of f is sufficient to provide the value of any generic physical quantity. The evolution equation of f is obtained with the help of (3.10) and (2.2) [47]:

\left( \frac{\partial}{\partial t} + \frac{p}{m}\cdot\frac{\partial}{\partial r} + F(r)\cdot\frac{\partial}{\partial p} \right) f(r, p, t) = \left( \frac{\partial f}{\partial t} \right)_c     (3.15)

This is the Boltzmann equation previously formulated from a phenomenological point of view. The left-hand side of (3.15) is the total time derivative of the function f: f remains constant along the trajectories if there is no interaction with the environment, such as scattering, i.e., if (∂f/∂t)_c = 0. Alternatively, particles are redistributed between the trajectories, and the right-hand side of (3.15) accounts for the net change of the distribution function. In the next section we introduce the quantum counterpart of Eqs. (3.12), (3.14), and of the Boltzmann equation.

3.2 Quantum Transport: Wigner Equation

Nowadays quantum mechanics enjoys several alternative formulations. Historically the first theories are the matrix mechanics of Heisenberg [48] and Schrödinger's wave mechanics [49], which were generalized by Paul Dirac [50] and John von Neumann [51] into an abstract operator formalism in terms of Hilbert spaces. Physical quantities are represented by Hermitian operators, while the space vectors are associated with states of the physical system. At the same time Eugene Wigner [30] suggested quantum mechanics in phase space, where both observables and states are represented by functions of the phase space coordinates. The Weyl transform [35] establishes the correspondence between the operators in Hilbert space and the functions of phase space. An advantage of the Wigner picture is that the main notions of statistical mechanics are retained. In particular, the procedure for obtaining the mean value of a physical quantity involves concepts and expressions given by (3.12) and (3.13). Historically the Wigner theory was developed on top of the operator mechanics [30, 32]. This raises the question whether the Wigner theory is an alternative, equivalent formulation of quantum mechanics. What distinguishes classical from quantum states in the phase space? Any nonnegative, integrable, and suitably normalized function in the phase space is a possible classical distribution function. However, how does one identify the phase space functions which are possible quantum states? The answer is given by the inverse approach, which offers an independent formulation of the Wigner theory [31, 52]. The operator mechanics can then be derived from the latter, which completes the proof of the logical equivalence of the two theories and classifies the phase space functions as 'nonquantum' or quantum, the latter corresponding to pure and mixed states (discussed in Sect. 3.2.1).
For example, a pure state function must be real, square-integrable, and must satisfy a particular partial differential equation. It is shown that the space of such functions is isomorphic with respect to the quantum evolution and thus is equivalent to the Hilbert space of the operator mechanics. We follow the historical way of derivation of the Wigner theory by first recalling the postulates of the operator mechanics.


3.2.1 Operator Mechanics

The high level of mathematical abstraction of the operator mechanics is linked to the physical reality by associating Hermitian operators Â to the physical observables:

\hat{A} |\phi_n\rangle = a_n |\phi_n\rangle, \qquad \langle \phi_n | \phi_m \rangle = \delta_{mn}, \qquad \sum_n |\phi_n\rangle\langle\phi_n| = \hat{1}     (3.16)

The real eigenvalues characterizing such operators correspond to the possible values which the corresponding physical quantity can have. Furthermore, the span of the system of orthonormal eigenvectors defines the Hilbert space H of the possible states |Ψ_t⟩ ∈ H. The evolution equation of |Ψ_t⟩ is postulated with the help of the operator Ĥ corresponding to the Hamilton function H (the rules of such a correspondence are discussed below):

\hat{H} |\Psi_t\rangle = i\hbar \frac{\partial |\Psi_t\rangle}{\partial t}, \qquad \langle \Psi_t | \Psi_t \rangle = 1, \qquad |\Psi_t\rangle = \sum_n c_n(t) |\phi_n\rangle     (3.17)

The knowledge of |Ψ_t⟩ provides the complete information about the physical system. It is said that the system is in a pure state. The state can be expanded in the basis of the operator corresponding to any of the physical observables A. It can also be shown that the scalar product ⟨Ψ_t|Ψ_t⟩ is conserved during the evolution, which is called the 'law of probability conservation'. Certainly, the most important question in this approach is how to obtain the operators corresponding to the physical quantities. The first step in addressing this question is the correspondence principle, discussed in Appendix A.1. The classical position and momentum are associated with the Hermitian operators r̂ and p̂, which obey the quantum analog of the Poisson bracket, the commutator [·, ·]_−:

r \to \hat{r}, \qquad p \to \hat{p}, \qquad [\hat{r}_i, \hat{p}_j]_- = \hat{r}_i \hat{p}_j - \hat{p}_j \hat{r}_i = i\hbar\,\delta_{ij}     (3.18)

As these operators do not commute, the description of a physical system can be provided by either of their eigenbasis systems, but not by both of them at the same time. The joint precise measurement of the position and momentum is not possible according to the Heisenberg relations following from (3.18). Thus the coordinate and momentum variables of a particle cannot be used to define a quantum phase space where the particle occupies a definite point. The quantum phase space has a different meaning with respect to a physical interpretation, as we will show. In a coordinate representation, one obtains with the help of (3.16) and (3.18)

\hat{r} |r\rangle = r |r\rangle, \qquad \int dr\, |r\rangle\langle r| = \hat{1}, \qquad \hat{p} = -i\hbar \frac{\partial}{\partial r}     (3.19)


The expectation value of a generic physical quantity A in a given state is

\langle A \rangle(t) = \langle \Psi_t | \hat{A} | \Psi_t \rangle = \int dr\, \langle \Psi_t | r \rangle \langle r | \hat{A} | \Psi_t \rangle     (3.20)

At first glance it looks like 'half' of the phase space—either the position or the momentum subspace—is sufficient to describe the physical system. However, it appears that (3.20) involves two momentum or position variables. Indeed, with the help of (3.16) and (3.19) one obtains

\langle r | \hat{A} | \Psi_t \rangle = \int dr' \sum_n a_n \langle r | \phi_n \rangle \langle \phi_n | r' \rangle \langle r' | \Psi_t \rangle = \int dr'\, \alpha(r, r')\, \Psi_t(r'),     (3.21)

where Ψ_t(r) = ⟨r|Ψ_t⟩. A replacement in (3.20) yields:

\langle A \rangle(t) = \int dr \int dr'\, \alpha(r, r')\, \rho_t(r', r) = \mathrm{Tr}(\hat{\rho}_t \hat{A})     (3.22)

The quantity ρ_t (ρ̂_t) is called density matrix (operator):

\rho_t(r, r') = \Psi_t^*(r')\, \Psi_t(r) = \langle r | \Psi_t \rangle \langle \Psi_t | r' \rangle = \langle r | \hat{\rho}_t | r' \rangle, \qquad \hat{\rho}_t = \sum_{m,n} c_m^*(t)\, c_n(t)\, |\phi_n\rangle\langle\phi_m|     (3.23)

The relation (3.22) is universal in the sense that it does not depend on the choice of the basis: the physical mean value is given by the trace with the density operator. Direct calculations show that, if |Ψ_t⟩ is a solution of (3.17), it follows that

i\hbar \frac{d\hat{\rho}_t}{dt} = [\hat{H}, \hat{\rho}_t]_-, \qquad \mathrm{Tr}(\hat{\rho}_t) = 1     (3.24)

The von Neumann equation (3.24) has a fundamental role in quantum statistics. It generalizes the description to cases where the information about the system is not full, but statistically distributed. This is the case, for example, when a system is open, that is, it interacts with the rest of the universe. It is said in such cases that the system state is mixed. For a mixed state there is no solution |Ψ_t⟩ of Eq. (3.17); postulated for such cases is the von Neumann equation (3.24). As in the classical case, the time dependence can be carried by the density operator, or by the operators of the dynamical quantities.


3.2.2 Quantum Mechanics in Phase Space

Equation (3.22) has the general structure of (A.11), which becomes particularly obvious if one of the spatial variables is replaced by a momentum variable. From this observation it follows that a proper change of variables will transform the density matrix into a quantum analog of the distribution function. For this we need to analyze the correspondence between the physical quantities and the operators. In principle, Â can be obtained with the help of (3.18) from the classical phase space function A(r, p) corresponding to the observable A. One can use the Taylor expansion

A(r, p) = \sum_{i,j} b_{i,j}\, r^i p^j \qquad \to \qquad \hat{A} = A(\hat{r}, \hat{p}) = \sum_{i,j} b_{i,j}\, \hat{r}^i \hat{p}^j     (3.25)

to establish the correspondence. For the Hamiltonian of a particle in a potential field, H(r, p) = p²/2m + V(r), the procedure leads to a unique operator expression. However, this is not the situation in the general case, as the operators p̂ and r̂ do not commute. In this way non-Hermitian operators can occur. Second, even for Hermitian operators the correspondence is not unique. For example, the two equivalent expressions for the function A(x, p_x)

A_1 = p_x x^2 p_x = A_2 = \frac{1}{2}\left( p_x^2 x^2 + x^2 p_x^2 \right)     (3.26)

give rise to the operators

A_1 \to \hat{A}_1 = \hat{p}_x \hat{x}^2 \hat{p}_x, \qquad A_2 \to \hat{A}_2 = \frac{1}{2}\left( \hat{p}_x^2 \hat{x}^2 + \hat{x}^2 \hat{p}_x^2 \right),     (3.27)

which differ by ħ², as Â₁ = Â₂ + ħ². In general, different operators are associated with a particular classical function, which shows that (3.18) is not sufficient to establish a unique correspondence between A and Â. One needs a rule which removes this ambiguity. We use the fact that a function f(r, p) can be obtained from a generating function F(s, q) = e^{i(sr+qp)} as follows:

f(r, p) = f\!\left( \frac{1}{i}\nabla_s, \frac{1}{i}\nabla_q \right) F(s, q)\Big|_{s,q=0} = \int \frac{ds\, dq\, dl\, dm}{(2\pi)^6}\, f(l, m)\, e^{-i(ls+mq)}\, F(s, q)     (3.28)

Different operator formulations of F are possible, e.g.,

\hat{F}_1 = e^{i s \hat{r}}\, e^{i q \hat{p}}, \qquad \hat{F}_2 = e^{i q \hat{p}}\, e^{i s \hat{r}}, \qquad \hat{F}_3 = e^{i(s \hat{r} + q \hat{p})},     (3.29)

which are called standard ordering, where the position operators precede the momentum ones, the inverse anti-standard ordering, and the Weyl ordering, where


the position and momentum operators appear in fully symmetric combinations in the terms of the expansion of the last exponent in (3.29). Any of these F̂ operators can be used to define the correspondence. It must be noted that, once postulated, the rule has to be consistently applied in (3.20) to formulate the correspondence between dynamical operators and dynamical functions. The Wigner theory is based on F̂₃, which bears many similarities to the characteristic function of a probability distribution [52]. The Weyl transform is defined as follows:

A(r, p) = W(\hat{A}(\hat{r}, \hat{p})) = \frac{\hbar^3}{(2\pi)^3} \int ds\, dq\, \mathrm{Tr}\!\left( \hat{A}(\hat{r}, \hat{p})\, e^{i(s\hat{r} + q\hat{p})} \right) e^{-i(sr + qp)}     (3.30)

Equivalently, it follows that

\hat{A} = A(\hat{r}, \hat{p}) = \int ds\, dq\, \beta(s, q)\, e^{i(s\hat{r} + q\hat{p})}     (3.31)

The function β is linked to A via the Fourier transform:

A(r, p) = \int ds\, dq\, \beta(s, q)\, e^{i(sr + qp)}, \qquad \beta(s, q) = \frac{1}{(2\pi)^6} \int dr\, dp\, A(r, p)\, e^{-i(sr + qp)}     (3.32)

To summarize: the physical quantities of classical mechanics can be represented in many equivalent ways as phase space functions of coordinates and momenta. However, if the latter are replaced by operators according to (3.18), any particular formulation gives rise to a different operator. This ambiguity is avoided by postulating a correspondence rule. There could be different rules; however, among them we focus on the Weyl transform, where position and momentum variables or operators are ordered in a fully symmetric way.

3.2.3 Derivation of the Wigner Equation

The Wigner function corresponds via the Weyl transform to the density operator. The coordinate representation of the latter, the density matrix, obeys the von Neumann equation:

i\hbar \frac{\partial \rho(r, r', t)}{\partial t} = \langle r | [\hat{H}, \hat{\rho}_t]_- | r' \rangle = \left( -\frac{\hbar^2}{2m}\left( \frac{\partial^2}{\partial r^2} - \frac{\partial^2}{\partial r'^2} \right) + V(r) - V(r') \right) \rho(r, r', t)     (3.33)

After a center-of-mass transform r1 = (r + r')/2, r2 = r − r' the equation becomes

\frac{\partial \rho(r_1 + r_2/2, r_1 - r_2/2, t)}{\partial t} = \frac{1}{i\hbar} \left( -\frac{\hbar^2}{m} \frac{\partial^2}{\partial r_1 \partial r_2} + V(r_1 + r_2/2) - V(r_1 - r_2/2) \right) \rho(r_1 + r_2/2, r_1 - r_2/2, t)     (3.34)

As shown in Appendix A.2, the Weyl transform of the density operator gives rise to the following Fourier transform with respect to r2 of the density matrix:

f_w(r_1, p, t) = \frac{1}{(2\pi\hbar)^3} \int dr_2\, \rho(r_1 + r_2/2, r_1 - r_2/2, t)\, e^{-i r_2 \cdot p/\hbar}     (3.35)

It is important to note that the coordinate r1 and the momentum p are independent variables, corresponding to commuting operators. Indeed, the adjoint of p with respect to the Fourier relation is r2. Thus r1 and p are compatible quantities and can define a phase space. After the application of the Fourier transform to the right-hand side of (3.34) one obtains two terms, written with the help of the abbreviation ρ(+, −, t) = ρ(r1 + r2/2, r1 − r2/2, t) as follows:

I = -\frac{1}{i\hbar} \frac{\hbar^2}{m} \frac{1}{(2\pi\hbar)^3} \int dr_2\, e^{-i r_2 \cdot p/\hbar}\, \frac{\partial^2 \rho(+,-,t)}{\partial r_1 \partial r_2} = -\frac{1}{m}\, p \cdot \frac{\partial}{\partial r_1} \frac{1}{(2\pi\hbar)^3} \int dr_2\, e^{-i r_2 \cdot p/\hbar}\, \rho(+,-,t) = -\frac{1}{m}\, p \cdot \frac{\partial f_w(r_1, p, t)}{\partial r_1}

The integration by parts uses the property ρ → 0 for r2 → ±∞.

II = \frac{1}{i\hbar(2\pi\hbar)^3} \int dr_2\, e^{-i r_2 \cdot p/\hbar} \left( V(r_1 + r_2/2) - V(r_1 - r_2/2) \right) \rho(+,-,t) = \frac{1}{i\hbar(2\pi\hbar)^3} \int dr_2 \int dr'\, \delta(r_2 - r')\, e^{-i r_2 \cdot p/\hbar} \left( V(r_1 + r_2/2) - V(r_1 - r_2/2) \right) \rho(r_1 + r'/2, r_1 - r'/2, t)

The delta function is replaced by the integral

\delta(r_2 - r') = \frac{1}{(2\pi\hbar)^3} \int dp'\, e^{i(r_2 - r') \cdot p'/\hbar}     (3.36)


so that

II = \frac{1}{i\hbar(2\pi\hbar)^3} \int dp' \int dr_2\, e^{i r_2 \cdot (p' - p)/\hbar} \left( V(r_1 + r_2/2) - V(r_1 - r_2/2) \right) \times \frac{1}{(2\pi\hbar)^3} \int dr'\, e^{-i r' \cdot p'/\hbar}\, \rho(r_1 + r'/2, r_1 - r'/2, t) = \int dp'\, V_w(r_1, p' - p)\, f_w(r_1, p', t)

In this way we obtain the Wigner equation

\frac{\partial f_w(r, p, t)}{\partial t} + \frac{p}{m} \cdot \frac{\partial f_w(r, p, t)}{\partial r} = \int dp'\, V_w(r, p' - p)\, f_w(r, p', t),     (3.37)

where V_w is the Wigner potential:

V_w(r, p) = \frac{1}{i\hbar(2\pi\hbar)^3} \int dr'\, e^{-i r' \cdot p/\hbar} \left( V(r - r'/2) - V(r + r'/2) \right)     (3.38)

The operations leading to (3.37) can be inverted, so that the equation is equivalent to (3.33). The latter is, however, obtained from the evolution problem (posed by an initial condition), by taking the difference of the corresponding Schrödinger equation and its complex conjugate version. For stationary (eigenvalue) problems the difference does not carry meaningful physical information, and the corresponding sum becomes relevant. Thus, (3.37) is not sufficient to provide a complete quantum mechanical description in the phase space. One has to include the eigenvalue problem into the description.

3.2.4 Properties of the Wigner Equation

In the case of evolution of a pure state, the Wigner and the Schrödinger equations are equivalent. Indeed, from Ψ_t one can obtain ρ and thus f_w. The opposite is also true: it can be shown that Ψ_t can be obtained from f_w up to a phase factor [52]. A comparison with (3.15) allows to identify in the left-hand side of the Wigner equation the zero-field Liouville operator. The Wigner potential is a real quantity, V_w = V_w^*, and the same holds for f_w, as follows from the corresponding definitions. The conservation of the probability can also be shown by direct calculations:

\int dr \int dp\, f_w(r, p, t) = \int dr \int dr_2\, \rho(r + r_2/2, r - r_2/2, t)\, \delta(r_2) = \int dr\, \langle r | \hat{\rho}_t | r \rangle = 1     (3.39)


The marginal distributions, obtained after integration over the alternative variable, have the meaning of probability densities with respect to p and r:

\int dp\, f_w(r, p, t) = |\Psi_t(r)|^2, \qquad \int dr\, f_w(r, p, t) = |\tilde{\Psi}_t(p)|^2     (3.40)

The most important property of the Wigner function, derived in Appendix A.2, is that the mean value ⟨A⟩(t) of a generic physical quantity is obtained by the classical expression

\langle A \rangle(t) = \int dr \int dp\, f_w(r, p, t)\, A(r, p),     (3.41)

where A(r, p) is the classical function (3.32). We conclude that the Wigner formalism provides to a large extent the pursued similarities with the classical statistical mechanics. Equation (3.39) corresponds to the second equation of (3.12), and the Wigner function is real valued. The left-hand sides of the classical and quantum transport equations are given by the Liouville operator. Equation (3.41) resembles the formula for the calculation of the expectation value of a random variable, considered in Appendix A.3, (A.11). The classical and quantum pictures become very close. Nevertheless there are fundamental differences. The Wigner function can take negative values and thus cannot be a probability density, nor can it be interpreted as a distribution of particles with certain momentum and spatial coordinates. Actually, the Wigner function can take nonzero values in regions where the carrier concentration is zero. As follows from (3.40), a physical interpretation is possible only after integration.

3.2.5 Classical Limit of the Wigner Equation

One of the most important heuristic aspects of the Wigner formulation is that the classical limit of (3.37) provides physical insight about the differences between classical and quantum evolution. Assume that the potential V is at most a linear or quadratic function in the region where f_w ≠ 0, so that only the linear term survives in the difference entering (3.38):

V\!\left( r \pm \frac{r'}{2} \right) = V(r) \pm \frac{r'}{2} \cdot \frac{\partial V(r)}{\partial r} + \ldots = V(r) \mp \frac{r'}{2} \cdot F(r) + \ldots     (3.42)

The Wigner potential (3.38) becomes

V_w(r, p) = \frac{1}{i\hbar(2\pi\hbar)^3} \int dr'\, e^{-i r' \cdot p/\hbar}\, F(r) \cdot r'     (3.43)


Then the right-hand side of (3.37) can be transformed as follows:

\int dp'\, V_w(r, p' - p)\, f_w(r, p', t) = \frac{1}{i\hbar(2\pi\hbar)^3} \int dp' \int dr'\, e^{-i r' \cdot (p' - p)/\hbar}\, F(r) \cdot r'\, f_w(r, p', t) = -\frac{F(r)}{(2\pi\hbar)^3} \cdot \frac{\partial}{\partial p} \int dp' \int dr'\, e^{-i r' \cdot (p' - p)/\hbar}\, f_w(r, p', t) = -F(r) \cdot \frac{\partial f_w(r, p, t)}{\partial p},     (3.44)

where we have used

r'\, e^{-i r' \cdot (p' - p)/\hbar} = -i\hbar\, \frac{\partial}{\partial p}\, e^{-i r' \cdot (p' - p)/\hbar}     (3.45)

Thereby the ballistic Boltzmann equation is obtained:

\frac{\partial f_w(r, p, t)}{\partial t} + \frac{p}{m} \cdot \frac{\partial f_w(r, p, t)}{\partial r} + F(r) \cdot \frac{\partial f_w(r, p, t)}{\partial p} = 0     (3.46)

Now we consider an initial condition given by a minimum uncertainty wave packet. The latter is usually associated with a free particle in most quantum mechanics textbooks. The corresponding Wigner function has a Gaussian shape in both position and momentum [39] and can equally well be interpreted as an initial classical distribution of carriers. Thus, according to (3.46), the quantum evolution is governed by the same force as the classical evolution; that is, for 'slowly' varying potentials there is only a 'small' difference between the quantum and classical descriptions, and f_w remains nonnegative. On the contrary, it has been observed that near rapid potential oscillations the evolving quantum state acquires negative values, which are associated with interference effects.

Chapter 4

Monte Carlo Computing

We summarize the basic elements of the Monte Carlo method for solving integral equations. The method extensively employs concepts and notions of probability theory and statistics. They are marked in italic and defined in the appendix, in case the reader wants to refresh their knowledge of the subject while reading this section. Where possible, we tried to synchronize the notation used in probability theory with that used in the Monte Carlo approaches presented here.

4.1 The Monte Carlo Method for Solving Integrals

The expectation value E{x} = Ex of a random variable x, which takes values x(Q) with a probability density p_x(Q), is given by the integral

E x = \int dQ\, p_x(Q)\, x(Q),     (4.1)

where Q ∈ R^n can be a multi-dimensional point or vector. The fundamental Monte Carlo method for evaluation of Ex is to carry out N independent experiments, or applications of the probability density p_x, which generate N random points Q_1, ..., Q_N, representing an N-dimensional statistical sample of x. The sample mean η is related to the expectation value Ex by

E x \simeq \eta = \frac{1}{N} \sum_{i=1}^{N} x(Q_i), \qquad P\left\{ |E x - \eta| \le \frac{3\sigma_x}{\sqrt{N}} \right\} \simeq 0.997,     (4.2)

with a precision which depends on the number of independent applications N and the standard deviation σ_x of the random variable. According to the three-σ rule, the probability P for η to belong to the interval 3σ_x/√N around Ex is very high (0.997). In this way, the idea of the Monte Carlo method for evaluation of integrals is to present the considered integral as an expectation value

I = \int f(Q)\, dQ = \int \frac{f(Q)}{p(Q)}\, p(Q)\, dQ, \qquad p(Q) \ge 0, \qquad \int p(Q)\, dQ = 1     (4.3)

of the random variable x = f/p. The probability density p can be arbitrary, but must be admissible for f, as discussed in Sect. 4.3. Various choices of p give rise to different random variables. All of them have the desired expectation value, i.e., the value of the integral I, but different variances. As (4.2) follows directly from the Central Limit Theorem, the integral I is evaluated with a precision which depends on the variance. Section 4.3 shows that the variance is minimized if p is proportional to |f|. We continue with the following convention: the assertion 'p(Q) denotes a probability density for selecting Q' is equivalent to 'p(Q)dQ is the probability for selecting a point in the volume dQ around Q'. When possible, the latter assertion will be replaced by the former with 'density' omitted, in order to simplify the sentences.

4.2 The Monte Carlo Method for Solving Integral Equations

This idea is generalized for solving integral equations. We consider a Fredholm integral equation of the second kind with a kernel K and a free term f0, which can be written in two ways:

f(Q) = \int dQ'\, f(Q')\, K(Q', Q) + f_0(Q) = \int dQ'\, K'(Q, Q')\, f(Q') + f_0(Q)     (4.4)

The iterative replacement of the equation into itself allows to present the solution as a Neumann series f(Q) = Σ_i f_i(Q). The terms of the series are given by the consecutive iterations of the kernel on the free term. Each such term is a multidimensional integral of the type (4.3) and can be evaluated by a Monte Carlo method, e.g.,

f_2(Q) = \int dQ_1 \int dQ_2\, P(Q, Q_1)\, P(Q_1, Q_2)\, \frac{K'(Q, Q_1)}{P(Q, Q_1)}\, \frac{K'(Q_1, Q_2)}{P(Q_1, Q_2)}\, f_0(Q_2)     (4.5)


Here the function P(Q, Q'), called transition probability,¹ is used to generate a multi-dimensional point Q' under the condition that the point Q is given. The probability p is obtained as the product of the consecutive applications of P. In this way the terms in the Neumann series can be consecutively evaluated, which offers a way to estimate f directly. Details are provided by considering the general problem of evaluating the integral of the product of f with a given function A:

\langle A \rangle = (A, f) = \int dQ\, A(Q)\, f(Q)     (4.6)

The task of finding the solution at a given point Q is a particular case of (4.6). This problem can be reformulated by considering an equation adjoint to (4.4):

g(Q') = \int dQ\, K(Q', Q)\, g(Q) + A(Q')     (4.7)

If (4.4) is multiplied by g, (4.7) is multiplied by f, and the obtained equations are integrated and compared, the following result is obtained:

\langle A \rangle = \int dQ'\, A(Q')\, f(Q') = \int dQ\, f_0(Q)\, g(Q) = \sum_i \langle A \rangle_i     (4.8)

The last term ⟨A⟩ = Σ_i ⟨A⟩_i is derived with the help of the Neumann expansion of (4.7):

g(Q') = A(Q') + \sum_{i=1}^{\infty} \int dQ\, K^i(Q', Q)\, A(Q), \qquad K^i(Q', Q) = \int dQ_1\, K(Q', Q_1)\, K^{i-1}(Q_1, Q)     (4.9)

For example, the integrand of ⟨A⟩₂ is a product of f0 with two iterations of the kernel:

\langle A \rangle_2 = \int dQ' \int dQ_1 \int dQ_2\, P_0(Q')\, P(Q', Q_1)\, P(Q_1, Q_2) \times \frac{f_0(Q')\, K(Q', Q_1)\, K(Q_1, Q_2)\, A(Q_2)}{P_0(Q')\, P(Q', Q_1)\, P(Q_1, Q_2)}     (4.10)

Equation (4.10) has been augmented with the help of two probability densities P0 and P, in analogy to Eqs. (4.3) and (4.5). The corresponding densities are applied to construct the so-called numerical trajectories, consisting of consecutive points chosen in the following way: (1) P0(Q') is used to choose the initial trajectory point Q'; P0 is assumed admissible for f0. (2) The transition probability P(Q', Q) uses the point Q' chosen at the previous step to generate the next point Q. P satisfies the condition ∫ dQ P(Q', Q) = 1 ∀Q' and is admissible with respect to the kernel K. The random variable associated to ⟨A⟩₂ is a product of the multipliers f0/P0 and K/P, evaluated on the trajectory points Q0 → Q1 → Q2 obtained by P0 → P → P. The sample mean of N independent realizations of this random variable, evaluated on the trajectories (Q0 → Q1 → Q2)_i, i = 1, ..., N, gives the estimate for ⟨A⟩₂. The iterative character of the multi-dimensional integral (4.10) allows to generalize the procedure for trajectory construction, which can be continued by further applications of P. In this way a single trajectory Q0 → Q1 → Q2 → Q3 → ... is a Markov chain and can be used to evaluate the consecutive terms in (4.8), which is equivalent to estimating ⟨A⟩ directly. Conditions for the convergence of the Neumann series as well as conditions for termination of the trajectory are given in [53]. To any such trajectory a numerical (or virtual) particle can be associated, which often allows to analyze and provide a physical interpretation of the solved equation.

¹ In the spirit of our convention.

4.3 Monte Carlo Integration and Variance Analysis

Here we consider the numerical aspects of the Monte Carlo integration of a given absolutely integrable, multi-dimensional function φ:

I = \int_{-\infty}^{\infty} \phi(x)\, dx     (4.11)

Suppose that φ is a product of two functions, φ = ψp, that p is nonnegative, and that ∫ p(x) dx = 1 holds. Thus p has the characteristics of a density function. As discussed, we can construct an experiment having a random variable x with the density p(x). Then ψ(x) has the meaning of another random variable of this experiment, defined by the function ψ. According to (4.1), the integral I appears to be the expected value of ψ(x). Furthermore, as discussed, this value is estimated by the sample mean of N independent identically distributed variables, the more closely, the larger N is. The following cases can be considered:

C1 The function φ is a product of two functions φ(x) = ψ(x)p(x), where p(x) is nonnegative and ∫_A p(x) dx = 1. Then I has the meaning of the expectation value of a random variable which takes values ψ(x) with probability p(x) in the domain A. The simplest algorithm for calculating I is very short: (1) with probability p(x) generate N random points x_i; (2) calculate the estimator μ_i = ψ(x_i); (3) calculate the sample mean Σ_{i=1}^N μ_i / N. According to the C.L.T. it is an estimate for the value of I.


C2 The function φ is a product of two functions φ(x) = ψ(x)p(x), where p(x) is nonnegative but \int_A p(x)\, dx \ne 1.
C2.a If J = \int_A p(x)\, dx is known, the problem is reduced to the first case by using φ(x) = (Jψ(x))(p(x)/J). Now the probability function p(x)/J is normalized to unity in A.
C2.b When \int_A p(x)\, dx < 1, another possibility is to extend p to a domain B with A ⊂ B, so that J = \int_B p(x)\, dx = 1 holds. The problem is reduced to the first case by setting ψ(x) = 0 when x ∉ A, and generating points in B.
C3 We can choose an arbitrary probability function p(x) in A. Then we can write φ(x) = (φ(x)/p(x))p(x), which reduces the problem to the first case. The function p(x) must be admissible with respect to φ, namely the condition p(x) ≠ 0 wherever φ(x) ≠ 0 must be satisfied.

We consider the most general case C3. The density function is p(x) and the values of the random variable are given by the function φ(x)/p(x). To different functions p correspond different random variables. Thus there is a family of random variables having the same expected value I, but their variances are in general not equal. We are interested in the random variables having lower variance, which increases the precision of the Monte Carlo experiment. There is a theoretical result showing that, if p is proportional to |φ|, the variance is minimal. Indeed, if

I = \int_A \frac{\phi(x)}{p(x)}\, p(x)\, dx ,

then

\sigma^2 = \int_A \frac{\phi(x)^2}{p(x)}\, dx - I^2 .    (4.12)

Here the functions φ and p are supposed to be such that the integral defining σ exists. Suppose that p is proportional to |φ|. If

p(x) = \frac{|\phi(x)|}{\int_A |\phi(x)|\, dx} ,

then

\sigma_0^2 = \left( \int_A |\phi(x)|\, dx \right)^2 - I^2 .    (4.13)

Using the Schwarz inequality we can estimate that always σ₀ ≤ σ:

\left( \int_A |\phi(x)|\, dx \right)^2 = \left( \int_A \frac{|\phi(x)|}{\sqrt{p(x)}}\, \sqrt{p(x)}\, dx \right)^2 \le \int_A \frac{\phi(x)^2}{p(x)}\, dx \int_A p(x)\, dx .    (4.14)

Note that if φ is positive, σ₀ is zero.
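The effect of the choice of p on the variance can be seen on a toy integral (our choice, not from the text): φ(x) = 2x on (0, 1), so I = 1. Uniform sampling gives a variance of 1/3, while sampling with p ∝ |φ| gives a constant random variable and zero variance, in agreement with (4.13):

```python
import math
import random

rng = random.Random(1)
N = 50_000

# Toy target (an illustrative choice): I = int_0^1 phi(x) dx with phi(x) = 2x,
# so that I = 1 exactly.

# Naive sampling: p uniform on (0, 1); the random variable is phi(x)/p(x) = 2x.
naive = [2.0 * rng.random() for _ in range(N)]
mean_naive = sum(naive) / N
var_naive = sum((v - mean_naive) ** 2 for v in naive) / N   # about 1/3

# Optimal sampling, p proportional to |phi|: p(x) = 2x on (0, 1). Inversion of
# the cumulative distribution x^2 = r gives x = sqrt(r), and the random
# variable phi(x)/p(x) = 1 is constant: zero variance, as (4.13) predicts.
xs = [math.sqrt(1.0 - rng.random()) for _ in range(N)]      # x in (0, 1]
optimal = [2.0 * x / (2.0 * x) for x in xs]
mean_opt = sum(optimal) / N

print(mean_naive, var_naive, mean_opt)
```

Since φ is positive here, the optimal density reproduces I with a single sample; in realistic problems |φ| is not known in advance, and the art of variance reduction lies in approximating it.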

Part II

Stochastic Algorithms for Boltzmann Transport

Chapter 5

Homogeneous Transport: Empirical Approach

Homogeneous transport problems feature bulk materials and thus involve only half of the phase space, the subspace of the carrier momenta. Nevertheless, homogeneous simulations provide important semiconductor characteristics, such as the dependence of the physical quantities on time, applied field, carrier energy, temperature, and crystal orientation. Of basic importance are the quantities which serve as input parameters for the macroscopic models, such as the mobility and diffusion coefficients, which are tensors in anisotropic materials. For stationary homogeneous problems the momentum components remain the only arguments of the distribution function. Accordingly, the first Monte Carlo algorithms were for homogeneous simulations and aimed to provide inputs for the inhomogeneous macroscopic models. The Single-Particle algorithm was introduced by Kurosawa in 1966 as a promising approach for the analysis of hot carrier effects [54].

5.1 Single-Particle Algorithm

The algorithm employs the hypothesis of ergodicity of a stationary system, which allows one to replace the average over an ensemble of carriers with the time average over the evolving trajectory of a single particle. The evolution follows the processes of drift and scattering, whereby the probabilities for scattering are obtained from the scattering rates S_j. The index j indicates the alternative scattering mechanisms, including phonon emission and absorption and intervalley processes. The total probability for scattering of a carrier with an initial momentum p to a final (after-scattering) momentum p' is given by the sum

S(p, p') = \lambda(p)\, \frac{\sum_j S_j(p, p')}{\sum_j \lambda_j(p)} .    (5.1)

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_5


The probability

w(t) = \exp\left[ -\int_0^t \lambda(P(\tau))\, d\tau \right]    (5.2)

for a drift without scattering during the time t along the momentum component P of the trajectory is obtained directly from the fact that a fraction λ(p)dt of the carriers with momentum p at time 0 is scattered out (acquires a new momentum) within a time dt. Hence, if the carrier has a momentum p at time t₀, the probability density for scattering at a later time t, together with its cumulative probability, is:

w(t) = \lambda(P(t))\, e^{-\int_{t_0}^{t} \lambda(P(\tau))\, d\tau} , \qquad W(t) = \int_{t_0}^{t} w(t')\, dt' , \qquad W(\infty) = 1 .    (5.3)

We note that the evolution is forward in time. The backward-in-time evolution, obtained by an inversion of the arrow of time in (5.3) (so that −∞ < t < t₀), corresponds to the probabilities

w_-(t) = \lambda(P(t))\, e^{-\int_{t}^{t_0} \lambda(P(\tau))\, d\tau} , \qquad W_-(t) = \int_{t}^{t_0} w_-(t')\, dt' , \qquad W_-(-\infty) = 1 .    (5.4)
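For a constant total rate λ, the equation W(t) = r is solved analytically by t − t₀ = −ln(1 − r)/λ. The following sketch uses this to realize the drift-and-scatter evolution for a deliberately crude toy model (constant field, constant rate, q = m = 1, momentum reset to zero at each scattering — all assumptions made here, not the book's physical model), for which the time-averaged drift velocity is E/λ:

```python
import math
import random

# Minimal single-particle sketch (a toy model, not the book's band structure):
# constant field E, constant total scattering rate LAM, q = m = 1, and each
# scattering resets the momentum to zero. The drift velocity is then E/LAM.
rng = random.Random(5)
E, LAM = 1.0, 2.0
N_FLIGHTS = 200_000

total_time = 0.0
displacement = 0.0          # integral of v(t) dt, accumulated analytically
for _ in range(N_FLIGHTS):
    dt = -math.log(1.0 - rng.random()) / LAM  # solve W(t) = r for the flight
    # free flight from p = 0: p(t') = E t', so int v dt' = E dt^2 / 2
    displacement += 0.5 * E * dt * dt
    total_time += dt        # the scattering then resets p to zero

v_drift = displacement / total_time           # ergodic time average
print(v_drift)              # analytic value: E / LAM = 0.5
```

The per-flight contribution is accumulated analytically, so no time-step discretization error enters; only the statistical error of the finite trajectory remains.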

5.1.1 Single-Particle Trajectory

The single-particle trajectory is constructed with the help of the conditional probabilities S_j and w after the choice of the initial carrier momentum (usually the choice of the initial state is arbitrary). By denoting r a uniformly distributed random number in the interval (0, 1), we can formulate the algorithm for trajectory construction as follows:

Algorithm 5.1
1. Initialize the random variables corresponding to the carrier state (momentum) p₀ and the times t₀ and t. The initial values of the cumulative time T and of the physical quantity of interest A are set to 0. The task is to calculate the averaged value of A in a given domain Ω.
2. Generate a value for r and solve the equation W(t) = r to determine the scattering time t = t(r).
3. Calculate the before-scattering momentum P(t) with the help of the trajectory, initialized by p₀, t₀, and update the values

T = T + t - t_0 , \qquad t_0 = t , \qquad p = P(t) .    (5.5)


4. Calculate the contribution

\int_{t_0}^{t} A(p(t'))\, \theta_{\Omega}(p(t'))\, dt'    (5.6)

of this part of the trajectory to the averaged value ⟨A⟩_Ω.¹
5. Use (5.1) to find the after-scattering momentum p'. First generate r and select the scattering mechanism k from

\sum_{j=1}^{k} \lambda_j(p) \le \lambda(p)\, r
If a value t' > 0 is selected, a scattering event occurs. In the opposite case (t' ≤ 0), the scattering 'happens' before the initial time. The probability for such events is given by (6.8), which means that P2 is selected and the construction of the current trajectory ends. In conclusion, the iterative application of the transition density gives rise to a trajectory with a number of segments, related to the corresponding term in the Neumann expansion for f. The latter is chosen with a probability which is a component of the kernel of Eq. (6.6). In this way one trajectory gives an independent sample of the random variable ξ, which has a mean value equal to f. We note the difference between this random variable and the random variable ξ_n associated to a given term f_n. Both ξ and ξ_n are sampled by the same number of trajectories. However, ξ_n takes the value zero each time the number of segments of a sampling trajectory does not correspond to n. The steps of the algorithm are given below.

Algorithm 6.1
1. Initialize the values of p and t, where the distribution function will be evaluated, the number of trajectories N, and two variables μ = 0, ν = 1.
2. Select the value of the scattering time t' with the help of the probability w_-.
3. If t' > 0, compute the value P(t'), initialized by p, t. Apply the probability S(p', P(t'))/λ'(P(t')) to select a value for p'. Multiply ν by λ'(P(t'))/λ(P(t')). Update the values of p and t with p', t'.
4. If t' ≤ 0, compute the value P(0), initialized by p, t. Multiply ν by f₀(P(0)) and add the result to μ. The variables p and t are reset to the values acquired at the initialization Step 1.
5. If the number of trajectories is less than N, go to Step 2; else f is estimated by μ/N.
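The structure of Algorithm 6.1 can be illustrated on a drastically simplified model: a two-state master equation standing in for the Boltzmann equation, with no field, unit transition rates, and all carriers initially in state 0. Everything in this sketch (states, rates, and the in-scattering rate λ' computed as a sum) is an illustrative assumption; the analytic solution of the master equation serves as the reference:

```python
import math
import random

# Toy two-state scattering model (an illustrative assumption, not one of the
# book's physical models): S[(s1, s2)] is the rate for a transition s1 -> s2.
LAM = {0: 1.0, 1: 1.0}          # total out-scattering rates lambda(s)
S = {(0, 1): 1.0, (1, 0): 1.0}  # transition rates

def f0(s):
    """Initial condition: all carriers in state 0."""
    return 1.0 if s == 0 else 0.0

def backward_estimate(s, t, n_traj, rng):
    """Estimate f(s, t) following the steps of Algorithm 6.1."""
    mu = 0.0
    for _ in range(n_traj):
        nu, state, time = 1.0, s, t
        while True:
            # Step 2: backward scattering time sampled from w_-
            time_prev = time + math.log(rng.random()) / LAM[state]
            if time_prev <= 0.0:
                # Step 4: the trajectory reaches the initial condition
                mu += nu * f0(state)
                break
            # Step 3: choose the predecessor with probability S(s', s)/lam_in(s);
            # for two states the predecessor is simply the other state.
            lam_in = sum(r for (s1, s2), r in S.items() if s2 == state)
            nu *= lam_in / LAM[state]  # weight (equal to 1 in this toy model)
            state, time = 1 - state, time_prev
    return mu / n_traj

rng = random.Random(7)
est = backward_estimate(0, 1.0, 200_000, rng)
exact = 0.5 * (1.0 + math.exp(-2.0))  # analytic solution of the master equation
print(est, exact)
```

Here the weight λ'/λ equals one because the toy rates are symmetric; for asymmetric rates it is precisely the factor accumulated in ν at Step 3.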

6.3.2 Derivation of Empirical Algorithms

The derivation of the Ensemble algorithm, described in Sect. 5.2, follows the task of finding the mean number of carriers in the domain Ω at a given instant of time:

\langle f \rangle_{\Omega} = \int_{\Omega} f(p, t)\, dp = \int_0^{t} dt' \int dp_0\, f_0(p_0)\, e^{-\int_0^{t'} \lambda(P(y))\, dy}\, g(P(t'), t') ,    (6.9)


where P(τ) is initialized by p₀, 0, while the second equality corresponds to (6.6). The corresponding adjoint equation is

g(p', t') = \int_{t'}^{t} d\tau \int dp\, S(p', p)\, e^{-\int_{t'}^{\tau} \lambda(P(y))\, dy}\, g(P(\tau), \tau) + \delta(t - t')\, \theta_{\Omega}(p') ,    (6.10)

where the parametrization of the trajectory P(τ), initialized by p, t', is forward, or normal: τ > t'. The derivation of (6.10) and of the second equality in (6.9) will be discussed for the general inhomogeneous case further in this part. Equation (6.10) can be augmented by the factor θ(t − τ), which allows one to extend the time integration to infinity. The normalization of the kernel factors responsible for the free flight and scattering events involves the weight factor λ(p')/λ(P(τ)). Fortunately, this factor cancels between the consecutive iteration terms. This can be seen in the expression for ⟨f⟩_{Ω,2}, which can be written as:

\langle f \rangle_{\Omega,2}(t) = \int_0^{\infty} dt_0 \int_{t_0}^{\infty} dt_1 \int_{t_1}^{\infty} dt_2 \int dp_0 \int dp_1 \int dp_2\, \{ f_0(p_0) \} \times

\left\{ e^{-\int_0^{t_0} \lambda(P_0(y))\, dy}\, \lambda(P_0(t_0))\, \frac{S(P_0(t_0), p_1)}{\lambda(P_0(t_0))} \right\} \left\{ e^{-\int_{t_0}^{t_1} \lambda(P_1(y))\, dy}\, \lambda(P_1(t_1))\, \frac{S(P_1(t_1), p_2)}{\lambda(P_1(t_1))} \right\} \times

\left\{ e^{-\int_{t_1}^{t_2} \lambda(P_2(y))\, dy}\, \lambda(P_2(t_2)) \right\} \theta(t - t_1)\, \theta_{\Omega}(P_2(t))\, \theta(t_2 - t)    (6.11)

Hence, the initial probability is proportional to f₀, while the iteratively repeated transition probability is given by the second and third terms in curly brackets. The latter correspond to a drift until time t_{i-1}, followed by a scattering into a new momentum state p_i. The values p_i, t_{i-1} initialize the next drift trajectory P_i(t_i), where i = 0, 1, 2 and t_{-1} = 0. In the last row, the term in curly brackets can be augmented with the help of the identity

\int dp_3\, S(P_2(t_2), p_3)\, \lambda^{-1}(P_2(t_2)) = 1 .


Since the random variable given by the product of the three θ functions is independent of p₃, one obtains the transition probability for the selection of t₂. Such an independence reflects the causality principle: the instant t₂ is later than the time t of evaluation of ⟨f⟩_{Ω,2}. Indeed, the θ functions take care of the proper time ordering t₁ < t < t₂. These considerations hold for any term of the Neumann series. Hence a single trajectory samples one and only one such term and in this way represents an independent experiment for ⟨f⟩_Ω. We come to an important conclusion: if the initial condition f₀ corresponds to N carriers, and N trajectories are chosen, then the simulation comprised by the above initial and transition probabilities resembles the physical evolution of the system until time t. The domain indicator θ_Ω is unity if the trajectory is in Ω at time t, and zero otherwise. In this way, the algorithm 'counts' the particles which are in the desired domain. The chosen probabilities resemble those used by the Ensemble algorithm and are thus called natural probabilities. A choice of alternative probabilities would result in a weight factor of the random variable, updated with each subsequent iteration. Furthermore, the resulting modification of the natural process of evolution can be used for statistical enhancement [76].
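The 'counting' interpretation can be made concrete with a toy two-state model (an illustrative assumption, not one of the book's physical systems): particles start in state 0, scatter with unit rate, each scattering flips the state, and the domain Ω is state 0 itself. Simulating with the natural probabilities and counting at time t reproduces the analytic occupation:

```python
import math
import random

# Toy model (an illustrative assumption): two states, both scattering with
# rate LAM, each scattering flips the state; 'domain Omega' is state 0.
rng = random.Random(3)
LAM, T, N = 1.0, 1.0, 200_000

in_domain = 0
for _ in range(N):
    state, t = 0, 0.0
    while True:
        t += -math.log(1.0 - rng.random()) / LAM  # natural free-flight time
        if t >= T:
            break                                  # observation time reached
        state = 1 - state                          # natural scattering event
    in_domain += (state == 0)                      # theta_Omega at time T

est = in_domain / N
exact = 0.5 * (1.0 + math.exp(-2.0 * T))  # analytic occupation of state 0
print(est, exact)
```

Every trajectory follows the physical evolution of one carrier, and the estimator is simply the fraction of carriers found in Ω at the observation time.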

6.3.3 Featured Applications

The purpose of this section was to introduce, step by step, the concepts of the Iteration Approach [77]. The latter, as applied to general, inhomogeneous but stationary conditions, allows in particular the derivation of the inhomogeneous generalization of the considered homogeneous Single-Particle algorithm. Thus all details of the derivation of the inhomogeneous algorithm, presented in Chap. 8, can be reduced to rules for obtaining the homogeneous counterpart and are thus omitted here. Similarly, the Ensemble algorithm for general, inhomogeneous and time-dependent transport can be derived by applying the Iteration Approach. The former is generalized by the self-consistent, mixed problem with weights, derived in detail in Chap. 9. In the next chapter we continue with the analysis of homogeneous algorithms associated with the problem of small signal analysis.

Chapter 7

Small Signal Analysis

The analysis of the response of a carrier system to small changes in an applied electric field is an important task, which provides macroscopic parameters needed for the compact models used in circuit simulations. The differential response function depends on the frequency ω of the perturbation and the strength of the applied constant electric field E_s. The task is to find the response of the mean value ⟨A⟩ of a generic physical quantity A(p) to a small harmonic signal E₁e^{iωt} superimposed on E_s; in the framework of linear response theory, ⟨A⟩ oscillates with the same frequency ω. The complex amplitude A₁ of the oscillations around the stationary mean value A_s is linear with respect to E₁:

A_1(\omega) = K_A(\omega)\, E_1    (7.1)

Accordingly, the differential response function K_A is the gradient of A₁ with respect to E₁. Of particular interest is the tensor of the differential mobility K_v = μ(ω) of the velocity response

v_1(\omega) = \mu(\omega)\, E_1    (7.2)

The analysis in terms of frequencies can equivalently be performed in the time domain with the help of a Fourier transform. Furthermore, the response A₁(t) to an arbitrary time perturbation E₁(t) can be obtained from the response A_{im}(t) to an impulse (delta function) in time, as proved in Assert 7.1 below. In this way, the response to an impulse in time is the main task of the small signal analysis. Monte Carlo algorithms have been used in the field for more than four decades [78]. Alternative algorithms have been developed which utilize stationary or transient conditions and are thus based on the Single-Particle or Ensemble algorithms.



7.1 Empirical Approach

7.1.1 Stationary Algorithms

Two stationary algorithms are derived in the framework of the theory of correlation functions [79, 80].

Algorithm 7.1 A Single-Particle algorithm, developed under the condition of parallel E_s and E₁, presents the response function as a difference of the before- and after-scattering averages:

A_1(t) = K_A(t)\, E_1 = \frac{|E_1|}{|E_s|} \left( \frac{1}{T - t} \int_t^{T} \lambda(p(t' - t))\, A(p(t'))\, dt' - \frac{1}{\tau M} \sum_{i=1}^{M} A(p(t_i + t)) \right)    (7.3)

Here T is the simulation time, τ is the mean time between two consecutive scattering events, M is the number of scattering events, t_i is the time of the i-th scattering event, and p(t') is the momentum at time t'. For example, if A = v_{E₁} is the velocity component parallel to E₁, then K_A(t) = μ_{E₁}(t) is the longitudinal differential mobility. Small values of the stationary field are beyond the applicability of the equation, which becomes undefined for E_s = 0. In the case of a general orientation of the two fields, the response function can be obtained by a direct evaluation of the gradient of the distribution function along the single-particle trajectory [81].

An alternative algorithm, suggested by Price [78], is again stationary but relies on an ensemble of particles. It allows one to consider a general orientation of the two fields and to calculate the tensor of the differential mobility.

Algorithm 7.2 The ensemble of 2N particles is simulated until the system reaches the stationary state. Then it is divided into two equal parts, called P (plus) and M (minus) subensembles. The momenta of the P particles are shifted by Δp_α in the direction of the desired α axis, whereas the momenta of the M particles are shifted by −Δp_α. The clock is reset and the simulation continues. The differential mobility component is obtained from the difference of the P and M mean values of the velocity:

\mu_{\beta\alpha}(t) = \frac{-q}{2\Delta p_{\alpha}} \left( \langle v_{\beta} \rangle_P(t) - \langle v_{\beta} \rangle_M(t) \right)    (7.4)

Here v_β is the component of the velocity along the β axis.


7.1.2 Time Dependent Algorithms

In the time dependent approach, the initial system is in a steady state driven by the stationary field.

Algorithm 7.3 At time 0, the field E₁ is switched on, so that the total field E becomes time dependent. The following relations hold:

E(t) = E_s + E_1(t) , \qquad f(p, t) = f_s(p) + f_1(p, t)    (7.5)

The mean value of a physical quantity is then:

\langle A \rangle(t) = \langle A \rangle_s + \langle A \rangle_1(t) = \int A(p)\, f_s(p)\, dp + \int A(p)\, f_1(p, t)\, dp    (7.6)

The stationary distribution function f_s is normalized to unity, ⟨f_s⟩ = 1. According to Eq. (7.6), it follows that

\langle f \rangle = \langle f_s + f_1 \rangle = 1 \;\Rightarrow\; \langle f_1 \rangle = 0 ,    (7.7)

that is, the mean value of the response function is zero. Such a phenomenological approach cannot be applied directly in the case when E₁(t) corresponds to an impulse in time. Instead, a step in time is applied, corresponding to the Heaviside function E₁(t) = E_{step}θ(t). The impulse response A_{imp}(t) is then obtained from the time derivative of A_{step}(t). The perturbation is applied to the stationary ensemble at time 0, when the field becomes E_s + E_{step}. It is again stationary for positive times; however, the system relaxes to the new steady state, which gives rise to a time evolution of A_{step}(t). In this case the property (7.7) is revealed by the fact that the number of carriers is conserved during the evolution. The algorithm demands a high numerical accuracy of the calculated physical averages because of the subsequent step of numerical differentiation. It becomes particularly unstable for small values of E_{step} because of the growing stochastic fluctuations [82].

Algorithm 7.4 This problem is avoided in the deterministic algorithm suggested by Vaissiere [83] for solving the linearized equation for f₁. The equation is obtained with the help of (7.5) under the assumption that the quantities with index 1 are very small. A replacement in the Boltzmann equation gives rise to consecutive approximations. The zeroth order is the homogeneous stationary Boltzmann equation containing only s terms. The first-order equation is linear with respect to E₁ and contains f_s-dependent terms:

\frac{\partial f_1(p, t)}{\partial t} + qE_s \cdot \nabla f_1(p, t) = \int S(p', p)\, f_1(p')\, dp' - \lambda(p)\, f_1(p) - qE_1(t) \cdot \nabla f_s(p)    (7.8)


The deterministic algorithm is developed for the case of a perturbation step in time. Since Eq. (7.8) is an approximate equation, it remains unclear whether the condition (7.7) is fulfilled. In the next section we show that the application of the Iteration Approach to (7.8) allows one to treat directly the case of an impulse in time and, in particular, to prove (7.7) and to derive the stationary Algorithms 7.1 and 7.2.

7.2 Iteration Approach: Stochastic Model

Theorem 7.1 The solution of (7.8) in the case of an impulse in the field,

E_1(t) = E_{im}\, \delta(t) ,    (7.9)

is expressed as a difference of the solutions of the Boltzmann equation for the evolution of two initial conditions G^{\pm}(p) \ge 0, such that

G^{+} - G^{-} = -qE_{im} \cdot \nabla f_s(p) .    (7.10)

Moreover, the condition (7.7) is satisfied at all instants of the evolution.

In the following we present the proof. The existence of the δ-function suggests the use of the integral form of (7.8). For this purpose, the stationary trajectory initialized by p, t,

P(t') = p - qE_s (t - t') ,    (7.11)

is used to rewrite (7.8) along the trajectory as an ordinary differential equation in time, of the generic form

\left( \frac{d}{dt} + \lambda(t) \right) f(t) = g(t) , \qquad f(t) = \int_{t_0}^{t} dt'\, g(t')\, e^{-\int_{t'}^{t} \lambda(y)\, dy} + f(t_0)\, e^{-\int_{t_0}^{t} \lambda(y)\, dy} .    (7.12)

The second equation is the corresponding solution, provided by the initial condition f(t₀) at time t₀. We can use the fact that for negative times f₁ = 0, and choose an infinitesimal time t₀ = 0⁻ just before the application of the impulse. The following integral equation is obtained:

f_{im}(p, t) = \int_0^{t} dt' \int dp'\, f_{im}(p', t')\, S(p', P(t'))\, e^{-\int_{t'}^{t} \lambda(P(y))\, dy} + G(P(0))\, e^{-\int_{0}^{t} \lambda(P(y))\, dy}    (7.13)


Here

G(p) = -qE_{im} \cdot \nabla f_s(p) .    (7.14)

Equation (7.13) formally resembles (6.6), but the free term takes negative values and thus cannot be interpreted as an initial condition. However, G can be presented as a difference of two positive functions, G = G⁺ − G⁻. This, together with the linearity of the problem, allows us to formulate two Boltzmann integral equations

f_{im}^{\pm}(p, t) = \int_0^{t} dt' \int dp'\, f_{im}^{\pm}(p', t')\, S(p', P(t'))\, e^{-\int_{t'}^{t} \lambda(P(y))\, dy} + G^{\pm}(P(0))\, e^{-\int_{0}^{t} \lambda(P(y))\, dy}    (7.15)

with initial conditions

f_{im}^{\pm}(p, 0) = G^{\pm}(p) \ge 0 .    (7.16)

The response function

f_{im}(p, t) = f_{im}^{+}(p, t) - f_{im}^{-}(p, t)    (7.17)

has some relevant properties. From Eq. (7.14) and the Gauss theorem it follows that

\int G(p)\, dp = 0 ,    (7.18)

so that the two functions f_{im}^{±} have equal norms at the initial time t = 0. The condition (7.7) then follows from the fact that the Boltzmann equation conserves the normalization:

\int f_{im}(p, t)\, dp = 0 , \qquad \forall t .    (7.19)

The decomposition G = G⁺ − G⁻ is not unique and can be further explored for conditions which give rise to an optimization of the computational approach. The two Eqs. (7.15) are linear with respect to E_{im}, and the same holds for the response function A_{im}(t). The knowledge of the impulse response provides sufficient information: because of the linearity of the problem, the response to a field with an arbitrary time dependence can be obtained from the impulse counterpart by a convolution, as will be shown shortly.
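Returning to the non-uniqueness of the decomposition G = G⁺ − G⁻: one explicit choice (our illustration, not prescribed by the text) is the positive/negative-part splitting

```latex
G^{+}(p) = \max\{\, G(p),\, 0 \,\}, \qquad
G^{-}(p) = \max\{\, -G(p),\, 0 \,\},
```

which is admissible by construction and, since any decomposition with G^{±} ≥ 0 must satisfy G⁺ ≥ max{G, 0} pointwise, minimizes the common norm ∫G^{±} dp of the two initial conditions.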


These considerations give rise to the following simulation model [84, 85]. The field impulse at t = 0 instantly creates two initial conditions, G⁺ and G⁻, of two ensembles of particles P and M. The evolution of these ensembles, which contain an equal number of particles, is governed by the stationary field E_s. The impulse response of the quantity A_{im}(t) is obtained from the difference of the corresponding averages of A in the two ensembles:

A_{im}(t) = \langle A \rangle_P(t) - \langle A \rangle_M(t)    (7.20)

For long evolution times the two ensembles converge to the same stationary state, described by the distribution function f_s:

f_{im}^{\pm}(p, t \to \infty) = C f_s(p) , \qquad C = \int G^{\pm}(p)\, dp .    (7.21)

Hence the impulse response of any averaged physical quantity tends to zero for long times:

A_{im}(t \to \infty) = 0 .    (7.22)
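A minimal sketch of this P/M simulation model, under toy assumptions chosen only so that the answer is known analytically: no stationary field, a constant scattering rate, a Gaussian stationary velocity distribution, and scattering events that redraw the velocity (so the impulse kick survives only until the first scattering). The velocity response is then 2Δv·e^{−λt}:

```python
import math
import random

# Toy model (all assumptions of this sketch, not the book's physics):
# field-free carriers, constant scattering rate LAM, Gaussian stationary
# velocities; a scattering redraws the velocity and erases the impulse kick.
rng = random.Random(11)
LAM, DV, T, N = 1.0, 0.5, 1.0, 100_000

diff = 0.0
for _ in range(N):
    # one P and one M particle share the same scattering history
    # (common random numbers strongly reduce the variance of the difference)
    v0 = rng.gauss(0.0, 1.0)             # member of the stationary ensemble
    vp, vm = v0 + DV, v0 - DV            # the impulse shifts the velocities
    t_scat = -math.log(1.0 - rng.random()) / LAM
    if t_scat < T:                       # first scattering before time T:
        v_new = rng.gauss(0.0, 1.0)      # both particles forget the kick,
        vp = vm = v_new                  # so their difference vanishes
    diff += vp - vm

resp = diff / N                          # A_im(T) for A = v, Eq. (7.20)
exact = 2.0 * DV * math.exp(-LAM * T)    # survival probability of the kick
print(resp, exact)
```

The pairing of P and M particles through common random numbers is a variance-reduction choice of this sketch; the difference (7.20) is unbiased with or without it.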

For the sake of completeness, we derive the expression for the response to an arbitrary signal.

Assert 7.1 The response function f₁, corresponding to an arbitrary time dependent signal φ(t), is given by the following convolution integral:

f_1(p, t) = \int_0^{t} dt_i\, \varphi(t_i)\, f_{im}(p, t - t_i)    (7.23)

In the following we present the proof. We consider the signal

E_1(t) = E_1\, \varphi(t)    (7.24)

and the reaction of the system at time t caused by an impulse at time t_i. The function f_{im} depends on the time difference t − t_i: f_{im}(p, t − t_i). Equation (7.13) can be rewritten as follows:

f_{im}(p, t - t_i) = \int_{t_i}^{t} dt' \int dp'\, f_{im}(p', t' - t_i)\, S(p', P(t'))\, e^{-\int_{t'}^{t} \lambda(P(y))\, dy} + \left( -qE_1 \cdot \nabla f_s(P(t_i)) \right) e^{-\int_{t_i}^{t} \lambda(P(y))\, dy}


Multiplying by φ(t_i), integrating over t_i in the interval (0, t), and using (7.23), one obtains

f_1(p, t) = \int_0^{t} dt' \int dp' \left( \int_0^{t} dt_i\, \varphi(t_i)\, f_{im}(p', t' - t_i)\, \theta(t' - t_i) \right) S(p', P(t'))\, e^{-\int_{t'}^{t} \lambda(P(y))\, dy}

+ \int_0^{t} dt_i \left( -qE_1(t_i) \cdot \nabla f_s(P(t_i)) \right) e^{-\int_{t_i}^{t} \lambda(P(y))\, dy} .    (7.25)

According to (7.23), the integral inside the brackets is f₁(p', t'). We conclude that the response to any signal can be expressed via f_{im} with Eq. (7.23). For the generalization to the case of a signal whose direction is also time dependent, it is sufficient to consider the components along the three axes, which are of the type of (7.24), and to use the linearity of (7.25).
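Assert 7.1 can be checked numerically for a toy impulse response h(t) = e^{−t} (our assumption, not derived from a transport model): the convolution (7.23) of h with a unit step must equal 1 − e^{−t}:

```python
import math

# Toy impulse response (an illustrative assumption): f_im plays the role of
# h(t) = e^{-t}, for which the convolution with a step is known analytically.
def h(t):
    return math.exp(-t)

def response(phi, t, n=2000):
    """Convolution (7.23) evaluated with the trapezoid rule."""
    dt = t / n
    total = 0.0
    for i in range(n + 1):
        ti = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(ti) * h(t - ti)
    return total * dt

t = 2.0
step = response(lambda ti: 1.0, t)  # response to a unit step signal
exact = 1.0 - math.exp(-t)          # analytic convolution for this h
print(step, exact)
```

Any other signal shape φ(t) can be passed in the same way, which is precisely the practical content of the convolution formula.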

7.3 Iteration Approach: Generalizing the Empirical Algorithms A variety of stochastic algorithms can be derived with the help of alternative ways of decomposition of the function G. The relaxation process is simulated by the ensemble approach based on the particular choice of G± .

7.3.1 Derivation of Finite Difference Algorithms

The first option is to directly represent ∇f_s as a finite difference:

G(p) \simeq -q|E_{im}| \left[ \frac{f_s(p + \Delta p_1) - f_s(p - \Delta p_1)}{2|\Delta p_1|} \right] .    (7.26)

The vector Δp₁ is parallel to E_{im}, while E_{im} and E_s can have arbitrary directions. The two terms in the brackets are directly interpreted as the initial distributions G^± for the ensembles P and M. In this approach the choice of Δp₁ is a matter of compromise between the need to maintain a linear response process and the increase of the stochastic error with the reduction of the magnitude of this quantity.

Algorithm 7.5 The first step is to choose N values of p distributed according to f_s(p). This can be done by simulating an ensemble of N particles until the stationary state is reached. Next, the ensemble is divided into P and M subensembles. The momenta of the P (M) particles are shifted by Δp₁ (by −Δp₁). This initializes the trajectories for the subsequent evolution, giving the response function of a quantity A defined by (7.20) and multiplied by −q|E_{im}|/(2|Δp₁|).

This algorithm generalizes the approach of Price [78] (Algorithm 7.2) for the computation of the mobility to the case of generic physical quantities A.

Algorithm 7.6 The second algorithm uses the result that the momentum values p_j = P(t_j), selected at equal time intervals t_j = jΔt over a single-particle trajectory P(t) driven by a constant field, are distributed according to f_s. This follows from the equivalence of time and ensemble averages under stationary conditions. Therefore, a single-particle approach can be used to calculate the impulse response. At time t_j the evolution of the single particle is interrupted. The current value of p is used to determine the initial point of a P trajectory as p + Δp₁ and of an M trajectory as p − Δp₁. Both P and M trajectories are simulated for a time T, and their contribution to the response function is recorded as follows: at equal time intervals t_i the quantity A(P⁺(t_i)) is added to the estimator ν_i⁺, and A(P⁻(t_i)) to the estimator ν_i⁻. Then the evolution of the interrupted single-particle trajectory continues for an interval Δt, and the procedure is repeated N times. The response function is estimated by the expression

A_{im}(t_i) = \frac{-q|E_{im}|}{2N|\Delta p_1|} \left( \nu_i^{+} - \nu_i^{-} \right) .    (7.27)

7.3.2 Derivation of Collinear Perturbation Algorithms

The stationary Boltzmann equation can be used to provide a natural decomposition of G in the case when E_s and E_{im} are parallel:

G(p) = \frac{E_{im}}{E_s} \left[ \lambda(p)\, f_s(p) - \int f_s(p')\, S(p', p)\, dp' \right] .    (7.28)

Algorithm 7.7 According to (7.28), the initial distributions are chosen as

G^{+}(p) = \frac{E_{im}}{E_s}\, \lambda(p)\, \{ f_s(p) \} , \qquad G^{-}(p') = \frac{E_{im}}{E_s} \int \{ f_s(p) \}\, \lambda(p)\, \frac{S(p, p')}{\lambda(p)}\, dp .    (7.29)

The two terms depend on f_s, so that the states p can be generated as in Algorithm 7.6. Moreover, the probability λ⁻¹S appears for the choice of the after-scattering state p' from a given p in G⁻. Having this in mind, one arrives at the following algorithm.


1. Follow the main trajectory for the time interval Δt and determine the before-scattering state p as well as the weight λ(p);
2. Perform a scattering event from p to p';
3. Simulate for a time T a trajectory P⁺(t), initialized by p, and another trajectory P⁻(t), initialized by p';
4. At equally spaced instants of time t_i, add λ(p)A(P⁺(t_i)) to the estimator ν_i⁺ and λ(p)A(P⁻(t_i)) to the estimator ν_i⁻;
5. Continue from Step 1 until N points p are generated;
6. The response function is estimated by

A_{im}(t_i) = \frac{E_{im}}{N E_s} \left( \nu_i^{+} - \nu_i^{-} \right) .

Algorithm 7.8 The initial distributions are reformulated as follows:

G^{+}(k) = \frac{E_{im} \langle \lambda \rangle_s}{E_s} \left\{ \frac{\lambda(k)\, f_s(k)}{\langle \lambda \rangle_s} \right\} , \qquad G^{-}(k') = \frac{E_{im} \langle \lambda \rangle_s}{E_s} \int \left\{ \frac{\lambda(k)\, f_s(k)}{\langle \lambda \rangle_s} \right\} \frac{S(k, k')}{\lambda(k)}\, dk ,    (7.30)

where

\langle \lambda \rangle_s = \int f_s(k)\, \lambda(k)\, dk

ensures the proper normalization. With the help of the algorithm for before-scattering state averaging (5.11), it can be shown that ⟨λ⟩_s is the inverse of the mean free flight time. The probability λf_s/⟨λ⟩_s gives the distribution of the before-scattering states. Hence the product of the two terms in (7.30) is the normalized distribution function of the after-scattering states. One obtains the following algorithm:

1. The main trajectory is simulated until the before- and after-scattering states p_b, p_a are found;
2. Two trajectories, P⁺(t) initialized by p_b and P⁻(t) initialized by p_a, are simulated for a time T;
3. At equally spaced time instants t_i, the quantity A(P⁺(t_i)) is added to the estimator ν_i⁺, while the quantity A(P⁻(t_i)) is added to ν_i⁻;
4. Continue from Step 1 until N points p are generated;
5. The response function is estimated by

A_{im}(t_i) = \frac{E_{im} \langle \lambda \rangle_s}{N E_s} \left( \nu_i^{+} - \nu_i^{-} \right) .


Algorithms 7.7 and 7.8 can be combined to obtain the single-particle counterpart of Algorithm 7.2. Indeed, we first note that the trajectories P⁻ can be projected onto the main trajectory, so there is no need to simulate them separately. This gives the second term on the right in (7.3). The time integral is equivalent to an averaging over the P ensemble, as in Algorithm 7.7. These results show that the interval of averaging (T − t) in (7.3) must be replaced by a constant T', which is large enough to capture the whole process of relaxation.

Chapter 8

Inhomogeneous Stationary Transport

Single-particle algorithms for general inhomogeneous conditions have been considered from the point of view of the Iteration Approach fairly late, at the beginning of the century, in [86, 87] and the references therein. This is related to the higher level of abstraction of the stationary concepts and notions and, in particular, of the two ways of evaluating the mean values given by (5.10) and (5.11), which required certain experience with the application of the approach compared to the more intuitive evolution problems. We consider the problem of finding a functional of the solution of the stationary Boltzmann equation (8.1):

v(p) \cdot \nabla_r f(p, r) + F(r) \cdot \nabla_p f(p, r) = \int dp_1\, S(p_1, p, r)\, f(p_1, r) - \lambda(p, r)\, f(p, r)    (8.1)

The solution is determined by conditions defined on the surface forming the real-space boundary of the simulation domain. The physical system is open, which means that there is a flux of carriers entering or leaving the simulation domain through the boundaries from and into the contact domains, where the distribution function is assumed known. The rest of the surface is assumed reflecting, so that the carriers are kept in the domain of interest. Under stationary conditions there is no time dependence of the generic physical quantities and, in particular, of the electric force F.



8.1 Stationary Conditions

The stationary conditions entail specific properties of both the trajectories and the transport equation, which are specified by the following two asserts.

Assert 8.1 Stationary trajectories are invariant with respect to a change of the time origin:

P(t_1 + \tau;\, t + \tau, p, r) = P(t_1;\, t, p, r) , \qquad R(t_1 + \tau;\, t + \tau, p, r) = R(t_1;\, t, p, r) .    (8.2)

In the following we present the proof. The shift of the time origin by τ is written as follows:

\tilde{P}(t_1 + \tau;\, t + \tau, p, r) = p - \int_{t_1 + \tau}^{t + \tau} F\big( \tilde{R}(y;\, t + \tau, p, r) \big)\, dy ,    (8.3)

\tilde{R}(t_1 + \tau;\, t + \tau, p, r) = r - \int_{t_1 + \tau}^{t + \tau} v\big( \tilde{P}(y;\, t + \tau, p, r) \big)\, dy .    (8.4)

A change of the variables y₁ = y − τ gives:

\tilde{P}(t_1 + \tau;\, t + \tau, p, r) = p - \int_{t_1}^{t} F\big( \tilde{R}(y_1 + \tau;\, t + \tau, p, r) \big)\, dy_1 ,    (8.5)

\tilde{R}(t_1 + \tau;\, t + \tau, p, r) = r - \int_{t_1}^{t} v\big( \tilde{P}(y_1 + \tau;\, t + \tau, p, r) \big)\, dy_1 .    (8.6)

The functions \tilde{P} and \tilde{R} satisfy Newton's equations

\frac{d\tilde{P}(t_1)}{dt_1} = F(\tilde{R}(t_1)) , \qquad \frac{d\tilde{R}(t_1)}{dt_1} = v(\tilde{P}(t_1))    (8.7)

and have common initialization values \tilde{P}(t) = P(t) = p, \tilde{R}(t) = R(t) = r. Equation (8.2) follows from the uniqueness of the solution of Newton's equations. Of particular importance is the fact that the force is time independent.
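Assert 8.1 can be verified directly for the simplest case of a constant force (a toy choice with q = m = 1; the numerical values are arbitrary):

```python
# Numerical check of Assert 8.1 for a constant force F (a toy choice, m = 1):
# backward trajectories depend only on the time difference t - t1, hence they
# are invariant under a common shift tau of all time arguments.
F = 0.3

def P(t1, t, p):
    """Momentum part: P(t1; t, p) = p - int_{t1}^{t} F dy."""
    return p - F * (t - t1)

def R(t1, t, p, r):
    """Position part: R(t1; t, p, r) = r - int_{t1}^{t} P(y; t, p) dy."""
    dt = t - t1
    return r - (p * dt - 0.5 * F * dt * dt)

p, r, t, t1, tau = 1.2, 0.7, 4.0, 1.0, 2.5
assert abs(P(t1 + tau, t + tau, p) - P(t1, t, p)) < 1e-12
assert abs(R(t1 + tau, t + tau, p, r) - R(t1, t, p, r)) < 1e-12
print("time-origin invariance holds")
```

For a constant force the integrals of (8.3) and (8.4) are elementary, which makes the invariance visible at machine precision; for a general stationary F(r) the same conclusion follows from the uniqueness argument above.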


Assert 8.2 The stationary Boltzmann equation is invariant with respect to the choice of the initial time.

In the following we present the proof. We choose a point p, r in the simulation domain at a given instant t, which initializes a backward trajectory. The approach used to obtain (7.12) is directly applicable to the inhomogeneous formulation of the task. In this case, however, we choose as lower limit of integration not the time t₀, but the time t_b(t, p, r) of crossing the boundary. Indeed, any time t' with t_b ≤ t' ≤ t determines a trajectory point in the simulation domain, which becomes a boundary point at t' = t_b. Then the integral equation can be written as

f\big( P(t'; t, p, r), R(t'; t, p, r) \big) = \int_{t_b(t,p,r)}^{t'} dt_1 \int dp'\, f\big( p', R(t_1; t, p, r) \big)\, S\big( p', P(t_1; t, p, r), R(t_1; t, p, r) \big)\, e^{-\int_{t_1}^{t'} \lambda( P(y;t,p,r),\, R(y;t,p,r) )\, dy}

+ e^{-\int_{t_b}^{t'} \lambda( P(y;t,p,r),\, R(y;t,p,r) )\, dy}\, f_b\big( P(t_b; t, p, r), R(t_b; t, p, r) \big) ,    (8.8)

where fb is the distribution function at the boundary point, which is assumed to be known. Alternatively, the differentiation of the equation on t  followed by the substitution t = t  gives rise to the integro-differential form (8.1). Now, we shift the time origin by τ . Equation (8.8), written for t  = t in the novel time is:

\[
f(p, r) = \int_{t_b(t+\tau,p,r)}^{t+\tau} dt_1 \int dp'\, f\big(p', R(t_1; t+\tau, p, r)\big)\, S\big(p', P(t_1; t+\tau, p, r), R(t_1; t+\tau, p, r)\big)\, e^{-\int_{t_1}^{t+\tau} dy\, \lambda(P(y;t+\tau,p,r),\, R(y;t+\tau,p,r))}
\]
\[
\qquad + e^{-\int_{t_b(t+\tau,p,r)}^{t+\tau} dy\, \lambda(P(y;t+\tau,p,r),\, R(y;t+\tau,p,r))}\, f_b\big(P(t_b; t+\tau, p, r), R(t_b; t+\tau, p, r)\big)
\]
From (8.2) it follows that t_b(t + τ, p, r) = t_b(t, p, r) + τ.


The change t_1' = t_1 − τ gives:
\[
f(p, r) = \int_{t_b(t,p,r)}^{t} dt_1' \int dp'\, f\big(p', R(t_1'+\tau; t+\tau, p, r)\big)\, S\big(p', P(t_1'+\tau; t+\tau, p, r), R(t_1'+\tau; t+\tau, p, r)\big)\, e^{-\int_{t_1'+\tau}^{t+\tau} dy\, \lambda(P(y;t+\tau,p,r),\, R(y;t+\tau,p,r))}
\]
\[
\qquad + e^{-\int_{t_b+\tau}^{t+\tau} dy\, \lambda(P(y;t+\tau,p,r),\, R(y;t+\tau,p,r))}\, f_b\big(P(t_b+\tau; t+\tau, p, r), R(t_b+\tau; t+\tau, p, r)\big)
\]
The integrals in the exponents now have limits appropriate for the change y' = y − τ. After this change the time arguments of all trajectories are increased by τ. With the help of (8.2), and by setting t' = t, we recover (8.8). It is convenient to choose the initialization time of the trajectories to be 0:

\[
f(p, r) = \int_{t_b(p,r)}^{0} dt_1 \int dp'\, f\big(p', R(t_1)\big)\, S\big(p', P(t_1), R(t_1)\big)\, e^{-\int_{t_1}^{0} dy\, \lambda(P(y), R(y))} + e^{-\int_{t_b(p,r)}^{0} dy\, \lambda(P(y), R(y))}\, f_b\big(P(t_b(p,r)), R(t_b(p,r))\big) \qquad (8.9)
\]

8.2 Iteration Approach: Forward Stochastic Model

The task of computing a functional of the solution of (8.9) is exemplified by finding the averaged value of the distribution function in a given domain Ω:
\[
f_\Omega = \int dp' \int dr'\, f(p', r')\, \theta_\Omega(p', r'), \qquad (8.10)
\]
where the integration is over the six-dimensional phase space. According to the Monte Carlo theory, the task can be reformulated with the help of the adjoint equation.


8.2.1 Adjoint Equation

Theorem 8.1 The equation, adjoint to (8.9) and corresponding to (8.10), is:
\[
g(p', r') = \int_0^\infty d\tau \int dp_a\, S(p', p_a, r')\, e^{-\int_0^\tau \lambda(P(y), R(y))\, dy}\, g\big(P(\tau), R(\tau)\big)\, \theta_D(r') + \theta_\Omega(p', r'), \qquad (8.11)
\]
where the trajectories are initialized by p_a, r', 0 and θ_D is the indicator of the simulation domain D.

In the following we present the proof. A comparison of (8.10) and (8.9) suggests that first the number of integrals in the two expressions must be equalized. A formal integration on r' allows one to associate the time integral in (8.9) with the kernel:

\[
f(p, r) = \int dp' \int dr'\, f(p', r') \int_{-\infty}^{0} dt'\, S\big(p', P(t'), r'\big)\, e^{-\int_{t'}^{0} \lambda(P(y), R(y))\, dy}\, \delta\big(r' - R(t')\big)\, \theta_D(r') + f_b\big(P(t_b), R(t_b)\big)\, e^{-\int_{t_b}^{0} \lambda(P(y), R(y))\, dy} \qquad (8.12)
\]
The indicator θ_D of the simulation domain ensures the correct limit t_b of the time integral.¹ The adjoint equation has the same kernel as (8.12), but the integration is on the alternative, unprimed variables p, r, and the free term is given by the function defining the inner product with f, (8.10):



\[
g(p', r') = \int dp \int dr\, g(p, r) \int_{-\infty}^{0} dt'\, S\big(p', P(t'), r'\big)\, e^{-\int_{t'}^{0} \lambda(P(y), R(y))\, dy}\, \delta\big(r' - R(t')\big)\, \theta_D(r') + g_0(p', r'),
\]
where g_0(p', r') = θ_Ω(p', r'). The change of the integration variables p, r by
\[
p_a = P(t'; 0, p, r); \qquad r' = R(t'; 0, p, r); \qquad dp\, dr = dp_a\, dr', \qquad (8.13)
\]

¹ The time t_b is uniquely determined since a trajectory cannot reenter the domain D, see Sect. 5.1.4.


where the last equality displays the Liouville theorem, gives
\[
p = P(0; t', p_a, r') = P(-t'; 0, p_a, r'), \qquad r = R(0; t', p_a, r') = R(-t'; 0, p_a, r'). \qquad (8.14)
\]
Here we used the invariance of the trajectories with respect to a shift of the time by −t'. The time origin can be shifted also in the exponent:
\[
e^{-\int_{t'}^{0} \lambda(P(y;t',p_a,r'),\, R(y;t',p_a,r'))\, dy} = e^{-\int_{0}^{-t'} \lambda(P(y;0,p_a,r'),\, R(y;0,p_a,r'))\, dy}
\]
A subsequent change τ = −t' leads to a forward-in-time parametrization of the trajectories, τ > 0. Finally, the spatial integral can be evaluated explicitly with the help of the δ-function, giving rise to (8.11).

8.2.2 Boundary Conditions

The task is reformulated in terms of the solution of the adjoint equation as follows:
\[
f_\Omega = \int dp' \int dr'\, g(p', r')\, f_b\big(P'(t_b), R'(t_b)\big)\, e^{-\int_{t_b}^{0} \lambda(P'(y), R'(y))\, dy} \qquad (8.15)
\]
The spatial integration is over the whole simulation domain D. The common assumption about the boundary conditions is that the carriers in the contacts obey the equilibrium distribution f_b = f_{MB}. The value of f_b on the reflecting boundaries is not known, but it is not needed for finding the solution. Equation (8.15) must be reformulated so that the boundary appears explicitly in the expression. For this purpose, we follow the idea of inverting the mapping between the internal points p', r' and the boundary points p_b, r_b reached at time t_b. Notably, internal points which initialize trajectories that never cross the boundary conveniently fall out of (8.15), because their boundary time is t_b = −∞. The following theorem has been presented in [86]:

Theorem 8.2 The product |v_⊥| f_b of the normal-to-the-boundary velocity component and the distribution function f_b presents the boundary conditions providing the value of f_Ω, according to:

\[
f_\Omega = \int_0^\infty dt \int_{K^+} dp_b \int_{\partial D} d\sigma\, |v_\perp(p_b)|\, f_b(p_b, r_b)\, e^{-\int_0^t \lambda(P_b(y), R_b(y))\, dy}\, g\big(P_b(t), R_b(t)\big) \qquad (8.16)
\]


Here ∂D comprises only the boundary part with the contacts, where exchange of carriers occurs, while K^+ comprises only the states with a velocity pointing towards the domain D.

In the following we present the proof. The general definition of a boundary is given by the equation B(r) = c, where B is a differentiable function of its arguments and c is a constant. The boundary crossing time for a trajectory R(t) is then the solution of the equation B(R(t)) − c = 0. This condition can be used in (8.15) together with the factor δ(B(R(t)) − c) φ(R(t)). The aim of the function φ is to conserve the value of f_Ω according to the condition
\[
\int_{-\infty}^{0} dt\, \delta\big(B(R(t)) - c\big)\, \varphi\big(R(t)\big) = 1, \qquad (8.17)
\]
which can be further evaluated as
\[
\int \delta\big(g(t)\big)\, \varphi(t)\, dt = \int \delta(t - t_b)\, |g'(t_b)|^{-1}\, \varphi(t_b)\, dt. \qquad (8.18)
\]
Here we assume that the argument of the delta function has a single root. The function φ is then evaluated as
\[
\varphi\big(R(t_b)\big) = \left| (\nabla_R B)\big(R(t_b)\big) \cdot \frac{dR}{dt}(t_b) \right| = |(\nabla_R B)(r_b)|\, |v_\perp(p_b)|. \qquad (8.19)
\]
Here v_⊥ is the normal-to-the-boundary velocity component and r_b, p_b are the values of the boundary point of the trajectory. The augmented equation (8.15) becomes

\[
f_\Omega = \int_{-\infty}^{0} dt' \int dp' \int dr'\, g(p', r')\, \delta\big(B(R'(t')) - c\big)\, \varphi\big(R'(t')\big)\, f_b\big(P'(t'), R'(t')\big)\, e^{-\int_{t'}^{0} \lambda(P'(y), R'(y))\, dy},
\]
which is further processed by following the steps leading to the forward time parametrization. The integration variables p', r' are replaced with p_b, r' according to:
\[
p_b = P'(t'; 0, p', r'), \qquad r' = R'(t'; 0, p', r') \qquad (8.20)
\]
With the help of (8.2) it follows:
\[
p' = P'(0; t', p_b, r') = P'(-t'; 0, p_b, r'), \qquad r' = R'(0; t', p_b, r') = R'(-t'; 0, p_b, r') \qquad (8.21)
\]


The time in the exponent is shifted by −t':
\[
e^{-\int_{t'}^{0} \lambda(P'(y;t',p_b,r'),\, R'(y;t',p_b,r'))\, dy} = e^{-\int_{0}^{-t'} \lambda(P'(y;0,p_b,r'),\, R'(y;0,p_b,r'))\, dy}
\]
The time is changed according to t = −t':

\[
f_\Omega = \int_0^\infty dt \int dp_b \int dr'\, g\big(P'(t), R'(t)\big)\, \delta\big(B(r') - c\big)\, \varphi(r')\, f_b(p_b, r')\, e^{-\int_0^t \lambda(P'(y), R'(y))\, dy} \qquad (8.22)
\]

Now we can use the equality
\[
\int dr\, \delta\big(B(r) - c\big)\, h(r) = \int d\sigma\, \frac{h(r_b)}{|(\nabla_r B)(r_b)|}, \qquad (8.23)
\]
which can be obtained by the linear expansion of B around the point r_b on the boundary:
\[
B(r) = c + (\nabla_r B)(r_b) \cdot (r - r_b) \qquad (8.24)
\]

The coordinate system can be rotated so that r_3 becomes normal to the surface at the given point. The Jacobian of such a transform is unity. The δ-function now depends on r_3 only and can be accounted for explicitly. Only the integration on the tangential variables remains. In this way, the initialization of the trajectory P'(t), R'(t) is given by the point p_b, r_b, 0. For convenience the latter is denoted by P_b, R_b. Finally, by using (8.19) and (8.23) in (8.22) one obtains (8.16). The velocity vectors point inside the domain, which defines the subspace K^+ for the p_b integration. The integrand
\[
j_b(p, r) = f_b(p, r)\, v(p) \cdot n(r) \qquad (8.25)
\]
of (8.16) has the physical meaning of the incoming particle flux density through the boundary. The particle flux of the carriers entering D is
\[
\Phi_D = \int_{K^+} dp_b \int_{\partial D} d\sigma\, j_b(p, r). \qquad (8.26)
\]
As the flux from the reflecting boundaries is zero, such domains can be excluded from ∂D in Eq. (8.26).
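As an illustration of the one-sided flux integral (8.26), the sketch below estimates it for an assumed Maxwellian f_b on a flat contact by sampling the normal velocity component over K^+. The normalized units (m = k_BT = 1, unit density) and the Monte Carlo sampling are choices of this toy, not of the text; the analytic half-space average is ∫_0^∞ v e^{−v²/2}/√(2π) dv = 1/√(2π) ≈ 0.399.

```python
import math
import random

def injected_flux_mc(n=400000, seed=5):
    """Monte Carlo estimate of the one-sided flux integral over K+ for a
    Maxwellian boundary distribution, in assumed units m = k_B*T = 1."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        v_perp = rng.gauss(0.0, 1.0)   # Maxwellian normal velocity component
        if v_perp > 0.0:               # K+: only states moving into the domain
            acc += v_perp
    return acc / n
```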


8.3 Iteration Approach: Single-Particle Algorithm and Ergodicity

The counterparts of (5.10) and (5.11), generalized to inhomogeneous conditions, are derived with the help of the Neumann expansion f_Ω = Σ_j f_Ω^{(j)}. In particular, the second term in the expansion will be analyzed in accordance with the Monte Carlo theory.
\[
f_\Omega^{(2)} = \Phi_D \int_0^\infty dt \int_{K^+} dp_b \int_{\partial D} d\sigma \int_0^\infty dt_1 \int dp_{a1} \int_0^\infty dt_2 \int dp_{a2} \left\{ \frac{j_b(p_b, r_b)}{\Phi_D} \right\} \times
\]
\[
\left\{ e^{-\int_0^t \lambda(P_b(y), R_b(y))\, dy}\, S\big(P_b(t), p_{a1}, R_b(t)\big)\, \theta_D\big(R_b(t)\big) \right\} \times \left\{ e^{-\int_0^{t_1} \lambda(P_1(y), R_1(y))\, dy}\, S\big(P_1(t_1), p_{a2}, R_1(t_1)\big)\, \theta_D\big(R_1(t_1)\big) \right\} \times
\]
\[
e^{-\int_0^{t_2} \lambda(P_2(y), R_2(y))\, dy}\, \theta_\Omega\big(P_2(t_2), R_2(t_2)\big) \qquad (8.27)
\]

The term in the curly brackets in the first line selects the initial trajectory point according to the flux density on the boundary. The expressions enclosed in the next curly brackets describe the probabilities for drift and scattering exactly as in Eq. (6.11). The conditional probabilities ordered in this way correspond to the natural sequence of drift and scattering events. The end phase space point of a drift process is used to determine the after-scattering state. The latter initializes the next trajectory for the subsequent free flight, and so on. The two indicator functions in (8.11) have a particular role in the estimator of (8.27). A trajectory beginning from the boundary has a nonzero contribution only if:
1. all points of the trajectory belong to the domain D;
2. the trajectory associated to the second free flight intersects Ω.
The value of f_Ω^{(2)} is thus a product of Φ_D with the only integral left for evaluation, namely
\[
I_f = \int_0^\infty dt_2\, e^{-\int_0^{t_2} \lambda(P_2(y), R_2(y))\, dy}\, \theta_\Omega\big(P_2(t_2), R_2(t_2)\big). \qquad (8.28)
\]

This analysis is straightforwardly generalized for any term f_Ω^{(j)}. In this way one obtains the main steps for the trajectory construction of the inhomogeneous


Single-Particle algorithm. The basic difference with the Ensemble algorithm, where a single trajectory evaluates one and only one term j at a given instant of time, is that here a trajectory evaluates the whole sum at once. Indeed, before leaving the domain, the trajectory contributes to all terms j = 0, 1, . . . , l. After this event the contribution to the remaining terms j > l is zero. In this way the simulation of this trajectory can be terminated and a new trajectory can begin as the next independent realization of the estimator for f_Ω.

8.3.1 Averaging on Before-Scattering States

The following theorem generalizes (5.11) for the inhomogeneous case.

Theorem 8.3 The averaged value ⟨A⟩_Ω of a generic physical quantity is evaluated according to
\[
\langle A \rangle_\Omega = f_D\, \frac{\sum_{(P(t_i), R(t_i)) \in \Omega} A\big(P(t_i), R(t_i)\big)\, \lambda^{-1}\big(P(t_i), R(t_i)\big)}{\sum_{(P(t_i), R(t_i)) \in D} \lambda^{-1}\big(P(t_i), R(t_i)\big)}, \qquad (8.29)
\]
where t_i are the time instants of scattering. The number of carriers f_D in the simulation domain D is assumed to be known.

In the following we present the proof. From the definition of the averaged value
\[
\langle A \rangle_\Omega = \int dp \int dr\, f(p, r)\, A(p, r)\, \theta_\Omega(p, r) \qquad (8.30)
\]
it follows that the free term in Eq. (8.11) is generalized by the expression A(p', r') θ_Ω(p', r'). The steps used to derive (8.28) from (8.10) give rise to the following formulation of the integral corresponding to the j-th term:
\[
I_A = \int_0^\infty dt_j \left\{ e^{-\int_0^{t_j} \lambda(P_j(y), R_j(y))\, dy}\, \lambda\big(P_j(t_j), R_j(t_j)\big) \right\} \frac{\theta_\Omega\big(P_j(t_j), R_j(t_j)\big)\, A\big(P_j(t_j), R_j(t_j)\big)}{\lambda\big(P_j(t_j), R_j(t_j)\big)} \qquad (8.31)
\]

It follows that a trajectory evolving in the simulation domain contributes at the end of each free flight the value of the random variable θ_Ω A/λ, which is zero if (P_j(t_j), R_j(t_j)) ∉ Ω. The estimator for N trajectories is then:
\[
\langle A \rangle_\Omega = \Phi_D\, \frac{\sum_{(P(t_i), R(t_i)) \in \Omega} (A \lambda^{-1})\big(P(t_i), R(t_i)\big)}{N} \qquad (8.32)
\]


and in particular
\[
f_D = \Phi_D\, \frac{\sum_{(P(t_i), R(t_i)) \in D} \lambda^{-1}\big(P(t_i), R(t_i)\big)}{N}. \qquad (8.33)
\]
In contrast to the number of carriers in the simulation domain, their flux Φ_D is usually not known. Equation (8.33) allows to rule out Φ_D/N and thus to obtain (8.29). If the number of carriers in D is not known initially, it can be obtained with the help of (8.29) from the knowledge of f_D in some subdomain, e.g. in the contacts. f_D is assumed unity in the homogeneous case, (5.11). The following algorithm is then derived.

Algorithm 8.1
1. Choose the number of trajectories N and initialize two estimators ν and μ with 0;
2. Choose the initial point p, r of the trajectories n = 1, . . . , N with the probability f_b |v_⊥|(p, r)/Φ_D;
3. While R(t) ∈ D, at the end of each free flight and at the local time t_i of a trajectory evolving in D, update ν and μ:
– ν by adding (A θ_Ω λ^{−1})(P(t_i), R(t_i));
– μ by adding λ^{−1}(P(t_i), R(t_i));
4. If R(t) exits the simulation domain at a time t ≤ t_i, a new trajectory begins according to Step 2;
5. After completion of the simulation of the N-th trajectory, ν is divided by μ. The ratio gives a stochastic estimate of ⟨A⟩_Ω/f_D.
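A minimal sketch of Algorithm 8.1 on an assumed toy model: a 1D domain D = [0, L] with a two-state momentum s = ±1, a constant scattering rate λ, isotropic scattering, and symmetric injection from both contacts. None of these modeling choices come from the book; they only make the answer checkable. With A = θ_Ω and Ω the left half of D, the ratio ν/μ estimates f_Ω/f_D, which is 1/2 by symmetry.

```python
import math
import random

def simulate(n_traj=20000, lam=5.0, v=1.0, L=1.0, omega=(0.0, 0.5), seed=1):
    """Toy 1D instance of Algorithm 8.1 (before-scattering-state averaging)."""
    rng = random.Random(seed)
    nu = mu = 0.0
    for _ in range(n_traj):
        # Step 2: inject from a contact with inward velocity (flux weighting is
        # trivial here because |v_perp| is the same on both contacts).
        x, s = (0.0, 1.0) if rng.random() < 0.5 else (L, -1.0)
        while True:
            t_scat = -math.log(1.0 - rng.random()) / lam   # next scattering time
            t_exit = (L - x) / v if s > 0 else x / v       # time to reach boundary
            if t_exit <= t_scat:
                break                                      # Step 4: trajectory leaves D
            x += s * v * t_scat                            # drift to scattering point
            # Step 3: update estimators at the before-scattering state
            if omega[0] <= x <= omega[1]:
                nu += 1.0 / lam                            # A = theta_Omega here
            mu += 1.0 / lam
            s = 1.0 if rng.random() < 0.5 else -1.0        # isotropic scattering
    return nu / mu                                         # estimates f_Omega / f_D
```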

8.3.2 Averaging in Time: Ergodicity

Theorem 8.4 The stationary system corresponding to the boundary value problem is ergodic with respect to the rule for obtaining the physical averages. The following relation holds:
\[
\langle A \rangle_\Omega = f_D\, \frac{\int dt\, (A \theta_\Omega)\big(P(t), R(t)\big)}{T}, \qquad (8.34)
\]
where the quantity t, called simulation time, accumulates the evolution times of the consecutive segments of the consecutive trajectories, and T is the total simulation time.

In the following we present the proof. An alternative definition of t is the time of a watch, which is started with the first free flight of the first trajectory and accumulates the flight time without being restarted between the consecutive trajectories. The proof of ergodicity is equivalent to the proof of the averaging


procedure corresponding to (5.10) and relies on the alternative representation of the integral (8.28):
\[
e^{-\int_0^{t_2} \lambda(P_2(y), R_2(y))\, dy} = \int_0^\infty dt_3\, e^{-\int_0^{t_3} \lambda(P_2(y), R_2(y))\, dy}\, \lambda\big(P_2(t_3), R_2(t_3)\big)\, \theta(t_3 - t_2) \qquad (8.35)
\]
After the change of the integration order and with the help of the θ-function one obtains
\[
I_f = \int_0^\infty dt_3 \left\{ e^{-\int_0^{t_3} \lambda(P_2(y), R_2(y))\, dy}\, \lambda\big(P_2(t_3), R_2(t_3)\big) \right\} \int_0^{t_3} dt_2\, \theta_\Omega\big(P_2(t_2), R_2(t_2)\big) \qquad (8.36)
\]

Now the term in the curly brackets is the probability for the choice of the next scattering instant of time, whereas the next integral has a clear meaning: it provides the dwelling time of the trajectory in Ω during the interval (0, t_3) of the last free flight. By following the arguments used to derive (8.31), we first generalize (8.36) for a physical quantity A and for arbitrary j:
\[
I_A = \int_0^\infty dt_{j+1} \left\{ e^{-\int_0^{t_{j+1}} \lambda(P_j(y), R_j(y))\, dy}\, \lambda\big(P_j(t_{j+1}), R_j(t_{j+1})\big) \right\} \int_0^{t_{j+1}} dt_j\, A\big(P_j(t_j), R_j(t_j)\big)\, \theta_\Omega\big(P_j(t_j), R_j(t_j)\big) \qquad (8.37)
\]

The contributions of the t_j integrals are first summed over the iterations j within a single trajectory and then over the consecutive trajectories n = 1, . . . , N. The sum can be reformulated as an integral over the simulation time t:
\[
\langle A \rangle_\Omega = \Phi_D\, \frac{\int dt\, (A \theta_\Omega)\big(P(t), R(t)\big)}{N} \qquad (8.38)
\]


and in particular
\[
f_D = \Phi_D\, \frac{\int dt\, \theta_D\big(P(t), R(t)\big)}{N}. \qquad (8.39)
\]
Finally, Φ_D/N is ruled out with the help of (8.39), which gives (8.34). In this way the ergodic property of the rule for obtaining averages of the physical quantities follows from the Monte Carlo method, and the a priori assumption of this property becomes obsolete.

Algorithm 8.2 The ergodic counterpart of Algorithm 8.1 is:
1. Choose the number of trajectories N and initialize two estimators ν and μ with 0;
2. The initial point p, r of trajectory n = 1, . . . , N is chosen with the probability f_b |v_⊥|(p, r)/Φ_D;
3. While R(t) ∈ D, after each free flight determine t_i as the lesser between the time of the free flight to the boundary of D and the time of the next scattering event. Update ν and μ:
– add to ν the quantity ∫_0^{t_i} dt (A θ_Ω)(P(t), R(t));
– add to μ the quantity ∫_0^{t_i} dt;
4. If R(t) exits the simulation domain at a time t ≤ t_i, Step 3 is omitted and a new trajectory begins according to Step 2;
5. After completion of the simulation of the N-th trajectory, ν is divided by μ. The ratio gives a stochastic estimate of ⟨A⟩_Ω/f_D.

8.3.3 The Choice of Boundary

This section addresses some interesting features of the stationary stochastic approach and in particular the consistency with respect to the choice of boundary. In practical tasks boundaries are suggested by considerations related to our knowledge of the physical system. From a numerical point of view, however, if f is known in a given domain D, so that one is able to specify boundary conditions over a (smooth, as discussed in Sect. 8.2.2) surface S, then the Single-Particle algorithm should provide f independently of the choice of S. In the following we consider a simulation domain D with a boundary S_D, chosen by physical considerations. Inside the domain we choose a subdomain Ω which has no common points with the boundary of D, S_Ω ∩ S_D = ∅, as well as the subdomain Ω̄, which completes Ω to D, Fig. 8.1. The task is to evaluate f_Ω̄, provided that at least both f_D and Φ_D are known. We consider three possible approaches for application of the Single-Particle approach. In what follows we use the indices i and o to denote quantities associated with transport into or out of Ω.


Fig. 8.1 The domain D comprises the two complementary subdomains Ω and Ω̄

A The trajectories begin from S_D and end on S_D. They evolve in the entire domain D, which comprises both Ω and Ω̄, and are used to evaluate f_Ω̄.
B The algorithm is applied only in Ω̄. The trajectories are initialized at the boundary S_Ω̄, comprised by the union S_D ∪ S_Ω, and evolve inside Ω̄ until reaching S_Ω̄ again. In this case a simulation in Ω is avoided at the expense of the knowledge of the particle flux density f_b |v_⊥|_o over S_Ω, which is now required to initialize the trajectories entering Ω̄.
C f_Ω̄ is given by the difference of f_D and f_Ω, the latter obtained by a simulation only in Ω. The trajectories begin and end on S_Ω. A knowledge of the particle flux density f_b |v_⊥|_i on the border of Ω is needed to initialize the trajectories entering Ω.

A simulation in the whole domain D apparently gives the most detailed information, as it includes the two subdomains. The question now is whether the trajectories obtained with approach A can be split on the border S_Ω into subtrajectories which can be interpreted as generated by approaches B and C. We will show that all three approaches consistently provide estimates for f_Ω̄.

Theorem 8.5 The distribution of the trajectories over S_Ω provided by approach A corresponds to the particle flux densities needed for approaches B and C.

In the following we present the proof. We begin with an analysis of the free flight processes between the two subdomains. We consider the points a and b on a Newtonian trajectory, as shown in Fig. 8.1, where b is the boundary crossing point. In approach A, S_Ω is crossed at b by the free-flight trajectory initiated at point a. In approaches B and C the trajectories begin from b and their number, according to


(8.16), is proportional to the weight f_b |v_⊥|_{i,o} at this point. We evaluate the number of trajectories which approach A allots to the vicinity of b. Consider the free-flight trajectory beginning from p_a, r_a at time zero, which crosses S_Ω at p_b, r_b at a time t_b. The neighboring trajectories, initialized in the vicinity of p_a, r_a, encompass the differential phase space volume dφ = dω dp around p_b, r_b. The probability that the next scattering moment is later than t_b can be expressed in the following way:
\[
\int_{t_b}^{\infty} dt\, \lambda\big(P_a(t), R_a(t)\big)\, e^{-\int_0^t \lambda(P_a(y), R_a(y))\, dy} = e^{-\int_0^{t_b} \lambda(P_a(y), R_a(y))\, dy} \int_0^{\infty} dt\, \lambda\big(P_b(t), R_b(t)\big)\, e^{-\int_0^t \lambda(P_b(y), R_b(y))\, dy}
\]
This equation has a clear heuristic interpretation in terms of elementary transport events. The t-dependent terms on the right give the probability for a free flight beginning from b. The exponential factor in front of the integral gives the probability for the free flight beginning from a to reach b. This probability, multiplied by the number of the trajectories beginning from a, gives the number of the trajectories reaching b. The point a can be any arbitrary point on the corresponding Newtonian trajectory beginning from the boundary of D. Carriers can appear in any point of the trajectory segment before b due to the scattering processes, and their contribution must be taken into account. To estimate the total effect of these elementary events we recall that approach A evaluates the averaged distribution function in any subdomain of D, and in particular in dφ. We choose dω to be the cylinder around the direction normal to the surface S_Ω at point b, as shown in Fig. 8.1. The base of the cylinder surrounds a surface S_{dω} ⊂ S_Ω and the height is dx. Since f is a continuous function, f_{dφ} can be evaluated as:
\[
f_{d\phi} = f_b(p_b, r_b)\, S_{d\omega}\, dx\, dp \qquad (8.40)
\]

The time dt_{dφ}, spent by a free-flying carrier with a velocity v(p) crossing the boundary in dω, is dx/|v_⊥(p_b)|. Then we can estimate f_{dφ} with the help of (8.39) by assuming that N trajectories begin from the boundary of D and that n of them encounter S_{dω}:
\[
f_{d\phi} = \Phi_D\, \frac{n\, dt_{d\phi}}{N} \qquad (8.41)
\]
From (8.40) and (8.41) follows
\[
f_b(p_b, r_b)\, |v_\perp(p_b)|\, S_{d\omega}\, dp = n\, \frac{\Phi_D}{N}. \qquad (8.42)
\]


The relative number of trajectories per unit phase space (S_{dω} dp = 1) provided by approach A is proportional to f_b |v_⊥|. This result holds for both the i and o directions of the trajectories.

What is left is to check whether the relation (8.39) holds for the particular domain Ω. We consider those numerical trajectories which can have segments belonging to Ω and segments belonging to Ω̄. In Ω these segments, N_Ω in number, begin from the point set S_Ω, K_{Ωi} and end in S_Ω, K_{Ωo}, where the indices i and o of the momentum space subsets are related to the velocities pointing into and out of Ω. An integration of (8.42) on S_{dω} dp_{bi} gives
\[
\frac{N_\Omega}{N} = \frac{\Phi_{\Omega i}}{\Phi_D}, \qquad \Phi_{\Omega i} = \int f_b\, |v_\perp|\, dS\, dp_{bi}. \qquad (8.43)
\]
Here Φ_{Ωi} is the flux of carriers entering Ω, which is exactly the quantity needed for approach C. If t_Ω is the total time spent by the N_Ω trajectories inside Ω, it follows that
\[
f_\Omega = \frac{\Phi_D\, t_\Omega}{N} = \frac{\Phi_{\Omega i}\, t_\Omega}{N_\Omega}. \qquad (8.44)
\]
This proves that the trajectories N_Ω, provided by approach A, can be interpreted as a proper set of trajectories having an initialization chosen according to approach C and giving rise to the correct evaluation of f_Ω in accordance with (8.39). We complete the analysis with the following considerations. Let us now consider Ω̄ in conjunction with approach B. The N_Ω̄ = N_Ω trajectories exiting from Ω have the proper initialization, according to S_Ω, K_{Ωo}, to give the evaluation of f_Ω̄. The number of the numerical trajectories for this approach is N + N_Ω. The times spent by the trajectories of these two approaches, A and B, are equal and are denoted by t_Ω̄. It holds
\[
t_{\bar\Omega} = f_{\bar\Omega}\, \frac{N}{\Phi_D} = f_{\bar\Omega}\, \frac{N + N_\Omega}{\Phi_D + \Phi_{\Omega o}},
\]
where Φ_{Ωo} is the integral of f_b |v_⊥| on dS_Ω, K_{Ωo}. The last two relations can be consistent only if Φ_{Ωo} = Φ_{Ωi}, which follows from (8.43). This result is a manifestation of the stationary character of the problem: the flux of carriers entering an arbitrary domain must be equal to the flux of the exiting carriers.
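The closing flux-balance statement can be checked numerically in an assumed 1D toy model (domain [0, L], two-state momentum, constant scattering rate, symmetric contact injection — all hypothetical choices, not the book's): counting rightward and leftward crossings of an internal surface x = x_s, stationarity requires the two counts to agree within statistical noise.

```python
import math
import random

def crossing_balance(n_traj=20000, lam=5.0, v=1.0, L=1.0, xs=0.5, seed=3):
    """Count trajectory crossings of the internal surface x = xs in both
    directions; in the stationary regime the fluxes must balance."""
    rng = random.Random(seed)
    n_right = n_left = 0
    for _ in range(n_traj):
        x, s = (0.0, 1.0) if rng.random() < 0.5 else (L, -1.0)
        while True:
            t_scat = -math.log(1.0 - rng.random()) / lam
            t_exit = (L - x) / v if s > 0 else x / v
            ti = min(t_scat, t_exit)
            x_new = x + s * v * ti
            if min(x, x_new) < xs <= max(x, x_new):   # free flight crosses xs
                if s > 0:
                    n_right += 1
                else:
                    n_left += 1
            if t_exit <= t_scat:
                break                                  # trajectory leaves D
            x = x_new
            s = 1.0 if rng.random() < 0.5 else -1.0
    return n_right, n_left
```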


8.4 Iteration Approach: Trajectory Splitting Algorithm

The Iteration Approach can be applied to derive the statistical enhancement algorithm. The intuitive Trajectory Splitting algorithm of Phillips and Price can be obtained with the help of the adjoint equation (8.11). The iteratively applied kernel of the equation can be presented as a sum of M branches:
\[
S(p', p_a, r')\, e^{-\int_0^\tau \lambda(P(y), R(y))\, dy} = \sum_{i=1}^{M} \frac{1}{M}\, S(p', p_a, r')\, e^{-\int_0^\tau \lambda(P(y), R(y))\, dy}
\]
This gives rise to a weight 1/M for each of the branches. The branching can be related to the fulfillment of certain conditions, such as penetration into a given phase space domain, or can follow a certain type of interaction. In general the branching is independent of the previous history, and M_j can vary with the consecutive iterations j. The main condition is to update the weight of the trajectory accumulated from the previous iterations by a multiplication with the factor 1/M_j.
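The 1/M weight bookkeeping can be demonstrated on an assumed toy rare-event task (not from the book): estimating the free-flight survival probability P(τ > 3) = e^{−3} for a unit scattering rate, with one splitting surface at τ = 1.5 where each surviving trajectory branches into M copies of weight 1/M, exploiting the memoryless property of the exponential flight time.

```python
import math
import random

def split_estimate(n=200000, thresh=1.5, target=3.0, M=8, seed=4):
    """Unbiased splitting estimator of P(tau > target) for tau ~ Exp(1),
    with a single splitting surface at `thresh`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        tau = -math.log(1.0 - rng.random())            # free-flight time, Exp(1)
        if tau > thresh:                               # reaches the splitting surface
            for _ in range(M):                         # M branches, weight 1/M each
                extra = -math.log(1.0 - rng.random())  # memoryless continuation
                if extra > target - thresh:
                    total += 1.0 / M
    return total / n
```

The splitting leaves the expectation unchanged (each branch carries weight 1/M) while reducing the variance of the rare-event estimate compared to unsplit sampling with the same number of initial trajectories.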

8.5 Iteration Approach: Modified Backward Algorithm

The Iteration Approach can be applied according to (6.1) for the derivation of the Modified Backward algorithm. Useful information about the consecutive evolution events of the Backward algorithm can be retrieved from the analysis of the iterative expansion of the solution. We consider the second iteration f^{(2)} of the stationary inhomogeneous equation (8.9):
\[
f^{(2)}(p, r) = \int_{t_b(p,r)}^{0} dt_1 \int_{t_{b1}}^{0} dt_2 \int dp_1 \int dp_2\, f_b\big(P_2(t_{b2}), R_2(t_{b2})\big)\, e^{-\int_{t_{b2}}^{0} dy\, \lambda(P_2(y), R_2(y))} \times
\]
\[
S\big(p_2, P_1(t_2), R_1(t_2)\big)\, e^{-\int_{t_2}^{0} dy\, \lambda(P_1(y), R_1(y))} \times S\big(p_1, P(t_1), R(t_1)\big)\, e^{-\int_{t_1}^{0} dy\, \lambda(P(y), R(y))} \qquad (8.45)
\]

If the last row of (8.45) is multiplied and divided by λ(P(t_1), R(t_1)), we obtain the probability for a drift backward in time for a trajectory initialized at p, r at time 0, multiplied by the probability for scattering in a direction backward to the physical process. To conserve the value of f^{(2)} we further need to introduce the


inverse factor, which is treated as a weight. The same can be done with the previous term, which gives rise to an algorithm similar to that in (6.1). The difference is that now the backward trajectory is not bounded by the time, but by the boundary. The backward evolution ends when the current trajectory hits the boundary of the simulation domain. Although this algorithm is well founded from a numerical point of view, the practical implementation can pose problems due to the specific character of the physical scattering processes. The basic scattering processes involve phonons, which strive to maintain the equilibrium in the system. A high-energy carrier preferably transfers its energy to the phonons, because of the higher probability for such a process. Accordingly, a backward scattering process will add energy to the carrier state, so that when a trajectory reaches the spatial boundary, it encounters the exponentially small high-energy tail of the distribution in momentum space. In contrast to the time-dependent counterpart, the mean number of scattering events is not bounded by the evolution time. The number of scattering events varies in a wide range, giving rise to a huge variance of the accumulated weights. A modification of the algorithm is suggested in [88], aiming at a variance reduction. The idea is to use the physical scattering probabilities also for the construction of the backward trajectories. In this way, the energy of the carrier state associated to the backward trajectory remains in the equilibrium energy region, where the momentum distribution is not negligible. The principle of detailed balance
\[
S(p, p') = S(p', p)\, e^{(\epsilon(p) - \epsilon(p'))/k_B T} \qquad (8.46)
\]
is used, which allows to reformulate (8.45).

\[
f^{(2)}(p, r) = \int_{t_b(p,r)}^{0} dt_1 \int_{t_{b1}}^{0} dt_2 \int dp_1 \int dp_2\, f_b\big(P_2(t_{b2}), R_2(t_{b2})\big)\, e^{-\int_{t_{b2}}^{0} dy\, \lambda(P_2(y), R_2(y))} \times
\]
\[
e^{\frac{\epsilon(p_2) - \epsilon(P_1(t_2))}{k_B T}} \times \left\{ \lambda\big(P_1(t_2), R_1(t_2)\big)\, e^{-\int_{t_2}^{0} dy\, \lambda(P_1(y), R_1(y))} \times \frac{S\big(P_1(t_2), p_2, R_1(t_2)\big)}{\lambda\big(P_1(t_2), R_1(t_2)\big)} \right\} \times
\]
\[
e^{\frac{\epsilon(p_1) - \epsilon(P(t_1))}{k_B T}} \times \left\{ \lambda\big(P(t_1), R(t_1)\big)\, e^{-\int_{t_1}^{0} dy\, \lambda(P(y), R(y))} \times \frac{S\big(P(t_1), p_1, R(t_1)\big)}{\lambda\big(P(t_1), R(t_1)\big)} \right\} \qquad (8.47)
\]


The conditional probabilities for the trajectory construction are enclosed in the curly brackets, whereas the exponential weight factors are displayed on separate lines. The interpretation of the consecutive terms is straightforward, so that we only sketch the procedure of the trajectory construction. The phase space point p, r where f^{(2)} will be evaluated, together with the time 0, initializes the backward trajectory P, R. The product of the conditional probabilities is applied in a backward direction, so that the trajectory construction begins with the very last term in the curly brackets in (8.47). This determines the scattering time t_1. Then a scattering event follows from P(t_1) to p_1, selected by the next term in the curly brackets. The initial weight 1 is multiplied by the exponent in the row above. p_1, R(t_1), 0 initialize the next trajectory. A free flight and a scattering event follow, as well as a multiplication of the weight according to the next two lines upwards. p_2, R_1(t_2), 0 initialize the last trajectory segment, until the boundary is reached at time t_{b2}. The term in the first row can be interpreted as a weight factor. Alternatively, the exponent may be treated as a probability, similarly to the approaches described in Sect. 8.

8.6 A Comparison of Forward and Backward Approaches

We present a simple example in order to compare the peculiarities of the forward and the backward approach. While in the former the task is weakly formulated, namely to determine an integral in a given phase space domain, the latter determines the distribution function pointwise. Thus the forward approach provides information in the whole simulation domain, while the backward counterpart is the algorithm of choice when selected points or small domains of the phase space are of interest, but with a directly controlled precision. A simple example outlines these considerations. The evolution of an initial equilibrium distribution of carriers in silicon at 300 K is followed for 3 ps under the action of a 10 kV/cm applied electric field. At such an evolution time the system is practically in a stationary state. The energy distribution of the carriers obtained by the Forward and the Modified Backward algorithm is shown in Fig. 8.2. For a fair comparison, the Forward algorithm uses statistical enhancement. The Modified Backward algorithm is based on the swap of the probabilities for emission and absorption:
\[
S_{ab}(p, p') = A\, n(q)\, \delta\big(\epsilon(p) - \epsilon(p') + \hbar\omega_q\big) \qquad (8.48)
\]
\[
S_{em}(p, p') = A\, (n(q) + 1)\, \delta\big(\epsilon(p) - \epsilon(p') - \hbar\omega_q\big) \qquad (8.49)
\]
where A is a factor depending on the physical characteristics of the system. One can use the Bose-Einstein distribution function to show that
\[
n(q) + 1 = n(q)\, e^{\hbar\omega_q / k_B T} \qquad (8.50)
\]


Fig. 8.2 Carrier distribution as a function of the energy, computed by the forward (•) and backward (◦) evolution

so that the following relations hold:
\[
S_{ab}(p, p') = S_{em}(p', p)\, e^{-\hbar\omega_q / k_B T} \qquad (8.51)
\]
\[
S_{em}(p, p') = S_{ab}(p', p)\, e^{\hbar\omega_q / k_B T} \qquad (8.52)
\]
These expressions are in accordance with the principle of detailed balance.
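Relations (8.50)-(8.52) follow directly from the Bose-Einstein occupation n(q) = 1/(e^{ℏω_q/k_BT} − 1); a short numeric check in the dimensionless variable x = ℏω_q/k_BT (the helper names are illustrative, not from the text):

```python
import math

def bose_einstein(x):
    """Phonon occupation n(q) for dimensionless mode energy x = hbar*omega_q/(k_B*T)."""
    return 1.0 / math.expm1(x)

def check(x):
    n = bose_einstein(x)
    # (8.50): n + 1 = n * exp(x)
    assert abs((n + 1.0) - n * math.exp(x)) < 1e-9
    # (8.51): the absorption prefactor n equals the emission prefactor (n + 1)
    # attenuated by exp(-x), which is the content of detailed balance here
    s_em, s_ab = n + 1.0, n
    assert abs(s_ab - s_em * math.exp(-x)) < 1e-9

for x in (0.1, 1.0, 3.0):
    check(x)
```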

Chapter 9

General Transport: Self-Consistent Mixed Problem

The most general computational task of modeling classical transport in semiconductor devices involves the Boltzmann equation supplemented by initial and boundary conditions. Furthermore, the self-consistency between charge transport and electromagnetic fields demands the inclusion of a computation of the latter, which usually involves the Poisson equation. Finally, the rising importance of certain rare events suggests incorporating into the computational scheme algorithms for statistical enhancement. In particular, most of the simulators use the Monte Carlo Ensemble algorithm in conjunction with algorithms for population control based on trajectory splitting, considered in Sect. 5.3. Alternatively, statistical enhancement can be realized with event biasing. Thus our goal is to develop self-consistent algorithms suitable for mixed conditions and involving event biasing. This is accomplished in two steps. In this chapter we develop the stochastic model in terms of numerical trajectories and statistical weights. In the next chapter we consider suitable models for event biasing and then analyze the behavior of the generated weights in the Boltzmann-Poisson coupling scheme. The stochastic model is based on the application of the Iteration Approach to the adjoint integral equation corresponding to the Boltzmann equation. It is important to show that the stochastic model generalizes the existing Ensemble algorithm, namely to show that we can obtain it as a particular case of the approach. The numerical aspects of the stochastic model allow to distinguish numerical particles originating from the initial condition from those originating from the boundaries. On the contrary, the phenomenological Ensemble algorithm does not discriminate such particles. Indeed, all carriers follow a natural evolution and contribute to the physical averages independently of their origin. From a statistical point of view their weight is always unity.
We show that this feature is maintained by the stochastic model, provided that a particular initialization and proper trajectory evolution probabilities are chosen. In this case the weight associated with the numerical trajectories remains equal to unity during the evolution [89]. Furthermore, the Iteration Approach allows us to consider the problem of the variance of the stochastic process and thus to estimate the natural carrier number fluctuations in desired device regions.

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_9

9.1 Formulation of the Problem

We consider the weak formulation of the problem, namely to compute the mean value ⟨A⟩(t) of a generic physical quantity A determined by the solution of the seven-dimensional Boltzmann equation in the presence of initial and boundary conditions. The preferred approach for such a task is to use the adjoint equation and the forward parametrization of the trajectories. The task is now augmented to involve the time variable.

\[
\langle A \rangle(t) = \int_D d\mathbf{p}\, d\mathbf{r}\, A(\mathbf{p}, \mathbf{r})\, f(\mathbf{p}, \mathbf{r}, t)
= \int d\mathbf{p} \int_D d\mathbf{r} \int_0^{\infty} dt'\, A(\mathbf{p}, \mathbf{r})\, \delta(t - t')\, f(\mathbf{p}, \mathbf{r}, t') \tag{9.1}
\]

The distribution function f is normalized to the number of carriers in the domain D, which in the single-particle problem is denoted by f_D, (8.33). Here we use another notation, N(t) = (θ_D, f), to underline the correspondence with the number of trajectories simulated in parallel. Accordingly, N(t, Ω) = (θ_Ω, f) is the number of carriers in the subdomain Ω. The derivation of the integral form of the Boltzmann equation follows the steps giving rise to (7.12) and (8.8), where, in particular, the integration is in the limits (t_0, t). t is the evolution time, whereas t_0 is the larger of the initial and boundary times. Here, the condition is that all points of the backward-in-time parametrized trajectory in the interval τ_b = t − t_b belong to D. In both cases f(t_0) is known, given by the initial or the boundary conditions. In contrast to the stationary case, where τ depends only on the initialization point (p, r), now it depends also on t. The existence of two alternative times leads to the appearance of two alternative terms in the integral form of the equation.

\[
f(\mathbf{p}, \mathbf{r}, t) = \int_0^{t} dt' \int d\mathbf{p}'\, \theta_D(\mathbf{R}(t'))\, f(\mathbf{p}', \mathbf{R}(t'), t')\, S(\mathbf{p}', \mathbf{P}(t'), \mathbf{R}(t'))\, e^{-\int_{t'}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy} \tag{9.2}
\]
\[
+\, e^{-\int_{0}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, f_i(\mathbf{P}(0), \mathbf{R}(0))
+ e^{-\int_{t_b}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, f_b(\mathbf{P}(t_b), \mathbf{R}(t_b), t_b)
\]


For a given initialization p, r, t of the backward trajectory one of these terms, corresponding to either initial or boundary conditions, is always zero.

9.2 The Adjoint Equation

The following theorem generalizes (8.11) for the mixed problem.

Theorem 9.1 The equation adjoint to (9.2) has the form
\[
g(\mathbf{p}', \mathbf{r}', t') = g_0(\mathbf{p}', \mathbf{r}', t') + \int_{t'}^{\infty} d\tau \int d\mathbf{p}_a\, \theta_D(\mathbf{r}')\, S(\mathbf{p}', \mathbf{p}_a, \mathbf{r}')\, e^{-\int_{t'}^{\tau} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, g(\mathbf{P}(\tau), \mathbf{R}(\tau), \tau), \tag{9.3}
\]
where g_0, corresponding to (9.1), is
\[
g_0(\mathbf{p}', \mathbf{r}', t') = A(\mathbf{p}', \mathbf{r}')\, \delta(t - t'), \tag{9.4}
\]

with a trajectory initialization provided by p_a, r', t'.

In the following we present the proof. The kernel of (9.2) is degenerate, which requires reformulating it as a seven-dimensional integrand with the help of the Dirac and Heaviside functions.
\[
f(\mathbf{p}, \mathbf{r}, t) = \int_0^{\infty} dt' \int d\mathbf{p}' \int d\mathbf{r}'\, f(\mathbf{p}', \mathbf{r}', t')\, S(\mathbf{p}', \mathbf{P}(t'), \mathbf{r}')\, e^{-\int_{t'}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, \theta_D(\mathbf{r}')\, \delta(\mathbf{r}' - \mathbf{R}(t'))\, \theta(t - t') \tag{9.5}
\]
\[
+\, f_0(\mathbf{p}, \mathbf{r}, t) \tag{9.6}
\]

The free term in (9.2), denoted by f_0, is a function of the set of variables p, r, t. Indeed, the set determines P(t_0), R(t_0) at the initial time t_0, chosen to be zero, and t_b (and hence P(t_b), R(t_b)). The adjoint equation has the same kernel, but the integration is over the unprimed variables.









\[
g(\mathbf{p}', \mathbf{r}', t') = \int_0^{\infty} dt \int d\mathbf{p} \int d\mathbf{r}\, g(\mathbf{p}, \mathbf{r}, t)\, S(\mathbf{p}', \mathbf{P}(t'), \mathbf{r}')\, e^{-\int_{t'}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, \theta_D(\mathbf{r}')\, \delta(\mathbf{r}' - \mathbf{R}(t'))\, \theta(t - t') + g_0(\mathbf{p}', \mathbf{r}', t') \tag{9.7}
\]


The choice of the free term g_0 determines the solution g. The particular expression (9.4) provides ⟨A⟩(t) in accordance with (9.1). It is important to note that the time variable is t', while t is an external parameter. A more explicit notation would be, e.g., g_{0,t}, but t will be skipped for the sake of simplicity. A change of the integration variables from p, r to p_a = P(t'), r = R(t'), which, according to the Liouville theorem, does not introduce additional factors into the kernel, gives rise to a forward parametrization as in the case of (8.11):
\[
g(\mathbf{p}', \mathbf{r}', t') = \int_0^{\infty} dt \int d\mathbf{p}_a \int d\mathbf{r}\, g(\mathbf{P}(t), \mathbf{R}(t), t)\, S(\mathbf{p}', \mathbf{p}_a, \mathbf{r})\, e^{-\int_{t'}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, \theta_D(\mathbf{r})\, \delta(\mathbf{r} - \mathbf{r}')\, \theta(t - t') + g_0(\mathbf{p}', \mathbf{r}', t').
\]
It is now possible to compute the δ and θ functions, which gives rise to (9.3) under the condition that the integration variable t is replaced by τ, as the former is reserved for the evolution time. Finally, a replacement in (9.1) gives rise to (9.4). The solution of (9.3) for the particular choice A = θ_Ω(p, r) can be used to identify the random variable associated with the Ensemble algorithm. Indeed, it will be shown that the quantity

\[
G(\theta_\Omega, t; \mathbf{p}_\alpha, \mathbf{r}_\alpha, 0) = \int_0^{\infty} dt'\, e^{-\int_0^{t'} \lambda(\mathbf{P}_\alpha(y), \mathbf{R}_\alpha(y)) dy}\, g(\mathbf{P}_\alpha(t'), \mathbf{R}_\alpha(t'), t') \tag{9.8}
\]

gives the probability for a carrier with initial coordinates p_α, r_α, 0, initializing the trajectory (P_α(y), R_α(y)), to appear in Ω at time t without leaving D during the evolution. For this purpose we use the probabilities (5.1) and (5.3), which now depend on the following arguments:
\[
p_S(\mathbf{p}, \mathbf{p}', \mathbf{r})\, d\mathbf{p}' = \frac{S(\mathbf{p}, \mathbf{p}', \mathbf{r})}{\lambda(\mathbf{p}, \mathbf{r})}\, d\mathbf{p}', \qquad
w(t; \mathbf{p}, \mathbf{r}, t_0)\, dt = e^{-\int_{t_0}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, \lambda(\mathbf{P}(t), \mathbf{R}(t))\, dt \tag{9.9}
\]

With the help of (9.9) we formulate the following theorem:


Theorem 9.2 The quantity G can be presented as an infinite-dimensional integral of the type
\[
G = \lim_{m \to \infty} \left\{ \prod_{l=0}^{m} \int_{t_l}^{\infty} dt_{l+1} \int d\mathbf{p}_{a_{l+1}}\, w(t_{l+1}; \mathbf{p}_{a_l}, \mathbf{R}_{l-1}(t_l), t_l)\, p_S(\mathbf{P}_l(t_{l+1}), \mathbf{p}_{a_{l+1}}, \mathbf{R}_l(t_{l+1})) \right\} \times \tag{9.10}
\]
\[
\sum_{i=0}^{m} \theta(t - t_i) \left( \prod_{j=0}^{i-1} \theta_D(\mathbf{R}_j(t_{j+1})) \right) \theta_\Omega(\mathbf{P}_i(t), \mathbf{R}_i(t))\, \theta(t_{i+1} - t),
\]

where the convention about the notations is t_0 = 0, p_{a_0} = p_α, R_{−1} = r_α, P_0 = P_α, R_0 = R_α and \(\prod_{j=0}^{-1} \theta_D = 1\).

In the following we present the proof. As usual, we analyze the contribution of the second iteration term of (9.3) to the value of (9.8), assuming that A = θ_Ω.

\[
G^{(2)} = \int_0^{\infty} dt' \int_{t'}^{\infty} dt_1 \int d\mathbf{p}_{a_1} \int d\mathbf{p}_{a_2}\,
e^{-\int_0^{t'} \lambda(\mathbf{P}_\alpha(y), \mathbf{R}_\alpha(y)) dy}\, S(\mathbf{P}_\alpha(t'), \mathbf{p}_{a_1}, \mathbf{R}_\alpha(t'))\, \theta_D(\mathbf{R}_\alpha(t')) \times \tag{9.11}
\]
\[
e^{-\int_{t'}^{t_1} \lambda(\mathbf{P}_1(y), \mathbf{R}_1(y)) dy}\, S(\mathbf{P}_1(t_1), \mathbf{p}_{a_2}, \mathbf{R}_1(t_1))\, \theta_D(\mathbf{R}_1(t_1)) \times
e^{-\int_{t_1}^{t} \lambda(\mathbf{P}_2(y), \mathbf{R}_2(y)) dy}\, \theta_\Omega(\mathbf{P}_2(t), \mathbf{R}_2(t))
\]

The equation can be augmented, as in the case of (6.11), with factors which complete the consecutive exp and S terms to the probabilities (9.9). Furthermore, it is convenient to change the notations according to the scheme t' → t_1, t_1 → t_2 and to augment the integration interval over t_2 to infinity by introducing the function θ(t − t_2). In this way, it is easy to see that G^{(2)} is a product of conditional probabilities which generate the natural process of evolution of the Boltzmann carriers. In particular, we can recognize the elementary events of drift over the trajectory P_α, R_α until time t_1, when a scattering event occurs. The carrier coordinates at the end of the free flight are input parameters for the generation of the after-scattering state, which, in turn, is used to determine the next free-flight trajectory and so on. The last exponent is the probability for a drift without scattering on the trajectory P_2, R_2 until reaching the target time t. Indeed, (8.35) is generalized


for the time dependent case as follows:
\[
e^{-\int_{t_2}^{t} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy} = \int_{t_2}^{\infty} dt_3\, e^{-\int_{t_2}^{t_3} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, \lambda(\mathbf{P}(t_3), \mathbf{R}(t_3))\, \theta(t_3 - t), \tag{9.12}
\]

showing that the exponent unifies all events with scattering times larger than t. Finally, θ_Ω in (9.11) rejects all processes in which the carrier is not in Ω at time t. The integrals over time and momentum gather all elementary events consisting of three free flights and two scattering processes. The domain indicators ensure the condition that the whole trajectory is in D until time t. The function θ takes care of the proper time ordering t_1 ≤ t_2 ≤ t. To summarize, Eq. (9.10) comprises processes of two scattering events and three free flights, under the condition that the trajectory entirely belongs to D and is in Ω at time t. By adding the terms corresponding to a single scattering event, i = 1, and to free motion, i = 0, one obtains with the help of (9.12):
\[
\sum_{i=0}^{2} G^{(i)} = \int_0^{\infty} dt_1 \int_{t_1}^{\infty} dt_2 \int_{t_2}^{\infty} dt_3 \int d\mathbf{p}_{a_1} \int d\mathbf{p}_{a_2} \int d\mathbf{p}_{a_3} \times \tag{9.13}
\]
\[
w(t_1; \mathbf{p}_\alpha, \mathbf{r}_\alpha, 0)\, p_S(\mathbf{P}_\alpha(t_1), \mathbf{p}_{a_1}, \mathbf{R}_\alpha(t_1)) \times
w(t_2; \mathbf{p}_{a_1}, \mathbf{R}_\alpha(t_1), t_1)\, p_S(\mathbf{P}_1(t_2), \mathbf{p}_{a_2}, \mathbf{R}_1(t_2)) \times
\]
\[
w(t_3; \mathbf{p}_{a_2}, \mathbf{R}_1(t_2), t_2)\, p_S(\mathbf{P}_2(t_3), \mathbf{p}_{a_3}, \mathbf{R}_2(t_3)) \times
\]
\[
\Bigl[ \theta_\Omega(\mathbf{P}_\alpha(t), \mathbf{R}_\alpha(t))\, \theta(t_1 - t) + \theta(t - t_1)\, \theta_D(\mathbf{R}_\alpha(t_1))\, \theta_\Omega(\mathbf{P}_1(t), \mathbf{R}_1(t))\, \theta(t_2 - t)
\]
\[
+\, \theta(t - t_2)\, \theta_D(\mathbf{R}_\alpha(t_1))\, \theta_D(\mathbf{R}_1(t_2))\, \theta_\Omega(\mathbf{P}_2(t), \mathbf{R}_2(t))\, \theta(t_3 - t) \Bigr]
\]
The Heaviside functions effectively decompose the random variable enclosed in the brackets into events with zero, one, and two scattering events. This result can be generalized to the whole series, giving rise to the infinite-dimensional integral (9.10) and a sum of θ functions which represent the random variable ψ associated with the transport process. Next, we analyze the physical aspects of the obtained equation. We consider the special case of evolution in the whole phase space T, so that Ω = D = T. In this case θ_Ω = θ_D = 1 and the random variable in (9.10) is reduced to
\[
\psi = \sum_{i=0}^{m-1} \theta(t - t_i)\, \theta(t_{i+1} - t); \qquad 0 < t_1 < t_2 < \cdots \tag{9.14}
\]


The time axis is partitioned into intervals between the consecutive times in (9.10). For any time t and for any choice of the consecutive times t_i, there is one and only one interval t_j < t < t_{j+1} containing t. Then θ(t − t_j)θ(t_{j+1} − t) = 1, whereas the rest of the terms in (9.14) are zero. The Heaviside functions partition the evolution into complementary events, having probabilities which sum up to unity, ψ = 1. In this case, the value of (9.10) is easily evaluated:
\[
G(\theta_T, t; \mathbf{p}_\alpha, \mathbf{r}_\alpha, 0) = 1 \qquad \forall\, t,\, \mathbf{p}_\alpha, \mathbf{r}_\alpha. \tag{9.15}
\]

The derived conservation of the probability corresponds to the fact that the associated carrier must be somewhere in the space during the evolution. It is worth noting that the traditional derivation of the probability conservation law is based on Eqs. (9.1) and (9.15) along with the condition θ_Ω = θ_T = 1. The following relation holds:
\[
G(\theta_T, t; \mathbf{p}_\alpha, \mathbf{r}_\alpha, 0) = \int d\mathbf{p} \int d\mathbf{r}\, f(\mathbf{p}, \mathbf{r}, t)
\]

Then, since the integration of the distribution function over the whole space gives unity, one obtains (9.15). The above considerations are very useful for the analysis of the general case Ω ⊂ D = T, when the indicators of the simulation domain reject a part of the events contributing to (9.14). The random variable ψ in (9.10) is no longer fixed to 1, but has 0 as an alternative value. In this way, ψ rejects the events completing G(θ_Ω, t; p_α, r_α, 0) to G(θ_T, t; p_α, r_α, 0). It follows that G(θ_Ω, t; p_α, r_α, 0) is the probability for a particle, initially in p_α, r_α, 0, to appear in Ω at t without leaving D during the evolution. In what follows, the condition that the trajectory remains in D is called condition I. This analysis leads to an estimate of the variance of the random variable associated with G [90].

Assert 9.1 The variance of the random variable ψ, with expectation value Eψ defined by Eq. (9.10), is
\[
\sigma_\psi^2 = G(1 - G). \tag{9.16}
\]

In the definition σ_ψ² = E[ψ²] − (E[ψ])² we use E[ψ] = G, along with the fact that ψ takes the values 0 and 1, from which it follows that ψ² = ψ. Equation (9.10) can be generalized to an equation corresponding to a physical quantity A simply by replacing θ_Ω by A. The random variable ψ associated with A takes the values
\[
\psi = A(\mathbf{p}, \mathbf{r}) \quad \text{or} \quad \psi = 0, \tag{9.17}
\]
where p, r is the position reached by the trajectory at time t. Condition I is assumed to hold in the first case; otherwise ψ = 0.
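The dichotomous variable ψ can be sampled directly: a trajectory is built from exponential free flights and scattering events, and ψ = 1 is scored only if condition I holds and the end point lies in Ω. The sketch below is illustrative rather than the book's device algorithm: it assumes a 1D domain, a constant total rate λ, isotropic velocity randomization, and arbitrary parameter values. Since ψ² = ψ, the sample variance reproduces G(1 − G) identically.

```python
import math
import random

def sample_psi(x0, v0, lam, t_end, D=(0.0, 1.0), Omega=(0.4, 0.6)):
    """One realization of psi: 1 if the trajectory stays inside D up to t_end
    (condition I) and ends in Omega, 0 otherwise. In 1D with piecewise
    constant velocity, checking the flight end points suffices, because the
    motion between scattering events is monotone."""
    x, v, t = x0, v0, 0.0
    while True:
        dt = -math.log(1.0 - random.random()) / lam      # free-flight duration
        t_next = min(t + dt, t_end)
        x += v * (t_next - t)                            # ballistic drift
        if not (D[0] <= x <= D[1]):                      # left D: psi = 0
            return 0
        if t_next >= t_end:                              # target time reached
            return 1 if Omega[0] <= x <= Omega[1] else 0
        v = abs(v) * random.choice((-1.0, 1.0))          # isotropic scattering
        t = t_next

random.seed(1)
N = 20000
psi = [sample_psi(0.5, 0.3, lam=5.0, t_end=1.0) for _ in range(N)]
G = sum(psi) / N
var = sum(p * p for p in psi) / N - G * G
print(abs(var - G * (1 - G)) < 1e-12)   # exact identity, since psi**2 == psi
```

The check at the end illustrates Assert 9.1: whatever the geometry and rates, the empirical variance of the 0/1 samples coincides with G(1 − G).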


9.3 Initial and Boundary Conditions

We analyze the contributions from the initial and the boundary conditions to the physical averages. From a physical point of view, carriers originating from either of them contribute equally to the averaging procedure. It is important to address this fact by applying the formal Monte Carlo theory. The averaged value of a quantity A at time t is given by the solution of (9.3) and the free term of (9.6).

\[
\langle A \rangle(t) = \int d\mathbf{p}' \int d\mathbf{r}' \int_0^{\infty} dt'\, f_0(\mathbf{p}', \mathbf{r}', t')\, g(\mathbf{p}', \mathbf{r}', t') \tag{9.18}
\]

Since f_0 comprises both initial and boundary conditions, their contributions to (9.18) can be analyzed separately.

9.3.1 Initial Condition

The contribution of the initial condition

\[
\langle A \rangle_i(t) = \int d\mathbf{p}' \int d\mathbf{r}'\, f_i(\mathbf{P}(0), \mathbf{R}(0)) \int_0^{\infty} dt'\, e^{-\int_0^{t'} \lambda(\mathbf{P}(y), \mathbf{R}(y)) dy}\, g(\mathbf{p}', \mathbf{r}', t') \tag{9.19}
\]

is expressed with the help of the backward trajectory (P(y), R(y)) initialized at p', r', t'. We change to a forward parametrization and use the fact that the phase space volume is conserved along the trajectories. First, the phase space point P(0), R(0) is for convenience denoted by p_i, r_i. This point initializes the forward trajectory (P_i(y), R_i(y)) at time 0. The integration variables are expressed in the new notation as p' = P_i(t') and r' = R_i(t'):

\[
\langle A \rangle_i(t) = \int d\mathbf{p}_i \int d\mathbf{r}_i\, f_i(\mathbf{p}_i, \mathbf{r}_i) \int_0^{\infty} dt'\, e^{-\int_0^{t'} \lambda(\mathbf{P}_i(y), \mathbf{R}_i(y)) dy}\, g(\mathbf{P}_i(t'), \mathbf{R}_i(t'), t') \tag{9.20}
\]

The interpretation of this expression is similar to that of (9.11). The function f_i gives the distribution of the trajectories at time 0. The rest of the expression is identified as G(A, t; p_i, r_i, 0). Indeed, the particular choice f_i(p_i, r_i) = δ(p_i − p_α)δ(r_i − r_α) and A = θ_Ω transforms (9.20) into (9.8). We note that for A = θ_Ω under homogeneous conditions, (9.20) simplifies to (6.9).


9.3.2 Boundary Conditions

The expression for the contribution of the boundary conditions is equivalent to (9.19), but f_i is replaced by f_b. The forward formulation is obtained by the steps used to derive (8.16).

\[
\langle A \rangle_b(t) = \int_0^{t} dt_b \oint_{\partial D} d\sigma(\mathbf{r}_b) \int_{K^+} d\mathbf{p}_b\, v_\perp(\mathbf{p}_b)\, f_b(\mathbf{p}_b, \mathbf{r}_b, t_b) \times \tag{9.21}
\]
\[
\int_{t_b}^{\infty} dt'\, e^{-\int_{t_b}^{t'} \lambda(\mathbf{P}_b(y), \mathbf{R}_b(y)) dy}\, g(\mathbf{P}_b(t'), \mathbf{R}_b(t'), t')
\]

We note that the upper limit of the integral over t_b is set to t due to the delta function in g_0. Although (9.21) looks very different from the initial condition case, it has a similar meaning. Indeed, for a fixed t_b, the term v_⊥ f_b plays the role of f_i and gives the distribution of the trajectories initialized at t_b. The term can be interpreted as a legitimate initial condition, with the only difference that it is generated at time t_b. The rest of the integrand is readily identified as G(A, t; p_b, r_b, t_b). We conclude that the trajectories stemming from both initial and boundary conditions correspond to the same random variable G and thus cannot be segregated according to their origin in the procedure of statistical averaging. To complete the analysis of (9.21), we recall the stationary expressions (8.25) and (8.26), which can be generalized to

\[
j_\perp(\mathbf{r}_b, t_b) = \int_{K^+(\mathbf{r}_b)} d\mathbf{p}_b\, v_\perp(\mathbf{p}_b)\, f_b(\mathbf{p}_b, \mathbf{r}_b, t_b), \qquad
\Gamma_D(t_b) = \oint_{\partial D} j_\perp(\mathbf{r}_b, t_b)\, d\sigma(\mathbf{r}_b),
\]
\[
N_\Gamma(t) = \int_0^{t} dt_b\, \Gamma_D(t_b). \tag{9.22}
\]

These are the normal component of the incoming particle flux j_⊥, the flux Γ_D(t_b) through the boundary at time t_b, and the total number N_Γ(t) of trajectories injected through the boundary until time t. In this way, the t_b integral accounts for the contribution of all trajectories injected at times t_b ≤ t to the value of ⟨A⟩_b(t). Finally, if f_b is time independent, the integral (9.22) becomes a linear function of t.


9.3.3 Carrier Number Fluctuations

The fluctuations of the number of carriers in a given domain Ω provide useful information about the stability of the simulation task. We set A = θ_Ω and look for an estimate of the random variables associated with the contributions from (9.20) and (9.21).

Theorem 9.3 If the initial condition f_i is normalized to unity, the variance of the corresponding random variable is
\[
\sigma_\Omega^2 = f_\Omega(1 - f_\Omega). \tag{9.23}
\]
If f_i corresponds to the physical task with l = 1, …, N carriers, the variance is
\[
\sigma_\Omega^2 = \sum_l \sigma_{\psi_l}^2. \tag{9.24}
\]
If l(t_b) = 1, …, N(t_b) carriers are injected at a given instant t_b from the boundary, the variance is
\[
\sigma_\Omega^2(t) = \int_0^{t} dt_b \sum_{l(t_b)} \sigma_{\psi_{l(t_b)}}^2(t). \tag{9.25}
\]

In the following we present the proof, based on Assert 9.1. If f_i is normalized to unity, it can be interpreted as a probability distribution, reflecting, for example, the lack of information about the exact initial coordinates of the single carrier. In this case, f_i is attached to the rest of the probabilities in (9.10). Equation (9.24) follows from the additivity of the variance of a sum of independent random variables. The independence of the evolution of the considered carriers does not only mean that there is no carrier-carrier interaction, but also that the initial conditions are not correlated. Thus, the numerical problem comprises N Boltzmann equations associated with N initial conditions, with no additional constraint in terms of equations relating the particular distribution functions. Equation (9.25) is similar to (9.24). Indeed, for any fixed time a boundary condition can be interpreted as an initial one, so that (9.25) accounts for the flux through the boundary. Notably, there is no constant increase of the variance, as the dwelling time of the carriers in constrained structures is finite, and thus we expect that σ²_{ψ_{l(t_b)}}(t) → 0 as t − t_b → ∞.

9.4 Stochastic Device Modeling: Features

The contemporary stochastic, often called particle, approach to device modeling solves the self-consistent mixed problem, suitable for both stationary and time-dependent physical conditions. The link with the formal numerical theory is


established by the expression (9.10). A typical representative of the terms in this series is given by (9.13). This comparison identifies the natural probability for the transition, P^B, as a product of the conditional probabilities w × p_S, originating from the kernel of the Boltzmann equation. P^B generalizes the free-flight and scattering process which is the basic step in homogeneous (bulk) simulations. The time t_r of the next scattering is determined by the equation

\[
\int_{t_0}^{t_r} w(t; \mathbf{p}, \mathbf{r}, t_0)\, dt = r, \tag{9.26}
\]
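For a constant total rate λ the integral in (9.26) can be inverted in closed form, 1 − e^{−λ(t_r − t_0)} = r, giving t_r = t_0 − ln(1 − r)/λ; for a trajectory-dependent λ, the self-scattering technique of Sect. 5.1.3 reduces the problem to this case. A minimal sketch under the constant-rate assumption (illustrative values):

```python
import math
import random

def free_flight_time(t0, lam, r=None):
    """Invert (9.26) for a constant out-scattering rate lam:
    1 - exp(-lam*(t_r - t0)) = r  =>  t_r = t0 - ln(1 - r)/lam."""
    if r is None:
        r = random.random()
    return t0 - math.log(1.0 - r) / lam

# choosing r so that the survival integral equals r gives t_r - t0 = 1
t_r = free_flight_time(0.0, lam=2.0, r=1.0 - math.exp(-2.0))
print(t_r)
```

The sampled durations are exponentially distributed with mean 1/λ, which is the expected free-flight time of the natural process.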

where r is a uniformly distributed random number in the interval [0, 1]. It should be noted that the most important property of the random number generator is its speed, because of the diversity of stochastic processes involved in the evolution, as described below. The next step is to obtain the after-scattering state according to p_S. As a rule, S comprises different scattering mechanisms S_j, S = Σ_j S_j:

\[
p_S = \sum_j \frac{\lambda_j}{\lambda} \frac{S_j}{\lambda_j} = \sum_j \frac{\lambda_j}{\lambda}\, p_{S,j}; \qquad
\sum_{j=1}^{j_r - 1} \frac{\lambda_j}{\lambda} < r \le \sum_{j=1}^{j_r} \frac{\lambda_j}{\lambda},
\]
where the second relation selects the scattering mechanism j_r with probability λ_{j_r}/λ for a uniformly distributed random number r.
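The cumulative-rate selection can be sketched with a table lookup; the partial rates below are hypothetical placeholders:

```python
import random
from bisect import bisect_left
from itertools import accumulate

def select_mechanism(partial_rates, r=None):
    """Return the index j_r at which the cumulative sum of lambda_j first
    reaches r*lambda, i.e. mechanism j is chosen with probability
    lambda_j / lambda."""
    if r is None:
        r = random.random()
    cum = list(accumulate(partial_rates))   # lambda_1, lambda_1+lambda_2, ...
    return bisect_left(cum, r * cum[-1])    # first index with cum >= r*lambda

rates = [1.0, 3.0, 6.0]                     # illustrative rates, lambda = 10
print(select_mechanism(rates, r=0.05),      # 0.5 <= 1.0      -> mechanism 0
      select_mechanism(rates, r=0.35),      # 3.5 in (1, 4]   -> mechanism 1
      select_mechanism(rates, r=0.95))      # 9.5 in (4, 10]  -> mechanism 2
```

In production codes the cumulative table is precomputed per energy grid point, so that the lookup cost is independent of the number of mechanisms.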


The value of w₂ is determined from w₁ and ε₁ by the normalization condition for f^bias:
\[
w_2 = \frac{w_1\, \bar{f}_e(\epsilon_1, T)}{w_1 - 1 + \bar{f}_e(\epsilon_1, T)} \tag{10.10}
\]
The natural distribution (10.8) is recovered by the choice w₁ = 1. In the case w₁ > 1, the number of particles with energies below ε₁ effectively decreases. Accordingly, the number of light-weighted particles characterized by energies above ε₁ increases. Indeed, from (10.10) it follows that w₂ < 1 if w₁ > 1. For both choices of f^bias, particles with heavy weights can perturb the statistics accumulated by the light-weighted ones in the rarely visited region of interest. In this case it is efficient to apply the trajectory splitting algorithm (Sect. 5.3).

10.2 Biasing of the Natural Evolution

The natural evolution process is modified by replacing the Boltzmann probabilities in P^B, (9.10), by other probabilities, P^bias, which account for all delta functions corresponding to the natural conservation laws. Any modification at a given time t gives a factor w(t) = P^B/P^bias(t), which multiplies the weight w_e accumulated during the evolution, giving rise to a process of evolution of the weights:
\[
w_e = \prod_i w(t_i) \tag{10.11}
\]


10.2.1 Free Flight

The probability for the free-flight duration can be biased to w^bias by a change of the total out-scattering rate λ to λ_b. This gives rise to the weight
\[
w(t) = \frac{p(t; \mathbf{p}, \mathbf{r}, t_0)}{w^{bias}(t; \mathbf{p}, \mathbf{r}, t_0)}. \tag{10.12}
\]
In the particular case of λb = 0 numerical particles move along the trajectory without scattering in the time interval (t0 , t), accumulating the weight w(t) = p(t; p, r, t0 ). In this way, particles are ‘encouraged’ to enter a desired region of the phase space.

10.2.2 Phonon Scattering

Phonon scattering can be modified by a change of either of the two steps performed for the selection of the after-scattering state. An 'artificial heating' of the particle system is obtained if the probability for absorption is increased at the expense of the emission counterpart. Depending on the chosen mechanism, the process can be controlled by the parameter m ≥ 1:
\[
\lambda_a^{bias} = \lambda_a + \lambda_e \left(1 - \frac{1}{m}\right), \qquad \lambda_e^{bias} = \frac{\lambda_e}{m} \tag{10.13}
\]
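The branching step can be sketched as follows (illustrative rates): absorption or emission is chosen from the biased rates (10.13) and the corresponding weight λ_a/λ_a^bias or λ_e/λ_e^bias = m is returned, while the total rate, and hence the free flight, stays unchanged.

```python
import random

def scatter_branch(lam_a, lam_e, m, r=None):
    """Choose absorption/emission from the biased rates (10.13); return the
    branch and the weight restoring the natural statistics."""
    lam_a_b = lam_a + lam_e * (1.0 - 1.0 / m)   # biased absorption rate
    lam_e_b = lam_e / m                          # biased emission rate
    total = lam_a_b + lam_e_b                    # equals lam_a + lam_e
    if r is None:
        r = random.random()
    if r * total < lam_a_b:
        return "absorption", lam_a / lam_a_b
    return "emission", lam_e / lam_e_b           # weight equals m

print(scatter_branch(1.0, 3.0, m=2.0, r=0.99))   # ('emission', 2.0)
```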

The weight obtained in the case of absorption is w = λ_a/λ_a^{bias}. In the alternative case of emission, the weight is w = λ_e/λ_e^{bias} = m. An interesting peculiarity is that this type of biasing leaves the free-flight duration unaffected, as the sum of the emission and absorption probabilities remains the same as in the natural process. Alternatively, particles can be directed towards a desired region by a change of the after-scattering angles. As an example, we consider an isotropic process with an angle θ between the after-scattering momentum and a previously defined direction. Because the distribution of the random variable χ = cos θ is constant, the natural density is p(χ) = 1/2 for χ ∈ (−1, 1). The following distribution biases the probability towards forward-oriented scattering:
\[
p^{bias}(\chi) =
\begin{cases}
\dfrac{1}{2m} & -1 \le \chi < \chi_0 \\[2mm]
\dfrac{m}{2} & \chi_0 \le \chi \le 1
\end{cases} \tag{10.14}
\]

m ≥ 1 is a given parameter, while χ_0 is defined from the normalization condition:
\[
\chi_0 = \frac{m-1}{m+1}, \qquad P^{bias}(\chi_0) = \frac{\chi_0 + 1}{2m} = \frac{1}{1+m} \tag{10.15}
\]


P^bias is the cumulative probability. If r is a uniformly distributed random number in the interval [0, 1] and r < P^bias(χ_0), the value of the random variable and the weight are given by
\[
\chi_r = 2mr - 1, \qquad w = \frac{p}{p^{bias}} = m. \tag{10.16}
\]

In the alternative case,
\[
\chi_r = 1 - \frac{2(1 - r)}{m}, \qquad w = \frac{p}{p^{bias}} = \frac{1}{m}. \tag{10.17}
\]
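The two branches (10.16) and (10.17) combine into a small sampling routine; the weighted average over many samples recovers natural (isotropic) expectations, checked here for the mean of χ, which vanishes. The value of m is illustrative.

```python
import random

def sample_chi(m, r=None):
    """Sample chi = cos(theta) from the biased density (10.14); return
    (chi, weight). The forward branch (chi >= chi0) carries weight 1/m."""
    if r is None:
        r = random.random()
    P0 = 1.0 / (1.0 + m)                         # cumulative value at chi0
    if r < P0:
        return 2.0 * m * r - 1.0, m              # backward branch, (10.16)
    return 1.0 - 2.0 * (1.0 - r) / m, 1.0 / m    # forward branch, (10.17)

random.seed(3)
pairs = [sample_chi(4.0) for _ in range(100000)]
mean = sum(c * w for c, w in pairs) / len(pairs)
print(abs(mean) < 0.02)   # the natural isotropic mean of chi is zero
```

Most sampled angles point forward, yet the attached weights keep the estimator unbiased, which is exactly the purpose of the scheme.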

The particle weight is modified by the factor w each time χ is generated from (10.14). The discussed properties of the weights are summarized as follows:

Assert 10.1 Any particle contributes to the estimate of ⟨A⟩ by the product w_n A_n, where A_n is the value of the physical quantity A at the phase space point reached by the particle, and w_n is the product of all weights accumulated due to the modification of the natural process. If only the initial and boundary conditions are modified, the origin of the particle can, in certain cases, be identified by the weights.

10.3 Self-Consistent Event Biasing

The Coulomb interaction between the semiconductor carriers is accounted for by the Poisson equation:
\[
\nabla(\varepsilon \nabla V) = q(C_I + C), \qquad C(\mathbf{r}, t) = \int d\mathbf{p}\, f(\mathbf{p}, \mathbf{r}, t), \qquad \mathbf{E}(\mathbf{r}) = -\nabla V(\mathbf{r}) \tag{10.18}
\]

V is the electrostatic potential and C_I is the concentration of the ionized impurities. The equation links V with the distribution function f. It can be regarded as a constraint which renders the Boltzmann equation nonlinear via the electric force F(f)(r, t), which now depends on f. Since all results in the previous section essentially exploit the linearity of the transport equation, the steps deriving the event biasing algorithms cannot be applied directly. The solution is sought in the iterative procedure of the self-consistent coupling between the two equations. The Poisson equation is discretized, so that the simulation domain is partitioned into cells Ψ_l. The particle ensemble is moved with time steps Δt ≃ 0.1 fs. The distribution of the charge qC(r_l, t) is computed at the end of every step, say at time t. The link between C_l and f is established according to (9.31) with the help of a mesh Φ_m in momentum space. Then
\[
\Omega_{m,l} = \Phi_m \times \Psi_l, \qquad \mathbf{p}_m, \mathbf{r}_l \in \Omega_{m,l} \tag{10.19}
\]


and
\[
f(\mathbf{p}_m, \mathbf{r}_l, t) \simeq \sum_n^{N(t)} \frac{\theta_{\Omega_{m,l}}(n)}{V_{\Phi_m} V_{\Psi_l}}, \qquad
C(\mathbf{r}_l, t) \simeq \sum_m f(\mathbf{p}_m, \mathbf{r}_l, t)\, V_{\Phi_m}, \qquad
N(t) \simeq \sum_l C(\mathbf{r}_l, t)\, V_{\Psi_l}. \tag{10.20}
\]
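The assignment (10.20) amounts to a weighted histogram of the particle positions over the mesh cells; a 1D sketch with illustrative values:

```python
def charge_density(positions, weights, L, n_cells):
    """(10.20) in 1D: C_l is the sum of the particle weights in cell Psi_l
    divided by the cell volume."""
    dV = L / n_cells
    C = [0.0] * n_cells
    for x, w in zip(positions, weights):
        l = min(int(x / dV), n_cells - 1)   # cell index, clamp the right edge
        C[l] += w / dV
    return C

# three unit-weight particles in a two-cell domain of length 1
print(charge_density([0.1, 0.2, 0.7], [1.0, 1.0, 1.0], L=1.0, n_cells=2))
# [4.0, 2.0]
```

Summing C_l over the cells, multiplied by the cell volumes, returns the total particle number, in agreement with the last relation of (10.20).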

Here, the argument of the θ function is a short notation for the phase space coordinates of the nth particle. The above relations become equalities in the limit N → ∞, V_{Φ_m}, V_{Ψ_l} → 0. The charge density C_l is used in the Poisson equation to update the electric force F(r, t) in the framework of the following scheme:

\[
f_0 \;\xrightarrow[\Delta t]{\mathrm{EMC}}\; f_{\Delta t} \;\xrightarrow{\mathrm{PE}}\; f_{\Delta t} \;\xrightarrow[2\Delta t]{\mathrm{EMC}}\; f_{2\Delta t} \;\xrightarrow{\mathrm{PE}}\; f_{2\Delta t} \;\cdots\; \xrightarrow[t]{\mathrm{EMC}}\; f_t
\]
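The alternating scheme can be sketched as a loop in which the ensemble is advanced over Δt in a frozen field (EMC) and the field is then refreshed from the new charge (PE). Here advance and solve_poisson are hypothetical placeholder callables standing in for the transport and field solvers:

```python
def self_consistent_loop(particles, field, n_steps, dt, advance, solve_poisson):
    """Boltzmann-Poisson coupling: an EMC step with a frozen field, followed
    by a PE step updating the field from the new particle distribution."""
    for _ in range(n_steps):
        particles = advance(particles, field, dt)   # EMC over (t, t + dt)
        field = solve_poisson(particles)            # PE: refresh the force
    return particles, field

# toy check with trivial stand-in solvers
p, f = self_consistent_loop(
    [0.0], 0.0, n_steps=3, dt=0.1,
    advance=lambda ps, F, dt: [x + F * dt for x in ps],   # drift in the field
    solve_poisson=lambda ps: 1.0)                         # constant unit field
print(p, f)
```

The essential point mirrored from the scheme is that the field seen by the particles during a step is the one computed at the end of the previous step.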

EMC and PE correspond to the steps of performing ensemble Monte Carlo and Poisson equation simulations, respectively. The electric force, in its turn, governs the trajectories during the next time interval t, t + Δt. The generalization towards event biasing is based on two properties of the above scheme: (1) The electric field remains constant in time during each of the consecutive steps of the evolution. (2) The Markovian character of the Boltzmann evolution, discussed below. Furthermore, in accordance with the causality concept, a change of the field at a given time does not affect the distribution at previous times.

Assert 10.2 The Boltzmann transport has a Markovian character.

This important property can be proved with the help of the integral form (9.2) of the equation. The evolution interval [0, t] is split into two parts, [0, τ] and [τ, t], and the equation is rewritten in a lengthy but transparent form:
\[
f(\mathbf{p}, \mathbf{r}, t) = \int_\tau^{t} dt'\, \theta_D(\mathbf{R}(t')) \int d\mathbf{p}'\, f(\mathbf{p}', \mathbf{R}(t'), t')\, S(\mathbf{p}', \mathbf{P}(t'), \mathbf{R}(t'))\, e^{-\int_{t'}^{t} \lambda(y) dy}
+ e^{-\int_{t_b}^{t} \lambda(y) dy}\, f_b(\mathbf{P}(t_b), \mathbf{R}(t_b), t_b)
\]
\[
+\, e^{-\int_\tau^{t} \lambda(y) dy} \left[ \int_0^{\tau} dt'\, \theta_D(\mathbf{R}(t')) \int d\mathbf{p}'\, f(\mathbf{p}', \mathbf{R}(t'), t')\, S(\mathbf{p}', \mathbf{P}(t'), \mathbf{R}(t'))\, e^{-\int_{t'}^{\tau} \lambda(y) dy} \right.
\]
\[
\left. +\, e^{-\int_0^{\tau} \lambda(y) dy}\, f_i(\mathbf{P}(0), \mathbf{R}(0)) + e^{-\int_{t_b}^{\tau} \lambda(y) dy}\, f_b(\mathbf{P}(t_b), \mathbf{R}(t_b), t_b) \right],
\]

where λ(y) = λ(P(y), R(y)). Furthermore, the boundary condition is represented by two parts, depending on the boundary time: t_b ≤ τ for the term inside the brackets and t_b > τ for the term outside. In the latter case t_b, determined by p, r, t, can be considered as determined by P(τ), R(τ), τ. This allows us to identify the term in the square brackets as f(P(τ), R(τ), τ). It follows that the solution f at time τ is the initial condition for the future evolution t > τ. We consider an event biasing process, where the numerical particles at time τ = t − Δt have weights w_n. The distribution function of the carriers, f, and the distribution of the numerical particles, f^bias, are evaluated with the help of (10.1) and (10.20):
\[
f(\mathbf{p}_m, \mathbf{r}_l, \tau) \simeq \sum_n^{N^{bias}(\tau)} \frac{w_n\, \theta_{\Omega_{m,l}}(n)}{V_{\Phi_m} V_{\Psi_l}}, \qquad
f^{bias}(\mathbf{p}_m, \mathbf{r}_l, \tau) \simeq \sum_n^{N^{bias}(\tau)} \frac{\theta_{\Omega_{m,l}}(n)}{V_{\Phi_m} V_{\Psi_l}} \tag{10.21}
\]
As before, the argument of the θ function is a short notation for the phase space coordinates of the nth particle. The carrier density at time τ is given by f. Accordingly, the updated potential and force are obtained from the Poisson equation. The evolution of the numerical particles can proceed in two ways after this moment. They can be associated with the same weights attained at time τ, or, alternatively, a novel ensemble of numerical particles (with novel weights and positions) can be initialized.

Theorem 10.1 An ensemble of numerical particles can be initialized at any step of the self-consistent evolution by using the rules for the choice of a modified initial condition. A particular choice could be the previous set of numerical particles with their positions and weights.

In the following we present the proof. In the limit N^bias → ∞, V_{Φ_m}, V_{Ψ_l} → 0, the quantities f and f^bias attain well-determined values at any point of the phase space. This allows us to introduce the quantity P^bias_{τ,i} as a counterpart of the physical density P_{τ,i}:
\[
P_{\tau,i}^{bias} = \frac{f^{bias}}{N^{bias}(\tau)}, \qquad P_{\tau,i} = \frac{f}{N(\tau)}, \qquad
w(\mathbf{p}, \mathbf{r}) = \frac{P_{\tau,i}}{P_{\tau,i}^{bias}} = \frac{f}{f^{bias}}\, \frac{N^{bias}(\tau)}{N(\tau)} \tag{10.22}
\]


Here, the weight is evaluated according to (10.2), whereas N(τ) and N^bias(τ) give the numbers of carriers and numerical particles in the simulation domain. Now, since P_{τ,i} is known, we can apply all arguments of Sect. 10.1.1 to define a novel modified initial distribution, independent of f^bias. In particular, we can choose the number of numerical particles which continue the evolution to be equal to their number at time τ, namely N^bias(τ). This leads to a factor N(τ)/N^bias(τ) in (10.22), with a corresponding estimate for the weights:
\[
w(\mathbf{p}_m, \mathbf{r}_l) = \frac{f(\mathbf{p}_m, \mathbf{r}_l, \tau)}{f^{bias}(\mathbf{p}_m, \mathbf{r}_l, \tau)}
= \frac{\sum_n^{N^{bias}(\tau)} w_n\, \theta_{\Omega_{m,l}}(n)}{\sum_n^{N^{bias}(\tau)} \theta_{\Omega_{m,l}}(n)}
= \frac{\sum_n w_n\, \theta_{\Omega_{m,l}}(n)}{N_{m,l}^{bias}(\tau)} \tag{10.23}
\]

Then, N^bias_{m,l}(τ) numerical particles must be selected from p_m, r_l with weights w(p_m, r_l). However, as suggested by (10.23), the latter is the mean value of the particle weight, where the averaging is with respect to the number N^bias_{m,l}(τ) of modified particles in the domain around (p_m, r_l). Thus, the old weights w_n can be interpreted as independent realizations of the random variable w. It follows that the numerical particles, which attain weights w_n at time τ, represent a legitimate choice of the modified initial condition for the next step of the evolution. The following scheme summarizes these results for the Weighted Ensemble Monte Carlo algorithm.

\[
f_0^{bias} \;\xrightarrow[\Delta t]{\mathrm{WEMC}}\; f_{\Delta t}^{bias} \rightarrow f_{\Delta t} \;\xrightarrow{\mathrm{PE}}\; f_{\Delta t}^{bias} \;\xrightarrow[2\Delta t]{\mathrm{WEMC}}\; f_{2\Delta t}^{bias} \rightarrow f_{2\Delta t} \;\xrightarrow{\mathrm{PE}}\; \cdots
\]
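At each regeneration step, (10.23) assigns to the particles of a cell the mean of the old weights found there; a 1D sketch with illustrative numbers:

```python
def cell_mean_weights(particles, L, n_cells):
    """(10.23): mean particle weight per mesh cell; empty cells get 0."""
    dx = L / n_cells
    sums = [0.0] * n_cells
    counts = [0] * n_cells
    for x, w in particles:
        l = min(int(x / dx), n_cells - 1)
        sums[l] += w
        counts[l] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

print(cell_mean_weights([(0.1, 2.0), (0.2, 4.0), (0.7, 1.0)], L=1.0, n_cells=2))
# [3.0, 1.0]
```

Particles regenerated in a cell inherit this mean weight, so the weighted cell charge, and hence the input to the Poisson step, is unchanged by the regeneration.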

Part III

Stochastic Algorithms for Quantum Transport

Chapter 11

Wigner Function Modeling

The first applications of the Wigner formalism to device simulations [39, 92, 93] consider the stationary problem of ballistic (coherent) transport, posed by boundary conditions. One-dimensional structures, such as resonant-tunneling diodes, were the main focus of the developed numerical approaches. Deterministic methods, such as discretization schemes for the position and momentum variables of the phase space where the Wigner function is defined, were developed. Certain problems related to the correct formulation of the boundary conditions were solved to ensure proper numerical convergence and to allow for a self-consistent coupling with the Poisson equation [94]. Soon it was realized that dissipative processes of decoherence are an important part of modeling quantum dynamics. Nonphysical solutions can develop in the coherent limit if the interplay between coherent and decoherent processes is neglected [36]. The inclusion of the latter also ensures numerical stability, so that the relaxation time approximation could be used in the simulation approaches [94–97]. The Boltzmann scattering operator was initially included in the Wigner equation [98] as a deductive assumption, which can be 'adequate at certain level' [36]. The co-existence of the Wigner and the Boltzmann operators is not trivial and needs a theoretical investigation, which has to address the following questions: (1) Analysis of the scales of the physical quantities involved in the decoherence process and their particular interrelation, leading to classical types of scattering. (2) A theoretical derivation of the equation based on the first principles of quantum mechanics.

1. Typical physical scales characterizing quantum electron decoherence are embodied in the early time evolution of optically generated carriers interacting with different kinds of lattice vibrations, in particular with polar optical phonons.
A variety of quantum effects can be observed on these scales, such as non-Markovian evolution, giving rise to the Retardation Effect (RE); Collision Broadening (CB), which is a result of the lack of energy conservation in the early times of the evolution [28, 99–101]; and the Intra-Collisional Field Effect (ICFE),



11 Wigner Function Modeling

which is the effect of the field on the process of interaction [38, 102–105]. Such effects are accounted for by models which go beyond the Boltzmann equation, such as the system of semiconductor Bloch equations [106], the Levinson [107] and Barker-Ferry [108] equations, and other models [109, 110]. It is worth noting the role of modeling in the research in this field. Simulations had been used to investigate this rich field of quantum phenomena for more than 10 years before the first experiments confirmed the CB and RE effects [100]. This particularly shows how difficult it is to arrange experiments with a precision approaching the uncertainty relation limit. Both theory and experiment demonstrate the violation of energy conservation during the initial phase of the electron-phonon interaction, when the system can exhibit classically forbidden transitions [28, 38]. The energy-conserving delta function needs an accumulation of evolution time, which can be of the order of a few hundred femtoseconds for typical semiconductors such as GaAs. The establishment of energy conservation has been theoretically explained in [111] with the help of a small parameter analysis. A necessary condition is that the product of the time and phonon energy scales becomes larger than the Planck constant. This means that for larger evolution times the quantum effects become less pronounced. Accordingly, the ICFE cannot be observed under stationary transport conditions, independently of the magnitude of the field [112]. The classical Boltzmann scattering model is thus valid on a much larger evolution scale. Carrier transport in today's electronic structures is characterized by the nanometer scale and evolution times of the order of a few picoseconds. Under such conditions classical scattering coexists with quantum coherent dynamics, and thus the Wigner-Boltzmann description becomes relevant.

2.
The derivation of the Wigner-Boltzmann equation has been carried out for the case of phonons [38] and for the case of ionized impurities [3, 113]. The approach used in [38] allows for a systematic analysis of the assumptions and approximations giving rise to the hierarchy of models shown in Fig. 11.1. At the top is the generalized Wigner function for the whole electron-phonon system [40]. The hierarchy includes the Levinson and the Wigner-Boltzmann equations. A novel equation is obtained, where the electron-potential interaction is fully considered as in the standard Wigner theory, while the electron-phonon coupling is described on the quantum level as in the Levinson equation [114]. Such an approach has also been used to generalize the equations of Levinson and Barker-Ferry, originally introduced for homogeneous conditions, to the evolution of an initial packet in a quantum wire [43]. Here we first generalize the homogeneous Levinson and Barker-Ferry equations to the case of a quantum wire. The derivation follows the first principles of quantum mechanics, allowing for a detailed analysis of the assumptions and approximations. This gives rise to a heuristic picture providing a deep understanding of the CB, RE, and ICFE processes. Numerical results are shown for the case of strong field and ultra-fast evolution [38, 43, 112, 115–117], and of optically induced carriers [26, 28]. Then the general hierarchy of models, beginning with the general electron-phonon Wigner function and ending with


[Fig. 11.1 depicts the hierarchy as a flowchart: the generalized Wigner equation (one electron, many phonons) is reduced, via the weak scattering limit and the trace over an equilibrium phonon system, to a reduced Wigner equation coupled to two auxiliary equations. Assuming a constant field in the auxiliary equations leads to the Levinson equation (CB, ICFE, memory effect) and to a Levinson-like equation nonlocal in space; the classical limit in the auxiliary equations yields the Wigner equation with a Boltzmann scattering term, and the classical limit in the potential term finally yields the Boltzmann equation.]

Fig. 11.1 Hierarchy of kinetic models

the classical Boltzmann equation, is presented. These models stand in a synergistic linkage with the developed numerical approaches. The same procedure used to derive classical algorithms is followed also now: the equation is expressed in an integral form, which is then analyzed from a Monte Carlo point of view. The experience with the classical cases can be straightforwardly carried over to the quantum counterpart. More interesting is the inverse relation: quantum particle concepts and notions are derived, which allow one to heuristically understand the quantum behavior hidden in the formal mathematical structure of the model. Remarkably, these notions can be traced further back to the set of assumptions used to derive the models.
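The common thread of the procedure, recasting a kinetic equation in integral form and sampling its Neumann (iteration) series by random walks, can be illustrated on a toy one-dimensional integral equation f(x) = g(x) + λ∫K(x,y)f(y)dy. This is only a minimal sketch of the idea; the free term, kernel, and coupling constant below are arbitrary illustrative choices, not quantities from the transport models of this chapter.

```python
import random

# Toy integral equation f(x) = g(x) + lam * \int_0^1 K(x, y) f(y) dy,
# solved by sampling its Neumann series with an absorbing random walk.
# g, K and lam are illustrative choices only.
lam = 0.5
def g(x): return 1.0
def K(x, y): return 1.0          # constant kernel on [0, 1]

def estimate_f(x, n_walks=200000, p_absorb=0.5):
    total = 0.0
    for _ in range(n_walks):
        weight, score = 1.0, 0.0
        while True:
            score += weight * g(x)          # free term scored at the current point
            if random.random() < p_absorb:  # terminate the walk
                break
            y = random.random()             # next point, uniform on [0, 1]
            # importance weight: kernel / (transition density * survival probability)
            weight *= lam * K(x, y) / ((1.0 - p_absorb) * 1.0)
            x = y
        total += score
    return total / n_walks

# For these choices the exact solution is the constant f(x) = 1/(1 - lam) = 2.
print(estimate_f(0.3))
```

Each surviving transition of the walk samples one more term of the iteration series, which is exactly the mechanism reused for the integral forms of the Wigner-type equations derived below.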

Chapter 12

Evolution in a Quantum Wire

12.1 Formulation of the Problem

The physical system under consideration comprises carriers and phonons, which interact in a quantum wire under the action of an electric field applied along the wire. It is assumed that the wire is embedded in a semiconductor crystal, so that the phonons have bulk properties. Furthermore, the carriers are noninteracting, so that a single-carrier description is used. The system Hamiltonian

H = H_0 + V + H_p + H_{e-p} = -\frac{\hbar^2\nabla^2_{\mathbf{r}}}{2m} + V(\mathbf{r}) + \sum_{\mathbf{q}} a^{\dagger}_{\mathbf{q}} a_{\mathbf{q}}\,\hbar\omega_{\mathbf{q}} + i\hbar\sum_{\mathbf{q}} F(\mathbf{q})\left(a_{\mathbf{q}} e^{i\mathbf{q}\hat{\mathbf{r}}} - a^{\dagger}_{\mathbf{q}} e^{-i\mathbf{q}\hat{\mathbf{r}}}\right)   (12.1)

is given by the free electron and phonon components H_0 and H_p, the potential in the wire V(\mathbf{r}), and the electron-phonon interaction H_{e-p}. The operators a^{\dagger}_{\mathbf{q}} and a_{\mathbf{q}} create or annihilate a phonon with momentum \mathbf{q}, \hbar\omega_{\mathbf{q}} is the phonon energy, and F(\mathbf{q}) is a function depending on the type of the considered phonons. The phonon state is described by the set of numbers \{n_q\} = \{n_{q_1}, n_{q_2}, \ldots\}, where n_q is the number of phonons having wave vector \mathbf{q}. For convenience we consider only a single type of phonons, so that the wave vector coordinate can also be identified as a phonon mode. The basis vectors of the system are products of the basis states of the electron and phonon sub-systems, |\{n_q\}, \mathbf{r}\rangle = |\{n_q\}\rangle|\mathbf{r}\rangle. The electric field E(t) is homogeneous along the direction z of the wire. Finally, the carrier is assumed to be in the ground state \Psi in the normal plane. These assumptions allow us to write

H_0 + V(\mathbf{r}) = H_\perp + H_z = H_{0\perp} + V_\perp + H_{0z} + V(z),

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_12


where

H_\perp\Psi = E_\perp\Psi, \qquad V(z) = -eEz, \qquad |\mathbf{r}\rangle = |\mathbf{r}_\perp\rangle|z\rangle.

The Wigner function of the general electron-phonon system is obtained from the density operator \hat\rho_t:

f_w(z, p_z, \{n_q\}, \{n'_q\}, t) = \frac{1}{2\pi\hbar}\int dz' \int d\mathbf{r}_\perp\, e^{-ip_z z'/\hbar} \left\langle z + \frac{z'}{2}, \{n_q\}\right| \langle\mathbf{r}_\perp| \hat\rho_t |\mathbf{r}_\perp\rangle \left|\{n'_q\}, z - \frac{z'}{2}\right\rangle.   (12.2)

The ground state in the normal plane allows us to present the carrier state as a product of longitudinal (along z) and transverse components, \hat\rho_t = |\Psi\rangle\langle\Psi|\hat\rho^z_t:

\langle\mathbf{r}, \{n_q\}|\hat\rho_t|\{n'_q\}, \mathbf{r}'\rangle = \Psi^*(\mathbf{r}_\perp)\Psi(\mathbf{r}'_\perp)\,\rho^z(z, z', \{n_q\}, \{n'_q\}, t).

The normalization of \Psi leads to the definition of the General Wigner Function (GWF):

f_w(z, p_z, \{n_q\}, \{n'_q\}, t) = \frac{1}{2\pi\hbar}\int dz'\, e^{-ip_z z'/\hbar}\, \rho^z\!\left(z + \frac{z'}{2},\, z - \frac{z'}{2}, \{n_q\}, \{n'_q\}, t\right).   (12.3)

Next, we derive the corresponding Generalized Wigner Equation (GWE).
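Before proceeding, it is instructive to see the longitudinal transform in (12.3) at work: it is a plain Fourier transform of the off-diagonal density matrix in the relative coordinate. The following numerical sketch (a pure Gaussian state on an arbitrary grid, with ħ = 1; all parameters are illustrative choices) confirms two defining properties, the real-valuedness of the result and the recovery of |Ψ(z)|² as the momentum marginal.

```python
import numpy as np

hbar = 1.0
z = np.linspace(-10, 10, 257)              # position grid (arbitrary units)
psi = np.pi**-0.25 * np.exp(-z**2 / 2.0)   # normalized Gaussian packet

def wigner(psi, z, p):
    """f_w(z0,p) = (1/2 pi hbar) \int dz' e^{-i p z'/hbar} psi(z0+z'/2) psi*(z0-z'/2)."""
    dz = z[1] - z[0]
    fw = np.zeros((len(z), len(p)))
    zp = z.copy()                          # symmetric z' grid
    for i, z0 in enumerate(z):
        a = np.interp(z0 + zp / 2, z, psi, left=0.0, right=0.0)
        b = np.interp(z0 - zp / 2, z, psi, left=0.0, right=0.0)
        rho = a * b                        # off-diagonal density matrix (psi is real)
        for j, pj in enumerate(p):
            val = np.sum(np.exp(-1j * pj * zp / hbar) * rho) * dz / (2 * np.pi * hbar)
            fw[i, j] = val.real            # imaginary part vanishes for Hermitian rho
    return fw

p = np.linspace(-4, 4, 65)
fw = wigner(psi, z, p)
# for this state the exact value at the origin is fw(0,0) = 1/pi,
# and the p-marginal recovers |psi(z)|^2
marg = fw.sum(axis=1) * (p[1] - p[0])
```

The same transform, applied per pair of phonon index sets, produces the GWF elements used throughout this chapter.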

12.2 Generalized Wigner Equation

The Von Neumann equation for the evolution of the density matrix can be transformed according to (12.3):

\frac{\partial f_w(z, p_z, \{n_q\}, \{n'_q\}, t)}{\partial t} = \int \frac{dz'}{i2\pi\hbar^2} \int d\mathbf{r}_\perp\, e^{-ip_z z'/\hbar} \left\langle z + \frac{z'}{2}, \{n_q\}\right| \langle\mathbf{r}_\perp| \left[H, \hat\rho_t\right]_- |\mathbf{r}_\perp\rangle \left|\{n'_q\}, z - \frac{z'}{2}\right\rangle.

We denote the right-hand side of the equation by WT(H) and begin to evaluate the contribution from each of the components of the Hamiltonian (12.1) [40, 118, 119].


First, we see that WT(H_\perp) gives zero for the ground state in the normal plane. WT(H_{0z}) and WT(-eEz) are evaluated with the help of integration by parts:

WT(H_{0z}) = -\frac{p_z}{m}\frac{\partial f_w(z, p_z, \{n_q\}, \{n'_q\}, t)}{\partial z}, \qquad WT(-eEz) = -eE\,\frac{\partial f_w(z, p_z, \{n_q\}, \{n'_q\}, t)}{\partial p_z}.

The contribution from the energy of the noninteracting phonons is

WT(H_p) = \frac{1}{i\hbar}\left(\epsilon(\{n_q\}) - \epsilon(\{n'_q\})\right) f_w(z, p_z, \{n_q\}, \{n'_q\}, t),

where \epsilon(\{n_q\}) = \sum_q n_q\hbar\omega_q. The expression for WT(H_{e-p}) comprises four terms, evaluated with the help of the spectral decomposition of the unity:

1 = \int dz''\, |z''\rangle\langle z''| \int d\mathbf{r}'_\perp\, |\mathbf{r}'_\perp\rangle\langle\mathbf{r}'_\perp|.

The first contribution, I, to WT(H_{e-p}) is evaluated as

I = \int d\mathbf{r}'_\perp \int d\mathbf{r}_\perp \int \frac{dz'}{2\pi\hbar}\, e^{-ip_z z'/\hbar} \left\langle z + \frac{z'}{2}, \{n_q\}\right| \langle\mathbf{r}_\perp| a_{q'} e^{i\mathbf{q}'\hat{\mathbf{r}}} |\mathbf{r}'_\perp\rangle \langle\mathbf{r}'_\perp| \hat\rho_t |\mathbf{r}_\perp\rangle \left|\{n'_q\}, z - \frac{z'}{2}\right\rangle

= \sqrt{n_{q'} + 1} \int d\mathbf{r}_\perp\, e^{i\mathbf{q}'_\perp\mathbf{r}_\perp} |\Psi(\mathbf{r}_\perp)|^2 \int \frac{dz'}{2\pi\hbar}\, e^{-ip_z z'/\hbar} e^{iq'_z\left(z + \frac{z'}{2}\right)} \left\langle z + \frac{z'}{2}, \{n_1, \ldots, n_{q'} + 1, \ldots\}\right| \hat\rho^z_t \left|\{n'_q\}, z - \frac{z'}{2}\right\rangle

= \sqrt{n_{q'} + 1}\, G(\mathbf{q}'_\perp)\, e^{iq'_z z}\, f_w\!\left(z, p_z - \frac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n'_q\}, t\right).

Here G denotes the Fourier transform of |\Psi(\mathbf{r}_\perp)|^2, and we have used that \langle\mathbf{r}|\mathbf{r}'\rangle = \delta(\mathbf{r} - \mathbf{r}') and that a_{q'} becomes a creation operator if acting to the left. Finally, we use the short notation \{n_q\}^\pm_{q'} = \{n_1, \ldots, n_{q'} \pm 1, \ldots\} for the phonon basis functions. Accordingly, \{n_q\}^+_{q'} (\{n_q\}^-_{q'}) is the phonon state obtained from \{n_q\} by an increase (decrease) of the number of phonons in mode q' by unity. Similarly, the following relations are obtained:

II = -\int d\mathbf{r}_\perp \int \frac{dz'}{2\pi\hbar}\, e^{-ip_z z'/\hbar} \left\langle z + \frac{z'}{2}, \{n_q\}\right| \langle\mathbf{r}_\perp| a^\dagger_{q'} e^{-i\mathbf{q}'\hat{\mathbf{r}}}\, \hat\rho_t |\mathbf{r}_\perp\rangle \left|\{n'_q\}, z - \frac{z'}{2}\right\rangle

= -\sqrt{n_{q'}}\, G^*(\mathbf{q}'_\perp)\, e^{-iq'_z z}\, f_w\!\left(z, p_z + \frac{\hbar q'_z}{2}, \{n_q\}^-_{q'}, \{n'_q\}, t\right).


Notably, if the left basis is increased (decreased) by a phonon in mode q', half of the z component of the phonon momentum is subtracted from (added to) the electron momentum p_z. In the next two terms the phonon operators act on the right phonon basis:

III = -\int d\mathbf{r}_\perp \int \frac{dz'}{2\pi\hbar}\, e^{-ip_z z'/\hbar} \left\langle z + \frac{z'}{2}, \{n_q\}\right| \langle\mathbf{r}_\perp| \hat\rho_t\, a_{q'} e^{i\mathbf{q}'\hat{\mathbf{r}}} |\mathbf{r}_\perp\rangle \left|\{n'_q\}, z - \frac{z'}{2}\right\rangle

= -\sqrt{n'_{q'}}\, G(\mathbf{q}'_\perp)\, e^{iq'_z z}\, f_w\!\left(z, p_z + \frac{\hbar q'_z}{2}, \{n_q\}, \{n'_q\}^-_{q'}, t\right),

IV = \int d\mathbf{r}_\perp \int \frac{dz'}{2\pi\hbar}\, e^{-ip_z z'/\hbar} \left\langle z + \frac{z'}{2}, \{n_q\}\right| \langle\mathbf{r}_\perp| \hat\rho_t\, a^\dagger_{q'} e^{-i\mathbf{q}'\hat{\mathbf{r}}} |\mathbf{r}_\perp\rangle \left|\{n'_q\}, z - \frac{z'}{2}\right\rangle

= \sqrt{n'_{q'} + 1}\, G^*(\mathbf{q}'_\perp)\, e^{-iq'_z z}\, f_w\!\left(z, p_z - \frac{\hbar q'_z}{2}, \{n_q\}, \{n'_q\}^+_{q'}, t\right).

These results allow us to formulate the equation of motion of the GWF of the electron-phonon system:

\left(\frac{\partial}{\partial t} + \frac{p_z}{m}\nabla_z + eE\nabla_{p_z}\right) f_w(z, p_z, \{n_q\}, \{n'_q\}, t) = \frac{1}{i\hbar}\left(\epsilon(\{n_q\}) - \epsilon(\{n'_q\})\right) f_w(z, p_z, \{n_q\}, \{n'_q\}, t)   (12.4)

+ \sum_{q'} F(q') \Big[\, G(\mathbf{q}'_\perp)\, e^{iq'_z z} \sqrt{n_{q'} + 1}\, f_w(z, p_z - \tfrac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n'_q\}, t)

- G^*(\mathbf{q}'_\perp)\, e^{-iq'_z z} \sqrt{n_{q'}}\, f_w(z, p_z + \tfrac{\hbar q'_z}{2}, \{n_q\}^-_{q'}, \{n'_q\}, t)

- G(\mathbf{q}'_\perp)\, e^{iq'_z z} \sqrt{n'_{q'}}\, f_w(z, p_z + \tfrac{\hbar q'_z}{2}, \{n_q\}, \{n'_q\}^-_{q'}, t)

+ G^*(\mathbf{q}'_\perp)\, e^{-iq'_z z} \sqrt{n'_{q'} + 1}\, f_w(z, p_z - \tfrac{\hbar q'_z}{2}, \{n_q\}, \{n'_q\}^+_{q'}, t)\, \Big]

The GWE links the element f_w(\ldots, \{n\}, \{m\}, t) with four neighboring elements, having one phonon added or removed from the left or the right basis for a particular value of the summation argument q'. For any q' the number of phonons n_{q'} can be any natural number. Finally, the summation over the discrete momentum q' links all these phonons to the GWF. In this way diagonal elements f_w(\ldots, \{n\}, \{n\}, t) involve first off-diagonal (FOD) elements, the latter involve second off-diagonal (SOD) elements, etc. In this respect the GWE can be considered as an infinite system of equations for the infinite set of elements. This system cannot be solved without


certain assumptions, which allow to truncate it at some level in order to close it. Furthermore, one does not need detailed information about the phonon sub-system, but just about the reduced Wigner function of the electron sub-system. Hence, one needs to elaborate an approach which truncates the system at a certain level and then eliminates the detailed dependence on the phonon coordinates by some kind of averaging, to obtain a closed model for the reduced Wigner function. A relevant assumption, needed to eliminate the nonessential information about the phonon state, is that the phonons remain in equilibrium during the electron evolution. Furthermore, the trace operation in the definition of the reduced Wigner function

f_w(z, p_z, t) = \sum_{\{n_q\}} f_w(z, p_z, \{n_q\}, \{n_q\}, t)   (12.5)

shows that the physical information about the electron is contained in the diagonal elements f_w(\ldots, \{n\}, \{n\}, t). This introduces a natural hierarchy in (12.4) and suggests the idea to cut the latter at a certain distance from the main diagonal. Very suitable are the assumptions of initially decoupled electrons and phonons and of a weak interaction, as well as the random phase approximation (RPA) [106]. According to the latter, sums of the type \sum_{q,q'} \exp\{i(f(q) - f(q'))t\} are approximated by imposing q = q', which eliminates the rapidly oscillating terms, giving a small contribution to the physical averages. In particular, such a form is taken by the imaginary term determined by the difference of the frequencies of the left and the right phonon states. This term determines the time oscillations of the GWF. These considerations suggest as a main entity the equation for the diagonal elements.
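The dephasing argument behind the RPA can be made concrete with a few lines of code: for a double sum of the above type, the q ≠ q' terms carry phases (f(q) − f(q'))t that average out over time, while the diagonal part is constant. The frequencies below are arbitrary, well-separated illustrative values, not a physical phonon spectrum.

```python
import numpy as np

rng = np.random.default_rng(7)
# 40 mode frequencies spaced roughly 1 apart, chosen only for illustration
f = np.arange(1.0, 41.0) + rng.uniform(-0.2, 0.2, size=40)

def double_sum(t):
    """S(t) = sum over q, q' of exp{i (f(q) - f(q')) t}."""
    return np.exp(1j * np.subtract.outer(f, f) * t).sum()

# The diagonal (q = q') part equals len(f) = 40 at all times; the off-diagonal
# terms dephase under time averaging over a long window.
ts = np.linspace(0.0, 2000.0, 40001)
avg = np.mean([double_sum(t) for t in ts])
print(abs(avg))    # close to the diagonal value 40, far below S(0) = 1600
```

The time average retains essentially the q = q' contribution, which is the quantitative content of replacing the double sum by its diagonal.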

12.3 Equation of Motion of the Diagonal Elements

\left(\frac{\partial}{\partial t} + \frac{p_z}{m}\nabla_z + eE\nabla_{p_z}\right) f_w(z, p_z, \{n_q\}, \{n_q\}, t) =

\sum_{q'} \Big[\, F_G\, e^{iq'_z z} \sqrt{n_{q'} + 1}\, f_w(z, p_z - \tfrac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t)   (12.6)

- F_G^*\, e^{-iq'_z z} \sqrt{n_{q'}}\, f_w(z, p_z + \tfrac{\hbar q'_z}{2}, \{n_q\}^-_{q'}, \{n_q\}, t)

- F_G\, e^{iq'_z z} \sqrt{n_{q'}}\, f_w(z, p_z + \tfrac{\hbar q'_z}{2}, \{n_q\}, \{n_q\}^-_{q'}, t)

+ F_G^*\, e^{-iq'_z z} \sqrt{n_{q'} + 1}\, f_w(z, p_z - \tfrac{\hbar q'_z}{2}, \{n_q\}, \{n_q\}^+_{q'}, t)\, \Big]

Here F_G = F(q')G(\mathbf{q}'_\perp). This equation is considered in conjunction with an initial condition where the two sub-systems are noninteracting and the phonons are in equilibrium.


A diagonal element is coupled to FOD elements, which are diagonal in all phonon coordinates but the one in mode q', given by the index of the sum. These FOD elements are the four neighbors of the element with coordinates n_{q'}, n_{q'} and have coordinates n_{q'} \pm 1, n_{q'} and n_{q'}, n_{q'} \pm 1, respectively. It is sufficient to consider only the first two rows of (12.6), because from the definition of f_w and Eq. (12.6) it follows that the FOD elements in the third and fourth rows are complex conjugate to the second and first FOD elements, respectively. The corresponding equation for the FOD element f^\pm_{FOD} = f_w(\cdot, \{n_q\}^\pm_{q'}, \{n_q\}, \cdot) introduces SOD elements. Notably, the factor coupling a diagonal element to its off-diagonal counterparts of order n is given by F_G^n, where n increases with the distance from the main diagonal. Thus the truncation of the hierarchy at a certain level is justified by the assumption of a weak electron-phonon coupling. We first consider the case where the hierarchy is truncated at the FOD level, so that SOD and higher order elements are neglected.

12.4 Closure at First-Off-Diagonal Level

At this level of truncation we retain in the FOD equations all algebraic combinations where SOD elements give rise to diagonal elements.

Theorem 12.1 The truncation of the hierarchy at the FOD level, along with the assumption of equilibrium phonons, gives rise to the following equation for the reduced, or electron, Wigner function:

\left(\frac{\partial}{\partial t} + \frac{p_z}{m}\nabla_z + eE\nabla_{p_z}\right) f_w(z, p_z, t) = \int_0^t dt' \sum_{\mathbf{q}_\perp, p'_z} \Big[\, S(p_z, p'_z, \mathbf{q}_\perp, t, t')\, f_w(z_-(t'), p'_z(t'), t')   (12.7)

- S(p'_z, p_z, \mathbf{q}_\perp, t, t')\, f_w(z_-(t'), p_z(t'), t')\, \Big],

where

S(p_z, p'_z, \mathbf{q}_\perp, t, t') = 2|F_G(q')|^2 \Bigg[ n(q') \cos\left(\frac{\int_{t'}^{t} d\tau \left(\epsilon(p_z(\tau)) - \epsilon(p'_z(\tau)) - \hbar\omega_{q'}\right)}{\hbar}\right)

+ (n(q') + 1) \cos\left(\frac{\int_{t'}^{t} d\tau \left(\epsilon(p_z(\tau)) - \epsilon(p'_z(\tau)) + \hbar\omega_{q'}\right)}{\hbar}\right) \Bigg],   (12.8)


and n(q') is the equilibrium number of phonons in mode q'. The relation p'_z = p_z - \hbar q'_z accounts for the momentum conservation along the wire (z axis), and

z_-(t') = z - \frac{1}{m}\int_{t'}^{t} \left(p_z - \frac{\hbar q'_z}{2} - eE(t - \tau)\right) d\tau.

The equation describes the electron dynamics in a quantum wire. For homogeneous conditions it reduces to an equation derived by Levinson [107] within a density matrix approach. Equation (12.7) can thus be called the inhomogeneous Levinson equation. The physical and numerical aspects are discussed in Sect. 12.6. In the following we present the proof. We introduce the shortcut F_G = F_G(q''). The equations of motion for the f^+_{FOD} and f^-_{FOD} elements obtained from (12.4) can be presented as follows:

\left(\frac{\partial}{\partial t} + \frac{p_z \mp \frac{\hbar q'_z}{2}}{m}\nabla_z + eE\nabla_{p_z} \pm i\omega_{q'}\right) f_w(z, p_z \mp \tfrac{\hbar q'_z}{2}, \{n_q\}^\pm_{q'}, \{n_q\}, t) =

\sum_{q''} \Big[\, F_G\, e^{iq''_z z} \sqrt{n_{q''} + 1}\, f_w(z, p_z \mp \tfrac{\hbar(q'_z \pm q''_z)}{2}, \{\{n_q\}^\pm_{q'}\}^+_{q''}, \{n_q\}, t)   (12.9)

- F_G^*\, e^{-iq''_z z} \sqrt{n_{q''}}\, f_w(z, p_z \mp \tfrac{\hbar(q'_z \mp q''_z)}{2}, \{\{n_q\}^\pm_{q'}\}^-_{q''}, \{n_q\}, t)

- F_G\, e^{iq''_z z} \sqrt{n_{q''}}\, f_w(z, p_z \mp \tfrac{\hbar(q'_z \mp q''_z)}{2}, \{n_q\}^\pm_{q'}, \{n_q\}^-_{q''}, t)

+ F_G^*\, e^{-iq''_z z} \sqrt{n_{q''} + 1}\, f_w(z, p_z \mp \tfrac{\hbar(q'_z \pm q''_z)}{2}, \{n_q\}^\pm_{q'}, \{n_q\}^+_{q''}, t)\, \Big]

The particular equations are denoted as (12.9+) and (12.9−), accordingly. The next approximation eliminates all SOD elements on the right-hand side. The only elements remaining in (12.9+) are given by the second and fourth terms on the right in the case q'' = q'. Noteworthy, the factor \sqrt{n_{q'}} becomes \sqrt{n_{q'} + 1}, because the number of phonons in q' has been raised by unity. Similarly, the first and the third terms in (12.9−) become diagonal elements in the case q'' = q'. Accordingly, the factor \sqrt{n_{q'} + 1} becomes \sqrt{n_{q'}}. We further use the integral forms of the equations truncated in this way. The characteristics of the Liouville operator on the left of (12.9) are the Newtonian trajectories, initialized by the time and phase space variables identifying the left-hand sides of (12.9±), namely

z_\mp(t') = z - \frac{1}{m}\int_{t'}^{t} p_{z\mp}(\tau)\, d\tau \quad \text{and} \quad p_{z\mp}(t') = p_z \mp \frac{\hbar q'_z}{2} - eE(t - t') = p_z(t') \mp \frac{\hbar q'_z}{2}.   (12.10)
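For a constant field the backward trajectories (12.10) are available in closed form. The sketch below (all parameters in arbitrary units, chosen only for illustration) compares the closed-form expression for z∓(t') with a direct quadrature of its defining integral, for the upper-sign case.

```python
import numpy as np

# illustrative parameters, arbitrary units
m, eE, hbar = 1.0, 2.0, 1.0
qz = 0.5                       # phonon wave vector component q'_z
z0, pz, t = 0.0, 3.0, 1.0      # initialization point (z, p_z, t)

def p_traj(tau, sign=-1.0):
    """p_{z-}(tau) = p_z - hbar q'_z/2 - eE (t - tau) for the upper-sign case."""
    return pz + sign * hbar * qz / 2 - eE * (t - tau)

def z_closed(tp, sign=-1.0):
    """Closed form of z_-(t') = z - (1/m) \int_{t'}^{t} p_{z-}(tau) d tau."""
    dt = t - tp
    return z0 - ((pz + sign * hbar * qz / 2) * dt - eE * dt**2 / 2) / m

def z_quad(tp, sign=-1.0, n=100001):
    """The same trajectory by direct (trapezoidal) quadrature."""
    tau = np.linspace(tp, t, n)
    pvals = p_traj(tau, sign)
    integral = np.sum((pvals[:-1] + pvals[1:]) / 2) * (tau[1] - tau[0])
    return z0 - integral / m

print(z_closed(0.2), z_quad(0.2))   # the two values coincide
```

Such closed-form trajectories are what make the phase arguments below expressible through the field-accelerated momenta p_z(τ).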


The initial conditions for f^\pm_{FOD} are zero, due to the fact that the evolution begins with noninteracting carrier and phonon subsystems:

f_w\!\left(z, p_z \mp \frac{\hbar q'_z}{2}, \{n_q\}^\pm_{q'}, \{n_q\}, t\right) = \mp F_G^\mp(q') \sqrt{n_{q'} + \frac{1}{2} \pm \frac{1}{2}} \int_0^t dt'\, e^{\mp i\omega_{q'}(t - t')}\, e^{\mp iq'_z z_\mp(t')} \times   (12.11)

\Big[\, f_w(z_\mp(t'), p_z(t'), \{n_q\}, \{n_q\}, t') - f_w(z_\mp(t'), p_z(t') \mp \hbar q'_z, \{n_q\}^\pm_{q'}, \{n_q\}^\pm_{q'}, t')\, \Big],

where F_G^+ = F_G, F_G^- = F_G^*, and p_z(t') = p_z - eE(t - t'), in accordance with (12.10).

where FG+ = FG , FG− = FG∗ , and pz (t  ) = pz − eE(t − t  ) in accordance with (12.10). After inserting (12.11) into (12.6) we obtain an equation containing only diagonal elements: 

   ∂ pz  2 + ∇z + eE∇pz fw (z, pz , {nq }, {nq }, t) = 2Re |FG (q )| ∂t m  q

t





 − (t  )

dt  (nq + 1)eiqz z e−iωq (t−t ) e−iqz z

×

(12.12)

0



+  −    fw (z− (t  ), (pz − h¯ qz )(t  ), {nq }+ q {nq }q , t )−fw (z (t ), pz (t ), {nq }, {nq }, t )

t −





 + (t  )

dt  nq e−iqz z eiωq (t−t ) eiqz z

×

0



 − −  +    +    fw (z (t ), pz (t ), {nq }, {nq }, t )−fw (z (t ), pz (t ) + hq ¯ z , {nq }q {nq }q , t ) With the help of (12.10) the arguments of the exponents can be rewritten as ±qz z ∓ ωq (t − t  ) ∓ qz z∓ (t  ) =  

t   h¯ q  qz pz − eE(t − τ ) ∓ z ∓ ωq dτ = − m 2 t

t  1 ∓ (pz (τ )) − (pz (τ ) ∓ h¯ qz ) ∓ hω ¯ q dτ. ¯ t h


Next we need to get rid of the dependence on the phonon coordinates. The electron Wigner function (12.5) is obtained after the assumption that the phonon system is a thermostat and thus remains in equilibrium during the evolution. This formally means that the variables of the two subsystems can be separated:

f_w(z, p_z, \{n_q\}, \{n_q\}, t') = f_w(z, p_z, t') \prod_q P_{eq}(n_q).   (12.13)

P_{eq}(n_q) is the equilibrium distribution of the number of phonons n_q in mode q. The equilibrium number of phonons n(q) is then given by the Bose-Einstein distribution:

n(q) = \sum_{n_q=0}^{\infty} n_q P_{eq}(n_q); \qquad \sum_{n_q=0}^{\infty} P_{eq}(n_q) = 1,   (12.14)

having the following properties:

n(q) + \frac{1}{2} \mp \frac{1}{2} = \sum_{n_q} \left(n_q + \frac{1}{2} \pm \frac{1}{2}\right) P_{eq}(n_q \pm 1).   (12.15)

A replacement of (12.13) in (12.12) and performing the trace operation gives rise to a replacement of the phonon coordinates by the numbers (12.15). Finally, by using the fact that the functions \omega_q and F_G are even, we can change the sign of q' in the last row of (12.12), introduce the variable p'_z = p_z - \hbar q'_z, and perform simple algebraic operations which compact the equation to obtain (12.7).
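The moment identities (12.15) are what turn the explicit phonon occupations into the mean numbers n(q) and n(q) + 1 appearing in (12.8). A quick numerical check with the Bose-Einstein distribution P_eq(n) = (1 − x)xⁿ, where x = exp(−ħω/kT) (the value of x below is arbitrary), confirms both sign cases:

```python
x = 0.4                      # Boltzmann factor exp(-hbar*omega/kT), arbitrary
N = 400                      # truncation of the infinite sums
P = [(1 - x) * x**n for n in range(N + 2)]   # Bose-Einstein P_eq(n)
n_bar = sum(n * P[n] for n in range(N + 1))  # n(q) = x/(1-x)

# upper sign of (12.15): n(q) = sum_n (n + 1) P_eq(n + 1)
upper = sum((n + 1) * P[n + 1] for n in range(N + 1))
# lower sign of (12.15): n(q) + 1 = sum_n n P_eq(n - 1)
lower = sum(n * P[n - 1] for n in range(1, N + 1))

print(n_bar, upper, lower)   # n_bar equals upper; lower equals n_bar + 1
```

Shifting the summation index reproduces both results analytically, which is exactly how the trace over the phonon coordinates is compacted in the text.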

12.5 Closure at Second-Off-Diagonal Level

The truncation of the hierarchy at the next, SOD, level allows to introduce an important physical parameter, the finite lifetime of the carriers due to the interaction with the phonons, at the expense of a further complication of the derivations.

Theorem 12.2 The truncation of the hierarchy at the SOD level, along with the assumption of equilibrium phonons with a constant energy, gives rise to the following equation for the reduced Wigner function:

\left(\frac{\partial}{\partial t} + \frac{p_z}{m}\nabla_z + eE\nabla_{p_z}\right) f_w(z, p_z, t) = \int_0^t dt' \sum_{\mathbf{q}_\perp, p'_z} \Big[\, S(p_z, p'_z, \mathbf{q}_\perp, t, t')\, f_w(z_-(t'), p'_z(t'), t')   (12.16)

- S(p'_z, p_z, \mathbf{q}_\perp, t, t')\, f_w(z_-(t'), p_z(t'), t')\, \Big],

where

S(p_z, p'_z, \mathbf{q}_\perp, t, t') = 2|F_G(q')|^2\, e^{-\int_{t'}^{t} \bar\gamma\left(\left(\frac{p_z + p'_z}{2}\right)(\tau)\right) d\tau} \times   (12.17)

\Bigg[ n(q') \cos\left(\frac{\int_{t'}^{t} \left(\epsilon(p_z(\tau)) - \epsilon(p'_z(\tau)) - \hbar\omega_{q'}\right) d\tau}{\hbar}\right) + (n(q') + 1) \cos\left(\frac{\int_{t'}^{t} \left(\epsilon(p_z(\tau)) - \epsilon(p'_z(\tau)) + \hbar\omega_{q'}\right) d\tau}{\hbar}\right) \Bigg],

and

\bar\gamma(p) = \sum_{q'} 2\pi\hbar |F_G(q')|^2 \Big[ (n(q') + 1)\, \delta\!\left(\epsilon(p) - \epsilon(p - \hbar q'_z) - \hbar\omega\right) + n(q')\, \delta\!\left(\epsilon(p) - \epsilon(p + \hbar q'_z) + \hbar\omega\right) \Big].   (12.18)

The quantity \bar\gamma is the one-dimensional counterpart of the Boltzmann out-scattering rate. In the following we present the proof. The analysis aims to find criteria for retaining certain SOD elements on the right-hand side of (12.9±).

12.5.1 Approximation of the f^+_{FOD} Equation

Equation (12.9+) contains four types of SOD elements: f^{++,}_{SOD}, f^{+-,}_{SOD}, f^{+,-}_{SOD}, and f^{+,+}_{SOD}. In the chosen short notation the comma separates the left from the right basis, so that e.g. ++, means two additional phonons, in modes q' and q'', in the left basis. The elements f^{+-,}_{SOD} and f^{+,+}_{SOD}, giving rise to diagonal elements, have been considered already. The remaining elements in the first and third terms, f^{++,}_{SOD} and f^{+,-}_{SOD}, have two phonons more in the left basis. The corresponding equations contain the frequency term i2\omega in the Liouville operator on the left. In comparison to the diagonal elements these SOD elements oscillate rapidly in time and can be neglected within a random phase approximation. Thus one would conclude that only the second and the fourth SOD elements remain to be analyzed for giving additional contributions to the diagonal elements. Such a conclusion, based on jumps between


the levels in the hierarchy is, however, wrong, because it is only correct to compare terms directly linked into an equation, which is not the case with the combination of an SOD element and a diagonal element. A careful analysis shows that f^{++,}_{SOD} and f^{+,-}_{SOD} do give a contribution. Therefore we evaluate the contributions from the SOD elements to the right-hand side of (12.9+). The corresponding equations of motion in general involve third-order off-diagonal elements. We follow the same strategy as before, by considering only the cases when third-order elements reduce to SOD elements. After that, the truncated SOD equations are substituted into (12.9+) to obtain an equation containing FOD elements only. Proceeding in the same way with (12.9−), we obtain another equation involving FOD elements. Both equations are used for a closure of the system (12.6). A necessary assumption is that of a constant phonon frequency, \omega_q = \omega, which is fulfilled to a large extent for the optical part of the phonon spectrum.

12.5.1.1 Contribution from f^{++,}_{SOD}

The equation of motion of f^{++,}_{SOD} follows from (12.4):

\left(\frac{\partial}{\partial t} + \frac{p_z - \frac{\hbar(q'_z + q''_z)}{2}}{m}\nabla_z + eE\nabla_{p_z} + i2\omega\right) f_w\!\left(z, p_z - \frac{\hbar(q'_z + q''_z)}{2}, \{\{n_q\}^+_{q'}\}^+_{q''}, \{n_q\}, t\right) =   (12.19)

\sum_{q'''} \Big[\, F_G(q''')\, e^{iq'''_z z} \sqrt{n_{q'''} + 1}\, f_w(z, p_z - \tfrac{\hbar(q'_z + q''_z + q'''_z)}{2}, \{\{\{n_q\}^+_{q'}\}^+_{q''}\}^+_{q'''}, \{n_q\}, t)

- F_G^*(q''')\, e^{-iq'''_z z} \sqrt{n_{q'''}}\, f_w(z, p_z - \tfrac{\hbar(q'_z + q''_z - q'''_z)}{2}, \{\{\{n_q\}^+_{q'}\}^+_{q''}\}^-_{q'''}, \{n_q\}, t)

- F_G(q''')\, e^{iq'''_z z} \sqrt{n_{q'''}}\, f_w(z, p_z - \tfrac{\hbar(q'_z + q''_z - q'''_z)}{2}, \{\{n_q\}^+_{q'}\}^+_{q''}, \{n_q\}^-_{q'''}, t)

+ F_G^*(q''')\, e^{-iq'''_z z} \sqrt{n_{q'''} + 1}\, f_w(z, p_z - \tfrac{\hbar(q'_z + q''_z + q'''_z)}{2}, \{\{n_q\}^+_{q'}\}^+_{q''}, \{n_q\}^+_{q'''}, t)\, \Big]

Only two terms on the right, namely the second and the fourth, introduce FOD elements, in the cases q''' = q' or q''' = q''. The approximated equation is integrated along the Newtonian trajectory initialized by (z, p_z - \frac{\hbar(q'_z + q''_z)}{2}, t), as suggested by the Liouville operator in (12.19). We recall that the free term is zero


because of the initially noninteracting systems:

F_G(q'')\, e^{iq''_z z}\, f_w\!\left(z, p_z - \frac{\hbar(q'_z + q''_z)}{2}, \{\{n_q\}^+_{q'}\}^+_{q''}, \{n_q\}, t\right) = \int_0^t dt''\, |F_G(q'')|^2\, e^{-i2\omega(t - t'')} \times   (12.20)

\Bigg[ - e^{iq''_z z}\, e^{-iq''_z z(t'')} \sqrt{n_{q''} + 1}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t'')

- \frac{F_G^*(q')}{F_G^*(q'')}\, e^{iq''_z z}\, e^{-iq'_z z(t'')} \sqrt{n_{q'} + 1}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar q''_z}{2}, \{n_q\}^+_{q''}, \{n_q\}, t'')

+ e^{iq''_z z}\, e^{-iq''_z z(t'')} \sqrt{n_{q''}}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar(q'_z + 2q''_z)}{2}, \{\{n_q\}^+_{q'}\}^+_{q''}, \{n_q\}^+_{q''}, t'')

+ \frac{F_G^*(q')}{F_G^*(q'')}\, e^{iq''_z z}\, e^{-iq'_z z(t'')} \sqrt{n_{q'}}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar(2q'_z + q''_z)}{2}, \{\{n_q\}^+_{q'}\}^+_{q''}, \{n_q\}^+_{q'}, t'') \Bigg]

The left-hand side of (12.20) is now substituted into (12.9+), so that the right-hand side appears as a correction to the right-hand side of the latter. Since the correction terms depend on e^{-i2\omega(t - t'')}, it is convenient to consider the integral form of the obtained equation, which is (12.11+) with additional terms on the right coming from the correction. We evaluate consecutively the contributions of these terms. The first term in (12.20) is rewritten by expressing the q''_z-dependent arguments of the exponents in terms of electron energies:

q''_z z - q''_z z(t'') = \frac{1}{\hbar}\int_{t''}^{t} \left( \epsilon\!\left(p_z(\tau) - \frac{\hbar q'_z}{2}\right) - \epsilon\!\left(p_z(\tau) - \frac{\hbar q'_z}{2} - \hbar q''_z\right) \right) d\tau,   (12.21)

where we use \Delta as a short notation for the energy difference in the integrand. It gives the following contribution to the right-hand side of (12.11+):

f_w\!\left(z, p_z - \frac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t\right) = \cdots - \sum_{q''} (n_{q''} + 1) |F_G(q'')|^2 \int_0^t dt' \int_0^{t'} dt''\, e^{-i\omega(t - t')}\, e^{-i2\omega(t' - t'')} \times   (12.22)

e^{\frac{i}{\hbar}\int_{t''}^{t'} \Delta(\tau) d\tau}\, f_w\!\left(z(t', t''), p_z(t'') - \frac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t''\right),

where z(t', t'') is the Newtonian trajectory

z(t', t'') = z_-(t') - \frac{1}{m}\int_{t''}^{t'} \left(p_z(\tau) - \frac{\hbar(q'_z + q''_z)}{2}\right) d\tau.   (12.23)


Equation (12.22) is convenient for an analysis of the dependence on the time variables. By using the identity \int_0^t dt' \int_0^{t'} dt'' = \int_0^t dt'' \int_{t''}^{t} dt' we obtain

\int_0^t dt' \int_0^{t'} dt''\, e^{-i\omega(t - t')}\, e^{-i2\omega(t' - t'')}\, e^{\frac{i}{\hbar}\int_{t''}^{t'} \Delta(\tau) d\tau} f_w =

\int_0^t dt''\, e^{-i\omega(t - t'')} \int_{t''}^{t} dt'\, e^{\frac{i}{\hbar}\int_{t''}^{t'} \left(\Delta(\tau) - \hbar\omega\right) d\tau} f_w.   (12.24)

Now the internal integral can be approximated by the limit \hbar \to 0. The latter is a short notation for a definite relationship between the physical scales, discussed in Appendix A.5:

\lim_{\hbar \to 0} \frac{1}{\hbar} \int_0^{t} d\tau\, e^{i\epsilon\tau/\hbar} = \pi\delta(\epsilon) + i\,\mathrm{v.p.}\frac{1}{\epsilon}.

In this limit t' becomes equal to t'', so that z(t', t'') in f_w, (12.22), becomes z_-(t''). If we neglect the principal value v.p., (12.22) transforms into

f_w\!\left(z, p_z - \frac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t\right) = \cdots - \int_0^t dt''\, e^{-i\omega(t - t'')} \sum_{q''} (n_{q''} + 1) |F_G(q'')|^2 \pi\hbar \times   (12.25)

\delta\!\left(\epsilon\!\left(p_z(t'') - \frac{\hbar q'_z}{2}\right) - \epsilon\!\left(p_z(t'') - \frac{\hbar q'_z}{2} - \hbar q''_z\right) - \hbar\omega\right) f_w\!\left(z_-(t''), p_z(t'') - \frac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t''\right).

The time derivative of (12.25) gives (12.9+) with an additional term \gamma_e f^+_{FOD} appearing in the equation, which, transferred to the left, gives the correction

\gamma_e\!\left(p_z - \frac{\hbar q'_z}{2}\right) = \sum_{q''} (n_{q''} + 1) |F_G(q'')|^2 \pi\hbar\, \delta\!\left(\epsilon\!\left(p_z - \frac{\hbar q'_z}{2}\right) - \epsilon\!\left(p_z - \frac{\hbar q'_z}{2} - \hbar q''_z\right) - \hbar\omega\right)   (12.26)

added to the Liouville operator in (12.9+). Noteworthy, the summation over q'' contains only positive contributions to \gamma_e. The imaginary part gives rise to a modification of the phonon frequency, an effect called polaron energy renormalization, which is neglected here. The rest of the terms stemming from (12.20) can be neglected with the help of the RPA [43].
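The emergence of the energy-conserving delta function can be made quantitative: the finite-time kernel (1/ħ)∫₀ᵗ cos(ετ/ħ)dτ = sin(εt/ħ)/ε is a nascent delta whose weight over an energy window of a few phonon energies approaches π only after enough evolution time has accumulated. The sketch below uses the GaAs polar optical phonon energy ħω ≈ 36 meV as the relevant energy scale; the window and grid are illustrative choices.

```python
import numpy as np

hbar = 0.6582            # reduced Planck constant in eV fs
E_ph = 0.036             # GaAs polar optical phonon energy, eV

def kernel_weight(t_fs, width=5 * E_ph):
    """Weight of the nascent delta sin(eps t/hbar)/eps over an energy window
    of a few phonon energies; tends to pi once t greatly exceeds hbar/E_ph."""
    eps = np.linspace(-width, width, 200001)
    a = t_fs / hbar
    integrand = a * np.sinc(a * eps / np.pi)   # equals sin(a eps)/eps, finite at 0
    return integrand.sum() * (eps[1] - eps[0])

for t in (10.0, 50.0, 200.0, 1000.0):          # evolution times in fs
    print(t, kernel_weight(t) / np.pi)         # weight relative to pi
```

The weight oscillates around and then settles at π on the 100-fs scale, which is consistent with the few-hundred-femtosecond build-up of energy conservation discussed for GaAs.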


12.5.1.2 Contribution from f^{+,-}_{SOD}

We follow the same approach, retaining only the FOD elements on the right-hand side of the equation for f_w(z, p_z - \frac{\hbar(q'_z - q''_z)}{2}, \{n_q\}^+_{q'}, \{n_q\}^-_{q''}, t). The equation is integrated with the help of the Newtonian trajectory initialized by (z, p_z - \frac{\hbar(q'_z - q''_z)}{2}, t):

F_G(q'')\, e^{iq''_z z}\, f_w\!\left(z, p_z - \frac{\hbar(q'_z - q''_z)}{2}, \{n_q\}^+_{q'}, \{n_q\}^-_{q''}, t\right) = \int_0^t dt''\, |F_G(q'')|^2\, e^{-i2\omega(t - t'')} \times   (12.27)

\Bigg[ - e^{iq''_z z}\, e^{-iq''_z z(t'')} \sqrt{n_{q''}}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar(q'_z - 2q''_z)}{2}, \{\{n_q\}^+_{q'}\}^-_{q''}, \{n_q\}^-_{q''}, t'')

- \frac{F_G^*(q')}{F_G^*(q'')}\, e^{iq''_z z}\, e^{-iq'_z z(t'')} \sqrt{n_{q'} + 1}\, f_w(z(t''), p_z(t'') + \tfrac{\hbar q''_z}{2}, \{n_q\}, \{n_q\}^-_{q''}, t'')

+ e^{iq''_z z}\, e^{-iq''_z z(t'')} \sqrt{n_{q''}}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t'')

+ \frac{F_G^*(q')}{F_G^*(q'')}\, e^{iq''_z z}\, e^{-iq'_z z(t'')} \sqrt{n_{q'} + 1}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar(2q'_z - q''_z)}{2}, \{n_q\}^+_{q'}, \{\{n_q\}^-_{q''}\}^+_{q'}, t'') \Bigg]

With the help of the RPA it can be shown that only the third term is relevant. The argument of the corresponding exponent is rewritten according to

q''_z z - q''_z z(t'') = -\frac{1}{\hbar}\int_{t''}^{t} \left( \epsilon\!\left(p_z(\tau) - \frac{\hbar q'_z}{2}\right) - \epsilon\!\left(p_z(\tau) - \frac{\hbar q'_z}{2} + \hbar q''_z\right) \right) d\tau.   (12.28)

This expression differs from (12.21) by the front sign and the sign of q''_z. The application of the classical limit, followed by the steps used to derive (12.26), gives rise to the term

\gamma_a\!\left(p_z - \frac{\hbar q'_z}{2}\right) = \sum_{q''} n_{q''} |F_G(q'')|^2 \pi\hbar\, \delta\!\left(\epsilon\!\left(p_z - \frac{\hbar q'_z}{2}\right) - \epsilon\!\left(p_z - \frac{\hbar q'_z}{2} + \hbar q''_z\right) + \hbar\omega\right),   (12.29)

which is added to the Liouville operator in (12.9+).


12.5.1.3 Correction from f^{+-,}_{SOD}

The second term in (12.9+) has already contributed a diagonal element to the first-level closure of the system. Now we need to find the correction \delta f^{+-,}_{SOD}, corresponding to a closure at the SOD level:

\delta f^{+-,}_{SOD} = (1 - \delta_{q'',q'})\, f_w\!\left(z, p_z - \frac{\hbar(q'_z - q''_z)}{2}, \{\{n_q\}^+_{q'}\}^-_{q''}, \{n_q\}, t\right),

so that the corresponding equation of motion implies q'' \neq q':

\left(\frac{\partial}{\partial t} + \frac{p_z - \frac{\hbar(q'_z - q''_z)}{2}}{m}\nabla_z + eE\nabla_{p_z}\right) \delta f^{+-,}_{SOD}\!\left(z, p_z - \frac{\hbar(q'_z - q''_z)}{2}, \{\{n_q\}^+_{q'}\}^-_{q''}, \{n_q\}, t\right) =   (12.30)

\sum_{q'''} \Big[\, F_G(q''')\, e^{iq'''_z z} \sqrt{n_{q'''} + 1}\, f_w(z, p_z - \tfrac{\hbar(q'_z - q''_z + q'''_z)}{2}, \{\{\{n_q\}^+_{q'}\}^-_{q''}\}^+_{q'''}, \{n_q\}, t)

- F_G^*(q''')\, e^{-iq'''_z z} \sqrt{n_{q'''}}\, f_w(z, p_z - \tfrac{\hbar(q'_z - q''_z - q'''_z)}{2}, \{\{\{n_q\}^+_{q'}\}^-_{q''}\}^-_{q'''}, \{n_q\}, t)

- F_G(q''')\, e^{iq'''_z z} \sqrt{n_{q'''}}\, f_w(z, p_z - \tfrac{\hbar(q'_z - q''_z - q'''_z)}{2}, \{\{n_q\}^+_{q'}\}^-_{q''}, \{n_q\}^-_{q'''}, t)

+ F_G^*(q''')\, e^{-iq'''_z z} \sqrt{n_{q'''} + 1}\, f_w(z, p_z - \tfrac{\hbar(q'_z - q''_z + q'''_z)}{2}, \{\{n_q\}^+_{q'}\}^-_{q''}, \{n_q\}^+_{q'''}, t)\, \Big]

The four terms on the right provide FOD elements if q''' = q'', q''' = q', q''' = q'', and q''' = q', respectively. In these cases n_{q'''} corresponds to the number of phonons in q' or q''. The equation truncated in this way is integrated over the Newtonian trajectory (z(t''), p_z(t'')) initialized by (z, p_z - \frac{\hbar(q'_z - q''_z)}{2}, t). The following equation is obtained:

F_G^*(q'')\, e^{-iq''_z z}\, \delta f^{+-,}_{SOD}\!\left(z, p_z - \frac{\hbar(q'_z - q''_z)}{2}, \{\{n_q\}^+_{q'}\}^-_{q''}, \{n_q\}, t\right) = \int_0^t dt''\, |F_G(q'')|^2 \times   (12.31)

\Bigg[ e^{-iq''_z z}\, e^{iq''_z z(t'')} \sqrt{n_{q''}}\, f_w(z(t''), p_z(t'') - \tfrac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t'')

+ \frac{F_G^*(q')}{F_G(q'')}\, e^{-iq''_z z}\, e^{-iq'_z z(t'')} \sqrt{n_{q'} + 1}\, f_w(z(t''), p_z(t'') + \tfrac{\hbar q''_z}{2}, \{n_q\}^-_{q''}, \{n_q\}, t'')

- e^{-iq''_z z}\, e^{iq''_z z(t'')} \sqrt{n_{q''}}\, f_w(z(t''), p_z(t'') + \hbar q''_z - \tfrac{\hbar q'_z}{2}, \{\{n_q\}^-_{q''}\}^+_{q'}, \{n_q\}^-_{q''}, t'')

+ \frac{F_G^*(q')}{F_G(q'')}\, e^{-iq''_z z}\, e^{-iq'_z z(t'')} \sqrt{n_{q'} + 1}\, f_w(z(t''), p_z(t'') - \hbar q'_z + \tfrac{\hbar q''_z}{2}, \{\{n_q\}^+_{q'}\}^-_{q''}, \{n_q\}^+_{q'}, t'') \Bigg]

The contribution of the first term on the right of (12.31) is determined by the difference

q''_z z(t'') - q''_z z = \frac{1}{\hbar}\int_{t''}^{t} \left( \epsilon\!\left(p_z(\tau) - \frac{\hbar q'_z}{2}\right) - \epsilon\!\left(p_z(\tau) - \frac{\hbar q'_z}{2} + \hbar q''_z\right) \right) d\tau.   (12.32)

Using again \Delta as a temporary short notation, one obtains

f_w\!\left(z, p_z - \frac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t\right) = \cdots - \sum_{q''} n_{q''} |F_G(q'')|^2 \int_0^t dt' \int_0^{t'} dt''\, e^{-i\omega(t - t')}\, e^{\frac{i}{\hbar}\int_{t''}^{t'} \Delta(\tau) d\tau} \times   (12.33)

f_w\!\left(z(t', t''), p_z(t'') - \frac{\hbar q'_z}{2}, \{n_q\}^+_{q'}, \{n_q\}, t''\right),

where

z(t', t'') = z_-(t') - \frac{1}{m}\int_{t''}^{t'} \left(p_z(\tau) - \frac{\hbar(q'_z - q''_z)}{2}\right) d\tau.   (12.34)

The time integrals are reordered to obtain

\int_0^t dt' \int_0^{t'} dt''\, e^{-i\omega(t - t')}\, e^{\frac{i}{\hbar}\int_{t''}^{t'} \Delta(\tau) d\tau} f_w = \int_0^t dt''\, e^{-i\omega(t - t'')} \int_{t''}^{t} dt'\, e^{\frac{i}{\hbar}\int_{t''}^{t'} \left(\Delta(\tau) + \hbar\omega\right) d\tau} f_w.   (12.35)


Finally, by applying the classical approximation to the inner integral and by neglecting the principal value, we obtain the contribution to the Liouville operator in (12.9+):

\gamma_a\!\left(p_z - \frac{\hbar q'_z}{2}\right) = \sum_{q''} n_{q''} |F_G(q'')|^2 \pi\hbar\, \delta\!\left(\epsilon\!\left(p_z - \frac{\hbar q'_z}{2}\right) - \epsilon\!\left(p_z - \frac{\hbar q'_z}{2} + \hbar q''_z\right) + \hbar\omega\right).

The rest of the terms in (12.31) are neglected with the help of the RPA.

12.5.1.4 Correction from f^{+,+}_{SOD}

+,+ The correction δfSOD due to the fourth term in (12.11) satisfies the equation



( ∂/∂t + [(p_z − ħ(q'_z + q''_z)/2)/m] ∇_z + eE ∇_{p_z} ) δf_SOD^{+,+}(z, p_z − ħ(q'_z + q''_z)/2, {n_q}^+_{q'}, {n'_q}^+_{q''}, t) =   (12.36)

Σ_{q'''} { F_G(q''') e^{iq'''_z z} √(n_{q'''} + 1) f_w(z, p_z − ħ(q'_z + q''_z + q'''_z)/2, {{n_q}^+_{q'}}^+_{q'''}, {n'_q}^+_{q''}, t)

− F_G*(q''') e^{−iq'''_z z} √n_{q'''} f_w(z, p_z − ħ(q'_z + q''_z − q'''_z)/2, {{n_q}^+_{q'}}^−_{q'''}, {n'_q}^+_{q''}, t)

− F_G(q''') e^{iq'''_z z} √n'_{q'''} f_w(z, p_z − ħ(q'_z + q''_z − q'''_z)/2, {n_q}^+_{q'}, {{n'_q}^+_{q''}}^−_{q'''}, t)

+ F_G*(q''') e^{−iq'''_z z} √(n'_{q'''} + 1) f_w(z, p_z − ħ(q'_z + q''_z + q'''_z)/2, {n_q}^+_{q'}, {{n'_q}^+_{q''}}^+_{q'''}, t) },

under the condition q'' ≠ q'. The four terms on the right give rise to FOD elements, if q''' = q', q''' = q'', q''' = q', and q''' = q'', respectively. The truncated equation is integrated with the help of a Newtonian trajectory initialized by (z, p_z − ħ(q'_z + q''_z)/2, t):

F_G*(q'') e^{−iq''_z z} δf_SOD^{+,+}(z, p_z − ħ(q'_z + q''_z)/2, {n_q}^+_{q'}, {n'_q}^+_{q''}, t) =   (12.37)

∫_0^t dt'' |F_G(q'')|² { e^{−iq''_z z} e^{iq''_z z(t'')} √(n_{q''} + 1) f_w(z(t''), p_z(t'') − ħq'_z/2, {{n_q}^+_{q'}}^+_{q''}, {{n'_q}}^+_{q''}, t'')

− [F_G*(q'')/F_G(q'')] e^{−iq''_z z} e^{−iq''_z z(t'')} √(n_{q''} + 1) f_w(z(t''), p_z(t'') − ħq'_z/2, {n_q}, {{n'_q}}^+_{q''}, t'')

− e^{−iq''_z z} e^{iq''_z z(t'')} √(n_{q''} + 1) f_w(z(t''), p_z(t'') − ħq'_z/2, {n_q}^+_{q'}, {n'_q}, t'')

+ [F_G*(q'')/F_G(q'')] e^{−iq''_z z(t'')} e^{−iq''_z z} √(n_{q''} + 1) f_w(z(t''), p_z(t'') − ħq'_z − ħq''_z/2, {n_q}^+_{q'}, {{n'_q}^+_{q''}}^+_{q''}, t'') }

An application of the RPA eliminates all terms to the right except the third one. Following the established approach we obtain

q''_z z(t'') − q''_z z = − ∫_{t''}^t [ ε(p_z(τ) − ħq'_z/2) − ε(p_z(τ) − ħq'_z/2 − ħq''_z) ] dτ / ħ.   (12.38)

The quantity differs from (12.32) by the leading sign and the sign of q''_z, giving rise to the addition of the term

γ_e(p_z − ħq'_z/2) = Σ_{q''} (n_{q''} + 1) |F_G(q'')|² πħ δ( ε(p_z − ħq'_z/2) − ε(p_z − ħq'_z/2 − ħq''_z) − ħω )

to the Liouville operator in (12.9+).

These derivations lead to the following equation for the first FOD element:

( ∂/∂t + [(p_z − ħq'_z/2)/m] ∇_z + eE ∇_{p_z} + γ(p_z − ħq'_z/2) + iω ) f_w(z, p_z − ħq'_z/2, {n_q}^+_{q'}, {n'_q}, t) =   (12.39)

F_G*(q') e^{−iq'_z z} √(n_{q'} + 1) [ f_w(z, p_z − ħq'_z, {n_q}^+_{q'}, {n'_q}^+_{q'}, t) − f_w(z, p_z, {n_q}, {n'_q}, t) ].

Here γ = 2(γ_a + γ_e).

12.5.2 Approximation of the f_FOD^− Equation

The right hand side of (12.9−) contains SOD elements with one q' phonon removed from the left basis and one q'' phonon added to or removed from the left or right basis, which are denoted by f_SOD^{−+,}, f_SOD^{−−,}, f_SOD^{−,−}, and f_SOD^{−,+}, respectively. Diagonal elements are provided by the first and the third term. The equations of motion for the corrections δf_SOD^{−+,} and δf_SOD^{−,−} corresponding to these terms provide the relevant contributions to (12.9−). The equations for the remaining two terms, f_SOD^{−−,} and f_SOD^{−,+}, also give their contributions. The analysis follows the same steps of retaining the FOD elements in the right hand sides of these equations and neglecting the rest of the terms with the help of the RPA. Then these equations are approximated with the help of the classical limit. The long algebraic transformations are similar to the already discussed cases and are thus omitted. The contributions to the Liouville operator in (12.9−) are summarized by the following table:

f_SOD^{−+,} → γ_e(p_z + ħq'_z/2);   f_SOD^{−−,} → γ_a(p_z + ħq'_z/2);

f_SOD^{−,−} → γ_a(p_z + ħq'_z/2);   f_SOD^{−,+} → γ_e(p_z + ħq'_z/2).

Thus, by using γ = 2(γ_a + γ_e), one can write the equation for the second FOD element as

( ∂/∂t + [(p_z + ħq'_z/2)/m] ∇_z + eE ∇_{p_z} + γ(p_z + ħq'_z/2) − iω ) f_w(z, p_z + ħq'_z/2, {n_q}^−_{q'}, {n'_q}, t) =   (12.40)

F_G(q') e^{iq'_z z} √n_{q'} [ f_w(z, p_z, {n_q}, {n'_q}, t) − f_w(z, p_z + ħq'_z, {n_q}^−_{q'}, {n'_q}^−_{q'}, t) ].

12.5.3 Closure of the Equation System

Equations (12.39) and (12.40) approximate (12.9) and are used to obtain a closed equation for the electron Wigner function, by applying the steps used for deriving (12.7). The corresponding integral equations are equivalent to (12.11), except for the corrections due to the additional terms e^{−∫_{t''}^t γ(p_z(τ) ∓ ħq'_z/2) dτ}. Their replacement into (12.6) generalizes Eq. (12.12) by the term e^{−∫_{t''}^t γ(p_z(τ) − ħq'_z/2) dτ} associated with the exponents in the second row, and the term e^{−∫_{t''}^t γ(p_z(τ) + ħq'_z/2) dτ} associated with the exponents in the last row. The next step is to perform the trace over the phonon coordinates. However, this operation is not as straightforward as in the case of (12.7), because γ introduces nonlinear terms. The relations (12.15) can be applied only after an additional analysis of the phonon subsystem, which will be discussed below. Here we note that the Boltzmann total out-scattering rate γ̄ is obtained from γ by a formal replacement of the phonon coordinates n_{q''} by their mean values n(q''). In the homogeneous case the quantum-kinetic equation (12.16) is reduced to the Barker-Ferry equation, originally derived with the help of a projection operator analysis. The equation derived here can thus be termed the inhomogeneous Barker-Ferry equation, or the Barker-Ferry equation for a quantum wire.

12.6 Physical Aspects

12.6.1 Heuristic Analysis

The derivation of the two models for the electron Wigner function is viewed here from a physical perspective. The basic concept is to truncate the system of GWF equations at a first or at a second off-diagonal level.

Fig. 12.1 Levinson transitions involve diagonal and FOD elements. The FOD element is marked by an empty square, while n, n' denote the left and right phonon bases {n_q}, {n'_q}

This is asserted by the assumption of a weak electron-phonon interaction. Namely, the scale of the electron-phonon coupling function F, multiplied by the time scale β_t characterizing the interaction, must be a small quantity. An analysis of the weak interaction limit is given in [111]. The initial condition involves noninteracting subsystems, so that all off-diagonal elements are zero. The interaction-induced transitions in the Levinson model are of the type 'diagonal-FOD-diagonal' element. Two kinds of such transitions in the left basis are shown in Fig. 12.1. While the duration of a transition is determined by the phonon energy ħω, the time between two transitions is inversely proportional to F [111]. A single phonon in a mode q' is involved in a single transition. After the transition is completed, the system returns again into a diagonal state. Thus the assumption of weak interaction means that only a single transition occurs at a given time. No other interaction process can begin before the current transition completes. The integral form of (12.7) suggests that the transition duration is given by the difference t − t' in (12.8). The system evolution occurs at a time scale β_e, which is much larger than β_t. This is in accordance with the assumption (12.13) for an equilibrium phonon subsystem (Bloch assumption), which associates a fast process acting to recover the phonon equilibrium between the individual transitions. A feature of the Levinson equation is that the classical limit in (12.8) gives rise to the Boltzmann equation. Thus for very long evolution times the Levinson transitions can be regarded as instantaneous, and the description turns into the Boltzmann model. The main transitions in the Barker-Ferry model are of Levinson type, but it additionally accounts for the coupling between FOD and SOD elements. Figure 12.2 shows a transition between two diagonal elements (n, n') and (n + 1, n' + 1) through the FOD element (n + 1, n'). The latter is modified, however, due to the coupling with four

Fig. 12.2 Barker-Ferry type of transitions

SOD elements, as shown in the figure. The integration involves a second phonon q'' and corresponds to an instantaneous transition between the involved FOD and SOD elements. These transitions are accounted for by the exponent in (12.17). The time limits in the exponent correspond to the duration of the main (Levinson) transition, while due to the classical limit the interaction with the SOD elements is instantaneous. All SOD elements are unified under the sum on q''. The Barker-Ferry model in this way introduces a mechanism which constrains the time duration of the main transitions. Such a mechanism is missing in the Levinson model, where transitions with different duration are equally weighted. An open problem for the model is that the long time (classical) limit does not recover the energy conserving delta function, but gives a Lorentzian [120]. Both models feature enormous computational demands, which rise exponentially with the evolution time.

12.6.2 Phonon Subsystem

The appearance of the Boltzmann out-scattering rate γ̄ in (12.18) can be legitimated in two ways. One widely used physical approach is to replace complicated functional dependencies by macro-parameters, obtained as a result of averaging. Since the phonons are (and remain) in equilibrium, the probability to find n_q phonons in mode q is P_eq(n_q). By using the dependence of γ on {n_q}, it is sufficient to average γ:

γ̄ = Σ_{{n_q}} ∏_q P_eq(n_q) γ   (12.41)

After that, γ̄ replaces γ in (12.12), followed by the steps used between Eqs. (12.13) and (12.15).

There is a less formal approach, suggested in [109], which gives additional insight into the assumptions giving rise to (12.41). The exponent e^{−∫_{t''}^t γ} in (12.12) is expanded into a series up to the K-th order, corresponding to the desired precision. This is followed by the trace operation according to (12.13). Obtained are terms k ≤ K of the type

Σ_{{n_q}} n_{q'} Σ_{q_1} n_{q_1} Σ_{q_2} (n_{q_2} + 1) ⋯ P_eq(n_{q'} + I) ∏_{q ≠ q'} P_eq(n_q) |F_G(q_1)|² |F_G(q_2)|² ⋯ δ_{q_{z1}} δ_{q_{z2}} ⋯   (12.42)

The sum Σ_{{n_q}} involves all natural numbers from zero to infinity, while the sums over q_1, q_2 involve all phonon modes. In this way a particular mode q' is repeated in each consecutive sum. Nevertheless, under proper assumptions discussed in Appendix A.6, all repeating modes of the type n_{q'} n_{q'} ⋯ P_eq(n_{q'}) can be neglected. Then the first factor n_{q'}, corresponding to q' in (12.12), is distributed according to P_eq(n_{q'} + I), where I is ±1 or 0. After averaging with the help of (12.15) it gives n(q') or n(q') + 1. The modes q_1, q_2, … are involved in the products of the type γγ⋯ in the expansion of the exponent. The phonons in these modes are distributed according to P_eq(n_{q_i}) and can be averaged accordingly, so that the equilibrium number n(q_i) replaces the corresponding variable n_{q_i} in (12.42). In this way γ̄ appears in all terms, giving rise to (12.17).
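The averaging step can be made concrete for a single mode. Assuming the equilibrium distribution P_eq(n) = (1 − x)xⁿ with x = e^{−ħω/kT} (the ratio ħω/kT below is a hypothetical value chosen only for illustration), summing n P_eq(n) reproduces the Bose-Einstein occupation n(q) that replaces the phonon coordinate n_q in γ̄:

```python
import math

# Single-mode sketch of the averaging behind (12.41): with the equilibrium
# phonon distribution P_eq(n) = (1 - x) * x**n, x = exp(-hbar*w/kT), the
# mean occupation sum_n n*P_eq(n) equals the Bose-Einstein number
# n(q) = 1/(exp(hbar*w/kT) - 1).  beta_hw below is an assumed value.
def mean_occupation(beta_hw, nmax=2000):
    x = math.exp(-beta_hw)
    return sum(n * (1.0 - x) * x**n for n in range(nmax))

beta_hw = 0.5                              # hbar*omega / kT (hypothetical)
n_bar = mean_occupation(beta_hw)
n_be = 1.0 / (math.exp(beta_hw) - 1.0)     # Bose-Einstein occupation
print(n_bar, n_be)                         # the two values agree
```

The same replacement, mode by mode, is what turns the phonon-coordinate-dependent γ into the Boltzmann rate γ̄.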

Chapter 13

Hierarchy of Kinetic Models

The hierarchy of kinetic models, discussed in Chap. 11, can be derived with the help of the same analysis used to derive the transport equations describing the electron evolution in quantum wires (Chap. 12). The six-dimensional counterpart of (12.4) can be treated in exactly the same way as in Sect. 12.2 to approximate the phonon system. Only the linear potential −eEz of the one-dimensional electric field is now replaced by a very general three-dimensional potential. This leads to the appearance of the integral with kernel V_w on the right hand side of the equation

( ∂/∂t + p/m · ∇_r ) f_w(r, p, {n_q}, {n'_q}, t) = (1/iħ) ( ε({n_q}) − ε({n'_q}) ) f_w(r, p, {n_q}, {n'_q}, t)   (13.1)

+ ∫ dp' V_w(r, p' − p) f_w(r, p', {n_q}, {n'_q}, t)

+ Σ_{q'} F(q') { e^{iq'·r} √(n_{q'} + 1) f_w(r, p − ħq'/2, {n_q}^+_{q'}, {n'_q}, t)

− e^{−iq'·r} √n_{q'} f_w(r, p + ħq'/2, {n_q}^−_{q'}, {n'_q}, t)

− e^{iq'·r} √n'_{q'} f_w(r, p + ħq'/2, {n_q}, {n'_q}^−_{q'}, t)

+ e^{−iq'·r} √(n'_{q'} + 1) f_w(r, p − ħq'/2, {n_q}, {n'_q}^+_{q'}, t) }

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_13


The last two terms in the curly brackets can be obtained from the first two with the help of the following operations:

• the sign in front of the imaginary unit i is switched;
• the number of phonons in the state determined by the summation index q' is increased/reduced (superscript +/−) by one in the right instead of in the left basis;
• n_{q'} is replaced by n'_{q'} in the square root arguments.

For convenience we denote the terms obtained by this operation by i.c. The analysis of the dependence on the phonon coordinates resembles the case of a quantum wire. The equation links an element f_w(…, {n}, {m}, t) with four neighbor elements having one extra or one less phonon in the state determined by the summation index q', in the left or in the right basis. Furthermore, for any state q' the value of n_{q'} can be any natural number, which makes the direct numerical treatment impossible. Due to the trace operation (12.5), the physical information is carried by the elements diagonal with respect to the phonons:

f_w(r, p, t) = Σ_{{n_q}} f_w(r, p, {n_q}, {n_q}, t)   (13.2)

These elements are linked to the FOD elements, which have one phonon difference between the left and the right basis, and so on, which allows one to classify the elements according to their distance from a diagonal element. A truncation at the FOD level repeats all assumptions and approximations related to the phonons, in particular weak interaction and equilibrium phonons, exactly as in the case of a quantum wire. As a result, the dependence on the {n_q} coordinates is replaced by a single number, namely the equilibrium phonon number n(q).

13.1 Reduced Wigner Function

We begin with a set of three equations for the reduced Wigner function f_w and the FOD functions f_1 and f_2, namely

f_w(r, p, t) = f_w(r(p, 0), p, 0) + ∫_0^t dt' [ ∫ dp' V_w(r(p, t'), p' − p) f_w(r(p, t'), p', t')   (13.3)

+ Σ_{q'} F²(q') ( e^{iq'·r(p,t')} f_1(r(p, t'), p^−, t') − e^{−iq'·r(p,t')} f_2(r(p, t'), p^+, t') + i.c. ) ],

f_1(r', p^−, t') = ∫_0^{t'} dt'' e^{−iω_{q'}(t'−t'')} ×   (13.4)

{ ∫ dp'' V_w(r^−(p, t''), p'' − p^−) f_1(r^−(p, t''), p'', t'')

− e^{−iq'·r^−(p,t'')} [ (n(q') + 1) f_w(r^−(p, t''), p, t'') − n(q') f_w(r^−(p, t''), p − ħq', t'') ] }

and

f_2(r', p^+, t') = ∫_0^{t'} dt'' e^{iω_{q'}(t'−t'')} ×   (13.5)

{ ∫ dp'' V_w(r^+(p, t''), p'' − p^+) f_2(r^+(p, t''), p'', t'')

+ e^{iq'·r^+(p,t'')} [ n(q') f_w(r^+(p, t''), p, t'') − (n(q') + 1) f_w(r^+(p, t''), p + ħq', t'') ] },

where

p^∓(τ) = p^∓ = p ∓ ħq'/2;   r^∓(p, t'') = r − ∫_{t''}^{t'} [ p^∓(τ)/m ] dτ.   (13.6)

In particular, r(p, t') corresponds to the case q' = 0 and integration in the limits t and t'. Equations (13.4) and (13.5) correspond to the equations (12.11) averaged over an equilibrium phonon system. The difference is that, while the potential in the latter is linear, the former correspond to a general potential. This leads to Fredholm integral equations of the second kind with kernels V_w and free terms given by the expressions enclosed in the curly brackets. Accordingly, (13.6) is analogous to (12.10), but the momenta remain constant in time due to the lack of the electric force. The integral equations allow to directly account for the weak interaction condition by assuming zero initial conditions. In this way the transitions begin from a diagonal element only and end with a diagonal element, so that the next transition begins again from a separated electron-phonon system, in accordance with condition (12.13). The free terms in (13.4) and (13.5) depend on f_w, so that the iteration terms with the kernel V_w describe the correlations between the electric potential and the electron-phonon interaction. The system can be reduced to a single equation for f_w. Equations (13.4) and (13.5) can be approximated in two ways. In the first, one can use the limit of a slowly varying potential and approximate the kernel V_w in the two equations by the electric force term. Then the equations can be solved exactly and the solutions substituted into (13.3). The obtained equation for the reduced Wigner function accounts for the interactions with both phonons and electric potential on a quantum level. In the second way, the kernel V_w of the equations can be entirely neglected, giving rise to the Wigner-Boltzmann equation. Actually the two ways can be linked into a two-stage process, as explained in the following section.

13.2 Evolution Equation of the Reduced Wigner Function

It is convenient to begin with the integro-differential form of (13.4) and (13.5). The slowly varying potential limit replaces the integral with the Wigner potential by the classical force term, which completes the differential part to the Liouville operator and gives rise to an accelerating force F = eE in (13.6):

p(τ) = p − eE(t − τ),   r^∓(p, t') = r − ∫_{t'}^t [ p^∓(τ)/m ] dτ = r(p, t') ± ħq'(t − t')/(2m).

The obtained equations can be solved explicitly for f_1 and f_2. Furthermore, the sign of the variable q' in the equation for f_1 is switched to −q', by using the fact that n(q) and ω_q are symmetric functions, and the variable p' = p + ħq' is introduced. A replacement in the integro-differential form of (13.3) gives rise to the equation

( ∂/∂t + p/m · ∇_r ) f_w(r, p, t) = ∫ dp' V_w(r, p' − p) f_w(r, p', t)   (13.7)

+ ∫_0^t dt' ∫ dp' { S(p', p, t, t') f_w( r(p, t') + ħq(t − t')/(2m), p'(t'), t' )

− S(p, p', t, t') f_w( r(p, t') + ħq(t − t')/(2m), p(t'), t' ) },

S(p', p, t, t') = [ 2V F_q² / (2πħ)³ ] [ n(q) cos(Ω(p', p, t, t')) + (n(q) + 1) cos(Ω(p, p', t, t')) ],

Ω(p, p', t, t') = ∫_{t'}^t dτ [ ε(p(τ)) − ε(p'(τ)) + ħω_q ] / ħ;   p'(τ) = p' − F(t − τ),

with q = (p' − p)/ħ. This equation generalizes both the Levinson equation and its quantum wire counterpart (12.7) to general electric potentials in the six-dimensional phase space. No approximations in the coherent part of the transport process are introduced. Indeed, if the phonon interaction is neglected, the equation reduces to the regular Wigner equation, which is fully equivalent to the evolution equation for the density matrix. An interesting feature of (13.7) is the nonlocality of the scattering in real space. Indeed, in contrast to the Boltzmann equation, where the distribution function in r, p at time t is linked by the scattering process to other points r, p' at the same spatial location and time instant t, the dependence on the second time t' of r(p, t') in the argument of f_w provides contributions from other spatial points.

13.3 Classical Limit: The Wigner-Boltzmann Equation

Next, we assume that all assumptions about the physical scales giving rise to the classical limit (Appendix A.5) are fulfilled, and apply the limit to (13.7). Formally the limit is expressed in terms of the generalized function

lim_{ħ→0} (1/ħ) ∫_0^∞ dτ e^{(i/ħ)ετ} φ(τ) = φ(0) [ πδ(ε) + iP(1/ε) ].   (13.8)

The limit can be applied to the time integrals of the form

∫_0^t dτ e^{(i/ħ)ετ} φ(τ)   (13.9)

in Eq. (13.7). Alternatively, the same result is obtained if the limit is first taken in Eqs. (13.4) and (13.5), which are then substituted into (13.3). The principal values P obtained from the limit conveniently cancel each other. This can be explained by the fact that (13.7) contains only real terms. Importantly, the limit gives rise to energy and momentum conservation. Indeed, the cosine functions in S, (13.7), do not provide energy conservation even in the homogeneous (Levinson) version of the equation, while momentum conservation is provided only after the interaction process is completed, at the instants between two consecutive interactions. Obtained is an equation where the Wigner operator exists together with the Boltzmann scattering term.
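The action of the limit (13.8) can be illustrated numerically. In the sketch below (Python; the Gaussian test function, the grid, and the parameter a = t/ħ are hypothetical choices), the real part sin(aε)/ε of the finite-time integral acts on a smooth φ(ε) and approaches πφ(0) as a grows, which is the delta-function part of (13.8).

```python
import math

# Numeric sketch of the limit (13.8): the real part of
# (1/hbar) * Int_0^t dtau exp(i*eps*tau/hbar) equals sin(a*eps)/eps with
# a = t/hbar, which acts as pi*delta(eps) on a smooth test function phi
# for large a.  phi and the integration grid are hypothetical choices.
def smeared_delta_action(a, phi, lo=-10.0, hi=10.0, n=200001):
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        e = lo + i * h
        kernel = a if e == 0.0 else math.sin(a * e) / e   # -> pi*delta(e)
        w = 0.5 if i in (0, n - 1) else 1.0               # trapezoid weights
        s += w * kernel * phi(e) * h
    return s

phi = lambda e: math.exp(-e * e)          # smooth test function, phi(0) = 1
val = smeared_delta_action(50.0, phi)     # large a = t/hbar
print(val, math.pi)                       # val approaches pi * phi(0)
```

The broadened kernel narrows as a grows, mirroring how long evolution times sharpen the cosine functions in S into the energy-conserving delta functions of (13.10).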



( ∂/∂t + p/m · ∇_r ) f_w(r, p, t) = ∫ dp' V_w(r, p' − p) f_w(r, p', t)   (13.10)

+ ∫ dp' [ f_w(r, p', t) S(p', p) − f_w(r, p, t) S(p, p') ],

S(p', p) = (V/h³)(2π/ħ) [ |F(q)|² δ( ε(p) − ε(p') − ħω_q ) n(q)

+ |F(−q)|² δ( ε(p) − ε(p') + ħω_{−q} ) (n(−q) + 1) ],   q = (p' − p)/ħ.

|F|² = ħ²F² is the electron-phonon matrix element. As already discussed, the first operator is relevant for nanometer-scale semiconductor structures, while the scattering operator requires picosecond evolution times.


The classical limit of a slowly varying potential recovers the Boltzmann equation, which is at the bottom of the hierarchy in Fig. 11.1. Stochastic approaches for solving (13.10) are derived in the next section for both stationary and evolution cases.

Chapter 14

Stationary Quantum Particle Attributes

Quantum transport in terms of the Wigner-Boltzmann equation has been treated with deterministic and stochastic methods, which have different numerical properties. In general, deterministic methods find the numerical solution of the equation first and then use it to calculate the desired physical averages. These methods evaluate the solution with a high precision, which is very important in regions of interference, where the Wigner function oscillates. Oscillations pose a problem for discretization approaches, in particular for the diffusion term ∇_r f_w in the Liouville operator. Third-order discretization schemes are needed for a precise evaluation of the subthreshold regime of operation of nanometer transistors [121, 122]. It has been shown that the computed I-V characteristics of a resonant-tunneling diode are sensitive to the discretization even for fourth-order schemes [123]. To avoid these problems, both alternative formulations of the equation and alternative numerical methods are developed. An integral form of the equation, which avoids the diffusion operator, has been considered within a numerical scheme with reduced complexity [124]. The Wigner equation has been reformulated by using a spectral decomposition of the classical force field instead of the potential energy [125]. Spectral element methods, which reduce the computational cost and can treat unbounded potentials, have also been developed [126, 127]. The main challenge of the deterministic approaches is the growth of the memory requirements with the dimensionality of the computational problem. In contrast to the Boltzmann equation, where three-dimensional problems are efficiently treated nowadays, because the matrix associated with the scattering operator is relatively sparse as a consequence of the energy and momentum conservation laws, the matrix associated with the Wigner potential is dense.
In such problems stochastic approaches offer the advantage of reduced memory requirements at the expense of longer computational times. The first stochastic methods for the Wigner equation are already three decades old [40, 128, 129]. Inspired by the effective application of the Monte Carlo method to the Boltzmann equation, these methods employ the formal analogy with the Wigner equation to introduce numerical quantum particles [27, 101, 120]. The
© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_14


method of the effective potential [130] retains the concept of Boltzmann particles by associating the quantum information with a generalized electric potential, which becomes a function of the particle momenta [131-133]. Ultra-fast processes in optically excited semiconductors are described by the system of Bloch equations for the current carriers (electrons and holes) and the inter-band polarization. A Monte Carlo method successfully solving this system has been derived [27]. It is remarkable that a particle model developed in the framework of the method is associated with the inter-band polarization, which is a complex-valued quantity related to the coherence of the photo-generation process. Another model interprets the quantum evolution as a Markovian process of scattering of particles, caused by the Wigner potential [41]. Particle trajectories, based on modified Hamilton equations, are formulated with the help of a quantum force [129]. The latter is a spatially nonlocal quantity, determined by the Wigner function, the Wigner potential, and their momentum derivatives. These trajectories can be created and destroyed in the extreme points of the Wigner function [129], and provide a heuristic description of the quantum evolution and in particular of the process of tunneling [128, 134, 135]. A quantum particle model introduces Wigner paths, based on the fact that a phase space point, described by a delta function, evolves by carrying the value of the Wigner function over Newtonian trajectories. The action of the Wigner potential is interpreted as scattering, while the electron-phonon interaction is treated at the full quantum level of the generalized Wigner description [115-117, 136]. Accordingly, the numerical challenges grow enormously with the evolution time. Two decades ago two stochastic approaches demonstrated numerical stability and were applied to structures such as resonant-tunneling diodes.
The affinity method utilizes an operator splitting, where the Wigner equation is discretized in time and the solution is represented by a linear combination of delta functions, which are associated with particles carrying the quantum information, called affinity. The latter is updated by the Wigner potential at the end of the consecutive time steps [137, 138]. In the second approach, the quantum information is carried by the weight, or alternatively by the sign, of the particles [139]. It comprises a variety of methods, which can be derived with the help of the Iteration Approach. In general, Monte Carlo methods can be derived by using the formal numerical rules for solving integral equations. One can evaluate the solution at a given point by developing backward algorithms. The latter ensure controlled precision at the expense of a lack of heuristic interpretation. In particular, the simulation approaches used to analyze the Levinson and Barker-Ferry equations discussed in the previous chapter are obtained exactly in this way. This is not surprising, as the very nature of these equations is characterized by a lack of probabilistic clearness. The Wigner-Boltzmann equation is approached in an alternative way, where the task is first reformulated in terms of functionals of the solution of the adjoint equation. Then the functionals are directly evaluated by the Monte Carlo method, without the necessity to explicitly obtain the Wigner function. The adjoint equation involves a forward evolution of the numerical trajectories and thus the option for a heuristic interpretation [139]. The probabilistic interpretation of the Boltzmann operator can


be then extended towards the Wigner potential term. A particle picture can be associated with any of the variety of algorithms for construction of the numerical trajectories. This gives rise to alternative sets of particle attributes which carry the quantum information. Accordingly, the evolution rules vary with the concrete attributes. In particular, the concept of particle signs, or particle weights, or both of them in conjunction, the rules for their generation/annihilation, for the choice of the free flight, and for the boundary injection give rise to the signed-particle approach, comprising alternative algorithms. The concepts of the signed-particle approach have been further developed for transient transport algorithms. We first consider the application of the Iteration Approach to stationary transport determined by the boundary conditions. We note that the coherent problem under such conditions is ill-defined, due to the zero velocity polarity [140] and due to the existence of bound states [37, 141]. As a result the conventional boundary condition scheme fails, unless scattering is included [142]. A fundamental property of stochastic methods is their independence of the dimensionality of the problem. This fully holds for the stationary problem, which can be of any dimension for a single-particle system, so that the phase space can be six-dimensional. Furthermore, an N-particle Wigner equation can be treated in the same way, so that the considered phase space can become N times six-dimensional. In the following we consider the case of a single-particle phase space r, p, which is important for device modeling.
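The backward evaluation of a solution at a given point, mentioned above, can be sketched on a toy Fredholm equation of the second kind with a constant kernel, for which the Neumann series sums analytically. All numerical choices below (kernel value c, continuation probability, free term f0) are hypothetical illustrations, not the book's algorithm for the Wigner-Boltzmann equation:

```python
import random

# Backward Monte Carlo sketch for a Fredholm equation of the second kind,
#   f(x) = f0(x) + c * Integral_0^1 f(y) dy,   0 <= x <= 1,
# with the toy kernel K(x, y) = c, so the exact solution is known:
# f(x) = f0(x) + c*F0/(1 - c), F0 = Integral_0^1 f0.  Each backward
# transition samples y uniformly, multiplies the weight by
# K/(density * P_cont), and accumulates f0 (the Neumann series terms).
def backward_mc(x, f0, c, p_cont=0.7, n_traj=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_traj):
        score, w = f0(x), 1.0
        while rng.random() < p_cont:     # continue the backward trajectory
            w *= c / p_cont              # kernel / (density * P_cont)
            y = rng.random()             # uniform transition density
            score += w * f0(y)
        total += score
    return total / n_traj

f0 = lambda x: x
est = backward_mc(0.3, f0, c=0.5)
print(est)  # converges to the exact value 0.3 + 0.5*0.5/(1 - 0.5) = 0.8
```

The estimator evaluates f at the single point x without ever storing f on a grid, which is exactly the memory advantage of backward algorithms noted above.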

14.1 Formulation of the Stationary Problem

Stationary transport problems are characterized by the time independence of the external quantities, such as fields and boundary conditions on the contacts, which ensures the time independence of the corresponding physical averages. The time independence guarantees invariance with respect to the choice of the initialization time of the carrier trajectories describing the transport process. The boundary conditions f_b(r_b, p) on the contacts b are usually given by an equilibrium distribution function, where the position dependence enters via the chemical potential determined by the dopant concentration [36]. The task is to evaluate the averaged value ⟨A⟩_Ω of a generic physical quantity A in a given domain Ω ∈ D, where D is the domain of the device. For convenience we skip the subscript and attach the domain indicator θ_Ω to the phase space function A(r, p) representing A. Thus the meaning of the latter becomes 'physical quantity A in Ω'. The averaged value is given by the functional (f_w, A):



⟨A⟩ = ∫_D dr ∫ dp f_w(r, p) A(r, p) = (f_w, A),   (f_w, A) = (g, f_0)   (14.1)

Thus the knowledge of the mean value requires the knowledge of the solution fw (r, p) of the Wigner-Boltzmann equation. The Monte Carlo theory gives the


option to reformulate the problem in terms of the solution g of the adjoint equation, where f_0 is the free term of the integral Wigner-Boltzmann equation. This is expressed by the second equality in (14.1). The derivation of this equation begins with the formulation of the integral form of the stationary Wigner-Boltzmann equation. In particular, f_0 is expressed explicitly through f_b.

14.1.1 The Stationary Wigner-Boltzmann Equation

The stationary equation reads:

v(p) · ∇_r f_w(r, p) = ∫ dp' V_w(r, p' − p) f_w(r, p') + ∫ dp' f_w(r, p') S(p', p) − f_w(r, p) λ(p)   (14.2)

We recall that v(p) = p/m and m are the velocity and the effective mass of the carrier, S(p', p) is the frequency for scattering from (r, p') to (r, p), and λ(p) = ∫ S(p, p') dp' is the total out-scattering rate. The characteristics of the reduced Liouville operator are the carrier trajectories:

r(t) = r + v(p)t,   p(t) = p   (14.3)

The initialization of the stationary trajectory (r(t), p(t)) is given by the point (r, p). The time invariance allows to conveniently choose the initialization time to be 0. We will also use the antisymmetry of the Wigner potential, V_w(r, −p) = −V_w(r, p). It allows the decomposition

V_w(r, p) = V_w^+(r, p) − V_w^+(r, −p),   γ(r) = ∫ dp V_w^+(r, p),

where V_w^+ = V_w if V_w > 0 and is zero otherwise, or equivalently V_w^+ = V_w ξ(V_w), with ξ the Heaviside step function. The physical interpretation of the function γ is discussed in the next section. By adding γ(r) f_w(r, p) to both sides of (14.2), one obtains:

( v(p) · ∇_r + μ(r, p) ) f_w(r, p) = ∫ dp' Γ(r, p', p) f_w(r, p'),   (14.4)

Γ(r, p', p) = V_w^+(r, p − p') − V_w^+(r, p' − p) + S(p', p) + γ(r) δ(p − p'),   (14.5)

μ(r, p) = λ(p) + γ(r)   (14.6)
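The decomposition can be made concrete on a discrete momentum grid. The antisymmetric sample values of V_w below are hypothetical; the sketch recovers V_w from its positive part and accumulates γ as the sum of V_w^+ over momenta (an integral in the continuum):

```python
# Sketch of the splitting Vw = Vw+(p) - Vw+(-p) at a fixed r, on a
# symmetric momentum grid, together with gamma(r) = sum over p of Vw+.
# The sampled antisymmetric Vw values below are hypothetical.
def decompose(vw):
    # vw: dict p -> Vw(r, p), with the antisymmetry Vw(r, -p) = -Vw(r, p)
    vw_plus = {p: (v if v > 0.0 else 0.0) for p, v in vw.items()}
    gamma = sum(vw_plus.values())   # total Wigner out-rate at this r
    return vw_plus, gamma

vw = {-2: -0.3, -1: 0.1, 0: 0.0, 1: -0.1, 2: 0.3}   # antisymmetric sample
vw_plus, gamma = decompose(vw)

# Vw is recovered pointwise as Vw+(p) - Vw+(-p)
for p in vw:
    assert abs(vw[p] - (vw_plus[p] - vw_plus[-p])) < 1e-12
print(gamma)  # 0.4
```

The antisymmetry is what guarantees that the negative part of V_w at p is exactly V_w^+ evaluated at −p, so the single function V_w^+ carries all the information.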


Integro-differential equations of the type of (14.4) have already been considered in the chapter on classical transport, so that the corresponding integral form will be formulated directly.

14.1.2 Integral Form

The integral equation is formulated with the help of the backward trajectory (r(t'), p(t')), t' < 0, initialized by (r, p):

f_w(r, p) = ∫_{t_b^−}^0 dt' ∫ dp' f_w(r(t'), p') Γ(r(t'), p', p(t')) exp( − ∫_{t'}^0 μ(r(y), p(y)) dy ) + f_0(r, p),   (14.7)

f_0(r, p) = f_b( r(t_b^−), p(t_b^−) ) exp( − ∫_{t_b^−}^0 μ(r(y), p(y)) dy ),   t_b^− = t_b^−(p, r)   (14.8)

The physical interpretation of this equation follows from the analysis of its Boltzmann counterpart, obtained by setting Vw = 0. We recall that in this case the exponent in (14.7) becomes the probability for a free flight in the time interval (t′, 0) over the trajectory β(r, p) initialized by (r, p). The solution f in this point obtains two contributions. The term f0 is the part of the boundary function fb corresponding to particles 'arriving' over β, accordingly reduced by the processes of scattering, which are accounted for by the exponent in (14.8). The other term accounts for the contributions at previous times t′ to f in the point r(t′), which S scatters from everywhere in the momentum space to the particular value p(t′) (on the trajectory β). A subsequent multiplication with the exponent filters out the part of the particles which will be scattered out of the trajectory during the evolution in the interval (t′, 0). Now we enable the Wigner potential and follow the same considerations. The vivid analogy between the classical and quantum counterparts suggests a heuristic interpretation of the quantum-related terms. Vw+ in (14.5) can be clearly related to scattering caused by the Wigner potential. This type of scattering is local in space, as r remains fixed during the transition from p′ to p, and instantaneous, as is its classical counterpart. The function S in general depends on r, because the impurity distribution or the lattice temperature can be position dependent, but this dependence is kept implicit in the notation. Accordingly, the function γ plays the role of a total out-scattering rate due to the Wigner potential, in a clear analogy with λ. Then the function γδ can be related to a self-scattering process. A fundamental difference with the classical equation is, however, the existence of a term with a minus sign, which precludes a direct probabilistic interpretation.


14 Stationary Quantum Particle Attributes

The boundary conditions enter the equation explicitly via f0. Together with the solution g of the equation adjoint to (14.7), f0 gives rise to the desired expression for the expectation value of the physical averages.

14.2 Adjoint Equation

Theorem 14.1 The equation adjoint to (14.7), corresponding to the physical task (14.1), is:

g(r′, p′) = ∫ dp ∫_{0}^{∞} dt θD(r′) Γ(r′, p′, p) exp(−∫_{0}^{t} μ(r′(y), p(y)) dy) g(r′(t), p(t)) + A(r′, p′)    (14.9)

Here (r′(t), p(t)) is the forward (t > 0) trajectory initialized by (r′, p), and A is the phase space function representing the physical quantity in the domain Ω. In the following we present the proof. To identify the kernel in (14.7), the equation must be expressed as a Fredholm integral equation of the second kind.

f(r, p) = ∫ dr′ ∫ dp′ f(r′, p′) K(r′, p′, r, p) + f0(r, p)    (14.10)

Thus an integration over r′ must be introduced and accordingly compensated by a delta function in position. The kernel becomes:

K(r′, p′, r, p) = ∫_{−∞}^{0} dt′ Γ(r′, p′, p(t′)) exp(−∫_{t′}^{0} μ(r(y), p(y)) dy) δ(r′ − r(t′)) θD(r′)    (14.11)

The indicator of the simulation domain θD ensures the correct lower bound tb−(r, p) in the time integral. The adjoint equation has the same kernel, but the integration is carried out over the unprimed variables. In accordance with (14.1), the free term is the phase space function associated to the physical quantity of interest.

g(r′, p′) = ∫ dr ∫ dp K(r′, p′, r, p) g(r, p) + A(r′, p′)    (14.12)

The solution g obviously depends on the function A, which is not written explicitly in favor of plain notation.


Equation (14.12) is formulated in terms of backward trajectories. To complete the derivation we need a forward parametrization, which is obtained with the help of the change of the integration variables r, p according to r = r(t′), p = p(t′). We recall that, according to the Liouville theorem, dr dp = dr′ dp′. Now the r integration can be carried out due to the delta function in K. A replacement of p′ by p and −t′ by t gives rise to the compact form (14.9). The solution of (14.9) can be expressed as a series with the help of the iterative replacement of the equation into itself. It is convenient to use the extended version, (14.12), which leads to (14.9) after the integration over the spatial variable. By denoting Q = (r, p), the extended series is written as

g(Q) = ∫ dQ′ [ δ(Q − Q′) + Σ_{n=1}^{∞} K^n(Q, Q′) ] A(Q′) = (I − K)^{−1} A,    (14.13)

where K^n(Q, Q′) = ∫ dQ1 K(Q, Q1) K^{n−1}(Q1, Q′) and I is the identity operator. The terms of the iterative expansion of (14.9) are equivalent to those of (14.13), with the only difference that all spatial integrations are already performed. In the following we consider the series (14.13). This allows us to clearly derive a recursive relation between the consecutive terms, which is not the case with (14.9), where the spatial integration entangles the variables.
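The Neumann-series structure of (14.13) can be sampled by a random walk, each step carrying a weight kernel-over-transition-density. The following toy example is an illustration only, not the book's algorithm; the kernel K(x, y) = c·2y on [0, 1] with free term A = 1 is an assumption chosen so that the exact solution, g(x) = 1/(1 − c), is known.

```python
import random

# Toy Monte Carlo evaluation of g = sum_n K^n A for a Fredholm equation of
# the second kind, g = K g + A, mirroring the iterative series (14.13).
#   K(x, y) = c * 2y on [0, 1],  A(x) = 1  =>  exact g(x) = 1 / (1 - c).
random.seed(1)
c = 0.5
N = 100_000          # number of independent chains
n_max = 30           # truncation order; c^30 is negligible here

def chain_estimate():
    """One realization: sum of weighted scores A(y_n) along a random chain."""
    w, total = 1.0, 1.0            # the n = 0 term contributes A(x0) = 1
    for _ in range(n_max):
        y = random.random()        # transition density p(y) = 1 on [0, 1]
        w *= c * 2.0 * y           # weight K(x, y) / p(y)
        total += w                 # score A(y) = 1 for the K^n A term
    return total

estimate = sum(chain_estimate() for _ in range(N)) / N
exact = 1.0 / (1.0 - c)           # = 2.0
assert abs(estimate - exact) < 0.02
```

Each chain is one realization of the series; the sample mean over N chains estimates g, exactly the pattern used below for the mean values ⟨A⟩.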

14.3 Iterative Presentation of the Mean Quantities

The mean value ⟨A⟩ is calculated according to the second relation in (14.1).

⟨A⟩ = ∫_D dr ∫ dp fb(r(tb−), p(tb−)) exp(−∫_{tb−}^{0} μ(r(y), p(y)) dy) g(r, p)    (14.14)

tb− and the backward trajectory (r(t), p(t)) are determined by (r, p). Since fb is defined only on the boundary ∂D, a transform of the volume integral into an integral over the boundary is needed. The analogy with the case of stationary classical transport allows us to repeat the already used steps to reformulate (14.14).

⟨A⟩ = ∫_{∂D} dσ(rb) ∫_{P+} dpb ∫_{0}^{∞} dt0 |v⊥(pb)| fb(rb, pb) exp(−∫_{0}^{t0} μ(rb(y), pb(y)) dy) g(rb(t0), pb(t0))    (14.15)


A replacement of g with the series (I − K)^{−1} A, (14.13), gives rise to the following iterative expansion for the mean value:

⟨A⟩ = (v⊥ fb, (I − K̃)^{−1} Ã) = Σ_{i=0}^{∞} ⟨A⟩_i    (14.16)

Here the operator K̃ is obtained from K by adopting the exponent standing to its left and releasing its own exponent to the next K̃ in the repeating term of the consecutive iterations. Accordingly, Ã = e^{(···)} A, where e^{(···)} is the exponent nearest to A, i.e., literally the last exponent in the iterative term, placed to the left of A. In this way the zeroth order term ⟨A⟩_0 = (v⊥ fb, Ã) is given by the right hand side of (14.15), with A(rb(t0), pb(t0)) replacing g. The first term is:

⟨A⟩_1 = ∫_{∂D} dσ(rb) ∫_{P+} dpb ∫_{0}^{∞} dt0 ∫ dp1 ∫_{0}^{∞} dt1 |v⊥(pb)| fb(rb, pb) ×
exp(−∫_{0}^{t0} μ(rb(y), pb(y)) dy) θD(rb(t0)) Γ(rb(t0), pb(t0), p1) ×
exp(−∫_{0}^{t1} μ(r1(y), p1(y)) dy) A(r1(t1), p1(t1))    (14.17)

The trajectory (r1(t), p1(t)) is initialized by (rb(t0), p1). The next terms in (14.16) are obtained in the same way.

14.4 Monte Carlo Analysis

The task is to decompose the repeating K̃ of (14.16), which as a rule ends with Ã, into products of conditional probabilities and corresponding weights. We will frequently refer to (14.17) to obtain a heuristic picture of the Monte Carlo procedure.

14.4.1 Injection at Boundaries We begin with the boundary term, which initiates all components of the series for the mean value. Because the boundary term is the same as in the classical counterpart, we can use the same normalization factors

% dp|v⊥ (p)|fb (p, rb ), Φ = j⊥ (r)dσ (r), (14.18) j⊥ (rb ) = P+

∂D

14.4 Monte Carlo Analysis

161

giving the normal component of the density of the carrier flux entering D, as well as the total flux. the boundary term pb (rb , pb ) =

j⊥ (rb ) |v⊥ (pb )|fb (rb , pb ) = pb1 (rb )pb2 (rb , pb ) Φ j⊥ (rb )

is then rewritten as a product of the same conditional probabilities used in the classical task for initialization of the numerical trajectories. We also recall that the explicit knowledge of Φ can be replaced by the knowledge of the density in a given subdomain of D.
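The factorized sampling pb = pb1 · pb2 can be sketched as follows. This is an illustration under assumptions, not the book's code: two discrete boundary cells with given fluxes stand in for ∂D, and fb is taken Maxwellian, so that the flux-weighted momentum density |v⊥| fb ∝ p·exp(−p²/2) is a Rayleigh distribution, sampled by inversion.

```python
import math, random

# Sketch of boundary injection: choose the boundary cell from p_b1
# (proportional to its flux j_perp) and the entering momentum from p_b2
# (the flux-weighted Maxwellian), as in Sect. 14.4.1.
random.seed(7)
j_perp = [3.0, 1.0]                     # flux through each boundary cell (toy)
Phi = sum(j_perp)                       # total injected flux

def sample_injection():
    # p_b1: boundary cell proportional to its flux
    u = random.random() * Phi
    cell = 0 if u < j_perp[0] else 1
    # p_b2: p * exp(-p^2/2) (Rayleigh), inverted as p = sqrt(-2 ln u)
    p = math.sqrt(-2.0 * math.log(1.0 - random.random()))
    return cell, p

samples = [sample_injection() for _ in range(100_000)]
frac_cell0 = sum(1 for c, _ in samples if c == 0) / len(samples)
mean_p = sum(p for _, p in samples) / len(samples)
assert abs(frac_cell0 - 0.75) < 0.01                  # j_perp[0] / Phi
assert abs(mean_p - math.sqrt(math.pi / 2)) < 0.01    # Rayleigh mean
```

The same two-step construction generalizes to a continuous boundary by inverting the cumulative flux along ∂D.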

14.4.2 Probability Interpretation of K̃

K̃ is multiplied and divided by μ to obtain:

K̃(r′, p′, p, t) = pt(t, r′, p′) θD(r′(t)) Γ(r′(t), p′(t), p)/μ(r′(t), p′(t)),

pt(t, r′, p′) = μ(r′(t), p′(t)) exp(−∫_{0}^{t} μ(r′(y), p′(y)) dy)

The factor pt is well known. It gives the probability for a free flight until the time of scattering t, under the condition that the total scattering rate is μ. The proper normalization is proved by a direct integration over t in the limits (0, ∞). The initial point of the trajectory is (r′, p′). The end point (r′(t), p′(t)) gives the just-before-scattering coordinates, which determine the conditional probabilities composing the next term, Γ(r′(t), p′(t), p)/μ(r′(t), p′(t)). The latter has the meaning of a generator of the after-scattering momentum p. It can be rewritten as follows:

Γ(r′, p′, p)/μ(r′, p′) = pλ(r′, p′) pph(p′, p) + pγ(r′, p′) × 3 [ (1/3) pw+(r′, p − p′) − (1/3) pw−(r′, p′ − p) + (1/3) pδ(p − p′) ]

pλ(r′, p′) = λ(p′)/μ(r′, p′),    pγ(r′, p′) = γ(r′)/μ(r′, p′),    pph(p′, p) = S(p′, p)/λ(p′),    pw±(r′, p) = Vw+(r′, p)/γ(r′)    (14.19)

Here pw− = pw+ is introduced to assist the Monte Carlo analysis. According to Eq. (14.6), pλ and pγ are two complementary probabilities, which can be used to select either pph, or the term in the brackets in (14.19). They determine the type of interaction. The choice of Boltzmann scattering is according


to pλ, while with the probability pγ we select the interaction with the Wigner potential. Accordingly, these processes are complementary. They cannot happen simultaneously and generate consecutive events during the time evolution. The interaction with the Wigner potential comprises the three terms in the brackets. With a probability p2 = 1/3 one of the three probabilities pw+, pw−, or pδ is chosen to generate the after-scattering state (r′, p). In this way the action of the Wigner potential is interpreted as scattering by one of these probabilities. Details are discussed in the next section. Here we conclude that the probability densities composing K̃ generate transitions between the phase space points (r′, p′) and (r′(t), p). This is interpreted as the evolution of a particle subject to a drift and scattering process. In this way a considerable part of K̃ is used to define the transition probability

Ke = pt ( pλ pph + pγ ( p2 pw+ − p2 pw− + p2 pδ ) ),    (14.20)

for the generation of the numerical trajectories, which are associated with a sequence of events composing the evolution of numerical particles. The remaining term in K̃ is interpreted as a weight wθD = (±3)^i θD. The power i depends on the type of interaction: i = 0 stands for the case of Boltzmann scattering, which implies w = 1. In the case of Wigner scattering we have i = 1 and a weight w = ±3, where the minus sign is generated by pw− in (14.19). w appears as a weight factor after each iteration of K̃. Finally, θD is unity if the particle trajectory belongs to D and is zero otherwise.
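One evolution step built from pt and the branch selection above can be sketched as follows. This is a minimal illustration under assumptions (constant rates, momentum dependence ignored), not the book's implementation.

```python
import math, random

# One step of the numerical trajectory: a free flight sampled from p_t,
# then the interaction type chosen as in (14.19)/(14.20), with the
# weight update w -> (+/-3) * w for the Wigner branches.
random.seed(3)
lam, gamma = 1.0, 2.0          # Boltzmann and Wigner out-scattering rates
mu = lam + gamma               # total rate, Eq. (14.6)

def step(weight):
    """Return (free-flight time, event label, updated weight)."""
    t = -math.log(1.0 - random.random()) / mu      # free flight from p_t
    if random.random() < lam / mu:                 # p_lambda: Boltzmann
        return t, "phonon", weight                 # momentum from S, w unchanged
    r = random.random()                            # p_gamma: Wigner potential
    if r < 1.0 / 3.0:
        return t, "pw+", 3 * weight                # generated via p_w^+
    elif r < 2.0 / 3.0:
        return t, "pw-", -3 * weight               # p_w^-: sign flips
    return t, "delta", 3 * weight                  # self-scattering p_delta

# The mean free-flight time must be 1/mu, and each Wigner branch is
# selected with probability p_gamma / 3.
results = [step(1) for _ in range(200_000)]
times = [s[0] for s in results]
events = [s[1] for s in results]
assert abs(sum(times) / len(times) - 1.0 / mu) < 0.01
assert abs(events.count("pw-") / len(events) - (gamma / mu) / 3.0) < 0.01
```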

14.4.3 Analysis of Ã

We recall that any iteration term completes with Ã, which is the product of the last exponent in the term with the function A. In particular, in the case of the first iteration term ⟨A⟩_1, Ã is represented by the last row in (14.17). After a renormalization with μ, Ã can be explicitly written as:

Ã = pt(t, r, p) A(r(t), p(t))/μ(r(t), p(t))    (14.21)

Furthermore, it is the integrand of an integral with respect to the time t, as can be seen in particular in (14.17). This reveals that the random variable ψA associated with Ã is the term A/μ. The latter is evaluated at the end of a free flight, generated by pt. We recall that the physical average is calculated in the domain Ω, so that A is zero outside this domain. Thus if the endpoint of the free flight is outside Ω, the random variable is zero.


The alternative way of representing ψA is obtained by a time integration by parts, as in the classical case.

Ã = pt(t, r, p) ∫_{0}^{t} dy A(r(y), p(y))    (14.22)

Thus ψA is obtained from the integral over the part of the Newtonian trajectory which belongs to Ω. These two representations give rise to the Before-Scattering and the Trajectory Integral algorithms in the classical limit of the equation. In this case the weight is exactly w = 1. In the general case, when the quantum component of the kernel K̃ is enabled, the weight w ≠ 1. Two basic Monte Carlo algorithms are derived for this case.

14.5 Stochastic Algorithms

14.5.1 Stationary Wigner Weighted Algorithm

The stationary Wigner Weighted algorithm belongs to the class of Markov chain Monte Carlo algorithms for the construction of numerical trajectories. Its main peculiarity is the accumulation of quantum weights during the trajectory construction, which contribute to the random variable associated to the algorithm.

14.5.1.1 Weight Generation

We first consider the auxiliary term (14.17). The numerical trajectories for the evaluation of ⟨A⟩ are constructed with the help of pb, pt, and the probability components of the transition probability Ke. The random variable ψ1 = Φ θD (±3)^i ψA is calculated for any particular trajectory. The sample mean value over N such trajectories evaluates ⟨A⟩_1 in (14.16). This fact is directly generalized for any term ⟨A⟩_n. The corresponding random variable ψn is:

ψn = Φ Π_{k=1}^{n} θDk (±3)^{ik} ψA = Φ Wn ψA,    ψ̄A = Σ_n ψn    (14.23)

The estimator ⟨A⟩_n ≃ Σ_{l=1}^{N} ψn,l/N evaluates ⟨A⟩_n over l = 1, · · · , N trajectories. According to (14.16), such estimates have to be performed for any n and then added. One observes that if a trajectory leaves D after k iterations, then ψn = 0 for all n > k. It follows that a single trajectory, simulated until leaving D, can be used to estimate simultaneously all terms in the sum (14.16). In this way a trajectory


beginning and ending on the boundary of D becomes an independent realization of the random variable ψ̄A in (14.23). The physical picture corresponds to a particle injected from the boundary, which carries the information via the weight accumulated in ψ during the evolution. The following stochastic algorithm is derived in this way.

Algorithm 14.1
1. Initialization of the stochastic variables: the value of ν, which evaluates ⟨A⟩, is set to zero; the value of N is chosen; pt, pλ, pγ, pph, pw±, and pδ are calculated; the values of S and λ are computed; l is set to 1.
2. The initial point of the trajectory l is chosen according to pb(rb, pb) and the weight W is set to 1.
3. The time of the free flight is chosen according to pt. The value of ψA is calculated in accordance with (14.21) or (14.22) at the end of each free flight and then, after a multiplication by W, is added to ν. If the trajectory leaves the simulation domain and l < N, increase l by unity and go to Step 2; if l = N, go to Step 5.
4. A Boltzmann type of scattering is chosen with a probability pλ; otherwise the scattering is of a quantum type. In the former case the after-scattering momentum is chosen according to S. In the latter case, with a probability 1/3 the after-scattering state is chosen by one of the probabilities pw+, pδ, or pw−. In the first two cases W is multiplied by 3, while in the case of a choice of pw− the weight is multiplied by −3. The procedure is continued from Step 3.
5. The value of ⟨A⟩/Φ is evaluated by ν/N. The value of Φ is an input quantity known from physical considerations, as discussed in Sect. 14.4.1.
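The trajectory loop of Algorithm 14.1 can be sketched on a toy 1D problem. The setup below is an assumption for illustration, not the book's code: constant velocity, λ = γ = 0 (so the weight machinery is present but W stays 1), and an added self-scattering rate α, so that the trajectory-integral estimate (14.22) of the time spent in Ω = [a, b] must equal (b − a)/v per injected particle.

```python
import math, random

# Toy driver for the Algorithm 14.1 loop: injection, free flights from p_t,
# accumulation of W * psi_A into nu, exit through the boundary.
random.seed(11)
L, v = 1.0, 1.0                 # domain [0, L], constant velocity
a, b = 0.2, 0.7                 # subdomain Omega where A = 1
alpha = 5.0                     # self-scattering rate (lam = gamma = 0 here)
mu = alpha                      # total rate, cf. Eq. (14.6)

N, nu = 2000, 0.0               # trajectories and the estimator accumulator
for _ in range(N):
    x, W = 0.0, 1.0             # injected at the left boundary, weight 1
    while x < L:
        t = -math.log(1.0 - random.random()) / mu    # free flight from p_t
        x_new = min(x + v * t, L)
        # psi_A by (14.22): time the flight segment spends inside Omega
        overlap = max(0.0, min(x_new, b) - max(x, a)) / v
        nu += W * overlap       # Step 3: W * psi_A added to nu
        x = x_new
        # Step 4 would update W (or the momentum) here; with
        # lam = gamma = 0 only self-scattering occurs and W stays 1.
estimate = nu / N
assert abs(estimate - (b - a) / v) < 1e-6
```

Because the flight segments partition the crossing of the domain, the estimate is exact here; enabling the Wigner branches would multiply W by ±3 at Step 4, reproducing the weight accumulation discussed next.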

14.5.1.2 Weight Accumulation

We focus on the coherent mode of transport, where (14.6) is considered under the condition λ = 0, μ = γ. In this case each scattering event with the Wigner potential terms in (14.19) changes the weight and eventually the sign of the particles in (14.23), as illustrated in Fig. 14.1. The mean weight W̄ accumulated by particles during the evolution is estimated as follows:

Theorem 14.2 The mean accumulated weight W̄ depends exponentially on the dwelling time T and the out-scattering rate γ:

W̄ ≃ exp(2γT)    for    Tγ → ∞    (14.24)

In the following we present the proof. Denote by T the sum of all free flight times of an injected particle before it leaves the domain of the device. If their number is n, the number of the scattering events is n − 1. Because the scattering frequency is


Fig. 14.1 Two possible scenarios for the evolution of the weight. The position of the circles on the time line is associated with the instants of interaction with the Wigner potential. The increase of their weight is visualized by the growth of the diameter of the circles, while the sign is indicated by the black/white color

γ, it holds n ≃ Tγ. For large n the last relation becomes an equality, giving rise to the estimate

W̄ = 3^n = (1 + 2γ/γ)^n = (1 + 2γT/n)^n ≃ exp(2γT).

This result is in accordance with the exponential growth of the variance of the stochastic approaches for the evaluation of Feynman integrals [143]. The typical scales involved in modern nanoelectronic devices are lengths of ≃ 10 nm, times of ≃ 1 ps, and electric potentials of 100 meV, characterized by γ ≃ 10^15 s^−1. Then the exponential factor is evaluated at γT = 1000, which hinders the use of the algorithm for realistic devices. The algorithm has been successfully applied only to much lower potential barriers [139]. The next algorithm is suggested for solving the problem with the accumulated weight.
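The order of magnitude of the accumulated weight can be checked with the quoted device scales. This is an illustration only: with n ≃ γT interaction events the weight 3^n equals exp(γT·ln 3), which grows exponentially in γT and stays below the exp(2γT) estimate of (14.24).

```python
import math

# Order-of-magnitude check of the exponential weight growth in Theorem 14.2,
# using the scales quoted for nanoelectronic devices (gamma*T = 1000).
gamma, T = 1.0e15, 1.0e-12      # out-scattering rate [1/s] and dwelling time [s]
n = gamma * T                   # expected number of Wigner interactions
log_weight = n * math.log(3.0)  # log of the accumulated weight 3^n

assert abs(n - 1000.0) < 1e-6                       # the gamma*T = 1000 of the text
assert gamma * T < log_weight < 2.0 * gamma * T     # exponential, below exp(2*gamma*T)
```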

14.5.2 Stationary Wigner Generation Algorithm

The stationary Wigner Generation algorithm can be viewed as a Markov branching chain Monte Carlo approach. The latter gives rise to branched trajectories, which are typical for nonlinear problems. Nevertheless, trajectory branching appears to be a vivid concept for the solution of the considered linear Wigner equation. In particular, the absolute value of the weight remains unity, which allows one to associate a particle to each trajectory branch.

14.5.2.1 Particle Generation

The algorithm generalizes the idea of the method of trajectory splitting, known also as particle splitting and used in classical transport problems for statistical


Fig. 14.2 Generation of signed particles along the time line of the quantum evolution. Any particle, generated or injected from the boundary, contributes by its weight W = ±1, or equivalently its sign, to the statistical estimate

enhancement. Here, however, the criterion for splitting is not the entrance of the trajectory into a rarely visited region, but the accumulated weight, provided that it exceeds a chosen threshold value. The freedom in the choice of the way of splitting allows a variety of strategies; in particular, the absolute value of the weight of the split particles can be kept unity [144]. In the latter case the splitting occurs at the instants of interaction with the Wigner potential, which formally means a replacement of the probability p2 = 1/3 in (14.19) by unity and an application of the components of the kernel to generate three new states. This changes entirely the interpretation of the quantum interaction [145]. Now an interacting particle in the phase space point Q′ = (r′, p′) gives rise to three particles, initialized in points determined by (14.19). The interaction is local in space, so that all three particles have the same position r′. The phase space location of one of the particles is again Q′, due to the delta function in (14.19), so that we can alternatively say that the interacting, or parent, particle survives. Two other particles are generated in Q± = (r′, p±). Because the parent trajectory branches, the absolute weight remains unity, but can change sign, as illustrated in Fig. 14.2. The branching of the numerical trajectory means a splitting¹ of the kernel in (14.17) into three integrals. Because all of them need to contribute to the statistical estimate, all particles have to be further evolved to account for the next iteration with the kernel. Then all three generated particles give rise to novel processes of splitting, and so on. After each interaction, or equivalently particle splitting, or equivalently numerical trajectory branching, each branch contributes to the estimate of ψ̄A the quantity Φ(±1)ψA, which, in accordance with (14.23), depends on the particular value of the physical quantity of interest, multiplied by the particle sign. In this way the weight is replaced by the sign as a quantum particle attribute.

¹ The phrases 'trajectory branching' and 'particle splitting' are used as synonyms in the literature. The former stems from the Monte Carlo theory, while the latter is related to the application of the Monte Carlo method.
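A single generation event of the signed-particle picture can be sketched as follows. The state handling and the sampling density are simplifying assumptions for illustration; the point is the structure: parent survives, two offspring appear at shifted momenta, one with flipped sign, and all weights stay ±1.

```python
import random

# One generation event: a parent at (r, p, sign) is replaced by three
# unit-weight signed particles, as in Sect. 14.5.2.1.
random.seed(5)

def generate(particle, sample_pw):
    """Replace a parent (r, p, sign) by three unit-weight signed particles."""
    r, p, sign = particle
    q_plus = sample_pw()        # momentum offset drawn from p_w^+
    q_minus = sample_pw()       # p_w^- is the same density; the sign flips below
    return [
        (r, p, sign),                   # parent survives (delta term)
        (r, p + q_plus, sign),          # generated via p_w^+
        (r, p - q_minus, -sign),        # generated via p_w^-: opposite sign
    ]

offspring = generate((0.5, 1.0, +1), sample_pw=lambda: random.gauss(0.0, 1.0))
assert len(offspring) == 3
assert sum(s for _, _, s in offspring) == 1   # net signed weight is conserved
```

The conservation of the net sign reflects the normalization of the bracket in (14.19): the pw+ and δ contributions enter with the parent's sign and the pw− contribution with the opposite one.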

14.5.2.2 Particle Annihilation

Thus far, the presented algorithm does not offer numerical advantages. From a practical point of view, instead of the weight of the particles, now their number increases, which is an even more difficult numerical problem. A way to reduce the particle number is to use the fact that a single phase space point initializes a single Newtonian trajectory. Thus two particles at the same phase space point have a common probabilistic future and can be merged. Instead of creating particles with a weight of two, only particles with opposite sign are merged. As their contribution to ψ̄A is zero, their future simulation can be skipped. The procedure invokes the picture of particle annihilation. The numerical implementation of the latter involves a grid in the phase space. The position of each particle is approximated by the nearest grid node, where the annihilation occurs. The above stochastic analysis of the process of coherent transport provides a picture of particles with the following attributes: the evolution occurs on field-less Newtonian trajectories, the quantum interaction is interpreted as generation, and particles with positive and negative sign cancel each other.
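The grid-based annihilation step can be sketched as follows. Grid resolution and particle data are assumptions for illustration; the essential operation is the cancellation of opposite signs within each cell.

```python
from collections import defaultdict

# Minimal sketch of the annihilation step: particles are binned to the
# nearest phase-space grid node and opposite signs cancel per cell.
def annihilate(particles, dx=0.1, dp=0.1):
    """Return the surviving particles after pairwise cancellation per cell."""
    net = defaultdict(int)
    for x, p, sign in particles:
        node = (round(x / dx), round(p / dp))   # nearest grid node
        net[node] += sign                        # opposite signs cancel here
    survivors = []
    for (ix, ip), s in net.items():
        survivors += [(ix * dx, ip * dp, +1 if s > 0 else -1)] * abs(s)
    return survivors

particles = [(0.501, 0.2, +1), (0.503, 0.21, -1),   # same cell: annihilate
             (0.9, 0.0, +1), (0.9, 0.0, +1)]        # same cell, same sign: kept
out = annihilate(particles)
assert len(out) == 2 and all(s == +1 for _, _, s in out)
```

Only the net signed count per cell survives, which is exactly the information needed by the stored-particle densities of the next section.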

14.5.2.3 Inclusion of Scattering

It is easy to develop a picture for the case of mixed transport because, as already discussed, the Boltzmann scattering processes are complementary to the quantum interaction. Thus, according to (14.6), the former occur with a frequency λ and the latter with a frequency γ. Only the former change the trajectory of an evolving particle, as depicted in Fig. 14.3.

14.5.3 Asymptotic Accumulation Algorithm

The simulation of the evolving particles can be consecutive, or only a limited number of particles can be processed simultaneously. In contrast to classical particles, which are simulated by the Single-Particle method from the boundary injection to the exit through the boundary, quantum particles cannot be removed from the simulation

Fig. 14.3 Processes of scattering and quantum particle generation along the time line of an evolving particle


domain. Indeed, if one simulates the boundary-to-boundary evolution of a quantum particle, the generated secondary particles can be 'stored' in the phase space for further processing, a property which can be directly related to the lack of a time origin in stationary problems. Their processing, however, creates ternary particles which need to be further processed, and so on. Similarly, the accumulated weight of the Wigner Weighted algorithm depends exponentially on the dwelling time in the simulation domain. This makes it necessary to periodically store a part of the weight during the trajectory evolution, so that the same problem arises. The simulation domain cannot be emptied of particles, or alternatively of the accumulated weight, so that one cannot reach the next step of the algorithm, namely the injection of the next boundary particle. Thus the classical algorithm is not applicable. Furthermore, the problem cannot be solved by heuristic considerations. We derive an algorithm [139], which asymptotically computes ⟨A⟩ by using a reformulation of the iteration series (14.16).

Theorem 14.3 The averaged value ⟨A⟩ of the physical quantity A can be represented as

⟨A⟩ = Σ_{k=0}^{∞} ( v⊥ fb, [(I − L)^{−1} M]^k (I − L)^{−1} Ã ),    (14.25)

where L and M are the operators

L = pt θD (pλ pph + pγ pδ),    M = pt θD pγ (pw+ − pw−).    (14.26)

In the following we present the proof. The two operators L and M decompose the kernel: K̃ = L + M. We prove first the following operator relation:

(I − K̃)^{−1} = (I − L)^{−1} + (I − K̃)^{−1} M (I − L)^{−1}    (14.27)

Indeed, after a multiplication by (I − L) from the left and by (I − K̃) from the right, one obtains

(I − L) = (I − K̃) + (I − L)(I − K̃)^{−1} M (I − L)^{−1}(I − K̃).    (14.28)

A replacement of M by the equality

M = (I − L) − (I − K̃)

gives rise to a cancellation of the inverse terms in (14.28) and, as a consequence, of all terms, which proves the relation.
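The operator relation and the regrouped series behind (14.25) can be sanity-checked in a finite-dimensional analogue. The random matrices below are an illustration only; they are scaled so that all the inverses and the Neumann series converge.

```python
import numpy as np

# Finite-dimensional check of relation (14.27) and of the regrouping
# (I - K)^-1 = sum_k [(I - L)^-1 M]^k (I - L)^-1 used in (14.25).
rng = np.random.default_rng(0)
n = 6
I = np.eye(n)
L = 0.05 * rng.random((n, n))      # 'classical' part of the kernel
M = 0.05 * rng.random((n, n))      # 'quantum' part of the kernel
K = L + M                          # K~ = L + M; all norms are < 1 here

inv = np.linalg.inv
lhs = inv(I - K)
# relation (14.27)
assert np.allclose(lhs, inv(I - L) + lhs @ M @ inv(I - L))

# regrouped series: truncation suffices since ||(I - L)^-1 M|| < 1
T = inv(I - L) @ M
series = sum(np.linalg.matrix_power(T, k) for k in range(60)) @ inv(I - L)
assert np.allclose(series, lhs)
```

The identity holds for any operators with the required inverses; the scaling only guarantees convergence of the truncated series.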


Equation (14.27) can be rewritten as

(I − K̃)^{−1} = (I − L)^{−1} [ I − M(I − L)^{−1} ]^{−1}    (14.29)

and replaced into (14.16). By regrouping the terms with the help of the equality (I − X)^{−1} = Σ_k X^k, one obtains (14.25). Although the reformulated iteration expansion (14.25) looks even more complicated from a probabilistic point of view, a stochastic analysis in terms of evolving particles is feasible. The first step is to identify in L the Boltzmann evolution operator, complemented by a self-scattering term which does not influence the physical evolution. This operator is associated with the classical evolution of particles. The evaluation of the first (k = 0) term, ⟨A⟩_1 = (v⊥ fb, (I − L)^{−1} Ã), in (14.25) involves the picture of consecutive injection of j = 1, 2, . . . , N1 boundary-to-boundary particle trajectories, evolving in correspondence with n = 0, 1, . . . iterations of L. According to (14.23), the jth particle contributes after each iteration n to the statistical sum by Π_{l=1}^{n} θ^{(j)}_{Dl} ψ^{(j)}_{Ãn}. It is helpful to recall that, according to Eq. (14.21) or (14.22), ψ_{Ãn} is the value A/μ of the physical quantity A characterizing the particle in the just-before-nth-scattering state, or the value of A averaged over the nth trajectory segment. The domain indicator θ_{Dn} becomes zero if the current particle leaves D at the nth scattering event, which allows one to begin with the simulation of the next particle. Thus the averaged value ⟨A⟩_1 is

⟨A⟩_1 ≃ (Φ/N1) Σ_{j,n} Π_{l=1}^{n} θ^{(j)}_{Dl} ψ^{(j)}_{Ãn}.    (14.30)

The second term, corresponding to k = 1, ⟨A⟩_2 = (v⊥ fb, (I − L)^{−1} M (I − L)^{−1} Ã), can be interpreted after the decomposition as

⟨A⟩_2 = ( P1, (I − L)^{−1} Ã ),    P1(Q) = ( v⊥ fb, (I − L)^{−1} M δ )(Q).    (14.31)

P1, up to a pre-factor, has the meaning of the density of stored particles. The series (I − L)^{−1} M δ is obtained by a replacement of Ã = pt ψA by Mδ in the consecutive terms of (I − L)^{−1} Ã. This is actually equivalent to replacing ψA by pγ (pw+ − pw−)δ, which corresponds to a generation, with probability pγ, of two particles in the states Q+ and Q−. The delta function gives rise to a projection onto the phase space point Q, represented by the random variable ψδ = δ(Q − Q+) − δ(Q − Q−). The rest of the expressions for ⟨A⟩_1 and P1 are the same. Furthermore, the generation of the particles for P1 occurs at a rate which is equal to the rate of the self-scattering events. Hence both ⟨A⟩_1 and P1 can be sampled simultaneously over the trajectories constructed with the help of L, where the latter quantity is sampled at the instants of the self-scattering events. Furthermore, ψδ dQ is 1 (−1) if a particle is generated into a point Q+ (Q−) which belongs to the domain dQ around


Q. The estimator of P1 is (14.30) with ψδ in the place of ψA. It follows that P1 has the meaning of a density wp of stored particles² multiplied by the pre-factor Φ/N1. Notably, the annihilation of positive and negative particles in a given domain around Q is formally proven by this approach. We continue with the first equality in (14.31). The average ⟨A⟩_2 differs from ⟨A⟩_1 only by the boundary term, which is now replaced by P1. It can be evaluated with minor modifications of the algorithm. The difference is that now the trajectories begin from the device volume. The estimator

⟨A⟩_2 ≃ (1/N2) Σ_{j,n} (P1(Q^{(j)})/p(Q^{(j)})) Π_{l=1}^{n} θ^{(j)}_{Dl} ψ^{(j)}_{Ãn},    (14.32)

where N2 is the number of trajectories used, offers the freedom to conveniently choose the density p for the selection of the initial trajectory points Q:

p(Q) = |P1(Q)|/‖P1‖,    ‖P1‖ = (Φ/N1) ∫ |wp(Q)| dQ.

If N2 is chosen to be equal to the number of all stored particles, given by the last integral above, (14.32) becomes:

⟨A⟩_2 ≃ (Φ/N1) Σ_{j,n} sign(P1(Q^{(j)})) Π_{l=1}^{n} θ^{(j)}_{Dl} ψ^{(j)}_{Ãn}    (14.33)

This step of the algorithm has a clear physical interpretation. The number of trajectories which initiate from dQ, and their initial sign, corresponds to the number and sign of the stored particles inside. The value of each contribution to the sum in (14.33) is calculated in the classical way and multiplied by the sign of the corresponding particle. Notably, at this step the device is discharged from the stored particles. However, at this step we can simultaneously evaluate the stored weight P2 for the next iterative term, corresponding to k = 2. Indeed, the latter can be written in analogy with (14.31) as:

⟨A⟩_3 = ( P2, (I − L)^{−1} Ã ),    (14.34)

P2(Q) = ( v⊥ fb, (I − L)^{−1} M (I − L)^{−1} M δ )(Q) = ( P1, (I − L)^{−1} M δ )(Q)

By repeating the arguments used for P1, it is seen that P2 is the density of the secondary stored particles multiplied by Φ/N1, so that the third term in (14.25) is expressed as an inner product with P2. The step used for ⟨A⟩_2 can be repeated for ⟨A⟩_3, and so forth. An

² Since Q+ ≠ Q−, the case where both points belong to dQ can be avoided by the limit dQ → 0.


iterative algorithm is obtained, which computes the consecutive terms of the series (14.25) by an initial injection of N1 particles from the boundary and consecutive steps of storing and removing particles from the device. Since the estimators of the consecutive terms in the series have to be summed at the end, only a single estimator is necessary for the evaluation of ⟨A⟩. The pre-factor C = Φ/N1 appears in the estimators of all terms and can be determined from the knowledge of some physical quantity in a given region of D. The number N1 of the boundary particles must be chosen sufficiently large in order to attain a reliable approximation of the first term in (14.25). The evaluation of all remaining terms depends on this choice. In order to be less dependent on the initial guess of this value, the algorithm can be modified as follows. A moderate value of N1 is chosen. The steps of injection from the boundaries alternate with the steps of discharge of the device. The following algorithm will be proven to converge asymptotically to the desired mean value. In particular, particles stored from a boundary injection are added to the particles stored from the previous injections in the device.

Algorithm 14.2
1. Initialized are the quantities: ν, needed to estimate ⟨A⟩; μi, i = 0, 1, defined on a phase space mesh and needed for the particle accumulation; the number of trajectories N1; the iteration counter S = 0; and a criterion for stopping, R. i is set to zero.
2. N1 particles are consecutively injected from the boundary and simulated until exiting the domain D. During the evolution the contributions from ψA to ν are recorded. At each self-scattering event a couple of particles are generated and stored in μi according to their sign and phase space position.
3. Trajectories, initialized by the particles accumulated in μi, are consecutively simulated until exiting D. During the evolution the contributions from ψA to ν are recorded. At each self-scattering event a couple of particles are generated and stored in μ1−i according to their sign and phase space position.
4. Updated are the quantities S = S + 1, i = 1 − i. If S < R, go to Step 2.
5. The value of ⟨A⟩/Φ is evaluated over N = RN1 trajectories as ν/N.

For convenience we have assumed Φ/N1 = 1. The algorithm is illustrated by the following scheme:

b → wp1  ⇒  b → (wp1 + wp2)  ⇒  b → (wp1 + wp2 + wp3)  ⇒  · · ·

where b → means boundary injection, which gives rise to wp1 in all brackets. The rest of the terms wpi in each bracket are obtained from the previous step of discharging the device, denoted by the arrow ⇒. The asymptotic behavior of the algorithm is proven by the following considerations. After the Rth boundary injection, the number of stored particles in dQ around Q is approximately Σ_{k=1}^{R} wpk(Q) dQ, which can be proven by induction.



The subsequent, (R + 1)st, step of injection from the device completes the estimation of the functional Σ_{k=1}^{R+1} ⟨A⟩k. The mean of these estimates at steps 0, 1, . . . , R is an approximation of the functional ⟨A⟩. Since the functional ⟨A⟩k is summed R − k times and the ratio (R − k)/R tends to 1 as R tends to infinity, our estimate of ⟨A⟩ is asymptotically correct.
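The alternating injection/discharge structure of Algorithm 14.2 can be sketched in code. The following is a toy model under strongly simplified, hypothetical "physics": simulate stands for the free-flight/scattering simulation of one trajectory, recording a unit contribution to the estimator ν and, with probability p_gen, storing a generated couple (represented here as one net entry) in a phase space buffer for a later discharge step. All rates and cell counts are illustrative, not taken from the transport model.

```python
import random

def simulate(nu, mu, p_gen, rng):
    # Placeholder trajectory: record the contribution of psi_A to nu and,
    # at a self-scattering event, store a particle couple in the buffer mu.
    nu[0] += 1.0
    if rng.random() < p_gen:
        mu[rng.randrange(len(mu))] += 1

def algorithm_14_2(n1=50, r=20, p_gen=0.3, cells=8, seed=1):
    rng = random.Random(seed)
    nu = [0.0]
    mu = [[0] * cells, [0] * cells]      # the two accumulation meshes mu_0, mu_1
    i = 0
    for _ in range(r):                   # S = 0, ..., R-1
        for _ in range(n1):              # step 2: boundary injection
            simulate(nu, mu[i], p_gen, rng)
        for cell in range(cells):        # step 3: discharge of the device
            for _ in range(mu[i][cell]):
                simulate(nu, mu[1 - i], p_gen, rng)
        mu[i] = [0] * cells              # discharged particles are removed
        i = 1 - i                        # step 4: swap the buffers
    return nu[0] / (r * n1)             # step 5: nu/N with N = R*N1, Phi/N1 = 1
```

With p_gen < 1 the stored population stabilizes, and the estimate settles as R grows, mirroring the asymptotic argument above.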

14.5.4 Physical and Numerical Aspects

The two limiting cases of weight accumulation over a single trajectory and of trajectory branching can be combined into a general particle picture. This provides heuristic insight into processes of decoherence, which destroy the quantum character of the evolution. In purely coherent quantum evolution particles move along field-less Newtonian trajectories, maintaining a constant momentum. The scattering processes change the momentum instantly, which can be described as a random force given by an impulse in time. If the force is stronger, more particles are deviated and do not reach their coherent destination. This hampers the transfer of quantum information between the phase space subdomains, which provides a microscopic interpretation of the macroscopic physical characteristic called coherence length. Furthermore, due to the fact that Boltzmann and Wigner types of interaction are complementary, the scattering processes (assumed homogeneous) do not discriminate classical from quantum domains. The character of the domains depends only on whether the potential can be approximated by the force or not. Notably, particle annihilation remains a feature of the mixed transport regime, as it relies on the Markovian character of the evolution. Particles in a given phase space point continue to have a common probabilistic future, and thus the contribution to the statistical estimate of a couple of such particles is zero if they have opposite sign. It should be noted that the presentation of the kernel (14.5), which in particular leads to the two limiting algorithms, is not unique. If, instead of γfw, we add to both sides of Eq. (14.2) the quantity αfw, where α is a parameter, Eq. (14.6) becomes μ = λ + α. Then the transition probability (14.20) becomes

Ke = pt pλ (pph + pγ pw⁺ − pγ pw⁻ + pα pδ).    (14.35)

The choice of α modifies the Stationary Wigner Weighted/Generation algorithms and also gives rise to mixed algorithms, where weight accumulation and branching processes act in conjunction [146, 147]. A plethora of alternative algorithms is possible. For example, the interaction frequency with the Wigner potential can be reduced by a factor of two at the expense of doubling the number of generated additional states. In another formulation of (14.19) the quantum and phonon interactions can be forced to occur at the same time [147]. This synergy between weighted and signed particles is a consequence of the fact that the signs can be viewed as a special case of weights. There is an infinite number of strategies to combine the concepts of splitting/branching with weight accumulation, and the choice of the optimal one depends on the particular task. However, the representation (14.5) maintains the most transparent picture of quantum particles from the physical point of view. The particle annihilation considerably reduces the numerical burden, but still does not solve the problem of the exponential growth of particles.

Chapter 15

Transient Quantum Particle Attributes

The transient problem is characterized by a system state which develops in time. The evolution is determined by the initial condition, which is the Wigner function state at a given time. The knowledge of the initial condition guarantees the well-posedness of the computational task for broad physical conditions, including boundaries and dissipation. In this chapter we develop a stochastic approach to the transient Wigner evolution in conjunction with the concept of a discrete momentum space. The latter offers numerical advantages, which is hinted at by the fact that the convolution integral between the Wigner function and the Wigner potential is a result of consecutive applications of the forward and the inverse Fourier transforms. The physical aspects reflect the finite size of the nanoelectronic structures of interest. The role of the boundaries has already been discussed for the classical (stationary and transient) transport as well as for the case of stationary quantum transport. A discrete momentum space description is convenient for both stationary and transient transport, however, under dominating coherent evolution conditions. Scattering processes or a force, originating for example from a homogeneous magnetic field, impose a continuous momentum picture. Alternatively, algorithms are developed which allow one to treat scattering processes and force in the discrete momentum description [148, 149]. In particular, in the case of a force, the problem is solved as in the case of a cellular automaton algorithm [150]. The momentum space remains discretized; the transition from one momentum node to another is obtained by a probabilistic transition rule, driven by the force. We continue with introducing the discrete momentum Wigner formalism.
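The numerical advantage hinted at above — that the convolution between the Wigner potential and the Wigner function can be evaluated by consecutive forward and inverse Fourier transforms — can be illustrated with a minimal pure-Python sketch on a periodic momentum mesh. The data below are illustrative only, not a physical Wigner potential; a plain discrete Fourier transform is hand-rolled for self-containment.

```python
import cmath

def dft(a, sign=-1):
    """Plain discrete Fourier transform; sign=+1 gives the (unnormalized) inverse."""
    n = len(a)
    return [sum(a[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def convolve_fourier(v, f):
    """Circular convolution evaluated as inverse-DFT of the product of DFTs."""
    n = len(v)
    vk, fk = dft(v), dft(f)
    back = dft([vk[k] * fk[k] for k in range(n)], sign=+1)
    return [x.real / n for x in back]          # the 1/n factor restores normalization

def convolve_direct(v, f):
    """Direct evaluation of sum over k' of v[k - k'] f[k'] on the periodic mesh."""
    n = len(v)
    return [sum(v[(k - kp) % n] * f[kp] for kp in range(n)) for k in range(n)]
```

Both routines give the same result; the Fourier route is the one that scales favorably once a fast transform is used.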

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0_15


15.1 Bounded Domain and Discrete Wigner Formulation

The finite device size implies that the transport problem is defined in a domain bounded in all directions. The boundaries encompass the device together with the contacts, where injecting/absorbing boundary conditions are specified. The potential outside the domain is assumed infinite, so that the density matrix vanishes if any of its arguments is outside the boundaries, which, furthermore, implies a lack of correlations with the rest of the universe. This fact will be used to introduce the Wigner function on a discrete momentum space. Without a loss of generality, we consider a two-dimensional structure, enclosed in a rectangular domain (0, L) = (0, Lx) × (0, Ly) with coordinates r = (x, y). Furthermore, we will frequently refer to the Fourier transform, so that, instead of using e^{isp/ℏ}, it is convenient to work in terms of wave vectors k = p/ℏ. We recall that the standard way to develop the Wigner formalism begins with the evolution equation for the density matrix ρ. The Wigner function is then introduced with the help of the Weyl-Wigner transform of ρ:

fw(r, k, t) = (1/(2π)²) ∫_{−∞}^{∞} ds e^{−ik·s} ρ(r + s/2, r − s/2, t),    r = (r1 + r2)/2,  s = r1 − r2.

Because there are no carriers outside the device domain, the density matrix becomes zero unless 0 < r1, r2 < L, which results in the following constraints for r and s: −r
> N needed for a particle number growth control. In addition to the standard attributes such as position x and momentum ℏmΔk, where the position can be treated as a continuous argument while the momentum is approximated on the m mesh, particles are associated with a sign.
2. The N particles are consecutively evolved over their trajectories in the time interval (t, t + Δt). An evolving particle generates some couples of particles until reaching the end of the time step. These primary particles are stored in their birth phase space position. In a next step, they are evolved until the end of the time step, generating secondary particles. All secondary particles are evolved and create ternary particles, etc. The loop continues until all generations of generated particles reach the end of the time interval. The process converges, because the time remaining to reach t + Δt decreases with any next generation of particles.
3. The ensemble grows with the number ΔN of the generated particles, N = N + ΔN. In particular, after the evolution step, the momentum of all particles remains determined on the m mesh. The particle position can be updated by a process of randomization around the node, or can remain determined by l. The time is updated to t = t + Δt.


4. If t < T and N < Nm, go to step 2. If t < T and N > Nm, assign each particle to the nearest position node l. Particles with opposite sign cancel each other at each grid node (l, m), so that only a positive or a negative number of particles remains associated to the nodes. The number N, updated after the annihilation step, drops accordingly. Go to step 2.
5. The condition t = T is reached. The quantity An(x, m) is evaluated for each particle n and, after multiplication by the particle sign, contributes to the sum ν = Σ_n sign(n)An. Then ⟨A⟩ ≃ ν/N.

These basic steps can be further modified and augmented depending on the considered task. In particular, boundary injection can be included at the consecutive evolution steps Δt. The practical problem with the single choice of l is the possibility to exceed the limits 0, Nm of the finite m space. This and other computational implications are addressed in [151, 152]. The concept of a discrete momentum space conflicts with our classical understanding of Newtonian acceleration, where the force changes the particle momentum, or equivalently the wave vector, continuously. The Wigner formalism accounts for the force via the Wigner potential, so that there is no acceleration of the signed particles. However, particle generation/annihilation gives an effect equivalent to Newtonian acceleration in the classical limit, when the Wigner equation reduces to the Boltzmann equation. This limit offers an important opportunity to prove the concepts associated with the signed particles.
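The annihilation step 4 can be sketched for a toy particle layout, where a particle is assumed to be represented by a tuple (l, m, sign): particles assigned to the same (l, m) node cancel pairwise when their signs are opposite, leaving only the net signed population per node.

```python
def annihilate(particles):
    """Cancel opposite-sign particles node by node; particles are (l, m, sign)."""
    net = {}
    for l, m, sign in particles:
        net[(l, m)] = net.get((l, m), 0) + sign   # net signed count per node
    survivors = []
    for (l, m), count in net.items():
        sign = 1 if count > 0 else -1
        survivors.extend([(l, m, sign)] * abs(count))   # keep only the net particles
    return survivors
```

For instance, annihilating [(0, 0, +1), (0, 0, −1), (3, 2, +1), (3, 2, +1)] leaves only the two positive particles at node (3, 2), and the ensemble size N drops from 4 to 2.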

15.2 Simulation of the Evolution Duality

The concept of signed particles is examined in the ultimate regime, where classical and quantum dynamics become equivalent [153]. The peculiarities of the transport in this asymptotic regime are analyzed with simulations, benchmarking the behavior of the Wigner function. Classical and quantum evolution regimes are the same for linear potentials V(x) = −Ex, which is expressed by the equality

∫ dk′ Vw(k − k′) fw(k′, t) = −(eE/ℏ) ∂fw(k, t)/∂k.    (15.22)

The classical evolution starting from an initial condition f(k, 0) = Nδ(k) will be considered. According to the Boltzmann picture the evolution corresponds to acceleration of N classical particles over a common Newtonian trajectory. On the contrary, no direct acceleration is possible in the Wigner picture. The effect of acceleration can be achieved only by the generation of positive and negative particles. The realization of the same process in terms of positive and negative


particles is far from simple because, according to the above equality, Vw(k) is a generalized function:

Vw(k) = (eE/ℏ) δ′(k).

Indeed, the numerical treatment can be asymptotic only, because (15.22) relies on the integral Fourier transform, which is formally obtained in the limit L → ∞ of the discrete counterpart (1/L becomes 1/2π), corresponding to the continuous formulation of the Wigner equation. The existence of generalized functions precludes any exact numerical treatment. In particular, the definition of the Wigner potential (15.7) diverges for a linear potential in this limit. A relevant numerical approach relies on a finite L, so that the Wigner potential is

Vw(n) = −(eEL/(ℏπn)) cos(πn).    (15.23)

We analyze the behavior of the particles in the two transport models under this condition. The continuous acceleration is not possible in the discrete momentum Boltzmann picture. The discretization with a step Δk = π/L imposes cellular automaton evolution rules [150]. The probability for a transition during a time dt between the initial and the adjacent node in the field direction is proportional to the acceleration, given by dk = eEdt/ℏ. The particle number N at the initial node gradually decreases with the evolution time, while the number on the next, adjacent node increases accordingly. After a time Δt given by Newton's law, Δk = eEΔt/ℏ, all particles are transferred to the next node. The latter corresponds to the same momentum value which would be obtained if the particles were continuously accelerated by the field for this time. The Signed-Particle algorithm can be directly applied, giving rise to a generation of positive and negative particles on the wave vector nodes. Figure 15.1 (left) shows how the initial density at k = 0 gradually decreases in time, while the density on the first node to the right, in the direction of the force, increases. At 0.52 ps the density on the adjacent nodes becomes equal. Then, at around 1 ps, the initial N = 10^5 particles appear on the first node. For E = 10^4 V/m and L = 200 nm this time is consistent with Newton's law, Δk = eEΔt/ℏ. The physical system changes the momentum with time, which corresponds to acceleration. However, the latter is obtained not by the classical particle transfer of the cellular automaton evolution, but with the generation-annihilation steps of the Signed-Particle algorithm. Theoretical considerations show that according to (15.23) only positive particles are generated at the first node, while only negative particles are generated at the origin, thus decreasing the initial peak. The generated positive and negative particles merely compensate each other on the distant nodes. The algorithm maintains a stable evolution of the peak, which at the consecutive time intervals Δt fully settles at the proper wave vector node, predicted by the Newtonian acceleration and in accordance with the cellular automaton results, as shown in Fig. 15.1 (right).
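The cellular-automaton transition rule can be sketched at the expectation level (toy numbers, not the simulation of Fig. 15.1): during each step dt a fraction of the occupation of the initial node, proportional to dk = eEdt/ℏ, hops to the adjacent node in the field direction, so that after the time Δt fixed by Newton's law the full population has been transferred.

```python
def ca_transfer(n_particles, n_steps):
    # Expectation-level cellular-automaton rule: a fixed fraction of the initial
    # population, proportional to eE*dt/hbar, hops per step to the adjacent node.
    node0, node1 = float(n_particles), 0.0
    hop_per_step = n_particles / n_steps
    for _ in range(n_steps):
        node0 -= hop_per_step
        node1 += hop_per_step
    return node0, node1
```

With, say, 10^5 particles and 1000 steps the initial node decays linearly to zero while the adjacent node fills up, mimicking continuous acceleration by momentum steps of Δk.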

Fig. 15.1 Left: Time dependence of the relative densities f(k, t) at the two adjacent nodes, Node 0 and Node 1 (time in ps). Right: The Signed-Particle algorithm provides a stable evolution of the system (f(k, t) versus k in m⁻¹). The initial peak at k = 0 corresponds to 5 fs evolution time. After 5 ps all initial particles are already at the fifth node of the momentum space.

However, the quantum process is a distinctive alternative to the cellular automaton evolution. The appearance of negative density values can be clearly observed in Fig. 15.1 (right). This nonphysical behavior is the price paid for the step back from the initially continuous picture. The equivalence of Wigner and Boltzmann transport is guaranteed only when L → ∞. The discrete case follows quantum rules, so that the whole system is disturbed by the violation of the uncertainty principle in the initial condition. Further simulations investigate the behavior of the negative density values with the increase of L. For L = 500 nm the negative amplitudes decrease by almost a factor of two [153], and this tendency continues for higher values of L. Consistently, in the limit Δk = π/L → 0 the two algorithms, based on cellular automata and particles with sign, approach the continuous Newtonian acceleration.

15.3 Iteration Approach: Signed Particles

The application of the Iteration Approach gives rise to two probability interpretations of the kernel, (15.21) and (15.20), which are analogous to their continuous stationary counterparts, cf. Sect. 14.5. The latter are represented by the Wigner Weighted/Generation algorithms introduced with the help of weights. Because a weight of ±1 is equivalent to a sign, the concepts further evolve to particle signs, which are associated with the transient transport. Algorithms based on the steps of Algorithm 15.1 are unified by the signed-particle approach. Analogously, one can introduce particles with a positive or negative weight larger or smaller than ±1 and combine the concepts in alternative algorithms. The most important property is that particles carry a weight with a sign, so that they can fully or partially annihilate by the process of weight cancellation.


Thus the signed-particle approach is a union of concepts and notions for both stationary and transient problems, derived with the help of the Iteration Approach. The fundamental difference between the stationary and transient transport regimes is related to the sampled random variable. In the former case a single trajectory samples all consecutive terms in the iterative expansion (14.16). In the latter case only a single term, An, samples the averaged value (15.16). It corresponds to the nth term of the iterative expansion of (15.17), so that n is the number of 'interactions' with the Wigner potential until the time τ. Accordingly, only the relative time plays a role in the stationary case, so that the trajectories can be simulated consecutively. The generated weights (or particles) can be stored for further processing, which actually causes an implicit annihilation of particles (or weights) without taking into account their history and arrival time in the particular phase space cell. On the contrary, the transient transport depends on the absolute time. An ensemble of trajectories is needed at time τ in order to evaluate ⟨A⟩(τ). The concept of particle annihilation or weight cancellation becomes explicit at fixed instants of time, so that the whole ensemble must evolve synchronously [154]. The Signed-Particle algorithm has been considered from the point of view of stochastic analysis [155]. The analysis relies on techniques and notions used in probability theory and the theory of operators in the corresponding functional spaces. The Berry-Esseen theorem [156] is applied to evaluate the quality of the Signed-Particle algorithm. Actually, the Berry-Esseen theorem is an inequality which is used instead of the well-known Central Limit theorem, but it gives a more powerful approach to study the stability and convergence of the algorithm. The Berry-Esseen bounds, in conjunction with the choice of the free flight time depending on the upper limit of γ, (15.19), are considered to evaluate the theoretical error, the speed of convergence, and the stability of the algorithm. Theoretical estimates of the number of iterations, depending on the potential and the time span of the simulation, that guarantee the stability of the Signed-Particle algorithm are provided. The estimates can be seen as a sharpening and a more general study of stochastic algorithms for the Wigner equation, also discussed in [157]. Further results which can be used to control the quality of Monte Carlo algorithms can be found in [158]. The above limiting cases of weight or particle generation can be combined in many ways depending on the particular task. Actually, the whole arsenal of approaches for event biasing can be used, as discussed, e.g., for the stationary case in Sect. 14.5.4. Other approaches can also be considered. The computational problem can be viewed through the theory of piecewise deterministic Markov processes [159] or, similarly to the Iteration Approach, as a renewal type equation problem [160].

Appendix A

A.1 Correspondence Relations

The variables r and p in the Hamilton equations

dp/dt = −∇r H(r, p),    dr/dt = ∇p H(r, p)

can be changed by the so-called canonical transform into novel variables a and a⁺ which also obey the Hamilton equations. The transition to quantum mechanics is established by the correspondence rule, associating to r and p the corresponding operators:

r → r̂,    p → p̂,    [r̂i, p̂j]₋ = r̂i p̂j − p̂j r̂i = iℏδij.

The latter are used to obtain the operators corresponding to the physical quantities, and in particular to a and a⁺. Alternatively, operators can be associated to the canonical variables a and a⁺ according to the commutator equations

âq âq′⁺ − âq′⁺ âq = δq,q′

and from there the operators of the physical quantities, and in particular of r and p, can be obtained. It is shown that the two ways are equivalent. To any physical observable corresponds a Hermitian operator. Its eigenfunctions and eigenvalues provide the expectation value of the observable in a given state. It is convenient to use the Dirac notations for the functions from the space of a given operator and the corresponding adjoint functions. In this way the matrix element of an operator Ô between the eigenfunctions of the Hermitian operator M̂ with quantum numbers m and n is ⟨m|Ô|n⟩. If Ô is Hermitian, then ⟨n|Ô|n⟩ is the expectation value of the physical quantity associated with Ô in a state n. If |o⟩ are the eigenfunctions of Ô with eigenvalues o, one can use the completeness of the basis states to show that

1 = Σ_o |o⟩⟨o|,    ⟨n|Ô|n⟩ = Σ_o o |⟨o|n⟩|².

A.2 Physical Averages and the Wigner Function

We pursue the idea to reformulate

⟨A⟩(t) = ∫ dr ∫ dr′ α(r, r′) ρt(r′, r) = Tr(ρ̂t Â)

in a way that resembles the classical integration with respect to the phase space coordinates. In particular, in such an expression, the integrand factor A(r, p) would define the quantum analog (in the sense of Sect. 3.2.2) of the classical distribution function. According to the Weyl transform,

Â = A(r̂, p̂) = (1/(2π)⁶) ∫ ds dq ∫ dr dp A(r, p) e^{−i(r·s+p·q)} e^{i(s·r̂+q·p̂)}.    (A.1)

A replacement in the trace operation gives rise to

Tr(ρ̂t Â) = ∫ dr′ ⟨r′|ρ̂t Â|r′⟩    (A.2)

so that

⟨A⟩(t) = ∫ dr dp A(r, p) ∫ (ds dq/(2π)⁶) e^{−i(r·s+p·q)} ∫ dr′ ⟨r′|e^{i(s·r̂+q·p̂)} ρ̂t|r′⟩.    (A.3)

The mean value ⟨A⟩(t) is now expressed in the desired form of a phase space integral on the classical function A(r, p). It remains to show that the factor I after A is the Wigner function fw, used in Sect. 3.2.3. The following relations are used [52]:

e^{i(s·r̂+q·p̂)} = e^{−is·qℏ/2} e^{iq·p̂} e^{is·r̂},    e^{iq·p̂}|r⟩ = |r − qℏ⟩.    (A.4)


The derivations are long but straightforward.

I = ∫ (ds dq/(2π)⁶) e^{−i(r·s+p·q)} ∫ dr′ dr″ ⟨r′|e^{−is·qℏ/2} e^{iq·p̂} e^{is·r̂}|r″⟩ ⟨r″|ρ̂t|r′⟩
  = ∫ (ds dq/(2π)⁶) e^{−i(r·s+p·q)} ∫ dr′ dr″ e^{−is·qℏ/2} ⟨r′|e^{iq·p̂}|r″⟩ e^{is·r″} ⟨r″|ρ̂t|r′⟩
  = ∫ (ds dq/(2π)⁶) e^{−i(r·s+p·q)} ∫ dr′ dr″ e^{−is·qℏ/2} δ(r′ − r″ + qℏ) e^{is·r″} ρ(r″, r′, t)
  = ∫ (ds dq/(2π)⁶) e^{−i(r·s+p·q)} ∫ dr′ e^{−is·qℏ/2} e^{is·(r′+qℏ)} ρ(r′ + qℏ, r′, t).

We use the change r′ = r1 − qℏ/2:

I = ∫ (ds dq/(2π)⁶) e^{−i(r·s+p·q)} ∫ dr1 e^{−is·qℏ/2} e^{is·(r1+qℏ/2)} ρ(r1 + qℏ/2, r1 − qℏ/2, t)
  = ∫ (ds dq/(2π)⁶) e^{−ip·q} ∫ dr1 e^{is·(r1−r)} ρ(r1 + qℏ/2, r1 − qℏ/2, t)
  = ∫ (dq/(2π)³) e^{−ip·q} ∫ dr1 δ(r1 − r) ρ(r1 + qℏ/2, r1 − qℏ/2, t).

Finally, after the change q = r′/ℏ we obtain the desired expression for the Wigner function:

I = ∫ (dr′/(2πℏ)³) e^{−ip·r′/ℏ} ρ(r + r′/2, r − r′/2, t) = fw(r, p, t).    (A.5)

A.3 Concepts of Probability Theory

We introduce some notions of probability theory, which form the basis of the numerical Monte Carlo methods. The main entity is the central limit theorem. It is based on the idea of independent, identically distributed variables, which are used to define sample mean and sample variance. Fundamental probability concepts for random variables, independent experiments, expected value, and variance must first be consecutively defined.

Frequency Definition of Probability
A set is a collection of objects called elements. A subset B of a set A is another set whose elements are also elements of A. The empty or null set ∅ is by definition the set which does not contain an element. The set of all subsets is called space. We denote A ∩ B by AB and A ∪ B by A + B.


In probability theory, the set of all possible outcomes of a given experiment forms the probability space J. 'Experiment' is used in its broad physical sense. A single element ξ is called elementary event. The outcome of an experiment designed to choose an elementary event is called sample. A statistical sample comprises N such outcomes. Events are subsets of J, to which probabilities are assigned. In an experiment, the probability of an event A is naturally determined by the ratio of its occurrences NA to the number N of the statistical samples. If NA/N approaches some constant value when N increases, the limit

P(A) = lim_{N→∞} NA/N

gives the relative frequency definition of probability. Since in a real experiment N can only be finite, the limit must be accepted as a hypothesis for the P(A) value. The frequency interpretation is fundamental in the application of the probability. It is used in both directions: to provide a hypothesis for the probability which must be assigned to A, and to provide an estimate for P(A) of some complex event A which cannot be estimated analytically, although the probabilities of the elementary events comprising A are known. For large N, P(A) is approximately NA/N. Important properties follow from the frequency definition of probability.
1. P(A) ≥ 0 because N > 0 and NA ≥ 0.
2. P(J) = 1 because J occurs at every sample.
3. If AB = ∅ then P(A + B) = P(A) + P(B), since NA+B = NA + NB: if A + B occurs, then either A or B occurs but not both.
In the axiomatic formulation of probability theory these properties are postulated.

Independent Experiments
Two events A and B are called independent if P(AB) = P(A)P(B). We consider two experiments J1 and J2 with probabilities P1 and P2. The Cartesian product of the two experiments is a new experiment J, whose events are of the form A × B, where A ∈ J1, B ∈ J2, together with their unions and intersections. Note that A × B = (A × J2)(J1 × B). In this experiment the probabilities for the events are such that P(A × J2) = P1(A) and P(J1 × B) = P2(B). Two experiments are independent if the events (A × J2) and (J1 × B) of the combined experiment are independent for any A and B. Hence P(A × B) = P1(A)P2(B).

Random Variable
To every outcome ξ of a given experiment we assign a number x(ξ). The function x(ξ) is called random variable, defined in the domain J and with values belonging to the set of numbers. The subset of J consisting of all outcomes ξ such that x(ξ) ≤ x is denoted by (x(ξ) ≤ x). The probability F(x) = P(x(ξ) ≤ x) is called distribution function of the random variable. Being a probability, the function F has certain general properties. It is continuous from the right as x varies from −∞ to ∞, and it increases monotonically from 0 to 1 (F(−∞) = 0, F(∞) = 1). The probability density


function f is the derivative of F with respect to x. Hence it has the following properties: it is positive, f(x) ≥ 0, and it is normalized to unity, i.e.

∫_{−∞}^{∞} f(x) dx = 1,    ∫_{−∞}^{x} f(x′) dx′ = F(x).    (A.6)
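The relation (A.6) between density and distribution underlies a standard Monte Carlo sampling recipe (inverse-transform sampling, an illustration added here, not part of the text): if r is a random number in (0, 1), then x = F⁻¹(r) is distributed with density f. A minimal sketch for the exponential density f(x) = e^{−x}, x ≥ 0, whose distribution is F(x) = 1 − e^{−x}:

```python
import math
import random

def sample_exponential(n, seed=7):
    # Inverse-transform sampling: solve F(x) = r for x, giving x = -ln(1 - r).
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) for _ in range(n)]

# The empirical frequency of the event (x <= 1) then approximates F(1) = 1 - 1/e.
```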

We consider two densities of particular relevance.
• Uniform Density. A random variable is called uniformly distributed between x1 and x2 if its density is constant in the interval, and zero otherwise. From the normalization requirement (A.6) it follows that

f(x) = 1/(x2 − x1)    for x1 ≤ x ≤ x2.    (A.7)

Random variables uniformly distributed in (0, 1] are called random numbers.
• Normal or Gaussian Density. The normal density is given by the Gaussian function

f(x) = (1/(σ√(2π))) e^{−(x−η)²/(2σ²)}.    (A.8)

The corresponding distribution is

F(x) = G((x − η)/σ)    with    G(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−y²/2} dy.    (A.9)

Since ∫_{−∞}^{∞} e^{−αy²} dy = √(π/α), it follows that G(∞) = 1.

Expected Value and Variance
The expected value E(x) and the variance σ² of a random variable x with a density f(x) are defined as

E(x) = ηx = ∫_{−∞}^{∞} x f(x) dx    and    σ² = ∫_{−∞}^{∞} (x − ηx)² f(x) dx = E(x²) − E²(x).    (A.10)

In particular, the expected value and the variance of a random variable x with normal distribution G((x − a)/b) are ηx = a and σx² = b².
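The definitions (A.10) can be checked numerically through the frequency interpretation, here for the uniform density (A.7) on (x1, x2) = (1, 4), for which analytically ηx = (x1 + x2)/2 = 2.5 and σ² = E(x²) − E²(x) = (x2 − x1)²/12 = 0.75 (the interval endpoints are illustrative choices):

```python
import random

def sample_moments(samples):
    # Sample estimates of the expected value and of E(x^2) - E^2(x).
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

rng = random.Random(11)
xs = [1.0 + 3.0 * rng.random() for _ in range(100000)]  # uniform on (1, 4)
mean, var = sample_moments(xs)
```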


Function of a Random Variable
Suppose that x is a random variable and g(x) is a function of the real variable x. The expression y = y(ξ) is then another random variable, given by g(x(ξ)) — a composite function with domain J. The distribution function Fy(y) is the probability for the event (y(ξ) ≤ y), consisting of all outcomes (g(x(ξ)) ≤ y). Hence Fy(y) = P(g(x) ≤ y) and the following relation holds:

E(y) = ηy = ∫_{−∞}^{∞} y fy(y) dy = ∫_{−∞}^{∞} g(x) fx(x) dx.    (A.11)

To find the variance of y in terms of the density fx(x), one can use the same formula for the random variable g(x)² and subtract ηy²: σy² = E(g²(x)) − ηy².

Joint Distributions
The joint distribution F(x, y) of two random variables x and y is the probability for the event (x ≤ x, y ≤ y). In general F(x, y) cannot be expressed in terms of Fx(x) and Fy(y). However, Fx(x) and Fy(y), called marginal distributions, are determined by F. The latter obeys the normalization conditions

F(−∞, y) = F(x, −∞) = 0,    F(∞, ∞) = 1.

The joint density is the function f(x, y) = ∂²F(x, y)/∂x∂y, with

f(x, y) ≥ 0,    ∫_{−∞}^{∞} dx ∫_{−∞}^{∞} dy f(x, y) = 1.

The following relations hold for the marginal statistics of each random variable:

fx(x) = ∫_{−∞}^{∞} f(x, y) dy,    fy(y) = ∫_{−∞}^{∞} f(x, y) dx.

Joint Independence
Two random variables x and y are called independent if the events x ∈ A and y ∈ B are independent, where A and B are arbitrary sets of the x and y axes respectively. Hence

P(x ∈ A, y ∈ B) = P(x ∈ A) P(y ∈ B).

Applying the above to the events (x ≤ x) and (y ≤ y), it follows that

F(x, y) = Fx(x)Fy(y)    or    f(x, y) = fx(x)fy(y).

Appendix A

197

It can be shown that, if x and y are independent, then z = g(x) and w = h(y) are also independent.

Random Variables on Independent Experiments
As in the case of events, the concept of independence is augmented for random variables defined on product spaces. Suppose that x is defined on a space J1 consisting of outcomes ξ1, and y on a space J2 consisting of outcomes ξ2. In the combined experiment J1 × J2, x and y are such that x(ξ1, ξ2) = x(ξ1) and y(ξ1, ξ2) = y(ξ2). In other words, x depends on the outcomes of the first experiment only, and y on those of the second only. If the experiments are independent, then this is true for x and y. To prove this we denote by Ax the set (x ≤ x) and by By the set (y ≤ y). In the space J1 × J2 it holds that

(x ≤ x) = Ax × J2,    (y ≤ y) = J1 × By.

From the independence of the experiments it follows that

P((Ax × J2)(J1 × By)) = P(Ax × J2)P(J1 × By).

Hence F(x, y) = Fx(x)Fy(y).

Joint Moments and Covariance
Given two random variables x and y and a function g(x, y), one can form the random variable z = g(x, y). Its expected value is then given by E(z) = ∫_{−∞}^{∞} z fz(z) dz. It can be shown that the expected value is expressed in terms of g(x, y) and f(x, y):

E(z) = E(g(x, y)) = ∫_{−∞}^{∞} dx ∫_{−∞}^{∞} dy g(x, y) f(x, y).

In general E(xy) ≠ E(x)E(y). The covariance C or Cxy is defined as follows:

C = E((x − ηx)(y − ηy)) = E(xy) − E(x)E(y).

Two random variables are called uncorrelated if their covariance is zero. If two random variables are independent, then they are uncorrelated, because E(xy) = E(x)E(y); then also g(x) and h(y) are uncorrelated. In particular, the variance of the sum of two random variables is expressed in terms of their variances and covariance. Indeed, if z = x + y, then ηz = ηx + ηy and

σz² = E((z − ηz)²) = E(((x − ηx) + (y − ηy))²) = σx² + 2Cxy + σy².

If the random variables are uncorrelated, then σz² = σx² + σy².
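The relation σz² = σx² + 2Cxy + σy² for z = x + y can be checked numerically with deliberately correlated variables (the construction y = 0.5x + noise is illustrative). For sample moments computed with the same 1/n divisor the identity holds exactly, up to floating point round-off:

```python
import random

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return mean([(t - m) ** 2 for t in v])

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return mean([(s - ma) * (t - mb) for s, t in zip(a, b)])

rng = random.Random(5)
x = [rng.random() for _ in range(10000)]
y = [0.5 * xi + rng.random() for xi in x]   # correlated with x by construction
z = [xi + yi for xi, yi in zip(x, y)]
```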

198

Appendix A

Independent, Identically Distributed Random Variables
The above results generalize to spaces given by any number N of Cartesian products. Assume that J^N = J_1 \times \cdots \times J_N is a combined experiment and that each random variable x_i depends only on the outcome \xi_i of J_i, x_i(\xi_1, \ldots, \xi_N) = x_i(\xi_i). The random variables are called uncorrelated if C_{ij} = 0 for all i \ne j. In particular, if x = x_1 + \cdots + x_N, then the variance of the sum is the sum of the variances, \sigma_x^2 = \sigma_{x_1}^2 + \cdots + \sigma_{x_N}^2. Of special interest are combined independent experiments where the random variables x_i depend in an identical way on the outcomes \xi_i of J_i, namely x_i(\xi_1, \ldots, \xi_N) = x(\xi_i). It follows that the distributions F_i(x_i) of x_i are all equal to the distribution F(x) of the random variable x in the single experiment. Thus, if the experiment is performed N times, the x_i(\xi_i) are independent and have the distribution of x. These variables are called independent, identically distributed (i.i.d.).

Sample Mean and Sample Variance
The random variables

\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \bar{\sigma} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2   (A.12)

are by definition the sample mean and the sample variance of the x_i. Consider a combined independent experiment with i.i.d. random variables x_i, i = 1, \ldots, N, so that E(x_i) = E(x) = \eta_x and \sigma_{x_i}^2 = \sigma_x^2. Then the following relations hold:

E(\bar{x}) = \eta_x, \qquad \sigma_{\bar{x}}^2 = \frac{\sigma_x^2}{N}, \qquad E(\bar{\sigma}) = \sigma_x^2   (A.13)

The first equation follows from the linearity of the expected value. The second follows from the fact that the variables, being independent, are uncorrelated, so that the variance of the sum is the sum of the variances:

\sigma_{\bar{x}}^2 = \frac{1}{N^2}\sum_{i=1}^{N}\sigma_{x_i}^2 = \frac{\sigma_x^2}{N}.

For the last equation we first prove the relation

E\left((x_i - \eta_x)(\bar{x} - \eta_x)\right) = \frac{1}{N} E\left((x_i - \eta_x)\left[(x_1 - \eta_x) + \cdots + (x_N - \eta_x)\right]\right) = \frac{1}{N} E\left((x_i - \eta_x)(x_i - \eta_x)\right) = \frac{\sigma_x^2}{N},


where we used that x_i is uncorrelated with x_j for j \ne i. Hence

E\left((x_i - \bar{x})^2\right) = E\left(((x_i - \eta_x) - (\bar{x} - \eta_x))^2\right) = \sigma_x^2 + \frac{\sigma_x^2}{N} - 2\frac{\sigma_x^2}{N} = \frac{N-1}{N}\sigma_x^2.

This yields

E(\bar{\sigma}) = \frac{1}{N-1}\sum_{i=1}^{N} E\left((x_i - \bar{x})^2\right) = \frac{N}{N-1}\,\frac{N-1}{N}\,\sigma_x^2 = \sigma_x^2.
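The three relations (A.13) can be checked empirically by repeating a combined experiment many times. The sketch below (plain Python; the values of \eta_x, \sigma_x^2, N, and the number of repetitions are arbitrary illustration choices) estimates E(\bar{x}), \sigma_{\bar{x}}^2, and E(\bar{\sigma}):

```python
import random

random.seed(7)
eta, sigma2 = 2.0, 9.0   # assumed true mean and variance: x ~ N(2, 3^2)
N = 25                   # samples per experiment
M = 40_000               # number of repeated experiments

means, svars = [], []
for _ in range(M):
    xs = [random.gauss(eta, sigma2 ** 0.5) for _ in range(N)]
    xbar = sum(xs) / N
    # The 1/(N-1) factor is exactly what makes the sample variance unbiased.
    svar = sum((x - xbar) ** 2 for x in xs) / (N - 1)
    means.append(xbar)
    svars.append(svar)

E_xbar = sum(means) / M
Var_xbar = sum((m - E_xbar) ** 2 for m in means) / M
E_svar = sum(svars) / M

print(E_xbar)     # ~ eta
print(Var_xbar)   # ~ sigma2 / N
print(E_svar)     # ~ sigma2
```

With the 1/N normalization instead of 1/(N-1), the third estimate would converge to (N-1)\sigma_x^2/N, in agreement with the derivation above.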

The Central Limit Theorem
The central limit theorem states that, under certain general conditions, the distribution F(\bar{x}) of \bar{x} approaches a normal distribution with mean \eta_{\bar{x}} and variance \sigma_{\bar{x}}^2 as N tends to infinity. Thus for large N it follows that:

F(\bar{x}) = P(\bar{x} \le x) \simeq G\left(\frac{x - \eta_{\bar{x}}}{\sigma_{\bar{x}}}\right).   (A.14)

We denote x' = |x - \eta_{\bar{x}}|/\sigma_{\bar{x}}. Then

P\left(-x'\sigma_{\bar{x}} + \eta_{\bar{x}} \le \bar{x} \le x'\sigma_{\bar{x}} + \eta_{\bar{x}}\right) \simeq G(x') - G(-x').

Expressing the sample mean and variance in terms of the mean and variance of the random variable x and taking into account the explicit form of G, we obtain

P\left(|\bar{x} - \eta_x| \le \frac{x'\sigma_x}{\sqrt{N}}\right) \simeq \frac{2}{\sqrt{2\pi}}\int_0^{x'} e^{-\frac{y^2}{2}}\,dy = \Phi(x').

This important result provides a link between the mean \eta_x of the random variable x, the sample mean of the combined experiment, and the number N of independent applications. Choosing a coefficient 0 < \beta < 1, called the coefficient of confidence, we can find the solution x' = x_\beta of the equation \Phi(x') = \beta. It then follows that the probability for |\bar{x} - \eta_x| \le x_\beta\sigma_x/\sqrt{N} is approximately \beta. Often the value \beta = 0.997 is chosen, corresponding to x_\beta = 3. This gives the estimate called the "three \sigma rule":

P\left(|\bar{x} - \eta_x| \le \frac{3\sigma_x}{\sqrt{N}}\right) \simeq 0.997.

Another estimate for the error is the probable error r_N, given by

r_N = 0.6745\,\frac{\sigma_x}{\sqrt{N}},

where 0.6745 is the value of x_\beta corresponding to \beta = 0.5.


A.4 Generating Random Variables
To carry out Monte Carlo integration, one has to generate values of random variables with a given density p(x) or distribution F_x(x). For this purpose we consider the random variable \gamma, uniformly distributed in the interval (0, 1]. It has density 1, distribution function F_\gamma(x) = x, and expected value 1/2. The values of \gamma are called random numbers. The following theorem holds for continuous and positive p: if \xi is a random variable defined by the equation

F_x(\xi) = \gamma,   (A.15)

then \xi has the density p(x). Since F_x increases strictly from F(-\infty) = 0 to F(\infty) = 1 (note that if p is defined on some finite interval (a, b), this holds with F(a) = 0 and F(b) = 1), Eq. (A.15) has a unique solution for any \gamma. Besides, the probabilities P(x < \xi < x + dx) and P(F_x(x) < \gamma < F_x(x + dx)) are equal. In addition, \gamma is uniform, so that P(F_x(x) < \gamma < F_x(x + dx)) = F_x(x + dx) - F_x(x), while P(x < \xi < x + dx) = p(x)\,dx. Thus the Monte Carlo method for evaluating integrals is grounded on a combined experiment which models i.i.d. variables by the generation of random numbers. The statistical estimates for the deviation of the sample mean from the expected value can be applied directly as a measure of the precision of the method. Suppose now that the integral I is a multi-dimensional one, i.e., the density p depends on n + 1 variables and is continuous. Then the computation of the multi-dimensional random variable x can be reduced to successively modeling its coordinates by means of the densities

p_0(x_0) = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} dx_1 \ldots dx_n\; p(x_0, \ldots, x_n),

p_1(x_1|x_0) = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} dx_2 \ldots dx_n\; p(x_0, \ldots, x_n)\,[p_0(x_0)]^{-1},   (A.16)

\ldots

p_n(x_n|x_0, \ldots, x_{n-1}) = p(x_0, \ldots, x_n)\,[p_0(x_0) \cdots p_{n-1}(x_{n-1}|x_0, \ldots, x_{n-2})]^{-1}.


Here p_i(x_i|x_0, \ldots, x_{i-1}) is the conditional probability density of x_i, provided that the preceding variables have the values x_0, \ldots, x_{i-1}. We denote

F_i(x_i|x_0, \ldots, x_{i-1}) = \int_{-\infty}^{x_i} dx\; p_i(x|x_0, \ldots, x_{i-1}).

The following theorem holds: if \gamma_0, \ldots, \gamma_n are independent random numbers, the random variables \xi_0, \ldots, \xi_n defined by the successive solution of the equations

F_0(\xi_0) = \gamma_0,
F_1(\xi_1|\xi_0) = \gamma_1,
\ldots   (A.17)
F_n(\xi_n|\xi_0, \ldots, \xi_{n-1}) = \gamma_n

have the density p(x_0, \ldots, x_n). The values \gamma_0, \ldots, \gamma_n can be generated by sequential or parallel random number generators, based on a variety of algorithms which determine the quality of the sequence of random numbers. A paper by Pierre L'Ecuyer [161] offers one of the best existing explanations of the theory and practice of uniform random number generation. The paper contains a large number of references, which makes the work quite general, but at the same time it is very practical. The reader interested in random number generators is referred to the site [162], where several high-quality generators are presented.

A.5 Classical Limit of the Phonon Interaction
By denoting for convenience f_w(\ldots, t'', \ldots) by \varphi(t'') and introducing the times t_1 = t'' - t', \tau_1 = \tau - t', we express the integral I as follows:

I = \int_{t'}^{t} dt''\; e^{\frac{i}{\hbar}\int_{t'}^{t''}(\Delta(\tau)+\hbar\omega)\,d\tau}\; \varphi(t'')
  = \int_{0}^{t-t'} dt_1\; e^{\frac{i}{\hbar}\int_{0}^{t_1}(\Delta(\tau_1+t')+\hbar\omega)\,d\tau_1}\; \varphi(t_1+t')
  = \int_{0}^{t-t'} dt_1\; e^{\frac{i}{\hbar}\left(\Delta(t')+\hbar\omega+\frac{\hbar^2 q_z e E t_1}{2m}\right)t_1}\; \varphi(t_1+t').


We denote by \alpha and \beta the scales of the energy and the time, respectively, and introduce the dimensionless quantity t^\beta = t/\beta. It is assumed that \alpha is common to the electron and phonon energies as well as to the expression accounting for the electric force:

\epsilon = \epsilon^{\alpha}\alpha; \qquad \epsilon_{ph} = \hbar\omega = \epsilon_{ph}^{\alpha}\alpha; \qquad \epsilon_E = \frac{\hbar^2 q_z e E \beta}{2m} = \epsilon_E^{\alpha}\alpha.

A new integration variable is introduced,

x = \frac{t_1}{\hbar_{\alpha}} = \frac{t_1\alpha}{\hbar} = \frac{t_1^{\beta}\beta\alpha}{\hbar} = \frac{t_1^{\beta}}{\hbar_{\alpha\beta}},

where \hbar_{\alpha} = \hbar/\alpha and \hbar_{\alpha\beta} = \hbar/(\alpha\beta). Furthermore, we use the notation T = (t - t')/\hbar_{\alpha} = (t - t')^{\beta}/\hbar_{\alpha\beta}. The expression for I becomes:

I = \hbar_{\alpha}\int_{0}^{T} dx\; e^{i\left(\Delta^{\alpha}+\epsilon_{ph}^{\alpha}+\epsilon_E^{\alpha}\hbar_{\alpha\beta}x\right)x}\; \varphi(\beta x \hbar_{\alpha\beta} + t').

Now we assume that the energy and time scales are large enough for the limit \hbar_{\alpha\beta} \to 0 to hold. Then T \to \infty, while the term with the field also tends to zero. The time integral I becomes the Fourier transform of the step function \theta(t), which is expressed in terms of generalized functions as follows:

I = \hbar_{\alpha}\int_{0}^{\infty} dx\; e^{i\left(\Delta^{\alpha}+\epsilon_{ph}^{\alpha}\right)x}\; \varphi(t') = \left(\hbar\pi\,\delta(\Delta+\hbar\omega) + \mathrm{v.p.}\,\frac{i\hbar}{\Delta+\hbar\omega}\right)\varphi(t').
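The last equality uses the standard distributional representation of the half-line Fourier integral (the Sokhotski–Plemelj identity); a short derivation via a regularizing factor e^{-\varepsilon x}, sketched here for completeness, reads:

```latex
\int_0^\infty e^{iax}\,dx
  = \lim_{\varepsilon\to 0^+}\int_0^\infty e^{(ia-\varepsilon)x}\,dx
  = \lim_{\varepsilon\to 0^+}\frac{1}{\varepsilon - ia}
  = \lim_{\varepsilon\to 0^+}\left(\frac{\varepsilon}{\varepsilon^2+a^2}
      + \frac{ia}{\varepsilon^2+a^2}\right)
  = \pi\,\delta(a) + i\,\mathrm{v.p.}\,\frac{1}{a}.
```

Applied with a = \Delta^{\alpha} + \epsilon_{ph}^{\alpha} and multiplied by the prefactor \hbar_{\alpha} = \hbar/\alpha, the scaling property \delta((\Delta+\hbar\omega)/\alpha) = \alpha\,\delta(\Delta+\hbar\omega) recovers the result above.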

A.6 Phonon Modes

Assertion A.1 The terms in (12.42) involving products of phonon coordinates in the same mode q can be neglected in the trace operation, provided that the number of the transverse modes is much larger than the number of the longitudinal modes.

The delta functions in \gamma affect only the z components of the wave vectors, as indicated by the corresponding subscripts \delta_{q_{z1}}. In this way (12.42) can be further decomposed into products in which the z components q_{z1}, q_{z2}, \ldots, q_{zk} are fixed:

\sum_{n_{q_1} \ldots n_{q_k}=0}^{\infty}\; \sum_{q_{\perp 1}} \cdots \sum_{q_{\perp k}} n_{q_1} \cdots n_{q_k}\; P_{eq}(n_{q_1}) \cdots P_{eq}(n_{q_k})\; |F_G(q_1)|^2 \cdots |F_G(q_k)|^2.


We also note that modes for which F_G = 0 are not considered in the sum. We now assume that the number of the longitudinal modes is negligible in comparison with the number N of the transverse ones. Consider the product

\sum_{n_{q_1} \ldots n_{q_k}=0}^{\infty} n_{q_1} \cdots n_{q_k}\; P_{eq}(n_{q_1}) \cdots P_{eq}(n_{q_k}),   (A.18)

which evaluates to

\langle n(q_1)\rangle \cdots \langle n(q_k)\rangle, \qquad q_1 \ne q_2 \ne \ldots \ne q_k.   (A.19)

If some of the modes coincide, terms of the form

\langle n(q_1)\rangle \cdots \langle n^{(\alpha)}(q_j)\rangle \cdots \langle n^{(\beta)}(q_m)\rangle \cdots \langle n(q_k)\rangle, \qquad \langle n^{(\alpha)}(q)\rangle = \sum_{n_q=0}^{\infty} n_q^{\alpha}\; P_{eq}(n_q)   (A.20)

appear instead. Now let L be the infimum and M_k the supremum of the following quantities:

L = \inf_q\; \langle n(q)\rangle\, |F_G(q)|^2,

M_k = \sup_{\alpha\beta\ldots;\; q_1 \ldots q_k} \langle n(q_1)\rangle \cdots \langle n^{(\alpha)}(q_j)\rangle \cdots \langle n^{(\beta)}(q_m)\rangle \cdots \langle n(q_k)\rangle\; |F_G(q_1)|^2 \cdots |F_G(q_k)|^2.

These numbers exist due to the fact that the modes are discrete. Furthermore, L is not zero, as the zeros of F_G have already been ruled out, while the phonon energy in (2.14) is a bounded, positive quantity. We now evaluate the relative contribution of the terms (A.20) to the sum (A.18). The number of terms of the form (A.19) is at most N(N-1)\cdots(N-k+1). In the worst-case scenario, when all z coordinates are equal, this contribution is less than

\frac{M_k\left(N^k - N(N-1)\cdots(N-k+1)\right)}{L^k\; N(N-1)\cdots(N-k+1)}.

This term tends to zero as N \to \infty for all k \le K.


A.7 Forward Semi-Discrete Evolution
The already well-established scheme for obtaining the adjoint equation works without major modifications also in the semi-discrete phase space. We introduce the functions V_w^{\pm}(x, m), where V_w^{+} takes the values of V_w if V_w > 0 and 0 otherwise; accordingly, V_w^{-} takes the values of -V_w if V_w < 0 and 0 otherwise. Since the Wigner potential is antisymmetric, the following relation holds:

V_w(x, m) = V_w^{+}(x, m) - V_w^{-}(x, m) = V_w^{+}(x, m) - V_w^{+}(x, -m).   (A.21)

In particular, if V_w^{+}(x, -m) = 0, then V_w^{-}(x, m) = 0; moreover, V_w^{-}(x, -m) = V_w^{+}(x, m). The equation can be augmented as follows:

\left(\frac{d}{dt'} + \gamma(x(t'))\right) f_w(x(t'), m, t') = \sum_{m'=-\infty}^{\infty} V_w(x(t'), m - m')\, f_w(x(t'), m', t') + \gamma(x(t'))\, f_w(x(t'), m, t').   (A.22)

The trajectory x(t') is parametrized by t' and initialized by x, m, t:

x(t') = x - \frac{\hbar m \Delta k}{m^*}(t - t'); \qquad x(t) = x.   (A.23)
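The parametrization (A.23) and its reverted form used below are straightforward to encode. In the following sketch (plain Python; the values of \Delta k, the effective mass, and the phase-space point are arbitrary illustration choices) the two parametrizations are verified to be inverse to each other:

```python
# Illustrative constants (assumed values, not from the text).
HBAR = 1.0545718e-34    # J*s
M_EFF = 9.109e-31       # kg, effective mass m*
DK = 1.0e8              # 1/m, momentum discretization step Delta k

def x_backward(x, m, t, t_prime):
    """Trajectory of (A.23): passes through x at time t, evaluated at t' <= t."""
    return x - HBAR * m * DK / M_EFF * (t - t_prime)

def x_forward(x_p, m, t_prime, t):
    """Reverted parametrization: initialized by (x', m, t'), evaluated at t."""
    return x_p + HBAR * m * DK / M_EFF * (t - t_prime)

x, m, t, tp = 1.0e-9, 5, 2.0e-12, 0.5e-12
xp = x_backward(x, m, t, tp)        # starting point of the backward trajectory
roundtrip = x_forward(xp, m, tp, t)
print(abs(roundtrip - x))           # numerically ~0: the two maps are inverse
```

The initialization condition x(t) = x holds by construction, since the field-free drift term vanishes at t' = t.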

By denoting by \Gamma the following expression,

\Gamma(x, m, m') = \left(V_w^{+}(x, m - m') - V_w^{+}(x, -(m - m'))\right) + \gamma(x)\,\delta_{m,m'},   (A.24)

we obtain

\frac{d}{dt'}\left(e^{-\int_{t'}^{t}\gamma(x(y))\,dy}\; f_w(x(t'), m, t')\right) = \sum_{m'=-\infty}^{\infty} \Gamma(x(t'), m, m')\, f_w(x(t'), m', t')\; e^{-\int_{t'}^{t}\gamma(x(y))\,dy}.   (A.25)

The evolution of the discrete-momentum initial condition f_i(x, m), defined at time 0, is then given by:

f_w(x, m, t) = \int_{0}^{\infty} dt' \int dx' \sum_{m'=-\infty}^{\infty} f_w(x', m', t')\, \Gamma(x', m, m')\; e^{-\int_{t'}^{t}\gamma(x(y))\,dy}\; \theta(t - t')\, \delta(x' - x(t'))\, \theta_D(x') \;+\; e^{-\int_{0}^{t}\gamma(x(y))\,dy}\; f_i(x(0), m).   (A.26)


The kernel has been augmented to complete the sets of variables Q = (x, m, t) and Q' = (x', m', t'). The \theta and delta functions leave the value of the integral unchanged; in particular, \theta_D keeps the integration within the simulation domain (if any). It is seen that the semi-discrete phase space causes no changes other than the replacement of the momentum integration by a sum and a corresponding redefinition of the mean value of a physical quantity A, given by the function A(x, m). Thus, at time \tau it holds:

\langle A\rangle(\tau) = \int dx \sum_{m=-\infty}^{\infty} \int dt\; f(x, m, t)\, A(x, m)\, \delta(t - \tau) = \int dQ\; f(Q)\, A_{\tau}(Q),   (A.27)

where the summation is implicitly assumed in dQ. The adjoint equation, having a solution g and a free term g_0, can then be written in terms of Q without any complications:

f(Q) = \int dQ'\, K(Q, Q')\, f(Q') + f_i(Q) \;\longrightarrow\; g(Q') = \int dQ\, K(Q, Q')\, g(Q) + g_0(Q').   (A.28)

By recalling the identity

\int dQ'\, f_i(Q')\, g(Q') = \int dQ\, g_0(Q)\, f(Q),   (A.29)

under the choice g_0(Q) = A_{\tau}(Q) it follows that:

\langle A\rangle = \int_{0}^{\infty} dt' \int dx' \sum_{m'=-\infty}^{\infty} f_i(x', m')\; e^{-\int_{0}^{t'}\gamma(x'(y))\,dy}\; g(x', m', t'),   (A.30)

where x'(y) is the trajectory initialized by x', m', t', with x'(0) = x'. The adjoint equation is then obtained by integration over the unprimed variables:

g(x', m', t') = A_{\tau}(x', m', t') + \int_{0}^{\infty} dt \int dx \sum_{m=-\infty}^{\infty} g(x, m, t)\, \Gamma(x', m, m')\; e^{-\int_{t'}^{t}\gamma(x(y))\,dy}\; \theta(t - t')\, \delta(x' - x(t'))\, \theta_D(x').   (A.31)

This equation can be reformulated by reverting the parametrization of the trajectory (A.23). Now x', m, and t' initialize the trajectory, and x becomes the variable changing with the running time t:

x = x'(t), \qquad x'(t) = x' + \frac{\hbar m \Delta k}{m^*}(t - t'), \qquad dx = dx'.   (A.32)


In particular, in the new notation the argument of the exponent becomes x(y) = x'(y). The delta function in (A.31) sets x(t') = x', so that

x = x'(t), \qquad x'(y) = x' + \frac{\hbar m \Delta k}{m^*}(y - t'),

and the position argument of g becomes x'(t). This leads to Eqs. (15.16) and (15.17).

References

1. C. Jacoboni and P. Lugli, The Monte Carlo Method for Semiconductor Device Simulation. Springer Verlag, 1990. 2. C. Jacoboni, Theory of Electron Transport in Semiconductors. Springer Series in Solid-State Sciences, 2010. 3. D. Querlioz and P. Dollfus, The Wigner Monte Carlo Method for Nanoelectronic Devices - A Particle Description of Quantum Transport and Decoherence. ISTE-Wiley, 2010. 4. D. Ferry and M. Nedjalkov, The Wigner Function in Science and Technology. Bristol, UK: IoP Publishing, 2018. 5. G. Mikhailov, Optimization of Weighted Monte Carlo Methods. Springer Series in Computational Physics, 2011. 6. “Nobel Media AB 2014, The Nobel Prize in Physics 1956.” http://www.nobelprize.org/nobel_prizes/physics/laureates/1956/. Aug. 2015. 7. “Semiconductor Industry Association.” https://www.semiconductors.org/. 8. M. Golio, “Fifty Years of Moore’s Law,” Proceedings of the IEEE, vol. 113, pp. 1932–1937, 2015. 9. G. Moore, “Progress in Digital Integrated Electronics,” IEDM Technical Digest, vol. 21, pp. 11–13, 1975. 10. C. Ngô and M. V. de Voorde, Nanotechnology in a Nutshell. Atlantis Press, 2014. 11. B. Hoefflinger, “From Microelectronics to Nanoelectronics,” in Chips 2020: A Guide to the Future of Nanoelectronics (B. Hoefflinger, ed.), pp. 13–36, Springer, 2012. 12. “SUPERAID7, EU Horizon2020 Project 688101.” https://www.superaid7.eu/. Duration: 2015–2018. 13. K. Goser and P. G. J. Dienstuhl, Nanoelectronics and Nanosystems. Springer-Verlag, 2004. 14. “The End of More - The Death of Moore’s Law by Steve Blank.” https://medium.com/@sgblank/the-end-of-more-the-death-of-moores-law-5ddcfd8439dd. Sept. 2018. 15. “International Technology Roadmap for Semiconductors,” Semiconductor Industry Association, 2015. http://www.itrs2.net/itrs-reports.html. 16. “SPICE,” (EECS Department of the University of California at Berkeley, USA), General Purpose Circuit Simulation Program. http://bwrc.eecs.berkeley.edu/Classes/icbook/SPICE/. 17. X. Wang, V. Georgiev, F. Adamu-Lema, L. 
Gerrer, S. Amoroso, and A. Asenov, “TCAD-Based Design Technology Co-optimization for Variability in Nanoscale SOI FinFETs,” in Integrated Nanodevice and Nanosystem Fabrication (S. Deleonibus, ed.), 978-981-4774-222, (Singapore), pp. 215–252, Pan Stanford Publishing Pte. Ltd, 2017. 18. S. Selberherr, Analysis and Simulation of Semiconductor Devices. Springer, 1984.

© Springer Nature Switzerland AG 2021 M. Nedjalkov et al., Stochastic Approaches to Electron Transport in Micro- and Nanostructures, Modeling and Simulation in Science, Engineering and Technology, https://doi.org/10.1007/978-3-030-67917-0


19. J. Carrillo, I. Gamba, A. Majorana, and C. Shu, “A WENO-Solver for the Transients of Boltzmann-Poisson System for Semiconductor Devices: Performance and Comparisons with Monte Carlo Methods,” Journal of Computational Physics, vol. 184, pp. 498–525, 2003. 20. K. Rupp, C. Jungemann, S.-M. Hong, M. Bina, T. Grasser, and A. Jüngel, “A Review of Recent Advances in the Spherical Harmonics Expansion Method for Semiconductor Device Simulation,” Journal of Computational Electronics, vol. 15, no. 3, pp. 939–958, 2016. 21. L. Kadanoff and G. Baym, Quantum Statistical Mechanics: Green’s Function Methods in Equilibrium and Nonequilibrium Problems. Frontiers in Physics, W.A. Benjamin, 1962. 22. S. Datta, “Nanoscale Device Modeling: The Green’s function method,” Superlattices & Microstructures, vol. 28, no. 4, pp. 253–278, 2000. 23. A. Svizhenko, M. P. Anatram, T. R. Govindan, B. Biegel, and R. Venugopal, “TwoDimensional Quantum Mechanical Modeling of Nanotransistors,” Journal of Applied Physics, vol. 91, pp. 2343–2354, 2002. 24. A. Svizhenko and M. P. Antram, “Role of Scattering in Nanotransistors,” IEEE Transactions on Electron Devices, vol. 50, pp. 1459–1466, 2003. 25. S. Jin, Y. Park, and H. Min, “A Three-Dimensional Simulation of Quantum Transport in Silicon Nanowire Transistor in the Presence of Electron-Phonon Interactions,” Journal of Applied Physics, vol. 99, p. 123719, 2006. 26. T. Kuhn and F. Rossi, “Monte Carlo Simulation of Ultrafast Processes in Photoexcited Semiconductors: Coherent and Incoherent Dynamics,” Physical Review B, vol. 46, pp. 7496– 7514, 1992. 27. S. Haas, F. Rossi, and T. Kuhn, “Generalized Monte Carlo Approach for the Study of the Coherent Ultrafast Carrier Dynamics in Photoexcited Semiconductors,” Physical Review B, vol. 53, no. 12, pp. 12855–12868, 1996. 28. F. Rossi and T. Kuhn, “Theory of Ultrafast Phenomena in Photoexcited Semiconductors,” Reviews of Modern Physics, vol. 74, pp. 895–950, July 2002. 29. C. K. Zachos, D. B. Fairlie, and T. L. 
Curtright, Quantum Mechanics in Phase Space. Singapore: World Scientific, 2005. 30. E. Wigner, “On the Quantum Corrections for Thermodynamic Equilibrium,” Physical Review, vol. 40, pp. 749–759, 1932. 31. N. C. Dias and J. N. Prata, “Admissible States in Quantum Phase Space,” Annals of Physics, vol. 313, pp. 110–146, 2004. 32. J. E. Moyal, “Quantum Mechanics as a Statistical Theory,” Proceedings of the Cambridge Philosophical Society, vol. 45, pp. 99–124, 1949. 33. H. J. Groenewold, “On the Principles of Elementary Quantum Mechanics,” Physica, vol. 12, no. 7, pp. 405–460, 1946. 34. B. J. Hiley, “Phase Space Descriptions of Quantum Phenomena,” in Quantum Theory: Reconsideration of Foundations (A. Khrennikov, ed.), vol. 2, (Växjö), pp. 267–286, Växjö University Press, 2003. 35. H. Weyl, “Quantenmechanik und Gruppentheorie,” Zeitschrift für Physik, vol. 46, pp. 1–46, 1927. 36. W. Frensley, “Boundary Conditions for Open Quantum Systems Driven Far from Equilibrium,” Reviews of Modern Physics, vol. 62, no. 3, pp. 745–789, 1990. 37. P. Carruthers and F. Zachariasen, “Quantum Collision Theory with Phase-Space Distributions,” Review of Modern Physics, vol. 55, no. 1, pp. 245–285, 1983. 38. M. Nedjalkov, “Wigner Transport in Presence of Phonons: Particle Models of the Electron Kinetics,” in From Nanostructures to Nanosensing Applications, Proceedings of the International School of Physics ‘Enrico Fermi’ (A. P. A. D’Amico, G. Ballestrino, ed.), vol. 160, (Amsterdam), pp. 55–103, IOS Press, 2005. 39. N. C. Kluksdahl, A. M. Kriman, D. K. Ferry, and C. Ringhofer, “Self-Consistent Study of Resonant-Tunneling Diode,” Physical Review B, vol. 39, pp. 7720–7734, 1989. 40. F. Rossi, C.Jacoboni, and M.Nedjalkov, “A Monte Carlo Solution of the Wigner Transport Equation,” Semiconductor Science and Technology, vol. 9, pp. 934–936, 1994.


41. M.Nedjalkov, I.Dimov, F.Rossi, and C.Jacoboni, “Convergency of the Monte Carlo Algorithm for the Wigner Quantum Transport Equation,” Journal of Mathematical and Computer Modelling, vol. 23, no. 8/9, pp. 159–166, 1996. 42. M.Nedjalkov, I.Dimov, P.Bordone, R.Brunetti, and C.Jacoboni, “Using the Wigner Function for Quantum Transport in Device Simulation,” Journal of Mathematical and Computer Modelling, vol. 25, no. 12, pp. 33–53, 1997. 43. M. Nedjalkov, D. Vasileska, D. Ferry, C. Jacoboni, C. Ringhofer, I. Dimov, and V. Palankovski, “Wigner Transport Models of the Electron-Phonon Kinetics in Quantum Wires,” Physical Review B, vol. 74, pp. 035311-1–035311–18, July 2006. 44. M. Nedjalkov, D. Querlioz, P. Dollfus, and H. Kosina, “Wigner Function Approach,” in Nano-Electronic Devices: Semiclassical and Quantum Transport Modeling (D. Vasileska and S. Goodnick, eds.), pp. 289–358, Springer, 2011. invited. 45. B. Ridley, Electrons and Phonons in Semiconductor Multilayers. Cambridge University Press, 1997. 46. O. Madelung, Introduction to Solid-State Theory. Springer, Berlin, 1978. 47. R. Balescu, Equilibrium and Nonequilibrium Statistical Mechanics. Wiley and Sons, 1975. 48. W. Heisenberg, “Über Quantentheoretische Umdeutung Kinematischer und Mechanischer Beziehungen,” Zeitschrift für Physik, vol. 33, pp. 879–893, 1925. 49. E. Schrödinger, “An Undulatory Theory of the Mechanics of Atoms and Molecules,” Physical Review, vol. 28, pp. 1049–1070, 1926. 50. P. Dirac, The Principles of Quantum Mechanics. Clarendon Press, Oxford, 1930. 51. J. von Neumann, Mathematische Grundlagen der Quantenmechanik. Springer, Berlin, 1932. 52. V. I. Tatarskii, “The Wigner Representation of Quantum Mechanics,” Soviet Physics Uspekhi, vol. 26, pp. 311–327, 1983. 53. I. T. Dimov, Monte Carlo Methods for Applied Scientists. World Scientific Publ., 2008. 54. K. Kurosawa, “Monte Carlo Calculation of Hot Electron Problems,” in Journal of the Physical Society of Japan, vol. 21, (Kyoto), pp. 
424–426, 1966. 55. P. Price, “The theory of hot electrons,” IBM Journal of Research and Development, vol. 14, pp. 12–24, 1970. 56. H. Rees, “Calculation of Distribution Functions by Exploiting the Stability of the Steady State,” Journal of Physics and Chemistry of Solids, vol. 30, pp. 643–655, 1969. 57. E. Sangiorgi, B. Ricco, and F. Venturi, “MOS 2 : An Efficient Monte Carlo Simulator for MOS Devices,” IEEE Transactions on Computer-Aided Design, vol. 7, pp. 259–271, Feb. 1988. 58. M. Nedjalkov, H. Kosina, and S. Selberherr, “Monte Carlo Algorithms for Stationary Device Simulation,” Mathematics and Computers in Simulation, vol. 62, no. 3–6, pp. 453–461, 2003. 59. M. Nedjalkov, T. Grasser, H. Kosina, and S. Selberherr, “Boundary Condition Models for Terminal Current Fluctuations,” in Proceedings Simulation of Semiconductor Processes and Devices, (Athens, Greece), pp. 152–155, Springer Verlag, Sept. 2001. 60. M. Nedjalkov, T. Grasser, H. Kosina, and S. Selberherr, “A Transient Model for Terminal Current Noise,” Applied Physics Letters, vol. 80, no. 4, pp. 607–609, 2002. 61. P. Lebwohl and P. Price, “Direct Microscopic Simulation of Gunn-Domain Phenomena,” Applied Physics Letters, vol. 19, pp. 530–533, 1971. 62. A. Phillips and P. Price, “Monte Carlo Calculations on Hot Electron Tails,” Applied Physics Letters, vol. 30, pp. 528–530, May 1977. 63. M. Nedjalkov and P. Vitanov, “Monte Carlo Technique for Simulation of High Energy Electrons,” COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 10, no. 4, pp. 525–530, 1991. 64. M.Nedjalkov and P.Vitanov, “Modification in the One-Particle Monte Carlo Method for Solving the Boltzmann Equation with Changed Variables,” Journal of Applied Physics, vol. 64, no. 7, pp. 3532–3537, 1988. 65. W. Fawcett, A. Boardman, and S. Swain, “Monte Carlo Determination of Electron Transport Properties in Gallium Arsenide.,” Journal of Physics and Chemistry of Solids, vol. 31, pp. 
1963–1990, 1970.


66. G. Baccarani, C.Jacoboni, and A. M. Mazzone, “Current Transport in Narrow-Base Transistors,” Solid-State Electronics, vol. 20, no. 1, pp. 5–10, 1977. 67. A. Reklaitis, “The Calculation of Electron Transient Response in Semiconductors by the Monte Carlo Technique,” Physics Letters, vol. 88A, no. 7, pp. 367–370, 1982. 68. J. Socha and J. Krumhansl, “a Study of Electron Transport in Small Semiconductor Devices: The Monte Carlo Trajectory Integral Method,” Physica B, vol. 134, p. 142, 1985. 69. R. Chambers, “The Kinetic Formulation of Conduction Problems,” Proceedings of the Physical Society (London), vol. A65, pp. 458–459, 1952. 70. M. Nedjalkov and P. Vitanov, “Monte-Carlo Methods for Determination of Transport Properties of Semiconductors,” Solid-State Electronics, vol. 31, pp. 1065–1068, 1988. 71. C. Jacoboni, P. Poli, and L. Rota, “A New Monte Carlo Technique for the Solution of the Boltzmann Transport Equation,” Solid-State Electronics, vol. 31, no. 3/4, pp. 523–526, 1988. 72. M. Nedjalkov and P. Vitanov, “Iteration Approach for Solving the Boltzmann Equation with the Monte Carlo Method,” Solid-State Electronics, vol. 32, no. 10, pp. 893–896, 1989. 73. M. Nedjalkov and P. Vitanov, “Application of the Iteration Approach to the Ensemble Monte Carlo Technique,” Solid-State Electronics, vol. 33, no. 4, pp. 407–410, 1990. 74. M.Nedjalkov and I.Dimov, “Convergency of the Monte Carlo Algorithms for Linear transport Modeling,” Mathematics and Computers in Simulations, vol. 47, pp. 383–390, 1998. 75. L. Reggiani, Topics in Applied Physics: Hot-Electron Transport in Semiconductors. Springer, Berlin, 1985. 76. L. Rota, C. Jacoboni, and P. Poli, “Weighted Ensemble Monte Carlo,” Solid-State Electronics, vol. 32, no. 12, pp. 1417–1421, 1989. 77. H. Kosina, M. Nedjalkov, and S. Selberherr, “Theory of the Monte Carlo Method for Semiconductor Device Simulation,” IEEE Transactions on Electron Devices, vol. 47, no. 10, pp. 1898–1908, 2000. 78. P. 
Price, “On the Calculation of Differential Mobility,” Journal of Applied Physics, vol. 54, no. 6, pp. 3616–3617, 1983. 79. L. Reggiani, E. Starikov, P. Shiktorov, V. Gružinskis, and L. Varani, “Modelling of SmallSignal Response and Electronic Noise in Semiconductor High-Field Transport,” Semiconductor Science and Technology, vol. 12, pp. 141–156, 1997. 80. V. Gružinskis, E. Starikov, and P. Shiktorov, “Conservation Equations for Hot Carriers – I. Transport Models,” Solid-State Electronics, vol. 36, no. 7, pp. 1055–1066, 1993. 81. E. Starikov and P. Shiktorov, “New Approach to Calculation of the Differential Mobility Spectrum of Hot Carriers: Direct Modeling of the Gradient of the Distribution Function by the Monte Carlo Method,” Soviet Physics Semiconductors, vol. 22, no. 1, pp. 45–48, 1988. 82. E. Starikov, P. Shiktorov, V. Gruzinskis, L. Varrani, J. Vaissiere, J. Nougier, and L. Reggiani, “Monte Carlo Calculation of Noise and Small-Signal Impedance Spectra in Submicrometer GaAs n+nn+ Diodes,” Journal of Applied Physics, vol. 79, no. 1, pp. 242–252, 1996. 83. J. Vaissiere, J. Nougier, L. Varrani, P. Houlet, L. Hlou, E. Starikov, P. Shiktorov, and L. Reggiani, “Small Signal Analysis of the Boltzmann Equation from Harmonic- and Impulse Response Methods,” Physical Review B, vol. 49, no. 16, pp. 11144–11152, 1994. 84. M. Nedjalkov, H. Kosina, and S. Selberherr, “Monte-Carlo Method for Direct Computation of the Small Signal Kinetic Coefficients,” in Proceedings Simulation of Semiconductor Processes and Devices, (Kyoto, Japan), pp. 155–158, Sept. 1999. 85. H. Kosina, M. Nedjalkov, and S. Selberherr, “A Monte Carlo Method for Small Signal Analysis of the Boltzmann Equation,” Journal of Applied Physics, vol. 87, no. 9, pp. 4308– 4314, 2000. 86. H. Kosina, M. Nedjalkov, and S. Selberherr, “The Stationary Monte Carlo Method for Device Simulation - Part I: Theory,” Journal of Applied Physics, vol. 93, no. 6, pp. 3553–3563, 2003. 87. M. Nedjalkov, H. Kosina, and S. 
Selberherr, “The Stationary Monte Carlo Method for Device Simulation - Part II: Event Biasing and Variance Estimation,” Journal of Applied Physics, vol. 93, no. 6, pp. 3564–3571, 2003.


88. H. Kosina, M. Nedjalkov, and S. Selberherr, “A Stable Backward Monte Carlo Method for the Solution of the Boltzmann Equation,” in Large Scale Scientific Computations 2003 (I. Lirkov, S. Margenov, and P. Yalamov, eds.), LNCS 2907, (Berlin, Heidelberg), pp. 170–177, Springer Verlag, 2004. 89. M. Nedjalkov, D. Vasileska, I. Dimov, and G. Arsov, “Mixed Initial-Boundary Value Problem in Particle Modeling of Microelectronic Devices,” Monte Carlo Methods and Applications, vol. 13, no. 4, pp. 299–331, 2007. 90. M. Nedjalkov, S. Ahmed, and D. Vasileska, “A Self-Consistent Event Biasing Scheme for Statistical Enhancement,” Journal of Computational Electronics, vol. 3, no. 3–4, pp. 305– 309, 2004. 91. M. Nedjalkov and H. Kosina, “Variance of the Ensemble Monte Carlo Algorithm for Semiconductor Transport Modeling,” Mathematics and Computers in Simulations, vol. 55, no. 1–3, pp. 191–198, 2001. 92. U. Ravaiolli, M. A. Osman, W. Poetz, N. C. Kluksdahl, and D. K. Ferry, “Investigation of Ballistic Transport through Resonant-Tunneling Quantum Wells Using Wigner Function Approach,” Physica B+C, vol. 134, pp. 36–40, 1985. 93. W. Frensley, “Wigner-Function Model of Resonant-Tunneling Semiconductor Device,” Physical Review B, vol. 36, no. 3, pp. 1570–1580, 1987. 94. B. Biegel and J. Plummer, “Comparison of Self-Consistency Iteration Options for the Wigner Function Method of Quantum Device Simulation,” Physical Review B, vol. 54, pp. 8070– 8082, 1996. 95. N. C. Kluksdahl, W. Poetz, U. Ravaiolli, and D. K. Ferry, “Wigner Function Study of a Double Quantum Well Resonant-Tunneling Diode,” Superlattices & Microstructures, vol. 3, pp. 41– 45, 1987. 96. K. Gullapalli, D. Miller, and D. Neikirk, “Simulation of Quantum Transport in MemorySwitching Double-Barrier Quantum-Well Diodes,” Physical Review B, vol. 49, pp. 2622– 2628, 1994. 97. F. A. Buot and K. L. 
Jensen, “Lattice Weil-Wigner Formulation of Exact-Many Body Quantum-Transport Theory and Applications to Novel Solid-State Quantum-Based Devices,” Physical Review B, vol. 42, no. 15, pp. 9429–9457, 1990. 98. R. K. Mains and G. I. Haddad, “Wigner Function Modeling of Resonant Tunneling Diodes With High Peak-To-Valley Ratios,” Journal of Applied Physics, vol. 64, pp. 5041–5044, 1988. 99. J. Schilp, T. Kuhn, and G. Mahler, “Electron-Phonon Quantum Kinetics in Pulse-Excited Semiconductors: Memory and Renormalization Effects,” Physical Review B, vol. 50, no. 8, pp. 5435–5447, 1994. 100. C. Fuerst, A. Leitenstorfer, A. Laubereau, and R. Zimmermann, “Quantum Kinetic ElectronPhonon Interaction in GaAs: Energy Nonconserving Scattering Events and Memory Effects,” Physical Review Letters, vol. 78, pp. 3733–3736, 1997. 101. M. Nedjalkov and I. Dimov, “Statistical Modelling of Pulse Excited Electron Quantum Kinetics in One Band Semiconductor,” Mathematics and Computers in Simulations, vol. 47, pp. 391–402, 1998. 102. K. Thornber, “High-Field Electronic Conduction in Insulators,” Solid-State Electronics, vol. 21, pp. 259–266, 1978. 103. J. Barker and D. Ferry, “On the Physics and Modeling of Small Semiconductor Devices–I,” Solid-State Electronics, vol. 23, pp. 519–530, 1980. 104. M. V. Fischetti, “Monte Carlo Solution to the Problem of High-Field Electron Heating in SiO2 ,” Physical Review Letters, vol. 53, no. 3, p. 1755, 1984. 105. M. Herbst, M. Glanemann, V. Axt, and T. Kuhn, “Electron-Phonon Quantum Kinetics for Spatially Inhomogenenous Excitations,” Physical Review B, vol. 67, pp. 195305–1 – 195305– 18, 2003. 106. H.Haug and S.W.Koch, Quantum Theory of the Optical and Electronic Properties of Semiconductors (3rd ed.). Singapore: World Scientific, 1994.


107. I. Levinson, “Translational Invariance in Uniform Fields and the Equation for the Density Matrix in the Wigner Representation,” Soviet Physics JETP, vol. 30, no. 2, pp. 362–367, 1970.
108. J. Barker and D. Ferry, “Self-Scattering Path-Variable Formulation of High-Field, Time-Dependent, Quantum Kinetic Equations for Semiconductor Transport in the Finite-Collision-Duration Regime,” Physical Review Letters, vol. 42, no. 26, pp. 1779–1781, 1979.
109. R. Brunetti, C. Jacoboni, and F. Rossi, “Quantum Theory of Transient Transport in Semiconductors: A Monte Carlo Approach,” Physical Review B, vol. 39, pp. 10781–10790, May 1989.
110. M. Nedjalkov, H. Kosina, R. Kosik, and S. Selberherr, “A Space Dependent Wigner Equation Including Phonon Interaction,” Journal of Computational Electronics, vol. 1, no. 1–2, pp. 27–33, 2002.
111. C. Ringhofer, M. Nedjalkov, H. Kosina, and S. Selberherr, “Semi-Classical Approximation of Electron-Phonon Scattering beyond Fermi’s Golden Rule,” SIAM Journal on Applied Mathematics, vol. 64, pp. 1933–1953, 2004.
112. P. Lipavski, F. Khan, F. Abdolsalami, and J. Wilkins, “High-Field Transport in Semiconductors. I. Absence of the Intra-Collisional Field Effect,” Physical Review B, vol. 43, no. 6, pp. 4885–4896, 1991.
113. D. Querlioz, H. N. Nguyen, J. Saint-Martin, A. Bournel, S. Galdin-Retailleau, and P. Dollfus, “Wigner-Boltzmann Monte Carlo Approach to Nanodevice Simulation: From Quantum to Semiclassical Transport,” Journal of Computational Electronics, vol. 8, pp. 324–335, 2009.
114. M. Nedjalkov, R. Kosik, H. Kosina, and S. Selberherr, “A Wigner Equation with Quantum Electron-Phonon Interaction,” Microelectronic Engineering, vol. 63, no. 1–3, pp. 199–203, 2002.
115. P. Bordone, M. Pascoli, R. Brunetti, A. Bertoni, and C. Jacoboni, “Quantum Transport of Electrons in Open Nanostructures with the Wigner-Function Formalism,” Physical Review B, vol. 59, no. 4, pp. 3060–3069, 1999.
116. C. Jacoboni, A. Bertoni, P. Bordone, and R. Brunetti, “Wigner-Function Formulation for Quantum Transport in Semiconductors: Theory and Monte Carlo Approach,” Mathematics and Computers in Simulation, vol. 55, no. 1–3, pp. 67–78, 2001.
117. P. Bordone, A. Bertoni, R. Brunetti, and C. Jacoboni, “Monte Carlo Simulation of Quantum Electron Transport Based on Wigner Paths,” Mathematics and Computers in Simulation, vol. 62, p. 307, 2003.
118. A. Bertoni, P. Bordone, R. Brunetti, and C. Jacoboni, “The Wigner Function for Electron Transport in Mesoscopic Systems,” Journal of Physics: Condensed Matter, vol. 11, pp. 5999–6012, 1999.
119. C. Jacoboni and P. Bordone, “Wigner Function Approach to Non-Equilibrium Electron Transport,” Reports on Progress in Physics, vol. 67, pp. 1033–1075, 2004.
120. M. Nedjalkov, I. Dimov, and H. Haug, “Numerical Studies of the Markovian Limit of the Quantum Kinetics with Phonon Scattering,” Physica Status Solidi (b), vol. 209, pp. 109–121, 1998.
121. Y. Yamada, H. Tsuchiya, and M. Ogawa, “Quantum Transport Simulation of Silicon-Nanowire Transistors Based on Direct Solution Approach of the Wigner Transport Equation,” IEEE Transactions on Electron Devices, vol. 56, pp. 1396–1401, 2009.
122. S. Barraud, “Phase-Coherent Quantum Transport in Silicon Nanowires Based on Wigner Transport Equation: Comparison with the Nonequilibrium-Green-Function Formalism,” Journal of Applied Physics, vol. 106, p. 063714, 2009.
123. K.-Y. Kim and B. Lee, “On the High Order Numerical Calculation Schemes for the Wigner Transport Equation,” Solid-State Electronics, vol. 43, pp. 2243–2245, 1999.
124. J. Cervenka, P. Ellinghaus, and M. Nedjalkov, “Deterministic Solution of the Discrete Wigner Equation,” in Numerical Methods and Applications (I. Dimov, S. Fidanova, and I. Lirkov, eds.), pp. 149–156, Springer International Publishing, 2015.
125. B. S. M. Van de Put and W. Magnus, “Efficient Solution of the Wigner-Liouville Equation Using a Spectral Decomposition of the Force Field,” Journal of Computational Physics, vol. 350, pp. 314–325, December 2017.
126. S. Shao, T. Lu, and W. Cai, “Adaptive Conservative Cell Average Spectral Element Methods for Transient Wigner Equation in Quantum Transport,” Communications in Computational Physics, vol. 9, no. 3, pp. 711–739, 2011.
127. Z. Chen, Y. Xiong, and S. Shao, “Numerical Methods for the Wigner Equation with Unbounded Potential,” Journal of Scientific Computing, vol. 79, pp. 345–368, Apr 2019.
128. H. Tsuchiya and U. Ravaioli, “Particle Monte Carlo Simulation of Quantum Phenomena in Semiconductor Devices,” Journal of Applied Physics, vol. 89, pp. 4023–4029, April 2001.
129. R. Sala, S. Brouard, and G. Muga, “Wigner Trajectories and Liouville’s Theorem,” Journal of Chemical Physics, vol. 99, pp. 2708–2714, 1993.
130. D. Ferry, R. Akis, and D. Vasileska, “Quantum Effect in MOSFETs: Use of an Effective Potential in 3D Monte Carlo Simulation of Ultra-Short Channel Devices,” International Electron Devices Meeting, pp. 287–290, 2000.
131. L. Shifren, R. Akis, and D. Ferry, “Correspondence Between Quantum and Classical Motion: Comparing Bohmian Mechanics with Smoothed Effective Potential Approach,” Physics Letters A, vol. 274, pp. 75–83, 2000.
132. S. Ahmed, C. Ringhofer, and D. Vasileska, “An Effective Potential Approach to Modeling 25nm MOSFET Devices,” Journal of Computational Electronics, vol. 2, pp. 113–117, 2003.
133. C. Ringhofer, C. Gardner, and D. Vasileska, “Effective Potentials and Quantum Fluid Models: A Thermodynamic Approach,” International Journal of High Speed Electronics and Systems, vol. 13, pp. 771–801, 2003.
134. K. L. Jensen and F. A. Buot, “The Methodology of Simulating Particle Trajectories Through Tunneling Structures Using a Wigner Distribution Approach,” IEEE Transactions on Electron Devices, vol. 38, no. 10, pp. 2337–2347, 1991.
135. H. Tsuchiya and T. Miyoshi, “Simulation of Dynamic Particle Trajectories through Resonant-Tunneling Structures Based upon Wigner Distribution Function,” Proc. 6th Int. Workshop on Computational Electronics IWCE6, Osaka, pp. 156–159, 1998.
136. M. Pascoli, P. Bordone, R. Brunetti, and C. Jacoboni, “Wigner Paths for Electrons Interacting with Phonons,” Physical Review B, vol. 58, pp. 3503–3506, 1998.
137. L. Shifren and D. K. Ferry, “A Wigner Function Based Ensemble Monte Carlo Approach for Accurate Incorporation of Quantum Effects in Device Simulation,” Journal of Computational Electronics, vol. 1, pp. 55–58, 2002.
138. D. Querlioz, P. Dollfus, V. N. Do, A. Bournel, and V. L. Nguyen, “An Improved Wigner Monte-Carlo Technique for the Self-Consistent Simulation of RTDs,” Journal of Computational Electronics, vol. 5, pp. 443–446, 2006.
139. M. Nedjalkov, H. Kosina, S. Selberherr, C. Ringhofer, and D. Ferry, “Unified Particle Approach to Wigner-Boltzmann Transport in Small Semiconductor Devices,” Physical Review B, vol. 70, pp. 115319–115335, Sept. 2004.
140. R. Kosik, M. Thesberg, J. Weinbub, and H. Kosina, “On the Consistency of the Stationary Wigner Equation,” in Book of Abstracts of the International Wigner Workshop (IW2), pp. 30–31, 2019.
141. M. Nedjalkov, J. Weinbub, M. Ballicchia, S. Selberherr, I. Dimov, D. Ferry, and K. Rupp, “Posedness of Stationary Wigner Equation,” in Book of Abstracts of the International Wigner Workshop (IW2), pp. 32–33, 2019.
142. R. Rosati, F. Dolcini, R. Iotti, and F. Rossi, “Wigner-Function Formalism Applied to Semiconductor Quantum Devices: Failure of the Conventional Boundary Condition Scheme,” Physical Review B, vol. 88, p. 035401, Jul 2013.
143. T. Schmidt and K. Moehring, “Stochastic Path-Integral Simulation of Quantum Scattering,” Physical Review A, vol. 48, no. 5, pp. R3418–R3420, 1993.
144. M. Nedjalkov, H. Kosina, and S. Selberherr, “A Weight Decomposition Approach to the Sign Problem in Wigner Transport Simulations,” in Large Scale Scientific Computations 2003 (I. Lirkov et al., eds.), LNCS 2907, (Berlin Heidelberg), pp. 178–184, Springer Verlag, 2004.
145. M. Nedjalkov, H. Kosina, E. Ungersboeck, and S. Selberherr, “A Quasi-Particle Model of the Electron-Wigner Potential Interaction,” Semiconductor Science and Technology, vol. 19, pp. S226–S228, 2004.
146. H. Kosina, M. Nedjalkov, and S. Selberherr, “Solution of the Space-dependent Wigner Equation Using a Particle Model,” Monte Carlo Methods and Applications, vol. 10, no. 3–4, pp. 359–368, 2004.
147. H. Kosina and M. Nedjalkov, “Review Chapter: Wigner Function Based Device Modeling,” in Handbook of Theoretical and Computational Nanotechnology (M. Rieth and W. Schommers, eds.), ISBN: 1-58883-042-X, (Los Angeles), pp. 731–763, American Scientific Publishers, 2006.
148. P. Schwaha, M. Nedjalkov, S. Selberherr, and I. Dimov, “Monte Carlo Investigations of Electron Decoherence due to Phonons,” in Monte Carlo Methods and Applications (K. K. Sabelfeld and I. Dimov, eds.), pp. 203–211, De Gruyter, 2012.
149. M. Benam, M. Nedjalkov, and S. Selberherr, “A Wigner Potential Decomposition in the Signed-Particle Monte Carlo Approach,” in Book of Abstracts of the Ninth International Conference on Numerical Methods and Applications (NM&A’18), pp. 34–35, 2018.
150. G. Zandler, A. D. Carlo, K. Kometer, P. Lugli, P. Vogl, and E. Gornik, “A Comparison of Monte Carlo and Cellular Automata Approaches for Semiconductor Device Simulation,” IEEE Electron Device Letters, vol. 14, no. 2, pp. 77–79, 1993.
151. P. Ellinghaus, M. Nedjalkov, and S. Selberherr, “Implications of the Coherence Length on the Discrete Wigner Potential,” in The 17th International Workshop on Computational Electronics (IWCE), pp. 1–3, IEEE Xplore, 2014.
152. P. Ellinghaus, M. Nedjalkov, and S. Selberherr, “Efficient Calculation of the Two-Dimensional Wigner Potential,” in The 17th International Workshop on Computational Electronics (IWCE), pp. 1–3, IEEE Xplore, 2014.
153. M. Nedjalkov, P. Schwaha, S. Selberherr, J. M. Sellier, and D. Vasileska, “Wigner Quasi-Particle Attributes – An Asymptotic Perspective,” Applied Physics Letters, vol. 102, no. 16, pp. 163113-1–163113-4, 2013.
154. P. Ellinghaus, Two-Dimensional Wigner Monte Carlo Simulation for Time-Resolved Quantum Transport with Scattering. PhD thesis, E360, 2016.
155. I. Dimov and M. Savov, “Probabilistic Analysis of the Single Particle Wigner Monte Carlo Method,” Mathematics and Computers in Simulation (MATCOM), vol. 173, no. C, pp. 32–50, 2020.
156. E. Bolthausen, “The Berry-Esseen Theorem for Functionals of Discrete Markov Chains,” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete (Probability Theory and Related Fields), vol. 54, no. 1, pp. 59–73, 1980.
157. O. Muscato and W. Wagner, “A Class of Stochastic Algorithms for the Wigner Equation,” SIAM Journal on Scientific Computing, vol. 38, pp. 1483–1507, 2016.
158. I. Shevtsova, “An Improvement of Convergence Rate Estimates in the Lyapunov Theorem,” Doklady Mathematics, vol. 82, pp. 862–864, 2010.
159. O. Muscato and W. Wagner, “A Stochastic Algorithm Without Time Discretization Error for the Wigner Equation,” Kinetic & Related Models, vol. 12, p. 59, 2019.
160. S. Shao and Y. Xiong, “A Branching Random Walk Method for Many-Body Wigner Quantum Dynamics,” Numerical Mathematics: Theory, Methods and Applications, vol. 12, pp. 21–71, 2019.
161. P. L’Ecuyer, “History of Uniform Random Number Generation,” in 2017 Winter Simulation Conference (WSC), pp. 202–230, 2017.
162. http://simul.iro.umontreal.ca/. The software is free for users and includes code written in C, C++, and Java. We also recommend the OpenCL random number generation library clRNG, http://simul.iro.umontreal.ca/clrng/indexe.html.