Some Unusual Topics in Quantum Mechanics [2 ed.] 9783031359613, 9783031359620

This second edition of Some Unusual Topics in Quantum Mechanics builds upon the topics covered in the first, with additional topics.


English · 314 [326] pages · 2023


Table of contents:
Foreword to the First Edition
Preface to the Second Edition
Preface to the First Edition
Acknowledgement
Notation
Contents
1 Position Operators of Non-relativistic Quantum Mechanics
1.1 Introduction
1.2 Spinless Relativistic Particle
1.3 Lie Algebra of the Poincare Group
1.4 Position Operators of NRQM
1.5 Projective Representations
1.6 Non-relativistic Limit
1.7 Phase Factors of the Galilean Group
1.8 Further Exercises
1.9 Notes
1.9.1 Time Translation and Time Evolution
1.9.2 The Newton-Wigner Position Operator
1.9.3 Poincare and Galilean Group Representations
References
2 A Bundle Picture of Quantum Mechanics
2.1 The Bundle Picture
2.1.1 Sections
2.1.2 Parallel Transport
2.1.3 Change of Basis or of Trivialization
2.1.4 Transport Round a Loop and Curvature
2.2 Covariant Derivative
2.2.1 Covariant Derivative of a Section
2.2.2 Covariant Derivative of an Operator
2.3 The Base X of Galilean Frames
2.3.1 The Hypothesis of Parallel Transport
2.3.2 Calculation of i
2.3.3 Galilean Bundle Curvature Is Zero
2.4 Application to Accelerated Frames
2.4.1 A Linearly Accelerated Frame
2.4.2 Rotating Frame: Centrifugal and Coriolis Forces
2.5 Further Exercises
2.6 Notes
2.6.1 Bundle Picture
2.6.2 Geometric Phase
References
3 A Beam of Particles = A Plane Wave?
3.1 A Coherent Bunch
3.2 Scattering Theory
3.2.1 Scattering States and the Moller Operator
3.2.2 Scattering Matrix
3.2.3 Properties of Ω(±) and S
3.3 Transition Rate
3.3.1 Irreversibility of Transitions
3.3.2 Transition Probability per Unit Time
3.4 Cross Sections
3.4.1 Non-relativistic Scattering from a Potential
3.4.2 Colliding Non-relativistic Beams
3.4.3 Relativistic Scattering
3.5 Comments on Formulas of Sect. 3.2.3
3.5.1 Moller Operators
3.5.2 S-Matrix
3.5.3 Integral Equations
3.5.4 Ω(+) and S in Energy Basis
3.6 Further Exercises
3.7 Notes
References
4 Star-Product Formulation of Quantum Mechanics
4.1 Weyl Ordering and the Star Product
4.2 Derivation for Star-Product Expression
4.3 Wigner Distribution Function
4.4 Trace of f̂
4.5 Trace of a Product: Expectation Values
4.6 Eigenvalues
4.7 Dynamics and the Moyal Bracket
4.8 Star Exponential and the Path Integral
4.9 Further Exercises
4.10 Notes
4.10.1 Weyl Correspondence and Wigner Distribution
4.10.2 Star Product and Moyal Bracket
References
5 Can There Be a Non-linear Quantum Mechanics?
5.1 Hamiltonian Equations in Quantum Mechanics
5.2 Observables and Poisson Bracket
5.3 Symmetry Transformations
5.4 Eigenvalues
5.5 Non-linear Quantum Mechanics?
5.6 Non-linear Terms: Simple Examples
5.6.1 Example 1: Extra Energy Level
5.6.2 Example 2: Asymptotic States
5.7 Further Exercises
5.8 Notes
5.8.1 Non-linear Quantum Mechanics
5.8.2 Non-linear Schrodinger Equation
References
6 Interaction = Exchange of Quanta
6.1 Non-relativistic "Potential"
6.2 The Simplest Model of Quanta Exchange
6.2.1 The Hamiltonian
6.2.2 Bare and Dressed States
6.2.3 Single-Dressed B-particle
6.2.4 B-B Effective Interaction
6.3 Exercises
6.4 Notes
References
7 Proof of Wigner's Theorem
7.1 Rays and Symmetry Transformation
7.2 Wigner's Theorem
7.3 Bargmann Invariant
7.4 A Lemma
7.5 Proof of the Theorem
7.6 The Final Step
7.7 Further Exercises
7.8 Notes
References
8 Hilbert Space: An Introduction
8.1 Definition
8.2 Pythagoras Theorem
8.3 Bessel's Inequality
8.4 Schwarz and Triangle Inequalities
8.5 Complete Orthonormal Set
8.6 Subspaces
8.7 Direct Sum of Hilbert Spaces
8.8 Tensor Product of Hilbert Spaces
8.9 Bounded Linear Operators
8.10 Bounded Linear Functionals
8.11 Adjoint of a Bounded Operator
8.12 Projection Operators
8.13 Unitary Operators
8.14 Eigenvalues and Spectrum
8.15 Spectral Theorem
8.16 Density Matrix
8.17 Notes
References
9 What Is an "Essentially Self-Adjoint" Operator?
9.1 Introduction
9.2 Unbounded Operators
9.2.1 Domain and Range of an Operator
9.3 Graph of an Operator
9.4 Closed Operator
9.5 Adjoint in the General Case
9.6 Symmetric and Self-Adjoint Operators
9.6.1 Boundary Conditions for Self-Adjointness of H = −d²/dx² on L²([a, b])
9.7 Self-Adjoint Extensions
9.8 Notes
References
10 Is There a Time-Energy Uncertainty Relation?
10.1 Introduction
10.1.1 Internal and External Time
10.1.2 Individual Measurement Versus Statistical Relations
10.1.3 The Einstein-Bohr Argument of 1927
10.2 Mandelstam-Tamm Relation
10.2.1 A Characteristic Time for a Time-Dependent Observable
10.2.2 Half-Life of a State
10.3 Krylov-Fock Survival Probability
10.4 Wigner's Uncertainty Relation
10.4.1 Definition of Δt and ΔE
10.4.2 Calculation of Δt
10.4.3 The Product (Δt)²(ΔE)²
10.5 Notes
References
11 Fock Spaces
11.1 Introduction
11.2 Space for Many Particles
11.3 Fock Space for Bosons
11.3.1 Fock Space Fs
11.3.1.1 Symmetrizer
11.3.1.2 Annihilation Operator on General Fock Space Fn
11.3.1.3 Annihilation Operator on Fns
11.3.1.4 Creation Operator on Fn
11.3.1.5 Creation Operator on Fns
11.3.1.6 a†f Is Adjoint of af
11.3.2 Commutation Relations
11.4 Fock Space for Fermions
11.4.1 Fock Space Fa
11.4.1.1 Annihilation and Creation Operators on Fa
11.4.2 Anti-commutation Relations
11.5 Orthonormal Bases in Hns and Hna
11.5.1 Orthonormal Basis in Hn
11.5.2 Orthonormal Basis in Hns
11.5.3 Orthonormal Basis in Hna
11.6 Occupation Number Representation
11.6.1 Symmetric Occupation States
11.6.2 Antisymmetric Occupation States
11.7 Summary
11.7.1 Af, A†f
11.7.2 Symmetrizer S(n) and Symmetric Subspaces
11.7.3 Antisymmetrizer A(n) and Antisymmetric Subspaces
11.7.4 Orthonormal Bases in Hns
11.7.5 Orthonormal Basis in Hna
11.8 Notes
References
12 Second Quantization
12.1 Tensor Product of Observables
12.2 One-Particle Observables
12.3 Two-Particle Observables
12.3.1 Bosons
12.3.2 Fermions
12.4 Symmetries
12.5 Examples
12.5.1 Non-relativistic Particles
12.5.1.1 Bosons
12.5.1.2 Fermions
12.5.2 Heisenberg Picture
12.5.2.1 The Name "Second Quantization"
12.6 Many Species of Particles: The Standard Convention
12.7 A Briefest Outline of Feynman Diagrams
12.8 Summary
12.8.1 One-Particle Operators
12.8.2 Two-Particle Observables
12.8.2.1 Bosons
12.8.2.2 Fermions
12.8.3 Symmetries
12.9 Notes
12.9.1 Definition of the Tensor Product
12.9.2 The Standard Convention
References
13 Relativistic Configuration States and Quantum Fields
13.1 Spin Zero Particle of Mass m
13.2 Configuration Space States
13.2.1 Inner Product in Configuration Space
13.2.2 ⟨ct, x|ct, y⟩ ≠ 0
13.3 Fock Space for Spin Zero Particles
13.3.1 Microcausality
13.4 Space and Time-Reversal Covariance
13.5 Observables
13.5.1 Pauli-Jordan Function
13.5.2 Momentum and Energy
13.6 Charged Spin Zero Particles
13.6.1 Microcausality
13.6.2 Charge Conjugation
13.7 Spin 1/2 Particles
13.7.1 Introduction to SL(2,C)
13.7.1.1 "Direct" Boost
13.7.2 Proper Orthochronous Lorentz Group
13.8 Covariant States
13.8.1 Momentum States
13.8.2 Wigner Rotation
13.8.3 Dirac Equation in Momentum Space
13.8.4 Parity P
13.8.5 Configuration Space States
13.8.6 Dirac Equations in Configuration Space
13.9 Fock Space for Spin 1/2 Particles
13.9.1 The Anti-particle
13.9.2 Microcausality
13.10 Notes
13.10.1 Relativistic Invariance
13.10.2 Dirac Equation
References
14 Minimum Uncertainty States
14.1 Uncertainty Inequality
14.2 Condition for Equality
14.2.1 Example
14.3 Coherent States
14.3.1 Annihilation and Creation Operators
14.3.2 Eigenstates of af
14.3.3 Minimum Uncertainty States
14.4 Notes
References
15 Path Integrals in General Coordinates
15.1 Introduction
15.2 Feynman Path Integral
15.2.1 The Short-Time Propagator (STP)
15.2.2 The VPM determinant
15.3 Path Integral as a Scheme of Quantization
15.3.1 A Free Particle in Two Dimensions
15.4 Time Re-parametrization
15.5 Pseudo Time Scaling of Paths
15.6 Coulomb Potential
15.7 Schrodinger Equations for Riemannian Configuration Space
15.7.1 Lagrangian STP
15.7.2 The Canonical STP
15.8 Notes
15.8.1 K-S Transformation
15.8.2 General Reviews
References
16 A Brief Pre-history of Matrix Mechanics and O(4) Symmetry of the Hydrogen Atom
16.1 Niels Bohr (1913)
16.2 Albert Einstein (1917)
16.3 Ladenberg (1919)
16.3.1 Drude's Classical Theory of Dispersion
16.3.2 Loss of Energy: Classical Radiation
16.3.3 Loss of Energy: Quantum Theory
16.3.4 Kramers (1924)
16.4 Bohr, Kramers, Slater (1924)
16.5 Thomas, Reiche, and Kuhn (1925)
16.5.1 Remark About the Kuhn Formula
16.6 Heisenberg
16.6.1 Heisenberg's Proof of the Thomas-Kuhn Sum Rule, July 1925
16.7 Hydrogen Atom and O(4)
16.7.1 Runge-Lenz Vector in Classical Mechanics
16.7.2 Runge-Lenz Vector in Quantum Mechanics
16.7.3 A Useful Result
16.8 Lie Algebra of O(4)
16.8.1 Lie Algebra of O(3) and SU(2)
16.8.1.1 Important Note
16.9 Energy Spectrum via O(4)
16.10 Note
References
17 Non-locality in Quantum Mechanics and Bell's Inequality
17.1 Introduction
17.1.1 Probabilities in Quantum Mechanics
17.1.2 EPR Incompleteness Argument
17.1.3 David Bohm's Reformulation of the EPR Argument
17.1.4 Hidden Variables and Bell's Inequality
17.2 Bell's Hidden Variable Model for a Spin 1/2 Particle
17.2.1 Impossibility Proof of Hidden Variable Theories
17.3 The Bohm Version of EPR: Two Spins
17.3.1 Measuring Spins in Two Different Directions
17.4 Bell's Inequality
17.4.1 Wigner's Proof
17.4.2 BCHSH Generalization
17.5 Bell's Inequality Violation (BIV)
17.5.1 Entangled Photon Pairs
17.5.1.1 State of Entangled Photons
17.5.1.2 Polarization Vectors in Classical Optics
17.5.1.3 Bell Inequality for Photons
17.5.1.4 Polarizers
17.5.1.5 Aspect Experiment with Variable Axes
17.5.2 Creation of Entangled Photon Pairs by SPDC
17.5.3 Summary of Experimental Evidence for BIV
17.5.4 Are Inequalities Really Necessary?
17.6 Epilogue
17.7 Notes
17.8 General Reviews
References
Index

Lecture Notes in Physics

Pankaj Sharan

Some Unusual Topics in Quantum Mechanics Second Edition

Lecture Notes in Physics

Founding Editors
Wolf Beiglböck, Heidelberg, Germany
Jürgen Ehlers, Potsdam, Germany
Klaus Hepp, Zürich, Switzerland
Hans-Arwed Weidenmüller, Heidelberg, Germany

Volume 1020

Series Editors
Roberta Citro, Salerno, Italy
Peter Hänggi, Augsburg, Germany
Morten Hjorth-Jensen, Oslo, Norway
Maciej Lewenstein, Barcelona, Spain
Satya N. Majumdar, Orsay, France
Luciano Rezzolla, Frankfurt am Main, Germany
Angel Rubio, Hamburg, Germany
Wolfgang Schleich, Ulm, Germany
Stefan Theisen, Potsdam, Germany
James D. Wells, Ann Arbor, MI, USA
Gary P. Zank, Huntsville, AL, USA

The series Lecture Notes in Physics (LNP), founded in 1969, reports new developments in physics research and teaching - quickly and informally, but with a high quality and the explicit aim to summarize and communicate current knowledge in an accessible way. Books published in this series are conceived as bridging material between advanced graduate textbooks and the forefront of research and to serve three purposes:
• to be a compact and modern up-to-date source of reference on a well-defined topic;
• to serve as an accessible introduction to the field to postgraduate students and non-specialist researchers from related areas;
• to be a source of advanced teaching material for specialized seminars, courses and schools.
Both monographs and multi-author volumes will be considered for publication. Edited volumes should however consist of a very limited number of contributions only. Proceedings will not be considered for LNP. Volumes published in LNP are disseminated both in print and in electronic formats, the electronic archive being available at springerlink.com. The series content is indexed, abstracted and referenced by many abstracting and information services, bibliographic networks, subscription agencies, library networks, and consortia.
Proposals should be sent to a member of the Editorial Board, or directly to the responsible editor at Springer:
Dr. Lisa Scalone
[email protected]


Pankaj Sharan
Department of Physics
Jamia Millia Islamia University (emeritus)
New Delhi, India

ISSN 0075-8450   ISSN 1616-6361 (electronic)
Lecture Notes in Physics
ISBN 978-3-031-35961-3   ISBN 978-3-031-35962-0 (eBook)
https://doi.org/10.1007/978-3-031-35962-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020, 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Foreword to the First Edition

Quantum Theory is about 120 years old, and Quantum Mechanics proper, a truly spectacular achievement of twentieth century physics, will soon be a century old. As Abraham Pais once said, quantum theory is 'a uniquely twentieth century mode of thought'. Quantum mechanics has profound geometric and algebraic features, with probability ideas too playing a crucial role. It has a richness about it, and is still growing, as seen through the developments of weak measurement, the Zeno effect, the geometric phase idea, entanglement and quantum information and computation in recent decades. It is a core subject in the teaching of physics at upper undergraduate, graduate and research levels. Over a long period of time, a very large number of texts have been written, several of them becoming classics of the physics literature. Every teacher of quantum mechanics necessarily has to make some selection of relatively advanced topics to cover, after the basic or irreducible core has been presented. Of course the successful applications are legion.

Pankaj Sharan is a truly gifted and experienced teacher of the subject having taught it for many decades with passion and enthusiasm. In this book he has put together a set of concise treatments of special topics that are usually not found in most texts. These include the role of particle position as seen in going from the relativistic to the Galilean domain; a picture of quantum mechanics in the mathematical language of fibre bundles; the basic concepts and calculational methods in scattering theory such as the Moller and S matrices, and their integral equations; the formulation of the subject in the classical phase space language based on the pioneering ideas of Weyl, Wigner and later Moyal; the motivations for and possibilities of a nonlinear extension of quantum mechanics; the connection between the classical idea of interparticle interaction potential and the typically quantum idea of particle exchange; and the proof of the Wigner Theorem on representation of symmetries in the form given by Bargmann. The author's pedagogical skills built up over many years are evident in every one of these treatments.


This book provides enrichment material for both students and teachers: as possible projects for individual students to work on and present as special lectures in a regular course, and for teachers to possibly include some interesting ideas in their classroom treatment of the subject. Thanks and congratulations to the author for this excellent supplementary material that every good course on quantum mechanics can draw upon.

Bengaluru, India

N. Mukunda

Preface to the Second Edition

I am happy that two years after the publication of Some Unusual Topics in Quantum Mechanics there was a request from the publishers to consider expanding the book by including some more topics. The purpose of the book in this second edition remains the same: to provide discussion of topics which can supplement regular quantum mechanics courses at a graduate level.

In the list of topics added, there is an attempt to provide some minimal mathematical background that every serious student of quantum mechanics should have. It is sad that senior students, and sometimes even researchers, do not care to learn the basics of Hilbert space theory. In order to be a good citizen, it is not necessary to be a lawyer, but a minimum knowledge of law is necessary to remain on the right side of the authorities. In my opinion, mathematics plays the same role for physicists. The chapters on mathematical topics have been written with this in mind.

There is a chapter added on Fock spaces, followed by one on second quantization and another on relativistic states. These topics are usual, but their treatment here is somewhat different. Most introductions to annihilation-creation operators are written with a view to specific applications, and the commutation or anti-commutation rules are simply assumed. Here these operators are introduced in a very general setting, and the anti-linear dependence of the annihilation operator on its argument is used often. Relativistic quantum fields are constructed from annihilation and creation operators of non-orthonormal, relativistic "configuration states." It is shown, following Weinberg, that invariance and "micro-causality" are sufficient to construct free quantum fields and fix the spin-statistics connection. There is no need to start from a classical Lagrangian and then attempt to quantize it in a "canonical" fashion; that works only for the scalar field, nowhere else. Also, in this chapter on relativistic states, the Dirac equation is introduced in a new and natural way.

There are two chapters on uncertainty relations: one on the conditions of minimum uncertainty, with a very brief introduction to coherent states, and another on the time-energy "uncertainty relation," which has many versions. We discuss only three approaches to the time-energy relation and encourage readers to consult the thorough review of how time is treated in quantum mechanics, available as two volumes in the Lecture Notes in Physics series.


A brief chapter on the technique of time re-parametrization discusses the Feynman path integral for non-Cartesian coordinates and how to handle it.

On a historical note, there is a chapter on the pre-history of the discovery of matrix mechanics by Heisenberg, Born, Jordan, and Dirac, building on the works of Einstein, Bohr, Slater, and Kramers. Pauli gave a derivation of the spectrum of the Hydrogen atom using matrix mechanics in January 1926, before the Schrodinger equation was invented. We give Pauli's proof and introduce the group O(4) of rotational symmetry in four dimensions, which is specific to the 1/r potential.

Finally, there is a chapter on quantum non-locality. That issue was initiated by the Einstein-Podolsky-Rosen argument. We discuss how Bell's inequality makes it possible to test whether there are indeed long-distance correlations due to entanglement, even at space-like separations. This topic has become even more relevant due to the importance of, and hectic activity in, the field of quantum information, computation, and teleportation. We restrict ourselves to just the non-locality, or non-separability, of entangled quantum states. The recognition of these ideas by the 2022 Nobel Prize is another motivation to add this puzzling topic to quantum mechanics courses. These last two chapters, on the foundations of quantum mechanics at its very beginning and after a hundred years, tell the story of a thriving subject.

All chapters of the book have a "Notes" and a "References" section containing some points or remarks and some directly relevant references. These references are only indicative and cannot be exhaustive or complete in a book of this scope and level. I am aware of, and I sincerely apologize for, the many omissions. I am certain that readers will search further, starting from the references given in the book.

Gurugram, India
April 28, 2023

Pankaj Sharan

Preface to the First Edition

These topics are unusual only in the sense that they are not on the syllabi of most courses on Quantum Mechanics, and they are usually not treated in textbooks. The book is based on notes for classroom lectures and group seminars on these topics given during four decades of my teaching career.

Three of the seven chapters are on the conceptual difficulties students face: position operators (Chap. 1), plane waves versus real beams of particles (Chap. 3), and the concept of potential in quantum mechanics (Chap. 6). The other three chapters are on different ways of looking at quantum theory: the fibre bundle approach (Chap. 2) and the intimate relation of quantum mechanics to classical Hamiltonian mechanics. In one case (Chap. 4), one learns how to do quantum mechanics on the phase space, and in the other (Chap. 5), where quantum theory itself is a linear Hamiltonian mechanics, the question is posed whether it is possible to conceive of a non-linear generalization of the theory. The last chapter is a proof of the Wigner theorem on symmetry transformations. The theorem is used everywhere, but its proof is omitted in most textbooks. I think Bargmann's proof of the theorem presented here is aesthetically appealing and all students must be encouraged to go through it once.

I hope that this short book will be helpful in motivating students of quantum mechanics to explore and acquire a deeper understanding of the subject.

New Delhi, India

Pankaj Sharan


Acknowledgement

I am thankful to Q. N. Usmani, Tabish Qureshi and A. K. Kapoor for discussions on various aspects of quantum mechanics. I am grateful to Vasundhra Choudhry for collaboration on a part of Chap. 5, Pravabati Chingangbam for collaboration on matter included in Chap. 2, Tabish Qureshi for comments and suggestions on the contents of Chap. 17, and to A. K. Kapoor for reading and correcting parts of Chap. 15.

I am grateful to H. S. Mani and A. K. Kapoor, who have helped me in many ways, practically all the time. H. S. Mani sent me the draft version of his forthcoming book on quantum optics and quantum mechanics; I have found it useful in writing a section of Chap. 17.

I feel privileged that N. Mukunda, from whom I have learned much through his lectures and papers, wrote the Foreword to the first edition of the book, which is reproduced here.

I thank B. Ananthanarayan, A. K. Kapoor, Tabish Qureshi, Anu Venugopalan, and Medha Sharma for making available reference material needed for the preparation of the book. I thank Lisa Scalone and B. Ananthanarayan from the editorial board of Springer Nature for prompt and helpful advice at all stages of preparation of the first and second editions of this book.

Finally, it is a pleasure to thank all my colleagues at Jamia Millia Islamia, New Delhi, particularly Lekha Nair, M. Sami, Somasri Sen, Asad Niazi, Zubaida, and Shafeeque Ansari, for having created an atmosphere of cheerful devotion to teaching and academic work.

Gurugram, 11th April 2023

Pankaj Sharan


Notation

We use both φ, ψ, f, g, etc., and |φ⟩, |ψ⟩, |f⟩, |g⟩, etc., for vectors in a Hilbert space H. The inner product, denoted by (φ, ψ) or ⟨φ|ψ⟩, is linear in the second argument and anti-linear in the first. The norm ‖φ‖ is the positive square root √(φ, φ). We also write the multiplication of a Hilbert space vector |φ⟩ or φ by a complex number c from left or from right as convenient: c|φ⟩ = |φ⟩c or cφ = φc, etc.

An operator in a Hilbert space is written with a "hat", for example x̂, when it has to be distinguished from the corresponding classical quantity x. The hat is omitted when confusion is unlikely. We use the words "self-adjoint" and "Hermitian" interchangeably. A dagger (†) denotes a Hermitian adjoint, and a star (*) the complex conjugate.

The Minkowski metric in our notation is

$$\eta_{\mu\nu} = \begin{pmatrix} -1 & & & \\ & 1 & & \\ & & 1 & \\ & & & 1 \end{pmatrix}, \qquad \mu, \nu = 0, 1, 2, 3.$$

Three-dimensional vectors are written in bold face, p = (p¹, p², p³), or pⁱ = pᵢ, i = 1, 2, 3. All the components of the momentum 4-vector p = (p⁰, p¹, p², p³) of a particle of proper mass m have the physical dimensions of momentum. The 0-component is also denoted by ω at some places, for example ωₚ = p⁰ = √(p² + m²c²). The energy of the particle is denoted by Eₚ = c ωₚ. Similarly, all the components of the spacetime 4-vector have dimensions of distance, x = (x⁰ = ct, x¹, x², x³). The identity operator or identity matrix is generically denoted by 1. Derivatives ∂f/∂xⁱ are often abbreviated by ∂ᵢf or f,ᵢ when there is no confusion.
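The following minimal numerical check of these conventions is added here for illustration only; it is not from the book, and it assumes Python with NumPy and arbitrary values for the mass and momentum.

```python
import numpy as np

# Illustrative values (not from the book); the book keeps c explicit.
c = 1.0
m = 2.0                                  # proper mass
p3 = np.array([3.0, 0.0, 4.0])           # spatial momentum p = (p^1, p^2, p^3)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, mu, nu = 0, 1, 2, 3

omega_p = np.sqrt(p3 @ p3 + (m * c)**2)  # 0-component: omega_p = p^0 = sqrt(|p|^2 + m^2 c^2)
E_p = c * omega_p                        # energy of the particle, E_p = c * omega_p
p4 = np.array([omega_p, *p3])            # momentum 4-vector (p^0, p^1, p^2, p^3)

# Invariant length: eta_{mu nu} p^mu p^nu = -(p^0)^2 + |p|^2 = -(m c)^2
invariant = p4 @ eta @ p4
print(E_p, invariant, -(m * c)**2)       # the last two numbers agree
```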



1  Position Operators of Non-relativistic Quantum Mechanics

Abstract

The components of the position operator of a particle in non-relativistic quantum mechanics are the non-relativistic limits of generators of the pure Lorentz transformations, or the “boost operators.” The non-relativistic limit of the relativistic Poincare group is the Galilean group, which reveals interesting complications.

1.1  Introduction

The position operators $\hat x_i$, $i = 1, 2, 3$ of a particle in non-relativistic quantum mechanics are perhaps the most important observables because of their physical meaning. But any non-relativistic theory can only be the limit of a relativistic theory. And a position operator in a relativistic theory is a problem because, in a truly relativistic theory, it must be the space part of a four-vector, whose time component must be the physical time. But time is not an operator either in relativistic quantum mechanics or in relativistic quantum field theory. Moreover, the single-particle picture itself is under threat in relativistic quantum theory. Similarly, there are problems associated with defining a center of mass for two or more relativistic particles. These conceptual problems were thoroughly discussed during the 1960s and 1970s. A discussion of the so-called localization problem can be found in [1]. A recent account with older references is [2]. Actually, the simplest view is to regard the position operators, up to a constant factor, as limits of the three boost operators of relativistic quantum mechanics. This fact somewhat surprises many practitioners of quantum theory because the boost operators do not commute among themselves!


Non-relativistic quantum mechanics relies heavily on the commutation relations

$$[\hat x_i, \hat p_j] = i\hbar\,\delta_{ij}, \qquad (1.1)$$

which allow a kind of reciprocity between the $x$ and $p$ representations. In the $x$-representation

$$\hat p_i = -i\hbar\,\frac{\partial}{\partial x^i} \qquad (1.2)$$

act as generators of space translations. Conversely, in the momentum space representation

$$\hat x_i = i\hbar\,\frac{\partial}{\partial p^i} \qquad (1.3)$$

act as generators of translations in the momentum space. But while homogeneity of space is a symmetry for a free particle, there is no such symmetry as "homogeneity of momentum space." The symmetry transformation which does change the momentum is the change from one Galilean inertial frame of reference to another moving with a constant velocity with respect to it. In a relativistic theory, these changes of inertial frames become "pure" Lorentz transformations or "boosts." It is these that we turn to in the next section.
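To make the reciprocity in (1.1)-(1.3) concrete, here is a small symbolic check (a sketch in Python's sympy added for illustration, not part of the original text): it verifies that $\exp(a\,\partial/\partial x)$, i.e. the exponential of $i a\hat p/\hbar$ with $\hat p = -i\hbar\,\partial/\partial x$, shifts a test function $x \to x + a$, and that $i\hbar\,\partial/\partial p$ likewise generates translations in momentum space.

```python
# A minimal symbolic check (sketch, not from the book): with p_hat = -i*hbar d/dx,
# exp(i a p_hat / hbar) = exp(a d/dx) shifts the argument x -> x + a, and with
# x_hat = +i*hbar d/dp, exp(-i b x_hat / hbar) = exp(b d/dp) shifts p -> p + b.
import sympy as sp

x, p, a, b = sp.symbols('x p a b', real=True)

def shift_operator(expr, var, s, nmax=10):
    # exp(s * d/dvar) acting on expr, as a Taylor series (exact for polynomials)
    return sp.expand(sum(s**n / sp.factorial(n) * sp.diff(expr, var, n)
                         for n in range(nmax)))

psi = x**4 - 3*x**2 + x          # polynomial test function in x
f   = p**3 + 2*p                 # polynomial test function in p

print(sp.simplify(shift_operator(psi, x, a) - psi.subs(x, x + a)))  # -> 0
print(sp.simplify(shift_operator(f,  p, b) - f.subs(p, p + b)))     # -> 0
```

The same expansion applied to a smooth wave packet gives the usual statement that $\hat p$ generates translations in space while $\hat x$ generates translations in momentum space.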

1.2  Spinless Relativistic Particle

Poincare transformations include the Lorentz transformations (rotations and boosts) as well as pure translations in four-dimensional spacetime. Denote such a transformation by $(a, \Lambda)$ acting on the spacetime points $x = (x^0, x^1, x^2, x^3)$ as follows:

$$(a, \Lambda)x = \Lambda x + a,$$

where $a$ is the four-vector of translation and $\Lambda$ is a Lorentz matrix, that is, a $4 \times 4$ real matrix such that

$$x \cdot x = \sum_{\mu,\nu} \eta_{\mu\nu} x^\mu x^\nu = -(x^0)^2 + (x^1)^2 + (x^2)^2 + (x^3)^2$$

remains unchanged. If $y = \Lambda x$, then

$$x \cdot x = \sum_{\mu,\nu} \eta_{\mu\nu} x^\mu x^\nu = \sum_{\mu,\nu} \eta_{\mu\nu} y^\mu y^\nu = y \cdot y,$$

and so the Lorentz matrix $\Lambda$ satisfies

$$\Lambda^T \eta\, \Lambda = \eta.$$

Matrices such as $\Lambda$ form a group, $\mathcal L$, called the Lorentz group. It follows from this condition that $(\det \Lambda)^2 = 1$, and by writing out the 00 element of this equation, we see that $(\Lambda^0{}_0)^2 \geq 1$. The Lorentz group has a subgroup $\mathcal L_+^\uparrow$ of those matrices for which $\det \Lambda = +1$ and $\Lambda^0{}_0 \geq +1$. These are the matrices which are connected to the identity matrix in a continuous fashion. This group has six parameters corresponding to rotations in three-dimensional space and "boosts", which are Lorentz transformations corresponding to frames moving with constant relative velocity but with their spatial axes remaining parallel. Every group element is a product of boosts and rotations. A general element of $\mathcal L$ can be obtained by multiplying an element of $\mathcal L_+^\uparrow$ by the matrix $\eta$ (also called time inversion) or $-\eta$ (called space inversion or parity), or both. In this chapter we restrict ourselves to $\mathcal L_+^\uparrow$. The so-called discrete symmetries represented by parity and time reversal are treated briefly in Chap. 7 of this book.

The Poincare group is obtained by adjoining spacetime translations to Lorentz transformations as shown above. The group multiplication law is

$$(a_2, \Lambda_2)(a_1, \Lambda_1) = (a_2 + \Lambda_2 a_1,\ \Lambda_2 \Lambda_1),$$

the identity element is $(0, 1)$ and the inverse is

$$(a, \Lambda)^{-1} = (-\Lambda^{-1} a,\ \Lambda^{-1}).$$

When we are dealing with only translations $(a, 1)$, or only Lorentz transformations $(0, \Lambda)$, it is convenient to write just $(a)$ or $(\Lambda)$.

A relativistic particle of proper mass $m$ with zero spin is described by momentum space wave functions $\psi(p)$ where $p = (p^0, p^1, p^2, p^3)$ but $p^0 = \omega_p = \sqrt{\mathbf p^2 + m^2 c^2}$. For convenience, we write $\psi(p)$, with the four-vector $p$ as argument, it being understood that $\psi$ is actually a function only of $\mathbf p = (p^1, p^2, p^3)$ and $p^0 = \omega_p$ in what follows. We can construct a Hilbert space $\mathcal H$ by restricting functions $\psi$ to be square integrable with respect to the Lorentz invariant volume element:

$$\int \frac{d^3p}{2\omega_p}\, |\psi(p)|^2 < \infty.$$

The inner product in $\mathcal H$ is defined by

$$(\psi, \phi) = \int \frac{d^3p}{2\omega_p}\, \psi^*(p)\,\phi(p). \qquad (1.4)$$

It is convenient to use the Dirac bracket notation: $\psi(p) = \langle p|\psi\rangle$ where $|p\rangle$ are eigenvectors of momentum such that

$$\langle p|k\rangle = 2\omega_p\, \delta^3(\mathbf p - \mathbf k).$$

The action of the Poincare group representation is defined on this basis as

$$U(a)|p\rangle = \exp(-ip \cdot a/\hbar)\,|p\rangle = \exp[i(p^0 a^0 - \mathbf p \cdot \mathbf a)/\hbar]\,|p\rangle, \qquad (1.5)$$
$$U(\Lambda)|p\rangle = |\Lambda p\rangle, \qquad (1.6)$$

so that

$$U(a, \Lambda)|p\rangle = U(a)U(\Lambda)|p\rangle = \exp(-i\Lambda p \cdot a/\hbar)\,|\Lambda p\rangle. \qquad (1.7)$$

On momentum space wave functions $\psi(p)$, it acts as follows:

$$(U(a, \Lambda)\psi)(p) = \exp(-ip \cdot a/\hbar)\,\psi(\Lambda^{-1} p). \qquad (1.8)$$

Exercise 1.1 Verify (1.8) and check that the operators $U(a, \Lambda)$ are unitary and that they provide a representation of the Poincare group:

$$U(a_1, \Lambda_1)\,U(a_2, \Lambda_2) = U(a_1 + \Lambda_1 a_2,\ \Lambda_1 \Lambda_2). \qquad (1.9)$$

1.3  Lie Algebra of the Poincare Group

An infinitesimal boost in the 1-direction by a small $\alpha$, so that $\sinh\alpha \approx \alpha = v/c$, is

$$\Lambda = \begin{pmatrix} \cosh\alpha & \sinh\alpha & 0 & 0 \\ \sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = 1 + \alpha \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} + \cdots,$$

$$\Lambda^{-1} p = (p^0 - \alpha p^1,\ p^1 - \alpha p^0,\ p^2,\ p^3) + \cdots.$$

Therefore,

$$(U(\Lambda)\psi)(p) = \psi(\Lambda^{-1}p) = [(1 + i\alpha K_1/\hbar + \cdots)\psi](p),$$
$$(K_1\psi)(p) = i\hbar\, p^0\, \frac{\partial\psi}{\partial p^1}, \qquad (1.10)$$

which identifies the generator $K_1$. The two other generators for boosts can be similarly constructed:

$$(K_2\psi)(p) = i\hbar\, p^0\, \frac{\partial\psi}{\partial p^2}, \qquad (K_3\psi)(p) = i\hbar\, p^0\, \frac{\partial\psi}{\partial p^3}. \qquad (1.11)$$

For an infinitesimal rotation about the 1-axis

$$\Lambda = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\theta & \sin\theta \\ 0 & 0 & -\sin\theta & \cos\theta \end{pmatrix} = 1 + \theta \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix} + \cdots,$$

$$\Lambda^{-1} p = (p^0,\ p^1,\ p^2 - \theta p^3,\ p^3 + \theta p^2) + \cdots,$$

the generator can be identified:

$$(U(\Lambda)\psi)(p) = \psi(\Lambda^{-1}p) = [(1 + i\theta J_1/\hbar + \cdots)\psi](p),$$
$$(J_1\psi)(p) = -i\hbar\left(p^2\, \frac{\partial\psi}{\partial p^3} - p^3\, \frac{\partial\psi}{\partial p^2}\right). \qquad (1.12)$$

The other two rotation generators are

$$(J_2\psi)(p) = -i\hbar\left(p^3\, \frac{\partial\psi}{\partial p^1} - p^1\, \frac{\partial\psi}{\partial p^3}\right), \qquad (1.13)$$
$$(J_3\psi)(p) = -i\hbar\left(p^1\, \frac{\partial\psi}{\partial p^2} - p^2\, \frac{\partial\psi}{\partial p^1}\right). \qquad (1.14)$$

For spacetime translations $a = (a^0, a^1, a^2, a^3)$,

$$U(a) = 1 + i(P^0 a^0 - P^i a^i)/\hbar + \cdots,$$

the generators are simply

$$(P^\mu\psi)(p) = p^\mu\, \psi(p). \qquad (1.15)$$

Note that the generators $J_i, K_i$ have physical dimensions of angular momentum (or action) and $P^\mu$ that of momentum. Note also the relativistic invariant definition of $P^\mu$. The physical, measured energy is $cP^0$ and not $cP_0 = -cP^0$.

The Lie algebra, or commutation relations, of the generators of the group is the following. As the commutator is antisymmetric, of the 10 generators $P^\mu, J_i, K_i$ there are $10 \times 9/2 = 45$ such commutation relations. We write below a list of commutation relations of each type, the other such relations being obtained by renaming of indices. The total number of relations of a given type is written in a square bracket after one typical relation:

$$[P^\mu, P^\nu] = 0, \qquad [6], \qquad (1.16)$$
$$[J_1, J_2] = i\hbar J_3, \qquad [3], \qquad (1.17)$$
$$[J_i, P^0] = 0, \qquad [3], \qquad (1.18)$$
$$[J_1, P^2] = i\hbar P^3, \qquad [9], \qquad (1.19)$$
$$[J_1, K_2] = i\hbar K_3, \qquad [9], \qquad (1.20)$$
$$[K_1, K_2] = -i\hbar J_3, \qquad [3], \qquad (1.21)$$
$$[K_i, P^0] = i\hbar P^i, \qquad [3], \qquad (1.22)$$
$$[K_i, P^j] = i\hbar\, \delta^j_i\, P^0. \qquad [9], \qquad (1.23)$$

Exercise 1.2 Prove that the generators $J_i$ and $K_i$ are self-adjoint with respect to the inner product defined by (1.4). [The terms produced by differentiating $\omega_p$ in checking for the Hermitian nature of $K_i$ cancel.]
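The commutators above can be checked directly in the momentum-space realization (1.10)-(1.15). The following sympy sketch (an illustration added here, not part of the original text) applies the generators, with $p^0 = \omega_p = \sqrt{\mathbf p^2 + m^2c^2}$, to an arbitrary test function and verifies one representative relation of each type, including the characteristic boost-boost commutator $[K_1, K_2] = -i\hbar J_3$.

```python
# Symbolic check of representative Poincare commutators in the realization
# K_i = i*hbar*p0*d/dp_i,  J_i = -i*hbar*eps_ijk p_j d/dp_k,  P^mu = multiplication,
# with p0 = sqrt(p1^2 + p2^2 + p3^2 + m^2 c^2) treated as a function of the p_i.
import sympy as sp

p1, p2, p3, m, c, hbar = sp.symbols('p1 p2 p3 m c hbar', positive=True)
p = (p1, p2, p3)
p0 = sp.sqrt(p1**2 + p2**2 + p3**2 + m**2*c**2)

psi = sp.Function('psi')(p1, p2, p3)      # arbitrary test wave function

def K(i):
    return lambda f: sp.I*hbar*p0*sp.diff(f, p[i-1])

def J(i):
    j, k = {1: (2, 3), 2: (3, 1), 3: (1, 2)}[i]
    return lambda f: -sp.I*hbar*(p[j-1]*sp.diff(f, p[k-1]) - p[k-1]*sp.diff(f, p[j-1]))

def P(mu):                                 # mu = 0,1,2,3 ; multiplication operators
    return lambda f: (p0 if mu == 0 else p[mu-1])*f

def comm(A, B, f):
    return sp.simplify(A(B(f)) - B(A(f)))

# [J1, J2] = i*hbar*J3
print(sp.simplify(comm(J(1), J(2), psi) - sp.I*hbar*J(3)(psi)))          # 0
# [K1, K2] = -i*hbar*J3   (the boosts do not commute!)
print(sp.simplify(comm(K(1), K(2), psi) + sp.I*hbar*J(3)(psi)))          # 0
# [K1, P^0] = i*hbar*P^1  and  [K1, P^1] = i*hbar*P^0
print(sp.simplify(comm(K(1), P(0), psi) - sp.I*hbar*P(1)(psi)))          # 0
print(sp.simplify(comm(K(1), P(1), psi) - sp.I*hbar*P(0)(psi)))          # 0
# [J1, K2] = i*hbar*K3
print(sp.simplify(comm(J(1), K(2), psi) - sp.I*hbar*K(3)(psi)))          # 0
```

Running the script prints zeros, confirming in particular that the boosts close on a rotation rather than commuting; this is the seed of relation (1.25) below.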

1.4  Position Operators of NRQM

We have written "NRQM" for "non-relativistic quantum mechanics." Define

$$X_i = \frac{1}{mc}\, K_i, \qquad (1.24)$$

then these "position operators" have the following relations with each other and the momenta $P^i$:

$$[X_1, X_2] = -i\,\frac{\hbar J_3}{m^2c^2}, \quad \text{and two more}, \qquad (1.25)$$
$$[X_i, P^j] = i\hbar\,\delta^j_i\, \frac{P^0}{mc}, \qquad (1.26)$$
$$[X_i, P^0] = \frac{i\hbar}{mc}\, P^i, \qquad (1.27)$$
$$[P^i, P^j] = 0. \qquad (1.28)$$

For wave packets of average angular momentum of the order of a few $\hbar$'s, the position operators for different directions fail to commute up to a square of the Compton wavelength ($(\hbar/mc)^2 \sim 10^{-24}\,\mathrm m^2$ for the electron mass). Similarly, the commutator of position and momentum is the standard one only for wave packets of momenta well below $mc$, so that $P^0/mc \sim 1$. The commutator with energy is the standard non-relativistic one for $P^0 \sim mc + P^iP^i/2mc$.

Under space translations, the position operators shift as expected if the average energy is mostly the rest energy ($P^0 \sim mc$):

$$\exp(-iP^i a^i/\hbar)\, X_j\, \exp(iP^k a^k/\hbar) = X_j - a^j\, \frac{P^0}{mc}.$$

The non-commuting of different $X_j$'s has a physical interpretation. If we were to write an uncertainty relation for them in a state $\psi$, then it will look like

$$(\Delta_\psi X_1)(\Delta_\psi X_2) \ \geq\ \frac{\hbar\, \langle J_3\rangle_\psi}{2m^2c^2}.$$

Therefore, if we try to make a wave packet too narrow in the 1-direction, then we cannot also squeeze it in the 2-direction too much for a given average value of the orbital angular momentum in the 3-direction. For particles with spin, it is even more complicated.

1.5  Projective Representations

There is a deep connection between continuous symmetries and unitary operators in Hilbert space. They are connected by Wigner's theorem. Although we give a proof of the theorem in Chap. 7 of this book, it is well to recall the statement here.

The state of a physical system in quantum mechanics is described by a unit vector $\phi$ in a Hilbert space $\mathcal H$. But any other vector which is a multiple of the given vector by a complex number of modulus unity is equally good to describe the same state. This is so because all physically measurable predictions of quantum mechanics depend on the expectation values of observables $\langle A\rangle = (\phi, A\phi)$ or, equivalently, on transition probabilities $|(\phi, \psi)|^2$ between two states. These quantities have the same value if $\phi$ and $\psi$ were to be replaced by $\exp(i\alpha)\phi$ and $\exp(i\beta)\psi$, respectively, where $\alpha$ and $\beta$ are any real numbers. A number of the type $\exp(i\alpha)$ is called a "phase factor," or simply a "factor" when the context is understood. The set $\{\phi\}$ of all such vectors, that is, all multiples by a phase factor of some fixed unit vector $\phi$, is called a ray. A ray represents a physical state of the system in the sense that any member of the ray is equally qualified to represent the state. We say that $\{\phi\}$ is the ray determined by a unit vector $\phi$. Of course, we can choose any other vector from the same ray and construct the ray by including all its multiples by phase factors.

A symmetry transformation $S$ is a one-to-one invertible ray mapping such that the transition probability $|(\phi, \psi)|^2$ is the same as $|(\phi', \psi')|^2$, where $\phi'$ and $\psi'$ are any members of the mapped rays $S\{\phi\}$ and $S\{\psi\}$, respectively.

The symmetries obviously form a group, because if two mappings preserve transition probabilities separately, then their composition also does. The identity mapping and the inverse mapping are also symmetries. The ray mappings are very inconvenient to deal with as rays do not even form a linear space. Therefore it is very useful to have a result like Wigner's theorem. It says that for the ray mapping corresponding to a symmetry, there is an underlying vector mapping, or operator, which is compatible with the ray mapping. This means that under the action of the operator, vectors in a ray are mapped to vectors in the mapped ray. This vector mapping, or operator, can be either unitary or anti-unitary and is determined uniquely except for an unknown phase factor.

Wigner's theorem applies to one symmetry. When we have a continuous group of symmetries, like the rotation group, or the Poincare group, then we are dealing with an infinite number of them, and for each symmetry transformation, there is a unitary operator with an unknown phase factor. For a continuous group of symmetries, there can only be unitary operators, no anti-unitary operators. This is so because the product of two unitary or two anti-unitary operators is always unitary, and every continuous symmetry transformation can always be written as a product of two similar symmetry transformations. For example, a rotation by a certain angle about an axis is the same as a product of two rotations by half the angle about the same axis.

For implementing symmetry in quantum mechanics, we not only need the operator for each symmetry, but these operators should form a representation of the group of symmetries. Let the symmetry group be $\mathcal G$ and its elements be denoted by letters $r, s, t$, etc. If two symmetry group elements $r$ and $s$ correspond to $U_r$ and $U_s$, respectively, then the symmetry $rs \in \mathcal G$ corresponds to $U_rU_s$. But as each unitary operator is known only up to a factor, we can write:

$$U_r U_s = \omega(r, s)\, U_{rs}, \qquad |\omega(r, s)| = 1. \qquad (1.29)$$

The question arises: can we not fix these factors or "unknown phases" of each unitary operator in such a way as to avoid the ambiguity altogether? We can start with the identity $e$ of the group, and since $ee = e$, fix the phase of $U_e$ such that $\omega(e, e) = 1$. Moreover, if there are three group elements, then using the associative law

$$U_r U_s U_t = U_r(\omega(s, t)U_{st}) = \omega(s, t)\,\omega(r, st)\,U_{rst} = \omega(r, s)\,U_{rs}U_t = \omega(r, s)\,\omega(rs, t)\,U_{rst},$$

we obtain:

$$\omega(r, st)\,\omega(s, t) = \omega(r, s)\,\omega(rs, t). \qquad (1.30)$$

If $f(r)$ are any phase factors defined on $\mathcal G$, with $f(e) = 1$, and if we change the unitary operators $U_r$ to $U_r' = f(r)U_r$, the above relation will determine another set of $\omega$'s:

$$\omega'(r, s) = \omega(r, s)\,\frac{f(r)f(s)}{f(rs)}.$$

These $\omega'(r, s)$ also satisfy the consistency condition (1.30). Two sets of such factors $\omega(r, s)$ and $\omega'(r, s)$ are called equivalent. A representation in which the operators carry phase factors which cannot be chosen to make $\omega(r, s) = 1$ for all group elements is called a projective representation or a ray representation.

Exercise 1.3 Verify that the condition (1.30) is satisfied by $\omega'(r, s)$.

A unitary representation where we can choose all the factors $\omega(r, s)$ equal to 1 is called a true representation. It is a matter of satisfaction that for the Poincare group, the ambiguity in the unknown phases in $U(a, \Lambda)$ can be brought down to $\pm 1$ using continuity arguments, and can be further eliminated completely if instead of $\mathcal L_+^\uparrow$ we use its covering group $SL(2, \mathbb C)$, which is simply connected. This is just as well, because spin half-odd-integer particles are described by $SL(2, \mathbb C)$ representations and not by those of $\mathcal L_+^\uparrow$. The representation we have defined for a relativistic spin-zero particle above is a true unitary representation, without involving any unknown phases. But its non-relativistic limit, a representation of the Galilean group, turns out to be a projective representation!
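As a concrete illustration of (1.29)-(1.30) (an added sketch, not part of the original text), the script below takes the abelian group of one-dimensional translations and boosts, elements $g = (a, v)$ with $g_1g_2 = (a_1 + a_2,\ v_1 + v_2)$, and the phase system $\omega(g_1, g_2) = \exp[i\lambda(a_1v_2 - v_1a_2)]$, which is of the type that appears for the Galilean group. It checks numerically that $\omega$ satisfies the consistency condition (1.30), and that the twisted factors $\omega'(r,s) = \omega(r,s)f(r)f(s)/f(rs)$ of Exercise 1.3 satisfy it as well. The particular value of $\lambda$ and the choice of $f$ are arbitrary.

```python
# Numerical check of the 2-cocycle condition (1.30) for a phase system on the
# abelian group of 1-D translations a and boosts v: g = (a, v), g1*g2 = (a1+a2, v1+v2).
# omega(g1,g2) = exp(i*lam*(a1*v2 - v1*a2)) is a genuinely projective, Galilean-type factor.
import cmath
import random

lam = 0.7                      # arbitrary constant (plays the role of m/(2*hbar))

def mul(g1, g2):
    return (g1[0] + g2[0], g1[1] + g2[1])

def omega(g1, g2):
    return cmath.exp(1j*lam*(g1[0]*g2[1] - g1[1]*g2[0]))

def f(g):                      # an arbitrary choice of phases f(r), with f(e) = 1
    return cmath.exp(1j*(0.3*g[0]**2 - 1.1*g[0]*g[1]))

def omega_twisted(g1, g2):     # omega'(r,s) = omega(r,s) f(r) f(s) / f(rs)
    return omega(g1, g2)*f(g1)*f(g2)/f(mul(g1, g2))

random.seed(0)
for w in (omega, omega_twisted):
    worst = 0.0
    for _ in range(1000):
        r, s, t = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(3)]
        lhs = w(r, mul(s, t))*w(s, t)      # omega(r, st) omega(s, t)
        rhs = w(r, s)*w(mul(r, s), t)      # omega(r, s) omega(rs, t)
        worst = max(worst, abs(lhs - rhs))
    print(worst)               # both print values at rounding-error level
```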

1.6  Non-relativistic Limit

We discuss the non-relativistic limit of the relativistic representation. [...]

1.9  Notes

1.9.1  Time Translation and Time Evolution

A frame $S_\tau$ is translated in time by $\tau > 0$ if the coordinate $t'$ of $S_\tau$ is related to the time coordinate of frame $S$ by $t' = t - \tau$. Therefore a state $\psi$ seen by $S$ at time $t = 0$, say, will be seen in the continuously changing frames $S_\tau$ as $\psi(\tau) = U(-\tau)\psi$. This is the time evolution in the Schrödinger picture. Heisenberg's way of representing time evolution is through unchanging states but changing observables. So, if we change the frames $S_\tau$ continuously by changing $\tau$, the state $\psi$ of the system can be chosen to remain constant, but the observables $A$ evolve with $\tau$ so that their expectation values (which are experimentally measured) change in the same manner as in the Schrödinger picture:

$$(\psi, A(\tau)\psi) = (\psi(-\tau), A\,\psi(-\tau)).$$

Since this happens for every state $\psi$, we get $A(\tau) = U(\tau)AU(-\tau)$. If $U(\tau) = \exp(iH\tau/\hbar)$, then the differential form of this relation is

$$i\hbar\,\frac{dA(\tau)}{d\tau} = [A(\tau), H].$$

In the next chapter, we will generalize this idea of continuously changing frames. Irreducible unitary representations of the Poincare group are treated in the classic paper of E. P. Wigner [4].
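The relation $A(\tau) = U(\tau)AU(-\tau)$ and its differential form can be checked numerically in any finite-dimensional example. The sketch below (an added illustration; the two-level Hamiltonian and observable are arbitrary choices, not taken from the book) compares a finite-difference derivative of $A(\tau)$ with $[A(\tau), H]/i\hbar$.

```python
# Check i*hbar dA/dtau = [A(tau), H] with A(tau) = U(tau) A U(-tau), U(tau) = expm(i H tau/hbar),
# for an arbitrary 2-level system (illustrative values only).
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -0.5]], dtype=complex)    # any Hermitian matrix
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)     # observable at tau = 0

def A_tau(tau):
    U = expm(1j*H*tau/hbar)
    return U @ A @ U.conj().T      # U(-tau) = U(tau)^dagger for Hermitian H

tau, eps = 0.7, 1e-6
dA_num = (A_tau(tau + eps) - A_tau(tau - eps))/(2*eps)     # numerical derivative
dA_alg = (A_tau(tau) @ H - H @ A_tau(tau))/(1j*hbar)       # [A(tau), H]/(i*hbar)

print(np.max(np.abs(dA_num - dA_alg)))    # ~1e-9: the two sides agree
```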

1.9.2  The Newton-Wigner Position Operator

One can ask the question: for a relativistic particle, why can we not just define a three-vector $\hat X^i = i\hbar\,\partial/\partial p^i$ as position operators in one frame of reference? But these are unsuitable as observables because they are not self-adjoint. In fact, by the definition of an adjoint,

$$(\phi, \hat X_i^\dagger \psi) = (\hat X_i \phi, \psi) = -i\hbar \int \frac{d^3p}{2p^0}\, \frac{\partial \phi^*}{\partial p^i}\, \psi(p).$$

After an integration by parts, we see that

$$\hat X_i^\dagger = \hat X_i - \frac{i\hbar\, p^i}{(p^0)^2}.$$

Therefore, we can define a self-adjoint operator, called the Newton-Wigner operator, as

$$X^i_{NW} = \frac{\hat X_i + \hat X_i^\dagger}{2} = i\hbar\,\frac{\partial}{\partial p^i} - \frac{i\hbar\, p^i}{2(p^0)^2}.$$

This has the advantage that different components of the position operator commute, and can be simultaneously diagonalized. But a common eigenstate of $X^i_{NW}$ (i.e., a state "localized" at a given point), when space translated by $U(a)$, is not orthogonal to the original state. One can read more about it in the original paper [6], or Section 3c of the classic book by Schweber [7].
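The integration-by-parts step can be checked numerically in one dimension (an added sketch with arbitrary Gaussian test functions, not part of the original text): with the invariant inner product $(\phi,\psi) = \int dp\,\phi^*\psi/(2\omega_p)$, the operator $i\hbar\,d/dp$ fails to be symmetric, while the Newton-Wigner combination is symmetric up to discretization error.

```python
# 1-D numerical check: with inner product (phi, psi) = int dp phi* psi / (2*omega),
# X = i*hbar d/dp is not symmetric, but X_NW = i*hbar d/dp - i*hbar p/(2*omega^2) is.
import numpy as np

hbar, m, c = 1.0, 1.0, 1.0
p = np.linspace(-20, 20, 4001)
dp = p[1] - p[0]
omega = np.sqrt(p**2 + (m*c)**2)

def inner(f, g):
    return np.sum(np.conj(f)*g/(2*omega))*dp

def ddp(f):
    return np.gradient(f, dp)

# two arbitrary, rapidly decaying test functions
phi = np.exp(-(p - 1.0)**2)*np.exp(0.5j*p)
psi = np.exp(-(p + 0.5)**2/2.0)*np.exp(-0.3j*p)

X    = lambda f: 1j*hbar*ddp(f)
X_NW = lambda f: 1j*hbar*ddp(f) - 1j*hbar*p*f/(2*omega**2)

print(abs(inner(phi, X(psi))    - inner(X(phi), psi)))      # clearly nonzero
print(abs(inner(phi, X_NW(psi)) - inner(X_NW(phi), psi)))   # ~0 (discretization error)
```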

1.9.3  Poincare and Galilean Group Representations

We have discussed only the Poincare group representation corresponding to a particle of mass $m$ and spin zero. General irreducible unitary representations of the Poincare group are characterized or labeled by two quantities: the value of $w = P^\mu P_\mu$ and a number $s$ which can be an integer or a half-odd integer or a real number corresponding to spin. If $w$ is negative, equal to $-m^2$, we get the representation for a particle of finite mass $m$. If $w = 0$, we get zero mass representations. In this case $s$ can be an integer or half-odd integer and is called helicity. In the $w = 0$ case, $s$ can also be an arbitrary real number, but this "continuous spin" quantum number is clearly unphysical. Also unphysical are those representations for which $w > 0$, or, as we say, $P^\mu$ is space-like.

The Galilean group representation which is derived from the representation of the Poincare group for finite mass $m$ shows the dependence on $m$ in its Lie algebra commutator between boost and translation. The Galilean group, of course, requires them to commute. The true representations of the Galilean group do not seem to correspond to anything physical.

A very detailed introduction to Lie groups, Lie algebras, and particularly the Galilean group can be found in the classic book by Sudarshan and Mukunda [3]. Inonu and Wigner [8] found that the true representations of the Galilean group are physically not relevant, whereas of the physically relevant representations there is only one type, and that carries phase factors which cannot be wished away. This paper also introduces the idea of group contraction: how one group $G_1$ reduces to another group $G_2$ when a parameter on which the group elements of $G_1$ depend goes to a limiting value. In the present case of the Poincare to the Galilean group, it is the velocity of light $c \to \infty$. The Galilean Lie algebra is touched upon briefly in Section 2.4 of the book by Weinberg [9] from a physicist's point of view. A rigorous treatment of the question of true and ray representations is in Bargmann [5], which discusses the Galilean group in Section 6 of the paper.

References

1. A.J. Kalnay, The localization problem, in Problems in the Foundations of Physics, ed. by M. Bunge. Studies in the Foundations, Methodology and Philosophy of Science, vol 4 (Springer, Berlin, Heidelberg, 1971)
2. P. Aguilar, C. Chryssomalakos, H. Hernandez Coronado, E. Okon, Position Operators and Center of Mass: New Perspectives. arXiv:1306.0504v1
3. E.C.G. Sudarshan, N. Mukunda, Classical Dynamics: A Modern Perspective (World Scientific, 2016)
4. E.P. Wigner, On unitary representations of the inhomogeneous Lorentz group. Ann. Math. 40, 149 (1939). Reprinted in F.J. Dyson, Symmetry Groups in Nuclear and Particle Physics (W. A. Benjamin, New York, 1966)
5. V. Bargmann, On unitary ray representations of continuous groups. Ann. Math. 59, 1 (1954)
6. T.D. Newton, E.P. Wigner, Localized states for elementary systems. Rev. Mod. Phys. 21, 400 (1949)
7. S.S. Schweber, An Introduction to Relativistic Quantum Field Theory (Harper and Row, New York, 1961)
8. E. Inonu, E.P. Wigner, Il Nuovo Cimento IX(8), 705 (1952)
9. S. Weinberg, The Quantum Theory of Fields, Vol. I (Cambridge University Press, Cambridge, 1995)

2  A Bundle Picture of Quantum Mechanics

Abstract

The vector that represents the state of a quantum mechanical system depends, apart from the system itself, on the frame of reference in which the description is being made. There are an infinite number of frames and there is a vector in a Hilbert space for each of them. This suggests a vector bundle picture for describing the dynamics of the system. This way of looking at dynamics is especially useful where change of frames is involved.

Note: In this chapter we use Einstein’s convention of summing over repeated indices to simplify formulas.

2.1  The Bundle Picture

The concept of state of a physical system in classical or quantum mechanics always depends on the frame of reference in which the description is being made. Usually, the frame of reference is fixed and therefore not explicitly mentioned. A quantum state is described by a vector in a Hilbert space $\mathcal H$ characteristic of the physical system. There is thus a Hilbert space for each frame of reference to which the state as seen in that frame belongs. All these spaces, called fibers, are of the same type because they refer to the same physical system. But the quantum mechanical description has this special feature: a state $\psi$ and the set of observables $\{A_i\}$, taken together, describe the same physics as do the state $U\psi$ and observables $\{UA_iU^{-1}\}$, where $U$ is any unitary operator.

So we have the following "bundle" picture. There is a set $X$ of frames of reference, and for each frame $P$ in $X$, there is a fiber of a Hilbert space $\mathcal H_P$ which contains the vectors as observed by $P$. Let us assume that the frames are labelled by a number of parameters or "coordinates" $x = (x^1, \ldots, x^n)$, etc. Then we can ask


the question: how is the state .ψ ∈ HP of a physical system as seen by the observer in frame P (with coordinates x) related to the state as seen by the observer in a neighboring frame Q (with coordinates .x + x) considering .x to be small? In general, it is not possible to compare vectors in two different vector spaces .HP and .HQ because the states and observables in each frame can be changed by a unitary operator. We need two rules: one for identifying the different individual Hilbert spaces .HP , HQ , etc., and, second, a way of comparing vectors in neighboring spaces. The first of these rules is called “local trivialization” and the second a law of “parallel transport.” A trivialization allows us to use the Cartesian product .X × H for describing the vectors of all frames in a common Hilbert space, i.e., as elements .(x.ψ(x)), where x represents the frame, and .ψ(x) ∈ H, as the state seen by that frame. Two different trivializations will differ by an x-dependent unitary operator .U (x) changing states (and observables) : .(x, ψ(x)) being replaced by .(x, U (x)ψ(x)). We use the term “trivialization” here in a limited sense where coordinates of the base manifold X are not changed, but only the identification of fibers with .H changes. The “bundle picture” presented here has the same mathematical features as in general relativity or Yang-Mills gauge theories. Of course, the specific base manifold and the fiber in all these cases are different.

2.1.1

Sections

A section is a smooth function which assigns a vector in .HP for all frames .P ∈ X. If we have chosen a trivialization, then .(x, ψ(x)) ∈ X × H is a section. We say that a section is smooth if .ψ(x) changes continuously and smoothly with x. A section is the mapping .x → ψ(x) after we have taken some fixed trivialization. An orthonormal (o.n.) basis of sections .x → φn (x), n = 1, 2, . . . is such that for each x (φn (x), φm (x)) = δnm .

.

Given an o.n. basis of sections, every section .ψ(x) can be replaced by its components functions, that is, a sequence of complex functions .cn (x) = (φn (x), ψ(x)).

2.1.2

Parallel Transport

Let P and Q be two neighboring points in X with coordinates x and .x + x, respectively. We treat .x i as infinitesimally small. Given a vector in .Hx at a point P with coordinates x, we can compare it with a vector at Q (with coordinates .x + x) if we provide a rule by which these vectors

2.1 The Bundle Picture

21

can be brought together. If .ψ ∈ Hx+x at Q is brought to P without change, we  expect the transported vector .ψQ→P : 1. to behave linearly under transport: that is, if we bring a linear combination .aφ + bψ from Q to P, then the vectors brought at P will be the same linear combination   .aφ Q→P + bψQ→P , 2. to remain unchanged if .x i → 0,   3. to preserve inner product that is .(φQ→P , ψQ→P ) = (φ, ψ). To find the form of such a rule, it is convenient to use an o.n. basis of sections φn (x). From the first point above if we can define the parallel transport for a basis, then we can define for any vector, using the linearity. With this in mind, we write the rule for .φr (x + x) at Q brought to the point P with coordinates x as (we drop the suffix .Q → P in what follows):

.

φr = φr (x) + x i φs (x)isr (x),

.

(2.1)

where .isr (x) are complex numbers expressing the transported vector in the basis φr (x). To check if it preserves the inner product, we calculate (keeping to first order of smallness in .x):

.

  ∗ δrs = (φr , φs ) = δrs + x i δrt its (x) + itr (x)δts ,

.

or, ∗ irs (x) + isr (x) = 0,

.

which shows that .isr is an anti-Hermitian matrix of an operator on .H in the basis {φr }. We can define a Hermitian operator .Pˆi (x) as

.

irs (x) = i(φr (x), Pˆi (x)φs (x)),

.

(2.2)

and call it the connection operator with connection components .isr in this basis.

2.1.3

Change of Basis or of Trivialization

A change in trivialization results in a unitary operator .Uˆ (x) acting on each of the ∼ spaces .Hx . The basis of sections .φr (x) now becomes .φ r = Uˆ (x)φr (x) = φs (x)Usr where .Usr = (φs (x), Uˆ φr (x)). Under parallel transport ∼

.







φ r =φ r (x) + x i φ s (x)  isr (x),

22

2 A Bundle Picture of Quantum Mechanics ∼



where . are the new connection components. On the other hand, when .φ r (x + x) = φs (x + x)Usr (x + x) is brought to x, the coefficients .Usr (x + x), being complex numbers, are brought as such, and contribute an additional term: i .x ∂i Usr . Thus, omitting the arguments x of functions, and keeping to first order in .x, ∼

.

φ r = φs Usr + x i φs ∂i Usr + x i φt its Usr ∼









−1 = φ r +x i φ t Uts−1 ∂i Usr + x i φ u Uut its Usr ∼

−1 = φ r +x i [φ t Uts−1 ∂i Usr + φ u Uut its Usr ].

Comparison shows, written as a matrix ∼

.

 i = U −1 i U + U −1 ∂i U.

(2.3)

If the connection components are all zero, .ius = 0, then in some other trivialization, the connection components are “pure gauge”: ∼

.

 itr = Uts−1 ∂i Usr ,

with .Usr (x) a unitary matrix dependent on x.

2.1.4

Transport Round a Loop and Curvature

We have discussed the form of a vector .φr (Q) parallel transported from a point Q = {x +x} to .P = {x}. One can ask a question: if a vector is parallel transported around a closed loop and brought back to the starting point, will it agree with the initial vector, or not? We discuss first for an infinitesimally small loop. And, instead of going round the loop, we can equivalently take the vector along two different infinitesimal paths, having the same starting and ending points, and compare the transported vectors (Fig. 2.1). Let us take four points with coordinates as shown:

.

Q = {x i + 1 x i + 2 x i }, Q1 = {x i + 1 x i }, Q2 = {x i + 2 x i }, P = {x i }.

.

The vector .φr (Q) when brought to .Q1 is 

φQ→Q1 r = φr (x + 1 x) + 2 x i φs (x + 1 x)isr (x + 1 x).

.

2.1 The Bundle Picture

23

Fig. 2.1 Transport of vector along two routes .P → Q1 → Q and .P → Q1 → Q .φ

Going through another step and transporting this to P , and keeping to second order in .x’s 

φQ→Q1 →P r = φr (x) + 1 x i φs (x)isr (x)

.

+2 x i [φs (x) + 1 x j φt (x)j ts (x)][isr (x) + 1 x j ∂j isr ] = φr (x) + 1 x i φs (x)isr (x) + 2 x i φs (x)isr (x)   +2 x i 1 x j φt (x) ∂j itr + j ts (x)isr (x) . The other route for transport .Q → Q2 → P will give similarly, with the role of .1 and .2 interchanged: 

φQ→Q2 →P r = φr (x) + 2 x i φs (x)isr (x) + 1 x i φs (x)isr (x)   +1 x i 2 x j φt (x) ∂j itr + j ts (x)isr (x) .

.

The difference is therefore 



φQ→Q1 →P r − φQ→Q2 →P r = 2 x i 1 x j φt (x)Rtrij

.

where Rtrij = ∂j itr (x) − ∂i j tr (x) + j ts (x)isr (x) − its (x)j sr (x)

.

(2.4)

is called the curvature tensor. It is well to remember that indices .t, r belong to the o.n. basis in Hilbert space .H, whereas .i, j run over the variables of the base X. If the components of the curvature tensor are zero, then parallel transport of a vector can be taken along any path without dependence on the path. Exercise 2.1 Show that the change in the curvature tensor components, under a ∼

change in the basis sections by .φr (x) → φ r = φs (x)Usr , is, as a matrix, ∼

.

R j i = U −1 Rj i U.

(2.5)

24

2 A Bundle Picture of Quantum Mechanics

2.2

Covariant Derivative

2.2.1

Covariant Derivative of a Section

The parallel transport law gives us a way to define derivative of sections in any direction. Given a section .x → ψ(x) = cr (x)φr (x), we can bring .ψ(x + x) at Q to P using (2.1), and find the difference from .ψ(x): 

ψQ→P − ψ(x) = cr (x + x)[φr (x) + x i isr (x)] − cr (x)φr (x)

.

= x i φs (x) [∂i cs (x) + ist ct (x)]. Dividing by the displacement and taking the limit Di ψ(x) = φs (x)[∂i cs (x) + ist ct (x)].

.

(2.6)



If we were to change the trivialization then .φ r (x) = φs (x)Usr and express the vector .ψ in both bases ∼ ∼



ψ = cs φs = c r φ r = φs Usr c r ,

.

then, suppressing the indices, and using matrix notation, ∼

c = U c,

.



c= U −1 c.

Using (2.3) and (2.6), we write the covariant derivative in the new trivialization: ∼



.



∼ ∼

D i ψ(x) = φ [∂i c +  i c], = φ U [∂i (U −1 c) + (U −1 i U + U −1 ∂i U )U −1 c], = φ [U (∂i U −1 )c + ∂i c + i c + (∂i U )U −1 c].

The first and last terms in the last line can be added .U (∂i U −1 ) + (∂i U )U −1 = ∼ ∂i (U U −1 ) = 0. Therefore .D i ψ(x) = Di ψ(x), and the covariant derivative is independent of the trivialization chosen to calculate it. This justifies the name.

2.2.2

Covariant Derivative of an Operator

Let .Aˆ be an operator defined in .HQ , Q = {x + x} and let ˆ s (x + x)) Ars = (φr (x + x), Aφ

.

2.3 The Base X of Galilean Frames

25

be its matrix elements. To define its parallel transport to .P = {x}, we use the fact that  transport is a linear process which preserves the inner product. Therefore .AQ→P has the same matrix elements in the transported basis. Keeping to first order 





Asr = (φs(Q→P ) , AQ→P φr(Q→P ) )

.





∗ (x)(φt (x), AQ→P φr (x)) = (φs (x), AQ→P φr (x)) + x i its 

+x i itr (x)(φs (x), AQ→P φt (x)). 



We realize that .Asr (x + x) and .Asr (x) ≡ (φs (x), AQ→P φr (x)) differ by an order of magnitude of .x and become equal as .x → 0. Therefore, using the antiHermitian nature of . ∗ Asr (x) = Asr − x i its (x)Atr − x i itr (x)Ast

.

= Asr + x i (ist (x)Atr − Ast itr (x)). Thus the numbers .Asr (which are the matrix elements of .Aˆ at .x + x) define, by parallel transport, a new operator .A through its matrix elements with respect to the same basis of sections. ˆ If we had a section of operators .x → A(x), then .Asr in the equation above would be .Asr (x + x), and the difference of transported operator’s matrix elements with those of .Asr (x) will be Asr (x) − Asr (x) = Asr (x + x) − Asr (x) + x i (ist (x)Atr − Ast itr (x)).

.

This allows us to define the covariant derivative of an operator matrix elements (writing .i as a matrix): Di Asr (x) =

.

2.3

∂Asr + [i , A]sr . ∂x i

(2.7)

The Base X of Galilean Frames

So far we have focussed our attention on the fibers of the Hilbert space bundle. Now we look at the structure of the base X of frames of reference. Our starting point was that frames of reference are related to each other through symmetry operations such as translation in space and time, or rotations and boosts. These transformations are not accidental but reflect the invariance of the metric or the line element of spacetime. The infinitesimal transformations, determined by these symmetries or “isometries” of spacetime, show up as Killing vector fields. It can be shown that in an n-dimensional Riemannian space, there can be only .n(n+1)/2 such independent Killing vector fields.

26

2 A Bundle Picture of Quantum Mechanics

Starting from a standard frame, we can label all other frames by the symmetry transformation parameters which connect them to the standard frame. Thus there are as many frames as the group elements of the symmetry transformations, namely, the ten-parameter group of Galilean or Poincare transformations. We have seen in Chap. 1 that all the Galilean frames can be obtained by applying rotation, boost, and translations in space and time, to some arbitrarily chosen frame .S0 : Sx = (τ, a, v, R)S0 ,

.

x = (τ, a, v, R)

which corresponds to the transformation t  = t + τ,

.

x = Rx + vt + a . We parametrize the rotation R through the Euler angles (rotation by angle .φ about 3-axis, a rotation by .θ about 1-axis, followed by a rotation of .ψ about the 3-axis): R(ψ, θ, ψ) = R3 (ψ)R1 (θ )R3 (φ).

.

(2.8)

There are thus ten parameters in .x = (τ, a, v, R).

2.3.1

The Hypothesis of Parallel Transport

We have seen in Chap. 1 that unitary representations of the Galilean or the Poincare group reflect the equivalence of all frames of reference in the sense that physical measurements will not distinguish one frame from another. In the language of differential geometry, this means that state vectors in various frames related by the unitary operators of the group element are carried from one frame to another “without change,” or by parallel transport. Therefore, if .x → ψ(x) = cr (x)φr (x) is the section determined by the physical system’s state vector as seen in various frames, the equation for determining the change from frame at x to .x + x is 

 ∂cs .Di ψ = φs (x) + isr (x)cr (x) = 0. ∂x i

(2.9)

For a trivialization which uses a constant o.n. basis of sections, so that there is no dependence on x from the basis, we can write directly Equation (2.9) with operators ˆ i: . isr (x) = i(φs , ˆ i φr )

.

(2.10)

2.3 The Base X of Galilean Frames

27

as the Schrödinger equation i

.

2.3.2

∂ψ(x) = ˆ i (x)ψ(x). ∂x i

(2.11)

Calculation of ˆ i

The connection operators .i can be calculated by putting all the .x j , j = i zero and looking at the coefficient of .x i . The state . as seen in the frame .Sx , .x = (τ, a, v, R) is obtained by the successive unitary operators in reverse order on state . 0 in .S0 :

x = U (R −1 )U (−v)U (−a)U (−τ ) 0 .

.

All we need to do is to differentiate this with each of the parameters .x i and obtain the corresponding .ˆ i using the Schrödinger equation (2.11). The unitary operators are U (−τ ) = exp(−i Pˆ 2 τ/2mh), ¯

.

U (−a) = exp(i Pˆ · a/h), ¯ ˆ · v/h), U (−v) = exp(imX ¯ U (R −1 ) = exp(iφJ3 /h) ¯ exp(iθ J2 /h) ¯ exp(iψJ3 /h). ¯ The ten connection operators at different points .x = (τ, a, v, ψ, θ, φ) determined by the parallel transport hypothesis are: 1 (Pˆ − mRv)2 1 (R −1 Pˆ − mv)2 = ,. 2m 2m h¯ h¯ 1 = − (R −1 Pˆ − mv)i , . h¯ 1 ˆ i,. = − m(R −1 X) h¯ 1 = − (J3 cos θ + J2 sin θ cos φ + J1 sin θ sin φ), . h¯ 1 = − (J1 cos φ − J2 sin φ), . h¯ 1 = − J3 . h¯

ˆ τ (x) =

(2.12)

ˆ ai (x)

(2.13)

.

ˆ vi (x) ˆ ψ (x) ˆ θ (x) ˆ φ (x)

(2.14) (2.15) (2.16) (2.17)

28

2 A Bundle Picture of Quantum Mechanics

We have chosen a particular order of operators to label frames of reference .x = (τ, a, v, R). The same frames can be labeled in other ways too, leading to connection operators different from the ones above. But since that would amount to a change by unitary operators, the two connection operator sets will be related by the connection transformation law given in Sect. 2.3.3.

2.3.3

Galilean Bundle Curvature Is Zero

It is worthwhile to check that all the 45 components of curvature ∂ ∂ ˆ i − i ˆ j + i[ˆ j , ˆ i ], j ∂x ∂x

Rj i =

.

{x i } = (τ, a, v, φ, θ, ψ),

for these 10 connection operators are zero. It should not cause surprise that the curvature components are zero because they were obtained by the process .dU = (dU U −1 )U which is a “pure gauge” term of the transformation formula for the connection. But it is still worthwhile to remark that the space translation determined by .ˆ ai and boost .ˆ vi gives zero curvature even if they do not commute and gives rise to the non-removable phase factor for the representation as discussed in Sect. 1.7. A simpler example is the non-commutativity of the angular momentum operators. The curvature obtained is zero because in going round a loop, the connection operators in different frames are different, and exactly compensate for the changes due to non-commutativity. For the sake of simplicity, we take two cases restricted to one dimension to illustrate the point. There are only three variables .(τ, a, v) with the connection operators: 1 (Pˆ − mv)2 2m h¯ 1 ˆ a (τ, a, v) = − (P − mv) h¯ 1 ˆ v (τ, a, v) = −m X. h¯

τ (τ, a, v) =

.

In the first case, a frame at .(0, 0, 0) is changed to a moving frame .(0, 0, v), then time translated by .τ (0, 0, v), and then brought back to .(τ, 0, 0) by .v (τ, 0, v) by reverse velocity .−v. During this process the state .ψ0 in frame .(0, 0, 0) is seen to change to ˆ

ˆ

e−imv X/h¯ e−iτ (P −mv)

.

ˆ h¯ ¯ eimv X/ ψ0

2 /2mh

ˆ 2 /2mh¯

= e−iτ P

ψ0

2.4 Application to Accelerated Frames

29

Fig. 2.2 Verification that curvature in Galilean bundle is zero: in .v − τ and .v − a planes

This is the same as the direct change from .(0, 0, 0) to .(τ, 0, 0) by .τ (0, 0, 0) (Fig. 2.2). Similarly if we choose space translation in place of time translation, we get the same result. We do not get the irremovable phase factor in the representation of the Galilean group because the connection .a at .(0, 0, 0) is .−Pˆ and at .(0, 0, v) is .−(Pˆ − mv): ˆ

ˆ

ˆ

ˆ

e−imv X/h¯ eiτ (P −mv)a/h¯ eimv X/h¯ ψ0 = eiτ P a/h¯ ψ0 .

.

2.4

Application to Accelerated Frames

In the previous sections, we have developed a geometric picture. The dynamics of a system can be reduced to changing frames with respect to time translations as has already been discussed in Sect. 1.9.1. If all observers are equivalent, a physical system can be described by state and observables in any of the frames. However, dynamics requires evolution curve which connects different frames, just as, in the simplest of cases, time-translated frames determine the dynamics.

2.4.1

A Linearly Accelerated Frame

Choose a standard frame .S0 , with a state of the system represented by .| 0 and ˆ P, ˆ etc. observables .X, Let S be a frame whose origin lies a time .τ in the future of the origin of .S0 . Then the vector |

= exp(−i Pˆ 2 τ/2mh)|

¯ 0

.

represents the same physical state of the system in frame S. The average value of ˆ ≡ |X|

ˆ ˆ 0 ≡ 0 |X|

ˆ 0 by position . X

is related to . X

ˆ = X

ˆ 0+ X

.

ˆ 0 P

τ. m

30

2 A Bundle Picture of Quantum Mechanics

Similarly, if S is displaced by a distance .a with respect to .S0 , then |

= exp(i Pˆ · a/h)|

¯ 0

.

is a state with the property that ˆ = X

ˆ 0 − a. X

.

Therefore a wave packet located at some distance .x in .S0 is seen at .x = x − a in S. Moreover, ˆ · v/h)|

|

= exp(−imX ¯ 0

.

represents a wave packet with no change in location, but the average momentum changes by ˆ = P

ˆ 0 − mv. P

.

A frame .Sτ initially coincident with .S0 and moving with constant acceleration .g sees the state . 0 as 2 ˆ · v/h) |

τ = exp(−imX ¯ exp(i Pˆ · a/h) ¯ exp(−i Pˆ τ/2mh)|

¯ 0 ,

.

a = gτ /2,

(2.18)

v = gτ,

2

which determines the average values ˆ τ = X

ˆ 0+ X

.

ˆ 0 1 P

τ − gτ 2 , m 2

ˆ 0 − mgτ. ˆ τ = P

P

Differentiating the first of these equations by .τ twice, we get the Newtonian equations for the inertial force (or equivalence principle, so to speak!): .

d ˆ 1 ˆ X τ = P

τ. dτ m

d2 ˆ X τ = −g. dτ 2

(2.19) (2.20)

2.4 Application to Accelerated Frames

31

This is what we expect from Ehrenfest’s theorem. It is instructive to find the Hamiltonian in the accelerating frame .Sτ . By a simple calculation i h¯

.

  d ˆ − (Pˆ + mgτ ) · gτ + 1 (Pˆ + mgτ )2 |

τ |

τ = mg X dτ 2m   1 2 2 1 ˆ2 ˆ = P + mg · X + mg τ |

τ , 2m 2

(2.21)

which shows the presence of the “gravitational” potential as well as the extra cnumber term which generates the phase factor:  .



exp −i(m/2h) ¯

τ

 2 2

g τ dτ 0

= exp(−img2 τ 3 /6h), ¯

identified by Eliezer and Leach [1] for the Schrödinger equation of a linearly accelerated particle.

2.4.2  Rotating Frame: Centrifugal and Coriolis Forces

Let the frame S be chosen with the same origin as .S0 rotating about the 3-axis with constant angular velocity .ω. The rotation R as function of the parameter .τ is x  = x 1 cos ωτ + x 2 sin ωτ, 1

.

x  = x 2 cos ωτ − x 1 sin ωτ. 2

For the free particle, the frame S = (τ )(R)S0

.

which requires the state . 0 seen as

(τ ) = U (R)−1 U (−τ ) 0 ,

.

where U (R)−1 = exp(iωτ Jˆ3 /h), ¯

.

Jˆ3 = Xˆ 1 Pˆ2 − Xˆ 2 Pˆ1 ,

U (−τ ) = exp(−i Pˆ 2 τ/2mh). ¯ The evolution takes place as d

.i h = ¯ dτ



Pˆ 2 − ωJˆ3 (τ ) 2m

32

2 A Bundle Picture of Quantum Mechanics

One can wonder: how is this related to the centrifugal and Coriolis forces? A classical particle has position and velocity simultaneously, and therefore the centrifugal force (which depends on position) and the Coriolis force (which depends on velocity) both determine the trajectory. For a quantum description, we can only see the evolution of a wave packet through Ehrenfest’s theorem. Thus, as first derivatives of average position, we find: .

d ˆ d X1 = (τ )|Xˆ 1 | (τ )

dτ dτ Pˆ 2 1 ˆ ˆ − ωJ3 | (τ )

= (τ )| X1 , i h¯ 2m =

1 ˆ P1 + ω Xˆ 2 . m

(2.22)

Similarly, .

d ˆ 1 X2 = Pˆ2 − ω Xˆ 1 , . dτ m d ˆ 1 X3 = Pˆ3 . dτ m

(2.23) (2.24)

Using these the second derivatives can be calculated: m

d2 ˆ X1 = 2ω Pˆ2 − mω2 Xˆ 1 , dτ 2

m

d2 ˆ X2 = −2ω Pˆ1 − mω2 Xˆ 2 , dτ 2

m

d2 ˆ X3 = 0. dτ 2

.

Substituting for . Pˆ2 and . Pˆ1 from (2.23) and (2.22), respectively, the Coriolis and centrifugal forces become manifest: m

d d2 ˆ X1 = 2ω Xˆ 2 + mω2 Xˆ 1 , . dτ dτ 2

(2.25)

m

d2 ˆ d X2 = −2ω Xˆ 1 + mω2 Xˆ 2 , . 2 dτ dτ

(2.26)

m

d2 ˆ X3 = 0. dτ 2

(2.27)
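Equations (2.25)-(2.27) mirror the classical statement that a free trajectory, rewritten in rotating coordinates, experiences Coriolis and centrifugal accelerations; by Ehrenfest's theorem the averages obey the same relations. A quick symbolic check of the corresponding classical identity (an added sketch, not from the book):

```python
# Verify the form of (2.25)-(2.26) for a free trajectory expressed in coordinates
# rotating about the 3-axis: x1' = x1 cos(w t) + x2 sin(w t), x2' = x2 cos(w t) - x1 sin(w t).
import sympy as sp

t, w, m = sp.symbols('t omega m', positive=True)
a1, a2, v1, v2 = sp.symbols('a1 a2 v1 v2', real=True)

x1, x2 = a1 + v1*t, a2 + v2*t                 # free motion in the inertial frame
x1p = x1*sp.cos(w*t) + x2*sp.sin(w*t)         # the same point in the rotating frame
x2p = x2*sp.cos(w*t) - x1*sp.sin(w*t)

eq25 = m*sp.diff(x1p, t, 2) - (2*w*m*sp.diff(x2p, t) + m*w**2*x1p)
eq26 = m*sp.diff(x2p, t, 2) - (-2*w*m*sp.diff(x1p, t) + m*w**2*x2p)

print(sp.simplify(eq25), sp.simplify(eq26))   # 0 0
```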

.

2.5  Further Exercises

Exercise 2.2 Let $\phi(x, t)$ be a solution of the Schrödinger equation for a (non-relativistic) free particle:

$$i\hbar\,\frac{\partial\phi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2\phi}{\partial x^2}.$$

Show that in an accelerated frame $t' = t$, $x' = x - gt^2/2$, the wave function

$$\psi(x', t') = \phi'(x', t')\, \exp(iS(x', t')/\hbar), \qquad \phi'(x', t') = \phi(x = x' + gt'^2/2,\ t = t'),$$

satisfies the Schrödinger equation in the accelerated frame with gravitational potential $mgx'$:

$$i\hbar\,\frac{\partial\psi(x', t')}{\partial t'} = -\frac{\hbar^2}{2m}\,\frac{\partial^2\psi(x', t')}{\partial x'^2} + mgx'\,\psi(x', t'),$$

provided that $S$ is chosen as

$$S(x', t') = -mgt'x' - \frac{1}{6}\,mg^2t'^3.$$

This is the argument used in reference [1].

Exercise 2.3 Consider a frame $(x', y', t)$ rotating (with angular velocity $\omega$) in a circle of radius $R$ about the origin of an inertial frame $(x, y, t)$ such that the origin of the rotating frame is always on the circle and its $x'$ axis always in the radial direction. Find the Hamiltonian for a free particle of mass $m$ in the rotating frame.

Exercise 2.4 Force and curvature: Show that in the simple case of pure translations in one-dimensional space and time, if the connection for time translation is changed from

$$\hat\Gamma_t = \frac{1}{\hbar}\,\frac{\hat P^2}{2m} \longrightarrow \frac{1}{\hbar}\left(\frac{\hat P^2}{2m} + V(\hat X)\right),$$

then the curvature is (with $\hat\Gamma_x = \hat P/\hbar$)

$$\hat R_{xt} = i[\hat\Gamma_x, \hat\Gamma_t] = \frac{i}{\hbar^2}\,[\hat P, V(\hat X)] = \frac{1}{\hbar}\,V'(\hat X).$$
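Exercise 2.2 can be checked quickly for a concrete free solution. The following sympy sketch (an added illustration, not part of the book) takes a plane-wave solution of the free equation and verifies that the transformed wave function with the stated phase $S$ satisfies the accelerated-frame equation with potential $mgx'$.

```python
# Symbolic check of Exercise 2.2 for a concrete free solution: a plane wave
# phi(x,t) = exp(i(k x - hbar k^2 t / 2m)).
import sympy as sp

x, t, xp, tp, k, g = sp.symbols('x t xp tp k g', real=True)
m, hbar = sp.symbols('m hbar', positive=True)

phi = sp.exp(sp.I*(k*x - hbar*k**2*t/(2*m)))          # free solution
free_eq = sp.I*hbar*sp.diff(phi, t) + hbar**2/(2*m)*sp.diff(phi, x, 2)
print(sp.simplify(free_eq))                           # 0: phi solves the free equation

S   = -m*g*tp*xp - m*g**2*tp**3/6                     # phase of Exercise 2.2
psi = phi.subs({x: xp + g*tp**2/2, t: tp})*sp.exp(sp.I*S/hbar)

acc_eq = (sp.I*hbar*sp.diff(psi, tp)
          + hbar**2/(2*m)*sp.diff(psi, xp, 2)
          - m*g*xp*psi)
print(sp.simplify(acc_eq))                            # 0: psi solves the accelerated-frame equation
```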


2.6  Notes

2.6.1  Bundle Picture

The equation $i\,dU = H\,dt\,U$, or $i\,dU\,U^{-1} = H\,dt$, occurs so often in quantum mechanics that it is surprising that the identification of $H\,dt$ as a connection 1-form has been late in coming, although the ideas of differential geometry have been applied to quantum field theory for quite some time. One reason may be that discussions in quantum mechanics are predominantly about time evolution, and a one-dimensional base space is not interesting enough. The relevance of the ideas of differential geometry came with the discovery of the geometric phase, where the Hamiltonian depends on several changing parameters. Despite that, as far as we know, the first explicit mention of time evolution as parallel transport occurs in M. Asorey et al. [2]. The idea was developed by D. Graudenz [3] and B. Iliev [4] as a mathematical formalism. It was P. Chingangbam [5] who applied the bundle picture to actual problems like the quantum mechanics of accelerated frames. Experimental confirmation of quantum effects of Einstein's equivalence principle in the non-relativistic as well as the relativistic domain is too vast a subject to be included here. The reader can follow the recent activity beginning from the paper by M. Zych and C. Brukner [6] and the chapter by Mashhoon [7].

2.6.2  Geometric Phase

Geometric phase also uses the language of a vector bundle, connection, and curvature, where the base is not that of frames of reference but of parameters on which the Hamiltonian of the system depends. The literature on geometric phase, or the Berry phase, is vast. It was indeed the “phase that launched a thousand scripts”! The collection of original papers can be found in A. Shapere and F. Wilczek [8]. The book by A. Bohm et al. [9] is a good introduction. The review by N. Mukunda and R. Simon [10] is devoted to applications of the geometric phase to group representations.

References

1. C.J. Eliezer, P.G. Leach, Am. J. Phys. 45, 1218 (1977); see also M. Nauenberg, Am. J. Phys. 84, 879 (2016)
2. M. Asorey, J.F. Carinena, M. Paramio, J. Math. Phys. 23, 1451 (1982)
3. D. Graudenz, arXiv:gr-qc/9412013
4. B.Z. Iliev, Int. J. Mod. Phys. A 17, 245 (2002)
5. P. Chingangbam, Connection and Curvature in the Fibre Bundle Formulation of Quantum Theory, Ph.D. thesis, Jamia Millia Islamia (2002) (unpublished); P. Chingangbam, P. Sharan, Phys. Rev. A 64, 042107 (2001)
6. M. Zych, C. Brukner, Nat. Phys. 14, 1027 (2018)
7. B. Mashhoon, Lect. Notes Phys. 702, 112 (2006)
8. A. Shapere, F. Wilczek (eds.), Geometric Phases in Physics (World Scientific, Singapore, 2006)
9. A. Bohm, A. Mostafazadeh, H. Koizumi, Q. Niu, J. Zwanziger, The Geometric Phase in Quantum Systems (Springer, Berlin, 2003)
10. N. Mukunda, R. Simon, Ann. Phys. 228, 205 (1993)

3  A Beam of Particles = A Plane Wave?

Abstract

How can a plane wave represent a beam of particles? And what exactly is “probability per unit time”? These questions bother every student of quantum mechanics when they encounter them in scattering theory. This chapter takes the mystery out of these issues, apart from offering a very concise introduction to formal scattering theory.

3.1  A Coherent Bunch

In this section we prove a result which is quite general, but which has a special importance in scattering theory.

A quantum system with translational degrees of freedom has a complete basis of eigenstates of total momentum labeled as $|\mathbf p, \beta\rangle$,

$$\langle \mathbf p'\beta'|\mathbf p\beta\rangle = 2\omega_p\, \delta^3(\mathbf p' - \mathbf p)\,\delta_{\beta'\beta}, \qquad \omega_p = \sqrt{\mathbf p^2 + m^2c^2},$$

where the $\beta$'s are a collective name for the eigenvalues of other observables needed for a complete set of basis states. And, although the result applies equally well to the non-relativistic case (apart from a normalization factor), we use the relativistic normalization to be specific. The eigenvalues $\beta$ may be discrete or continuous, but we use the notation for discrete values for simplicity. We also refer to the system as a "particle."

Let $|\phi_{\beta_0}\rangle$ be a state of the system with respect to an observer. We assume the state to have total momentum close to some fixed value $\mathbf k$:

$$\langle \mathbf p\beta|\phi_{\beta_0}\rangle = f(\mathbf p)\,\delta_{\beta\beta_0}$$

.


with .f (p) being sharply peaked around .p = k. Normalization of .|φβ0  requires  φβ0 |φβ0  =

.

d3 p |f (p)|2 = 1. 2ωp

Consider the same system as seen by another observer located with respect to the first observer at a distance .−r. If there is translational symmetry, this state is represented by a unitary translation operator .U (−r) acting on .|φβ0 : ˆ h¯ )|φβ0 , U (−r)|φβ0  = exp(ir · P/

.

where .Pˆ is the operator for total momentum. This state is the same as would be seen by the original observer had the system as a whole been displaced by a distance .r. Note that we do not assume that the system with state .|φβ0  is localized at any point in space. In fact, as we know from non-relativistic quantum mechanics, where position operators make sense, that a state with linear momentum wave function .f (p) more or less sharply defined, will have a very large wave packet in spatial dimensions. Now, imagine a very large number, N, of copies of the system all prepared in the same identical state .|φβ0  and, then, spatially translated from the place of their preparation to positions .ri , i = 1, . . . , N, the points being distributed with a uniform, constant density .ρ in space in a very large volume. These systems are then in states .|φi  whose momentum wave functions are given by pβ|φi  = f (p)δββ0 exp(iri .p/h). ¯

.

(3.1)

We call such a collection of particles a coherent bunch of particles. Suppose we want to make measurements on an observable B on all these states. The average of the expectation values of B in these states will be  3  3  d p d p 1  1  . φi |B|φi  = exp[i(p − p ) · ri /h¯ ] × N N 2ωp 2ωp i

i

 ∗

f (p ) f (p)p β0 |B|pβ0 . If the points .ri are closely  spaced, and the volume very large, we can replace the sum over points .ri by .ρ d3 r because .ρ is constant: .

   [. . . ] = ρd3 r[. . . ] = ρ d3 r[. . . ]. i

(3.2)

3.1 A Coherent Bunch

39

Carrying out the integration over .r, we get a factor .(2π h) ¯ 3 δ 3 (p − p ) which can be integrated out as well: .

 3 1  ρ(2π h¯ )3 d p |f (p)|2 φi |B|φi  = B = pβ0 |B|pβ0 . N N 2ωp 2ωp i

If the observable B is such that .pβ0 |B|pβ0  varies smoothly near .p = k, then the matrix element can be pulled out at value .k because that is where the main contribution to the integral comes from, and the expectation value, to a good approximation, is B =

.

=

ρ(2π h) ¯ 3 1 kβ0 |B|kβ0  N 2ωk



d3 p |f (p)|2 2ωp

ρ(2π h) ¯ 3 1 kβ0 |B|kβ0 , N 2ωk

(Relativistic)

(3.3)

because the last factor is equal to one. Dramatically, the dependence on details of the momentum profile of the state .|φβ0  disappears completely! All that mattered was that the momentum was sharply peaked around .k. We call the formula (3.3) the (relativistic) “bunch formula.” In fact, if the bunch had been made from a mixture of several states, all of them with different momentum profiles (but the same .β0 ), and with a sharp peak at the same value of .k, we would have obtained the same result. We get the bunch formula for the non-relativistic case if we take the nonrelativistic normalization for the momentum states: B =

.

ρ(2π h) ¯ 3 kβ0 |B|kβ0 . N

(Non-relativistic)

(3.4)

There is another way to look at this result. If these N particles were inside a large volume .L3 , then .ρ = N/L3 . Let .d3 k be a small volume element in the momentum space. The above result (e.g., for the non-relativistic case) can then be written as B =

.

kβ0 |B|kβ0 d3 k , n

where n is the number of “cells” of size .h3 in the occupied phase space: n=

.

L3 d3 k , h3

h = 2π h¯ = Planck’s constant.

3 A Beam of Particles .= A Plane Wave?

40

3.2

Scattering Theory

3.2.1

Scattering States and the Moller Operator

We work with Hamiltonian H of the system, and also with the free Hamiltonian .H0 when the interaction is “switched off.” Both the operators are defined in a common Hilbert space .H. We use the Schrödinger picture where all time dependence is carried by the states (Fig. 3.1). Let .(t0 ) be the state of the system at some fixed time, say, .t = t0 . Then the state at time t is (t) = U (t − t0 )(t0 ),

.

U (t − t0 ) = exp(−iH (t − t0 )/h¯ ).

A vector .φ(t0 ) ∈ H would represent the state of the system when free from interaction at .t = t0 if, at any other time t, it is given by φ(t) = U0 (t − t0 )φ(t0 ),

.

U0 (t − t0 ) = exp(−iH0 (t − t0 )/h¯ ).

Now suppose that .(t) is such that it becomes indistinguishable from some free state .φ(t) for large negative t, then we can say that the system behaves like a free system in remote past, that is, if .

lim (t) = lim φ(t).

t→−∞

t→−∞

A scattering state .(t0 ) at time .t = t0 is a state such that its “ancient history” (t) = U (t − t0 )(t0 ) for large negative times .t → −∞ coincides with that of some free state .φ(t) ≡ U0 (t − t0 )φ(t0 ) :

.

.

lim (t) − φ(t) = 0.

(3.5)

t→−∞

−→ χ(t) Ψ(t) t = −∞

φ(t)

←−

Fig. 3.1 Evolution of a scattering state .(t) from .t = −∞ to .+∞

t = +∞

3.2 Scattering Theory

41

Such a scattering state looks like a free state in remote future too. That is, there is a free state .χ (t0 ) such that .

lim (t) − χ (t) = 0,

(3.6)

t→∞

where .χ (t) = U0 (t − t0 )χ (t0 ). The existence of the first limit is equivalent to .

lim (t) − φ(t) = lim U (t − t0 )(t0 ) − U0 (t − t0 )φ(t0 )

t→−∞

t→−∞

= lim (t0 ) − U −1 (t − t0 )U0 (t − t0 )φ(t0 ) → 0, t→−∞

and defines the Moller wave operator . (+) :

(+) ≡ lim U (t − t0 )−1 U0 (t − t0 ).

(3.7)

(t0 ) = (+) φ(t0 ).

(3.8)

.

t→−∞

As a result, at .t = t0 .

It is not necessary to start at time .t = t0 . We could have started at any finite time .t1 and obtained (t1 ) = (+) φ(t1 ).

.

(3.9)

Thus, . (+) is actually independent of time. Similarly, for .t → ∞ we would get

(−) ≡ lim U (t − t0 )−1 U0 (t − t0 ),

(3.10)

(t) = (−) χ (t).

(3.11)

.

t→+∞

and, for any time t, .

Not every state can be a scattering state. If H admits bound states, then those certainly do not go to free states in remote past or future. There are confining Hamiltonians H which have only bound states, and no scattering states.

3.2.2

Scattering Matrix

A system starting out with a free state .φ(t) in remote past becomes a free state χ (t) in remote future. The two free states, final and initial, are related (using

.

3 A Beam of Particles .= A Plane Wave?

42 †

(−) (−) = 1, proved later), by

.





χ (t) = (−) (t) = (−) (+) φ(t) ≡ Sφ(t).

.

(3.12)

The operator S is called the scattering matrix or S-matrix. As we shall see, . (+) and S contain all information about scattering.

3.2.3

Properties of (±) and S

We now list a number of properties of the Moller operators . (±) , and the scattering matrix S with proofs detailed in a later section: (1) . (±) are norm preserving operators: †

(+) (+) = 1,

.





(−) (−) = 1.

(3.13)



But . (+) (+) (or . (−) (−) ) may not be equal to 1. That means . (±) may not be unitary if the scattering states do not span the whole Hilbert space .H. (2) . (±) convert .U0 (t) into .U (t):

(±) U0 (t) = U (t) (±) , .

(3.14)

(±) H0 = H (±) , .

(3.15)

.



(±) †

(±) †

H = H0

.

(3.16)

(3) S is unitary: SS † = S † S = 1.

.

(3.17)

(4) S commutes with the free Hamiltonian .H0 , although it may not commute with the total Hamiltonian: SH0 = H0 S.

.

(3.18)

(5) There is an integral equation for the Moller operator . (+) . Denote .V ≡ H −H0 , then  1 t0 (+) .

=1+ U0 (t − t0 )−1 V (+) U0 (t − t0 )dt. (3.19) i h¯ −∞

3.3 Transition Rate

43

(6) Similarly, there is an integral equation for the S-matrix: S =1+

.

3.3

1 i h¯





−∞

U0 (t)−1 V (+) U0 (t)dt.

(3.20)

Transition Rate

Let .φ and .ξ be two free states at time .t = t0 orthogonal to each other, .(φ, ξ ) = 0. Let us begin with a large number N of identically prepared systems in the free state .φ(−T1 ) = U0 (−T1 )φ, in remote past (.T1 a large positive number). In scattering theory we are interested in a question of the following type: how many of these N systems will be found in a free state .ξ(+T2 ) = U0 (+T2 )ξ in some remote future .t = +T2 (.T2 being a large positive number)? If there was no interaction and evolution took place only with the free Hamiltonian .H0 , then the answer would be zero, because for all values of time t the state .φ(t) would be orthogonal to .ξ(t), both states evolving with .H0 so that .(φ(T2 ), χ (T2 )) = 0. As it is, the state .φ(−T1 ) evolves to the scattering state .(t) = (+) φ(t) and may have a non-zero probability .Pξ  (t) = |(ξ(t), (t))|2 to be found in a free state .ξ(t) at time t. This probability is zero for large negative times, increases as interaction is “switched on,” and gradually saturates to a constant value .Pξ  = |(ξ(T2 ), (T2 ))|2 = |(ξ(T2 ), χ (T2 ))|2 because for large positive times the state again evolves as a free state .χ . Therefore the total number of transitions to state .ξ(T2 ) at time .T2 is NPξ  = N|(ξ(T2 ), (T2 ))|2 .

.

If we wait for a time . T2 more, the total number of transitions will be NPξ  = N|(ξ(T2 + T2 ), (T2 + T2 ))|2 .

.

Therefore the number of transitions per unit time or the transition rate is given by nξ = N

.

d |(ξ(t), (t))|2 , dt

to be evaluated for large times.

(t → ∞)

(3.21)

3 A Beam of Particles .= A Plane Wave?

44

3.3.1

Irreversibility of Transitions

The argument given above is based on the assumption that once the system makes a transition to the free state .ξ(t) from .(t), it continues to evolve as a free state from then on, and does nor revert back again to a scattering state. Therefore, transitions to .ξ occurring at different times keep accumulating and would be counted among the states which have already made transition to .ξ . At present we do not have a complete theory of measurement in quantum mechanics. These issues are not completely understood. But the formulas given below are based on these assumptions, and they agree completely with experimental observations.

3.3.2

Transition Probability per Unit Time

Let us put the transition rate in a more convenient form. For any complex function A(t) of t such that .dA/dt = C/i

.

.

d|A|2 C C∗ = A∗ − A = 2 Im(A∗ C). dt i i

Applying this simple identity to .A = ξ(t)|(t) = ξ(t)| (+) |φ(t), we obtain: .

dA 1 = dt i h¯

 i h¯

   d d ξ(t)| |(t) + ξ(t)| i h¯ |(t) dt dt

1 [ξ(t)|(−H0 )|(t) + ξ(t)|H |(t)] i h¯ 1 = ξ(t)|V |(t), V = H − H0 i h¯ 1 = ξ(t)|V (+) |φ(t) i h¯

=

which identifies the number C of the identity as C = ξ(t)|V (+) |φ(t)/h¯ .

.

The transition rate at time .t0 is, therefore, nξ = N2 Im(A∗ C) 2N

Im ξ(t0 )| (+) |φ(t0 )∗ ξ(t0 )|V (+) |φ(t0 ) , = h¯ 2N

† = Im φ(t0 )| (+) |ξ(t0 )ξ(t0 )|V (+) |φ(t0 ) . h¯

.

3.4 Cross Sections

45

Thus, 2N Imφ(t0 )|B(t0 )|φ(t0 ), h¯

nξ =

.

(3.22)

where †

B(t0 ) = (+) |ξ(t0 )ξ(t0 )|V (+) .

.

(3.23)

Note the appearance of the projection operator: Pξ(t0 ) = |ξ(t0 )ξ(t0 )|

.

(3.24)

to the final states. This is the formula for the number of transitions per unit time if all the N particles were in the same identical free state .φ in remote past. When we apply this formula to an actual beam, we have to replace the particles with the bunch average. That is

1  2N .nξ = φi (t0 )|B(t0 )|φi (t0 ) . Im N h¯

(3.25)

i

We can substitute the “bunch formula” inside the parentheses in the formula above depending on the situation of the specific case. To evaluate the above average for large values of t, we can choose the fixed “origin” of time .t0 large enough to include the time of the duration of interaction. In practice, both the free states .φ(t) and .ξ(t) are stationary states with trivial time dependence .exp(−iEt/h). ¯ Therefore .nξ is actually independent of time.

3.4

Cross Sections

3.4.1

Non-relativistic Scattering from a Potential

The initial free state .φ (whose spatial translates constitute the bunch for a beam) is an eigenstate of the linear momentum .|k with energy .Ek = |k|2 /2m where m is the mass of the particle. In potential scattering we also look for transitions to final states with definite energy as well. With this in mind, let .φ(t0 ) represent a beam with sharp momentum around .k, density .ρ, and energy .Ek . Let the final state have the sharp energy .Ef . Our bunch

3 A Beam of Particles .= A Plane Wave?

46

formula (3.4) will give the transition rate:

2N 1  .nξ = φi (t0 )|B(t0 )|φi (t0 ) Im N h¯ i

=

2N h¯

ρ(2π h)3 ¯

N

Im kβ0 |B(t0 )|kβ0 

= 16π 3 h¯ 2 ρ Im kβ0 |B(t0 )|kβ0  †

= 16π 3 h¯ 2 ρ Im kβ0 | (+) |ξ(t0 )ξ(t0 )|V (+) |kβ0 

= 16π 3 h¯ 2 ρ Im ξ(t0 )| (+) |kβ0 ∗ ξ(t0 )|V (+) |kβ0  . It should be noted that there is .t0 dependence of .φ(t0 ). Strictly speaking we should write .kβ0 , t0 | and .|kβ0 , t0 , but we omit it to simplify writing. It should be understood in the formulas above and below. Now, we first calculate .ξ(t0 )| (+) |kβ0  using the integral equation (3.19). Since both .|kβ0  and .ξ are free states at .t = t0 with energies .Ek and .Ef respectively, and they are chosen to be orthogonal, the first term in the integral equation, the identity term corresponding to “no scattering,” does not contribute. Therefore, ξ(t0 )| (+) |kβ0  =

.

1 i h¯

1 = i h¯



t0

−∞



ξ(t0 )|U0 (t − t0 )−1 V (+) U0 (t − t0 )|kβ0 dt

t0

−∞

 ei(Ef −Ek )(t−t0 )/h¯ dt ξ(t0 )|V (+) |kβ0 

1 ξ(t0 )|V (+) |kβ0 ,

→0 Ek − Ef + i

= lim

where we have added an infinitesimal quantity .−i to .Ef − Ek in the exponential to make the singular integral convergent. Plugging the complex conjugate of this back this into our formula for .nξ , we obtain: 

1 .nξ = 16π h |ξ(t0 )|V (+) |kβ0 |2 . ¯ ρ Im lim

→0 Ek − Ef − i 3

2

Since we have chosen .ξ to be an energy eigenstate, there is really no dependence of this transition rate on .t0 , because the phase factors occur inside the modulus square. From the well-known formula   1 1 . (3.26) =P + iπ δ(x) x − i x

3.4 Cross Sections

47

we calculate the imaginary part and get nξ = 16π 4 h¯ 2 ρ δ( Ek − Ef )|T (ξ, φ)|2 ,

.

(3.27)

where we have introduced the transition amplitude T (ξ, φ) = ξ(t0 )|V (+) |kβ0 .

.

(3.28)

We choose the final states .|ξ(t0 ) = |p, γ  into which transitions are taking place to lie in a narrow range . of momentum space .p and sum over the number of transitions: |ξ(t0 ) = |p, γ , Ep = |p|2 /2m, d3 p = m|p|dEp d p ,   . d3 p nξ = 16π 4 h¯ 2 ρ m|k| |T (p, γ ; k, β0 )|2 d p .

.



(3.29)



The number of transitions per unit time is proportional to flux, that is, density .× velocity: .ρ|k|/m. The cross section .σ is defined as the ratio of the number of transitions in the desired final states (here those in momentum range . ) to the initial flux. Therefore,  4 2 2 .σ = 16π h |T (p, γ ; k, β0 )|2 d p (3.30) ¯ m

It is well worth checking that the physical dimension of the cross section is an area. Since the momentum eigenstates are normalized as .p|p  = δ 3 (p − p ), the scattering amplitude T being a matrix element of .V (+) has dimensions of (energy).×(momentum).−3 . The importance of the integral equation (3.19) is that it can be used to calculate (+) by iteration if we can regard the interaction term V as small. To the lowest .

(+) order, the Born approximation, . 0 , is the identity, and therefore the transition amplitude   T (ξ, φ)

.

Born

= pγ |V |kβ0 .

(3.31)

Exercise 3.1 As an exercise one can check that for a central potential .V = V (r), 2 .k|V |k  = (2π h) ¯ 2 h¯ 

 0



r 2 drV (r)

sin(Kr) Kr

3 A Beam of Particles .= A Plane Wave?

48

where .K = |k − k |/h¯ . With this the differential cross section can be written in the standard form:     2m ∞ 2 sin(Kr) 2 dσ =  2 r drV (r) d k . Kr  h¯ 0

.

(3.32)

We calculate the Rutherford scattering by a Coulomb potential that is V (r) =

.

C . r

(3.33)

The integral in question is singular. We “screen” the Coulomb potential replacing C/r by .C exp(− r)/r with the understanding that . → 0. Then,

.

 .



dre− r sin(Kr) = lim

→0 2

0

K 1 = , 2 K +K

and the cross section formula is the familiar .

dσ = d



C 4E

2

1 sin4 (θ/2)

,

(3.34)

where .K 2 h¯ 2 = |k − k |2 = 4k 2 sin2 (θ/2), .θ is the angle between .k and .k and E is the energy .E = k 2 /2m.

3.4.2

Colliding Non-relativistic Beams

A beam of particles collides with a bunch of more or less stationary particles called the target. This is when we say the scattering in taking place in a “lab frame.” We can also have the situation when the incident beam collides with another beam. The free momentum states basis in this case can be chosen to be normalized as p1 β1 , p2 β2 |p 1 β1 , p 2 β2  = δ 3 (p1 − p 1 )δβ1 β  1 δ 3 (p2 − p 2 )δβ2 β  2 ,

.

where the subscript 1 refers to beam particles and 2 to target particles (or particles of the other beam). For the sake of clarity, we call the particles of the second beam as target. The quantity whose expectation value is to be measured on these bunches is B. We assume that B conserves, that is, commutes with, total linear momentum. Therefore, defining a reduced quantity b, write p1 β1 , p2 β2 |B|p 1 β1 , p 2 β2  = δ 3 (p1 + p2 − p 1 − p 2 ) ×

.

b(p1 β1 , p2 β2 ; p 1 β1 , p 2 β2 ).

3.4 Cross Sections

49

The average value over the colliding bunch is 1  φ1i φ2j |B|φ1i φ2j  N1 N2 i,j       ρ1 ρ2 3 3 3 3 3  = d r1 d r2 d p1 d p2 d p1 d 3 p2 N1 N2

B =

.

 exp[i(p1 − p1 ) · r1 /h] ¯ exp[i(p2 − p2 ) · r2 /h] ¯ ×

f1 (p1 )∗ f1 (p1 )f2 (p2 )∗ f2 (p2 ) × δ 3 (p1 + p2 − p 1 − p 2 )b(p1 γ1 , p2 γ2 ; p 1 γ1 , p 2 γ2 ). Now integration over .r1 gives .(2π h) ¯ 3 δ 3 (p1 − p1 ) which removes integration on .p1 . However, the momentum conserving delta function then becomes .δ 3 (p2 − p2 ). Thus  d 3 r2 = .p integration can be done. Therefore the .r2 integration is vacuous and .ρ2 2 N2 , which cancels with the .1/N2 outside the integral signs. Due to assumed sharp peaks in .f1 and .f2 at .k1 and .k2 , respectively, and assuming the absence of sharp peak at these values in b, we get: B =

.

ρ1 (2π h) ¯ 3 b(k1 γ1 , k2 γ2 ; k1 γ1 , k2 γ2 ). N1

(3.35)

We are now ready to define cross section for colliding beams, or, as explained above, beam on target. Two beams with sharp values of momenta .k1 and .k2 respectively and other quantum numbers .γ1 , γ2 respectively collide and scatter. We are interested in the final states with projection operators:  Pξ = |ξ1 ξ2 ξ1 ξ2 | =

.



d 3 k1



d 3 k2 |k1 γ1 , k2 γ2 k1 γ1 , k2 γ2 |.

The number of particles making a transition into these final states is nξ1 ξ2 =

.

2N1 N2 ImB, h¯

with B as before given by †

B = (+) Pξ V (+) .

.

The number becomes, using (3.35), 32 Im b(k1 γ1 , k2 γ2 ; k1 γ1 , k2 γ2 ) nξ1 ξ2 = ρ1 N2 (2π h) ¯ h¯

.

3 A Beam of Particles .= A Plane Wave?

50

where b is defined by (.K = k1 + k2 , etc.) k1 γ1 , k2 γ2 |B|k1 γ1 , k2 γ2  = δ 3 (K − K )b(k1 γ1 , k2 γ2 ; k1 γ1 , k2 γ2 ).

.

To calculate this we begin with the general expression: †

k1 γ1 , k2 γ2 | (+) Pξ V (+) |k1 γ1 , k2 γ2 

.

and put .k1 = k1 , k2 = k2 at the end. Use the (3.19) to write matrix element of †

(+) in terms of complex conjugate of that of .V (+) . There are two momentum conserving delta functions; we can integrate over one by changing to variables:

.

 .



d 3 k1



d 3 k2 =



d 3 K



d 3 k



where .K = k1 + k2 is the total momentum and .k = (m1 k2 − m2 k1 )/(m1 + m2 ) the relative momentum. For free states the energies are Ek1 k2 =

.

|k1 |2 |k2 |2 |K|2 |k|2 + = + ≡ EK + ek = EKk 2m1 2m2 2M 2μ

with .M = m1 + m2 as the total mass and .μ = m1 m2 /(m1 + m2 ) the reduced mass. We get, separating into total and relative momenta, p1 β1 , p2 β2 |V (+) |p 1 β1 , p 2 β2  ≡ δ 3 (P − P )T (pβ1 β2 , p β1 β2 ; P).

.

Therefore, †

k1 γ1 , k2 γ2 | (+) Pξ V (+) k1 γ1 , k2 γ2  = δ 3 (K − K ) ×

.

T (k γ1 γ2 , kγ1 γ2 ; K)∗ × T (k γ1 γ2 , k γ1 γ2 ; K ) × (EKk − EK k − i )−1 . This defines b in which we put .k = k. We can then take the imaginary part, which gives nξ1 ξ2 = ρ1 N2 (2π h¯ )3

.



2π × h¯

d 3 k δ(EKk − EK k )|T (k γ1 γ2 , kγ1 γ2 ; K)|2 .

3.4 Cross Sections

51

Because of .K = K the energy corresponding to total momentum is already equal, so .EKk − EK k = ek − ek . As, in the case of potential scattering, we change: d 3 k = |k|2 d|k|d k = μ|k|dek d k

.

this gives us the formula for cross section which is now defined as the rate of transitions per target particle, that is, a division by .N2 . The flux is now given by .ρ1 × relative velocity = ρ1 |k|/μ. We get: dσ = (2π )4 μ2 h¯ 2 |T (k γ1 γ2 , kγ1 γ2 ; K)|2 d k

.

(3.36)

This is the same formula as the one for the cross section in scattering by a fixed potential, with relative momentum .|k| = |k | for the particle momentum and the reduced mass in place of particle mass. In the center of mass frame, we must put .K = 0.

3.4.3

Relativistic Scattering

Relativistic scattering differs from non-relativistic scattering in following respects. The process of scattering involves creation or annihilation of particles. Therefore the time evolution operator generators .H0 or H are defined on a much larger Hilbert space, the Fock space. Relativistic one-particle states are given as momentum space wave functions in the basis .|pλ with normalization: pλ|p λ  = 2ωp δ 3 (p − p )δλλ

.

 where .ωp = p2 + m2 c2 . Multi-particle states are defined by tensor products but have to be properly symmetrized (or anti-symmetrized) over identical particles. We consider two colliding beams with momenta sharply defined around .p1 and .p2 and spins .σ1 and .σ2 , respectively. The final state projection operator is taken as Pξ =

.

  d 3 k1 d 3 kn ··· |k1 λ1 . . . kn λn k1 λ1 . . . kn λn |. 2ωk1 2ωkn {λ}

The argument for non-relativistic colliding beams can be repeated almost step by step except that the total momentum delta function is not integrated and the

3 A Beam of Particles .= A Plane Wave?

52

transition amplitude matrix is defined by k1 λ1 . . . kn λn |S|p1 σ1 p2 σ2  = k1 λ1 . . . kn λn |p1 σ1 p2 σ2 

.



2π i 4 δ (Pf − Pi ) M(k1 λ1 . . . kn λn ; p1 σ1 p2 σ2 ). c

Here .Pf and .Pi are, respectively, the total 4-momenta of the final and the initial states, and a factor .1/c appears because, in our notation, the 0-component of 4-momentum has the physical dimension of momentum, whereas the S-matrix contains a factor .δ(Ef − Ei ). This amplitude M (with two initial particles, n final particles, all with relativistic normalization) has the physical dimensions of (velocity).×(momentum).2−n . The cross section can be calculated as (see Ex. 3.4 in Sect. 3.6): (2π )4 h¯ 2  .σ = 2ωp1 2ωp2 (vrel c)



{λ}

d 3 k1 d 3 kn 4 ··· δ (Pf − Pi ) × 2ωk1 2ωkn

|M(k1 λ1 . . . kn λn ; p1 σ1 p2 σ2 )|2

(3.37)

where the relative velocity of particles in one beam relative to those in the other is given by vrel

.

   p1 p2   = c − . ωp1 ωp2 

3.5

Comments on Formulas of Sect. 3.2.3

3.5.1

Moller Operators

1. Although U (t − t0 )−1 U0 (t − t0 ) are unitary for all finite t their limits (±) as t → ±∞ may not be. For every scattering state, there is a free state, but there are, for example, bound states which do not go over to free states in remote past or future. Therefore the mappings (±) are not one-to-one and invertible as they should be if they were unitary. 2. (±) are norm preserving operators:

(+) φ = φ ,

and.

.



(−)

φ = φ

∀φ ∈ H

(3.38) (3.39)

because they are the limits of norm preserving operators. Thus we can write: †

(±) (±) = 1.

.

(3.40)

3.5 Comments on Formulas of Sect. 3.2.3

53

3. (±) are independent of time. This is because the limit t → ±∞ remains the same even if the origin of t is changed by a finite constant. † 4. (±) annihilates bound states: †

(±) bd = 0.

.

This can be seen as follows. Let φ be any free state and  = (+) φ the corresponding scattering state. All scattering states are orthogonal to the bound † states. So, (, bd ) = 0, and therefore (φ, (+) bd ) = 0 for any φ ∈ H. † Similarly for (−) . † 5. As (±) (±) bd = 0 for all bound states, the operator is a projection on the subspace orthogonal to the space of bound states. Thus, †

(±) (±) = 1 − Pbd = Pscatt

.

where the operators Pscatt and Pbd are projection operators on the subspaces of scattering and bound states, respectively. 6. For any fixed t U (t) (±) = lim U (t)U (s)−1 U0 (s)

.

s→∓∞

= lim U (t − s)U0 (s − t)U0 (t) s→∓∞

= lim U (u)−1 U0 (u)U0 (t), u = t − s u→∓∞

= (±) U0 (t).

(3.41)

Differentiating with respect to t and putting t = 0 gives another useful result: H (±) = (±) H0 ,

.

(3.42)

and its adjoint equation †



(±) H = H0 (±) .

.

Physically, this equation means that the energy spectrum of free particle states is contained in the spectrum of the total Hamiltonian : if φE is an eigenstate of H0 with energy E, (±) φE is an eigenstate of H with the same eigenvalue. One should appreciate that energy eigenvectors of H0 and H may have the same labels, but those of H0 span the whole space H, whereas those of H span only the subspace of scattering states.

3 A Beam of Particles .= A Plane Wave?

54

S-Matrix

3.5.2

S is unitary: †





SS † = (−) (+) (+) (−) = (−) (1 − Pbd ) (−) = 1

.



because . (−) Pbd = 0. Similarly, .S † S = 1. S commutes with the free hamiltonian .H0 : using (3.15) and (3.16) †





SH0 = (−) (+) H0 = (−) H (+) = H0 (−) (+) = H0 S.

.

The S-matrix may not commute with the total Hamiltonian H .

3.5.3

Integral Equations

1. Define V ≡ H − H0 , then

(+) = 1 +

.



1 i h¯

0

−∞

U0 (t)−1 V (+) U0 (t)dt.

(3.43)

2. Similarly, 1 .S = 1 + i h¯





−∞

U0 (t)−1 V (+) U0 (t)dt.

(3.44)

As U0 (t) is the free evolution operator, it is supposed to be known, and if V can be considered as small, the first equation (called the Lippmann-Schwinger equation) provides a way to calculate (+) by iteration. For example, the zero-order approximation to (+) (for V = 0) is (+) 0 = 1, and the first order, called the “first Born approximation,” is by substituting (+) 0 on the right hand side (+) .

1

1 =1+ i h¯



0

−∞

U0 (t)−1 V U0 (t)dt.

Once (+) is known, S can be calculated from the second equation above. Proof for the Integral Equations Start from the trivial identity: 

0

.

−∞

d [U (t)−1 U0 (t)]dt = 1 − (+) . dt

(3.45)

3.5 Comments on Formulas of Sect. 3.2.3

55

Inside the integral sign differentiation (using i hdU/dt = H U and i h¯ dU0 /dt = ¯ H0 U0 ) gives (+)



.



1 =1+ i h¯

0

U (t)−1 V U0 (t)dt.

−∞

The adjoint of this equation is (+) †



.



1 =1− i h¯

0

−∞

U0 (t)−1 V U (t)dt.

(3.46)

U0 (t)−1 V U (t)dt.

(3.47)

Similarly, †

(−) = 1 −

.

1 i h¯





0 †

Multiply (3.46) by (+) on the right and use (+) (+) = 1 to get

(+) = 1 +

.

1 i h¯



0

−∞

U0 (t)−1 V U (t) (+) dt.

(3.48)

This can be written in the desired form by using (3.14):

(+) = 1 +

.

1 i h¯



0

−∞

U0 (t)−1 V (+) U0 (t)dt.

(3.49)

Next, from (3.46) and (3.47) (+) †



.

(−) †



1 =− i h¯





−∞

U0 (t)−1 V U (t)dt,

(3.50)

Multiply on the right by (+) to get (again using (3.14)) (−) †

S=

.

3.5.4

(+)



1 =1+ i h¯





−∞

U0 (t)−1 V (+) U0 (t)dt.

(+) and S in Energy Basis

Let .|Eα be eigenvectors of .H0 with normalization: H0 |Eα = E|Eα

.

Eα|E  α   = δ(E − E  )δαα 

(3.51)

3 A Beam of Particles .= A Plane Wave?

56

where .α are observables other than energy needed to form a complete set of commuting observables. Sandwich the integral equation (3.19) in these states, and as U0 (t)|Eα = exp(−iEt/h¯ )|Eα,

.

we obtain: Eα| (+) |E  α   = δ(E − E  )δαα  +

.

Eα|V (+) |E  α   , E  − E + i

(3.52)

where we interpret the singular integral 

0

.

−∞





exp[i(E − E )t/h¯ ]dt = lim

0

→0 −∞

= lim

→0 E 

exp[i(E − E  − i )t/h¯ ]dt

i h¯ . − E + i

The sign of .i is chosen to make the integral convergent. Similarly, the equation for the S-matrix (3.20) can be written: Eα|S|E  α   = δ(E − E  )δαα  − 2π iδ(E − E  )Eα|V (+) |E  α  

.

≡ δ(E − E  )δαα  − 2π iδ(E − E  )T (Eα, E  α  ).

(3.53)

The quantity .T (Eα, E  α  ) ≡ Eα|V (+) |E  α   occurs very frequently in scattering theory and is called the off-shell transition amplitude or the off-shell T-matrix. But the transition amplitude in the equation above occurs with the energy conserving delta function and is actually the on-shell transition amplitude or Tmatrix .TE (α, α  ) ≡ T (Eα, Eα  ).

3.6

Further Exercises

Exercise 3.2 Define Green’s function: 1 .GE = i h¯



0

−∞

dt exp[i(H0 − E)t/h¯ ].

Show that in the coordinate basis |r of a non-relativistic particle of mass m r|GE |r  = −

.

exp(ipR/h¯ ) , R ¯

m

2π h2

√ p = + 2mE, R = |r − r |.

3.6 Further Exercises

57

Exercise 3.3 Most textbooks discuss non-relativistic potential scattering by assuming the “Sommerfeld radiation condition” in the asymptotic region. That is, for an incoming beam represented by A exp(ikz), the wave function of the scattered “wave” is assumed to have the form   eikr ikz , kr → ∞. .A e + f (θ ) r Justify this expression from the formal scattering theory. Hint: The incoming state (at t = 0) φE (0) is an eigenstate of energy with E = k 2 h¯ 2 /2m. Since H (+) = (+) H0 (see 3.42), the scattering state E (0) =

(+) φE (0) is also an eigenstate of energy with the same value. Therefore, from the integral equation for (+) (3.48) E (0) = φE (0) +

.

1 i h¯



0 −∞

dt exp(iH0 t/h)V ¯ exp(−iEt/h) ¯ E (0)

= φE (0) + GE V E (0), where GE is as defined in the previous problem. Multiply on the left by r|, use the result of the previous problem, and the fact that V (r ) is non-zero only in a small region. Exercise 3.4 Complete the steps for derivation of the relativistic cross-section formula (3.37): (We have omitted the spin variables σ1 , σ2 in the initial state |p1 σ1 p2 σ2 . They play no role in the derivation and can be restored in the end.) h¯ 2  (2π )4 .σ = 2ωp1 2ωp2 (vrel c) {λ}



d 3 kn 4 d 3 k1 ··· δ (Pf − Pi ) |M(k1 λ1 . . . kn λn ; p1 p2 )|2 2ωk1 2ωkn

where we define the relativistic transition matrix M as k1 λ1 . . . kn λn |V (+) |p1 p2  = δ 3 (Pf − p1 − p2 )M(k1 λ1 . . . kn λn ; p1 p2 ),

.

or, equivalently, from f |S|i = f |i −

.

2π i 4 δ (Pf − Pi ) f |M|i. c

Outline of Solution Let |ξ  be the final state and |φ1 φ2  the two-particle initial state. The number of transitions per unit time for two bunches of size N1 and N2 is given by 2 † nξ = N1 N2 Imφ1 φ2 |B|φ1 φ2 , B = (+) |ξ ξ |V (+) . h¯

.

3 A Beam of Particles .= A Plane Wave?

58

The average over the bunches is φ1 φ2 |B|φ1 φ2  =

.

1  φ1i φ2j |B|φ1i φ2j , N1 N2 i,j

where |φ1i  are spatially displaced by ri and |φ2j  by rj . Substituting the wave functions .

  3  3  3 d p2 d p1 d p2 1  d3 p1 φ1 φ2 |B|φ1 φ2  = N1 N2 2ωp1 2ωp2 2ωp1 2ωp2 i,j

exp(i(p1 − p1 ) · ri ) exp(i(p2 − p2 ) · rj ) × f1∗ (p1 )f1 (p1 )f2∗ (p2 )f2 (p2 ) p1 p2 |B|p1 p2 

(3.54)

We first calculate †

p1 p2 |B|p1 p2  = p1 p2 | (+) |ξ ξ |V (+) |p1 p2 

.

= (ξ | (+) |p1 p2 )∗ ξ |V (+) |p1 p2 . We have seen from the integral equation for (+) that (as ξ |p1 p2  = 0) ξ | (+) |p1 p2  =

.

ξ |V (+) |p1 p2  , E  − Eξ + i

E  = Ep1 + Ep2 .

Since total linear momentum is conserved ξ |V (+) |p1 p2  = δ 3 (Pξ − p1 − p2 )M(ξ, p1 p2 ).

.

Therefore, p1 p2 |B|p1 p2  = δ 3 (Pξ − p1 − p2 )δ 3 (Pξ − p1 − p2 ) ×

.

E

1 M(ξ, p1 p2 )∗ M(ξ, p1 p2 ). − Eξ − i

The two delta factors above can also be written as δ 3 (Pξ − p1 − p2 )δ 3 (Pξ − p1 − p2 ) = δ 3 (Pξ − p1 − p2 )δ 3 (p1 + p2 − p1 − p2 )

.

  Substitute this in (3.54) and (1) convert sums over i and j as ρ1 d3 r1 and ρ2 d3 r2 , respectively; (2) integration over r1 produces (2π h¯ )3 δ(p1 − p1 ) which allows integral over p1 to be performed; (3) since p1 = p1 , the delta functions in B, make a δ(p2 − p2 ) so that integral over p2 can be performed; (4) the r2 integration

3.6 Further Exercises

59

 is vacuous and ρ2 d3 r2 = N2 which cancels the 1/N2 factor outside. Thus (3.54) becomes (recall the derivation for the non-relativistic case): 1 (2π h) ρ1 ¯ 3 δ 3 (Pξ − p1 − p2 )|M(ξ, p1 p2 )|2 . N1 2ωp1 2ωp2 E − Eξ − i

φ1 φ2 |B|φ1 φ2  =

.

This can be substituted in the formula for nξ . The −i in the denominator gives the imaginary part of the factor as energy delta function times π by (3.26), and the N2 factor can be omitted if we are calculating cross section per target particle. The connection of M as a matrix element of V (+) related to the S-matrix is easily seen from the integral equation for S. Exercise 3.5 (Relativistic Decay Rate) The decay rate, of a relativistic particle of rest-mass m, is defined as the fraction of particles making a transition per unit time from initial state |k0  to final states |ξ  = |k1 λ1 . . . kn λn . Show that it is given by .

1 2π 1 1 = τ h¯ 2ωk c



d 3 kn 4 d 3 k1 ··· δ (Pf − Pi ) |M(k1 λ1 . . . kn λn ; k0 )|2 , 2ωk1 2ωkn (3.55)

where Pi is the initial 4-momentum vector (ωk0 , k0 ) and Pf the total 4-momentum of the final state. Hint: In this case there is no need to consider the bunch. The average of B in the initial state |φ is  φ|B|φ =

.

d3 k 2ωk



d3 k ∗  f (k )f (k) k |B|k, 2ωk

and, calling the final state as ξ k |B|k = δ 3 (pξ − k)δ 3 (pξ − k )

.

M(ξ ; k )∗ M(ξ ; k) . E  − Eξ − i

The integral on k can be done, which puts k = k, and as f (k) is peaked around k0 , the matrix elements (including the delta function!) can be pulled out at value k0 . So, 2 nξ = N Im φ|B|φ h¯ 2π 1 1 4 δ (Pξ − Pi )|M(ξ ; k0 )|2 . =N h¯ 2ωk0 c

.

3 A Beam of Particles .= A Plane Wave?

60

Divide by N to get the fraction and integrate over the final states to get the decay rate formula. Usually, the particle decays in its rest frame where k0 = 0 and ωk0 = mc. The physical dimension of M(k1 λ1 . . . kn λn ; k0 ) is (energy)×(momentum)2−n .

3.7

Notes

The probability interpretation of quantum mechanics was first given by Max Born [1] in connection with scattering theory. The idea of “coherent bunches” relating the actual number densities of particles to matrix elements of sharp momentum eigenstates has been used by the present author in classroom teaching. The formal scattering theory given here closely follows the treatment in R. G. Newton’s book [2].

References 1. M. Born, Quantenmechanik der StoBvorgange. Z. Phys. 38, 803 (1926); an English translation appears in G. Ludwig (ed.), Wave Mechanics (Pergamon Press, Oxford, 1968) 2. R.G. Newton, Scattering Theory of Waves and Particles (McGraw-Hill, New York, 1966)

4

Star-Product Formulation of Quantum Mechanics

Abstract

Star-product .(f ∗ g)(q, p) of two functions .f (q, p) and .g(q, p) on the phase space is a non-commutative product corresponding to the Hilbert space product .fˆg ˆ The quantum theory can be developed in analogy with ˆ of operators .fˆ and .g. the classical mechanics on the phase space, not with a Poisson bracket, but with a Moyal bracket .(f ∗g −g ∗f )/ i h¯ . This formulation is also called the deformation theory of quantization.

4.1

Weyl Ordering and the Star Product

Quantization is the process of arriving at a quantum theory starting from a classical theory by a set of rules. The usual procedure of assigning to phase space canonical variables q and p the Hermitian operators .qˆ and .pˆ in a Hilbert space runs into “ordering problems” when we seek to define observables other than the simplest ones because operators do not commute. For example, how is .q 2 p2 to be quantized? If we take .qˆ 2 pˆ 2 as the corresponding operator, it is not Hermitian. There are many choices even for a Hermitian operator: .(qˆ 2 pˆ 2 + pˆ 2 qˆ 2 )/2 or .qˆ pˆ2 qˆ or .(qˆ pˆ qˆ pˆ + etc. As part of our quantization procedure, we must also provide (at least pˆ qˆ pˆ q)/2, ˆ for the physically relevant observables) a rule for defining ordering of operators when converting a classical phase space quantity into its quantum mechanical counterpart. One of the oldest ordering rules is due to Weyl. It is simple to state: Let the classical quantity .f (q, p) to be quantized be written as a Fourier transform:  f (q, p) =

.



du dv exp[i(uq + vp)/h¯ ] f (u, v).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Sharan, Some Unusual Topics in Quantum Mechanics, Lecture Notes in Physics 1020, https://doi.org/10.1007/978-3-031-35962-0_4

61

62

4 Star-product formulation

The operator .fˆ to be associated with .f (q, p) by this rule is obtained by this same formula replacing classical q and p by .qˆ and .pˆ respectively on the right-hand side in the exponential: .fˆ =





du dv exp[i(uqˆ + v p)/ ˆ h] ¯ f (u, v).

(4.1)

We note down here the inverse formula which expresses the phase space function f (q, p) given the operator .fˆ:

.

 f (q, p) =

.

dx exp(ipx/h)q − x/2|fˆ|q + x/2. ¯

(4.2)

We will prove this formula in Sect. 4.4. Note: Throughout this chapter we stick to one degree of freedom for writing formulas. The generalization to many degrees of freedom is straightforward. Exercise 4.1 Show that if .f (q, p) is real, then .fˆ as defined above will be Hermitian. We can now ask the natural question: if .fˆ corresponds to .f (q, p) under Weyl ordering, and .gˆ to .g(q, p), is there a function, which under the Weyl rule, will correspond to the product .fˆg? ˆ f (q, p) −→ fˆ .

g(q, p) −→ gˆ ??

−→ fˆgˆ

The answer is yes, and the function on phase space is written as .(f ∗ g)(q, p) given by  (f ∗ g)(q, p) = f (q, p) exp

.

 i h¯ ↔ P g(q, p) 2

(4.3)



where .P is the Poisson bracket bi-differential operator ↔

← →

← →

∂ ∂ ∂ ∂ − . P = ∂q ∂p ∂p ∂q

(4.4)

4.2 Derivation for Star-Product Expression

63

the arrows indicating the direction in which the differential operators act. Thus, ↔

f (q, p)(P )n g(q, p) =

n 

.

r=0

n! ∂ nf ∂ ng (−1)n−r r n−r r n−r r!(n − r)! ∂q ∂p ∂p ∂q

and so, (f ∗ g)(q, p) =

.

 ∞   i h¯ r+s (−1)s ∂ r+s f ∂ r+s g . 2 r!s! ∂q r ∂ps ∂pr ∂q s

(4.5)

r,s=0

It is clear that this “star product” of f and g is not commutative .f ∗g = g ∗f unless fˆ and .gˆ commute. And .f ∗ g may not be real because .fˆgˆ may not be Hermitian. But the star product is associative, .f ∗ (g ∗ h) = (f ∗ g) ∗ h because the product ˆ etc. is associative. A direct proof is still required, but we omit of operators .fˆ, g, ˆ h, it here. See references at the end of this chapter.

.

4.2

Derivation for Star-Product Expression

The product .fˆgˆ .fˆg ˆ=







ˆ h] ˆ h] du dv du dv exp[i(uqˆ + v p)/ ¯ exp[i(u qˆ + v p)/ ¯ f (u, v) g (u , v )

has two operator exponentials. But the commutator of the exponents is a c-number, commuting with all operators. Using the identity .

ˆ exp(B) ˆ = exp(Aˆ + B) ˆ exp([A, ˆ B]/2) ˆ exp(A)

(4.6)

ˆ B] ˆ commutes with both .Aˆ and .B, ˆ we obtain therefore which holds when .[A,  ˆ .f g ˆ h] ˆ = du dv du dv exp[i((u + u )qˆ + (v + v )p)/ ¯ ∼



× exp[−i(uv − vu )/2h] ¯ f (u, v) g (u , v ).

(4.7)

This looks like the Weyl ordering of some phase space function whose Fourier transform goes with variables .(u + u ) and .(v + v ) and an extra factor of .exp[−i(uv − vu )/2h]. ¯ Instead of changing to those variables, we use a trick that

64

4 Star-product formulation

simplifies things enormously. We start with the ordinary product of .f (q, p) and g(q , p ) at two different points:

.

f (q, p)g(q , p ) =



.

du dv du dv exp[i(uq + vp + u q + v p )/h] ¯ ∼



× f (u, v) g (u , v ). A differential operator like  .

∂ ∂ ∂ ∂ − ∂q ∂p ∂p ∂q



produces a factor .−(uv − vu )/h¯ 2 inside the integral sign when acting on .f (q, p)g(q , p ). If we then put .q = q and .p = p , then .

   ∂ ∂ ∂ ∂  f (q, p)g(q , p ) − ∂q ∂p ∂p ∂q q=q ,p=p  = du dv du dv exp[i((u + u )q + (v + v )p)/h] ¯

i h¯ 2







×[−i(uv − vu )/2h] ¯ f (u, v) g (u , v ).

We can apply repeated powers of this operator as in an exponential and get  .

exp =

i h¯ 2 



∂ ∂ ∂ ∂ − ∂q ∂p ∂p ∂q



  f (q, p)g(q , p )

q=q ,p=p

du dv du dv exp[i((u + u )q + (v + v )p)/h] ¯ ∼



× exp[−i(uv − vu )/2h] ¯ f (u, v) g (u , v ).

Comparing it with (13.21) above, we see that the left-hand side is what we have defined as .(f ∗ g)(q, p) because converting q and p on the right- hand side into .qˆ and .pˆ we get precisely .fˆ .g. ˆ This completes the proof for the expression for the star product.

4.3

Wigner Distribution Function

Dynamics, whether classical or quantum mechanical, requires three things: specification of state, specification of observables, and equations of motion. We have seen the correspondence between the observables on the phase space and the Hermitian

4.3 Wigner Distribution Function

65

(or self-adjoint) operators on Hilbert space through the Weyl ordering. What can we say about specification of states? In classical mechanics the variables .(q, p) play a double role: they are both coordinates on the phase space and, like other observables, functions on it. Strictly speaking, it is not functions .q, p that specify the state; rather it is their specific values that identify the state. This is clarified if we define the classical state by a distribution function. For example, ρcl (q, p) = δ(q − q0 )δ(p − p0 )

.

is the state corresponding to the phase space point with coordinates .(q0 , p0 ). In a general case, the distribution function may not be a sharp Dirac delta function, but a probability distribution .ρ(q, p), positive definite and giving unity when integrated over the whole phase space. Dynamics will determine trajectories .t → (q(t), p(t)) for all the points of the phase space, leading to evolution of the probability distribution .ρ(q, p). For quantum theory we represent the state by a unit vector .ψ in the Hilbert space of the system. But as we discussed earlier, it is the unit ray which determines the state, all vectors in the ray being equally qualified to represent the same physical state. A better way to represent the state is to use the projection operator .ρˆψ = |ψψ|. We therefore look for a phase space function .ρψ (q, p) which under the Weyl ordering will produce .ρˆψ :  ρˆψ = |ψψ| =



du dv exp[i(uqˆ + v p)/ ˆ h] ¯ ρ (u, v)

.

 =



du dv exp[iuq/ ˆ h] ˆ h] ¯ exp[iv p/ ¯ exp[iuv/2h] ¯ ρ (u, v),

where in the second step, we have separated the exponents using the identity (4.6). ∼ Once we identify .ρ (u, v), we can construct the phase space function .ρψ (q, p). Let .|q , etc. be the eigenstates of .q. ˆ By taking matrix element with these eigenstates ψ(q )ψ ∗ (q ) =

.





ρ du dv exp[iuv/2h] ˆ h]|q  (u, v). ¯ exp[iuq /h] ¯ q | exp[iv p/ ¯

As q | exp[iv p/ ˆ h]|q  = δ(v + q − q ) ¯

.

we get:





ψ(q )ψ (q ) =

.





du exp[iu(q + q )/2h] ¯ ρ (u, q − q ).

66

4 Star-product formulation

We can choose variables .Q = (q + q )/2 and .x = q − q and invert the Fourier ∼ transform to obtain .ρ and from there  .ρψ (q, p) = dx exp(ixp/h)ψ(q − x/2)ψ ∗ (q + x/2). (4.8) ¯ The function .ρψ (q, p) on phase space is called the Wigner distribution function. The Wigner distribution function when integrated over the whole phase space (with a suitable factor) gives unity:  .

dq dp ρψ (q, p) = (2π h) ¯

 

=

dq dp 2π h¯



dx exp(ixp/h)ψ(q − x/2)ψ ∗ (q + x/2) ¯

dq|ψ(q)|2

= 1.

(4.9)

Exercise 4.2 The function .ρψ (q, p) can be interpreted as a “quasi-probability” density on the phase space with marginal probabilities for q and p equal to .|q|ψ|2 and .|p|ψ|2 , respectively:  .



dp ρψ (q, p) = |q|ψ|2 , (2π h) ¯ dq ρψ (q, p) = |p|ψ|2 . (2π h) ¯

As we shall see in Sect. 4.5, Wigner distributions for two normalized states .φ and ψ allow us to calculate the transition probability:

.

 |φ|ψ| = Tr(|φφ| |ψψ|) =

.

2

dq dp ρφ (q, p)ρψ (q, p). (2π h) ¯

(4.10)

This, incidentally, also shows that .ρψ (q, p) (for any .ψ) cannot be a true positivedefinite probability distribution, because if .φ and .ψ are orthogonal, the left-hand side will be zero. It can be proved that the Wigner function is positive definite only for Gaussian wave functions. Exercise 4.3 Calculate the Wigner function for the ground state .ψ0 and for the first level .ψ1 of a one-dimensional harmonic oscillator. Estimate the area of phase space where .ρψ1 (q, p) is negative.

4.4 Trace of fˆ

67

Trace of fˆ

4.4

Let .fˆ be an observable with the corresponding phase space function.f (q, p). We show that the trace (which is actually independent of the basis in which it is taken, but here we choose .|q  for simplicity) is equal to the integral of .f (q, p) over the phase space: Trfˆ =



.

dq q |fˆ|q  =



dq dp f (q, p). (2π h) ¯

(4.11)

The matrix element of .fˆ as given by (4.1) is q |fˆ|q  =



.

 =



du dvq | exp[i(uqˆ + v p)/ ˆ h]|q  f (u, v), ¯ ∼

du exp[iu(q + q )/2h] ¯ f (u, q − q ),

where we have taken exactly the same steps as in calculating the matrix elements of .|ψψ| for the Wigner distribution. Substituting the expression for the inverse Fourier transform  ∼ dq dp exp[−iuq/h¯ − i(q − q )p/h]f . f (u, q − q ) = ¯ (q, p) 2π h¯ in the above equation and integrating over u, .q |fˆ|q  =



  q + q dq dp δ q− exp[−ip(q − q )/h]f ¯ (q, p). (2π h) 2 ¯

(4.12)

Before we complete the proof for the trace formula, let us prove the Weyl inverse correspondence (4.2) quoted in the very beginning of this chapter. For this purpose ∼ define .Q = (q + q )/2 and .q = q − q , then ∼



Q− q /2|fˆ|Q+ q /2 =

.



∼ dq dp δ(q − Q) exp(−ip q /h)f ¯ (q, p). (2π h) ¯ ∼

Integrating over q, multiplying both sides with .exp(iP q /h), ¯ and integrating with ∼ respect to .q gives the formula (4.2), written for .Q, P in place of .q, p. We can now come back to Eq. (4.12). Put .q = q in this equation and integrate over .q to give us the result.

68

4 Star-product formulation

4.5

Trace of a Product: Expectation Values

From the formula for the trace (4.11), we know that   dq dp dq dp (f ∗ g)(q, p) = (g ∗ f )(q, p). .Tr(fˆg) ˆ = (2π h) (2π h) ¯ ¯

(4.13)

We now show that provided one of the functions, say, g, vanishes along with its derivatives at infinitely large values of q and p, then  .

dq dp (f ∗ g)(q, p) = (2π h) ¯



dq dp f (q, p)g(q, p). (2π h) ¯

(4.14)

The proof is based on the star-product formula and using integration by parts repeatedly:  .

dq dp (f ∗ g) = (2π h) ¯



r+s



dq dp  (−1)s ∂ r+s f ∂ r+s g (2π h) ¯ r,s r!s! ∂q r ∂ps ∂pr ∂q s



∂ 2(r+s) f dq dp  (−1)s (−1)r+s r+s r+s g (2π h) ∂q ∂p ¯ r,s r!s!

= 

i h¯ 2



i h¯ 2

r+s

dq dp fg (2π h) ¯   n ∞  i h¯ n 1 ∂ n f  n! dq dp  − g (−1)n−r + (2π h) 2 n! ∂q n ∂pn r!s! ¯ n=1 r=0  dq dp fg. = (2π h) ¯ =

All terms .n = 1 onward drop out because of the factor .(1 − 1)n . The main use of the trace of product formula is in calculating expectation values. If .ρψ is the function corresponding to pure normalized state .ψ .ψ|fˆ|ψ = Tr(|ψψ|fˆ) =

4.6



dq dp ρψ (q, p)f (q, p). (2π h) ¯

Eigenvalues

ˆ a = aψa can be written as The eigenvalue equation for an observable .Aψ ˆ ˆ a ψa | = a|ψa ψa | = |ψa ψa |A. A|ψ

.

(4.15)

4.7 Dynamics and the Moyal Bracket

69

Its counterpart in the phase space is A(q, p) ∗ ρa = ρa ∗ A(q, p) = aρa .

.

However, the eigenvalue problem in the phase space is neither convenient nor very useful.

4.7

Dynamics and the Moyal Bracket

The Schödinger equation, written for a normalized state .|ψ in terms of its projection operator .ρˆψ = |ψψ|, is i h¯

.

d ρˆψ d = i h¯ (|ψψ|) = Hˆ |ψψ| − |ψψ|Hˆ = [Hˆ , ρˆψ ]. dt dt

(4.16)

The projection operator is also called the “density matrix” corresponding to the “pure state” .|ψ. If .|r is an orthonormal basis, then ρˆψ =

.

   (ψr ψs∗ ) |rs| = (|ψr |2 )|rr| + (ψr ψs∗ ) |rs|. r,s

r

r =s

The first term on the extreme right shows the probabilities of .ψ to be in the states |r, and the second contains the quantum interference terms. An operator of the form .ρ = pr |rr| without the interference terms is said to represent a “mixed” state if .pr are probabilities, and there are more than one term in the sum. For a pure state, there is no orthonormal basis in which it can be put in the form of a mixed state. The density matrix equation (4.16) above is analogous to the Liouville equation of classical probability distribution on the phase space: .

.

dρcl = {H, ρcl }. dt

For this reason it is called quantum Liouville equation. This analogy becomes even more close when .ρˆψ = |ψψ| is replaced by its counterpart in the phase space as we see below. Let .ρψ (q, p) be the Wigner function corresponding to the pure state .|ψ. Then, the quantum Liouville equation (4.16) can be written in the phase space as .

dρψ 1 = (H ∗ ρψ − ρψ ∗ H ) ≡ [H, ρψ ]M . dt i h¯

70

4 Star-product formulation

Here we have defined the Moyal bracket between two phase space functions as [A, B]M =

.

  2 1 h¯ ↔ (A ∗ B − B ∗ A) = A(q, p) sin P B(q, p). i h¯ 2 h¯

(4.17)

If we were to expand the sine function, the first term is the classical Poisson bracket followed by terms with higher powers of .h¯ . The quantum Liouville equation in this version looks like a series in powers of .h¯ whose limit as .h¯ → 0 is the classical Liouville equation. The above discussion is in the Schrödinger picture as it appears in phase space. If we were to keep .ρψ independent of time, we can write the equations for any observable f as .

df = [f, H ]M . dt

(4.18)

The equations of motion for q and p are like the classical equations: .

dq ∂H = [q, H ]M = , dt ∂p

dp ∂H = [p, H ]M = − . dt ∂q

Because of this, the phase space volume (the “Liouville measure”) is preserved by time evolution. But in quantum mechanics, trajectories like these do not make sense. What corresponds to a quantum state is its Wigner function. And that is just a quasi-probability distribution, which cannot be squeezed in both coordinates and momenta at the same time. Moreover, as the Moyal bracket involves infinitely many differentiations for arbitrary functions, it is not clear that the time evolution for general functions is local.

4.8

Star Exponential and the Path Integral

The one-to-one correspondence between the Hilbert space operators and phase space functions corresponding to them allows us to construct the exponential evolution operator: .

exp∗ (−itH /h)(q, p) = ¯

  t n 1 (H ∗ H ∗ · · · ∗ H )(q, p). i h¯ n! n

From the inverse formula (4.2), we can relate it to the matrix element needed for the propagator .q | exp(−it Hˆ /h)|q ¯ . Start with  .

dp ∗ p) exp(−ipy/h) ¯ ¯ exp (−itH /h)(q, 2π h¯

4.9 Further Exercises

71

and substitute for the expression for .exp∗ (−itH /h)(q, p). Then, ¯ 

.

dp ∗ exp(−ipy/h) p) ¯ exp (−itH /h)(q, ¯ 2π h¯   dp exp[ip(x − y)/h]q − x/2| exp(−it Hˆ /h)|q + x/2 = dx ¯ ¯ 2π h¯

The integration on p on the right-hand side gives the delta function putting .x = y. Redefining .q = q − y/2 and .q = q + y/2 q | exp(−it . Hˆ /h)|q  ¯    dp q + q ∗ exp[ip(q − q )/h] , p . = exp (−itH / h) ¯ ¯ 2π h¯ 2

(4.19)

The left-hand side is related to the path integral of action .S([q(τ )]) for all paths q(τ ) with .q(0) = q and .q(t) = q :

.

 . exp(iS([q])/h) d[q] ¯

 =

  dp q + q ∗ exp[ip(q − q )/h] , p . exp (−itH / h) ¯ ¯ 2π h¯ 2

(4.20)

The usefulness of the formula is dependent, of course, on the feasibility of calculating the star exponential in the phase space.

4.9

Further Exercises

Exercise 4.4 Show that the Weyl ordering for q 2 p2 is (q 2 p2 )∧ =

.

1 2 2 [qˆ pˆ + qˆ pˆ 2 qˆ + qˆ pˆ qˆ pˆ + pˆ qˆ pˆ qˆ + pˆ qˆ 2 pˆ + pˆ 2 qˆ 2 ]. 6

By using [q, ˆ p] ˆ = i h¯ appropriately, show that we can also write 1 2 2 [qˆ pˆ + 2qˆ pˆ 2 qˆ + pˆ 2 qˆ 2 ] 4 1 = [pˆ 2 qˆ 2 + 2pˆ qˆ 2 pˆ + qˆ 2 pˆ 2 ]. 4

(q 2 p2 )∧ =

.

Hint: Write δ(u)δ(v) =

.

1 (2π )2

 dqdp exp[−i(qu + pv)]

72

4 Star-product formulation

and differentiate both sides with respect to u and v an appropriate number of times ∼

to get the Fourier transform f of monomials like f = q n pm . The Weyl ordering formula (4.1) will then involve differentiating exp[i(qu ˆ + pv)] ˆ with respect to u and v a suitable number of times before putting both u and v to zero. Effectively, for f = q n pm , it will involve looking for the coefficient of un v m in the term (qu ˆ + pv) ˆ n+m /(n + m)!. Remark The Weyl ordering for formula q n pm is more convenient in the N. McCoy [1] form: q n pm → (q n pm )∧ =

.

n n! 1  qˆ n−r pˆ m qˆ r n 2 r!(n − r)! r=0

=

m m! 1  pˆ m−r qˆ n pˆ r . n 2 r!(m − r)! r=0

For a proof of the formula, see N. McCoy, reference [1]. McCoy had corrected an error in a similar formula suggested by Born and Jordan. Exercise 4.5 For H (q, p) = (p2 + q 2 )/2 and F (q, p) = f (H ), show that (H ∗ F )(q, p) = H F −

.

h¯ 2 h¯ 2 f (H ) − Hf (H ). 4 4

Exercise 4.6 Choose H as in the previous problem and f (H, t) = exp∗ (−itH /h). ¯ Show that f satisfies

1 ∂f h¯ 2 h¯ 2 = . Hf − f − Hf ∂t i h¯ 4 4 and verify that it has the solution   2H 1 exp tan(t/2) . .f = cos(t/2) i h¯ Substitute f in the path integral formula (4.20) and obtain the path integral.

4.10

Notes

4.10.1 Weyl Correspondence and Wigner Distribution The Weyl ordering appears in the classic book The theory of groups and quantum mechanics [2]. The quantum mechanics of phase space can be said to have been

References

73

started by E. P. Wigner [3] when he introduced the phase space function described here. The original motivation was to apply it to problems of statistical mechanics in situations where quantum effects can be considered small. As an example, an expansion of the partition function in powers of .h¯ can be made to approximate physical quantities. This so-called Wigner-Kirkwood expansion (see Kirkwood [4]) lends itself for symbolic computation through *-product for higher-order terms, as in P. Sharan [5].

4.10.2 Star Product and Moyal Bracket Moyal bracket was introduced by J. E. Moyal in reference [6]. Orderings other than the Weyl ordering can be considered to define phase space quantities. A good reference is the paper by G. S. Agarwal and E. Wolf [7]. The associative property of the star product can be followed, apart from other details, in C. L. Mehta [8] and T. F. Jordan and E. C. G. Sudarshan [9]. The Moyal bracket when expanded in a series has the first term as the Poisson bracket, and then other terms have powers of .h. ¯ This is called by mathematicians as a deformation of the symplectic structure. An extensive introduction to this “deformation theory of quantization” can be found in the paper by F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz, and D. Sternheimer [10]. The relation of the path integral and the star exponential was given in P. Sharan [11].

References 1. N. McCoy, Proc. Natl. Acad. Sci. (USA) 18, 674 (1932) 2. H. Weyl, Theory of Groups and Quantum Mechanics (Dover Publications, New York, 1950), Chapter IV, section 14 3. E.P. Wigner, Phys. Rev. 40, 749 (1932). A review on various distribution functions is M. Hillery, R.F. O’Connel, M.O. Scully, E.P. Wigner, Phys. Rep. 106, 121 (1984) 4. J.G. Kirkwood, Phys. Rev. 44, 31 (1933) 5. P. Sharan, Computer Phys. Commun. 69, 235 (1992) 6. J.E. Moyal, Proc. Camb. Phil. Soc. 45, 99 (1949) 7. G.S. Agarwal, E. Wolf, Phys. Rev. D 2, 2161 (1970) 8. C.L. Mehta, J. Math. Phys. 5, 677 (1969) 9. T.F. Jordan, E.C.G. Sudarshan, Rev. Mod. Phys. 33, 515 (1961) 10. F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz, D. Sternheimer, Ann. Phys. 111, 61–150 (1978) 11. P. Sharan, Phy. Rev. D 20, 414 (1979)

5

Can There Be a Non-linear Quantum Mechanics?

Abstract

Quantum mechanics can be looked upon as a Hamiltonian theory with linear equations of motion by choosing the real and imaginary parts of the Schrodinger wave function as phase space variables with expectation value of the quantum Hamiltonian as the classical Hamiltonian. One can ask the question: does there exist a non-linear generalization of this formalism?

5.1

Hamiltonian Equations in Quantum Mechanics

Let us choose an orthonormal basis .|r, r, s = 1, 2, . . . and write .ψr = r|ψ as representatives of the state vector .|ψ. The Schrödinger equation can be expressed as  .i h Hrs ψs , r = 1, 2, . . . (5.1) ¯ ψ˙r = s

where .Hrs = r|Hˆ |s is the matrix of the Hamiltonian .Hˆ in this basis. For a vector .ψ, define real quantities .qr , pr , and .ρr , θr by ψr =

.

(qr + ipr ) = ρr exp(iθr ), √ 2h¯

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Sharan, Some Unusual Topics in Quantum Mechanics, Lecture Notes in Physics 1020, https://doi.org/10.1007/978-3-031-35962-0_5

75

76

5 Non-linear quantum mechanics

and write the expectation value .ψ|H |ψ in terms of .qr ’s and .pr ’s as H (q, p) =



.

ψr∗ Hrs ψs ,

r,s

 1  + − Hrs (qr qs + pr ps ) + Hrs (pr qs − ps qr ) , . 2h¯ r,s    + − = ρr ρs Hrs cos(θr − θs ) + Hrs sin(θr − θs ) ,

=

(5.2) (5.3)

r,s

where + − Hrs = Hrs + iHrs ,

.

+ + Hrs = Hsr ,

− − Hrs = −Hsr .

The real and imaginary parts of .Hrs are, as shown, a real symmetric matrix .H + and a real antisymmetric matrix .H − . The Schrödinger equation (5.1) and its complex conjugate can be written as q˙r =

.

∂H , ∂pr

p˙ r = −

∂H . ∂qr

(5.4)

The norm ψ2 =



.

|ψr |2 =

r

  1  2 qr + pr2 = ρr2 2h¯ r r

is preserved by these equations, as expected. Therefore all motion is restricted to the surface: .

   qr2 + pr2 = constant, or ρr2 = constant. r

5.2

r

Observables and Poisson Bracket

Any observable represented by a Hermitian matrix can be separated into real and imaginary parts: − Ars = A+ rs + iArs ,

.

5.2 Observables and Poisson Bracket

77

that is, into real symmetric and real antisymmetric matrices .A+ and .A− , respectively. The expectation value of A has the standard form: A(q, p) = (ψ, Aψ)  1  + = Ars (qr qs + pr ps ) + A− rs (pr qs − ps qr ) . 2h¯ r,s    − ρr ρs A+ = rs cos(θr − θs ) + Ars sin(θr − θs ) .

.

(5.5) (5.6)

r,s

Note that just as .ψ is not directly observable, .qr , pr are also not observables. Exercise 5.1 Show that if a normalized .ψ corresponds to .(q, p) and normalized . to .(Q, P ), then the transition probability can be written as ⎡

2

2 ⎤   1 2 ⎣ (qr Qr + pr Pr ) + (pr Qr − qr Pr ) ⎦ . .||ψ| = (2h) ¯ 2 r r =

1  (qr qs + pr ps )(Qr Qs + Pr Ps ) (2h) ¯ 2 r,s  +(pr qs − ps pr )(Pr Qs − Ps Qr ) .

(5.7)

(5.8)

Let .A, B be two Hermitian matrices, and .A(q, p) and .B(q, p) the observables constructed as above from the expectation values, and let [A, B] = i hC. ¯

.

Then the Poisson bracket   ∂A ∂B ∂A ∂B .{A, B} = − ∂qr ∂pr ∂pr ∂qr r

(5.9)

corresponds to C. Proof Writing in matrix notation, let the real part (symmetric) and imaginary part (antisymmetric) of matrices A and B be .a, b, c, d, respectively: A = a + ib,

.

B = c + id,

then i hC ¯ = [A, B] = ([a, c] − [b, d]) + i([a, d] + [b, c])

.

= i[([a, d] + [b, c]) + i([b, d] − [a, c])].

78

5 Non-linear quantum mechanics

This shows that the real and imaginary parts of the matrix C are C=

.

1 [([a, d] + [b, c]) + i([b, d] − [a, c])]. h¯

On the other hand, using A(q, p) =

1  [ars (qr qs + pr ps ) + brs (pr qs − ps qr )] , 2h¯ r,s

B(q, p) =

1  [crs (qr qs + pr ps ) + drs (pr qs − ps qr )] 2h¯ r,s

.

the Poisson bracket can be calculated: {A(q, p), B(q, p)} =

.

1  qr [a, c]rs ps + pr [b, d]rs qs h¯ 2 r,s

 +qr (ad − cb)rs qs + pr (bc − da)rs ps . The first two terms on the right-hand side can be combined as follows: .[a, c] and [b, d] are antisymmetric matrices because both a and c are symmetric and both b and d are antisymmetric. As there is a summation over r and s, only the antisymmetric part .(pr qs − ps qr )/2 survives in the .[b, d] term and .(ps qr − pr qs )/2 in the .[a, c] term. Interchanging dummy indices .r, s, we get the first two terms as

.

.

− ([a, c] − [b.d])rs (pr qs − ps qr )/2.

Similarly, in the last two terms qr (ad)rs qs = (ad)rs qr qs = (1/2)[(ad)rs + (ad)sr ]qr qs = (1/2)[a, d]rs qr qs

.

where in the last step we use the antisymmetry of d and symmetry of a (ad)sr qr qs = ast dtr qr qs = −drt ats qr qs .

.

Similarly for the .pr ps term. Thus, {A(q, p), B(q, p)} =

.

1 2h¯ 2

[([a, d] + [b, c])rs (qr qs + pr ps )

+([b, d] − [a, c])rs pr qs ] = C(q, p).

5.3 Symmetry Transformations

79

Exercise 5.2 Use the definition of the Poisson brackets in the .(qr , pr ) coordinates to calculate the bracket in variables .ρr2 , θr as {A, B} =

.

  1  ∂A ∂B ∂A ∂B − ∂θr ∂ρr2 h¯ r ∂ρr2 ∂θr

(5.10)

and  .

5.3

 1 ρr2 , θs = δrs . h¯

(5.11)

Symmetry Transformations

An infinitesimal canonical transformation is generated by an observable .A(q, p) through the Poisson bracket: δqt = {qt , A(q, p)},

.

δpt = {pt , A(q, p)}

which, for an observable constructed of the standard form (5.5) in terms of .A± rs , is    + Ats ps + A− ts qs h¯ s    + δpt = − Ats qs − A− ts ps h¯ s δqt =

.

leading to an infinitesimal unitary transformation    i i   + − 1− A .ψt + δψt = ψt − ψs . Ats + iAts ψs = h¯ s h¯ ts s as expected. Exercise 5.3 A unitary transformation .ψ  = U ψ with .Urs as its matrix is given. .Urs can be separated into its real and imaginary parts as Urs = urs + ivrs .

.

Show these matrices satisfy (T denotes transpose) uuT + vv T = 1 = uT u + v T v,

.

uT v − v T u = 0 = uv T − vuT ,

80

5 Non-linear quantum mechanics

and give rise to a linear canonical transformation:  .

q p



 =

u −v v u

  q . p

All observables, since they correspond to generators of infinitesimal unitary transformations, preserve the norm .ψ2 . This shows up as the vanishing of the Poisson bracket of the norm square:   1  2 qr + pr2 = ρt2 2h¯ r t

n = ψ2 =

.

with any .A(q, p) in the standard form. It can be checked immediately. The bracket {n, A} =



.

  − ρt2 , ρr ρs A+ cos(θ − θ ) + A sin(θ − θ ) r s r s rs rs

t,rs

=

 ∂  + 1 Ars cos(θr − θs ) + A− ρr ρs rs sin(θr − θs ) ∂θt h¯ t,rs

= 0, because the derivative with respect to .θ turns symmetric cosine into antisymmetric sine (and vice versa) causing the sum of a product of symmetric and antisymmetric quantities to zero.

5.4

Eigenvalues

Eigenvector of an observable A in quantum mechanics can be determined as that ψ which is an extremum of its average value .(ψ, Aψ) under variation .ψ → ψ + δψ subject to constraint that .ψ2 is kept constant. If we introduce a Lagrange’s multiplier .λ then the condition of eigenvector amounts to demanding the extremum of

.

(ψ, Aψ) − λ(ψ, ψ),

.

under free variation of .ψ. Let .ψ0 be one such extremal point, then the corresponding eigenvalue is just the value of .

 (ψ, Aψ)  . (ψ, ψ) ψ0

5.5 Non-linear Quantum Mechanics?

81

In the classical language we are using here, this can be taken over as the extremal points of A(q, p) − λn(q, p),

.

under the free variations of .qr , pr , r = 1, 2, . . . . The eigenvalue will correspond to the value of .A(q, p)/n(q, p) at the extremal point.

5.5

Non-linear Quantum Mechanics?

What we did in the last few sections is simply to put ordinary quantum mechanics into the language of classical mechanics. This is a linear Hamiltonian system, of an infinite number of dimensions in the general case, and it has the following features:

1. The original coordinates $q_r, p_r$ or $\rho_r, \theta_r$ do not represent the state completely. The coordinates $q_r, p_r$ themselves are not observables.
2. All observables are quadratic in $q, p$ and of the standard form (5.5) or (5.6). Their eigenvalues can be determined by a variational principle applied to $A$ restricted to the surface of constant $n$, or by diagonalizing the Hermitian matrix $A = A^+ + iA^-$, where
$$ A^{+}_{rs} = \frac{\hbar}{2}\left(\frac{\partial^2 A}{\partial q_r \partial q_s} + \frac{\partial^2 A}{\partial p_r \partial p_s}\right), \qquad A^{-}_{rs} = \frac{\hbar}{2}\left(\frac{\partial^2 A}{\partial p_r \partial q_s} - \frac{\partial^2 A}{\partial p_s \partial q_r}\right). $$
3. Symmetry transformations are determined by linear canonical transformations. In particular, the equations of motion for observables are linear.

One can naturally ask the question: if quantum theory is equivalent to a classical mechanical system with a phase space, Poisson bracket, and linear Hamiltonian equations of motion, could it be a special case of a more general, non-linear theory?

One can ask a counter question: is there a need to look for a more general theory? Quantum mechanics is well established, with no experiment suggesting any conflict so far, despite the fact that its interpretation through measurement theory, the projection postulate or collapse of the wave packet, non-separability, etc. is still not well understood.

There are two reasons worth considering such general theories. When special relativity was well established, the motivation for searching for a general theory of relativity came from the fact that the gravitational field (the only other classical field theory apart from electrodynamics) could not be accommodated within the special theory. Quantum theory is very well adapted to special relativity, but has so far failed to accommodate the general relativistic theory of


gravitation. Could it be due to the conflict of an essentially linear quantum theory with an essentially non-linear general relativity?

The second reason, due to Weinberg [1], is this: quantum mechanics has been verified extensively, but it has not been tested enough, in the sense that there are no alternative theories which give predictions different from quantum mechanics. A non-linear quantum theory offers such a chance: hence the interest.

So, which features of standard quantum theory should we carry over to the general theory? Following Weinberg we make the assumptions:

1. The non-linear effects are small; therefore the observables have the standard homogeneous quadratic form plus a non-linear term.
2. The non-linear term to be added is also homogeneous of second degree in the $q_r$'s and $p_r$'s, such that
$$ \sum_r\left(q_r\frac{\partial A}{\partial q_r} + p_r\frac{\partial A}{\partial p_r}\right) = 2A, \qquad \sum_r\left(p_r\frac{\partial A}{\partial q_r} - q_r\frac{\partial A}{\partial p_r}\right) = 0. \tag{5.12} $$

3. All symmetry transformations preserve the norm $n$.

The condition 2 above, which we call the "Weinberg condition," ensures that the states $\psi$ and $z\psi$ represent the same physical state, with the same average values for all observables, for any complex number $z$. If we treat $\psi_r$ and $\psi_r^*$ as independent variables, the Weinberg condition for $A(q,p) = A(\psi, \psi^*)$ would be written as
$$ \sum_r \psi_r\frac{\partial A}{\partial\psi_r} = A = \sum_r \psi_r^*\frac{\partial A}{\partial\psi_r^*}. \tag{5.13} $$

Exercise 5.4 Show that the Weinberg condition can be satisfied in the $(\rho, \theta)$ expression for an observable $A$ if
$$ \sum_r \rho_r^2\frac{\partial A}{\partial\rho_r^2} = A, \qquad \sum_r \frac{\partial A}{\partial\theta_r} = \hbar\{n, A\} = 0. \tag{5.14} $$

In the next section, we see how one can implement these requirements and see the non-linear effects in some very simple models.
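As a quick sketch (not from the book), the two conditions in (5.14) can be checked numerically for a concrete observable of the kind used in the examples of the next section, written in the variables $r_i = \rho_i^2$ and $\theta_i$. The parameter values and the finite-difference step are arbitrary choices made here for illustration.

import numpy as np

E1, E2, g = 1.0, 2.0, 0.7

def H(r, theta):
    # homogeneous of degree one in the r's, independent of the thetas
    return E1 * r[0] + E2 * r[1] + 2 * g * r[0] * r[1] / (r[0] + r[1])

rng = np.random.default_rng(2)
r = rng.uniform(0.1, 1.0, size=2)
theta = rng.uniform(0.0, 2 * np.pi, size=2)

eps = 1e-6
dH_dr = np.array([(H(r + eps * np.eye(2)[i], theta) - H(r, theta)) / eps for i in range(2)])
dH_dth = np.array([(H(r, theta + eps * np.eye(2)[i]) - H(r, theta)) / eps for i in range(2)])

print(np.dot(r, dH_dr), H(r, theta))   # sum_r rho_r^2 dH/d(rho_r^2) = H (Euler relation)
print(dH_dth)                          # both vanish, so sum_r dH/d(theta_r) = 0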

5.6 Non-linear Terms: Simple Examples

In order to introduce non-linear terms satisfying the Weinberg condition, we can simply take the observables in the standard form (5.5) or (5.6) and replace the constant matrices $A^{\pm}$ by similar (i.e., symmetric and antisymmetric) matrices that depend on $(q,p)$ or $(\rho,\theta)$. We take some simple examples to illustrate the idea.

5.6.1 Example 1: Extra Energy Level

We take a two-level system in the basis of eigenstates $E_1, E_2$ of the linear part $H_0$ of the full Hamiltonian. In the $(\rho,\theta)$ coordinates
$$ H_0 = E_1\rho_1^2 + E_2\rho_2^2, $$
which corresponds to the $2\times2$ symmetric matrix $H_0^{+}$ being diagonal with elements $E_1$ and $E_2$, and the antisymmetric matrix being zero. We now add a non-linear part to the symmetric matrix:
$$ \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix} + g\begin{pmatrix} 0 & \rho_1\rho_2/(\rho_1^2+\rho_2^2) \\ \rho_1\rho_2/(\rho_1^2+\rho_2^2) & 0 \end{pmatrix}, $$
so that the Hamiltonian becomes
$$ H = E_1\rho_1^2 + E_2\rho_2^2 + 2g\,\frac{\rho_1^2\rho_2^2}{\rho_1^2+\rho_2^2}. $$

As there is no dependence on $\theta_1$ or $\theta_2$, $\rho_1, \rho_2$ are constants in time and the $\theta$'s grow linearly with $t$:
$$ \dot\theta_1 = -\frac{1}{\hbar}\frac{\partial H}{\partial\rho_1^2}, \qquad \dot\theta_2 = -\frac{1}{\hbar}\frac{\partial H}{\partial\rho_2^2}, $$
or,
$$ -\hbar\dot\theta_1 = E_1 + 2g\left(\frac{\rho_2^2}{\rho_1^2+\rho_2^2}\right)^{2}, \qquad -\hbar\dot\theta_2 = E_2 + 2g\left(\frac{\rho_1^2}{\rho_1^2+\rho_2^2}\right)^{2}. $$

It is clear that the original eigenstates of $H_0$ are also eigenstates of the non-linear Hamiltonian with the same eigenvalues $E_1$ and $E_2$, because the additional term is zero if either $\rho_1^2$ or $\rho_2^2$ is zero. This can also be checked directly. A normalized state starting with $(\rho_1, \rho_2)$, $\rho_1^2 + \rho_2^2 = 1$, at time $t = 0$ will become
$$ \begin{pmatrix} \psi_1(t) \\ \psi_2(t) \end{pmatrix} = \begin{pmatrix} \rho_1\exp\left(-it(E_1 + 2g\rho_2^4)/\hbar\right) \\ \rho_2\exp\left(-it(E_2 + 2g\rho_1^4)/\hbar\right) \end{pmatrix}. \tag{5.15} $$

Therefore, except for the eigenstates corresponding to $E_1, E_2$ ($\rho_1 = 1, \rho_2 = 0$ or $\rho_2 = 1, \rho_1 = 0$), which have the usual time dependence $\exp(-itE_1/\hbar)$ or $\exp(-itE_2/\hbar)$ respectively, for all other states the relative phase of the two components depends on the initial value of $\rho_1$ (or $\rho_2$).
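The following sketch (not from the book) evolves two different initial states with the closed-form solution (5.15) and prints the relative phase at a fixed time; in the linear case ($g = 0$) the relative phase would advance at the same rate $-(E_2 - E_1)/\hbar$ for every initial state. The numerical values of $E_1, E_2, g, t$ are arbitrary illustrative choices.

import numpy as np

hbar, E1, E2, g, t = 1.0, 1.0, 2.0, 0.5, 3.0

def state(rho1, t):
    rho2 = np.sqrt(1.0 - rho1**2)
    psi1 = rho1 * np.exp(-1j * t * (E1 + 2 * g * rho2**4) / hbar)
    psi2 = rho2 * np.exp(-1j * t * (E2 + 2 * g * rho1**4) / hbar)
    return psi1, psi2

for rho1 in (0.3, 0.8):
    psi1, psi2 = state(rho1, t)
    rel_phase = np.angle(psi2 * np.conj(psi1))
    print(rho1, rel_phase)     # depends on rho1, unlike the linear (g = 0) case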


Let us assume $E_2 > E_1$ without any loss of generality, and write $E_2 - E_1 = \Delta E$. Then there is an additional eigenvalue, provided $2g \ge \Delta E$:
$$ E_3 = \frac{E_1 + E_2}{2} + \frac{g}{2} + \frac{(\Delta E)^2}{8g}, $$
corresponding to the eigenvector (in the Hilbert space language of $\psi_r$)
$$ \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \begin{pmatrix} \rho_1 \\ \rho_2 \end{pmatrix}\exp(-iE_3 t/\hbar), $$
where
$$ \rho_1 = \frac{1}{\sqrt2}\left(1 - \frac{\Delta E}{2g}\right)^{1/2}, \qquad \rho_2 = \frac{1}{\sqrt2}\left(1 + \frac{\Delta E}{2g}\right)^{1/2}. $$
It should be noted that the extra eigenvalue appears only for $2g > \Delta E$.

5.6.2 Example 2: Asymptotic States

As a second simple example, choose
$$ H = E_1\rho_1^2 + E_2\rho_2^2 + g\left(\rho_1^2 - \rho_2^2\right)(\theta_1 - \theta_2). $$
This corresponds to non-linearity in the $\theta$ variables, obtained by choosing the $2\times2$ antisymmetric $H^{-}_{rs}$ with
$$ H^{-}_{12} = g\,\frac{\rho_1^2 - \rho_2^2}{\rho_1\rho_2}\,(\theta_1 - \theta_2). $$

We are also allowing the phases to take all values, not being limited to the range $(0, 2\pi)$. In regular quantum mechanics we only require periodic functions of the phases. The equations of motion are
$$ \dot\rho_1^2 = \frac{1}{\hbar}\frac{\partial H}{\partial\theta_1} = \frac{g}{\hbar}\left(\rho_1^2 - \rho_2^2\right), \qquad \dot\rho_2^2 = \frac{1}{\hbar}\frac{\partial H}{\partial\theta_2} = -\frac{g}{\hbar}\left(\rho_1^2 - \rho_2^2\right), $$


which shows that $(\rho_1^2 + \rho_2^2)$ remains constant and can be chosen equal to 1, so that
$$ \rho_1^2(t) = \rho_1^2(0)\exp(2gt/\hbar) + \left(1 - \exp(2gt/\hbar)\right)/2, $$
$$ \rho_2^2(t) = \rho_2^2(0)\exp(2gt/\hbar) + \left(1 - \exp(2gt/\hbar)\right)/2. $$
For the $\theta$'s,
$$ \dot\theta_1 = -\frac{1}{\hbar}\frac{\partial H}{\partial\rho_1^2} = -\frac{1}{\hbar}\left[E_1 + g(\theta_1 - \theta_2)\right], \qquad \dot\theta_2 = -\frac{1}{\hbar}\frac{\partial H}{\partial\rho_2^2} = -\frac{1}{\hbar}\left[E_2 - g(\theta_1 - \theta_2)\right]. $$
Therefore (if we write $\Delta\theta = \theta_2 - \theta_1$ and $\Delta E = E_2 - E_1$)
$$ (\theta_1 + \theta_2)(t) = (\theta_1 + \theta_2)(0) - t(E_1 + E_2)/\hbar $$
and
$$ \Delta\theta(t) = \Delta\theta(0)\exp(-2gt/\hbar) - \frac{\Delta E}{2g}\left(1 - \exp(-2gt/\hbar)\right). $$

As $(\rho_1^2 + \rho_2^2) = 1$, the solution for $\rho_1^2$ and $\rho_2^2$ shows that the coupling constant $g$ has to be negative (otherwise one of them would leave the interval $[0,1]$). Therefore all states grow or decay asymptotically to the state with $\rho_1^2 = 1/2$, $\rho_2^2 = 1/2$. This is an unphysical case, as $\theta_1$ and $\theta_2$ can take arbitrarily large values, making the energy negative.
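A short numerical sketch (not from the book) of this behaviour: integrating the equations of motion with a simple Euler step for $g < 0$ shows $\rho_1^2, \rho_2^2$ relaxing to $1/2$ while the phase difference grows without bound. The parameter values, initial conditions, and step size are arbitrary illustrative choices.

import numpy as np

hbar, E1, E2, g = 1.0, 1.0, 2.0, -0.4
r1, r2, th1, th2 = 0.9, 0.1, 0.0, 0.0      # r_i stands for rho_i^2
dt, steps = 1e-3, 10000
for _ in range(steps):
    dr = g / hbar * (r1 - r2)
    dth1 = -(E1 + g * (th1 - th2)) / hbar
    dth2 = -(E2 - g * (th1 - th2)) / hbar
    r1, r2 = r1 + dt * dr, r2 - dt * dr
    th1, th2 = th1 + dt * dth1, th2 + dt * dth2

print(r1, r2)        # both close to 1/2
print(th2 - th1)     # large and still growing: the unphysical runaway of the phases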

5.7 Further Exercises

Exercise 5.5 Every Hamiltonian system preserves the volume elements of the phase space under Hamiltonian evolution. Verify this statement in the formalism of Sect. 5.1. How can one handle a volume element for the infinite-dimensional Hilbert space? Does it make sense to consider probability distributions on this classical space? (A finite-dimensional numerical illustration follows below.)

Exercise 5.6 The Hermitian observables of quantum mechanics have the standard form (5.5). What will be the expression for the (in general non-Hermitian) product $AB$, where $A$ and $B$ are Hermitian? Will the expression satisfy the Weinberg condition? Show that the expressions satisfy the associative law $(AB)C = A(BC)$, as expected.
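Related to Exercise 5.5, here is an illustrative sketch (not the book's solution): for a quadratic observable the Hamiltonian flow on $(q,p)$ is linear, $z(t) = e^{tK}z(0)$, and phase-space volume is preserved because $\det e^{tK} = e^{t\,\mathrm{tr}\,K} = 1$. The matrices are generated randomly for illustration.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 3
Ap = rng.normal(size=(n, n)); Ap = (Ap + Ap.T) / 2          # A^+ : symmetric
Am = rng.normal(size=(n, n)); Am = (Am - Am.T) / 2          # A^- : antisymmetric
hbar = 1.0

# dq/dt = (A+ p + A- q)/hbar, dp/dt = -(A+ q - A- p)/hbar  (Sect. 5.3)
K = np.block([[Am, Ap], [-Ap, Am]]) / hbar
print(np.trace(K))                        # 0, since tr(A^-) = 0
print(np.linalg.det(expm(2.7 * K)))       # 1: the flow preserves phase-space volume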


Exercise 5.7 Verify the expressions in Sect. 5.6.1 for the eigenvalue E3 and the corresponding eigenvector.

5.8 Notes

5.8.1 Non-linear Quantum Mechanics

The fact that quantum mechanics is a linear Hamiltonian theory has been known from the very beginning. Dirac [2] in 1927 wrote the Hamiltonian equations derived from the expectation value of a quantum mechanical operator, identified the canonical variables and quantized once again, thereby achieving a "second quantization" for the first time, although the term came to be used much later! The material in the first four sections of this chapter is based on a 1983 unpublished Jamia Millia Islamia preprint of the author and Vasundhra Choudhry [3]. In 1989, Weinberg's "Testing Quantum Mechanics" paper [1] practically exhausted the possibilities of non-linear quantum theory, together with experimental bounds. The reasoning for such models goes like this: one of the features of non-linear quantum mechanics is the dependence of the relative phase in a two-level system (as in the example in Sect. 5.6.1) on the initial state. Under fairly general assumptions, this would lead to a broadening of the absorption or emission frequency of the radiation causing transitions between the two levels. A measurement by Bollinger et al. [4] gives an upper limit on the relative size of the non-linear term in the Hamiltonian: less than $10^{-20}$ eV.

5.8.2 Non-linear Schrodinger Equation

The equation
$$ i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi + \lambda|\psi|^2\psi $$

has been discussed in great detail by Bialynicki-Birula and Mycielski [5]. See also the problems with non-linear quantum mechanics as discussed by Kibble [6].

References

1. S. Weinberg, Phys. Rev. Lett. 62, 485 (1989); Testing quantum mechanics, Ann. Phys. 194, 330 (1989)
2. P.A.M. Dirac, The quantum theory of the emission and absorption of radiation. Proc. Roy. Soc. A 114, 243 (1927)
3. P. Sharan, V. Choudhry, Quantum Mechanics in the 'Classical' Language: A Step Towards a Non-linear Quantum Theory, Preprint (Jamia Millia Islamia, New Delhi, 1983)
4. J.J. Bollinger, J.D. Prestage, S.M. Itano, D.J. Wineland, Phys. Rev. Lett. 63, 1031 (1989)


5. I. Bialynicki-Birula, J. Mycielski, Non-linear wave mechanics. Ann. Phys. 100, 62 (1976) 6. T.W.B. Kibble, Relativistic models of nonlinear quantum mechanics. Commun. Math. Phys. 64, 73 (1978)

6 Interaction = Exchange of Quanta

Abstract

Interaction between a particle and a potential, or between two particles, illustrates the fundamental fact of quantum theory that the exchange of a quantum leads to interaction.

6.1 Non-relativistic “Potential”

The concept of force is the same in classical and in quantum theory: a particle changes its linear momentum under the influence of a force. Let $|p\rangle$ be the state of a particle with momentum $p$. If the Hamiltonian is just $\hat P^2/2m$, the momentum is constant in time. Since the Hamiltonian determines the time development, we must include in the Hamiltonian terms which change the momentum in order to introduce a force.

What kind of operators change momentum? Let us define an operator $\hat h(k)$, for a fixed $k$, acting on the basis $\{|p\rangle\}$ as
$$ \hat h(k)|p\rangle = |p + k\rangle. $$
This is not Hermitian (it is actually unitary), and its adjoint is
$$ \hat h^\dagger(k)|p\rangle = |p - k\rangle, $$
which shows that $\hat h^\dagger(k) = \hat h(-k)$. Of course $\hat h(k)$ gives just a jump, or a kick, to the momentum by a fixed amount $k$. A general Hamiltonian will include terms for all $k$. Let $v(k)$ be a complex number representing the strength or amplitude of the force for a kick by amount $k$.


Fig. 6.1 Symbolic expression of a particle changing momentum

We add to the Hamiltonian the following term:
$$ \hat H_1 = \int d^3k\left[v(k)\hat h(k) + v^*(k)\hat h^\dagger(k)\right] = \int d^3k\left[v(k)\hat h(k) + v^*(k)\hat h(-k)\right] = \int d^3k\left[v(k) + v^*(-k)\right]\hat h(k) \equiv \int d^3k\, V(k)\,\hat h(k). $$

From Chap. 1 we know what the operator $\hat h(k)$ is:
$$ \hat h(k) = \exp\!\left(i k\cdot\hat X/\hbar\right), $$
where $\hat X$ is the position operator. Therefore the term to be added to the Hamiltonian, in the Schrödinger representation, acts on the wave functions as
$$ \langle x|\hat H_1|\psi\rangle = \int d^3k\, V(k)\,\exp(i k\cdot x/\hbar)\,\psi(x) \equiv \mathcal V(x)\,\psi(x). $$

The function $\mathcal V(x)$ is called the potential of the force. It is sensible to look at the potential as a classical field which absorbs or emits quanta that can change the momentum of the particle interacting with it. The quanta of the field are not a quantized part of the system, but the picture looks much like Fig. 6.1.
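A small numerical sketch (not from the book) of the statement $\hat h(k)|p\rangle = |p+k\rangle$ in the Schrödinger representation: multiplying a one-dimensional wave packet by $e^{ikx/\hbar}$ shifts its momentum distribution by $k$, which can be checked with a discrete Fourier transform. The packet, grid, and the value of $k$ are arbitrary illustrative choices.

import numpy as np

hbar = 1.0
x = np.linspace(-40, 40, 4096)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)     # momentum grid

psi = np.exp(-x**2 / 4) * np.exp(1j * 0.7 * x / hbar)   # a Gaussian packet with <p> = 0.7
k = 2.5
psi_kicked = np.exp(1j * k * x / hbar) * psi            # apply h(k)

def mean_p(phi):
    phit = np.fft.fft(phi)
    w = np.abs(phit)**2
    return np.sum(p * w) / np.sum(w)

print(mean_p(psi), mean_p(psi_kicked))   # the mean momentum moves up by exactly k = 2.5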

6.2 The Simplest Model of Quanta Exchange

We take two oscillators with frequencies $\omega$ and $\Omega$. One can think of these systems as "fields" of "particles", or quanta, with energy $\hbar\omega$ and $\hbar\Omega$. Thus the first oscillator with $n$ quanta has energy $n\hbar\omega$ above the ground state and, similarly, the second with $N$ quanta has energy $N\hbar\Omega$.


A simple, exactly solvable, toy model of these two interacting oscillators illustrates many features.

6.2.1 The Hamiltonian

Let the two types of "particles" be called B and A. They are described by operators $b, b^\dagger$ and $a, a^\dagger$, the usual "ladder" operators of harmonic oscillators, with commutation rules
$$ [b, b^\dagger] = 1, \qquad [a, a^\dagger] = 1, \qquad [a, b] = 0, \qquad [a, b^\dagger] = 0. $$
The "free", non-interacting, part of the Hamiltonian is
$$ H_0 = \hbar\Omega\, b^\dagger b + \hbar\omega\, a^\dagger a, \tag{6.1} $$
omitting the constant ground-state energies $\hbar\omega/2$ and $\hbar\Omega/2$. Energy levels of this free system are labeled by two integers $N$ and $n$:
$$ E_{N,n} = N\hbar\Omega + n\hbar\omega. $$

The interpretation of these levels is that there are $n$ quanta of type A and $N$ of type B. The total Hamiltonian including the interaction is now chosen as
$$ H = H_0 + \hbar\lambda\, b^\dagger b\,(a + a^\dagger), \tag{6.2} $$
where $\lambda$ is a coupling constant with the dimensions of a frequency.

The "vacuum" or zero-quanta state $\Phi_0$ of the free Hamiltonian is defined by
$$ b\,\Phi_0 = 0, \qquad a\,\Phi_0 = 0. \tag{6.3} $$
This state also happens to be an eigenstate of the total Hamiltonian,
$$ H\,\Phi_0 = 0, $$
which follows because $a^\dagger$ and $b$ commute.

6.2.2 Bare and Dressed States

The number operators for the two particles are $N_B = b^\dagger b$ and $N_A = a^\dagger a$. We see that
$$ [H, N_B] = 0. $$


This means that under time evolution a state with some fixed number of B-quanta retains this number. That is not the case with A-quanta, because $[H, N_A] \neq 0$. Under an infinitesimal time evolution $\psi \to (1 - i\,dt\,H/\hbar)\psi$, the interaction term $\lambda b^\dagger b(a + a^\dagger)$ causes annihilation or creation of A-quanta but keeps the number of B-quanta the same. An eigenstate of $H$ with one B-quantum has an indefinite number of A-quanta around it. The B-quantum acquires a "cloud" of A-quanta, and we say that this eigenstate of $H$ is a dressed state. In contrast, the eigenstates of $H_0$ are said to contain bare B-quanta.
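These statements can be checked numerically in a truncated Fock basis. The sketch below (not from the book) builds $H$ of Eq. (6.2), verifies that $N_B$ is conserved while $N_A$ is not, and compares the lowest one-B-quantum eigenvalue with the dressed-state energy $\hbar(\Omega - \lambda^2/\omega)$ derived in the next subsection. The truncation sizes and parameter values are arbitrary illustrative choices.

import numpy as np

hbar, Omega, omega, lam = 1.0, 5.0, 1.0, 0.3
nB, nA = 3, 40                                   # truncation sizes

def ladder(n):                                   # annihilation operator on n levels
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

b = np.kron(ladder(nB), np.eye(nA))
a = np.kron(np.eye(nB), ladder(nA))
NB, NA = b.T @ b, a.T @ a
H = hbar * Omega * NB + hbar * omega * NA + hbar * lam * NB @ (a + a.T)

print(np.linalg.norm(H @ NB - NB @ H))           # 0: the number of B-quanta is conserved
print(np.linalg.norm(H @ NA - NA @ H) > 0)       # True: A-quanta are not conserved
evals, evecs = np.linalg.eigh(H)
one_B = [i for i in range(len(evals))
         if abs(evecs[:, i] @ NB @ evecs[:, i] - 1) < 1e-6]
print(evals[one_B[0]], hbar * (Omega - lam**2 / omega))   # agree up to truncation error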

6.2.3 Single-Dressed B-particle

We now calculate a single-dressed B-particle state. It is a state that contains one bare B-quantum and will have any number of A-quanta in the cloud surrounding it. Let $\Psi$ be a one-B-particle eigenstate of $H$:
$$ H\Psi = E\Psi. $$
Choose $\Psi$ as a superposition of states with one B-particle and $0, 1, 2, \ldots$ A-particles:
$$ \Psi = \left(d_0 + d_1 a^\dagger + d_2 (a^\dagger)^2 + \cdots\right)b^\dagger\Phi_0. $$
We substitute $\Psi$ and compare the coefficients of the various powers of $a^\dagger$:
$$ (H - E)\Psi = (\hbar\Omega - E)\left(d_0 + d_1 a^\dagger + d_2 (a^\dagger)^2 + \cdots\right)b^\dagger\Phi_0 + \hbar\omega\left(d_1 a^\dagger + 2d_2 (a^\dagger)^2 + \cdots\right)b^\dagger\Phi_0 + \hbar\lambda\left(d_1 + 2d_2 a^\dagger + 3d_3 (a^\dagger)^2 + \cdots\right)b^\dagger\Phi_0 + \hbar\lambda\left(d_0 a^\dagger + d_1 (a^\dagger)^2 + \cdots\right)b^\dagger\Phi_0. $$
This gives us a sequence of equations, the first three of which are
$$ (\hbar\Omega - E)d_0 + \hbar\lambda d_1 = 0, \tag{6.4} $$
$$ (\hbar\Omega - E)d_1 + \hbar\omega d_1 + \hbar\lambda(2d_2 + d_0) = 0, \tag{6.5} $$
$$ (\hbar\Omega - E)d_2 + 2\hbar\omega d_2 + \hbar\lambda(3d_3 + d_1) = 0. \tag{6.6} $$

If we argue from a perturbation-theory point of view, then as $\lambda \to 0$ the state $\Psi$ must become the bare one-B-particle state proportional to $b^\dagger\Phi_0$. Therefore all the $d_i$ except $d_0$ must go to zero as $\lambda \to 0$. In fact we expect $d_1 = O(\lambda)$, $d_2 = O(\lambda^2)$, and so on. The first equation of the sequence then tells us that $(\hbar\Omega - E) = O(\lambda^2)$. A look at the second equation tells us that it consists of terms of $O(\lambda^3)$ and


O(λ), which should separately be equated to zero. Therefore,

.

ωd1 + λd0 = 0,

.

(h¯  − E)d1 + h¯ λ2d2 = 0. This fixes λ d1 = − d0 , ω

.

which determines the value of .(h¯  − E) from the first Eq. (6.4) E = h¯ ( − λ2 /ω),

.

as well as, from (6.5), d2 =

.

  1 λ 2 d0 . − 2! ω

In the third Eq. (6.6), there are second- and fourth-order terms in .λ. The secondorder terms .ω2d2 + λd1 are identically zero, and the fourth-order terms give   1 λ 3 .d3 = d0 . − 3! ω The general solution is not difficult to guess:   λ  = d0 exp − a † b† 0 . ω

.

The normalization constant .d0 can be determined from the condition: 1 = (, ) = |d0 |2 (0 , exp[−λa/ω] exp[−λa † /ω]bb† 0 )

.

= |d0 |2 exp[λ2 /ω2 ], where we use the identity .

exp(A) exp(B) = exp([A, B]) exp(B) exp(A).

Thus the one B-particle dressed state is given by (dropping the over-all phase in .)  = exp[−λ2 /2ω2 ] exp[−λa † /ω]b† 0 ,

.

(6.7)

6 Interaction .= Exchange of Quanta

94

with eigenvalue E = h¯ ( − λ2 /ω).

.

6.2.4

(6.8)

B-B Effective Interaction

Now we take two B-particles. In perturbation theory the second-order terms in interaction will involve creation of an A-quantum by one B-particle and its absorption by the other. This back-and-forth exchange of A- by B-particles leads to an interaction between the B-particles. The A-exchange can be summed or “integrated out” and replaced by an effective B-B interaction as shown symbolically in the diagram below (Fig. 6.2). We notice that the one B-particle dressed state is obtained from the bare one particle state .b† 0 by operating by .exp[−λ2 /2ω2 ] exp[−λa † /ω]. We do a little manipulation as follows:  = exp[−λ2 /2ω2 ] exp[−λa † /ω]b† 0

.

= exp[−λ2 /2ω2 ] exp[−λa † /ω] exp[λa/ω]b† 0 because .exp[λa/ω]0 = 0 . Now use .

exp(A) exp(B) = exp([A, B]/2) exp(A + B)

˜ .B˜ interaction Fig. 6.2 Exchanges of particle A between two B particles lead to effective .B-

6.2 The Simplest Model of Quanta Exchange

95

which holds whenever .[A, B] commutes with both A and B to get  = exp[λ(a − a † )/ω]b† 0

.

The advantage of this manipulation is that the operator .exp[λ(a − a † )/ω] is unitary. The two-particle dressed state will similarly involve .exp[2λ(a − a † )/ω] acting on the two-particle bare state because there are two factors of .exp[λ(a − a † )/ω]. Similarly the three-particle dressed state will be a similar operator with 2 replaced by 3 in the exponent. Let us define a unitary operator:  U = exp

.



λ † b b(a − a † ) ω

so that it gives the right factor in the exponent because .b† b is the number operator. This unitary operator changes the bare to dressed states. If we apply it to all our observables, then the transformed operators are .

a˜ = U aU †     λ † λ = exp b b(a − a † ) a exp − b† b(a − a † ) ω ω =a+

λ † b b ω

where we use the identity .

exp[S]A exp[−S] = A + [S, A] +

1 [S, [S, A]] + . . . . 2!

Similarly, a˜ † = a † +

.

λ † b b, ω

and     λ λ † b b(a − a † ) b exp − b† b(a − a † ) b˜ = exp ω ω  2 1 λ λ (a − a † )2 b − . . . = b − (a − a † )b + ω 2! ω   λ = exp − (a − a † ) b , ω

.

6 Interaction .= Exchange of Quanta

96

and   λ † † † ˜ .b = b exp (a − a ) . ω We can transform the Hamiltonian in terms of these new operators, by writing the inverted formulas: λ ˜† ˜ b b, ω λ ˜ a † = a˜ † − b˜ † b, ω   λ † ˜ b = exp (a − a ) b, ω   λ b† = b˜ † exp − (a − a † ) , ω a = a˜ −

.

so that .

1 H = b† b + ωa † a + λb† b(a + a † ) h¯   λ2 λ2 ˜ † ˜ ˜ b b + ωa˜ † a˜ − b˜ † b˜ † b˜ b. = − ω ω

The Hamiltonian has separated into dressed particles, call them by .B˜ created by .b˜ † which interact with themselves by the .b˜ † b˜ † b˜ b˜ term and a species of free quanta, call ˜ which are created by .a˜ † ! them .A,

6.3

Exercises

Exercise 6.1 Annihilation and creation operators of a particle in a potential The formalism of Sect. 6.1 indirectly uses the annihilation operators a(k) and their adjoint, the creation operators, a † (k) acting on a background state (ground state or “vacuum”), |0, and one particle states as follows: a(k)|0 = 0,

.

for all k,

a (k)|0 = |k, for all k,  a(k), a(k ) = 0, [a(k), a † (k )] = δ 3 (k − k ), †



for all k, k .

Write the expression for the term to be added to the Hamiltonian to generate the effect of the classical potential.

6.4 Notes

97

Exercise 6.2 Derive an expression for amplitudes V (k) in (6.1) for transitions |p → |p + k for the potential (spherical square-well): V(r) = V0 for r ≤ R0 and zero for r > R0 . Exercise 6.3 Follow the arguments of Sects. 6.2–6.4 for a more realistic model. B is a massive particle, a “spinless fermion” of rest mass M, and A a much lighter particle of rest mass μ, a “meson.” The free Hamiltonian is  H0 = Mc2

.

 d3 p b† (p)b(p) +

d3 p Ep a † (p)a(p)

where Ep = c p2 + μ2 c2 . The interaction part of the Hamiltonian is  HI = λ

.

d3 pd3 k f (k)b† (p + k)b(p)a(k) + h.c.

where “h.c.” denotes the Hermitian conjugate of the previous term, λ an appropriate coupling constant, and f (k) the coupling strength at momentum k. The annihilation and creation operators of particles B and A are the standard commutation/anticommutation relations: 

 a(k), a(k ) = 0, [a(k), a † (k )] = δ 3 (k − k ), for all k, k ,   for all k, k , b(k), b(k ) + = 0, [b(k), b† (k )]+ = δ 3 (k − k ),   a(k), b(k ) = 0, [a(k), b† (k )] = 0, for all k, k . .

The energy of the particle B is assumed to be independent of its momentum. Refer to Section (12a) of Schweber [1] for details.

6.4

Notes

It is a pity that quantum theory which had its beginnings in the emission and absorption of quanta is taught as if non-relativistic quantum mechanics does not require any mention of creation, annihilation (or emission/absorption) of quanta. This is due to the extraordinary emphasis on the wave function in the configuration space with the potential function taken as unchanged classical potential. There is a total disregard for the physically meaningful momentum space. For a proper understanding, both the configuration space and the momentum space are needed. And the so-called wave-particle duality, such a favorite of textbooks, should properly be called and interpreted as “field-particle” duality. The model of A and B particles discussed here is a brutally simplified version of a solvable model discussed in S. S. Schweber’s classic book [1]. The original model was given by L. Van Hove [2].

98

6 Interaction .= Exchange of Quanta

References

1. S.S. Schweber, An Introduction to Relativistic Quantum Field Theory (Harper and Row, New York, 1961), Section 12a
2. L. Van Hove, Physica 18, 145 (1952)

7 Proof of Wigner's Theorem

Abstract

Wigner’s theorem which relates a unitary or anti-unitary operator to a symmetry transformation is proved following V. Bargmann’s version of the proof.

7.1

Rays and Symmetry Transformation

A ray in a Hilbert space .H is the set .{f } obtained by multiplying a non-zero vector f ∈ H with all complex numbers of modulus unity. All vectors in a ray have the same norm. A vector .f ∈ {f } is called a representative of the ray. A unit ray is a ray such that all the vectors belonging to it have unit norm. A unit ray can be obtained by taking a unit vector and obtaining the set of all its multiples by phase factors. The set of all unit rays will be denoted by .R0 . A ray .{f } multiplied by two different positive numbers .α > 0, β > 0, α = β gives two distinct rays. We cannot talk about change in quantities like rays because they cannot be added or subtracted. We say the state is described by a unit ray, but how do we define the rate of change of state unless we can subtract the original quantity from the changed quantity? That is why, although there is a one-to-one correspondence between physical states and unit rays, we must choose a representative vector to work with in practice. A symmetry transformation is a mapping .s : R0 → R0 from unit rays to unit rays such that if .{f } → {f  } = s{f } and .{g} → {g  } = s{g}, then for any vectors   .f, g, f , g belonging to their respective rays .

|(f, g)|2 = |(f  , g  )|2 .

.



7.2 Wigner's Theorem

For the sake of completeness, let us recall that a unitary operator U is a one-to-one, invertible mapping of the Hilbert space such that for any .f, g ∈ H, .(Uf, Ug) = (f, g). An anti-unitary operator A is a one-to-one, invertible mapping of the Hilbert space such that for any .f, g ∈ H, .(Af, Ag) = (f, g)∗ . Wigner’s theorem allows us to work with vectors instead of rays. Let .s : R0 → R0 be a symmetry and T a vector mapping .T : H → H. We say that T is compatible with symmetry s if T maps vectors in the unit ray .{f } into vectors in the ray .s{f } for every unit ray .{f } ∈ R0 . Exercise 7.1 Which of the following vector mappings can be compatible with some symmetry transformation? (i) The constant linear operator .f → 2f for every .f ∈ H. (No)  (ii) If .{ei }, i = 1, . . . , ∞ is an orthonormal basis in .H and .f = i ci ei , then T is defined by .Tf = i ci∗ ei . (Yes) (iii) T is a unitary operator. (Yes) (iv) T is an anti-unitary operator. (Yes) (v) A projection operator .PM onto a subspace .M ⊂ H. (No) (vi) A bounded self-adjoint operator A satisfying .A2 = 1. (Yes) (vii) A bounded linear operator J satisfying .J 2 = −1. (Yes) Wigner’s theorem is stated as follows : Every symmetry transformation .s : R0 → R0 determines either a unitary or an antiunitary operator compatible with it. The unitary or anti-unitary nature of the operator is determined by the mapping s itself and the operator is determined uniquely except for a factor of modulus unity.
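A quick numerical illustration (not from the book) of the two possibilities allowed by Wigner's theorem: a unitary map preserves inner products, while an anti-unitary map (modelled here as complex conjugation followed by a unitary) conjugates them; both preserve the modulus $|(f,g)|$. The dimension and random seed are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(4)
n = 5
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
f = rng.normal(size=n) + 1j * rng.normal(size=n)
g = rng.normal(size=n) + 1j * rng.normal(size=n)

inner = np.vdot(f, g)                        # (f, g), antilinear in the first argument
print(np.vdot(U @ f, U @ g), inner)          # equal: the unitary case
print(np.vdot(U @ f.conj(), U @ g.conj()), inner.conj())     # equal: the anti-unitary case
print(abs(np.vdot(U @ f.conj(), U @ g.conj())), abs(inner))  # the moduli always agree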

The symmetry transformation which has been defined only on the set of unit rays R0 can be extended to the set of all rays .R as follows. Let .{f } ∈ R. Then .{f/ f } is a unit ray. Define

.

s{f } ≡ f s{f/ f }.

.

7.3

Bargmann Invariant

Let .f1 , f2 , f3 ∈ H be three unit vectors. Define a number: (f1 , f2 , f3 ) ≡ (f1 , f2 )(f2 , f3 )(f3 , f1 ).

.

This is a complex number which actually depends only on the three unit rays {f1 }, {f2 }, {f3 } as can be checked by choosing any other representatives .eiα f1 ,

.

7.4 A Lemma

101

eiβ f2 , .eiγ f3 in place of .f1 , f2 , f3 . We write therefore

.

(f1 , f2 , f3 ) = ({f1 }, {f2 }, {f3 }).

.

If s is a symmetry transformation, we can calculate:  = (s{f1 }, s{f2 }, s{f3 }).

.

Given a symmetry mapping s, . can be used to test whether the operator determined by the Wigner theorem is a unitary or an anti-unitary operator. If T is the operator compatible with s, then for unitary T (Tf1 , Tf2 ) = (f1 , f2 ),

.

whereas for anti-unitary T (Tf1 , Tf2 ) = (f1 , f2 )∗ .

.

Thus in one case . =  and in the other . = ∗ . Except for the trivial case of a one-dimensional Hilbert space, there will always be at least three unit rays to check the unitary or anti-unitary nature of a symmetry transformation by the Bargmann invariant.

7.4

A Lemma

This simple result is at the heart of the proof. Let .e1 , . . . , en ∈ H be a finite orthonormal set. We call the set of unit rays .{e1 }, . . . , {en } ∈ R0 an orthonormal set of rays. Although there is no inner product defined for rays, we can still define two rays as being orthogonal. Let .{e1 }, . . . , {en } ∈ R0 be an orthonormal set of rays, and s a symmetry transformation. Let .f ∈ H be a vector constructed as a linear combination from representatives of the rays of the set: f =



.

ci fi ,

fi ∈ {ei }.

i

Note that .f1 , . . . , fn is also an orthonormal set of vectors and .ci = (fi , f ). The lemma is about the structure of a typical vector .f  in the ray .s{f } mapped by the symmetry transformation.

102

7 Proof of Wigner’s Theorem

Lemma Any representative vector .f  ∈ s{f } can be written as f =



.

ci fi

i

where .fi ∈ s{ei } and .|ci | = |ci |, i = 1, . . . , n. Proof of the Lemma The set of rays .s{e1 }, . . . , s{en } ∈ R0 is also orthonormal because s being a symmetry transformation preserves modulus of the inner product. Choose any vectors .fi ∈ s{ei } and define: ci = (fi , f  )

.

Then, |ci | = |(fi , f  )| = |(fi , f )| = |ci |.

.

And, as .|(f, f )|2 = |(f  , f  )|2 because f and .f  belong to rays mapped by s f  −



.

 ci fi 2 = f  2 − |ci |2  = f 2 − |ci |2 = 0.

This proves that .f  =

7.5



  i ci fi .

Proof of the Theorem

Construction of a vector mapping T compatible with the symmetry transformation s is carried out in nine steps. Step 1 Choose a unit vector .e0 ∈ H and let .{e0 } be the corresponding unit ray. Choose any vector .e0 belonging to .s{e0 } and define: T e0 = e0 .

.

(7.1)

This defines the operator T on a single vector .e0 . The choice of T is obviously arbitrary up to a phase factor. This is the sole arbitrariness in the definition and construction of T.

7.5 Proof of the Theorem

103

Step 2 Let .M be the subspace orthogonal to .e0 : M = {f ∈ H|(e0 , f ) = 0}.

.

Choose a unit vector .e1 in .M, that is, .e1 is a unit vector orthogonal to .e0 . We will now define T on the vector .e0 + e1 . As .{e0 } and .{e1 } are orthogonal unit rays, according to the Lemma above, any representative of .s({e0 + e1 }) is of the form .a0 e0 + a1 e1 where .|a0 | = 1, |a1 | = 1,   .e 1 ∈ s{e1 } and where the vector .e0 has already been fixed in step 1 as the representative from .s{e0 }. Of these vectors .a0 e0 + a1 e1 in .s({e0 + e1 }), there is exactly one vector of the form .e0 +e1 , because we can just choose any .a0 e0 +a1 e1 and divide it by .a0 . Define T (e0 + e1 ) = e0 + e1 .

.

(7.2)

This step fixes the vector .e1 . Step 3 We proceed to define T on a vector of type .e0 + be1 where b is a complex number. T will map .e0 + be1 to a vector in .s{e0 + be1 }. A vector .f ∈ s{e0 + be1 } has the form, according to the Lemma, .c0 e0 + c1 e1 with .|c0 | = 1, |c1 | = |b|. Again, among these vectors is a unique vector of the form     .e + b e with .|b | = |b|. Define 0 1 T (e0 + be1 ) = e0 + b e1

.

(7.3)

We now show that not only .|b | = |b|, but their real parts are equal and imaginary parts equal up to a sign: .Re b = Re b and .Im b = ±Im b. This follows from the property of symmetry transformation: |1 + b|2 = |(e0 + e1 , e0 + be1 )|2 = |(e0 + e1 , e0 + b e1 )|2 = |1 + b |2 ,

.

which implies .Re b = Re b . Moreover, as |b|2 = (Re b)2 + (Im b)2 = (Re b )2 + (Im b )2 = |b |2 ,

.

we also have .Im b = ±Im b. For the fixed number b, the choice of .b was uniquely determined by our construction. Therefore the sign in .Im(b ) = ±Im(b) is also fixed. Let us write b = Re b + b Im b,

.

where the sign factor .b is .+1 or .−1.

104

7 Proof of Wigner’s Theorem

Actually .b does not depend on b. To see this take two different vectors .e0 + b1 e1 and .e0 + b2 e1 and determine .e0 + b1 e1 and .e0 + b2 e1 by the above procedure. We have: b1 = Re(b1 ) + b1 Im(b1 ),

.

b2 = Re(b2 ) + b2 Im(b2 )

Using |(e0 + b1 e1 , e0 + b2 e1 )|2 = |(e0 + b1 e1 , e0 + b2 e1 )|2

.

gives .|1 + b1∗ b2 |2 = |1 + b1 ∗ b2 |2 which is equivalent to Re(b1 )Re(b2 ) + Im(b1 )Im(b2 ) = Re(b1 )Re(b2 ) + Im(b1 )Im(b2 ).

.

But .Re(b1 ) = Re(b1 ) and .Re(b2 ) = Re(b2 ) therefore, Im(b1 )Im(b2 ) = Im(b1 )Im(b2 ) = b1 b2 Im(b1 )Im(b2 ).

.

This means b1 b2 = 1.

.

In other words, either both .b1 and .b2 are .+1 or both .−1. We write the common value of .b as .1 where the index 1 is to remind that the choice of .b may still depend on the choice of .e1 . From now on we write: T (e0 + be1 ) = e0 + b e1 ,

.

b = Re(b) + 1 Im(b).

(7.4)

Step 4 Let .e2 be another unit vector in .M orthogonal to both .e0 and .e1 . By an exactly similar procedure that we used in the last step, we can define T (e0 + ce2 ) = e0 + c e2 ,

.

c = Re(c) + 2 Im(c)

(7.5)

where .e2 is the unique vector in the ray .s{e0 + e2 } of the form .e0 + e2 . Step 5 Now consider the three orthonormal rays .{e0 }, {e1 }, {e2 } and apply the lemma to vector .e0 + e1 + e2 . Choose the vector .e0 + c1 e1 + c2 e2 in the ray .s{e0 + e1 + e2 }. We must have .|c1 | = 1 = |c2 |. But as |(e0 + e1 , e0 + e1 + e2 )|2 = |(e0 + e1 , e0 + c1 e1 + c2 e2 )|2

.

therefore .4 = |1+c1 |2 . This means .Re(c1 ) = 1 and so .c1 = 1 because .|c1 | is already equal to one. Similarly, taking .|(e0 + e2 , e0 + e1 + e2 )|2 , we can prove .c2 = 1.

7.5 Proof of the Theorem

105

Therefore, T (e0 + e1 + e2 ) = e0 + e1 + e2 .

.

(7.6)

Step 6 We try next to define T on a vector of type .e0 + be1 + ce2 . Choose the unique vector      .e + b e + c e in the ray .s{e0 + be1 + ce2 }. By taking 0 1 2 |(e0 + e1 , e0 + be1 + ce2 )|2 = |(e0 + e1 , e0 + b e1 + c e2 )|2

.

we get using the by-now-familiar argument .b = Re(b) + 3 Im(b) where this .3 is peculiar to the ray .s{e0 + be1 + ce2 }. We will now discuss the equality: |(e0 + be1 , e0 + be1 + ce2 )|2 = |(e0 + b e1 , e0 + b e1 + c e2 )|2

.

(7.7)

with .b = Re(b) + 1 Im(b) and .b = Re(b) + 3 Im(b). Note that although we know that .b is determined by the condition that .e0 +  b e1 + c e2 is in the ray .s{e0 + be1 + ce2 }, .b is determined by the condition that      .e + b e is in the ray .s{e0 + be1 }. We show below that .b = b . 0 1 We get from the above equality (7.7) of transition probabilities: ∗

|1 + |b|2 |2 = |1 + b b |2

.

or .Re(b ∗ b ) = |b|2 . This means .1 3 = 1 or .3 = 1 . Thus .b = Re(b) + 3 Im(b) = b . Similarly considering |(e0 + e2 , e0 + be1 + ce2 )|2 = |(e0 + e2 , e0 + b e1 + c e2 )|2

.

we get .c = Re(c) + 4 Im(c) and by |(e0 + ce2 , e0 + be1 + ce2 )|2 = |(e0 + c e2 , e0 + b e1 + c e2 )|2

.

where .c = Re(c) + i2 Im(c) we get again .4 = 2 and .c = c . Therefore from .T (e0 + be1 ) = e0 + b e1 and .T (e0 + ce2 ) = e0 + c e2 , we infer that the vector .e0 + b e1 + c e2 is in the ray .s{e0 + be1 + ce2 }. We therefore define T (e0 + be1 + ce2 ) = e0 + b e1 + c e2

.

b = Re(b) + i1 Im(b),

.

c = Re(c) + i2 Im(c).

(7.8) (7.9)

106

7 Proof of Wigner’s Theorem

Step 7 We now show that actually .1 = 2 . √ √ Consider the unit vector .e3 = (e1 + e2 )/ 2. Define .e3 = (e1 + e2 )/ 2. We show that .e3 is in the ray .s{e3 }. According to the lemma, a typical vector in .s{e3 } is of the form .c1 e1 +c2 e2 where √ √    .|c1 | = |c2 | = 1/ 2. Choose the unique vector .e / 2 + c e . From .|(e0 + e1 + 1 2 2 √ √ e2 , e3 )|2 = |(e0 + e1 + e2 , e1 / 2 + c2 e2 )|2 , we get .2 = |1/ 2 + c2 |2 which implies √  .c = 1/ 2. 2 We already have T (e0 + e3 ) = e0 + e3 .

.

(7.10)

Let us choose from .s{e0 + de3 } the unique vector .e0 + d  e3 with .d  = Re(d) + i5 Im(d) and define T (e0 + de3 ) = e0 + d  e3 .

.

(7.11)

On the other hand √ √ √ √ T (e0 + de3 ) = T (e0 + de1 / 2 + de2 / 2) = e0 + d  e1 / 2 + d  e2 / 2

.

with d  = Re(d) + i1 Im(d),

.

d  = Re(d) + i2 Im(d).

The comparison of these two definitions gives: 1 = 5 = 2 .

.

Since .e1 and .e2 could be any two arbitrary orthogonal unit vectors in .M, it follows that there is a common value . for the whole of .M. Let us define a function .χ of complex numbers: χ (c) = Re(c) + iIm(c).

.

(7.12)

This function which is either .χ (c) = c for . = 1 or .χ (c) = c∗ for . = −1 has the obvious properties: χ (c + c ) = χ (c) + χ (c )

(7.13)

χ (cc ) = χ (c)χ (c )

(7.14)

.

.

χ (c∗ ) = χ (c)∗ ,

.

|χ (c)| = |c|.

(7.15)

7.5 Proof of the Theorem

107

We can summarize the progress so far by saying that T has been defined on all vectors of the type .e0 + f where .f ∈ M (recall that .M is the subspace orthogonal to .e0 ) by the equation: T (e0 + f ) = e0 + f 

(7.16)

.

where the vector .f  is uniquely determined. Let us therefore give the definition of T on all vectors of .M by T (f ) = f  ,

(7.17)

.

This mapping on vectors of .M satisfies the following rules: T (f + f  ) = T (f ) + T (f  ),

(7.18)

T (cf ) = χ (c)T (f ).

(7.19)

.

.

Step 8 Finally, define T on any vector of .H of the type .ae0 + f where .f ∈ M by T (ae0 + f ) = χ (a)e0 + T (f ).

(7.20)

.

This completes the construction of T . As defined it satisfies T (f + g) = T (f ) + T (g),

(7.21)

T (cf ) = χ (c)T (f ).

(7.22)

.

.

If e is a unit vector, then .T (e)is also a unit vector and for any numbers a and b: (T (ae), T (be)) = χ (a)∗ χ (b) = χ (a ∗ b).

.

Sinceevery vector can be expanded in an orthonormal basis from .f = .g = bj ej , we get: (T (f ), T (g)) =

.



χ (ai∗ bi ) = χ ((f, g)).



ai ei and

(7.23)

The vector mapping T is the required compatible unitary or anti-unitary operator, depending on whether .χ (c) = c or .χ (c) = c∗ . But we must remember the entire construction was made starting from the initial vector .e0 . We must prove that it does not matter which vector .e0 is chosen in the beginning.

108

7 Proof of Wigner’s Theorem

7.6

The Final Step

We first prove that if .T1 and .T2 are two vector mappings compatible with the same symmetry transformation, then they can differ only by a phase factor. Strictly speaking this holds in Hilbert spaces of dimension greater than one. For onedimensional vector space, there can be a unitary as well as an anti-unitary mapping. Let us first note that if f and g are two linearly independent vectors, then the Schwarz inequality is a strict inequality. The equality holds if and only if the vectors are proportional. So |(f, g)|2 < (f, f )(g, g).

.

This implies for any vector mapping compatible with a symmetry transformation |(T (f ), T (g))|2 < (T (f ), T (f ))(T (g), T (g)).

.

This shows that if .f, g are linearly independent, then so are .T (f ) and .T (g). Now let .T1 and .T2 be two operators both compatible with the same symmetry transformation. As .T1 (f ) and .T2 (f ) belong to the same ray, they differ by a phase factor. Let the factor for vector f be written as .ω(f ): T1 (f ) = ω(f )T2 (f )

.

where .|ω(f )| = 1. We show that .ω(f ) actually does not depend on f . Let f and g be two linearly independent vectors. Then, T1 (f + g) = ω(f + g)T2 (f + g) = ω(f + g)T2 (f ) + ω(f + g)T2 (g)

.

On the other hand, T1 (f + g) = T1 (f ) + T1 (g) = ω(f )T2 (f ) + ω(g)T2 (g)

.

Equating the two and using the linear independence of the vectors .T2 (f ) and .T2 (g), we get: ω(f ) = ω(g) = ω(f + g)

.

This proves the independence of our construction on any particular choice of .e0 . And this completes the proof.

7.7 Further Exercises

7.7

109

Further Exercises

In addition to the continuous spacetime symmetries of the Poincare group, discussed in the first chapter, the discrete symmetries like the space inversion or parity, and time reversal also keep the metric, or the line element of special relativity invariant. Denote by .Is = −η and .It = η the .4 × 4 parity and time reversal matrices. Their product, the space-time inversion .Is It ≡ I = −1, is also a symmetry: ⎛ ⎜ It = −Is = η = ⎜ ⎝

.



−1 1

⎟ ⎟. 1 ⎠ 1

(7.24)



Their group multiplication laws with Poincare transformations .(a, ), ∈ L+ are Is (a, )Is−1 = (Is a, Is Is−1 ),

.

It (a, )It−1 = (It a, It It−1 ), I (a, )I −1 = (−a, ). (7.25) Wigner’s theorem assigns operators .P, T , and .I to the symmetries defined by Is , It , and I , respectively. These operators could be unitary or anti-unitary, and undetermined up to a phase factor. Since the squares of all three .Is , It , and I are equal to the identity of the group (which is denoted by the identity operator on the Hilbert space of states), the operators .P 2 , T 2 , and .I 2 must be equal to phase factors .ωs , ωt , and .ωI times the identity operator, respectively. We are free to choose the phase factor of .I so that (as .I = Is It ): .

I = PT

.

P 2 = ωs = 1,

chosen by convention

T 2 = ωt I 2 = ωI . The choice .P 2 = 1 still leaves the parity operator undetermined up to .±1 which can only be fixed in specific cases. Exercise 7.2 Show that as .Is (a, 1)Is−1 = (Is a, 1) and .It (a, 1)It−1 = (It a, 1), we must choose .P to be unitary and .T to be anti-unitary to avoid these operators’ mapping states of positive energy into negative energy states.

110

7 Proof of Wigner’s Theorem

Exercise 7.3 As .T is anti-unitary (and so is .I), show that phases .ωt and .ωI are real, and therefore equal to .±1. Solution: Let .T 2 = ω; then the anti-unitary nature of .T implies T 3 = T 2 T = ωT = T T 2 = T ω = ω∗ T .

.

Therefore .T 2 is real. It can be shown that for spin s particles .ω = (−1)2s . Exercise 7.4 Derive the multiplication table of the discrete symmetries: P T I 1 I T P . T ωt ωI I ωt 1 ωI P I ωt ωI T ωt P ωI 1

.

(7.26)

Hint: .P −1 = P, T −1 = ωt T and .I −1 = ωI I. Exercise 7.5 Work out the relations (7.25) for infinitesimal generators .P μ , J, and .K of Poincare group and .P and .T : PP μ P −1 = (Is P )μ ,

.

PJP −1 = J, PKP −1 = −K, T P μ T −1 = (Is P )μ , T JT

−1

= −J,

T KT

−1

= K.

Is and not It !

Exercise 7.6 Follow the argument for proof of Wigner’s theorem as given in References [3] and [4].

7.8

Notes

Wigner’s theorem was first proved in his original German book of 1931. The English translation is [1]. We follow the proof of the theorem by V. Bargmann in Reference [2]. More recent and elegant proof where references to other, earlier, proofs can be found is a paper by R. Simon, N. Mukunda, S. Chaturvedi, and V. Srinivasan [3], and its follow-up by R. Simon, N. Mukunda, S. Chaturvedi, V. Srinivasan, and J. Hamhalter [4]. Among textbooks, Wigner’s theorem is discussed in S. Weinberg [5].

References

111

References

1. E.P. Wigner, Group Theory (Academic Press, New York, 1959)
2. V. Bargmann, J. Math. Phys. 5, 862 (1964)
3. R. Simon, N. Mukunda, S. Chaturvedi, V. Srinivasan, Phys. Lett. A 372, 6847 (2008)
4. R. Simon, N. Mukunda, S. Chaturvedi, V. Srinivasan, J. Hamhalter, Phys. Lett. A 378, 2332 (2014)
5. S. Weinberg, The Quantum Theory of Fields I (Cambridge University Press, Cambridge, 1995), Chapter 2, Appendix A

8 Hilbert Space: An Introduction

Abstract

An elementary introduction to Hilbert spaces is given.

8.1

Definition

A Hilbert space is a complex vector space with an inner product such that the space is complete with respect to the norm determined by the inner product. We explain these terms now. Let .H be a complex vector space with elements .f, g, h, etc. Let .a, b, c, etc. be complex numbers. Let us denote the zero vector by .0. An inner product on .H associates with any two vectors .f, g ∈ H a complex number denoted by .(f, g) which has the following properties : for any .f, g, h ∈ H and any complex number a: 1. .(f, g + h) = (f, g) + (f, h) and .(f, ag) = a(f, g) 2. .(f, g) = (g, f )∗ where the .∗ denotes the complex conjugate. 3. .(f, f ) ≥ 0, and .(f, f ) = 0 if and only if .f = 0. Because of the second property above, .(f + g, h) = (f, h) + (g, h) and .(af, h) = a ∗ (f, h) for any .f, g, h ∈ H and any complex number a. The positive square root of .(f, f ) is called the norm of the vector f and is denoted by .f . The condition .f  = 0 is equivalent to .f = 0. A vector f with unit norm .f  = 1 is called a unit vector, or a normalized vector.


113

114

8 Hilbert Spaces

For vectors .f, g ∈ H, the number .f − g satisfies all the properties of a distance function between f and g considered as points in the set .H: (1) the distance of a point from itself is zero, (2) the distance is symmetric (.f − g = g − f ), and (3) it satisfies the triangle inequality: f + g ≤ f  + g

.

as we will see later. Let .f1 , f2 , . . . be a sequence of vectors in .H. This sequence is called a fundamental sequence, or a convergent sequence, or a Cauchy sequence, if the distance between any two members of the sequence goes to zero as we choose these members farther and farther down in the sequence. This means .fn − fn+p  → 0 as .n → ∞ for any fixed integer p. A Hilbert space .H is complete in the sense that for every convergent sequence .f1 , f2 , . . . , there exists a vector .f ∈ H to which the sequence converges, that is, .fn − f  → 0 as .n → ∞. The usefulness of the idea of a fundamental sequence is that just by looking at the sequence, we can be assured that there is a limit. We do not have to guess what that limit f is. There are infinitely many sequences which are fundamental, many of them actually converging to the same vector. The situation is similar to the definition of real numbers starting from rational numbers where the idea of a fundamental sequences is encountered first. We know that the sequence .1/n, n = 1, 2, . . . is fundamental, but so is .(n2 + 1)/n4 , n = 1, 2, . . . . How do we know that both converge to the same real number? Two fundamental sequences .an , bn are called equivalent if the distance between their respective members .|an − bn | keeps on decreasing as .n → ∞. In that case the two sequences cannot tend to different limits. Irrational real numbers cannot be described by a single formula, but only by an infinite decimal, which is actually a sequence of rational numbers. A real number is by definition a collection of equivalent fundamental sequences which then are said to converge to it.

8.2

Pythagoras Theorem

Two vectors f and g are called orthogonal if .(f, g) = 0. The zero vector is orthogonal to every vector: .(0, f ) = (0 + 0, f ) = 2(0, f ) therefore .(0, f ) = 0. For orthogonal vectors f and g, the Pythagoras theorem holds: f + g2 = f 2 + g2 .

.

(8.1)

This is because, as .(f, g) = 0 f + g2 = (f + g, f + g) = (f, f ) + (f, g) + (g, f ) + (g, g) = f 2 + g2

.

8.3 Bessel’s Inequality

115

Let .e1 , e2 , . . . , en be n unit or normalized vectors orthogonal to each other. This means that for .i, j = 1, . . . , n (ei , ej ) = δij ,

δij = 1, if i = j

.

δij = 0, if i = j

Such a set of vectors is called an orthonormal set. Exercise 8.1 Show that two (non-zero) orthogonal vectors are always linearly independent.

8.3

Bessel’s Inequality

Let .{e1 , e2 , . . . , en } be an orthonormal set and .f ∈ H, then f 2 ≥

n 

.

|(ei , f )|2

(Bessel’s Inequality).

(8.2)

i=1

Proof Call .ci ≡ (ei , f ), and let .χ ≡ χ 2 =

.

n

i=1 ci ei

whose norm square is

   (ci ei , cj ej ) = ci∗ cj δij = |ci |2 . i,j

i,j

i

But .χ is orthogonal to .f − χ : (f − χ , χ ) = (f, χ ) − χ 2 =



.

ci (f, ei ) −

i



|ci |2 = 0,

i

because .(f, ei ) = (ei , f )∗ = ci∗ . By Pythagoras theorem f 2 = (f − χ ) + χ 2 = f − χ 2 + χ 2 ≥ χ 2 =



.

|(ei , f )|2 .

i

which proves the inequality. The proof  also shows that the inequality becomes equality when f is exactly equal to . ni=1 ci ei . Bessel’s inequality is easy to interpret: take the linearly independent orthonormal vectors .e1 , e2 , . . . en as part of a basis. Then  .ci = (ei , f ) are components or expansion coefficients of f . By writing .χ = i ci ei , we are trying to “reconstruct” f from its components, and .f − χ  is a measure of the failure of the attempt.

116

8 Hilbert Spaces

8.4

Schwarz and Triangle Inequalities

The full name of this inequality is Cauchy-Schwarz-Bunyakovsky inequality. For any two vectors .f, g in a Hilbert space .H, the Schwarz inequality can be written |(f, g)| ≤ f g

(Schwarz Inequality).

.

(8.3)

This is a special case of the Bessel inequality. There is nothing to prove if one or both the vectors are zero vectors. Therefore we assume .g = 0. Let .e = g/g. Then for this one member orthonormal set .{e}, Bessel’s inequality says |(e, f )|2 ≤ f 2 .

.

Substituting for e we get: .

      1   g =  ≤ f   , f (g, f )   g   g

which gives the Schwarz inequality. Exercise 8.2 The Schwarz inequality becomes an equality if Bessel’s inequality becomes an equality. Show that it happens when f and g are proportional: f =

.

(g, f ) g. g2

Hint: For Bessel’s equality the one member orthonormal set .{e}, e = g/g is sufficient to reconstruct f : .f = (e, f )e. An immediate consequence of Schwarz inequality is the triangle inequality for any two vectors .f, g ∈ H : f + g ≤ f  + g

.

(Triangle Inequality).

Proof f + g2 = (f + g, f + g) = f 2 + g2 + (f, g) + (f, g)∗

.

Now .(f, g) + (f, g)∗ = 2Re (f, g) ≤ 2|(f, g)| ≤ 2f g, therefore f + g2 ≤ f 2 + g2 + 2f g

.

≤ (f  + g)2

(8.4)

8.5 Complete Orthonormal Set

117

which proves the result. The name triangle inequality has obvious geometric connotation: the sum of lengths of two sides of a triangle is greater than the length of the third side. Exercise 8.3 Prove the following for any .f, g ∈ H: 1. .f − g ≤ f  + g 2. .|f  − g| ≤ f − g. This too has a geometric analogy. The difference of lengths of two sides of a triangle is smaller than the third side. 3. .f + g2 + f − g2 = 2f 2 + 2g2 This equation is called the parallelogram law because of the obvious geometrical analogy. 4. Show that the norm can be used to “recover” the inner product (which has been used to define it) as follows: 4(f, g) = f + g2 − f − g2 − if + ig2 + if − ig2

.

This relation for some obscure reason is called polarization identity. 5. Show that if .f + g = f  + g, then f and g differ by a positive non-zero constant; .g = af, a > 0. One can ask why we should first define an inner product on a space and then a norm. Why not start with a vector space and define a norm . ·  as the positive valued function satisfying the properties : 1. .f  ≥ 0, f  = 0 iff f = 0 2. .af  = |a|f  3. .f + g ≤ f  + g (Triangle Inequality) and then use the polarization identity to define an inner product? The answer is that unless the norm also satisfies the parallelogram law (which does not follow from this definition of norm), it is not possible to prove the linearity property of the inner product.

8.5

Complete Orthonormal Set

We defined an orthonormal (o.n.) set in Sect. 8.2 as a set of mutually orthogonal unit vectors. A general o.n. set contains finite, countably infinite, or even uncountably many vectors. All we require is that each vector in the set is of unit norm and any two distinct vectors in the set are orthogonal. An o.n. set in a Hilbert space .H is called complete if it is not a proper subset of another o.n. set.

118

8 Hilbert Spaces

A complete o.n. set has the property that the only vector orthogonal to all the members of the set is the zero vector .0. This is so because if .f = 0 was such a vector, then we could enlarge the o.n. set by including the unit vector .e = f/f  falsifying the claim that the set cannot be a proper subset of a larger o.n. set. A complete o.n. set is called an orthonormal basis. The o.n. bases in a Hilbert space could be finite, or countable (that is infinite but in one-to-one correspondence with natural numbers .1, 2, 3, . . . ) or uncountable. In quantum mechanics the Hilbert spaces always have countable orthonormal bases. Such spaces are called separable Hilbert spaces. Let .{e1 , . . . , en , . . . } be a complete o.n. basis, and f any vector. Then we can write (“expand”) f as f =

∞ 

.

ci = (ei , f ),

ci ei

(8.5)

i=1

and the norm square of f is f 2 =

∞ 

.

|ci |2 .

(8.6)

i=1

We omit the proof.

8.6

Subspaces

A subset .M ∈ H is called a linear manifold if any finite linear combination .a1 f1 + · · · + an fn of vectors .f1 , . . . , fn ∈ M is again in .M. A linear manifold need not be closed in the sense that there may be a convergent sequence of vectors .f1 , . . . , fn , . . . all .∈ M but the limit .f = limn→∞ may not be in .M. By including all the limits of all possible convergent sequences, one gets a bigger set .M called the closure of .M. This set is again a linear manifold and is called a closed linear manifold or a subspace. With the given inner product of .H, a subspace is a Hilbert space in its own right. Let .M ⊂ H be a subspace and f a vector not in .M. Let us define the distance .d(f, M) between f and .M as the smallest value of .f − g as g goes over all of .M. It follows from the closed nature of .M (but we omit the proof) that there is ∼ actually a unique vector .g ∈ M which is precisely this minimum distance away ∼

from f : .f − g  = d(f, M).



Moreover, it follows that .f − g is orthogonal to every vector of .M (Fig. 8.1): ∼

(f − g , h) = 0,

.

h∈M

8.7 Direct Sum of Hilbert Spaces

119

Fig. 8.1 Resolution of a vector into its parts lying in, and orthogonal to, the subspace .M

Let us call .M⊥ the set of all vectors orthogonal to every vector of a subspace .M. Then .M⊥ is a subspace as well, because linear combinations of vectors orthogonal to .M are orthogonal to .M and so are their limit points. This space .M⊥ is called the orthogonal complement to .M. Obviously if a subspace .M1 is contained in a subspace .M2 , then the subspace ⊥ ⊥ .M is contained in the subspace .M . 2 1 The only vector common to .M and its orthogonal complement .M⊥ is the zero vector because that is the only vector orthogonal to itself. Thus every vector in .H can be decomposed uniquely as a sum: f = g˜ + (f − g), ˜

.

g˜ ∈ M,

f − g˜ ∈ M⊥

which shows that the space .H is the direct sum of the two subspaces .M and .M⊥ . We write .H = M ⊕ M⊥ . For any linear manifold .M, .(M⊥ )⊥ = M.

8.7

Direct Sum of Hilbert Spaces

Given two Hilbert spaces, .H1 and .H2 , we can construct the third space .H1 ⊕ H2 . Vectors of .H1 ⊕ H2 are ordered pairs .< f, g > with .f ∈ H1 and .g ∈ H2 . The sum and multiplication by a complex number a is defined as follows: .

< f1 , g1 > + < f2 , g2 > = < f1 + f2 , g1 + g2 >, a < f, g > = < af, ag >

It is clear that the zero vector of .H1 ⊕ H2 is .< 0, 0 >. The inner product of this space is defined as (< f1 , g1 >, < f2 , g2 >) =< f1 , f2 > + < g1 , g2 > .

.

120

8 Hilbert Spaces

The subset .M1 = {< f, 0 > |f ∈ H1 } is practically identical to .H1 and forms a subspace of .H1 ⊕ H2 . Similarly, the subset .M2 = {< 0, g > |g ∈ H2 } is the same as .H2 and forms another subspace of .H1 ⊕ H2 . These two subspaces are orthogonal to each other because (< f, 0 >, < 0, g >) = (f, 0) + (0, g) = 0.

.

We can define the direct sum of more than two Hilbert spaces inductively as H1 ⊕ H2 ⊕ H3 = (H1 ⊕ H2 ) ⊕ H3

.

and so on. Note that for a Hilbert subspace .M ⊂ H, we can always write .H = M ⊕ M⊥ .

8.8

Tensor Product of Hilbert Spaces

Given two complex vector spaces V and W of dimensions n and m with bases e1 , . . . , en and .f1 , . . . fm , respectively, we can define a vector space .V ⊗ W of dimension nm by considering all possible linear combinations of .ei ⊗ fj , i = 1, . . . , n; j = 1, . . . , m. The tensor product .f ⊗ g of the two vectors .f ∈ V and .g ∈ W has the following properties for any .f, f1 , f2 ∈ V , .g, g1 , g2 ∈ W and any complex number a: .

f ⊗ (g1 + g2 ) = f ⊗ g1 + f ⊗ g2 ,

.

(f1 + f2 ) ⊗ g = f1 ⊗ g + f2 ⊗ g, f ⊗ (ag) = a(f ⊗ g) = (af ) ⊗ g. A tensor product .H1 ⊗H2 of two Hilbert spaces .H1 and .H2 is also defined similarly. The bases may have an infinite number of elements, and the inner product for the product space is defined in terms of the individual inner products as (f1 ⊗ g1 , f2 ⊗ g2 ) = (f1 , f2 )(g1 , g2 ).

.

If .e1 , e2 , . . . and .f1 , f2 , . . . are the bases in .H1 and .H2 , respectively, then a general element of .H1 ⊗ H2 will be of the form:  .h = cij (ei ⊗ fj ). i,j

Note that .h ∈ H1 ⊗ H2 cannot always be written as a tensor product of two vectors unless the coefficients can be factorized as .cij = ai bj in these or some other bases. In such a case, the vector .h ∈ H1 ⊗ H2 is said to be entangled between the two Hilbert spaces.

8.9 Bounded Linear Operators

121

As an example let h = 6e1 ⊗ f1 + 10e1 ⊗ f2 + 9e2 ⊗ f1 + 15e2 ⊗ f2 .

.

it is not immediately obvious that it can be factorized. But it can be. In general it is not easy to determine if a given vector can be factorized, or how it can be written with the smallest number of terms each of which can be factorized. The tensor product of three, four, or any finite number of Hilbert spaces can be defined similarly. The Fock space of quantum field theory is a direct sum of an infinite number of Hilbert spaces which correspond to .0, 1, 2, . . . n . . . particle spaces. The n-particle space is the repeated tensor product .H⊗H⊗· · · H (n-factors) of one particle space .H. A rigorous definition of tensor product of Hilbert spaces in given in Chap. 12, Sect. 12.9.1.

8.9

Bounded Linear Operators

A linear operator $T$ defined on a Hilbert space $\mathcal{H}$, when acting on a vector $f \in \mathcal{H}$, gives a vector of length $\|Tf\|$; when acting on $2f$, it gives a vector of length $2\|Tf\|$. Therefore, to see what the operator does to the length of a vector, we can restrict ourselves to finding how it acts on all the unit vectors in $\mathcal{H}$. If it turns out that for all $e \in \mathcal{H}$ with $\|e\| = 1$ the set of positive numbers $\{\|Te\|\}$ is bounded, then the linear operator $T$ is called a bounded linear operator. Another way to say this is: if there exists a positive number $t > 0$ such that
$$\|Tf\| \le t\,\|f\|$$
for every $f \in \mathcal{H}$, then the operator is bounded. The smallest of these positive numbers $t$, as $f$ ranges over all of $\mathcal{H}$, is denoted by $\|T\|$:
$$\|Tf\| \le \|T\|\,\|f\|. \tag{8.7}$$
Actually, since there will be an infinite number of bounds $t$, the "smallest" in the above means in the sense of a limit, or "greatest lower bound." Bounded linear operators are the easiest to deal with because they can be defined on the whole Hilbert space and they are continuous, which has the usual meaning that we can make $\|Tf - Tg\|$ as small as we please by choosing $f$ and $g$ sufficiently close:
$$\|Tf - Tg\| = \|T(f - g)\| \le \|T\|\,\|f - g\|.$$
This definition of continuity can be replaced by $\|Th\| \to 0$ as $\|h\| \to 0$, because continuity at the zero vector assures continuity at all other vectors due to


linearity. And we can simplify this still further: an operator $T$ is continuous if for every sequence $f_n \to 0$ it follows that $Tf_n \to 0$.

The identity mapping, which maps a vector $f \in \mathcal{H}$ to itself, is a bounded linear operator denoted by $\mathbf{1}$ (or simply 1). And for any complex number $c$, the operator $c\mathbf{1}$ (or just $c$), which maps $f$ to $cf$, is a bounded linear operator as well. In fact, boundedness of a linear operator is equivalent to continuity, because every continuous linear operator defined on the whole of $\mathcal{H}$ can be shown to be bounded.

Sums and products of bounded operators are bounded. Using the triangle inequality for the norm (Sect. 8.4), we check that for any $f \in \mathcal{H}$
$$\|(A + B)f\| = \|Af + Bf\| \le \|Af\| + \|Bf\| \le (\|A\| + \|B\|)\,\|f\|.$$
Thus,
$$\|A + B\| \le \|A\| + \|B\|.$$
Further, as bounded operators are defined on the whole Hilbert space, there is no problem in defining the product of two bounded operators: let $A$ and $B$ be two bounded operators, then the product defined by $(AB)f = A(Bf)$ is also bounded because
$$\|(AB)f\| = \|A(Bf)\| \le \|A\|\,\|Bf\| \le \|A\|\,\|B\|\,\|f\|,$$
or $\|AB\| \le \|A\|\,\|B\|$.

We can deal with an infinite series of bounded operators as well, because the series
$$A = a_0 + a_1 T + a_2 T^2 + \cdots$$
has a meaning if $T$ is bounded and if the partial sums of bounded operators
$$A_n = a_0 + a_1 T + a_2 T^2 + \cdots + a_n T^n$$
converge as $n \to \infty$. On any $f \in \mathcal{H}$, using the triangle inequality and $\|cf\| = |c|\,\|f\|$,
$$\|A_n f\| \le |a_0|\,\|f\| + |a_1|\,\|Tf\| + \cdots + |a_n|\,\|T^n f\|
\le \big(|a_0| + |a_1|\,\|T\| + \cdots + |a_n|\,\|T\|^n\big)\,\|f\|.$$
Therefore the absolute convergence of the series
$$S = a_0 + a_1 z + a_2 z^2 + \cdots$$


with $z = \|T\|$ determines the convergence of the series of bounded operators. We will give examples of such series for projection and unitary operators, which are bounded.
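The following short numerical sketch (not from the book; it uses NumPy and SciPy, and the matrices are arbitrary random examples) illustrates the operator norm as the largest singular value of a matrix, checks the inequalities $\|A+B\| \le \|A\|+\|B\|$ and $\|AB\| \le \|A\|\,\|B\|$, and compares the partial sums of the exponential series of a bounded operator with a library evaluation:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

op_norm = lambda M: np.linalg.norm(M, 2)    # operator norm = largest singular value

print(op_norm(A + B) <= op_norm(A) + op_norm(B))   # True
print(op_norm(A @ B) <= op_norm(A) * op_norm(B))   # True

# The series 1 + T + T^2/2! + ... converges because sum_n ||T||^n / n! converges;
# compare the partial sums with scipy's matrix exponential.
T, partial, term = A, np.zeros((4, 4), complex), np.eye(4, dtype=complex)
for n in range(60):
    partial = partial + term
    term = term @ T / (n + 1)
print(np.allclose(partial, expm(T)))               # True
```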

8.10 Bounded Linear Functionals

Linear functionals are linear mappings or functions from $\mathcal{H}$ into the complex numbers. They are called "functionals" rather than just "functions" because vectors in Hilbert spaces $\mathcal{H}$ are usually functions (or wave functions), and a function defined on functions is usually called a "functional." A functional $f \to F(f)$ from $\mathcal{H}$ into the complex numbers is called linear if
$$F(f + g) = F(f) + F(g) \qquad\text{and}\qquad F(af) = aF(f)$$
for any $f, g \in \mathcal{H}$ and any complex number $a$. It is called bounded if there is a positive constant $C > 0$ such that
$$|F(f)| \le C\,\|f\|.$$
Let $g \in \mathcal{H}$ be a fixed vector. Define a mapping $F_g$ of $\mathcal{H}$ into the complex numbers given by
$$F_g(f) = (g, f).$$
This mapping is linear, and the Schwarz inequality (Sect. 8.4) shows that it is bounded:
$$|F_g(f)| = |(g, f)| \le \|g\|\,\|f\|.$$
It is amazing that all bounded linear functionals are of this type. This is the content of the Riesz-Frechet theorem, which says: for every bounded linear functional $B$ on $\mathcal{H}$, there corresponds a unique vector $g_B \in \mathcal{H}$ such that $B(f) = (g_B, f)$.

Before giving the proof, we note that if the bounded linear functional were given by $f \to (g, f)$, we could split the space $\mathcal{H}$ into two orthogonal subspaces: the one-dimensional space spanned by $g$ and its orthogonal complement $M = \{h \in \mathcal{H} : (g, h) = 0\}$. The proof makes use of this pattern.

Find the space $N = \{f \in \mathcal{H} : B(f) = 0\}$. Then $N$ is a closed linear manifold or subspace. If $N = \mathcal{H}$ there is nothing to prove: $g_B$ is just the zero vector. If $N \neq \mathcal{H}$, let $N^{\perp}$ be the orthogonal complement (Sect. 8.6) of $N$. The space $N^{\perp}$ can only be one-dimensional, because if there were two linearly independent vectors $g_1$ and $g_2$ in $N^{\perp}$ with $B(g_1) \neq 0$ and $B(g_2) \neq 0$, then $g_3 = B(g_1)g_2 - B(g_2)g_1$ (which belongs to $N^{\perp}$) satisfies $B(g_3) = 0$, which is not possible unless $g_3$ is the zero vector. But that would mean $g_1$ and $g_2$ are proportional to each other, which contradicts their linear independence.


Choose a non-zero vector $g$ in the one-dimensional subspace $N^{\perp}$ and define
$$g_B = \frac{B(g)^*}{\|g\|^2}\, g.$$
On a general vector $f \in \mathcal{H}$, we split $f$ into the part along $g$ and the rest:
$$f = (g, f)\,\frac{g}{\|g\|^2} + \left(f - (g, f)\,\frac{g}{\|g\|^2}\right).$$
The second term in this sum is orthogonal to $g$ (or $g_B$) and therefore belongs to $N$, where $B$ is zero. Therefore, using the linearity of $B$,
$$B(f) = B\!\left((g, f)\,\frac{g}{\|g\|^2}\right) = \frac{(g, f)}{\|g\|^2}\, B(g).$$
On the other hand,
$$(g_B, f) = \left(\frac{B(g)^*}{\|g\|^2}\, g,\; f\right) = \frac{(g, f)}{\|g\|^2}\, B(g) = B(f),$$
which shows that $B(f) = (g_B, f)$ for all $f \in \mathcal{H}$. Also, the choice of $g$ in the one-dimensional space is immaterial, because if $g' = ag$ ($a \neq 0$) is chosen, then
$$g_B' = \frac{B(ag)^*}{\|ag\|^2}\, ag = g_B,$$
showing the uniqueness.
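A minimal finite-dimensional sketch of the Riesz-Frechet construction (not from the book; NumPy, with an arbitrary coefficient vector $w$ as a hypothetical example of a bounded functional on $\mathbb{C}^5$):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 5

# An arbitrary bounded linear functional on C^dim: B(f) = sum_i w_i f_i.
w = rng.normal(size=dim) + 1j * rng.normal(size=dim)
B = lambda f: np.dot(w, f)

# Follow the proof: pick g orthogonal to the null space of B with B(g) != 0
# (here g = conj(w) works), then set g_B = B(g)* g / ||g||^2.
g = np.conj(w)
gB = np.conj(B(g)) * g / np.vdot(g, g).real

# Verify B(f) = (g_B, f), with the convention (x, y) = sum_i conj(x_i) y_i.
for _ in range(3):
    f = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    print(np.isclose(B(f), np.vdot(gB, f)))   # True each time
```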

8.11 Adjoint of a Bounded Operator

Let $T$ be a bounded linear operator on $\mathcal{H}$ with $\|Tg\| \le \|T\|\,\|g\|$. Choose a fixed vector $f$ and define a bounded linear functional $F$ on $\mathcal{H}$ by
$$F(g) = (f, Tg).$$
It is bounded because
$$|F(g)| = |(f, Tg)| \le \|f\|\,\|Tg\| \le \big(\|T\|\,\|f\|\big)\,\|g\|.$$
Therefore, by the Riesz-Frechet theorem, there exists a unique vector $h$ such that
$$(f, Tg) = F(g) = (h, g).$$


The mapping from any $f \in \mathcal{H}$ to $h \in \mathcal{H}$ defined in this manner is linear. The operator so defined is called the adjoint of $T$ and written $T^{\dagger}$:
$$h = T^{\dagger} f, \qquad\text{and}\qquad (f, Tg) = (T^{\dagger} f, g).$$
The adjoint $T^{\dagger}$ is also bounded because $T$ is bounded:
$$\|T^{\dagger} f\|^2 = |(T^{\dagger} f, T^{\dagger} f)| = |(f, T T^{\dagger} f)| \le \|f\|\,\|T(T^{\dagger} f)\| \le \|f\|\,\|T\|\,\|T^{\dagger} f\|.$$
Therefore, dividing by $\|T^{\dagger} f\|$ (if $\|T^{\dagger} f\| = 0$, there is nothing to prove),
$$\|T^{\dagger} f\| \le \|T\|\,\|f\|.$$
For $T, S$ bounded and $c$ a complex number,
$$(cT)^{\dagger} = c^* T^{\dagger}, \qquad (TS)^{\dagger} = S^{\dagger} T^{\dagger}, \qquad (T^{\dagger})^{\dagger} = T.$$
A bounded linear operator $T$ which is identical to its own adjoint, $T = T^{\dagger}$, is called a bounded self-adjoint operator, or a Hermitian operator. We reserve the word Hermitian for bounded operators only. Sums and products of bounded operators are bounded, and multiplication of a bounded operator by a number gives a bounded operator.

8.12 Projection Operators

If $M \subset \mathcal{H}$ is a subspace, then any $f \in \mathcal{H}$ can be uniquely decomposed as $f = \tilde g + (f - \tilde g)$ with $\tilde g \in M$ and $(f - \tilde g)$ orthogonal to the whole of $M$. (See Sect. 8.6 for details.) We define a linear operator, called the projection operator $P_M : \mathcal{H} \to M$, corresponding to the subspace $M$. This operator acting on any $f \in \mathcal{H}$ gives
$$P_M f = \tilde g. \tag{8.8}$$
It is bounded because, by the Pythagoras theorem, $\|f\|^2 = \|\tilde g\|^2 + \|f - \tilde g\|^2$ and so $\|P_M f\|^2 = \|\tilde g\|^2 \le \|f\|^2$. Moreover, it has the property that $P_M^2 = P_M$, because for every $f \in \mathcal{H}$
$$P_M^2 f = P_M(\tilde g) = \tilde g = P_M f.$$

Furthermore, the operator $P_M$ is Hermitian too: if $P_M f_1 = \tilde g_1$ and $P_M f_2 = \tilde g_2$ for any $f_1, f_2 \in \mathcal{H}$,
$$(f_1, P_M f_2) = (f_1 - \tilde g_1 + \tilde g_1,\; \tilde g_2) = (\tilde g_1, \tilde g_2)$$
because $f_1 - \tilde g_1$ is orthogonal to the whole of $M$. Similarly,
$$(P_M f_1, f_2) = (\tilde g_1,\; f_2 - \tilde g_2 + \tilde g_2) = (\tilde g_1, \tilde g_2),$$

which shows that $(f_1, P_M f_2) = (P_M f_1, f_2)$ and that $P_M$ is Hermitian.

Conversely, if there is a Hermitian operator $P$ which satisfies $P^2 = P$, then the set $M$ of vectors $g \in \mathcal{H}$ which are the images of vectors $f \in \mathcal{H}$ (i.e., $Pf = g$) constitutes the subspace on which it projects. Writing $f = Pf + (1 - P)f$, we see that $(1 - P)f$ is always orthogonal to $Pf \in M$ and so belongs to $M^{\perp}$. Therefore $1 - P_M = P_{M^{\perp}}$.

Exercise 8.4 Let $\{e_1, \ldots, e_n\}$ be an orthonormal set, and $M$ the subspace spanned by it. Show that the projection operator $P_M$ onto this subspace is given by
$$P_M f = \sum_{i=1}^{n} (e_i, f)\, e_i.$$

Exercise 8.5 Subspaces $M_1$ and $M_2$ are called orthogonal if every vector of $M_1$ is orthogonal to every vector of $M_2$. This means that $M_1 \subset (M_2)^{\perp}$ (and $M_2 \subset (M_1)^{\perp}$). Show that the corresponding projection operators $P_1$ and $P_2$ satisfy $P_1 P_2 = 0 = P_2 P_1$. If $M_1 \subset M_2$, then prove that $P_1 P_2 = P_2 P_1 = P_1$.
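A quick numerical sketch related to Exercises 8.4 and 8.5 (not the author's solution; NumPy, with arbitrary subspaces of $\mathbb{C}^6$ chosen for illustration):

```python
import numpy as np

dim = 6
basis = np.eye(dim)

def projector(vectors):
    """P_M = sum_i |e_i><e_i| onto the span of the given vectors (orthonormalized via QR)."""
    q, _ = np.linalg.qr(np.column_stack(vectors))
    return q @ q.conj().T

# Two orthogonal subspaces: span{e_0, e_1} and span{e_2, e_3}.
P1 = projector([basis[:, 0], basis[:, 1]])
P2 = projector([basis[:, 2], basis[:, 3]])

print(np.allclose(P1 @ P1, P1), np.allclose(P1, P1.conj().T))  # P^2 = P and P = P^dagger
print(np.allclose(P1 @ P2, 0), np.allclose(P2 @ P1, 0))        # orthogonal subspaces

# M1 ⊂ M2: compare span{e_0} with span{e_0, e_1}.
P_small = projector([basis[:, 0]])
print(np.allclose(P_small @ P1, P_small), np.allclose(P1 @ P_small, P_small))
```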

8.13 Unitary Operators

A bounded linear operator $T$ on $\mathcal{H}$ has an inverse if it is a one-to-one, onto mapping of $\mathcal{H}$ to itself. The inverse operator, if it exists, is linear too and is denoted by $T^{-1}$. As the product of an operator and its inverse is the identity mapping, $T T^{-1} = 1$. The inverse of the inverse is the original operator.

A bounded linear operator $U$ which is one-to-one and onto, and such that the inverse operator $U^{-1}$ is the same as its adjoint $U^{\dagger}$, is called unitary. For a unitary operator
$$(Uf, Ug) = (U^{\dagger} U f, g) = (U^{-1} U f, g) = (f, g).$$
Therefore, the inner product is unchanged by a unitary operator, $(Uf, Ug) = (f, g)$. In particular the norm is preserved, $\|Uf\| = \|f\|$. These facts are of great importance in quantum mechanics.

8.14 Eigenvalues and Spectrum

The question of eigenvalues is central to quantum mechanics. A non-zero vector $g$ is called an eigenvector of an operator $A$ if
$$Ag = \lambda g.$$
The number $\lambda$ is called the eigenvalue. The process of finding all the eigenvectors $g$ and corresponding eigenvalues is called solving the eigenvalue problem.

If the Hilbert space $\mathcal{H}$ is of finite dimension (say $n$), we can choose an orthonormal basis
$$e_1, e_2, \ldots, e_n, \qquad (e_i, e_j) = \delta_{ij},$$
in it. The vectors $f$ of the space can be expanded in the basis as $f = \sum_i c_i e_i$ and so can be represented by column vectors of size $n$ with complex entries $c_i$. An operator $A$ is then represented by a complex matrix. As
$$Af = A\Big(\sum_i c_i e_i\Big) = \sum_i c_i\, A e_i,$$
expanding each $Ae_i$ in the basis and taking the inner product with $e_j$,
$$A_{ji} = (e_j, A e_i) \qquad\text{and so}\qquad (e_j, Af) = \sum_i A_{ji}\, c_i.$$
Thus $Af$ is represented by the column vector obtained from the column vector representing $f$ by the action of the matrix $A_{ji}$. The eigenvalue equation $Ag = \lambda g$ for the operator $A$ then becomes the system of linear homogeneous equations for the coefficients $c_i$ of $g = \sum_i c_i e_i$:
$$\sum_i (\lambda \delta_{ji} - A_{ji})\, c_i = 0, \qquad j = 1, \ldots, n.$$
Since $g \neq 0$, the $c_i$ cannot all be zero; a non-trivial solution for them is possible only if
$$\det(\lambda \delta_{ji} - A_{ji}) = 0.$$
For a Hermitian operator $A$, the matrix $A_{ji}$ is Hermitian, its eigenvalues are all real, and eigenvectors belonging to unequal eigenvalues are orthogonal. If $A_{ji}$ is unitary, then all eigenvalues are complex numbers of modulus unity, and eigenvectors belonging to different eigenvalues are orthogonal.
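These statements are easy to check numerically. The sketch below (not from the book; NumPy, with randomly generated matrices as illustrative examples) verifies the eigenvalue equation, the reality of Hermitian eigenvalues, and the unit-modulus eigenvalues of a unitary matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Hermitian matrix: real eigenvalues, orthonormal eigenvectors.
H = M + M.conj().T
evals, evecs = np.linalg.eigh(H)
print(evals)                                            # real numbers
print(np.allclose(H @ evecs, evecs * evals))            # eigenvalue equation A g = lambda g
print(np.allclose(evecs.conj().T @ evecs, np.eye(n)))   # eigenvectors are orthonormal

# Unitary matrix (QR factor of M): eigenvalues of modulus one.
U, _ = np.linalg.qr(M)
print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))   # True
```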


When the space is infinite-dimensional, even if we choose an infinite orthonormal basis, we cannot define the determinant of a matrix of infinite size to solve the eigenvalue problem. But we know that the existence of a non-zero determinant of a finite matrix guarantees that its inverse matrix exists. In other words, the condition for eigenvectors to exist, namely $\det(\lambda 1 - A) = 0$, means that the operator $(\lambda 1 - A)$ fails to have an inverse. We use this to define the eigenvalue problem in infinite dimensions as follows.

A complex number $\lambda$ is said to be in the resolvent set of a linear operator $A$ in a Hilbert space $\mathcal{H}$ if the inverse of $(\lambda 1 - A)$ exists. All other points in the complex $\lambda$-plane are said to be in the spectrum of $A$. The operator $(\lambda 1 - A)^{-1}$, when it exists, is called the resolvent of $A$. An eigenvalue $\lambda$ of $A$, which satisfies
$$(\lambda - A)g = 0,$$
if it exists, belongs to the spectrum and is said to be in the point spectrum of $A$. There may be other values of $\lambda$ in the spectrum which are not eigenvalues.

For a Hermitian operator $T$, we try to find the inverse of $\lambda - T$. The inverse will exist if the operator $\lambda - T$ is one-to-one (injective) and onto (surjective). It can fail to have an inverse if either of these conditions fails. We do not prove the surjective nature here, but we prove the one-to-one nature. If there were two distinct vectors mapped to the same vector by $\lambda - T$, then their difference $f \neq 0$ would have to satisfy $(\lambda - T)f = 0$. Choose $\lambda = a + ib$ with $a$ and $b$ real. Then,
$$\|(a + ib - T)f\|^2 = ((a + ib - T)f, (a + ib - T)f) = (f, (a - ib - T)(a + ib - T)f) = \|(a - T)f\|^2 + b^2\,\|f\|^2.$$
Therefore if $b \neq 0$ (i.e., $\lambda$ has a non-zero imaginary part), the inverse will exist, and $\lambda$ belongs to the resolvent set. The spectrum therefore has to lie on the real line.

Example On $\mathcal{H} = L^2([a, b])$, the space of square-integrable functions $f(x)$ on the interval $[a, b]$, define the "position operator" $X$ as
$$(Xf)(x) = x f(x), \qquad a \le x \le b.$$
It is a bounded Hermitian operator. Its resolvent
$$\left(\frac{1}{\lambda - X}\, f\right)(x) = \frac{f(x)}{\lambda - x},$$


exists for all $\lambda$ outside the interval $[a, b]$. Thus the spectrum of $X$ is the real interval $[a, b]$. However, if $\lambda \in [a, b]$, then there is no eigenvector $g \in \mathcal{H}$ which can satisfy
$$((\lambda - X)g)(x) = (\lambda - x)g(x) = 0.$$
In other words, although the spectrum consists of the real interval $[a, b]$, there is no point spectrum, only continuous spectrum.

Exercise 8.6 Show that a projection operator has eigenvalues 1 and 0. Every vector in the range $M$ of $P$, the subspace onto which it projects, is an eigenvector with eigenvalue 1, and every vector in the orthogonal complement $M^{\perp}$ is an eigenvector with eigenvalue 0. What is the inverse operator $(\lambda - P)^{-1}$? Hint: Try, formally (to be justified later), using $P^2 = P$, etc.,
$$\frac{1}{\lambda - P} = \frac{1}{\lambda}\,\frac{1}{1 - P/\lambda}
= \frac{1}{\lambda}\left(1 + \frac{P}{\lambda} + \frac{P}{\lambda^2} + \cdots\right)
= \frac{1}{\lambda}\left(1 - P + \frac{P}{1 - 1/\lambda}\right)
= \frac{1}{\lambda}\,(1 - P) + \frac{1}{\lambda - 1}\, P.$$

This is indeed such that $(\lambda - P)^{-1}(\lambda - P) = 1$. This expression also shows that the resolvent, or inverse, exists for all $\lambda$ except $\lambda = 0$ and $\lambda = 1$, which are the eigenvalues. This example shows that there can be an infinite number of eigenvectors (whole subspaces) belonging to the same eigenvalue. The subspace of all vectors belonging to the same eigenvalue is called the eigen-space of that eigenvalue.

Spectrum of a Unitary Operator Let $U$ be unitary with $\|U\| = 1 = \|U^{-1}\|$. Then
$$\lambda - U = \lambda U^{-1} U - U = (\lambda U^{-1} - 1)\,U.$$
Taking the inverse formally, and assuming that $|\lambda| < 1$,
$$\frac{1}{\lambda - U} = -U^{-1}\,\frac{1}{1 - \lambda U^{-1}} = -U^{-1}\big(1 + \lambda U^{-1} + \lambda^2 U^{-2} + \cdots\big),$$
and the series converges, so $\lambda$ is in the resolvent set.


If $|\lambda| > 1$, then write
$$\lambda - U = \lambda\left(1 - \frac{1}{\lambda}\, U\right),$$
so that
$$\frac{1}{\lambda - U} = \frac{1}{\lambda}\,\frac{1}{1 - U/\lambda} = \lambda^{-1}\big(1 + \lambda^{-1} U + \lambda^{-2} U^2 + \cdots\big),$$
which again converges, and so all values of $\lambda$ outside the unit circle ($|\lambda| > 1$) are also in the resolvent set. This leaves only the values $|\lambda| = 1$: the spectrum of $U$ must lie on the unit circle.

Exercise 8.7 Check that the two expressions for the resolvent $1/(\lambda - U)$ given in the above example for $|\lambda| < 1$ and $|\lambda| > 1$ are indeed such that $(\lambda - U)\,[1/(\lambda - U)] = 1$, by multiplying the two infinite series by $(\lambda - U)$ term-by-term.
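A small numerical companion to Exercises 8.6 and 8.7 (not from the book; NumPy, with an arbitrary projection and unitary matrix as examples): it checks the closed-form resolvent of a projection, the unit-modulus spectrum of a unitary, and the convergence of the geometric series for $|\lambda| > 1$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

# Resolvent of a projection P: (lambda - P)^(-1) = (1-P)/lambda + P/(lambda-1).
q, _ = np.linalg.qr(rng.normal(size=(n, 2)))
P = q @ q.T
lam = 2.7                                   # any lambda other than 0 and 1
formula = (np.eye(n) - P) / lam + P / (lam - 1.0)
print(np.allclose(formula, np.linalg.inv(lam * np.eye(n) - P)))       # True

# Spectrum of a unitary U lies on the unit circle; for |lambda| > 1 the
# series lambda^-1 (1 + U/lambda + ...) converges to the true resolvent.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))                 # True

lam = 1.8
series, term = np.zeros((n, n), complex), np.eye(n, dtype=complex)
for _ in range(200):
    series = series + term / lam
    term = term @ U / lam
print(np.allclose(series, np.linalg.inv(lam * np.eye(n) - U)))        # True
```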

8.15 Spectral Theorem

We know that eigenvectors belonging to distinct eigenvalues $\lambda_1, \lambda_2$, etc. of a Hermitian operator $A$ are orthogonal. If there are no degeneracies, that is, if there is a single eigenvector for each eigenvalue, then we can ask whether these eigenvectors (properly normalized) provide an orthonormal basis. They actually do, and we can write a completeness relation:
$$\sum_n P_n = 1, \tag{8.9}$$
where $P_n$ is the projection operator onto the one-dimensional space spanned by $e_n$, the eigenvector of $A$. That is, $A e_n = \lambda_n e_n$, $(e_n, e_m) = \delta_{nm}$. In Dirac's notation this will be written as
$$\sum_n |e_n\rangle\langle e_n| = 1. \tag{8.10}$$
Multiplying either of these completeness relations (8.9) and (8.10) by $A$ on the left gives us the spectral formula for the operator:
$$A = \sum_n \lambda_n P_n. \tag{8.11}$$


When there are degeneracies, the degenerate eigenvalues are associated not with single one-dimensional eigen-spaces but with many-dimensional, even infinitely many-dimensional, eigen-spaces. But the formula $A = \sum_n \lambda_n P_n$ will still be true. Actually, even for self-adjoint unbounded operators this formula is true, with the modification that continuous eigenvalues in the spectrum must be included.
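The finite-dimensional content of the spectral theorem is easy to see numerically. The sketch below (not from the book; NumPy, with a random Hermitian matrix as an example) rebuilds $A$ from $\sum_n \lambda_n P_n$ and checks the completeness relation:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M + M.conj().T                      # a Hermitian operator on C^4

evals, evecs = np.linalg.eigh(A)

# Spectral decomposition A = sum_n lambda_n P_n with P_n = |e_n><e_n|.
A_rebuilt = sum(lam * np.outer(v, v.conj()) for lam, v in zip(evals, evecs.T))
print(np.allclose(A_rebuilt, A))        # True

# Completeness relation sum_n P_n = 1.
identity = sum(np.outer(v, v.conj()) for v in evecs.T)
print(np.allclose(identity, np.eye(n))) # True
```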

8.16 Density Matrix

Let $\{e_i = |i\rangle\}$ be an orthonormal basis. The expectation value of any observable $A$ in the state vector $f$ ($\langle f|f\rangle = 1$) can be written as
$$(f, Af) = \langle f|A|f\rangle = \sum_{i,j} \langle f|i\rangle\langle i|A|j\rangle\langle j|f\rangle = \sum_{i,j} A_{ij}\,(P_f)_{ji} = \mathrm{Tr}(A P_f),$$
where $A$ and $P_f$ are the matrices in this basis corresponding to the operator $A$ and the projection operator $P_f = |f\rangle\langle f|$, respectively. The trace is actually independent of the basis chosen, as is clear from the expression above. The projection operator (or its matrix) is called the density matrix. A projection operator like $P_f = |f\rangle\langle f|$ is called the density matrix of a pure state $f$.

Since the density matrix gives the average or expectation value, it is traditional to relate it to an ensemble, that is, a large number of separate copies of the physical system. When all the systems in the ensemble are in the same state $f$, measurement of $A$ on all of them will provide the average value by the formula above.

It is one of the hypotheses of quantum mechanics that after the measurement the system will acquire the eigenstate of the measured value. If $f = \sum_n c_n \phi_n$, where the $\phi_n$ are the eigenvectors of $A$ with eigenvalues $a_n$ (and we assume that all the eigenspaces are one-dimensional for simplicity), then the probability of obtaining the result $a_n$ is $p_n = |c_n|^2$. Therefore, if initially there were $N$ copies of the system in the state $f$ in the ensemble, then after the measurement there will be $p_1 N$ copies in state $\phi_1$, $p_2 N$ copies in $\phi_2$, and so on. Then we say that the ensemble has become mixed. We write a density matrix as
$$\rho = p_1 P_{\phi_1} + p_2 P_{\phi_2} + \cdots = \sum_k p_k\, |\phi_k\rangle\langle\phi_k|, \qquad A|\phi_k\rangle = a_k |\phi_k\rangle,$$
and check that
$$\mathrm{Tr}(A\rho) = \sum_{m,n,k} p_k\, \langle m|A|n\rangle\langle n|P_k|m\rangle = \sum_k p_k\, a_k = (f, Af).$$
Although $\mathrm{Tr}(A P_f)$ and $\mathrm{Tr}(A\rho)$ both give the same result $(f, Af)$, the process of measurement has converted the pure density matrix into a mixed density matrix.


Note that although $\rho$ above is a positive linear combination of projection operators, it is not itself a projection operator, because $\rho^2$ is not $\rho$. It is possible to define the state of a system not by a unit vector or a unit ray but by a density matrix. Then we say that the state of a system is pure or mixed depending on whether the density matrix is pure or a mixture. In theories of measurement this terminology is helpful.
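A minimal two-level sketch of these statements (not from the book; NumPy, with an arbitrary observable and state chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -1.0]])                 # observable with eigenvalues +1, -1
f = np.array([3.0, 4.0]) / 5.0              # normalized state: c_1 = 3/5, c_2 = 4/5

P_f = np.outer(f, f.conj())                 # pure density matrix |f><f|
p = np.abs(f) ** 2                          # probabilities p_n = |c_n|^2
rho = sum(pk * np.outer(e, e) for pk, e in zip(p, np.eye(2)))   # mixed density matrix

print(np.trace(A @ P_f), np.trace(A @ rho), f.conj() @ A @ f)   # all equal (-0.28)
print(np.allclose(P_f @ P_f, P_f))          # True : pure state,  P_f^2 = P_f
print(np.allclose(rho @ rho, rho))          # False: mixture,     rho^2 != rho
```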

8.17 Notes

This chapter has been written with the hope that it will provide some flavor of what rigorous statements about concepts which physicists use on a daily basis look like. Hilbert spaces are a part of all mathematics textbooks on functional analysis, but one has to go through a number of initial chapters to reach them. The authoritative source is, of course, the person who brought Hilbert spaces to quantum mechanics in the first place: John von Neumann. His Mathematical Foundations of Quantum Mechanics, first published in 1955 in typewriter font, has recently been reprinted in TeX [1] with a note by Freeman Dyson. The book had unfairly acquired a reputation among physicists of being difficult to read. This is not so. Chapter 2 of this classic (which covers in greater detail the contents of this chapter) can be, and should be, read by any student of quantum mechanics. Hilbert spaces and operators on them are introduced in Reed and Simon [2], Chapters II, VI, and VII. There is also a fairly detailed treatment in Chap. 5 of T. Kato [3].

References
1. J. von Neumann, Mathematical Foundations of Quantum Mechanics (New Edition), ed. by N.A. Wheeler (Princeton University Press, Princeton and Oxford, 2018)
2. M. Reed, B. Simon, Methods of Modern Mathematical Physics, vol. I: Functional Analysis (Academic Press, New York and London, 1972)
3. T. Kato, Perturbation Theory for Linear Operators (Springer, Berlin, 1966)

9 What Is an "Essentially Self-Adjoint" Operator?

Abstract A brief introduction to the definition of an essentially self-adjoint operator in a Hilbert space is given.

9.1 Introduction

The formalism of quantum theory is linear. That greatly simplifies things. But what one gains in simplicity from the linear character of the space of states is more than lost to the fact that the space is, in general, infinite-dimensional. Important physical observables, like momentum and energy, are represented by unbounded operators, often with continuous spectrum. These operators require careful handling. One of the difficulties is that naive manipulation of operators is no longer possible. An example (perhaps going back to the 1950s) is this. In one dimension the wave function
$$\psi = \frac{1}{|x|^{3/2}}\, \exp\!\left(-\frac{1}{4x^2}\right)$$
is square integrable and hence belongs to the Hilbert space of wave functions in one dimension. The operator
$$A = p\,x^3 + x^3 p, \qquad p = -i\,\frac{d}{dx},$$
is Hermitian in the usual naive sense:
$$\int_{-\infty}^{\infty} \phi^*(x)\, A\chi(x)\, dx = \int_{-\infty}^{\infty} (A\phi)^*(x)\, \chi(x)\, dx.$$


But a simple calculation gives
$$A\psi = -i\,\psi,$$
which shows that $A$ has an imaginary eigenvalue! How can a Hermitian operator have an imaginary eigenvalue?

Exercise 9.1 Check the above statement. (A symbolic check is sketched at the end of this introduction.)

In this chapter we discuss some of the problems in defining self-adjoint operators in an infinite-dimensional Hilbert space. A complete account would be too long for a chapter. The purpose here is more in the nature of creating awareness about some of these questions.

Why do we need self-adjoint operators in quantum mechanics? The reason is that the interpretation of the theory is not possible without concepts like:

1. the observed or measured values of physical quantities, which are eigenvalues (or points in the spectrum) of self-adjoint operators,
2. the probabilities of obtaining those eigenvalues as a result of measurement, and
3. the unitary development of the state of the system, which is possible only if the Hamiltonian is self-adjoint.

We begin with unbounded operators, assuming the reader to be familiar with elementary notions of a Hilbert space and bounded operators as given in the previous chapter.
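The symbolic check of Exercise 9.1 can be sketched as follows (not from the book; SymPy, worked on the half-line $x > 0$ where $|x| = x$; the region $x < 0$ works out the same way by symmetry, and units with $\hbar = 1$ are assumed):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
psi = x**sp.Rational(-3, 2) * sp.exp(-1 / (4 * x**2))

# A = p x^3 + x^3 p with p = -i d/dx.
p = lambda f: -sp.I * sp.diff(f, x)
A = lambda f: p(x**3 * f) + x**3 * p(f)

print(sp.simplify(A(psi) / psi))   # expect -I, i.e. A psi = -i psi
```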

9.2 Unbounded Operators

Recall that a bounded operator $T$ is one for which $\|Te\|$ stays bounded over all unit vectors $e \in \mathcal{H}$. Therefore, an unbounded operator is one for which the norm $\|Te\|$ can keep on increasing indefinitely as we go over all the unit vectors. An unbounded operator cannot be defined everywhere on the Hilbert space, because there is no vector of infinite norm.

Simplest Example of an Unbounded Operator The simplest example of an unbounded operator is the position operator $x$ of a non-relativistic one-dimensional system, where the Hilbert space $\mathcal{H} = L^2(\mathbb{R})$ consists of all square-integrable complex functions $\psi(x)$ defined on the real line:
$$\mathcal{H} = L^2(\mathbb{R}) = \left\{\psi \;\Big|\; \int_{-\infty}^{\infty} |\psi(x)|^2\, dx < \infty \right\}.$$


But $x$ cannot be defined for those $\psi$ for which $\int |x\,\psi(x)|^2\, dx$ is infinite, even though $\psi$ is square integrable.

9.2.1 Domain and Range of an Operator

Since an unbounded operator $A$ cannot be defined everywhere on $\mathcal{H}$, we must specify the subset $D(A) \subset \mathcal{H}$ on which it can be defined. $D(A)$ is called the domain of definition of $A$. If $A$ is linear, then the domain $D(A) \subset \mathcal{H}$ has to be a linear manifold to allow $A(af_1 + f_2) = aAf_1 + Af_2$. The image of the vectors mapped by $A$, the range $R(A) \subset \mathcal{H}$ of the operator $A$, is also a linear manifold:
$$R(A) = \{g \in \mathcal{H} \mid g = Af,\; f \in D(A)\}.$$
An operator $A$ is called equal to $B$ (and we write $A = B$) if and only if $D(A) = D(B)$ and $Af = Bf$ for all $f \in D(A)$. An operator $B$ is called an extension of an operator $A$ if $D(A) \subset D(B)$ and $Af = Bf$ for all $f \in D(A)$. We also say that the operator $A$ is a restriction of the operator $B$. We write $A \subseteq B$ or $B \supseteq A$ in this case.

If an unbounded operator $A$ cannot be defined everywhere on $\mathcal{H}$, then we expect the next best thing: that its domain $D(A)$ is dense in $\mathcal{H}$. A subset $D$ of $\mathcal{H}$ is called dense (or everywhere dense) if, given any $f \in \mathcal{H}$, there is an element of $D$ arbitrarily close to it. It is like the rational numbers in the real line: every real number can be arbitrarily closely approximated by a rational number. Similarly, given a vector $f \in \mathcal{H}$ and any $\epsilon > 0$, however small, if there is a $g \in D$ such that $\|f - g\| < \epsilon$, then we say $D$ is dense in $\mathcal{H}$. We will see in Sect. 9.5 that the requirement of a dense domain is natural for defining the adjoint of an unbounded operator.

9.3 Graph of an Operator

We construct the space $\mathcal{H} \oplus \mathcal{H}$. As pointed out in the previous chapter, $\mathcal{H} \oplus \mathcal{H}$ is a Hilbert space, and the original two Hilbert spaces can be identified with the subspaces of vectors of the form $\langle f, 0\rangle$ and $\langle 0, f\rangle$. These subspaces are orthogonal complements of each other in $\mathcal{H} \oplus \mathcal{H}$.

The graph $G(A)$ of an operator $A$ is the subset $\{\langle f, Af\rangle \mid f \in D(A)\}$ of $\mathcal{H} \oplus \mathcal{H}$. This is in complete analogy with the graph of a function of a real variable plotted in the $x$-$y$ plane (Fig. 9.1). The graph $G(A)$ has the following properties:

1. As there is a unique $Af$ for each $f \in D(A)$, the pairs $\langle f, g\rangle$ and $\langle f, h\rangle$ cannot both belong to it unless $g = h$. As $A$ is linear, this means that if $\langle 0, h\rangle \in G(A)$, then $h = 0$.


Fig. 9.1 Symbolic representation of the graph of an operator

2. $G(A)$ is a linear manifold because
$$\langle f_1, Af_1\rangle + \langle f_2, Af_2\rangle = \langle f_1 + f_2, A(f_1 + f_2)\rangle, \qquad\text{and}\qquad c\,\langle f, Af\rangle = \langle cf, A(cf)\rangle.$$

A linear manifold need not be closed, of course. Two operators are equal if their graphs are identical. An operator $A$ is a restriction of $B$ (that is, $A \subseteq B$) if $G(A) \subseteq G(B)$.

9.4 Closed Operator

An operator $A$ is called closed if $G(A)$ is a closed linear manifold, that is, a subspace. (See Sect. 8.6 for the definition of a closed linear manifold in a Hilbert space.) Another way to put it is as follows: $A$ is closed if, whenever a sequence $\{f_n\} \in D(A)$ converges and the sequence $\{Af_n\}$ also converges, then $f = \lim f_n$ belongs to $D(A)$ and $Af = g$, where $g = \lim Af_n$.

We can ask: if $G(A)$ is not closed, why not take its closure $\overline{G(A)}$ to define the graph of a closed operator? The catch here is that $\overline{G(A)}$ may fail to be the graph of any operator. That is, there is no guarantee that there may not exist two sequences $f_1, f_2, \ldots$ and $g_1, g_2, \ldots$ in $D(A)$, both converging to the same $f$, but with different limit vectors for the images: $\lim Af_n = h_1$ and $\lim Ag_n = h_2$, so that the vectors $\langle f, h_1\rangle$ and $\langle f, h_2\rangle$ are both in $\overline{G(A)}$ but $h_1 \neq h_2$. In those cases where this does not happen, and $\overline{G(A)}$ is indeed the graph of a well-defined operator, we say that $A$ is closable, and the operator determined by $\overline{G(A)}$ is called the closure of $A$, denoted by $\bar{A}$.

9.5 Adjoint in the General Case

In Sect. 8.11 we defined the adjoint of a bounded linear operator using the Riesz-Frechet theorem: for a fixed $g \in \mathcal{H}$, there was a bounded linear functional $f \to (g, Af)$ on the whole of $\mathcal{H}$, so that there was a unique vector $h$ such that $(g, Af) = (h, f)$. We then defined $A^{\dagger} g = h$. But for a general operator, which cannot be defined everywhere, we need a new definition.

In $\mathcal{H} \oplus \mathcal{H}$, a unitary operator $V$ is defined as follows. For any $f, g \in \mathcal{H}$,
$$V \langle f, g\rangle = \langle g, -f\rangle.$$
As defined, $V$ is unitary:
$$(V\langle f_1, g_1\rangle, V\langle f_2, g_2\rangle) = (\langle g_1, -f_1\rangle, \langle g_2, -f_2\rangle) = (g_1, g_2) + (f_1, f_2) = (\langle f_1, g_1\rangle, \langle f_2, g_2\rangle),$$
and, moreover, $V^2 = -1$. We expect the adjoint of $A$ to satisfy an equation of the type
$$(f, Ag) - (A^{\dagger} f, g) = 0, \qquad f, g \in \mathcal{H}.$$
This can be written in $\mathcal{H} \oplus \mathcal{H}$ as
$$(\langle f, A^{\dagger} f\rangle, \langle Ag, -g\rangle) = (\langle f, A^{\dagger} f\rangle, V\langle g, Ag\rangle) = 0.$$
Notice the appearance of members of the supposed graph $G(A^{\dagger})$ of $A^{\dagger}$ as the left member of the inner product. This suggests the following definition: the adjoint $A^{\dagger}$ of a linear operator $A$ is the operator whose graph $G(A^{\dagger})$ is the subspace $(V G(A))^{\perp}$ in $\mathcal{H} \oplus \mathcal{H}$.

It is not clear whether the set $G(A^{\dagger}) = (V G(A))^{\perp}$ is a proper graph, that is, whether there is only one image when $A^{\dagger}$ acts on a vector. We must make sure that there are no vectors of the form $\langle 0, h\rangle \in (V G(A))^{\perp}$ with $h \neq 0$. In other words, if we find $\langle 0, h\rangle \in G(A^{\dagger})$, then $h$ has to be the zero vector, $h = 0$. Let $\langle 0, h\rangle \in (V G(A))^{\perp}$. This means that $(\langle 0, h\rangle, \langle Ag, -g\rangle) = 0$ for all $g \in D(A)$; or, $(h, -g) = 0$ for every $g \in D(A)$. The condition for the existence of the adjoint $A^{\dagger}$ therefore becomes the demand that any vector orthogonal to all vectors of the domain $D(A)$ should be the zero vector.


If the linear manifold $D(A)$ were the whole space $\mathcal{H}$, this would be automatically satisfied, as indeed it is in the case of a bounded operator. But if $D(A)$ is dense, so that there is a vector of $D(A)$ arbitrarily close to any vector of $\mathcal{H}$, then the only vector orthogonal to the whole of $D(A)$ can only be $0$. So, briefly,
$$A^{\dagger}\ \text{is well defined if}\ D(A)\ \text{is dense in}\ \mathcal{H}. \tag{9.1}$$

Exercise 9.2 Show that for a linear operator $A$ whose domain is dense (so that $A^{\dagger}$ is defined), $A^{\dagger\dagger} \equiv (A^{\dagger})^{\dagger}$ is defined if $A$ is closable (for "closable" see Sect. 9.4). Then $A^{\dagger\dagger}$ is equal to the closure $\bar{A}$.

Solution: $G(A^{\dagger\dagger}) = (V G(A^{\dagger}))^{\perp}$, but $V G(A^{\dagger}) = V[(V G(A))^{\perp}]$. As $V$ is unitary, it respects orthogonality; therefore, for any set $M$, $V[M^{\perp}] = [VM]^{\perp}$. So,
$$V G(A^{\dagger}) = V[(V G(A))^{\perp}] = [V^2 G(A)]^{\perp} = [-G(A)]^{\perp} = [G(A)]^{\perp}.$$
Therefore,
$$G(A^{\dagger\dagger}) = (V G(A^{\dagger}))^{\perp} = [(G(A))^{\perp}]^{\perp} = \overline{G(A)},$$
because for any linear manifold $M$, $(M^{\perp})^{\perp} = \overline{M}$, the closure of $M$.

9.6 Symmetric and Self-Adjoint Operators

An operator is called symmetric if $D(A)$ is dense (so that $A^{\dagger}$ is defined) and $(f, Ag) = (Af, g)$ for all $f, g \in D(A)$. A symmetric operator is always a restriction of its adjoint. To show that $A \subseteq A^{\dagger}$ (or $G(A) \subseteq G(A^{\dagger})$), we note that for any $f, g \in D(A)$
$$(f, Ag) - (Af, g) = (\langle f, Af\rangle, V\langle g, Ag\rangle) = 0.$$
This implies that $\langle f, Af\rangle$ is orthogonal to $V G(A)$ in $\mathcal{H} \oplus \mathcal{H}$. As defined in the previous section, the graph of $A^{\dagger}$ contains precisely such orthogonal vectors. So $\langle f, Af\rangle$, which belongs to $G(A)$, is a member of $G(A^{\dagger})$ as well: $\langle f, Af\rangle \in G(A^{\dagger})$. Thus $G(A) \subseteq G(A^{\dagger})$, or $A \subseteq A^{\dagger}$.

An operator is called self-adjoint if $A = A^{\dagger}$, that is, $G(A) = G(A^{\dagger})$.

Momentum Operator in $L^2[a, b]$
$L^2[a, b]$ is the Hilbert space of square-integrable functions defined on the closed interval $[a, b]$ of the real line. The "momentum" operator
$$p = -i\,\frac{d}{dx}$$


has as its domain $D(p)$ all functions $\phi(x)$ for which
$$\int_a^b |\phi'(x)|^2\, dx < \infty, \qquad \phi' \equiv d\phi/dx.$$
For such functions
$$(\phi, p\,\psi) = -i\int_a^b \phi^*(x)\,\psi'(x)\, dx = (p\,\phi, \psi) - i\left(\phi_b^* \psi_b - \phi_a^* \psi_a\right),$$
where we write $\phi_b$ for $\phi(b)$, etc. This shows that our functions should moreover satisfy the condition
$$\phi_b^* \psi_b = \phi_a^* \psi_a$$
for any pair $\phi, \psi \in D(p)$. The boundary condition can be satisfied if, for all wave functions in $D(p)$ and some fixed real $\theta$,
$$\psi_b = e^{i\theta} \psi_a, \qquad \phi_b = e^{i\theta} \phi_a, \quad\text{etc.}$$

Exercise 9.3 Show that for $\mathcal{H} = L^2(\mathbb{R})$, the space of square-integrable functions on the real line, the operator $-d^2/dx^2$ cannot be defined on the function
$$\phi(x) = |x|^{3/2}\, e^{-x^2},$$
as $\phi''(x) \notin \mathcal{H}$. Hint: Near $x = 0$, the function $\phi''(x)$ goes as $|x|^{-1/2}$.

9.6.1 Boundary Conditions for Self-Adjointness of $H = -d^2/dx^2$ on $L^2([a, b])$

On $L^2([a, b])$, the operator $p = -i\,d/dx$ is symmetric if
$$\phi_b^* \psi_b - \phi_a^* \psi_a = 0. \tag{9.2}$$
Within this set of functions, the second derivative will imply further conditions. Firstly, since $p\phi = -i\phi'(x)$ will also be acted upon by another $p$, the derivative $\phi'$ of the wave function should also satisfy condition (9.2):
$$(\phi')_b^*\, \psi'_b - (\phi')_a^*\, \psi'_a = 0. \tag{9.3}$$


Moreover,
$$(\phi, H\psi) = -\int_a^b \phi^*\, \psi''\, dx = +\int_a^b (\phi')^*\, \psi'\, dx - \phi_b^* \psi'_b + \phi_a^* \psi'_a
= (H\phi, \psi) + (\phi')_b^*\, \psi_b - (\phi')_a^*\, \psi_a - \phi_b^* \psi'_b + \phi_a^* \psi'_a.$$
Therefore $H$ will be symmetric if
$$(\phi')_b^*\, \psi_b - (\phi')_a^*\, \psi_a - \phi_b^* \psi'_b + \phi_a^* \psi'_a = 0. \tag{9.4}$$

We can combine the boundary conditions (9.2), (9.3), and (9.4) into a single statement as follows. Define two-dimensional complex vectors:
$$e_\psi = \begin{pmatrix} \psi_b \\ \psi_a \end{pmatrix}, \qquad
f_\psi = \begin{pmatrix} \psi'_b \\ -\psi'_a \end{pmatrix}, \qquad
e_\phi = \begin{pmatrix} \phi_b \\ \phi_a \end{pmatrix}, \qquad
f_\phi = \begin{pmatrix} \phi'_b \\ -\phi'_a \end{pmatrix}.$$
Then, in a two-dimensional Hilbert space,
$$(e_\phi, e_\psi) \equiv e_\phi^{\dagger} e_\psi = \phi_b^* \psi_b + \phi_a^* \psi_a = 2\phi_b^* \psi_b,$$
$$(f_\phi, f_\psi) \equiv f_\phi^{\dagger} f_\psi = (\phi')_b^*\, \psi'_b + (\phi')_a^*\, \psi'_a = 2(\phi')_b^*\, \psi'_b.$$
On the other hand,
$$(f_\phi, e_\psi) = (\phi')_b^*\, \psi_b - (\phi')_a^*\, \psi_a, \qquad
(e_\phi, f_\psi) = \phi_b^* \psi'_b - \phi_a^* \psi'_a,$$
so that, in view of (9.4),
$$(f_\phi, e_\psi) - (e_\phi, f_\psi) = 0.$$
Using these, we see that for any $\phi$ and $\psi$
$$(e_\phi + i f_\phi,\; e_\psi + i f_\psi) = (e_\phi, e_\psi) + (f_\phi, f_\psi) = (e_\phi - i f_\phi,\; e_\psi - i f_\psi).$$
Since a general vector in the two-dimensional Hilbert space can always be written in the form $e_\phi \pm i f_\phi$ (because $\phi_{b,a}$ and $\phi'_{b,a}$ are arbitrary complex numbers as $\phi$ and $\psi$ range over the entire $\mathcal{H}$), the vectors $e_\phi + i f_\phi$ and $e_\phi - i f_\phi$ must be related by a unitary transformation, because their inner product is preserved. That is, there exists a $2 \times 2$ unitary matrix $U$ such that
$$e_\phi + i f_\phi = U\,(e_\phi - i f_\phi),$$


or more explicitly
$$\begin{pmatrix} \phi_b + i\phi'_b \\ \phi_a - i\phi'_a \end{pmatrix}
= U \begin{pmatrix} \phi_b - i\phi'_b \\ \phi_a + i\phi'_a \end{pmatrix}. \tag{9.5}$$
There are important special cases:

$U = 1$: $f_\phi = 0$, i.e. $\phi'_b = 0 = \phi'_a$. Neumann b.c.
$U = -1$: $e_\phi = 0$, i.e. $\phi_b = 0 = \phi_a$. Dirichlet b.c.
$U = \sigma_1$: $\phi_b + i\phi'_b = \phi_a + i\phi'_a$, so that $\phi_b = \phi_a$, $\phi'_b = \phi'_a$. Periodic b.c.

In the last line above, $\sigma_1$ is the first Pauli matrix, and in $\phi_b + i\phi'_b = \phi_a + i\phi'_a$ the two numbers $\phi_b$ and $\phi'_b$ are independent, therefore separately equal to $\phi_a$ and $\phi'_a$, respectively.

9.7 Self-Adjoint Extensions

Let $A$ and $A_1$ be two symmetric operators such that
$$A \subseteq A_1.$$
We say that $A_1$ is a symmetric extension of $A$. What can we say about their adjoints? As $G(A) \subseteq G(A_1)$, we also have $V G(A) \subseteq V G(A_1)$. Therefore any vector in $\mathcal{H} \oplus \mathcal{H}$ which is orthogonal to $V G(A_1)$ is orthogonal to $V G(A)$. Or, using the definition of the adjoint,
$$G(A_1^{\dagger}) = [V G(A_1)]^{\perp} \subseteq [V G(A)]^{\perp} = G(A^{\dagger}).$$
Therefore we have the sequence
$$A \subseteq A_1 \subseteq A_1^{\dagger} \subseteq A^{\dagger}.$$
For several symmetric operators $A \subseteq A_1 \subseteq \cdots \subseteq A_n$, we have similarly
$$A \subseteq A_1 \subseteq \cdots \subseteq A_n \subseteq A_n^{\dagger} \subseteq \cdots \subseteq A_1^{\dagger} \subseteq A^{\dagger}.$$
Thus the larger we choose the symmetric extension, the smaller is its adjoint containing it. There is therefore a possibility of finding a large enough extension $A_E$ of the given symmetric operator $A$ which coincides with its adjoint $A_E^{\dagger}$. There can be, in fact, many extensions of this kind, depending on the "direction" in which we extend.


A symmetric operator which has a unique self-adjoint extension is called essentially self-adjoint.

9.8 Notes

This chapter has been more in the nature of precisely defining, and pointing out, the difficulties encountered in dealing with self-adjoint operators in quantum mechanics. The references [2-4] remain the same as in the last chapter. In addition, see S. M. Roy, Boundary conditions, fractional fermion number and theta vacua [1], from which the example of Sect. 9.6.1 has been taken. Despite its title, the first 30 pages or so of this article give a fairly complete introduction to self-adjointness and the important role boundary conditions play in defining it.

References
1. S.M. Roy, in Recent Advances in Theoretical Physics, ed. by R. Ramachandran (World Scientific, Singapore, 1985)
2. J. von Neumann, Mathematical Foundations of Quantum Mechanics (New Edition), ed. by N.A. Wheeler (Princeton University Press, Princeton and Oxford, 2018)
3. M. Reed, B. Simon, Methods of Modern Mathematical Physics, vol. I: Functional Analysis (Academic Press, New York and London, 1972)
4. T. Kato, Perturbation Theory for Linear Operators (Springer, Berlin, 1966)

10 Is There a Time-Energy Uncertainty Relation?

Abstract The status of time as an observable in quantum mechanics is a vast subject. In this chapter we discuss only three points of view on the uncertainty relation involving time and energy. The books referred to at the end of the chapter provide extensive literature on this topic.

10.1 Introduction

The simple answer to the question in the title of this chapter is "no." An uncertainty relation between two "canonical" observables $A$ and $B$ (with $[A, B] = i\hbar$) requires both of them to be self-adjoint operators in the first place. But time, both in quantum mechanics and in quantum field theory, is a parameter, not an observable.

Even if we could construct a self-adjoint operator $T$ canonical to the energy $H$, such that $[T, H] = -i\hbar$, it would not be good enough, as W. Pauli pointed out long back in 1933 [1]. That argument is simple. For any real number $\varepsilon$, let $U(\varepsilon) = \exp(-iT\varepsilon/\hbar)$ be the unitary operator determined by $T$. Then,
$$U(\varepsilon)\, H\, U(\varepsilon)^{-1} = e^{-iT\varepsilon/\hbar}\, H\, e^{iT\varepsilon/\hbar} = H - \varepsilon.$$
This would make the eigenvalues of $H$ unbounded below as $\varepsilon \to \infty$, and that cannot be allowed for the energy.

Despite this, the time-energy uncertainty relation has been discussed and debated from the earliest days of quantum mechanics. There may be no self-adjoint time operator, but the product of the uncertainty $\Delta t$ in time and the uncertainty $\Delta E$ in energy is interpreted in various situations to satisfy a relation of the type $\Delta t\, \Delta E \sim \hbar$.


10.1.1 Internal and External Time

We have discussed frames of reference in quantum mechanics in an earlier chapter. When we say "position" of a system, it is with reference to a frame used by the observer or the experimenter. This is the external position. But there is also, for a non-relativistic particle at least, an observable $\hat{x}$ which determines the wave function and the probabilities for the system to occupy certain regions. That operator is an internal property of the system.

For time there is a similar distinction. Although there may be no self-adjoint operator called time, we may construct other observables which can play the role of time, or of time uncertainties. Of course, just as the eigenvalues of the position observable of a particle are seen to correspond to values of the coordinates of a frame of reference, the observables constructed for time also have to correspond to some external clock of the frame of reference.

10.1.2 Individual Measurement Versus Statistical Relations

From the beginning of quantum mechanics, the uncertainty relations have been interpreted as our inability to measure two canonical observables accurately at the same time. The argument goes like this: measuring the value of one observable $A$ to within a range $\Delta a$ is seen to disturb the value of the other, canonically conjugate observable $B$ by an unknown amount $\Delta b$, such that the product $\Delta a\, \Delta b$ of these two "uncertainties" cannot be made less than Planck's constant. It is not clear how the initial state of the system is taken into account in these arguments, but they have historical importance for clarifying the physical content of quantum mechanics.

These arguments discuss the uncertainties in the act of a single measurement. This should be contrasted with the standard quantum mechanical interpretation of $\Delta_\psi A$, obtained from statistical averages over a large number of measurements on a system with the fixed initial state $\psi$.

10.1.3 The Einstein-Bohr Argument of 1927

This famous argument by Einstein and its rebuttal by Bohr at the 1927 Solvay Conference is recorded by Bohr in Albert Einstein: Philosopher-Scientist [2]. We go through it for cultural-historical reasons, as it was the first serious discussion between the two great physicists on time-energy uncertainties.

Briefly, the Einstein argument is this. A box contains a clock which can open or close a hole in the wall of the box by some mechanism. The box contains radiation, out of which one photon of known


energy is allowed to escape within an arbitrarily short time interval, at some fixed time measured by a clock. The weight of the box is measured before and after the photon has escaped, with arbitrary accuracy. Thus the time of departure of the photon from the box and the energy loss of the box can both be measured with arbitrary accuracy.

Bohr's answer to the flaw in Einstein's argument goes like this. Let us assume that the box is hung on a spring balance. The loss of energy $\Delta E$ is equivalent to a loss of mass $\Delta m = \Delta E/c^2$. This gives an extra force $\Delta m\, g$ from the spring when the photon escapes. This force provides an impulse (change in momentum) $\Delta p = T\, \Delta m\, g$ in a time $T$ while the pointer of the balance moves by a distance $x$. We are supposed to find the loss in energy by noting the pointer position. That measurement will involve an indeterminacy $\Delta x$, which is restricted by the uncertainty relation $\Delta x\, \Delta p \ge h$. We are not measuring $p$, but $T$. In a gravitational field, clocks at two places $\Delta x$ apart run at rates differing by a redshift factor equal to the potential difference $g\, \Delta x$ divided by $c^2$ [3]. Thus,

$$\frac{\Delta T}{T} = \frac{g\,\Delta x}{c^2}, \qquad\text{or}\qquad T g = \frac{\Delta T\, c^2}{\Delta x}. \tag{10.1}$$
So far so good. Then Bohr says $\Delta p$ "must obviously be smaller than the total impulse":
$$\frac{h}{\Delta x} \le \Delta p < p = T\, \Delta m\, g, \tag{10.2}$$
which, on substituting the value of $Tg$ from (10.1), implies
$$h < \Delta T\, (\Delta m\, c^2) = \Delta T\, \Delta E. \tag{10.3}$$

10.2 Mandelstam-Tamm Relation

10.2.1 A Characteristic Time for a Time-Dependent Observable

The relation of Mandelstam and Tamm [4] is formulated in the Heisenberg picture. Given a state $\psi$, the ratio of the spread $\Delta_\psi A(t)$ of an observable $A(t)$ to the expectation value of its rate of change defines a time characteristic of the observable. The product of this time and the energy spread in $\psi$ satisfies an uncertainty relation.

Recall the standard uncertainty relation between two observables $A$ and $B$. If $[A, B] = iC$, then for any state $\psi$
$$(\Delta_\psi A)(\Delta_\psi B) \ge \frac{1}{2}\,|\langle C\rangle_\psi|.$$
(A proof is given in Chap. 14 of this book.) We use the notation $\langle A\rangle_\psi = (\psi, A\psi)$ and $(\Delta_\psi A)^2 = \langle (A - \langle A\rangle_\psi)^2\rangle_\psi$ for any $A$.


In the Heisenberg picture, let $A(t)$ be an observable; then
$$i\hbar\, \frac{dA}{dt} = [A(t), H].$$
The uncertainty relation between $A(t)$ and the Hamiltonian $H$ for the Heisenberg-picture state $\psi$ is
$$\Delta_\psi A(t)\; \Delta_\psi H \ge \frac{1}{2}\,|\langle [A(t), H]\rangle_\psi| = \frac{\hbar}{2}\,|\langle dA/dt\rangle_\psi|.$$
Therefore, if we define a characteristic time for the observable $A(t)$ in the state $\psi$ as
$$\tau_\psi(A(t)) = \frac{\Delta_\psi A(t)}{|\langle dA/dt\rangle_\psi|}, \tag{10.4}$$
then
$$\tau_\psi(A(t))\; \Delta_\psi H \ge \frac{\hbar}{2}. \tag{10.5}$$

Example: A Free Particle For a free particle, $H = p^2/2m$, and if we choose $A = x$, then $dx/dt = p/m$ is independent of time because $p$ commutes with $H$. In this case $\Delta_\psi x(t)$ is a measure of the size of the wave packet represented by the state vector $\psi$ at time $t$. The characteristic time $\tau_\psi(x(t))$ for the position observable $x(t)$ in a state $\psi$ is approximately the time the wave packet takes to move by its own width $\Delta_\psi x(t)$, as $\langle p\rangle/m$ is the average velocity. The wave packet will spread in general, and the inequality above will become worse.

Exercise 10.1 Calculate the above inequality as a function of time for $H = p^2/2m$, $\psi$ a Gaussian wave packet, and $A = x(t)$.

10.2.2 Half-Life of a State

Continuing the discussion of the Mandelstam-Tamm relation in the Heisenberg picture, let $|\alpha, 0\rangle$ be an eigenstate with eigenvalue $\alpha$, $A(0)|\alpha, 0\rangle = \alpha|\alpha, 0\rangle$, of the time-dependent observable
$$A(t) = \exp(iHt/\hbar)\, A(0)\, \exp(-iHt/\hbar).$$
Then,
$$|\alpha, t\rangle = \exp(iHt/\hbar)\,|\alpha, 0\rangle$$


is the eigenstate of $A(t)$ with the same eigenvalue:
$$A(t)|\alpha, t\rangle = A(t)\exp(iHt/\hbar)|\alpha, 0\rangle = \alpha|\alpha, t\rangle.$$
Let $P(t) = |\alpha, t\rangle\langle\alpha, t|$ be the projection operator onto the state $|\alpha, t\rangle$. Then, for the (time-independent) state $\psi$ of the system,
$$\langle P(t)\rangle_\psi = \langle\psi|\alpha, t\rangle\langle\alpha, t|\psi\rangle = |\langle\psi|\alpha, t\rangle|^2 \equiv p_\alpha$$
is the transition probability from $\psi$ to $|\alpha, 0\rangle$. The probability $p_\alpha$ is time-dependent, $p_\alpha = p_\alpha(t)$, but we leave that understood to simplify the notation. As $P(t)^2 = P(t)$,
$$[\Delta_\psi P(t)]^2 = \langle P(t)^2\rangle_\psi - \langle P(t)\rangle_\psi^2 = p_\alpha(1 - p_\alpha).$$
Therefore, the characteristic time for the projection operator $P(t)$ from (10.4) is
$$\tau_\psi = \frac{\sqrt{p_\alpha(1 - p_\alpha)}}{|dp_\alpha/dt|}.$$
The Mandelstam-Tamm inequality can be written as
$$\frac{1}{\tau_\psi} = \frac{1}{\sqrt{p_\alpha(1 - p_\alpha)}}\left|\frac{dp_\alpha}{dt}\right| \le \frac{2}{\hbar}\, \Delta_\psi H. \tag{10.6}$$
We take the special case when the state vector happens to be the eigenstate of $A(0)$: $\psi = |\alpha, 0\rangle$ and $p_\alpha(0) = 1$. From its maximum value, the probability $p_\alpha$ decreases as $t$ increases. Therefore, we can use the inequality above as
$$-\frac{1}{\sqrt{p_\alpha(1 - p_\alpha)}}\, \frac{dp_\alpha}{dt} \le \frac{2}{\hbar}\, \Delta_{|\alpha,0\rangle} H.$$
As $\Delta_{|\alpha,0\rangle} H$ is independent of time, we can integrate from 0 to some $t > 0$ by choosing $p_\alpha(t) = \cos^2\theta(t)$, so that
$$\theta(t) - \theta(0) \le \frac{t}{\hbar}\, \Delta_{|\alpha,0\rangle} H.$$
Or, for $p_\alpha(0) = 1$, $\theta(0) = 0$, so that
$$\theta(t) = \cos^{-1}\!\big(\sqrt{p_\alpha(t)}\big) \le \frac{t}{\hbar}\, \Delta_{|\alpha,0\rangle} H.$$


The probability $p_\alpha(t)$ for $\psi$ to "retain $\alpha$" (that is, to have a non-zero probability for making a transition to $|\alpha, t\rangle$) reaches zero when $\theta(t) = \pi/2$:
$$t\; \Delta_{|\alpha,0\rangle} H \ge \frac{\pi}{2}\,\hbar. \tag{10.7}$$
The time $t$ can be interpreted as the "total survival time" of the eigenvalue $\alpha$. If we are interested in the "half-life," that is, the time by which the probability to have eigenvalue $\alpha$ reduces from 1 to $1/2$, then
$$t_{1/2}\; \Delta_{|\alpha,0\rangle} H \ge \frac{\pi}{4}\,\hbar. \tag{10.8}$$

10.3 Krylov-Fock Survival Probability

N. S. Krylov and V. A. Fock [5] discuss the law of decay of a "quasi-stationary" state. The transition probability (also called the survival probability) for a state to remain the same as it was at the beginning can be calculated as a function of time.

Let $|\psi(0)\rangle$ be the state in the Schrodinger picture at time $t = 0$. We can expand it in a basis in which the energy $E$ is diagonal along with other commuting variables $\beta$ needed to form a complete set:
$$|\psi(0)\rangle = \sum_\beta \int dE\; a(E, \beta)\,|E, \beta\rangle, \qquad \langle E', \beta'|E, \beta\rangle = \delta(E' - E)\,\delta_{\beta'\beta}.$$
In general, there may also be isolated, discrete energy levels in the basis, but we assume that the initial vector $|\psi(0)\rangle$ does not depend on them, and depends only on the continuous part of the spectrum. At time $t$ the state becomes
$$|\psi(t)\rangle = \sum_\beta \int dE\; a(E, \beta)\, e^{-iEt/\hbar}\,|E, \beta\rangle, \tag{10.9}$$

and the transition amplitude for it to be in the initial state is
$$A(t) = \langle\psi(0)|\psi(t)\rangle = \int dE\, \Big(\sum_\beta |a(E, \beta)|^2\Big)\, e^{-iEt/\hbar}
\equiv \int dE\; \rho(E)\, e^{-iEt/\hbar}, \tag{10.10}$$
where $\rho(E)$ is the positive density of states for $\psi(0)$; it is absolutely integrable, with $\int dE\, \rho(E) = 1$. The transition probability is $P(t) = |A(t)|^2$.


For a state to decay, that is, for its survival probability to vanish, $A(t)$ should approach 0 as $t \to \infty$ in (10.10). That is guaranteed by the Riemann-Lebesgue lemma. For a "quasi-stationary" state, with a reasonably sharp density of states concentrated about $E_0$, like
$$\rho(E) = \frac{1}{\pi}\, \frac{\Gamma}{(E - E_0)^2 + \Gamma^2},$$
the integral in (10.10) can be extended from $-\infty$ to $+\infty$ by neglecting $\rho(E)$ beyond the region where it is significant. Then it can be evaluated in the usual way as a contour integral in the lower half plane, collecting the residue of the pole at $E_0 - i\Gamma$, so that
$$A(t) = e^{-iE_0 t/\hbar}\, e^{-\Gamma t/\hbar}, \tag{10.11}$$
with the probability showing an exponential decay,
$$P(t) = e^{-2\Gamma t/\hbar}. \tag{10.12}$$
This is just the well-known relation between the time $t$ by which the probability falls off to $1/e$ from 1 and the energy width $\Gamma$ of the initial state:
$$t\,\Gamma = \frac{\hbar}{2}. \tag{10.13}$$
Of course, that is an approximation, and the exponential law does not hold exactly in actual cases. Immediately after $t = 0$ and also toward $t \to \infty$, $P(t)$ obeys laws different from exponential decay. It is only in the middle range that the exponential is a good approximation.
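The near-exponential decay in the middle range is easy to see numerically. The sketch below (not from the book; NumPy, with $\hbar = 1$ and arbitrary illustrative values of $E_0$ and $\Gamma$) evaluates (10.10) for a Lorentzian $\rho(E)$ cut off at $E = 0$ and compares $|A(t)|^2$ with the exponential law (10.12):

```python
import numpy as np

# A(t) = ∫ rho(E) exp(-iEt/hbar) dE for a Lorentzian density of states,
# cut off at E = 0 because the energy is bounded below.  Units hbar = 1;
# E0 and Gamma are illustrative values, not taken from the text.
hbar, E0, Gamma = 1.0, 10.0, 0.2

E = np.linspace(0.0, 200.0, 400001)
dE = E[1] - E[0]
rho = (Gamma / np.pi) / ((E - E0) ** 2 + Gamma ** 2)
rho /= rho.sum() * dE                       # enforce ∫ rho dE = 1 on the grid

for t in [1.0, 3.0, 5.0, 10.0]:
    A = (rho * np.exp(-1j * E * t / hbar)).sum() * dE
    # |A|^2 is close to exp(-2*Gamma*t) in the middle range of times.
    print(t, abs(A) ** 2, np.exp(-2 * Gamma * t / hbar))
```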

10.4 Wigner's Uncertainty Relation

10.4.1 Definition of $\Delta t$ and $\Delta E$

To understand the idea behind Wigner's version of the time-energy uncertainty relation [6], we can use a single non-relativistic free particle in one dimension as an example. Let $|\psi(t)\rangle = \exp(-itH/\hbar)|\psi\rangle$ be a state of the particle in the Schrodinger picture, and $\langle x|\psi(t)\rangle$ its representative in the coordinate representation. We can think of a moving wave packet with $|\langle x|\psi(t)\rangle|^2\, dx$ as the probability of finding the particle between $x$ and $x + dx$ at time $t$.

How does $|\langle x_0|\psi(t)\rangle|^2$ change with time for some fixed value $x = x_0$? Suppose the particle is moving from left to right. In the remote past, when the particle is away from this fixed value $x_0$, the probability $|\langle x|\psi(t)\rangle|^2\, dx$ is almost


zero. It increases as the particle nears the point $x_0$ and again goes to zero after the particle has gone past the point. The maximum occurs around the time $t_0$ when the particle has just arrived near $x_0$ and crosses this point.

To avoid the mathematical inconvenience of carrying an infinitesimal $dx$ around, it is better to choose a state vector $|\phi\rangle$ whose wave function $\langle x|\phi\rangle$ is concentrated around the chosen point $x_0$. $\phi$ is a fixed vector representing the fixed point $x_0$. Define a quantity $\Delta t$ representing the spread in time for which the particle is near the fixed $x_0$ by
$$(\Delta t)^2 = \frac{\int_{-\infty}^{\infty} |\langle\phi|\psi(t)\rangle|^2\, (t - t_0)^2\, dt}{\int_{-\infty}^{\infty} |\langle\phi|\psi(t)\rangle|^2\, dt}. \tag{10.14}$$

Here $t_0$ is some fixed time representing the origin of time. It will drop out in the end because of time-translational invariance.

As $\langle\phi|\psi(t)\rangle$ is a time-dependent moving wave packet, there will be a spread in energy in $\psi(t)$. If $|E\rangle$ is the energy basis in the subspace of the particle with positive momentum (we are only dealing with the particle moving from left to right), with the normalization $\langle E'|E\rangle = \delta(E' - E)$, then we can write
$$\langle\phi|\psi(t)\rangle = \frac{1}{\sqrt{2\pi\hbar}} \int_0^{\infty} dE\; \chi(E)\, e^{-iEt/\hbar} \tag{10.15}$$
with its inverse
$$\chi(E) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} dt\; \langle\phi|\psi(t)\rangle\, e^{iEt/\hbar}. \tag{10.16}$$

The uncertainty in energy is now defined as ($E_0$ is the energy average)
$$(\Delta E)^2 = \frac{\int_0^{\infty} dE\; |\chi(E)|^2\, (E - E_0)^2}{\int_0^{\infty} dE\; |\chi(E)|^2}. \tag{10.17}$$
Note that $\Delta E$ defined here is not the energy spread of $|\psi(t)\rangle$, but that of $\langle\phi|\psi(t)\rangle$, the amplitude or overlap of the wave packet with the chosen state concentrated near $x_0$. In fact,
$$\langle\phi|\psi(t)\rangle = \langle\phi|\int_0^{\infty} dE\, |E\rangle\langle E|\psi(t)\rangle = \int_0^{\infty} dE\, \langle\phi|E\rangle\langle E|\psi\rangle\, e^{-iEt/\hbar},$$
so that
$$\chi(E) = \sqrt{2\pi\hbar}\; \langle\phi|E\rangle\langle E|\psi\rangle.$$


As $\phi$ and $\psi$ are normalized state vectors, $\chi(E)$ goes to zero as $E \to \infty$. Moreover, $\chi(0) = 0$, which follows from the fact that if $\chi(0)$ were not zero, $\langle\phi|\psi(t)\rangle$ would not go to zero faster than $\sim 1/t$ as $t \to \infty$.

10.4.2 Calculation of $\Delta t$

We will substitute $\langle\phi|\psi(t)\rangle$ from (10.15) into the expression (10.14). To do that we first note that
$$\int_{-\infty}^{\infty} dt\, |\langle\phi|\psi(t)\rangle|^2 = \int_0^{\infty} dE\, |\chi(E)|^2,$$
and moreover,
$$\int_{-\infty}^{\infty} dt\, |\langle\phi|\psi(t)\rangle|^2 (t - t_0)^2
= \int_{-\infty}^{\infty} dt\, \langle\phi|\psi(t)\rangle^* (t - t_0)\; \langle\phi|\psi(t)\rangle\, (t - t_0)$$
$$= \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt \int_0^{\infty} dE'\, \chi(E')^*\, e^{iE't/\hbar}\, (t - t_0) \int_0^{\infty} dE\, \chi(E)\, e^{-iEt/\hbar}\, (t - t_0).$$
We change the integration variable from $t$ to $t_1 = t - t_0$; the expression becomes
$$\frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} dt_1 \int_0^{\infty} dE'\, \chi(E')^*\, e^{iE't_0/\hbar}\, e^{iE't_1/\hbar}\, t_1 \int_0^{\infty} dE\, \chi(E)\, e^{-iEt_0/\hbar}\, e^{-iEt_1/\hbar}\, t_1.$$
Using
$$e^{iE't_1/\hbar}\, t_1 = -i\hbar\, \frac{\partial}{\partial E'}\, e^{iE't_1/\hbar}, \qquad\text{and}\qquad
e^{-iEt_1/\hbar}\, t_1 = i\hbar\, \frac{\partial}{\partial E}\, e^{-iEt_1/\hbar},$$
followed by integration by parts and integration over $t_1$, we get
$$\int_{-\infty}^{\infty} dt\, |\langle\phi|\psi(t)\rangle|^2 (t - t_0)^2
= \hbar^2 \int_0^{\infty} dE\; \frac{\partial}{\partial E}\big(\chi(E)^*\, e^{iEt_0/\hbar}\big)\; \frac{\partial}{\partial E}\big(\chi(E)\, e^{-iEt_0/\hbar}\big)
= \hbar^2 \int_0^{\infty} dE\, \left|\frac{\partial\chi_0}{\partial E}\right|^2,$$
where
$$\chi_0(E) = \chi(E)\, e^{-iEt_0/\hbar}.$$
Therefore, as $|\chi(E)| = |\chi_0(E)|$, and denoting the derivative with respect to $E$ by a prime,
$$(\Delta t)^2 = \hbar^2\, \frac{\int_0^{\infty} dE\, |\chi_0'(E)|^2}{\int_0^{\infty} dE\, |\chi_0(E)|^2}. \tag{10.18}$$
Let $\chi_0 = \alpha\, e^{i\beta}$; then
$$|\chi_0'|^2 = |\alpha' + i\alpha\beta'|^2 = (\alpha')^2 + \alpha^2 (\beta')^2,$$
and
$$(\Delta t)^2 = \hbar^2\, \frac{\int_0^{\infty} dE\, [\alpha'(E)^2 + \alpha^2 \beta'(E)^2]}{\int_0^{\infty} dE\, |\alpha(E)|^2},$$
which can be reduced by omitting the $\beta'^2$ term. This does not disturb $(\Delta E)^2$, because that depends on $|\chi|^2 = |\chi_0|^2 = \alpha^2$ already.

10.4.3 The Product $(\Delta t)^2 (\Delta E)^2$

To summarize, we need a lower bound on the product
$$(\Delta t)^2 (\Delta E)^2 = \hbar^2 \left(\frac{\int_0^{\infty} dE\, \alpha'^2}{\int_0^{\infty} dE\, \alpha^2}\right) \left(\frac{\int_0^{\infty} dE\, \alpha^2 (E - E_0)^2}{\int_0^{\infty} dE\, \alpha^2}\right), \tag{10.19}$$
which is a functional of a single real function $\alpha(E)$. We note that the dependence on the time $t_0$ has dropped out, which is a consequence of time-translation invariance. If we regard $\alpha(E)$ as a wave function square integrable on the half-line $E \in (0, \infty)$, then the uncertainty product looks analogous to the usual momentum-position product, with $\alpha$ playing the role of position and $\alpha'$ that of momentum, except that the coordinate $E$ is limited to the half-line because the energy has a lower bound. In particular, we must have $\alpha(0) = 0$.

At this stage Wigner uses the variational principle to obtain an Euler-Lagrange equation for minimizing $(\Delta t)^2 (\Delta E)^2$. We cannot go into that involved calculation, but the reader can follow it from the reference to Wigner's article. We just note that we can use some typical functions for $\alpha(E)$ to convince ourselves that the product is of the order of $\hbar$. For example, take an $\alpha(E)$ which is like a triangular bump located about $E_0$, as shown in Fig. 10.1.

Fig. 10.1 Example of a Wigner $\alpha(E)$ function

The bump has width $2a$ ($a < E_0$) and height $b$, and the function $\alpha(E)$ is
$$\alpha(E) = b\,(1 - |E - E_0|/a) \qquad\text{for}\ |E - E_0| \le a,$$
and zero elsewhere. Also, $\alpha'(E)$ is $b/a$ for $E_0 - a < E < E_0$ and $-b/a$ for $E_0 < E < E_0 + a$. Then,
$$\int \alpha(E)^2\, dE = \frac{2}{3}\, b^2 a, \qquad
\int \alpha(E)^2 (E - E_0)^2\, dE = \frac{1}{15}\, b^2 a^3, \qquad
\int \alpha'(E)^2\, dE = \frac{2b^2}{a},$$
leading to
$$\Delta t\, \Delta E = \hbar\, \sqrt{\frac{3}{10}}.$$

It is interesting that the width (or height) of the bump does not determine the lower bound.

Exercise 10.2 Try some functions $\alpha(E)$ which are square integrable on $(0, \infty)$ and vanish at $E = 0$, preferably peaked around $E_0$, and check numerically how small one can make $\Delta t\, \Delta E$.
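A symbolic check of the triangular-bump numbers quoted above (not from the book; SymPy, with the bump parametrized as in Fig. 10.1 and the right half integrated, the full integrals being twice that by symmetry):

```python
import sympy as sp

u, a, b, hbar = sp.symbols('u a b hbar', positive=True)

# Right half of the bump: alpha = b (1 - u/a) for 0 <= u <= a, where u = E - E0.
alpha = b * (1 - u / a)

norm   = 2 * sp.integrate(alpha**2, (u, 0, a))              # 2 b^2 a / 3
spread = 2 * sp.integrate(alpha**2 * u**2, (u, 0, a))       # b^2 a^3 / 15
deriv  = 2 * sp.integrate(sp.diff(alpha, u)**2, (u, 0, a))  # 2 b^2 / a

dt2 = hbar**2 * deriv / norm      # (Delta t)^2, as in (10.19)
dE2 = spread / norm               # (Delta E)^2
print(norm, spread, deriv)
print(sp.simplify(sp.sqrt(dt2 * dE2)))   # hbar*sqrt(30)/10, i.e. hbar*sqrt(3/10)
```

The same few lines, with a different choice of `alpha`, give one concrete way to experiment with Exercise 10.2.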

10.5

Notes

Limitations of space have prevented us from a whole range of topics related to the question of time in quantum mechanics. Fortunately there are two recent volumes [7] covering all aspects of the this subject. The reader is particularly encouraged to read the introduction by Muga, Mayato, and Egusquiza in Vol. 1. For deviations from exponential decay law mentioned toward the end of Section 10.3, see recent papers [8] where earlier references can be found.

154

10 Time-Energy Uncertainty Relation

References 1. A mathematically rigorous modern proof of Pauli’s theorem, apart from many other related issues can be found in M. D. Srinivas and R. Vijayalakshmi, Time of ‘occurence’ in quantum mechanics. Pramana 16, 173 (1981) 2. Albert Einstein—Philosopher Scientist. The Library of Living Philosophers, vol. VII (P. A. Schilpp, New York 1969). Contribution by Niels Bohr 3. This is a well known result. See for example, section 1.3 in P. Sharan, Spacetime, Geometry and Gravitation (Birkhauser, Basel, 2009) 4. L. Mandelstam, I.G. Tamm, J. Phys. USSR, 9, 249 (1945) 5. V. Fock, N. Krylov, Uncertainty relation between time and energy. J. Phys. USSR 11, 112 (1947) 6. E.P. Wigner in Aspects of Quantum Theory, ed. by A. Salam, E.P. Wigner (Cambridge University Press, Cambridge, 1972) 7. G. Muga, R.S. Mayato, I. Egusquiza, Time in Quantum Mechanics—Vol. 1, 2nd edn. Lecture Notes in Physics, vol. 734 (Springer, Berlin 2008). G. Muga, A. Rauschhaupt, A. del Campo, Time in Quantum Mechanics—vol. 2. Lecture Notes in Physics, vol. 789 (Springer, Berlin, 2009) 8. M. Mazhiashvily, Phys. Rev. D,102, 076006 (2020). Jimenez, Kelkar, arxiv:1809.10673v2 (2019). 2108.10957v1 (2021)

Fock Spaces

11

Abstract

A concise, but complete introduction to Bosonic and Fermionic Fock spaces is given in the most general setting. The general, basis-independent picture is quite simple, and the choice of bases can be made more easily from the general formulas.

11.1

Introduction

Starting from the days of Planck and Einstein, quantum theory has always been the theory of emission or absorption (creation or annihilation) of quanta. Even the theory of a single non-relativistic particle in the presence of a potential .V (x) is actually a many-particle theory of interaction with a scalar field .V (x) restricted to a one-particle subspace. The Hilbert space of n identical particles (or n physical systems) is the product space .Hn = H ⊗ · · · ⊗ H (n factors), where .H is the space for one particle. By including a one-dimensional Hilbert space .H0 = {0 } spanned by a unit vector .0 , called the vacuum or the ground state, we define the general Fock space, .F, as an infinite direct sum of the sequence of spaces F = H0 ⊕ H ⊕ H2 ⊕ · · · ⊕ Hn ⊕ . . . .

.

(11.1)

Anti-linear mappings called annihilation operators .Af for each .f ∈ h map .0 to zero and any .Hn to .Hn−1 . Their adjoints .A†f map .0 to .f ∈ H and generally .Hn into .Hn+1 are called creation operators. Physically occurring states of identical particles exist in either the symmetric subspaces .Hsn ⊂ Hn for bosons or in the antisymmetric subspaces .Han ⊂ Hn for fermions. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Sharan, Some Unusual Topics in Quantum Mechanics, Lecture Notes in Physics 1020, https://doi.org/10.1007/978-3-031-35962-0_11

155

156

11 Fock Spaces

Restriction of .Af to these subspaces provides annihilation operators .af for bosons, and .bf for fermions, as well as their adjoints, the creation operators .af† , and .bf† . Commutation (and anti-commutation) properties of these operators are derived and discussed in this chapter. The algebra of creation-annihilation operators is fundamental to quantum many-body theory and quantum field theory.

11.2

Space for Many Particles

Let H be the Hilbert space of a particle (or a physical system). For simplicity we will use the term "particle" for such a system. The physically realizable states of two such particles lie in the product space H^2 = H ⊗ H, the space constructed from linear combinations of tensor products f ⊗ g of two vectors f, g ∈ H. Note that f ⊗ g and g ⊗ f are two distinct vectors. The tensor product has the following properties. Let f, g, h ∈ H be any vectors and a a complex number:

    (f + g) ⊗ h = f ⊗ h + g ⊗ h,        f ⊗ (g + h) = f ⊗ g + f ⊗ h,
    (af) ⊗ g = a(f ⊗ g),                f ⊗ (ag) = a(f ⊗ g).

A general vector in H^2 will not be a decomposable product like f ⊗ g, but rather a linear combination of such vectors. Vectors in H^2 which cannot be written as a tensor product of two vectors are called entangled. For a rigorous definition of the tensor product, see Sect. 12.9.1.

The inner product in H^2 is defined for decomposable vectors simply as the product of the inner products of the respective factors,

    (f1 ⊗ g1, f2 ⊗ g2) = (f1, f2)(g1, g2).

On general vectors the inner product is then calculated using linearity. For example,

    (a f1 ⊗ g1 + b f2 ⊗ g2, c f3 ⊗ g3 + d f4 ⊗ g4) = a*c (f1 ⊗ g1, f3 ⊗ g3) + a*d (f1 ⊗ g1, f4 ⊗ g4)
        + b*c (f2 ⊗ g2, f3 ⊗ g3) + b*d (f2 ⊗ g2, f4 ⊗ g4).

For n particles there is the tensor product space

    H^n = H ⊗ H ⊗ · · · ⊗ H        (n factors),

whose inner product is defined just as for H^2, as a product of the inner products of the individual factors:

    (f1 ⊗ · · · ⊗ fn, g1 ⊗ · · · ⊗ gn) = (f1, g1) · · · (fn, gn).
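These properties are easy to try out numerically. The following toy Python sketch assumes H ≅ C^d (d = 3 is an arbitrary choice) and represents f ⊗ g by the Kronecker product of coordinate vectors; it is only an illustration of the definitions above, not part of the formal development.

    import numpy as np

    d = 3                                  # toy dimension for H ≅ C^d
    rng = np.random.default_rng(0)

    def rvec():
        # a random complex vector standing in for an element of H
        return rng.normal(size=d) + 1j * rng.normal(size=d)

    f1, g1, f2, g2 = rvec(), rvec(), rvec(), rvec()

    # decomposable vectors f ⊗ g, represented by Kronecker products
    v = np.kron(f1, g1)
    w = np.kron(f2, g2)

    # inner product, conjugate-linear in the first argument
    ip = np.vdot

    # (f1 ⊗ g1, f2 ⊗ g2) = (f1, f2)(g1, g2)
    assert np.isclose(ip(v, w), ip(f1, f2) * ip(g1, g2))

    # a generic sum of two products is entangled: its d x d coefficient
    # matrix has rank 2, so it cannot be a single tensor product
    psi = v + w
    assert np.linalg.matrix_rank(psi.reshape(d, d)) == 2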


The vectors f ⊗ g and g ⊗ f are in general unrelated, but if the particles are identical then f ⊗ g and g ⊗ f represent the same physical state, and they are related by a symmetry transformation which interchanges the states of the two particles,

    f ⊗ g = U_12 (g ⊗ f).

By Wigner's theorem, U_12 should be a unitary operator.

Exercise 11.1 Define a linear operator U(P) for a permutation P on decomposable vectors in H^n as

    U(P)(f1 ⊗ · · · ⊗ fn) = f_P(1) ⊗ f_P(2) ⊗ · · · ⊗ f_P(n).

Show that (1) U(P) is unitary, and (2) U(P1)U(P2) = U(P1 P2). Thus the U(P) form a representation of the permutation group.

More generally, the states of n identical particles are related by unitary transformations U(P) corresponding to each of the n! permutations of the n particles. Permutations form a group because a permutation of n objects followed by another permutation is again a permutation. From the general principles of quantum mechanics it follows that permutations of identical particles should be realized as a unitary representation of this group. In fact:

    Only the simplest one-dimensional irreducible representations of the permutation group are ever realized for an assembly of identical particles.

And we should be thankful to nature for that! There are two possibilities for these one-dimensional representations. There are states on which the permutation operators U(P) have no effect: these are the symmetric states. And there are those which change sign according to whether the permutation is even or odd: the antisymmetric states. These two types correspond to the two classes of particles: bosons for symmetric and fermions for antisymmetric states.
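A quick numerical check of Exercise 11.1 is possible in the same toy setting, assuming H ≅ C^d: an element of H^n is then an n-index array and U(P) is just a permutation of its axes. This is only a sketch; note that with this index-based definition the representation property holds with the composition convention (P1 P2)(k) = P2(P1(k)).

    import numpy as np

    d, n = 2, 3
    rng = np.random.default_rng(1)

    def U(P, T):
        # U(P)(f1 ⊗ ... ⊗ fn) = f_{P(1)} ⊗ ... ⊗ f_{P(n)}, extended linearly;
        # on an n-index array this is a permutation of the axes (P is 0-based)
        return np.transpose(T, axes=P)

    T1 = rng.normal(size=(d,) * n) + 1j * rng.normal(size=(d,) * n)
    T2 = rng.normal(size=(d,) * n) + 1j * rng.normal(size=(d,) * n)

    P1, P2 = (1, 2, 0), (0, 2, 1)

    # (1) U(P) is unitary: inner products are preserved
    assert np.isclose(np.vdot(U(P1, T1), U(P1, T2)), np.vdot(T1, T2))

    # (2) U(P1)U(P2) = U(P1 P2), with (P1 P2)(k) = P2(P1(k))
    P12 = tuple(P2[P1[k]] for k in range(n))
    assert np.allclose(U(P1, U(P2, T1)), U(P12, T1))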

11.3 Fock Space for Bosons

11.3.1 Fock Space F_s

The states of two identical bosons lie in the symmetric subspace H_s^2 = (H ⊗ H)_s ⊂ H ⊗ H. This is the subspace on which the operator U_12 (which exchanges the two factors of a decomposable vector) acts as the identity. Similarly, linear combinations of vectors in the higher tensor products H^n = H ⊗ · · · ⊗ H (n factors) can be constructed which are symmetric under any permutation of the factors.


Introduce a special one-dimensional Hilbert space

    H_0 = {cΩ_0 | c a complex number}

spanned by a unit vector Ω_0,

    (Ω_0, Ω_0) = 1,

which represents the vacuum or zero-particle state. The bosonic Fock space F_s for H is the Hilbert space constructed as the direct sum of (1) the one-dimensional Hilbert space H_0 spanned by the unit vector Ω_0, (2) the one-particle space H (or H^1), and (3) all the symmetric subspaces (H ⊗ · · · ⊗ H)_s of tensor products of H:

    F_s = H_0 ⊕ H ⊕ H_s^2 ⊕ · · · ⊕ H_s^n ⊕ · · · .        (11.2)

In particular, vectors belonging to spaces with different numbers of particles are orthogonal. A vector F ∈ F_s is thus a sequence of vectors

    F = (F_0 = cΩ_0, F_1 = f, F_2, . . . , F_n, . . . ),

where F_0 = cΩ_0 is a multiple of Ω_0, f ∈ H, F_2 ∈ H_s^2, and so on. The inner product in F_s between F = (cΩ_0, f, F_2, . . . ) and G = (dΩ_0, g, G_2, . . . ) is defined as

    (F, G) = c*d + (f, g) + (F_2, G_2) + · · · ,        (11.3)

which is simply the sum of the inner products in the subspaces with the same number of particles. The inner product in the symmetric subspace H_s^n is just the inner product inherited from the larger space H^n of which it is a subset.

11.3.1.1 Symmetrizer

A general vector of H_s^n can be obtained by symmetrizing a vector of H^n. Let P be a permutation, that is, a one-to-one mapping of the numbers 1, 2, . . . , n among themselves. Then on decomposable vectors define S_(n) as

    S_(n)(f1 ⊗ f2 ⊗ · · · ⊗ fn) = (1/n!) Σ_P (f_P(1) ⊗ f_P(2) ⊗ · · · ⊗ f_P(n)),        (11.4)

where the sum is over all the n! permutation mappings P. As all other vectors in H^n are linear combinations of decomposable vectors, the definition extends to the whole of H^n by linearity.


S_(n) is clearly self-adjoint, as we can see directly. Note that

    (f1 ⊗ · · · ⊗ fn, S_(n) g1 ⊗ · · · ⊗ gn) = (1/n!) Σ_P (f1 ⊗ · · · ⊗ fn, g_P(1) ⊗ · · · ⊗ g_P(n))
                                            = (1/n!) Σ_P (f1, g_P(1)) · · · (fn, g_P(n)).

Now, if P^{-1} is the inverse mapping or permutation, then the product above can be rearranged as

    (f_{P^{-1}(1)}, g1) · · · (f_{P^{-1}(n)}, gn),

and therefore the sum over all P can be converted into a sum over all P^{-1} (because they are the same set):

    (f1 ⊗ · · · ⊗ fn, S_(n) g1 ⊗ · · · ⊗ gn) = (1/n!) Σ_{P^{-1}} (f_{P^{-1}(1)}, g1) · · · (f_{P^{-1}(n)}, gn)
                                            = (S_(n) f1 ⊗ · · · ⊗ fn, g1 ⊗ · · · ⊗ gn).

Therefore

    (f1 ⊗ · · · ⊗ fn, S_(n) g1 ⊗ · · · ⊗ gn) = (S_(n) f1 ⊗ · · · ⊗ fn, g1 ⊗ · · · ⊗ gn),        (11.5)

showing the self-adjoint nature of S_(n).

S_(n) is moreover a projection operator. It maps H^n onto H_s^n, and an application of S_(n) to an already symmetric vector does not change that vector, that is,

    S_(n)^2 = S_(n).        (11.6)

Thus we can write

    F_s^n = S_(n) F^n.        (11.7)

Exercise 11.2 Show that S_(n) is self-adjoint.
Solution: As an alternative to the direct proof above, we can write S_(n) = (1/n!) Σ_P U(P) and use the fact that the U(P) are unitary and that, as a group representation, both U(P) and U(P^{-1}) = U(P)^{-1} are present in the sum. Then U(P) + U(P^{-1}) = U(P) + U(P)† if P and P^{-1} are distinct. If P^{-1} = P, then U(P) is already self-adjoint (in addition to being unitary).
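The projection property can also be verified numerically. The following Python sketch (a toy model assuming H ≅ C^d, with small arbitrary values of d and n) builds the matrices U(P) on (C^d)^{⊗n}, forms S_(n) = (1/n!) Σ_P U(P) as in the solution above, and checks Eqs. (11.5) and (11.6).

    import numpy as np
    from itertools import permutations
    from math import factorial

    d, n = 2, 3
    dim = d ** n

    def U_matrix(P):
        # matrix of U(P): it sends the basis vector e_{i1} ⊗ ... ⊗ e_{in}
        # to e_{i_P(1)} ⊗ ... ⊗ e_{i_P(n)}  (P is 0-based)
        M = np.zeros((dim, dim))
        for idx in np.ndindex(*(d,) * n):
            col = np.ravel_multi_index(idx, (d,) * n)
            row = np.ravel_multi_index(tuple(idx[P[k]] for k in range(n)), (d,) * n)
            M[row, col] = 1.0
        return M

    # S_(n) = (1/n!) Σ_P U(P)
    S = sum(U_matrix(P) for P in permutations(range(n))) / factorial(n)

    assert np.allclose(S, S.conj().T)   # self-adjoint, Eq. (11.5)
    assert np.allclose(S @ S, S)        # projection,  Eq. (11.6)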


11.3.1.2 Annihilation Operator on the General Fock Space F^n

Let f ∈ H be a vector representing a one-particle state. The annihilation operator A_f corresponding to f is defined on Ω_0, on H, and on (un-symmetrized) tensor products as follows: for any g, f1, . . . , fn ∈ H,

    A_f Ω_0 = 0,        (11.8)
    A_f g = (f, g) Ω_0,        (11.9)
    A_f (f1 ⊗ · · · ⊗ fn) = √n (f, f1) (f2 ⊗ · · · ⊗ fn).        (11.10)

Thus A_f maps an n-particle state into an (n − 1)-particle state. The factor √n is standard and is included to simplify formulas later on. The operator A_f is anti-linear in f, as follows directly from the definition: for any f, g ∈ H and any complex number c,

    A_{f+g} = A_f + A_g,        A_{cf} = c* A_f.        (11.11)
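In the finite-dimensional toy model the annihilation operator is just a contraction of conj(f) against the first tensor factor. The sketch below (assuming H ≅ C^d with arbitrary small d and n) checks Eq. (11.10) on a decomposable vector and the anti-linearity (11.11).

    import numpy as np
    from math import sqrt

    d, n = 3, 3
    rng = np.random.default_rng(4)

    def rvec():
        return rng.normal(size=d) + 1j * rng.normal(size=d)

    def tensor(*vs):
        out = vs[0]
        for v in vs[1:]:
            out = np.kron(out, v)
        return out

    def A(h, T, n):
        # A_h on an n-particle vector: contract conj(h) against the first factor
        return sqrt(n) * (h.conj() @ T.reshape(d, d ** (n - 1)))

    f, f1, f2, f3 = rvec(), rvec(), rvec(), rvec()

    # Eq. (11.10) on a decomposable vector
    lhs = A(f, tensor(f1, f2, f3), n)
    assert np.allclose(lhs, sqrt(n) * np.vdot(f, f1) * tensor(f2, f3))

    # anti-linearity in f, Eq. (11.11): A_{cf} = c* A_f
    c = 2.0 - 1.0j
    assert np.allclose(A(c * f, tensor(f1, f2, f3), n), np.conj(c) * lhs)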

11.3.1.3 Annihilation Operator on F_s^n

The operator A_f has the nice property that it maps a symmetric n-particle state into a symmetric (n − 1)-particle state. For example,

    S_(3)(g1 ⊗ g2 ⊗ g3) = (1/3!) (g1 ⊗ g2 ⊗ g3 + g1 ⊗ g3 ⊗ g2 + g2 ⊗ g3 ⊗ g1
                                   + g2 ⊗ g1 ⊗ g3 + g3 ⊗ g1 ⊗ g2 + g3 ⊗ g2 ⊗ g1),

so that

    A_f S_(3)(g1 ⊗ g2 ⊗ g3) = (1/3!) [√3 (f, g1)(g2 ⊗ g3 + g3 ⊗ g2)
                               + √3 (f, g2)(g3 ⊗ g1 + g1 ⊗ g3) + √3 (f, g3)(g1 ⊗ g2 + g2 ⊗ g1)]
                            = (1/√3) [(f, g1) S_(2)(g2 ⊗ g3) + (f, g2) S_(2)(g1 ⊗ g3) + (f, g3) S_(2)(g1 ⊗ g2)].

This allows us to define A_f restricted to vectors of H_s^n (for each n) as a separate operator a_f acting within the symmetric Fock space F_s:

    a_f ≡ A_f |_{F_s},        (11.12)
    a_f Ω_0 = 0,        (11.13)
    a_f g = (f, g) Ω_0,        (11.14)


    a_f S_(n)(g1 ⊗ · · · ⊗ gn) = (1/√n) Σ_{i=1}^{n} (f, gi) S_(n−1)(g1 ⊗ · · · ⊗ g/i ⊗ · · · ⊗ gn),        (11.15)

where the symbol g/i indicates the absence of the factor gi in the product. The operator a_f is anti-linear like A_f: for any f, g ∈ H and any complex number c,

    a_{f+g} = a_f + a_g,        a_{cf} = c* a_f.        (11.16)

11.3.1.4 Creation Operator on F^n

Creation operators do the opposite: they map an (n − 1)-particle state into an n-particle state. Define, for any f ∈ H, the creation operator A_f† on decomposable states by

    A_f† Ω_0 = f,        (11.17)
    A_f† (g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}) = √n (f ⊗ g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}).        (11.18)

As the notation suggests, the operator A_f† is the adjoint of A_f, because

    (g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}, A_f (f1 ⊗ f2 ⊗ · · · ⊗ fn)) = √n (f, f1)(g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}, f2 ⊗ · · · ⊗ fn)
                                                           = √n (f, f1)(g1, f2) · · · (g_{n−1}, fn).

On the other hand,

    (A_f† g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}, f1 ⊗ f2 ⊗ · · · ⊗ fn) = √n (f ⊗ g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}, f1 ⊗ f2 ⊗ · · · ⊗ fn)
                                                           = √n (f, f1)(g1, f2) · · · (g_{n−1}, fn),

so that

    (g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}, A_f (f1 ⊗ f2 ⊗ · · · ⊗ fn)) = (A_f† g1 ⊗ g2 ⊗ · · · ⊗ g_{n−1}, f1 ⊗ f2 ⊗ · · · ⊗ fn),


proving that A_f† is the adjoint of A_f. Note that the operator A_f† is linear: for any f, g ∈ H and any complex number c,

    A†_{f+g} = A†_f + A†_g,        A†_{cf} = c A†_f.        (11.19)
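In the toy model, A_f of Eq. (11.10) is the matrix √n (f† ⊗ 1), and its conjugate transpose acts exactly as the creation operator of Eqs. (11.17) and (11.18). The lines below are only a numerical sketch of this adjoint pair, with arbitrary small dimensions.

    import numpy as np
    from math import sqrt

    d, n = 3, 2
    rng = np.random.default_rng(5)
    f = rng.normal(size=d) + 1j * rng.normal(size=d)

    # matrix of A_f : H^n -> H^{n-1}, built from Eq. (11.10)
    A_mat = sqrt(n) * np.kron(f.conj()[None, :], np.eye(d ** (n - 1)))

    # its conjugate transpose realizes Eq. (11.18): A_f† x = √n f ⊗ x
    x = rng.normal(size=d ** (n - 1)) + 1j * rng.normal(size=d ** (n - 1))
    assert np.allclose(A_mat.conj().T @ x, sqrt(n) * np.kron(f, x))

    # the defining adjoint relation (g, A_f y) = (A_f† g, y)
    y = rng.normal(size=d ** n) + 1j * rng.normal(size=d ** n)
    g = rng.normal(size=d ** (n - 1)) + 1j * rng.normal(size=d ** (n - 1))
    assert np.isclose(np.vdot(g, A_mat @ y), np.vdot(A_mat.conj().T @ g, y))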

11.3.1.5 Creation Operator on F_s^n

Whereas the annihilation operator maps symmetric vectors to symmetric vectors, the creation operator does not. For example, in

    A_f† (f1 ⊗ f2 + f2 ⊗ f1) = √3 (f ⊗ f1 ⊗ f2 + f ⊗ f2 ⊗ f1)

the right-hand side is not symmetric in the three vectors. We can remedy this by forcing symmetrization afterward. Define the operator a_f† on symmetric vectors in F_s^n by

    a_f† Ω_0 = f,        (11.20)
    a_f† = S_(n+1) A_f†   on vectors in F_s^n.        (11.21)

We will see that a_f† is the adjoint of a_f. We note here the linearity of a_f†: for any f, g ∈ H and any complex number c,

    a†_{f+g} = a†_f + a†_g,        a†_{cf} = c a†_f.        (11.22)

11.3.1.6 a_f† Is the Adjoint of a_f

The simplest way is to use the symmetrizing operators and act on decomposable vectors. We write both the annihilation and creation operators as

    a_f = S_(n−1) A_f S_(n),   acting on F_s^n,        (11.23)
    a_f† = S_(n) A_f† S_(n−1),   acting on F_s^{n−1}.        (11.24)

The S_(n−1) in the expression for a_f is actually unnecessary, because A_f S_(n) already maps symmetric vectors into symmetric vectors, but it is kept to show explicitly the adjoint relation between a_f and a_f†. Written like this, the adjoint nature of a_f† is obvious, because S_(n−1) and S_(n) are self-adjoint and we have shown A_f† to be the adjoint of A_f:

    (a_f)† = (S_(n−1) A_f S_(n))† = S_(n) A_f† S_(n−1) = a_f†.


11.3.2 Commutation Relations

We show that [a_f, a_g] = 0 and [a_f†, a_g†] = 0. We use expression (11.23) and formula (11.15) for a_f, and remember that S_(n)^2 = S_(n):

    a_f S_(n)(g1 ⊗ · · · ⊗ gn) = S_(n−1) A_f S_(n)(g1 ⊗ · · · ⊗ gn)
                              = (1/√n) Σ_{i=1}^{n} (f, gi) S_(n−1)(g1 ⊗ · · · ⊗ g/i ⊗ · · · ⊗ gn),

and then use (11.23) again for a_g, this time for (n − 1):

    a_g a_f S_(n)(g1 ⊗ · · · ⊗ gn) =
        (1/√(n(n−1))) Σ_{i=1}^{n} Σ_{j≠i} (g, gj)(f, gi) S_(n−2)(g1 ⊗ · · · ⊗ g/j ⊗ · · · ⊗ g/i ⊗ · · · ⊗ gn).

The expression for a_f a_g has the same form with i and j interchanged. Since the sum is over all distinct pairs of these indices, the two expressions are the same, and for all f and g

    [a_f, a_g] = a_f a_g − a_g a_f = 0.        (11.25)

By taking the adjoint of this equation it also follows that

    [a_f, a_g]† = a_g† a_f† − a_f† a_g† = 0,

so that [a_f†, a_g†] = 0 as well.

We prove next that [a_f, a_g†] = (f, g) for any f, g ∈ H. Notice that the product a_f a_g† maps H_s^{n−1} into itself:

    a_f a_g† S_(n−1) g1 ⊗ · · · ⊗ g_{n−1} = √n a_f S_(n) g ⊗ g1 ⊗ · · · ⊗ g_{n−1}
        = (f, g) S_(n−1) g1 ⊗ · · · ⊗ g_{n−1} + Σ_{i=1}^{n−1} (f, gi) S_(n−1) g ⊗ g1 ⊗ · · · ⊗ g/i ⊗ · · · ⊗ g_{n−1}.        (11.26)


On the other hand,

    a_g† a_f S_(n−1) g1 ⊗ · · · ⊗ g_{n−1} = a_g† (1/√(n−1)) Σ_{i=1}^{n−1} (f, gi) S_(n−2) g1 ⊗ · · · ⊗ g/i ⊗ · · · ⊗ g_{n−1}
        = Σ_{i=1}^{n−1} (f, gi) S_(n−1) g ⊗ g1 ⊗ · · · ⊗ g/i ⊗ · · · ⊗ g_{n−1}.

Therefore the commutator [a_f, a_g†] = a_f a_g† − a_g† a_f is (f, g) times the identity:

    [a_f, a_g†] = (f, g).        (11.27)
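The commutation relations can be verified numerically from the definitions (11.23) and (11.24) alone. The sketch below is a toy check, assuming H ≅ C^d with d = 2 and n = 3 chosen arbitrarily; it builds the symmetrizers as matrices and tests Eqs. (11.25) and (11.27) on the symmetric subspaces.

    import numpy as np
    from itertools import permutations
    from math import factorial, sqrt

    d = 2
    rng = np.random.default_rng(2)
    f = rng.normal(size=d) + 1j * rng.normal(size=d)
    g = rng.normal(size=d) + 1j * rng.normal(size=d)

    def symmetrizer(m):
        # matrix of S_(m) on (C^d)^{⊗m}
        dim = d ** m
        S = np.zeros((dim, dim))
        for P in permutations(range(m)):
            for idx in np.ndindex(*(d,) * m):
                col = np.ravel_multi_index(idx, (d,) * m)
                row = np.ravel_multi_index(tuple(idx[P[k]] for k in range(m)), (d,) * m)
                S[row, col] += 1.0 / factorial(m)
        return S

    def A(h, m):
        # A_h : H^m -> H^{m-1}, Eq. (11.10)
        return sqrt(m) * np.kron(h.conj()[None, :], np.eye(d ** (m - 1)))

    def a(h, m):        # Eq. (11.23), acting on the m-particle sector
        return symmetrizer(m - 1) @ A(h, m) @ symmetrizer(m)

    def adag(h, m):     # Eq. (11.24), mapping the (m-1)- to the m-particle sector
        return symmetrizer(m) @ A(h, m).conj().T @ symmetrizer(m - 1)

    n = 3
    S2 = symmetrizer(n - 1)

    # [a_f, a_g†] = (f, g) on H_s^{n-1}, Eq. (11.27)
    comm = a(f, n) @ adag(g, n) - adag(g, n - 1) @ a(f, n - 1)
    assert np.allclose(comm @ S2, np.vdot(f, g) * S2)

    # [a_f, a_g] = 0 on H_s^n, Eq. (11.25)
    comm2 = a(f, n - 1) @ a(g, n) - a(g, n - 1) @ a(f, n)
    assert np.allclose(comm2 @ symmetrizer(n), 0.0)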

11.4 Fock Space for Fermions

Just as there is a symmetric subspace H_s^n ⊂ H^n, there is an antisymmetric subspace H_a^n ⊂ H^n for each n ≥ 2.

11.4.1 Fock Space F_a

Recall that every permutation of n objects can be built up from a number of "transpositions," which are just interchanges of two specific objects leaving the others undisturbed. A permutation is called even or odd according to whether the number of transpositions needed is even or odd. Note that the inverse P^{-1} of a permutation P is even or odd accordingly as P is even or odd: if P is built up by a sequence of transpositions, the inverse is built up by the reverse sequence of the same transpositions. We define the anti-symmetrizer as

    A_(n)(f1 ⊗ f2 ⊗ · · · ⊗ fn) = (1/n!) Σ_P (−)^P (f_P(1) ⊗ f_P(2) ⊗ · · · ⊗ f_P(n)).        (11.28)

Here the symbol (−)^P is just the real number ±1 according to whether the permutation P is even or odd. The anti-symmetrizer, like the symmetrizer, is a projection operator. Firstly, it is Hermitian, which can be shown just as for the symmetrizer, only this time we must remember that the even or odd nature of P^{-1} is the same as that of P.


Secondly,

    A_(n) A_(n)(f1 ⊗ f2 ⊗ · · · ⊗ fn) = A_(n) (1/n!) Σ_P (−)^P (f_P(1) ⊗ f_P(2) ⊗ · · · ⊗ f_P(n))
        = (1/(n!)^2) Σ_{P,P′} (−)^P (−)^{P′} (f_{PP′(1)} ⊗ f_{PP′(2)} ⊗ · · · ⊗ f_{PP′(n)}).

In the double summation on the right-hand side, if we fix P, then the summation over P′ can be done, because as P′ runs over all permutations so does PP′, by the group property. Moreover, PP′ is even or odd according to the product (−)^P (−)^{P′}. Therefore, for each fixed P,

    (1/n!) Σ_{P′} (−)^P (−)^{P′} (f_{PP′(1)} ⊗ f_{PP′(2)} ⊗ · · · ⊗ f_{PP′(n)}) = A_(n)(f1 ⊗ f2 ⊗ · · · ⊗ fn)

is the same, and there are n! different values of P. This proves that

    A_(n)^2 = A_(n).        (11.29)

The subspaces of antisymmetric vectors can be defined in complete analogy with the symmetric subspaces:

    H_a^n = A_(n) H^n.        (11.30)
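A numerical check parallel to the symmetric case is again possible. In the sketch below (a toy model with H ≅ C^d; d = 3 and n = 3 are chosen so that the antisymmetric subspace is non-trivial) the matrix of A_(n) is Hermitian, idempotent, and annihilates any product with a repeated factor, which is the Pauli principle in embryo.

    import numpy as np
    from itertools import permutations
    from math import factorial

    d, n = 3, 3
    dim = d ** n

    def parity(P):
        # sign of the permutation: (-1) to the number of inversions
        s = 1
        for i in range(len(P)):
            for j in range(i + 1, len(P)):
                if P[i] > P[j]:
                    s = -s
        return s

    # A_(n) = (1/n!) Σ_P (−)^P U(P), Eq. (11.28), as a matrix on (C^d)^{⊗n}
    A = np.zeros((dim, dim))
    for P in permutations(range(n)):
        s = parity(P)
        for idx in np.ndindex(*(d,) * n):
            col = np.ravel_multi_index(idx, (d,) * n)
            row = np.ravel_multi_index(tuple(idx[P[k]] for k in range(n)), (d,) * n)
            A[row, col] += s / factorial(n)

    assert np.allclose(A, A.conj().T)   # Hermitian
    assert np.allclose(A @ A, A)        # projection, Eq. (11.29)

    # a repeated factor is annihilated (two particles in the same state)
    e0, e1 = np.eye(d)[0], np.eye(d)[1]
    assert np.allclose(A @ np.kron(e0, np.kron(e0, e1)), 0.0)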

11.4.1.1 Annihilation and Creation Operators on F_a

Just as the annihilation operator A_f defined in Sect. 11.3.1.2, when restricted to the symmetric subspaces, defined the annihilation operator a_f for bosons, the restriction of A_f to the antisymmetric subspaces gives the annihilation operator b_f for fermions:

    b_f = A_f |_{H_a^n}.        (11.31)

Its action is best understood with examples. Take, for instance,

    b_f A_(3) g1 ⊗ g2 ⊗ g3 = (1/3!) A_f (g1 ⊗ g2 ⊗ g3 − g1 ⊗ g3 ⊗ g2 + g2 ⊗ g3 ⊗ g1
                                          − g2 ⊗ g1 ⊗ g3 + g3 ⊗ g1 ⊗ g2 − g3 ⊗ g2 ⊗ g1)
        = (1/(2!√3)) [(f, g1)(g2 ⊗ g3 − g3 ⊗ g2) − (f, g2)(g1 ⊗ g3 − g3 ⊗ g1) + (f, g3)(g1 ⊗ g2 − g2 ⊗ g1)]
        = (1/√3) [(−1)^0 (f, g1) A_(2) g/1 ⊗ g2 ⊗ g3 + (−1)^1 (f, g2) A_(2) g1 ⊗ g/2 ⊗ g3
                   + (−1)^2 (f, g3) A_(2) g1 ⊗ g2 ⊗ g/3].

This suggests the general formula (for n ≥ 2)

    b_f A_(n) g1 ⊗ · · · ⊗ gn = (1/√n) Σ_{i=1}^{n} (f, gi)(−1)^{i−1} A_(n−1) g1 ⊗ · · · ⊗ g/i ⊗ · · · ⊗ gn.        (11.32)

The example and the formula show that, just as A_f transforms symmetric states into linear combinations of symmetric states, it transforms antisymmetric states into linear combinations of antisymmetric states. There is no harm in writing

    b_f = A_(n−1) A_f A_(n)        (11.33)

acting on the antisymmetric subspace H_a^n. Taking the adjoint of Eq. (11.33),

    b_f† = A_(n) A_f† A_(n−1),        (11.34)

acting on the antisymmetric subspace H_a^{n−1}. Then

    b_g† A_(n−1) f1 ⊗ · · · ⊗ f_{n−1} = √n A_(n) g ⊗ f1 ⊗ · · · ⊗ f_{n−1}.        (11.35)

This formula shows how b_f† acts on the antisymmetric subspace.

11.4.2 Anti-commutation Relations

We can use the formula (11.32) to derive relations analogous to the commutation relations of the bosonic annihilation and creation operators. For two annihilation operators we see that (omitting the symbol ⊗ to simplify the writing)

    b_g b_f A_(n) g1 · · · gn = (1/√n) Σ_{i=1}^{n} (f, gi)(−1)^{i−1} b_g A_(n−1) g1 · · · g/i · · · gn
        = (1/√(n(n−1))) [ Σ_i Σ_{j<i} (−1)^{i−1}(−1)^{j−1} (f, gi)(g, gj) A_(n−2) g1 · · · g/j · · · g/i · · · gn
                          + Σ_i Σ_{j>i} (−1)^{i−1}(−1)^{j} (f, gi)(g, gj) A_(n−2) g1 · · · g/i · · · g/j · · · gn ]
        = (1/√(n(n−1))) Σ_{j<i} (−1)^{i+j} [ (f, gi)(g, gj) − (f, gj)(g, gi) ] A_(n−2) g1 · · · g/j · · · g/i · · · gn.

When we interchange the order of b_f and b_g in the above formula, the names of the indices i and j can also be interchanged, and each bracket in the last line changes sign, so that there is an overall minus sign. Thus

    b_f b_g = −b_g b_f.        (11.36)

The anti-commutator in the Fock space is denoted by

    {b_f, b_g} ≡ b_f b_g + b_g b_f = 0.        (11.37)

Taking the adjoint, the anti-commutator of the creation operators is seen to be zero too:

    {b_f†, b_g†} = 0.

Moreover, for any f, g ∈ H,

    b_f b_g† A_(n−1) f1 ⊗ · · · ⊗ f_{n−1} = √n b_f A_(n) g ⊗ f1 ⊗ · · · ⊗ f_{n−1}
        = (f, g) A_(n−1) f1 ⊗ · · · ⊗ f_{n−1} + Σ_{i=1}^{n−1} (−1)^{i} (f, fi) A_(n−1) g ⊗ f1 ⊗ · · · f/i · · · ⊗ f_{n−1},        (11.38)


while

    b_g† b_f A_(n−1) f1 ⊗ · · · ⊗ f_{n−1} = b_g† (1/√(n−1)) Σ_{i=1}^{n−1} (f, fi)(−1)^{i−1} A_(n−2) f1 ⊗ · · · f/i · · · ⊗ f_{n−1}
        = Σ_{i=1}^{n−1} (f, fi)(−1)^{i−1} A_(n−1) g ⊗ f1 ⊗ · · · f/i · · · ⊗ f_{n−1}.

This is similar to the symmetric case except for the signs (−1)^i and (−1)^{i−1}, the first having one more power of (−1) because the creation operator has increased the number of factors in the tensor product by one. The two sums therefore cancel, which leads to the anti-commutation relation

    {b_f, b_g†} = (f, g).        (11.39)
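The anti-commutation relations can be checked numerically exactly as in the bosonic case, now with the anti-symmetrizer in place of the symmetrizer. The sketch below is again a toy model (H ≅ C^d, with d = 3 and n = 3 chosen arbitrarily) that verifies Eqs. (11.37) and (11.39) on the antisymmetric subspaces.

    import numpy as np
    from itertools import permutations
    from math import factorial, sqrt

    d = 3
    rng = np.random.default_rng(3)
    f = rng.normal(size=d) + 1j * rng.normal(size=d)
    g = rng.normal(size=d) + 1j * rng.normal(size=d)

    def parity(P):
        s = 1
        for i in range(len(P)):
            for j in range(i + 1, len(P)):
                if P[i] > P[j]:
                    s = -s
        return s

    def antisymmetrizer(m):
        # matrix of A_(m) on (C^d)^{⊗m}, Eq. (11.28)
        dim = d ** m
        A = np.zeros((dim, dim))
        for P in permutations(range(m)):
            s = parity(P)
            for idx in np.ndindex(*(d,) * m):
                col = np.ravel_multi_index(idx, (d,) * m)
                row = np.ravel_multi_index(tuple(idx[P[k]] for k in range(m)), (d,) * m)
                A[row, col] += s / factorial(m)
        return A

    def Afree(h, m):
        # A_h : H^m -> H^{m-1}, Eq. (11.10)
        return sqrt(m) * np.kron(h.conj()[None, :], np.eye(d ** (m - 1)))

    def b(h, m):        # Eq. (11.33)
        return antisymmetrizer(m - 1) @ Afree(h, m) @ antisymmetrizer(m)

    def bdag(h, m):     # Eq. (11.34)
        return antisymmetrizer(m) @ Afree(h, m).conj().T @ antisymmetrizer(m - 1)

    n = 3
    A2 = antisymmetrizer(n - 1)

    # {b_f, b_g†} = (f, g) on H_a^{n-1}, Eq. (11.39)
    anti = b(f, n) @ bdag(g, n) + bdag(g, n - 1) @ b(f, n - 1)
    assert np.allclose(anti @ A2, np.vdot(f, g) * A2)

    # {b_f, b_g} = 0 on H_a^n, Eq. (11.37)
    anti2 = b(f, n - 1) @ b(g, n) + b(g, n - 1) @ b(f, n)
    assert np.allclose(anti2 @ antisymmetrizer(n), 0.0)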

11.5 Orthonormal Bases in H_s^n and H_a^n

11.5.1 Orthonormal Basis in H^n

Let {e_i}_{i=1}^{∞} be an orthonormal basis in H:

    (e_i, e_j) = δ_ij.        (11.40)

An orthonormal basis in H^n can then be defined as

    {e_{i1} ⊗ · · · ⊗ e_{in}},        i1, . . . , in = 1, . . . , ∞,        (11.41)

with

    (e_{i1} ⊗ · · · ⊗ e_{in}, e_{j1} ⊗ · · · ⊗ e_{jn}) = δ_{i1 j1} · · · δ_{in jn}.        (11.42)

11.5.2 Orthonormal Basis in H_s^n

Start with the basis (11.41) above for H^n. Its projection onto the symmetric subspace is

    S_(n) e_{i1} ⊗ · · · ⊗ e_{in} = (1/n!) Σ_P e_{P(i1)} ⊗ · · · ⊗ e_{P(in)}.        (11.43)


The norm square of the projection is

    ‖S_(n) e_{i1} ⊗ · · · ⊗ e_{in}‖² = (S_(n) e_{i1} ⊗ · · · ⊗ e_{in}, S_(n) e_{k1} ⊗ · · · ⊗ e_{kn})
        = (e_{i1} ⊗ · · · ⊗ e_{in}, (S_(n))² e_{k1} ⊗ · · · ⊗ e_{kn})
        = (e_{i1} ⊗ · · · ⊗ e_{in}, S_(n) e_{k1} ⊗ · · · ⊗ e_{kn})
        = (1/n!) Σ_P (e_{i1} ⊗ · · · ⊗ e_{in}, e_{P(k1)} ⊗ · · · ⊗ e_{P(kn)}),

where (k1, . . . , kn) = (i1, . . . , in) is a second copy of the same indices, kept distinct for clarity. The right-hand side is zero unless the indices i1, . . . , in are some permutation of the indices k1, . . . , kn. Moreover, some of the indices i1, . . . , in may be equal, that is, repeated. Let i1, . . . , in consist of r different indices j1, . . . , jr repeated n1, . . . , nr times, respectively, with n1 + · · · + nr = n; symbolically,

    {i1, i2, . . . , in} = {j1, n1; . . . ; jr, nr}.

Then, in the final expression above for ‖S_(n) e_{i1} ⊗ · · · ⊗ e_{in}‖², an inner product in the sum over all permutations P applied to the set k1, . . . , kn contributes 1 exactly when the permuted indices are in the same order as i1, . . . , in. All those permutations which merely permute repeated indices among themselves contribute, and there are therefore n1! n2! · · · nr! such non-zero inner products. Therefore

    ‖S_(n) e_{i1} ⊗ · · · ⊗ e_{in}‖² = n1! . . . nr! / n!,

and the vectors can be normalized as

    e(j1, n1; . . . ; jr, nr) = (n!/n1! . . . nr!)^{1/2} S_(n) e_{i1} ⊗ · · · ⊗ e_{in}        (11.44)
                             = (1/√(n1! . . . nr! n!)) Σ_P e_{P(i1)} ⊗ · · · ⊗ e_{P(in)},        (11.45)

forming an orthonormal basis, two vectors being orthogonal unless they have the same indices repeated the same number of times.

Since a vector in H_s^n can be obtained from the vacuum by successive application of creation operators, we can also write the basis vectors in a more useful form. Let a_i† be the creation operator corresponding to the unit vector e_i of the orthonormal basis in H,

    a_i ≡ a_{e_i},        a_i† ≡ a_{e_i}†;


then

    a†_{in} · · · a†_{i1} Ω_0 = a†_{in} · · · a†_{i2} e_{i1}
                             = √2 a†_{in} · · · a†_{i3} S_(2)(e_{i2} ⊗ e_{i1})
                             = √2 √3 a†_{in} · · · a†_{i4} S_(3)(e_{i3} ⊗ e_{i2} ⊗ e_{i1})
                             = · · ·
                             = √(n!) S_(n)(e_{in} ⊗ · · · ⊗ e_{i1}).

Therefore,

    e_{i1 i2 ... in} ≡ e(j1, n1; . . . ; jr, nr)        (11.46)
                    = (n!/n1! . . . nr!)^{1/2} S_(n) e_{i1} ⊗ · · · ⊗ e_{in}        (11.47)
                    = (1/√(n1! . . . nr!)) a†_{in} · · · a†_{i1} Ω_0        (11.48)
                    = (1/√(n1! . . . nr!)) (a†_{j1})^{n1} · · · (a†_{jr})^{nr} Ω_0.        (11.49)

It is important to realize that only one of the vectors out of e_{i1...in} and its permutations over the indices i1 . . . in can be in the basis, because they are all equal. For example, e_{ij} = e_{ji} for i ≠ j, so we should keep only one of them in the basis. One way is to list them as

    e_{i1 ≤ i2 ≤ · · · ≤ in}.

Exercise 11.3 Find the action of a_k = a_{e_k} and a_k† = a_{e_k}† on the orthonormal states e_{i1 i2 ... in}.
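The normalization factor in Eq. (11.44) is easy to confirm numerically. The short sketch below is only an illustration, with H ≅ C^d, d = 3, and the index multiset (0, 0, 1) chosen as an example; it computes ‖S_(n) e_{i1} ⊗ · · · ⊗ e_{in}‖² directly.

    import numpy as np
    from itertools import permutations
    from math import factorial

    d = 3
    e = np.eye(d)

    def sym_vector(indices):
        # S_(n) e_{i1} ⊗ ... ⊗ e_{in} as a flat vector, Eq. (11.43)
        n = len(indices)
        v = 0
        for P in permutations(range(n)):
            w = e[indices[P[0]]]
            for k in P[1:]:
                w = np.kron(w, e[indices[k]])
            v = v + w
        return v / factorial(n)

    # indices (0, 0, 1): r = 2 distinct values, repeated n1 = 2 and n2 = 1 times
    v = sym_vector((0, 0, 1))
    n, n1, n2 = 3, 2, 1

    # ||S_(n) e_{i1} ⊗ ... ⊗ e_{in}||^2 = n1! n2! / n!
    assert np.isclose(np.vdot(v, v).real, factorial(n1) * factorial(n2) / factorial(n))

    # the normalized vector of Eq. (11.44) has unit norm
    e_norm = np.sqrt(factorial(n) / (factorial(n1) * factorial(n2))) * v
    assert np.isclose(np.vdot(e_norm, e_norm).real, 1.0)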

11.5.3 Orthonormal Basis in H_a^n

The orthonormal basis for the antisymmetric subspace is much simpler than in the symmetric case: A_(n) e_{i1} ⊗ · · · ⊗ e_{in} is actually zero if any of the indices is repeated. The orthonormal basis can therefore be chosen as

    e_{i1,...,in} ≡ (n!)^{-1/2} Σ_P (−)^P e_{P(i1)} ⊗ · · · ⊗ e_{P(in)}        (11.50)

for all distinct indices i1, . . . , in.
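Again a small numerical sketch (a toy model with H ≅ C^d, d = 4, and three-particle states) confirms that the vectors (11.50) are unit vectors for distinct indices, vanish if an index repeats, and merely change sign when the labels are permuted, anticipating the relation stated just below.

    import numpy as np
    from itertools import permutations
    from math import factorial, sqrt

    d = 4
    e = np.eye(d)

    def parity(P):
        s = 1
        for i in range(len(P)):
            for j in range(i + 1, len(P)):
                if P[i] > P[j]:
                    s = -s
        return s

    def antisym_basis(indices):
        # (n!)^{-1/2} times the signed sum over permuted products of
        # e_{i1}, ..., e_{in}, Eq. (11.50)
        n = len(indices)
        v = 0
        for P in permutations(range(n)):
            w = e[indices[P[0]]]
            for k in P[1:]:
                w = np.kron(w, e[indices[k]])
            v = v + parity(P) * w
        return v / sqrt(factorial(n))

    v = antisym_basis((0, 1, 2))
    assert np.isclose(np.vdot(v, v).real, 1.0)           # unit norm for distinct indices
    assert np.allclose(antisym_basis((0, 0, 2)), 0.0)    # vanishes if an index repeats
    assert np.allclose(antisym_basis((1, 0, 2)), -v)     # sign change under an odd relabelling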


Notice that for any P,

    e_{P(i1),...,P(in)} = (−)^P e_{i1,...,in},        (11.51)

so that in order to label only the independent basis vectors we must use some convention like i1 < i2 < · · · < in. With this in mind, we write the orthonormal basis in H_a^n as

ei1