LINEAR ALGEBRA AND ITS APPLICATIONS
Fourth Edition

Gilbert Strang

THOMSON BROOKS/COLE

Linear Algebra and Its Applications, Fourth Edition
Gilbert Strang

© 2006 Thomson Brooks/Cole, a part of The Thomson Corporation. Thomson, the Star logo, and Brooks/Cole are trademarks used herein under license.

ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, web distribution, information storage and retrieval systems, or in any other manner—without the written permission of the publisher.

Printed in the United States of America
2 3 4 5 6 7    09 08 07 06 05

Thomson Higher Education
10 Davis Drive
Belmont, CA 94002-3098
USA

For permission to use material from this text or product, submit a request online at http://www.thomsonrights.com. Any additional questions about permissions can be submitted by e-mail to thomsonrights@thomson.com.

Library of Congress Control Number: 2005923623
Student Edition: ISBN 0-03-010567-6
Table of Contents

Chapter 1  Matrices and Gaussian Elimination  1
  1.1  Introduction  1
  1.2  The Geometry of Linear Equations  3
  1.3  An Example of Gaussian Elimination  11
  1.4  Matrix Notation and Matrix Multiplication  19
  1.5  Triangular Factors and Row Exchanges  32
  1.6  Inverses and Transposes  45
  1.7  Special Matrices and Applications  58
  Review Exercises: Chapter 1  65

Chapter 2  Vector Spaces  69
  2.1  Vector Spaces and Subspaces  69
  2.2  Solving Ax = 0 and Ax = b  77
  2.3  Linear Independence, Basis, and Dimension  92
  2.4  The Four Fundamental Subspaces  102
  2.5  Graphs and Networks  114
  2.6  Linear Transformations  125
  Review Exercises: Chapter 2  137

Chapter 3  Orthogonality  141
  3.1  Orthogonal Vectors and Subspaces  141
  3.2  Cosines and Projections onto Lines  152
  3.3  Projections and Least Squares  160
  3.4  Orthogonal Bases and Gram–Schmidt  174
  3.5  The Fast Fourier Transform  188
  Review Exercises: Chapter 3  198

Chapter 4  Determinants  201
  4.1  Introduction  201
  4.2  Properties of the Determinant  203
  4.3  Formulas for the Determinant  210
  4.4  Applications of Determinants  220
  Review Exercises: Chapter 4  230

Chapter 5  Eigenvalues and Eigenvectors  233
  5.1  Introduction  233
  5.2  Diagonalization of a Matrix  245
  5.3  Difference Equations and Powers A^k  254
  5.4  Differential Equations and e^At  266
  5.5  Complex Matrices  280
  5.6  Similarity Transformations  293
  Review Exercises: Chapter 5  307

Chapter 6  Positive Definite Matrices  311
  6.1  Minima, Maxima, and Saddle Points  311
  6.2  Tests for Positive Definiteness  318
  6.3  Singular Value Decomposition  331
  6.4  Minimum Principles  339
  6.5  The Finite Element Method  346

Chapter 7  Computations with Matrices  351
  7.1  Introduction  351
  7.2  Matrix Norm and Condition Number  352
  7.3  Computation of Eigenvalues  359
  7.4  Iterative Methods for Ax = b  367

Chapter 8  Linear Programming and Game Theory  377
  8.1  Linear Inequalities  377
  8.2  The Simplex Method  382
  8.3  The Dual Problem  392
  8.4  Network Models  401
  8.5  Game Theory  408

Appendix A  Intersection, Sum, and Product of Spaces  415
Appendix B  The Jordan Form  422
Solutions to Selected Exercises  428
Matrix Factorizations  474
Glossary  476
MATLAB Teaching Codes  481
Index  482
Linear Algebra in a Nutshell  488
PREFACE

Revising this textbook has been a special challenge, for a very nice reason. So many people have read this book, and taught from it, and even loved it. The spirit of the book could never change. This text was written to help our teaching of linear algebra keep up with the enormous importance of this subject—which just continues to grow.

One step was certainly possible and desirable—to add new problems. Teaching for all these years required hundreds of new exam questions (especially with quizzes going onto the web). I think you will approve of the extended choice of problems. The questions are still a mixture of explain and compute—the two complementary approaches to learning this beautiful subject.

I personally believe that many more people need linear algebra than calculus. Isaac Newton might not agree! But he isn't teaching mathematics in the 21st century (and maybe he wasn't a great teacher, but we will give him the benefit of the doubt). Certainly the laws of physics are well expressed by differential equations. Newton needed calculus—quite right. But the scope of science and engineering and management (and life) is now so much wider, and linear algebra has moved into a central place.

May I say a little more, because many universities have not yet adjusted the balance toward linear algebra. Working with curved lines and curved surfaces, the first step is always to linearize. Replace the curve by its tangent line, fit the surface by a plane, and the problem becomes linear. The power of this subject comes when you have ten variables, or 1000 variables, instead of two.

You might think I am exaggerating to use the word "beautiful" for a basic course in mathematics. Not at all. This subject begins with two vectors v and w, pointing in different directions. The key step is to take their linear combinations. We multiply to get 3v and 4w, and we add to get the particular combination 3v + 4w. That new vector is in the same plane as v and w. When we take all the combinations, we are filling in the whole plane. If I draw v and w on this page, they fill the page (and beyond), but they don't go up from the page.

In the language of linear equations, I can solve cv + dw = b exactly when the vector b lies in the same plane as v and w.

I will keep going a little more, to convert combinations of three-dimensional vectors into linear algebra. If the vectors are v = (1, 2, 3) and w = (1, 3, 4), put them into the columns of a matrix:

    matrix = [1 1; 2 3; 3 4].

To find combinations of those columns, "multiply" the matrix by a vector (c, d):

    Linear combinations cv + dw    [1 1; 2 3; 3 4] [c; d] = c [1; 2; 3] + d [1; 3; 4].

Those combinations fill a vector space. We call it the column space of the matrix. (For these two columns, that space is a plane.) To decide if b = (2, 5, 7) is on that plane, we have three equations to solve:

    [1 1; 2 3; 3 4] [c; d] = [2; 5; 7]    means    c + d = 2,  2c + 3d = 5,  3c + 4d = 7.
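For readers who want to check such computations by machine, here is a minimal MATLAB sketch (MATLAB returns near the end of this preface; the variable names here are mine):

    % Do the columns (1,2,3) and (1,3,4) combine to give b = (2,5,7)?
    M = [1 1; 2 3; 3 4];
    b = [2; 5; 7];
    cd = M \ b;         % solves the three equations: c = 1, d = 1
    disp(M*cd - b)      % residual is zero, so b lies in the plane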
I leave the solution to you. The vector b = (2, 5, 7) does lie in the plane of v and w. If the 7 changes to any other number, then b won't lie in the plane—it will not be a combination of v and w, and the three equations will have no solution.

Now I can describe the first part of the book, about linear equations Ax = b. The matrix A has n columns and m rows. Linear algebra moves steadily to n vectors in m-dimensional space. We still want combinations of the columns (in the column space). We still get m equations to produce b (one for each row). Those equations may or may not have a solution. They always have a least-squares solution.

The interplay of columns and rows is the heart of linear algebra. It's not totally easy, but it's not too hard. Here are four of the central ideas:

1. The column space (all combinations of the columns).
2. The row space (all combinations of the rows).
3. The rank (the number of independent columns) (or rows).
4. Elimination (the good way to find the rank of a matrix).

I will stop here, so you can start the course.

It may be helpful to mention the web pages connected to this book. So many messages come back with suggestions and encouragement, and I hope you will make free use of everything. You can directly access http://web.mit.edu/18.06, which is continually updated for the course that is taught every semester. Linear algebra is also on MIT's OpenCourseWare site http://ocw.mit.edu, where 18.06 became exceptional by including videos of the lectures (which you definitely don't have to watch...). Here is a part of what is available on the web:

1. Lecture schedule and current homeworks and exams with solutions.
2. The goals of the course, and conceptual questions.
3. Interactive Java demos (audio is now included for eigenvalues).
4. Linear Algebra Teaching Codes and MATLAB problems.
5. Videos of the complete course (taught in a real classroom).

The course page has become a valuable link to the class, and a resource for the students.
I am very optimistic about the potential for graphics with sound. The bandwidth for voiceover is low, and FlashPlayer is freely available. This offers a quick review (with active experiment), and the full lectures can be downloaded. I hope professors and students worldwide will find these web pages helpful. My goal is to make this book as helpful as possible with all the course material I can provide.

Student Solutions Manual (0-495-01325-0)  The Student Solutions Manual provides solutions to the odd-numbered problems in the text.

Instructor's Solutions Manual (0-030-10568-4)  The Instructor's Solutions Manual has teaching notes for each chapter and solutions to all of the problems in the text.

Structure of the Course

The two fundamental problems are Ax = b and Ax = λx for square matrices A. The first problem Ax = b has a solution when A has independent columns. The second problem Ax = λx looks for independent eigenvectors. A crucial part of this course is to learn what "independence" means.

I believe that most of us learn first from examples. You can see that

    A = [1 1 2; 1 2 3; 1 3 4]

does not have independent columns. Column 1 plus column 2 equals column 3. A wonderful theorem of linear algebra says that the three rows are not independent either. The third row must lie in the same plane as rows 1 and 2. Some combination of rows 1 and 2 will produce row 3. You might find that combination quickly (I didn't). In the end I had to use elimination to discover that the right combination uses 2 times row 2, minus row 1.
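Both statements about this matrix can be verified in a few MATLAB lines (a sketch of my own, not part of the text):

    A = [1 1 2; 1 2 3; 1 3 4];
    disp(rank(A))                    % 2, not 3: the columns are dependent
    disp(A(:,1) + A(:,2) - A(:,3))   % column 1 plus column 2 equals column 3
    disp(2*A(2,:) - A(1,:))          % 2 times row 2 minus row 1 gives row 3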
Elimination is the simple and natural way to understand a matrix, by producing a lot of zero entries. So the course starts there. But don't stay there too long! You have to get from combinations of the rows, to independence of the rows, to "dimension of the row space." That is a key goal, to see whole spaces of vectors: the row space and the column space and the nullspace.

A further goal is to understand how the matrix acts. When A multiplies x it produces the new vector Ax. The whole space of vectors moves—it is "transformed" by A. Special transformations come from particular matrices, and those are the foundation stones of linear algebra: diagonal matrices, orthogonal matrices, triangular matrices, symmetric matrices.

The eigenvalues of those matrices are special too. I think 2 by 2 matrices provide terrific examples of the information that eigenvalues can give. Sections 5.1 and 5.2 are worth careful reading, to see how Ax = λx is useful. Here is a case in which small matrices allow tremendous insight.

Overall, the beauty of linear algebra is seen in so many different ways:

1. Visualization. Combinations of vectors. Spaces of vectors. Rotation and reflection and projection of vectors. Perpendicular vectors. Four fundamental subspaces.

2. Abstraction. Independence of vectors. Basis and dimension of a vector space. Linear transformations. Singular value decomposition and the best basis.

3. Computation. Elimination to produce zero entries. Gram–Schmidt to produce orthogonal vectors. Eigenvalues to solve differential and difference equations.

4. Applications. Least-squares solution when Ax = b has too many equations. Difference equations approximating differential equations. Markov probability matrices (the basis for Google!). Orthogonal eigenvectors as principal axes (and more...).
To go further with those applications, may I mention the books published by Wellesley-Cambridge Press. They are all linear algebra in disguise, applied to signal processing and partial differential equations and scientific computing (and even GPS). If you look at http://www.wellesleycambridge.com, you will see part of the reason that linear algebra is so widely used.

After this preface, the book will speak for itself. You will see the spirit right away. The emphasis is on understanding—I try to explain rather than to deduce. This is a book about real mathematics, not endless drill. In class, I am constantly working with examples to teach what students need.

I enjoyed writing this book, and I certainly hope you enjoy reading it. A big part of the pleasure comes from working with friends. I had wonderful help from Brett Coonley and Cordula Robinson and Erin Maneri. They created the TeX files and drew all the figures. Without Brett's steady support I would never have completed this new edition.

Earlier help with the Teaching Codes came from Steven Lee and Cleve Moler. Those follow the steps described in the book; MATLAB and Maple and Mathematica are faster for large matrices. All can be used (optionally) in this course. I could have added "Factorization" to that list above, as a fifth avenue to the understanding of matrices:

    [L, U, P] = lu(A)     for linear equations
    [Q, R] = qr(A)        to make the columns orthogonal
    [S, E] = eig(A)       to find eigenvectors and eigenvalues
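Those three commands run exactly as written in MATLAB. A minimal usage sketch (the matrix A here is my own example):

    A = [2 1 1; 4 -6 0; -2 7 2];
    [L, U, P] = lu(A);    % P*A = L*U, row exchanges recorded in P
    [Q, R] = qr(A);       % A = Q*R, orthonormal columns in Q
    [S, E] = eig(A);      % A*S = S*E, eigenvectors in S, eigenvalues on E's diagonal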
In giving thanks, I never forget the first dedication of this textbook, years ago. That was a special chance to thank my parents for so many unselfish gifts. Their example is an inspiration for my life.

And I thank the reader too, hoping you like this book.

Gilbert Strang
1  Matrices and Gaussian Elimination

1.1 INTRODUCTION

This book begins with the central problem of linear algebra: solving linear equations. The most important case, and the simplest, is when the number of equations equals the number of unknowns. We have n equations in n unknowns, starting with n = 2:

    Two equations    1x + 2y = 3
    Two unknowns     4x + 5y = 6.     (1)

The unknowns are x and y. I want to describe two ways, elimination and determinants, to solve these equations. Certainly x and y are determined by the six numbers 1, 2, 3, 4, 5, 6.

1. Elimination    Subtract 4 times the first equation from the second equation. This eliminates x from the second equation, and it leaves one equation for y:

    (equation 2) − 4(equation 1)    −3y = −6.     (2)

Immediately we know y = 2. Then x comes from the first equation 1x + 2y = 3:

    Back-substitution    1x + 2(2) = 3    gives    x = −1.     (3)

Proceeding carefully, we check that x and y also solve the second equation. This should work and it does: 4 times (x = −1) plus 5 times (y = 2) equals 6.

2. Determinants    The solution y = 2 depends completely on those six numbers in the system. There must be a formula for y (and also x). It is a "ratio of determinants" and I hope you will allow me to write it down directly:

    y = det[1 3; 4 6] / det[1 2; 4 5] = (1·6 − 3·4) / (1·5 − 2·4) = −6/−3 = 2.     (4)
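Both routes to y = 2 take one line each in MATLAB (a sketch of mine, following equations (2)–(4)):

    A = [1 2; 4 5];  b = [3; 6];
    y = (b(2) - 4*b(1)) / (A(2,2) - 4*A(1,2));    % elimination: -6/-3 = 2
    y2 = det([1 3; 4 6]) / det([1 2; 4 5]);       % the same ratio of determinants
    x = (b(1) - A(1,2)*y) / A(1,1);               % back-substitution gives x = -1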
That could seem a little mysterious, unless you already know about 2 by 2 determinants. They gave the same answer y = 2, coming from the same ratio of −6 to −3. If we stay with determinants (which we don't plan to do), there will be a similar formula to compute the other unknown, x:

    x = det[3 2; 6 5] / det[1 2; 4 5] = (3·5 − 2·6) / (1·5 − 2·4) = 3/−3 = −1.     (5)

Let me compare those two approaches, looking ahead to real problems when n is very large (n = 1000 is a very moderate size in scientific computing). The truth is that direct use of the determinant formula for 1000 equations would be a total disaster. It would use the million numbers on the left sides correctly, but not efficiently. We will find that formula (Cramer's Rule) in Chapter 4, but we want a good method to solve 1000 equations in Chapter 1.

That good method is Gaussian Elimination. This is the algorithm that is constantly used to solve large systems of equations. From the examples in a textbook (n = 3 is close to the upper limit on the patience of the author and reader) you might not see much difference. Equations (2) and (4) used essentially the same steps to find y = 2. Certainly x came faster by the back-substitution in equation (3) than the ratio in (5). For larger n there is absolutely no question. Elimination wins (and this is even the best way to compute determinants).

The idea of elimination is deceptively simple—you will master it after a few examples. It will become the basis for half of this book, simplifying a matrix so that we can understand it. Together with the mechanics of the algorithm, we want to explain four deeper aspects in this chapter. They are:

1. Linear equations lead to the geometry of planes. It is not easy to visualize a nine-dimensional plane in ten-dimensional space. It is harder to see ten of those planes, intersecting at the solution to ten equations—but somehow this is almost possible. Our example has two lines in Figure 1.1, meeting at the point (x, y) = (−1, 2). Linear algebra moves that picture into ten dimensions, where the intuition has to imagine the geometry (and gets it right).

[Figure 1.1  The example has one solution. Singular cases have none or too many. Panels: x + 2y = 3 and 4x + 5y = 6 meet at the solution (x, y) = (−1, 2); the parallel lines x + 2y = 3 and 4x + 8y = 6 give no solution; 4x + 8y = 12 gives a whole line of solutions.]
2. We move to matrix notation, writing the n unknowns as a vector x and the n equations as Ax = b. We multiply A by "elimination matrices" to reach an upper triangular matrix U. Those steps factor A into L times U, where L is lower triangular. I will write down A and its factors for our example, and explain them at the right time:

    Factorization    A = [1 2; 4 5] = [1 0; 4 1] [1 2; 0 −3] = L times U.     (6)

First we have to introduce matrices and vectors and the rules for multiplication. Every matrix has a transpose Aᵀ. This matrix has an inverse A⁻¹.

3. In most cases elimination goes forward without difficulties. The matrix has an inverse and the system Ax = b has one solution. In exceptional cases the method will break down—either the equations were written in the wrong order, which is easily fixed by exchanging them, or the equations don't have a unique solution. That singular case will appear if 8 replaces 5 in our example:

    Singular case    1x + 2y = 3
                     4x + 8y = 6.     (7)

Elimination still innocently subtracts 4 times the first equation from the second. But look at the result!

    (equation 2) − 4(equation 1)    0 = −6.

This singular case has no solution. Other singular cases have infinitely many solutions. (Change 6 to 12 in the last equation, and elimination will lead to 0 = 0. Now y can have any value.) When elimination breaks down, we want to find every possible solution.

4. We need a rough count of the number of elimination steps required to solve a system of size n. The computing cost often determines the accuracy in the model. A hundred equations require a third of a million steps (multiplications and subtractions). The computer can do those quickly, but not many trillions. And already after a million steps, roundoff error could be significant. (Some problems are sensitive; others are not.) Without trying for full detail, we want to see large systems that arise in practice, and how they are actually solved.

The final result of this chapter will be an elimination algorithm that is about as efficient as possible. It is essentially the algorithm that is in constant use in a tremendous variety of applications. And at the same time, understanding it in terms of matrices—the coefficient matrix A, the matrices E for elimination and P for row exchanges, and the final factors L and U—is an essential foundation for the theory. I hope you will enjoy this book and this course.
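The factorization (6) is easy to confirm (a two-line MATLAB sketch, my notation): the multiplier 4 from the single elimination step is exactly the entry that L stores below its diagonal.

    L = [1 0; 4 1];  U = [1 2; 0 -3];
    disp(L*U)        % reproduces A = [1 2; 4 5]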
1.2 THE GEOMETRY OF LINEAR EQUATIONS

The way to understand this subject is by example. We begin with two extremely humble equations, recognizing that you could solve them without a course in linear algebra. Nevertheless I hope you will give Gauss a chance:

    2x − y = 1
    x + y = 5.

We can look at that system by rows or by columns. We want to see them both.

The first approach concentrates on the separate equations (the rows). That is the most familiar, and in two dimensions we can do it quickly. The equation 2x − y = 1 is represented by a straight line in the x-y plane. The line goes through the points x = 1, y = 1 and x = ½, y = 0 (and also through (2, 3) and all intermediate points). The second equation x + y = 5 produces a second line (Figure 1.2a). Its slope is dy/dx = −1 and it crosses the first line at the solution.

The point of intersection lies on both lines. It is the only solution to both equations. That point x = 2 and y = 3 will soon be found by "elimination."

[Figure 1.2  (a) The row picture: the two lines meet at x = 2, y = 3. (b) The column picture: 2 times column (2, 1) combines with 3 times column (−1, 1) to give (1, 5).]

The second approach looks at the columns of the linear system. The two separate equations are really one vector equation:

    Column form    x [2; 1] + y [−1; 1] = [1; 5].

The problem is to find the combination of the column vectors on the left side that produces the vector on the right side. Those vectors (2, 1) and (−1, 1) are represented by the bold lines in Figure 1.2b. The unknowns are the numbers x and y that multiply the column vectors. The whole idea can be seen in that figure, where 2 times column 1 is added to 3 times column 2. Geometrically this produces a famous parallelogram. Algebraically it produces the correct vector (1, 5), on the right side of our equations. The column picture confirms that x = 2 and y = 3.
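Both pictures can be checked numerically (a small MATLAB sketch, names mine):

    A = [2 -1; 1 1];  b = [1; 5];
    x = A \ b;                          % the intersection point: x = 2, y = 3
    disp(x(1)*A(:,1) + x(2)*A(:,2))     % the column combination gives (1, 5)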
More time could be spent on that example, but I would rather move forward to n = 3. Three equations are still manageable, and they have much more variety:

    Three planes    2u + v + w = 5
                    4u − 6v = −2     (1)
                    −2u + 7v + 2w = 9.

Again we can study the rows or the columns, and we start with the rows. Each equation describes a plane in three dimensions. The first plane is 2u + v + w = 5, and it is sketched in Figure 1.3. It contains the points (5/2, 0, 0) and (0, 5, 0) and (0, 0, 5). It is determined by any three of its points—provided they do not lie on a line.

Changing 5 to 10, the plane 2u + v + w = 10 would be parallel to this one. It contains (5, 0, 0) and (0, 10, 0) and (0, 0, 10), twice as far from the origin—which is the center point u = 0, v = 0, w = 0. Changing the right side moves the plane parallel to itself, and the plane 2u + v + w = 0 goes through the origin.

The second plane is 4u − 6v = −2. It is drawn vertically, because w can take any value. The coefficient of w is zero, but this remains a plane in 3-space. (The equation 4u = 3, or even the extreme case u = 0, would still describe a plane.) The figure shows the intersection of the second plane with the first. That intersection is a line. In three dimensions a line requires two equations; in n dimensions it will require n − 1.

[Figure 1.3  The row picture: three intersecting planes from three linear equations. The line of intersection of the first two planes meets the third plane at the solution point (1, 1, 2).]

Finally the third plane intersects this line in a point. The plane (not drawn) represents the third equation −2u + 7v + 2w = 9, and it crosses the line at u = 1, v = 1, w = 2. That triple intersection point (1, 1, 2) solves the linear system.

How does this row picture extend into n dimensions? The n equations will contain n unknowns. The first equation still determines a "plane." It is no longer a two-dimensional plane in 3-space; somehow it has "dimension" n − 1. It must be flat and extremely thin within n-dimensional space, although it would look solid to us.

If time is the fourth dimension, then the plane t = 0 cuts through four-dimensional space and produces the three-dimensional universe we live in (or rather, the universe as it was at t = 0). Another plane is z = 0, which is also three-dimensional; it is the ordinary x-y plane taken over all time. Those three-dimensional planes will intersect! They share the ordinary x-y plane at t = 0, z = 0. We are down to two dimensions, and the next plane leaves a line. Finally a fourth plane leaves a single point. It is the intersection point of 4 planes in 4 dimensions, and it solves the 4 underlying equations.

I will be in trouble if that example from relativity goes any further. The point is that linear algebra can operate with any number of equations. The first equation produces an (n − 1)-dimensional plane in n dimensions. The second plane intersects it (we hope) in a smaller set of "dimension n − 2." Assuming all goes well, every new plane (every new equation) reduces the dimension by one. At the end, when all n planes are accounted for, the intersection has dimension zero. It is a point, it lies on all the planes, and its coordinates satisfy all n equations. It is the solution!

Column Vectors and Linear Combinations

We turn to the columns. This time the vector equation (the same equation as (1)) is

    Column form    u [2; 4; −2] + v [1; −6; 7] + w [1; 0; 2] = [5; −2; 9] = b.     (2)

Those are three-dimensional column vectors. The vector b is identified with the point whose coordinates are 5, −2, 9. Every point in three-dimensional space is matched to a vector, and vice versa. That was the idea of Descartes, who turned geometry into algebra by working with the coordinates of the point. We can write the vector in a column, or we can list its components as b = (5, −2, 9), or we can represent it geometrically by an arrow from the origin. You can choose the arrow, or the point, or the three numbers. In six dimensions it is probably easiest to choose the six numbers.
We use parentheses and commas when the components are listed horizontally, and square brackets (with no commas) when a column vector is printed vertically. What really matters is addition of vectors and multiplication by a scalar (a number). In Figure 1.4a you see a vector addition, component by component:

    Vector addition    [5; 0; 0] + [0; −2; 0] + [0; 0; 9] = [5; −2; 9] = b.

[Figure 1.4  The column picture: a linear combination of the columns equals b. (a) Add vectors along the axes: (5, 0, 0) + (0, −2, 0) + (0, 0, 9) = b. (b) Add columns 1 + 2 + (3 + 3), including 2(column 3).]
In the right-hand figure there is a multiplication by 2 (and if it had been −2 the vector would have gone in the reverse direction):

    Multiplication by scalars    2 [1; 0; 2] = [2; 0; 4],    −2 [1; 0; 2] = [−2; 0; −4].

Also in the right-hand figure is one of the central ideas of linear algebra. It uses both of the basic operations; vectors are multiplied by numbers and then added. The result is called a linear combination, and this combination solves our equation:

    Linear combination    1 [2; 4; −2] + 1 [1; −6; 7] + 2 [1; 0; 2] = [5; −2; 9].

Equation (2) asked for multipliers u, v, w that produce the right side b. Those numbers are u = 1, v = 1, w = 2. They give the correct combination of the columns. They also gave the point (1, 1, 2) in the row picture (where the three planes intersect).

Our true goal is to look beyond two or three dimensions into n dimensions. With n equations in n unknowns, there are n planes in the row picture. There are n vectors in the column picture, plus a vector b on the right side. The equations ask for a linear combination of the n columns that equals b. For certain equations that will be impossible. Paradoxically, the way to understand the good case is to study the bad one. Therefore we look at the geometry exactly when it breaks down, in the singular case.

    Row picture: Intersection of planes    Column picture: Combination of columns
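The multipliers u = 1, v = 1, w = 2 are quickly confirmed (a one-line MATLAB check, my own):

    disp(1*[2; 4; -2] + 1*[1; -6; 7] + 2*[1; 0; 2])   % the combination equals b = (5, -2, 9)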
The Singular Case

Suppose we are again in three dimensions, and the three planes in the row picture do not intersect. What can go wrong? One possibility is that two planes may be parallel. The equations 2u + v + w = 5 and 4u + 2v + 2w = 11 are inconsistent—and parallel planes give no solution (Figure 1.5a shows an end view). In two dimensions, parallel lines are the only possibility for breakdown. But three planes in three dimensions can be in trouble without being parallel.

The most common difficulty is shown in Figure 1.5b. From the end view the planes form a triangle. Every pair of planes intersects in a line, and those lines are parallel.

[Figure 1.5  Singular cases: no solution for (a), (b), or (d); an infinity of solutions for (c). (a) Two parallel planes, no intersection. (b) The planes form a triangle; the lines of intersection are parallel. (c) A whole line of intersection. (d) All planes parallel.]

The third plane is not parallel to the other planes, but it is parallel to their line of intersection. This corresponds to a singular system with b = (2, 5, 6):

    No solution, as in Figure 1.5b    u + v + w = 2
                                      2u + 3w = 5     (3)
                                      3u + v + 4w = 6.

The first two left sides add up to the third. On the right side that fails: 2 + 5 ≠ 6. Equation 1 plus equation 2 minus equation 3 is the impossible statement 0 = 1. Thus the equations are inconsistent, as Gaussian elimination will systematically discover.

Another singular system, close to this one, has an infinity of solutions. When the 6 in the last equation becomes 7, the three equations combine to give 0 = 0. Now the third equation is the sum of the first two. In that case the three planes have a whole line in common (Figure 1.5c). Changing the right sides will move the planes in Figure 1.5b parallel to themselves, and for b = (2, 5, 7) the figure is suddenly different. The lowest plane moved up to meet the others, and there is a line of solutions. Problem 1.5c is still singular, but now it suffers from too many solutions instead of too few.

The extreme case is three parallel planes. For most right sides there is no solution (Figure 1.5d). For special right sides (like b = (0, 0, 0)!) there is a whole plane of solutions—because the three parallel planes move over to become the same.

What happens to the column picture when the system is singular? It has to go wrong; the question is how. There are still three columns on the left side of the equations, and we try to combine them to produce b. Stay with equation (3):

    Singular case: Column picture
    Three columns in the same plane      u [1; 2; 3] + v [1; 0; 1] + w [1; 3; 4] = b.     (4)
    Solvable only for b in that plane
=
(2,5, 7) this
was
=
3 columns
3 columns
in
in
a
plane 6 in
(a) Figure
1.6
no
solution
Singular
plane
(b) infinity cases:
b outside
or
inside
the
plane
with
of solutions
all three
columns.
a
plane
The
1.2
there
is
How
do
find
a
v
—1,w
in
picture know
we
—2. Three
is in the
plane
The
vector
can
Figure
1.6b
b
adds
that
2 and 3.
1
Only
(2,5,7) column 3—so (1,0, 1) is a solution. 0. So there (3, —1, —2) that gives b the row picture.
We
did. That
the
If
n.
columns
is that
is
a
n
planes
fact of mathematics, have
lie in the
If the
knew the columns
we
same
columns—it
1.5c. is to
answer
it is
u
=
3,
3. Column
is column
1
1
plus
multiple of thecombination
any
of solutions—as
line
produce
Figure
One
plus independent.
know
we
from
|
would
combine
give
to
computation—and it or common, infinitely
in
point plane.
ways
in
too
are
to
column
twice
of
not
no
add
can
there
case
calculation,
some
are
of the
plane
is a whole
=
The truth
columns
two
same
2
column
picture plane?
row
After
to zero.
equals
is in that
=
the
to
lie in the
columns
column
times
of columns
plane
corresponds
the three
of the columns
combination =
columns
that
of the columns. In that be combined in infinitely many
lie in the
b does
the three
many solutions; b. That column
=
that
chance
a
9
Geometry of Linear Equations
because
zero,
remains
true
rows
in dimension
points,
many
the then
the
n
picture breaks down, so does the column picture. That brings out the difference between Chapter 1 and Chapter 2. This chapter studies the most important there is one solution and it has to be found. problem—the nonsingular case—where or In none. Chapter 2 studies the general case, where there may be many solutions both cases a decent we cannot continue without notation (matrix notation) and a decent Start with elimination. we algorithm (elimination). After the row
eXeICises,
Problem Set 1.2

1. For the equations x + y = 4, 2x − 2y = 4, draw the row picture (two intersecting lines) and the column picture (combination of two columns equal to the column vector (4, 4) on the right side).

2. Solve to find a combination of the columns that equals b:

    Triangular system    u − v − w = b₁
                             v + w = b₂
                                 w = b₃.

3. (Recommended) Describe the intersection of the three planes u + v + w + z = 6 and u + w + z = 4 and u + w = 2 (all in four-dimensional space). Is it a line or a point or an empty set? What is the intersection if the fourth plane u = −1 is included? Find a fourth equation that leaves us with no solution.

4. Sketch these three lines and decide if the equations are solvable:

    3 by 2 system    x + 2y = 2,    x − y = 2,    y = 1.

   What happens if all right-hand sides are zero? Is there any nonzero choice of right-hand sides that allows the three lines to intersect at the same point?

5. Find two points on the line of intersection of the three planes t = 0 and z = 0 and x + y + z + t = 1 in four-dimensional space.

6. When b = (2, 5, 7), find a solution (u, v, w) to equation (4) different from the solution (1, 0, 1) mentioned in the text.

7. Give two more right-hand sides in addition to b = (2, 5, 7) for which equation (4) can be solved. Give two more right-hand sides in addition to b = (2, 5, 6) for which it cannot be solved.

8. Explain why the system

    u + v + w = 2
    u + 2v + 3w = 1
    v + 2w = 0

   is singular by finding a combination of the three equations that adds up to 0 = 1. What value should replace the last zero on the right side to allow the equations to have solutions—and what is one of the solutions?

9. The column picture for the previous exercise (singular system) is

    u [1; 1; 0] + v [1; 2; 1] + w [1; 3; 2] = b.

   Show that the three columns on the left lie in the same plane by expressing the third column as a combination of the first two. What are all the solutions (u, v, w) if b is the zero vector (0, 0, 0)?

10. (Recommended) Under what condition on y₁, y₂, y₃ do the points (0, y₁), (1, y₂), (2, y₃) lie on a straight line?

11. These equations are certain to have the solution x = y = 0. For which values of a is there a whole line of solutions?

    ax + 2y = 0
    2x + ay = 0.

12. Starting with x + 4y = 7, find the equation for the parallel line through x = 0, y = 0. Find the equation of another line that meets the first at x = 3, y = 1.

Problems 13–15 are a review of the row and column pictures.

13. Draw the two pictures in two planes for the equations x − 2y = 0, x + y = 6.

14. For two linear equations in three unknowns x, y, z, the row picture shows (2 or 3) (lines or planes) in (two or three)-dimensional space. The column picture is in (two or three)-dimensional space. The solutions normally lie on a ___.

15. For four linear equations in two unknowns x and y, the row picture shows four ___. The column picture is in ___-dimensional space. The equations have no solution unless the vector on the right-hand side is a combination of ___.

16. Find a point with z = 2 on the intersection line of the planes x + y + z = 6 and x − y + z = 4. Find the point with z = 0 and a third point halfway between.

17. The first of these equations plus the second equals the third:

    x + y + z = 2
    x + 2y + z = 3
    2x + 3y + 2z = 5.

    The first two planes meet along a line. The third plane contains that line, because if x, y, z satisfy the first two equations then they also ___. The equations have infinitely many solutions (the whole line L). Find three solutions.

18. Move the third plane in Problem 17 to a parallel plane 2x + 3y + 2z = 9. Now the three equations have no solution—why not? The first two planes meet along the line L, but the third plane doesn't ___ that line.

19. In Problem 17 the columns are (1, 1, 2) and (1, 2, 3) and (1, 1, 2). This is a "singular case" because the third column is ___. Find two combinations of the columns that give b = (2, 3, 5). This is only possible for b = (4, 6, c) if c = ___.

20. Normally 4 "planes" in four-dimensional space meet at a ___. Normally 4 column vectors in four-dimensional space can combine to produce b. What combination of (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1) produces b = (3, 3, 3, 2)? What 4 equations for x, y, z, t are you solving?

21. When equation 1 is added to equation 2, which of these are changed: the planes in the row picture, the column picture, the coefficient matrix, the solution?

22. If (a, b) is a multiple of (c, d) with abcd ≠ 0, show that (a, c) is a multiple of (b, d). This is surprisingly important: call it a challenge question. You could use numbers first to see how a, b, c, and d are related. The question will lead to:

    If A = [a b; c d] has dependent rows, then it has dependent columns.

23. In these equations, the third column (multiplying w) is the same as the right side b. The column form of the equations immediately gives what solution for (u, v, w)?

    6u + 7v + 8w = 8
    4u + 5v + 9w = 9
    2u − 2v + 7w = 7.
1.3 AN EXAMPLE OF GAUSSIAN ELIMINATION

The way to understand elimination is by example. We begin in three dimensions:

    Original system    2u + v + w = 5
                       4u − 6v = −2     (1)
                       −2u + 7v + 2w = 9.

The problem is to find the unknown values of u, v, and w, and we shall apply Gaussian elimination. (Gauss is recognized as the greatest of all mathematicians, but certainly not because of this invention, which probably took him ten minutes. Ironically, it is the most frequently used of all the ideas that bear his name.) The method starts by subtracting multiples of the first equation from the other equations. The goal is to eliminate u from the last two equations. This requires that we

(a) subtract 2 times the first equation from the second;
(b) subtract −1 times the first equation from the third.

    Equivalent system    2u + v + w = 5
                         −8v − 2w = −12     (2)
                         8v + 3w = 14.

The coefficient 2 is the first pivot. Elimination is constantly dividing the pivot into the numbers underneath it, to find out the right multipliers.

The pivot for the second stage of elimination is −8. We now ignore the first equation. A multiple of the second equation will be subtracted from the remaining equations (in this case there is only the third one) so as to eliminate v. We add the second equation to the third or, in other words, we

(c) subtract −1 times the second equation from the third.

    Triangular system    2u + v + w = 5
                         −8v − 2w = −12     (3)
                         1w = 2.

The elimination process is now complete, at least in the "forward" direction. This system is solved backward, bottom to top. The last equation gives w = 2. Substituting into the second equation, we find v = 1. Then the first equation gives u = 1. This process is called back-substitution.

To repeat: Forward elimination produced the pivots 2, −8, 1. It subtracted multiples of each row from the rows beneath, to reach the "triangular" system (3), which is solved in reverse order: Substitute each newly computed value into the equations that are waiting.
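Steps (a), (b), (c) and the back-substitution fit in a short MATLAB loop (a sketch of the algorithm just described, with my variable names):

    A = [2 1 1; 4 -6 0; -2 7 2];  b = [5; -2; 9];
    for k = 1:2                           % pivots in columns 1 and 2
        for i = k+1:3
            m = A(i,k) / A(k,k);          % the multipliers 2, -1, -1
            A(i,:) = A(i,:) - m*A(k,:);
            b(i) = b(i) - m*b(k);
        end
    end
    x = zeros(3,1);                       % back-substitution, bottom to top
    for i = 3:-1:1
        x(i) = (b(i) - A(i,i+1:3)*x(i+1:3)) / A(i,i);
    end
    disp(x')                              % u = 1, v = 1, w = 2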
Remark    One good way to write down the forward elimination steps is to include the right-hand side as an extra column. There is no need to copy u and v and w and = at every step, so we are left with the bare minimum:

    [ 2   1   1    5]      [2   1   1    5]      [2   1   1    5]
    [ 4  −6   0   −2]  →   [0  −8  −2  −12]  →   [0  −8  −2  −12].
    [−2   7   2    9]      [0   8   3   14]      [0   0   1    2]

At the end is the triangular system, ready for back-substitution. You may prefer this arrangement, which guarantees that operations on the left-hand side of the equations are also done on the right-hand side—because both sides are there together.

In a larger problem, forward elimination takes most of the effort. We use multiples of the first equation to produce zeros below the first pivot. Then the second column is cleared out below the second pivot. The forward step is finished when the system is triangular; equation n contains only the last unknown multiplied by the last pivot. Back-substitution yields the complete solution in the opposite order—beginning with the last unknown, then solving for the next to last, and eventually for the first.

By definition, pivots cannot be zero. We need to divide by them.

The Breakdown of Elimination

Under what circumstances could the process break down?
Something must go wrong in the singular case, and something might go wrong in the nonsingular case. This may seem a little premature—after all, we have barely got the algorithm working. But the possibility of breakdown sheds light on the method itself.

The answer is: With a full set of n pivots, there is only one solution. The system is nonsingular, and it is solved by forward elimination and back-substitution. But if a zero appears in a pivot position, elimination has to stop—either temporarily or permanently. The system might or might not be singular.

If the first coefficient is zero, in the upper left corner, the elimination of u from the other equations will be impossible. The same is true at every intermediate stage. Notice that a zero can appear in a pivot position, even if the original coefficient in that place was not zero. Roughly speaking, we do not know whether a zero will appear until we try, by actually going through the elimination process.

In many cases this problem can be cured, and elimination can proceed. Such a system still counts as nonsingular; it is only the algorithm that needs repair. In other cases a breakdown is unavoidable. Those incurable systems are singular, they have no solution or else infinitely many, and a full set of pivots cannot be found.

    Nonsingular (cured by exchanging equations 2 and 3)

    u + v + w = __                 u + v + w = __                 u + v + w = __
    2u + 2v + 5w = __    →              3w = __     →  exchange:    2v + 4w = __
    4u + 6v + 8w = __              2v + 4w = __                         3w = __

The system is now triangular, and back-substitution will solve it.

    Singular (incurable)

    u + v + w = __                 u + v + w = __
    2u + 2v + 5w = __    →              3w = __
    4u + 4v + 8w = __                   4w = __

There is no exchange of equations that can avoid zero in the second pivot position. The equations themselves may be solvable or unsolvable. If the last two equations are 3w = 6 and 4w = 7, there is no solution. If those two equations happen to be consistent—as in 3w = 6 and 4w = 8—then this singular case has an infinity of solutions. We know that w = 2, but the first equation cannot decide both u and v.

Section 1.5 will discuss row exchanges when the system is not singular. Then the exchanges produce a full set of pivots. Chapter 2 admits the singular case, and limps forward with elimination. The 3w can still eliminate the 4w, and we will call 3 the second pivot. (There won't be a third pivot.) For the present we trust all n pivot entries to be nonzero, without changing the order of the equations. That is the best case, with which we continue.
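The nonsingular example above can be acted out in MATLAB (a sketch of mine; the blank right sides are omitted and only the left sides are carried):

    A = [1 1 1; 2 2 5; 4 6 8];
    A(2,:) = A(2,:) - 2*A(1,:);     % row 2 becomes [0 0 3]: zero in the pivot position
    A(3,:) = A(3,:) - 4*A(1,:);     % row 3 becomes [0 2 4]
    A([2 3],:) = A([3 2],:);        % exchange equations 2 and 3
    disp(A)                         % triangular, with pivots 1, 2, 3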
The Cost of Elimination
Our other question is very practical. How many separate arithmetical operations does elimination require, for n equations in n unknowns? If n is large, a computer is going to take our place in carrying out the elimination. Since all the steps are known, we should be able to predict the number of operations.

For the moment, ignore the right-hand sides of the equations, and count only the operations on the left. These operations are of two kinds. We divide by the pivot to find out what multiple (say ℓ) of the pivot equation is to be subtracted. When we do this subtraction, we continually meet a "multiply–subtract" combination; the terms in the pivot equation are multiplied by ℓ, and then subtracted from another equation.

Suppose we call each division, and each multiplication–subtraction, one operation. In column 1, it takes n operations for every zero we achieve—one to find the multiple ℓ, and the others to find the new entries along the row. There are n − 1 rows underneath the first one, so the first stage of elimination needs n(n − 1) = n² − n operations. (Another approach to n² − n is this: All n² entries need to be changed, except the n in the first row.) Later stages are faster because the equations are shorter.

When the elimination is down to k equations, only k² − k operations are needed to clear out the column below the pivot—by the same reasoning that applied to the first stage, when k equaled n. Altogether, the total number of operations is the sum of k² − k over all values of k from 1 to n:

    Left side    (1² + ··· + n²) − (1 + ··· + n) = n(n + 1)(2n + 1)/6 − n(n + 1)/2 = (n³ − n)/3.

Those standard formulas for the sums of squares and of the first n numbers are easy to check by substituting n = 1 and n = 2 and so on. If n is at all large, a good estimate for the number of elimination steps is n³/3. If the size is doubled, and few of the coefficients are zero, the cost is multiplied by 8.

Back-substitution is considerably faster. The last unknown is found in only one operation (a division by the last pivot). The second to last unknown requires two operations, and so on. Then the total for back-substitution is 1 + 2 + ··· + n.

Forward elimination also acts on the right-hand side (subtracting the same multiples as on the left to maintain correct equations). This starts with n − 1 subtractions of the first equation. Altogether the right-hand side is responsible for n² operations—much less than the n³/3 on the left:

    Right side    [(n − 1) + (n − 2) + ··· + 1] + [1 + 2 + ··· + n] = n².

Thirty years ago, almost every mathematician would have guessed that a general system of order n could not be solved with much fewer than n³/3 multiplications. (There were even theorems to demonstrate it, but they did not allow for all possible methods.) Astonishingly, that guess has been proved wrong. There now exists a method that requires only C n^(log₂ 7) multiplications! It depends on a simple fact: Two combinations of two vectors in two-dimensional space would seem to take 8 multiplications, but they can be done in 7. That lowered the exponent from log₂ 8, which is 3, to log₂ 7 ≈ 2.8. This discovery produced tremendous activity to find the smallest possible power of n. The exponent finally fell (at IBM) below 2.376. Fortunately for elimination, the constant C is so large and the coding is so awkward that the new method is largely (or entirely) of theoretical interest. The newest problem is the cost with many processors in parallel.
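The count (n³ − n)/3 is easy to confirm numerically (my loop, summing the cost of each stage):

    n = 100;  ops = 0;
    for k = 1:n
        ops = ops + k^2 - k;                 % the stage with k equations costs k^2 - k
    end
    fprintf('%d vs %d\n', ops, (n^3 - n)/3)  % both print 333300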
Problem Set 1.3

Problems 1–9 are about elimination on 2 by 2 systems.

1. What multiple ℓ of equation 1 should be subtracted from equation 2?

    2x + 3y = 1
    10x + 9y = 11.

   After this elimination step, write down the upper triangular system and circle the two pivots. The numbers 1 and 11 have no influence on those pivots.

2. Solve the triangular system of Problem 1 by back-substitution, y before x. Verify that x times (2, 10) plus y times (3, 9) equals (1, 11). If the right-hand side changes to (4, 44), what is the new solution?

3. What multiple of equation 1 should be subtracted from equation 2?

    2x − 4y = 6
    −x + 5y = 0.

   After this elimination step, solve the triangular system. If the right-hand side changes to (−6, 0), what is the new solution?

4. What multiple ℓ of equation 1 should be subtracted from equation 2?

    ax + by = f
    cx + dy = g.

   The first pivot is a (assumed nonzero). Elimination produces what formula for the second pivot? What is y? The second pivot is missing when ad = bc.

5. Choose a right-hand side which gives no solution and another right-hand side which gives infinitely many solutions. What are two of those solutions?

    3x + 2y = 10
    6x + 4y = __.

6. Choose a coefficient b that makes this system singular. Then choose a right-hand side g that makes it solvable. Find two solutions in that singular case.

    2x + by = 16
    4x + 8y = g.

7. For which numbers a does elimination break down (a) permanently, and (b) temporarily?

    ax + 3y = −3
    4x + 6y = 6.

   Solve for x and y after fixing the second breakdown by a row exchange.

8. For which three numbers k does elimination break down? Which is fixed by a row exchange? In each case, is the number of solutions 0 or 1 or ∞?

    kx + 3y = 6
    3x + ky = −6.

9. What test on b₁ and b₂ decides whether these two equations allow a solution? How many solutions will they have? Draw the column picture.

    3x − 2y = b₁
    6x − 4y = b₂.

Problems 10–19 study elimination on 3 by 3 systems (and possible failure).

10. Reduce this system to upper triangular form by two row operations:

    2x + 3y + z = 8
    4x + 7y + 5z = 20
    − 2y + 2z = 0.

    Circle the pivots. Solve by back-substitution for z, y, x.

11. Apply elimination (circle the pivots) and back-substitution to solve

    2x − 3y = 3
    4x − 5y + z = 7
    2x − y − 3z = 5.

    List the three row operations: Subtract ___ times row ___ from row ___.

12. Which number d forces a row exchange, and what is the triangular system (not singular) for that d? Which d makes this system singular (no third pivot)?

    2x + 5y + z = 0
    4x + dy + z = 2
    y − z = 3.

13. Which number b leads later to a row exchange? Which b leads to a missing pivot? In that singular case find a nonzero solution x, y, z.

    x + by = 0
    x − 2y − z = 0
    y + z = 0.

14. (a) Construct a 3 by 3 system that needs two row exchanges to reach a triangular form and a solution.
    (b) Construct a 3 by 3 system that needs a row exchange to keep going, but breaks down later.

15. If rows 1 and 2 are the same, how far can you get with elimination (allowing row exchange)? If columns 1 and 2 are the same, which pivot is missing?

    2x − y + z = 0          2x + 2y + z = 0
    2x − y + z = 0          4x + 4y + z = 0
    4x + y + z = 2          6x + 6y + z = 2.

16. Construct a 3 by 3 example that has 9 different coefficients on the left-hand side, but rows 2 and 3 become zero in elimination. How many solutions to your system with b = (1, 10, 100) and how many with b = (0, 0, 0)?

17. Which number q makes this system singular and which right-hand side t gives it infinitely many solutions? Find the solution that has z = 1.

    x + 4y − 2z = 1
    x + 7y − 6z = 6
    3y + qz = t.

18. (Recommended) It is impossible for a system of linear equations to have exactly two solutions. Explain why.
    (a) If (x, y, z) and (X, Y, Z) are two solutions, what is another one?
    (b) If 25 planes meet at two points, where else do they meet?

19. Three planes can fail to have an intersection point, when no two planes are parallel. The system is singular if row 3 of A is a ___ of the first two rows. Find a third equation that can't be solved if x + y + z = 0 and x − 2y − z = 1.

Problems 20–22 move up to 4 by 4 and n by n.

20. Find the pivots and the solution for these four equations:

    2x + y = 0
    x + 2y + z = 0
    y + 2z + t = 0
    z + 2t = 5.

21. If you extend Problem 20 following the 1, 2, 1 pattern or the −1, 2, −1 pattern, what is the fifth pivot? What is the nth pivot?

22. Apply elimination and back-substitution to solve

    2u + 3v = 0
    4u + 5v + w = 3
    2u − v − 3w = 5.

    What are the pivots? List the three operations in which a multiple of one row is subtracted from another.

23. For the system

    u + v + w = 2
    u + 3v + 3w = 0
    u + 3v + 5w = 2,

    what is the triangular system after forward elimination, and what is the solution?

24. Solve the system and find the pivots when

    2u − v = 0
    −u + 2v − w = 0
    − v + 2w − z = 0
    − w + 2z = 5.

    You may carry the right-hand side as a fifth column (and omit writing u, v, w, z until the solution at the end).

25. Apply elimination to the system

    u + v + w = −2
    3u + 3v − w = 6
    u − v + w = −1.

    When a zero arises in the pivot position, exchange that equation for the one below it and proceed. What coefficient of v in the third equation, in place of the present −1, would make it impossible to proceed—and force elimination to break down?

26. Solve by elimination the system of two equations

    x − y = 0
    3x + 6y = 18.

    Draw a graph representing each equation as a straight line in the x-y plane; the lines intersect at the solution. Also, add one more line—the graph of the new second equation which arises after elimination.

27. Find three values of a for which elimination breaks down, temporarily or permanently, in

    au + v = 1
    4u + av = 2.

    Breakdown at the first step can be fixed by exchanging rows—but not breakdown at the last step.

28. True or false:
    (a) If the third equation starts with a zero coefficient (it begins with 0u), then no multiple of equation 1 will be subtracted from equation 3.
    (b) If the third equation has zero as its second coefficient (it contains 0v), then no multiple of equation 2 will be subtracted from equation 3.
    (c) If the third equation contains 0u and 0v, then no multiple of equation 1 or equation 2 will be subtracted from equation 3.

29. (Very optional) Normally the multiplication of two complex numbers

    (a + ib)(c + id) = (ac − bd) + i(bc + ad)

    involves the four separate multiplications ac, bd, bc, ad. Ignoring i, can you compute ac − bd and bc + ad with only three multiplications? (You may do additions, such as forming a + b before multiplying, without any penalty.)

30. Use elimination to solve

    u + v + w = 6                u + v + w = 7
    u + 2v + 2w = 11    and      u + 2v + 2w = 10
    2u + 3v − 4w = 3             2u + 3v − 4w = 3.

31. For which three numbers a will elimination fail to give three pivots?

    ax + 2y + 3z = b₁
    ax + ay + 4z = b₂
    ax + ay + az = b₃.

32. Find experimentally the average size (absolute value) of the first and second and third pivots for MATLAB's lu(rand(3, 3)). The average of the first pivot from abs(A(1, 1)) should be 0.5.
1.4 MATRIX NOTATION AND MATRIX MULTIPLICATION

With our 3 by 3 example, we are able to write out all the equations in full. We can list the elimination steps, which subtract a multiple of one equation from another and reach a triangular matrix. For a large system, this way of keeping track of elimination would be hopeless; a much more concise record is needed. We now introduce matrix notation to describe the original system, and matrix multiplication to describe the operations that make it simpler.

Notice that three different types of quantities appear in our example:

    Nine coefficients           2u + v + w = 5
    Three unknowns              4u − 6v = −2     (1)
    Three right-hand sides      −2u + 7v + 2w = 9

On the right-hand side are the three numbers forming the column vector b. On the left-hand side are the unknowns u, v, w. Also on the left-hand side are nine coefficients (one of which happens to be zero). It is natural to represent the three unknowns by a vector:

    The unknown vector is  x = [u; v; w].    The solution is  x = [1; 1; 2].

The nine coefficients fall into three rows and three columns, producing a 3 by 3 matrix:

    Coefficient matrix    A = [2 1 1; 4 −6 0; −2 7 2].

A is a square matrix, because the number of equations equals the number of unknowns. If there are n equations in n unknowns, we have a square n by n matrix. More generally, we might have m equations and n unknowns. Then A is rectangular, with m rows and n columns. It will be an "m by n matrix."

Matrices are added to each other, or multiplied by numerical constants, exactly as vectors are—one entry at a time. In fact we may regard vectors as special cases of matrices; they are matrices with only one column. As with vectors, two matrices can be added only if they have the same shape; the addition is entry by entry. Multiplication by a number multiplies every entry:

    Multiplication    2 [2 1; 3 0; 0 4] = [4 2; 6 0; 0 8].

We want to rewrite the three equations with three unknowns u, v, w in the simplified matrix form Ax = b. Written out in full, matrix times vector equals vector:

    Matrix form Ax = b    [2 1 1; 4 −6 0; −2 7 2] [u; v; w] = [5; −2; 9].     (2)

The right-hand side b is the column vector of "inhomogeneous terms." The left-hand side is A times x. This multiplication will be defined exactly so as to reproduce the original system. The first component of Ax comes from "multiplying" the first row of A into the column vector x:

    Row times column    [2 1 1] [u; v; w] = [2u + v + w] = [5].     (3)
w
product Ax is 4u b is equivalent to
the second
6v + Ow, from
—
the three
simultaneous
row
equations
of in
equation (1). it
column
is fundamental
single number. This words, the product of a vector) 1s a 1 by 1 matrix:
produces
In other
column
times
Row
all matrix
to
number
a
1
by
is called
matrix
n
From
multiplications. the inner product
(a
vector) and
row
two
of the two an
vectors.
1 matrix
by
n
vectors
(a
1
Inner
[2
product
1]
1
|1]
= [5].
=[2-1+1-141-2]
ins meme
This
time.
that the
confirms
There
are
Each
row
products
two
when
proposed solution ways to multiply a
of A combines A has three
by
rows
=
matrix
with
x
1
6]
3
0
|
}1
1
4] [0
to
give
(1, 1, 2) does satisfy the first equation. A and a
a
vector
component
x.
of Ax.
way is a row at a There are three inner
One
rows:
1 Ax
x
[2
7
r1-2+1-5+6-0.
5|=13-24+0-54+3-0]
}1-2+1-544.-0.
=
|
|6l.
7
(4)
Matrix
1.4
That
more
found
all
is
Ax
a
Ax
by
is twice
picture”
of linear
solution
has
column
equations. to do the
The column
rule
If
the
course
--
2. It
b has
(5)
7|
|
corresponds
row
picture
and
we
to
with
agrees
the
“column
7, 6, 7, then the
components
be used
and over,
over
that
(and
we
A:
it for
repeat emphasis: as. ine . equation olecolumns:
are the Thecoefficients
(A? times A),
the powers
é
cos
A(@, + 62) from A(@,)A(@.) 6). What is A(@) times A(—@)?
that
Verify sin(6;
|
=
identities
the
for
B*, B?, C?, C?.
and
cos(O,;
What
are
02) and
+
A*, B*, and
C*9
Nl]bole and Nl Nr
| 4 | TOTTI
|
22-31
Problems 22.
Write
1
0
B= _|i
the 3
down
elimination
about
are
3 matrices
by
and
c=
An
1
2
72
| in
=
2
2
matrices.
that
produce
these
elimination steps:
| from row 5 times row 2. subtracts (a)-E, 2 —7 times row from row 3. (b) E32 subtracts | 2 and 3. and 2, then rows (c) P exchanges rows
23.
E3)E>,b first, row 24.
22, applying E2, and then £3) to the column Applying E32 before £2; gives Ey; F32b
In Problem =
=
__.
feels
___
from
effect
no
row
__.
(1,0,0) gives When E32 comes
=
__.
£2,, F3,, E32 put A into triangular form
matrices
three
Which
b
U?
r
A=
1
1
01
4
6
1
2
01
2 Multiply
those
Suppose
a33
25.
__..
26.
If you
If every
=
E’s
What
matrix
matrix
one
a33 to
__,
of A is
(1,1, 1). Do a3 by 3 27.
get
7 and the third
change
column
to
row
__
M that
there
of
a
7 times torow
does
is 5. If you is zero in the
multiple example. How
£3; subtracts
E37 F3,E2,A
pivot
|
7 times
and
__.
row
elimination:
pivots
| from
row
U. MA
change a33 to 11, pivot position.
(1,1, 1), then
many
=
are
Ax
=
U.
the third
is
always a multiple produced by elimination?
3? To
reverse
Multiply £3, by R3;:
that step,
is
pivot
of
R3, should
28.
(a) £5, subtracts matrix
M
1 from
row
and Matrix
Matrix Notation
1.4
2 and then
row
P)3 exchanges
but the E’s
these
£)3 will add
3 to
row
31.
4
This
row
1 to
row
3 and
row
| to
row
3
are
the
same
the
at
adds
and then
adds
time
same
3 to
row
3
row
1?
to row
1?
row
—_
YY]
11
0
0
0
by
the M’s
3. What
1?
row
-
i
row
matrices:
OT 0 0
1 from
different.
are
(a) What 3 by 3 matrix (b) What matrix adds (C) What matrix adds
Multiply
3. What
=
=
30.
2 and
rows
P)3E>, does both steps at once? (b) Py3 exchanges rows 2 and 3 and then £3, subtracts row matrix M EF; Py3 does both steps at once? Explain why 29.
29
Multiplication
©
4
0
needs
elimination
which
ore
1
0
—1
0
1
4 matrix
and
ai
a"
©
1
ore
0
7
1
LY ||
|—1
E>, and E32 and E43?
matrices
r
241
~1
~0O
-106
32—44
32.
these ancient problemsin
Write
about
are
and
creating a
2
2
1 2
matrices.
multiplying by
form
2 matrix
(a) X is twice as old as Y and their ages add (b) (x, y) (2, 5) and (3, 7) lie on the line =
Ax
b and solve
=
them:
to 39.
y
=
mx
+
Find
c.
m
and
c.
cx’ goes through the points (x, y) (1, 4) and and (3, 14). Find and solve a matrix equation for the unknowns (a, b, c).
33.
The
34.
Multiply
parabola
y
these
=
+ bx +
a
matrices
=
in the orders 0
1
0
b
0
1
of B
are
the
E=lla (a) Suppose all columns because
each
and FE
EF
0
1
35.
0
O-1
0
Problems
07
[°
2-1
rt F=1|0
E’:
0 O| 1 O|.
1.
Oc Then
same.
is FE times
one
and
If E adds
37.
The
38.
If AB
39.
A is 3
| to
row
first component the third component
these
=
I and
by 5,
matrix
2 and
row
BC
F
of Ax is
of EB
all columns
adds
2
row
})
to row
J,
use
=
the associative
B is 5
BA
AB
the same,
law to prove
ABD
is 3 are
DBA
of EB
rows
are
1, does EF equal FE? +++
by 3, C is 5 by 1, and D operations are allowed, and what
all
that
+ GinX,. G1xX1 + ayjx; of Ax and the (1, 1) entry of A’. =
are
;
(b) Suppose all rows of B are [1 2 4]. Show by example not{1 2 4]. Itis true that those rows are 36.
(2,8)
by
A
=
Write
for
C.
1. All entries
the results?
A(B+
formulas
C).
are
1. Which
of
and Gaussian
Matrices
ri
40.
What
the first
multiply
find
to
of AB? 4 of AB?
3, column 1, column
row row
1 of CDE?
the
matrices) Choose
3
do you
of AB?
row
the entry in the entry in
(3 by
matrices
or
column
the third
(a) (b) (c) (d) 41.
columns
or
rows
Elimination
only
B
matrix
that for every
so
A,
|
(a) BA= 4A. (b) BA=4B. (c) BA has rows (d) All rows of 42.
True
BA
same
as
and
unchanged.
of A.
1
row
2
row
false?
or
If AB and BA If AB and BA
A is
then
necessarily
are
defined
then
are
defined
then
square. B are square. AB and BA are square. A and
BthnA=I.
IfAB=
If A is
the
are
If A? is defined
(a) (b) (c) (d) 43.
1 and 3 of A reversed
by
m
how
n,
multiplications
separate
many
involved
are
(a) A multiplies a vector x with n components? (b) A multiplies ann by p matrix B? Then AB is m by n. (c) A multiplies itself to produce A?? Here m
when
p.
=
44.
To prove
suppose
that
(AB)C that C has only
AB has columns Then
(AB)c
A(BC),
=
cy
column
one
Ab, +--+
+
c,Ab,
Linearity gives equality of those for all other
45-49
Problems 45.
Multiply
AB
two
of C. Therefore
__
column—-row
use
using ‘1
c
the
column
with
entries
Ab, and Bc has
Ab;,...,
=
use
columns
one
sums,
(AB)C
=
times
46.
47.
and
block
3
14
Draw
is
the cuts
really
a
fF,| [4c+ in A and
block
Cybn)
=
A(Bc).
The
A(Bc).
is true
same
multiplication.
—
bw
multiplication separates matrices into blocks make block multiplication possible, then itis allowed. and confirm that block multiplication succeeds.
=
+c,0,.
2
-
Block
cA Bl
=
+--+:
i
1)
C
+
+--+
(AB)c A(BO).
ap=|24]|7 3 ]=|2/[3 2
c,b;
rows:
0]
=
column
and
multiplication
Cy:
c,,...,
equals A(cyby
Db, of B. First
b;,...,
vectors
20)
and
B and AB to show
multiplication
(a) Matrix A times columns of (b) Rows of A times matrix B.
to find AB:
B.
how
X
X
Xx
|x
x
|x|
px
x
each
If their
(submatrices). Replace these
| x}
of the four
x’s
by
rx
x
tx
x
[x
x
shapes numbers
x
|
multiplication
xh,
x rules
48.
(c) Rows of A times columns (d) Columns of A times rows
of B.
Block
elimination
multiplication
|
5b
la
I
—CA"'
A~'A
When
to find
0|/A
Bl
|/A B
IT}/C
D}~
{0
—B
|Ax
|x|
2b
Sst
]
Ax; If the solutions
x1,
x2,
solutions
three
and
Ax»
and
Axz3
{Oj}.
=
SL
0 the columns
x3 are
Ax
|
0
{1
=
|
Question 51
in
(0,0, 1), solve
=
b:
sides
0
|0
0 If the
By. Use blocks
imaginary part
2
typ
Alright-hand ==
b when
=
b
of are
a
(1, 1,1) and
=
x,
is AX?
X, what
matrix
=
x.
(0,1, 1) and
(3, 5, 8). Challenge problem: What
=
S:
—By| real part
51.
x3
complement”
—
2?
33.
the “Schur
=
A
52.
I, multiply the first block
=
—1, the product (A + iB)(x + iy) is Ax +iBx +iAy separate the real part from the imaginary part that multiplies 7:
With to
b
tile b= [0|
Elimination for a2 by 2 block matrix: row by CA™! and subtract from the
i*
produces
a
secondrow,
50.
1
column
on
OO;
1
Mla 49.
31
Multiplication
of B.
that
says
and Matrix
Matrix Notation
1.4
is A?
all matrices
Find
|
b
a
; 1
A=
34.
A
1;
fl
]1
1
F1 F1 =
A.
(1, 7)
to
(n, 1).
A (how many rows?) multiplying the 8asamatrix +3y +z+5t vector (x, y, Z, t) to produce b. The solutions fill a plane in four-dimensional The plane is three-dimensional with no 4D volume.
2
=
by
2 matrix
What
matrix
by P;
and then
Write
the inner
has
one
row.
The
.
58.
from
2x
Write
56. What
37.
satisfy
If you multiply a northwest A and a southeast matrix matrix B, what type of matrices AB and BA? “Northwest” are mean zeros below and above the and “southeast”
antidiagonal going 55,
;
that
In MATLAB vectors
x
P; projects the
Py projects
multiply by Py,
The
solutions
columns
of A
notation,
write
and b. What
you get
to are
Ax
=
(
only
0 lie
y,
z)
as
a
ona
the commands
that define test
axis to
x
whether
Eel
If you
).
multiplication perpendicular to the
not
Ax. vector
space. A and the column
the matrix or
space.
produce (x, 0)? multiply (5, 7)
matrix
-dimensional
in
would
the
onto
produce (0, y)? ) and (
to
(1, 4,5) and (x,
command
“Ey
(x, y)
the y axis
onto
of product
vector
column
Ax
=
b?
A
ter
1
Matrices
59.
The
and Gaussian
If you
ones(4,1), what ones(4,4) times 61.
the 4
multiply
A* v?
is
add to 15. The first
is the
vector[1
1
not *
2
+
1] times
the outputs
by 3 identity
the 3
from
A
and
the
* v
and v'
*
v?
A, what happens? A
ones(4,4)
=
column B
If you multiply what is B * w?
needed.)
ones(4,1),
v
eye(4)
=
=
+
and and columns 9. All rows 1, 2,..., be 8, 3, 4. What is M times (1, 1, 1)? What
entries
could
row
[3:5]’produce
=
are
matrix
M with
diagonals row
v *
(Computer
zeros(4,1)
=
w
4 all-ones
by 3 magic matrix
a3
Invent
by
v
What
(3, 4, 5).
needed!) If you ask for
not
and
eye(3)
=
vector
and the column
(Computer
A
commands
MATLAB
matrix
60.
Elimination
M?
15 We want
look
to
starting point
was
again
at
elimination,
the model
to
system Ax 2 -6 7
/-2 there
Then
Po
Step Step Step
)
-
1. Subtract
2)
*
times
2
—1 times
the
3.
Subtract
—1 times
the
an
of matrices.
The
Of]
juj=|-2)=2
2]
5
[w})
@
©.
|
9
|
rico "|
Ey,
equation from the second; %2 *(~ ® first equation from the third, second equation from the third. -._ ,¢-j)).
the first
Subtract
was
fu
CARD
2.
The result
1)
steps, with multipliers 2,-1,-—-1:
elimination
three
were
in terms
means
b:
1
4
Ax=|
it
what
see
=
equivalent system
Ux
=
c, with
a new
coefficient
matrix
U:
0%
Lt?
r
2
—-8
Ux=!10
Upper triangular
11]
matrix The
new
A into
took
Start
U is upper right side U. Forward
with
Ux
=
c
The
with
c
was
derived
elimination
entries from
the
amounted
below
lv
/—-12!
=
{|
(2)
=c.
2
the
diagonal are zero. original vector b by the same to three row operations:
steps that
A and b; 3 in that
Apply steps 1, 2, End
triangular—all
5]
| 1} [wl
—2!
0
|O This
ful
U and
is solved matrices
by
order;
c.
back-substitution.
E for
Here
we
concentrate
on
step 1, F for step 2, and G for step 3
connecting were
A to U.
introduced
in the
_
They are called elementary matrices, and it is easy to see how they a multiple @ of equation j from work. To subtract equation i, put the number —£ into the (i, J)position. Otherwise keep the identity matrix, with 1s on the diagonal and Os the row operation. Then matrix multiplication executes elsewhere.
previous
section.
1.5
The then to
F’, then
U
of all three
result
steps is GFEA
G. We could
takes
also
(and
b
to
Triangular
multiply c).
U. Note
=
Exchanges
33
is the first to
multiply A,
and Row
Factors
that
F
It is lower
f
)
4
AtoU
From
GFE
1
is
from
U To
second.
and the
but the most
1] fd
1 1
=|/—2
1
—2
(3)
|.
1
f-1
1}
1}
A
/
0.)
a.
1
Fd This
Jf
fa
=
that takes
together to find the single matrix triangular (zeros are omitted):
GFE
4
exactly the opposite: How would we get back to A? How can we undo the steps of Gaussian elimination? undo step 1 is not hard. Instead of subtracting, we add twice the first row to the (Not twice the second row to the first!) The result of doing both the subtraction addition is to bringbackthe identity matrix:
good,
ri
of
Inverse
is
important question
subtraction
0
O|f
1
0
o
1f]{|
2
is addition
0
—2 0
/
0
10 1
O|=1/0
0
1f
1
fo
One
operation cancels the other. In matrix terms, one matrix is the If the elementary matrix E has the number —£ in the (i, 7) position, has +¢ in that position. Thus E~!E I, which is equation (4). E~! We can invert each step of elimination, by using and F—'
O
O|
I
Oj}.
0
1] of the other.
inverse then
(4)
E~'
its inverse
=
not
bad to
these
see
the whole
step 3
was
in the
be inverted
before
now,
and
at once,
process
Since
inverses
last
in
see
what
takes
matrix A to
going from
direction.
reverse
section.
the next
come
Inverses
step is
Teverse
.
>
in the
can
substitute
Now
lower three
we
inverse
recognize And
matrices
it has
in the ws
V1 =(Q2 —
The
for U, to
the matrix
a
FG" BE"
E-FOGU=A
UbacktoA GF'EA
triangular.
G~'.I
think
it’s
is to undo
problem
A.
G must
opposite
be the
order!
The
first
to
second
and the last is E~’:
From You
to
U, its matrix |
F~'
final
The
U back
and
see
how
ZL that
takes
special property right order: ~
is that the entries
out
the
that
be
can
—
seen
it is
L, because
only by multiplying
the
po
|=/@
Gaya
(5)
original steps.
to A. It is called
tl
tf,
LU=A.
U back
‘aad
-
tty
knock
the inverses
a
‘el
is
i
|=L.
©
[erent
diagonal are the multipliers € 2, —1, and —1. When matrices are multiplied, there is usually no direct way to read off the answer. Here the matrices come in just the right order so that their product can be down immediately. If the computer stores written number that each multiplier £;;the from row 7, and produces a zero in the multiplies the pivot row 7 when it is subtracted i, j position—then these multipliers give a complete record of elimination. The numbers ¢;; fit right into the matrix L that takes U back to A. special thing
below
the
=
s
Jewith no
ro Li lower oafrows. exchanges Theamultipliers (taken;fromelimination) triangular, mnthediagonal. the. U is A= ce Tr Triangular factorization r
W:
|
iS
€;;
are
the upper. triangular 1matrix,which afterforward appears e ntries of U_are the pivots. = elimination. The diagonal
“below t
diagonal.
i
2
1
A=] 4 needs
(which
Elimination
and Gaussian
Matrices
r1
2
v= |
goes to
‘| 1=[02
A to U there
is the
Lower
A
elimination
(ii)
F subtracts
-
0
I
-
2/=1]1 3. Fl
1
0
0
I
1
1]
O
Ly
of
|
From
rows.
same
triangular
steps on £3; times is the
3. The result
row
back
=A.
additions
of
1)
as
A)
U to
A=
case
this
A
row
| from
are
easy: row
matrix
identity
LU.
=
there
are
rows.
/
|
The
LU
LU.
=
0
L is the
and
identity
into
1
subtractions
are
to
1)
1
A=l|12 p12
(when U
Then
i
be factored
cannot
pivots and multipliersequal r1
From
0
=| }
L
exchange)
a row
0
(with all
1
;
with
(1) E
“
]
0
0
£51 £31
I
0}.
£30
1 |
subtracts
£2; times row | from row 2, 3, and (iii) G subtracts £3 times row 2 from U I. The inverses of E', F, and G will bring =
A:
E™ applied
F~! applied
to
to
G~! applied
to
J
A.
produces
07 i
times
times
1 £31
equals |
The
theses
A
=
in
LU: The
A=
thought the to
algebra to
be
too
particular
U, brings
U
not
position. because
necessary
This
always happens!
of the associative
Note
£39
|.
1 |
that paren-
law.
by ncase
n
The factorization in linear
to fall into
right for the ¢’s E~!F-!G™! were
is
order
0
LU is
so
important that they concentrated
we
must
say more. the abstract
It used
to be
courses
when
hard—but
you have got it. If the last Example 4 allows any see can how the rule works in general. The matrix
I,
=
we
on
side.
Or
missing maybe it was
U instead
of
L, applied
back A: r
1
A=LU |
|
60
IL fy, £31 £39
0]
[rowlofU]
O|F
|row2o0fU
1
row
3 of
U|
—wer —
original
A.
D
The
proof
and
A
is to
and Row
Triangular Factors
1.5
35
Exchanges
the steps of elimination. On the right-hand side they take A to U. On the left-hand side they reduce L to J, as in Example 4. (The first step subtracts £; times (1, 0, 0) from the second row, which removes £2;.) Both sides of (7) end up equal to the same matrix U, and the steps to get there are all reversible. Therefore (7) is correct LU.
=
A
apply
LU is
=
and
crucial,
so
of this
8 at the end
Problem
that
beautiful,
so
section
suggests a second approach. We are writing down 3 by 3 matrices, but you can see how the arguments apply to larger matrices. Here we give one more example, and then put A
LU
=
(A
to use.
with
LU,
=
in the empty
zeros
|
r
J
|
|
{1
| 2
-]
—{
—-1 1
1
—1
7
—{|
-1|/
2 -1
how
That
shows
nals.
This
example
1.7).
The
difference
matrix
A with
from
comes
U with
diagonals has factors L and important problem in differential
in A is
diagoequations (Sec-
L times
difference
backward
a
System
=
Two
two
a
forward
Triangular Systems
practical point about A LU. Itis to solve Ax steps; L and U are the right matrices We go from b to c by forward elimination (this We can and should back-substitution uses (that U). a
serious
the second
of Ax
=
b
L to
equation by
system is quickly solved.
That
is
=
give LUx exactly what
=
a
just
ZL) and
be thrown
go from
we
do it without
Ux
is Ax
good elimination
c
to
away! x by
A:
andthen
Lc, which
of elimination
record
a
b. In fact A could uses
Lo=b
First
than
more
=
Splitting Multiply
|
}
three an
difference
second
;
—1 1
1]
~1
U.
One Linear is
a
1
1]
—~{
2]
;
There
f
4]
—~{|
A=
tion
spaces)
=
(8)
=c.
b. Each
code
will
triangular do:
:
nd
The
separation
Solve
subroutine
solution far below
for the
into
Factor
and
Solve
itsfactors dU).eS L Jand thesolution x) [andbbfind that
means
obeys equation (8): two right-hand side b any new n° /3 steps needed to factor
a
series
of b’s
can
be
processed.
The
triangular systems in n7/2 steps each. The in only n? operations. That is can be found A
on
the left-hand
side.
This
Elimination
and Gaussian
Matrices
r1
is the
—
Xi, _
=b
Ax
A with
matrix
previous
a
FT
x2 —
splits into Lc
=]
+ C9
—
|).
10)
=]
X2
Ux=c __
4
X —
2
=
X3
9
. _
=|
givesx
3
=
yu
2].
4
X42
4
|
For
these
see
how
special “tridiagonal matrices,’ the operation count drops from n’ to 2n. You before cz). This is precisely what happens Lc bis solved forward (c; comes =
during
forward
Remark
I
pivots.
This
Then
elimination.
is
The LU form
is easy
Ux
is solved
c
=
“unsymmetric” Divide
to correct.
(x4 before
the
on
of U
out
backward
diagonal: L has 1s diagonal pivot matrix
a
x3). U has the
where D:
|
|
Jl
| dy Factor
dp
U=
D
out
w/a
ui3/d)
of
a
H23/
.
dn
:
example all pivots exceptional, and normally LU last
In the
The on
row
of LU
That
were
d;
is different
from
LDU
LDU
see
you
divided
was
splitting
by
into LDU
the
pivot
in D. Then
D
case
=
(also written
that
I. But
where L and
that
or
V has
are
treated
Z and
U
U
was
very
LDV).
LDU, of pivots.
A=
LDV, it is understood
or
(9)
1)
|
|
1. In that
=
triangular factorization can be written the diagonal and D is the diagonal matrix
Whenever each
=c.
_
c=
Sives
—c@tam=1 X1
Ux
2
a
1
_
ote
band
1
=]
—Cy
=
2x4 = 1
+
x3
Cy _
-
x N=
37
(1, 1, 1, 1).
=
]
=
X23
—
Le=b
side b
right-hand
1s
U have
1s
the
diagonal— evenly. example on
An
is
a=[ha)=[5aff a}=[5a][! all! toe
has
Remark
the
the 1s on 2
We
may
that the calculations and there
is
a
“Crout
diagonals have
must
of Z and
given
be done
algorithm”
the
U, and the
impression
in that
1 and
describing
—2 in D.
each
elimination
step,
is wrong. There is some freedom, the calculations in a slightly different way.
order.
that arranges
in
pivots
This
Triangular Factors
1.5
There
The
is
final L, D,
good exercise
a
with
and
have
now as
use
happen
a
at
face
to
a
°
Zero
into
This a;,
number
The of
in the middle
occur
0. A
Ned
far been
so
could =
expect
we
It will
calculation.
a
simple example is 0
2
Uu
bi
S ‘| 4 ia
eye
the pivot position
_ =
3u +4
Exchange this in matrix
To express
exchange.
row
It
P has
the
effect
same
unknowns
u
p=
and
v
are
on
we
need
the the
exchanging 1
{i |
and
not
reversed
in
P has the
permutation matrix a single “1” in every row and column. exchanges nothing). The product of
bi
=
matrix
permutation
P that
produces
of I:
rows
0
1//0
3
2
4
(1| E tl=[9I
a=
b, exchanging b; and b2. The
A
3 up
bo
=
2U
0
.
3.
rows
terms, from
comes
Permutation
the
section.
no the coefficient difficulty is clear: multiple of the first equation will remove The remedy is equally clear. Exchange the two equations, moving the entry the pivot. In this example the matrix would become upper triangular:
The
the
main point:
our
in the next
matrices
has
°
in
is
37
Exchanges
Matrices
problem that
pivot might be zero. the very beginning if
U. That
inverse
Exchanges and Permutation
;
to
is
proof
Row We
in the
freedom
no
and Row
new
system
is PAx
=
Pb.
The
exchange.
a row
identity (in some order). There is The most common permutation matrix is P I (it is another permutation— two permutation matrices same
rows
the
as
=
of J get reordered twice. P After I, the simplest permutations
rows
exchange
=
1 has
choice.
n
two
rows.
Other
permutations
(1) permutations of size n. Row 1 choices, and finally the last row has only one choices, then row 2 has n can We display all 3 by 3 permutations (there are 3! = (3)(2)(1) = 6 matrices):
exchange
more
There
rows.
are
n!
= (n)(n
—
1)
---
—
[=
1
Py
=
i
PxPr
=
1
Matrices
r1
There
be 24
will
When
discover
inverses
about
know
we
P~!
fact:
important pivot location
an
of order
matrices
permutation 2, namely
of order
matrices
we
Elimination
and Gaussian
n
and transposes is always the
(the
section
next
permutation
two
A~! and
defines
in the
zero
rO A=1|10
|}d If
d =.0,
the
solution
unique
d into the
pivot.
below
now
to Ax
it
bl
d=0O
=>
0
c¢
a=0QO
=>
nosecond
fl
c=0
=>
no
is incurable
problem
the next
However
(the
e
it is
above
this
and
b. If d is
=
be easy to fix, is a nonzero
two
a
oe
matrix
first
no
third
is
singular. an not zero, exchange P,3 of pivot position also contains
useless). If
is not
a
then
zero
The
nonzero
pivot pivot pivot. There
rows
is no hope for 1 and 3 will move
another
number
The
a zero.
a
a
is
exchange P3
row |
is called
A‘),
P*.
as
same
only
are
possibilities: The trouble may This is decided it may be serious. or by looking below the zero. If there entry lower down in the same column, then a row exchange is carried out. can the needed pivot, and elimination get going again: entry becomes A
raises
4. There
=
for: |
P32
‘Oo
0
1)
0
1
O
0
0
=
I One
1 P
and
=
O|
|0
0
1
0
1
0
|
|
point:
more
O
The
P,3 acts first
P33 P13
e
fi
0
a
b
O
0
will do both
O|
0
0
1
(0
1
Of}{1
exchanges
row
[0
O
1
0
1
O|
0
01
TO =
1
(0
O
O|
Elimination
in
a
Nutshell:
PA
—
point is this: If elimination can be completed with the help of row then we can imagine that those exchanges are done first (by P). The matrix factorization need row exchanges. In other words, PA allows the standard can be summarized in a few lines: elimination U. The theory of Gaussian
even
practice,
if it is not
we
also
exactly
consider zero.
the
rows
in
LU
The main
In
=P.
1 0
had
we
at once:
O 1
known, we could have multiplied A by P in the first place. With the right order PA, any nonsingular matrix is ready for elimination. If
Co
|
0
1
Po3P3A
|
permutation P73 P;3 =
and
rd =
exchange when the original pivot Choosing a larger pivot reduces the roundoff a row
is
exchanges, PA will not into
near
error.
L times
zero—
1.5
You
have
elimination
subtracts
with L. Suppose suppose it exchanges
1. Then creating £2, advance, the multiplier
will
rd
1
1]
1
1
3};->
100
A=j1
exchange
P=1!0
}O
0
0]
0
1
1
OF
2 and
rows
1 in PA
=
}O 1 and
=
1
f1
£2;
0
O|
1
O
0
1]
L=j;2
and
3
6/=U.
0
2]
is done
in
(10)
2:
=
and
PA
k],
In
exchange
2,
row
1)
11
6
f3,
1 from
row
3. If that
21/5/10 3
39
Exchanges
LU.
=
11]
1
0 8] LU—but now
recovers
Tl
£3;
to
change
5
|2 row
and Row
be careful
to
=
That
Triangular Factors
MATLAB, A({[r :) exchanges row k with row 7 below has been found). We update the L and P the same way. sign = +1:
matrices
it
LU.
=
(where
(11)
the kth
At the start,
P
=
pivot J and
A([r k], : ) A({k r], = ); L({r k],1:k~—1) = L([k r],1:k~1); P([r k], =) P([k r], = ); =
=
sign
—sign
=
“sign” of P tells whether the number of row exchanges is even (sign = +1) or odd (sign = ,—1). A row exchange reverses sign. The final value of sign is the determinant it does not depend on the order of the row exchanges. of P matrices A good elimination L and U and P. Those To summarize: code saves in A—and that originally came carry the information they carry it in a more usable b reduces form. Ax to two triangular systems. This is the practical equivalent of the The
and =
calculation
do next—to
we
Problem Set 1. When
2.
What
is
an
triangular
upper
matrix
=
A™'b.
full set of
pivots)?
from
subtract
3 of A? Use the
row
form
A=
will be the
Multiply
pivots?
the matrix
L
=
Will
©
)
Re )
|2 1
3.
nonsingular (a
2 of A will elimination
row
l
What
x
1.5
multiple £32 of
factored
A=! and the solution
matrix
the inverse
find
a row
E~!F-'G~!
l
exchange in
be
required?
equation (6) by
GFE
in
equation (3):
—
1
0
O
2
1
O
1
—]
times
1
also in the
O
O|
—2
1
O}.
1
i
1
|
|
Multiply
1
opposite order. Why
are
the
answers
what
they
are?
Matrices
r1
Elimination
and Gaussian
(4,Apply
elimination
to
|; ;
A=
and
3
A=j|1
down
elimination,
(6 E? and E® and E~
4 4
4
}1
3] Ux
triangular system
which
c
=
8) appears
if
FGH and HGF
products
[2]
3
3]
5
7]
9
8]
ful Ju]
}21.
=
5|
[w
if 0
1
E
the
1
A=|1
and
1
1
for
Ax=|0
Find
1
the upper
2
Find
rt
11
1
1
A into LU, and write
Factor after
3
1
2
U for
L and
the factors
produce
| 1
=
!
(with upper
triangular
omitted)
zeros
|
;
Ti
Bl 0
|
2
F=10
0
0
(a) Why
are
of
third
The
1 and 2
rows
Which other
(a) Under
of U
row
from
the
1 of
rows
of U subtracted
off and not
rows
—
|
0
2 row
third
(of U!):
A £3;(row
9
0
comes
3 of
row
3 of
rule rows
A
U)
—
1, of A
by
£32 (row 2 of U).
of A? Answer:
Because
by
the
as
same
£3;(row 1 of U)
=
for matrix
is the
conditions
O Ax
0
O|
1
O
-1 =
of Lc
band
Ux
with
0
0]
—1
1
Of
-1
U?
‘1 0
dz||0 Lc
0 1
-—1].
0
1
b:
=
0
fe] Jeo}
-1
=
1] [es]
|0|
=b.
HL
n*/2 multiplication—subtraction
steps
to solve
=c?
many steps does elimination matrix A? 60 coefficient
How
(row 3 of UV).
of A.
rows
dy
1
(a) Why does it take approximately each
1
+
3 of Z times
row
fd,
6bstarting
O
the
this
TTL
|
=
£39(row 2 of U)
following product nonsingular?
|
the system
with
similarly
A=|-l
(b) Solve
+
multiplicationmakes
of LU agree
what
sa
1
by
1]
U=
row
(b)
0
|
a
(b) The
10.
0
3 of
pivot row is used, equation above is the
time
The
LU)
=
0
T=1q
0
I)
(Second proof of A subtracting multiples row
P|
1
C=l021
01
}0 .
|
use
in
solving
10 systems
with
the
same
60
11.
Solve
without systems,
triangular
two
as
‘ha
0] 1 0] {0
(10 12.
a you factor A they be the same factors
Would
13.
into
could
How
Solve
by elimination, u+
—2u
4v
+ 2w
8v
+
—
Write
14.
_
and
same
list.
LDU
factorizations
PA
=
'o A=
16.
Find
4
a
by
of elimination
17.
The
4
whether no
u
19. Which
0
=
utvtwe=l.
required?
them) for
(and check
1
1]
0
1
1
A=1|/2
and
WR 3 4) that
form
A
=
LPU
three
requires
2 1] 4 2]. Fl 1 1, to reach
exchanges
row
exchanges rows only
case?
Comparing
the end
1
1]
0
2} 6
3 with
PA A
=
at the end:
1 =PU=
0 0)
0
0
1
|
and
—-w=2 numbers
a,
b,
lead
| A=jia O |
exchanges?
to row
2
O|
8
3
b
5
and
(0
Uu
Which
and whether
ut+uv
—~w=0
u
c
and
-Q
u—v
orm0 3
0
vt+t+w=
v—-w=—_d0
=2
1 1] 6l.
2] now
LPU).
following systems are singular or nonsingular, solution, one solution, or infinitely many solutions:
v
r1
©
1J, the multipliers
in Box
LU
=
the
v—-w=2 u—
v+w=0
¨o
1
stay in place (£2; is 1 and £3; is 2 when
have
triangular?
=
is L is this
Decide
lower
necessary:
and
r1 f1o1é421i 1 A=/]1 3}/72L4'A=1/0 25 8 (0
18.
{2!
lw}
x
32
=
{oO}.
times triangular
upper LU?
—2
permutation matrix (whichis U /).
less familiar
What
1]
=
=
the
35>Findthe
0
2
I. Identify by 3 permutation matrices, including P also permutation matrices.The inverses satisfy Pp'=1
are
on
1
when
rows
=
are
inverses, which are
A=
‘alien
4] [u 2] jv]
4
all six of the 3
down
their
asin
w=
matrices
permutation
product UL,
3w
v+
Which
17 {0
exchanging
ome
~
[2
LUx={1
LU to find A:
multiplying
-
10
41
Triangular Factors and Row Exchanges
1.5
make
]
+w-l.
the matrix
4
l =
A= At 6
they
singular?
ri
20-31
Problems 20.
step subtracted
£5, times
1 to
row
f2;
b to
=
times
=
row
for that
triangular system [} c to give. Ux L multiplies
L times
letters,
x=C:
5
1
1
5
12
7
O
1
2]
1 from
row
1
2. The
step is L [3] to get
reverse
{|x
the
LDU).
=
F |
triangular
a
A
(andalso
1
2. The matrix
row
LU
=
y=5 y=2
x+
=7
x+2y
this
[i 5|x
changes
y=5
That
factorizationA
the
compute
elimination Forward x+
21.
Elimination
Matrices and Gaussian
=
step adds
reverse
=
.
Multiply In
=
=
(Move
to
x
+
3
elimination
Forward
by 3)
z=5
y+
11
a triangular
Z=5 yt2z=2
X+
yt
x+2y+3z=7
x+3y+6z=
Ax = D to
changes
X+
Ux
c:
=
z=5
yt
y+2z=2
2y+5z=6
2.
z=
2 in Ux = c comes from the original x + 3y + 6z = 11 in equation z Ax = b by subtracting £3; = ___times equation 1 and £3. = __ timesthe final [1 3 6 11]in[A b] fromthe efinal 5] fl 11 equation 2. Reverse that to recover 1 O 2 and 1 cl]: 2jin[U 2] and[O [0
The
=
el
—
Row3o0f[A b] notation
In matrix 22. What
23.
the 3
are
Check
that
What
two
E3)E,A
c
=
this is
U?
(£3; Row
1+ £32Row 2+
multiplication by
by 3 triangularsystems (5, 2, 2) solves the first matrices
elimination =
=
Multiply by
Lc
A=
)
What
three
U?
=
E3.F3,E2,A L
matrices
elimination
L and
pivots d, f,
4
0
1
2
24.
4
=5
A = LU is not in a pivot position, appears i in U.) Show directly why these are both
1] [1
|
Which needed
number and
A
c =
leads LU
to
zero
is not
upper triangular form factor A into LU
where
possible! (We impossible:
0
12 in the
possible.
second Which
1]
need
nonzero
on
#— ;
des
112/=le1
Salle allo 4 26.
form
Ej E3/U:
T
LT
e
Olfa
triangular
5], 0
5
0
one?
U:
}3 zero
factor
21?
Problem
second
upper A into LU =
;
A=z=|]2
When
from
solves the
x
put E31, Ex2 En, and Ex E;; E>; to
1
25.
c
A into
Multiply by
E>)E3)| E5). Find
=
=
Le.
=
11 4
|2
Ux
[U c].
of
£3. put A into
E3;' and E3;' to l
24.
Which
3) Row
LU and b
=
b and
=
one.
and
E,,
L. So A
1
f [mn
hi.
if}
pivot position?A row exchange is c produces zero in the third pivot
1.5
Then
position?
a row
Triangular Factors and Row Exchanges
can’t
exchange
and elimination
help T1
Azji2
3 27.
What A
in
L and D for
are
this matrix
4
1).
5
1
A? What
A
is U in
A=1{|0
}0 A and
Ol
fails:
LU and what
=
is the
new
B
are
factorizations
symmetric LDU
and
the
across
4
81
3
9Q}.
0
7
|
diagonal (because
U
say how
4
L tor these
F mn
and
0
triple
matrices:
Q
12
|
their
symmetric
4
B={|4
4). Find
=
related to
is
1 A=
4/1}.
4
0 |
29.
(Recommended) Compute
U for
L and
the
symmetric matrix
A=
30.
Find
four
Find
L and
conditions
on
U for the
a,
b,
d
c,
to
A=
get
LU with
four
pivots.
matrix
nonsymmetric
rer
A=
s
Ss
Cc
Uf
d|
c
Find
31.
the four
conditions
Tridiagonal matrices adjacent diagonals.
on
Factor
these 1
O the
triangular
33.
find
safety
Solve
Lc
=
A=
LU and
b to find
r1 L=|1
}1
c.
Then
O
O01
1
0
1
1
except
on
LU and
A=
and
A
b to find
and
solve
solve
Ax
Ux
b
=
=
a
0
a+b
b
b
b+c
c.
Then
solve
2
4
usual.
as
c
to find
r1 U=1!{0
0
Ux
=
and
b=
Circle
c
What
x.
1
1]
1
1
0)
1
¢ to
and
the two
find
x:
2
[ih
when
was
see
you
A?
4 and
|
pivots.
diagonal
0
=|)
u
four
LDV:
=
A=J|ja
2] =
LU with
the main
a
and |
get
A=
O
L=|| 1
t to
r,s,
O]
systemLe 1
For
into
1 1
d,
c,
entries
zero
A=|]12
Solve
b,
a,
have
ri
32.
U
LDU?
=
2
28.
c¢
43
b=
}{5|.
6
it.
r1
34.
Matrices
Elimination
and Gaussian
B have
If A and
in their
A=
x
xX
xX
X
x
x
x
O and
QO
x7
x
x
0
QO
x
x
left 2
the upper
36.
37.
38.
If A has
(Important) Starting produce
from
column
are
by 3
a
chol(pascal(5)) exchanges in [L, U]
to
find
the
is
factors
LU
A=
A
tic; A b; Problems
40-48
40.
are
There
to
41.
are
the
ten.
toc
O
x
x
0
XX
xX, what
the
pivots 3)? Explain why. are
B
(which
pascal(5).
three
ec
11.
1
1
42.
position
some
43.
(Try
P>
order. this
the
are
=
solves
side b when
n
rand(800,9). Compare for 9
=
800. Create
the times
from
right sides).
matrices.
permutation
permutations
of
of the 1 in each
row.
Give
permutation matrices, so is P; P,. This examples with P; P, ~ P,P; and P3P,
question.)
make
tations
Row
many other =
102
odd.
are
If P,; and
and
|
many
103
to
pivots?
=
and
for
pattern!
right-hand
new
rand(800,1) and A B;
x
exchanges will permute (5, 4, 3, 2, 1) back to (1, 2, 3, 4, 5)? How exchanges to change (6, 5, 4, 3, 2, 1) to (1, 2, 3, 4, 5, 6)? One is even and the is odd. For (n,..., 100 and 101 are even, n 7), show that n 1) to (1,...,
How
zero
(1, 2, 3, 4), with an even number of exchanges. (1, 2, 3, 4) with no exchanges and (4, 3, 2, 1) with two exchanges. of writing each 4 by 4 matrix, use the numbers Instead 4, 3, 2, 1
12 “even”
the other
give
about
are
Two of them
List
b=
and tic;
toc
for each
difference
rand(800) and
=
still
0
}O the time
x
impossible—with
Az=13
Estimate
O
of MATLAB’s
Pascal’s
12
39.
are
pivots 2, 7, 6, add a fourth row and column pivots for M, and why? What fourth row fourth pivot?
triangular lu(pascal(5)) spoil c
x
3 and column
the
numbers
For which
x
Be
row
the first three
=
O
(without
A with
as
x
exchanges,
3 matrix
produce 9
x
row
B
Use
(Review)
zeros
x
no
2 submatrix
are
to
sure
which
x,
|
with
pivots 2, 7,6
by
What
M.
by
|
|
35.
marked
positions
U?
L and
factors
in the
nonzeros
P;
of
Which
A P lower
permutation makes PA triangular? Multiplying
upper A on
still has =
the
of J in
rows
P,P3.
triangular? Which permuright by P, exchanges
the
A.
) A=]1
)
44,
by 3 permutationmatrix permutation P with P4 + I. Find
a
3
with
P?
=
I
(but
not
P
=
J). Find
a
4
by
4
Inverses
1.6
45.
46.
of
P that
The matrix
of rotation
If P
is
will
mean
If P has
1s
of
J
an
=
find
a
inverse,
no
matrix
n
y,
=
from
antidiagonal
by
n
that nonzero yeetor and has determinant x
(1, 2)
is another
n
by
matrix
Inverse
A~!Ax
inverses. to
is
An inverse
produce
a nonzero
Our
goals
Ax
vector
exists—andthen
Ax
simple: [f A
then
is
identity and
zero
Aq!
multiply by
you
started:
you
No matrix
x.
is
property
the
of A is written
'b=x.
is
x
A~! would
Then
nonzero.
multiply
that
compute
it and
can
have
Not all matrices
matrix.
Ax and
vector
zero
x.
define
to
are
Q to
=
A is
when
impossible
from
get back
A~! times
P)x = 0. (This
—
PAP.
The inverse
matrix.
n
b=Ax
If
The matrix
x.
=
(n, 1), describe
to
inverse’). The fundamental then multiply by A~', you are back where
A and
J
so
zero.)
“A
(and pronounced
have
v
P has
the
on
challenge
a
=
from
that
a5
1?Find
some
z) to give (z, x, y) is also a rotation matrix. Find axis a (1, 1, 1) doesn’t move, it equals Pa. What is the (2, 3, —5) to Pv (—5, 2, 3)?
multiplies (x,
—
is
permutation, why
permutation matrix,
any
The inverse
Thus
a
P°. The rotation
angle
48.
45
Transposes
P* eventually equal to P so that the smallest power to equal J is P®. (This is question. Combine a 2 by 2 block with a 3 by 3 block.)
If you take powers by 5 permutation
P and
47.
and
inverse
the
to understand
which
and
matrix
matrices
don’t
it, when
use
Aq!
inverses.
have
|
aoe
Te “fhene mostone such B, and Note
I
The inverse
exists
allowed). Elimination Note AC
2
The
solves
matrix
=I.Then
B
=
that
shows
tiplying
A from
if and only if elimination produces n pivots (row exchanges Ax b without explicitly finding A7!.
A cannot
=
Suppose “proof by parentheses”:
this
Note
4
(Important) Suppose inverse.
If A is invertible,
one
there
and
is
only
a
the
left) and
the
same
nonzero
To repeat: No matrix can then Ax = 0 can only have
Ax
vector
x
bring the
=
and
also
(2)
right-inverse
a
bis
such
0 back
zero
J
C
(mul-
matrix.
to
solution
=
B=C.
whichis
BI=IJIC
=
invertible, the
BA
inverses.
left-inverse B (multiplying from the right to give AC J) must be
If A is
an
to
different
two
a
3
have
have
(BA)C_ gives
Note
cannot
=
C, according
B(AC) This
denoted An iby who
to
solution
x
A~'D:
=
that
Ax
=
x. x
=
0.
0. Then
A
Matrices
r1
Note
and Gaussian
5
A
Elimination
is invertible
2 matrix
2by
if
if ad
and only 1
2
ad
number
This is not
ad
Elimination
6
has
matrix
A diagonal
| mA
be
—
of A. A matrix
inverse
an
provided
zero:
¢
~
d
bc is the determinant
—
I
¢|
inverse
(Chapter 4). In MATLAB, the invertibility produces those pivots before the determinant
zero
Note
2
by
bc is not
—
if its determinant
is invertible
is to
test
(3)
a
—c
find
n
pivots.
nonzero
appears.
diagonal
no
entries
zero:
are
]
id If
1 /dy
A=
=
matrices
two
1/d,
last
involved,
are
much
not
AA!
and
dyod
bees,
When
A!
then
_
be done
can
=I.
=
about
the inverse
of A + B.
Instead, it is the inverse of their product might not be invertible. AB which in matrix is the key formula computations. Ordinary numbers are the same: (a + b)' is hard to simplify, while 1/ab splits into 1/a times 1/b. But for matrices be correct—if the order of multiplication must ABx y then Bx Aq!y andx The
might
sum
or
=
B-'A~'y.
Proof
inverses
The
show
To
is the
(AB)(B7!A7!) (B-'A-')(AB) A similar
to
this
saw
change
back
come
L
direction,
first.
comes
Please
the
determines columns
the
equation AA™! of the
of J. In
identity: Ax; a3 by 3 example, =
e;.
=
@;
2
U. In the
was
Since
G
came
backward
last, G~
Method a
column
of A~! is
Similarly Ax,
A times
GFEA
inverted
were
U~!GFE.
be
=
e.
time, that equation multiplied by A, to yield the at
and
a
Ax3
=
e3; the e’s
are
the
A7! is /: Jf
rf
Ax;
the
=I.
FE, F, G
of the inverses.
If it is taken
J.
=
BB
=
matrices
direction,
of A~'. The first column
column
use
CBA},
=
elimination
product
A~! would
that
check
the
was
and
matrices:
more
(ABC)!
when
of order
BT'IB
=
of A—': The Gauss—Jordan
each
first column
or
of ABC
E~!F-'G~—!
The Calculation Consider
B7'A7'AB
=
U to A. In the forward
from =
Notice
of AB, we multiply them how B sits next to B7!:
ABB'A7!=AIA!=AA=1
=
three
with
rule holds Inverse
We
inverse
parentheses.
remove
=
order.
reverse
B-'A7!
that
law to
associative
in
come
=
2
1
4
—-6 7
17
|
OF}
2]
[x1 |
=
xX. x3; |
|e, i
eg
[1 =
e3| |
10
O
0
0]
1
OF}.
0
1
(6)
Thus
have three
we
of
systems
and
Inverses
1.6
all have the
equations (orn systems). They
47
Transposes
same
coefficient
is possible on all right-hand sides e1, e2, e; are different, but elimination systems simultaneously. This is the Gauss—Jordan method. Instead of stopping at U and it continues switching to back-substitution, by subtracting multiples of a row from the This above. rows produces zeros above the diagonal as well as below. When it reaches the identity matrix we have found A~!, The example keeps all three columns of length six: e;, e7, e3, and operates on rows A. The
matrix
Using
the Gauss—Jordan
[A
Method
e3|
€3
Ey
to
Fn
=
2
1
1
1
4
—6
0
0
7
2
0
2 (2
»
pivot =2—-
= —-8—>
pivot This
of
columns.
go from
Creating
|0 —8
—2
0
8
3
2
1
1
0
The
other
1
0O
1
0
1
1
«0
0
1
0}
1
«#1
-2
-2
The
are the
columns
=[U L""'].
same
in U appears (This is the effect
triangular
upper
L~!.
as
elementary operations GFE to the identity matrix.) Now the second half U to I (multiplying by U~'!). That takes L~! to U-!L~! which is A7!. above the pivots, we reach A7!:
zeros
[U L~'] >
Secondhalf 7
0
1
0-4
|0 -8
above
zeros
pivots
«
1
#1
0
1-1
2
0
0
|g
8
Q —4
3
2
oO
O
L-l
1
tf
1
0
O
|0
1
0
O
1-1
>
by pivots—>
divided
At the
last
matrix
in the left-hand
right-hand
2
67
5
12
3
F
2
73
64
5
-#
-#%
F
we
half
the
rows
half became have
must
the
$ —2
by their pivots identity. Since
carried
J into
~—2)
1 2 and A went
A~!. Therefore
—]
__
2
°
divide
step,
3
0
-
e
-1
2-1
r
the
«0
elimination.
2
on
+40
1-1
0
three
L
1
—-2
|0 -8
|
the
applying
will
1
the first-half—forward
completes
the first three
1
0
=[7
Av).
1 —8 and to
1. The
/, the we
coefficient
operations computed the
same
have
inverse.
for the future:
A note
of A~!. The when
the
rows
You
determinant are
divided
can
see
the determinant
is the product
by
the
pivots.
of the
—16
appearing pivots (2)(—8)(1).
in the denominators
It enters
at the
end
r1
Elimination
and Gaussian
Matrices
Remark
I
I admit
that
In
of this brilliant
spite
A~! solves
write
c
be
step. Two
one
U~'c
=
x
of
computation should not form, these time, since we only need back-substitution
produced c). similar remark applies
it.
better:
are
U~!L7~'b. But
=
recommend
I don’t
Ux=c.
and
Leo=b
separatesinto
L~'b and then
=
waste
a
in
and in actual
explicitly form, It would
b
=
A™'b
x=
We could
Ax
computing A7!, triangular steps
in
success
that
note
did not
we
L~' and U~.
matrices for
x
(and forward
substitution A
that
It is the solution
2
Remark
we
to
A~';
want, and
all the entries
not
still take
would
n? steps.
in the inverse.
might count the number of operations required count for each new to find A~!. The normal right-hand side is n?, half in the forward With n right-hand sides e;,..., n°. and half in back-substitution. direction e, this makes to be 4n?/3. After including the n?/3 operations on A itself, the total seems
Purely
result
This
is
a
out
little
of
multiplication A~!b
the
curiosity,
high
too
we
because
of the
elimination
in the e;. Forward
zeros
7 components, so the count changes only the zeros below the 1. This part has only n for e; is effectively changed to (n j)*/2. Summing over all j, the total for forward is to be combined with the usual n? /3 operations that are applied is n3 /6. This elimination to A, and the n(n*/2) back-substitution steps that produce the columns x; of Aw. The final count of multiplications for computing A™~'is n°: —
—
finally
nw count
Operation
nd
n2
+—-+n{—]=n. 3
S
;
2
remarkably low. Since matrix multiplication already takes n° steps, it requires as many operations to compute A’ as it does to compute A~'! That fact seems almost unbelievable (and computing A’ requires twice as many, as far as we can see). Nevertheless, if A~! is not needed, it should not be computed. This
is
count
Remark
3
starting
backward
but other
orders
earlier,
to
second
row
is
operations
that
create
lnvertible
Each
produce possible.
it
virtually full, have already
well
as
whereas taken
the
have
We could
above
a zero
above
zeros
as
near
went
we
pivots. used
below the
all the way forward That is like Gaussian
the second it. This
end
pivot
is not
it has
when
from
elimination, we
were
At that
smart.
zeros
U, before
to
the
time
upward
there
the row
place.
Nonsingular (n pivots)
=
Ultimately question is
to are
calculation
Gauss—Jordan
{nthe
we
want
to
which
know
matrices
are
invertible
and
which
are
not.
This
See the last page of the book! that it has many answers. of the first five chapters will give a different (but equivalent) test for invertibility. so
important
rectangular matrices and one-sided inverses: Chapter 2 and independent columns, Chapter 3 inverts AA™ or ATA. looks for independent rows determinants or nonzero The other chapters look for nonzero eigenvalues or nonzero We want to show pivots. This last test is the one we meet through Gaussian elimination. (in a few theoretical patagraphs) that the pivot test succeeds. Sometimes
the tests
extend
to
1.6
a
A has
Suppose
full set of
of A~!.
for the columns
They
can
exchanges may be needed, but Strictly speaking, we have a left-inverse. Solving AA~!
the columns
We
matrix.
2.
£;; P,;
3.
D
a
to subtract
a
exchange
to
D~')
(or
separate systems Ax; e; or by Gauss—Jordan. Row
n
=
elimination
A~!
of
gives
I has
the
at
determined.
are
A~' with those time solved A7'A
same
columns
is also
I, but
why? A
=
i and
rows
all
7 from
row
To
why, elementary see
i
row
j
by
rows
is
process
is
2 of
multiple
to divide
The Gauss—Jordan
matrix
square
49
Transposes
automatically a 2-sided inverse. Gauss—Jordan every step is a multiplication on the left by an are allowing three types of elementary matrices:
that
notice
1.
of
inverse
I
=
and
that the matrix
show
to
=
1-sided
pivots. AA~! besolved by
n
Inverses
really
a
their
pivots.
giant
sequence
of matrix
multiplications:
(D'...E...P-.-E)A=T. in
matrix
That
(6)
It exists, it equals parentheses, to the left of A, is evidently a left-inverse! the right-inverse by Note 2, so every nonsingular matrix is invertible. The converse is also true: If A is invertible, it has n pivots. In an extreme case that is Clear:
A cannot
of
column starts
on
have to
zeros
produce
invertible
an
matrix
ve Le
‘This matrix
A but breaks
less extreme
a
down
No
have
cannot
invertible.
Therefore
Ann
matrix
by n
one
of
inverse,
an
time?)
2 and
x
x
0
0
xI
0
0
0
x|
no
make
to
then
what
matter
the x’s
are.
third
the whole
of column
1,
reach
we
One
column a
matrix
the
original A was not invertible. Elimination invertible if and only if it has n pivots.
proof
matrix,
more
A becomes
and
by A’.
the ith
fortunately
it is much
Its columns
are
column
If
Transpose
then
time
same
A! is
entry in
row
n
by
is to
use
column
By subtracting that is certainly not gives a complete test: zero.
taken
simpler than the inverse. The directly from the rows of A—the
of A’:
0
2
At the
elimination
Matrix
of A is denoted
transpose row
first
is
Transpose need
pivot
a
xi
dy
10
3
multiply
3: x
|0
never
suppose
x
,
in column
case,
at column
Breakdown
operations (for the multiples of column
ith
of J. In
could
inverse
The
zeros.
fd;
€
We
column
a
of
column
co
)
The
whole
a
m.
the columns The
i, column
final
2
of A become
j of A!
is to
comes
Entries of
4
0
3
Fe |
A=
effect
1
flip from
AT
the
‘then
rows
the matrix row
AT=/1
OJ.
3
4
of A‘. If A is across
j, column
its main
by n matrix, diagonal, and the
an
m
i of A:
(7)
r1
The transpose us back to A. If
of
lower triangular
a
+B)’
A~!? Those
inverse
an
or
matrix
is upper
and then transpose, as A’ + is the same
matrices
add two
we
and then adding: (A AB
Elimination
and Gaussian
Matrices
the result
of A’
The transpose
is the
same
first
as
brings
transposing a product
B". But what is the transpose of formulas
the essential
are
triangular.
of this section:
A
iisUB =ptt, ofAB Thtranspose = NA@ ‘The a a Gi).The ofAT‘is.¢ AM T= ( At). transpose. |
Notice
how
giving B'A'
the order, an
unnatural
column
of AB.
requires first
to the
amounts
B~'A7!.
and
of
of AT
by the
rows
weighted
of
rows
=
On
of
a
product. You
(A7!)!,
for J'
side,
one
how
Inverse
of
(A~')! AT
is
the
reverse
easy, but this one first row of (AB)! is the
of B. This
first column
exactlythe
is
first
row
3
3 E j -
|3
2
3
2
| |
from
AA7!
Transpose
=
3
2
1
side, of
inverse
we
agree.
3
start
cases
was
BT. That
of
row
also
J. On the other
=
see
first
; a;
AB=
B'A'=
the formula
A
(AB)~!. In both
for
one
proof for the inverse multiplication. The are weighted by the
(AB)! and BA!
Transposeto
transposes.
The
So the columns
Startfrom
To establish
the
matrix
with
patience
B™A’. The other
of
(AB)! resembles
for
the formula
we
=
3.
5
|3
5).
3
5
I and
=
know
from
A~!A
=
TJ and
take
part (i) the transpose
A‘, proving (ii):
of A~!
(AD)TAT=I.
(8)
Symmetric Matrices With the
these most
rules established, important class of
we
all.
can
introduce
a
class
special
A symmetric matrix
is
matrix
a
of matrices,
that
probably
equals
its
own
is necessarily square. A. The matrix Each entry on one side of the transpose: At diagonal equals its “mirror image’ on the other side: a;; a;;. Two simple examples A and D (and also A~!): are =
=
Symmetric A
A
symmetric exists
A=
matrices
matrix
it is also
equals (A!)~!; for a is symmetric whenever a symmetric matrix.
; 4
and
D=
F4
and
Av}
l =
-—
_
z i}
be invertible; it could even be a matrix of zeros. But if symmetric. From formula (ii) above, the transpose of A~! always symmetric matrix this is just A~!. A~' equals its own transpose; it need
not
A is. Now
we
show
that
multiplying
any
matrix
R
by R' gives
1.6
Choose is
a
is
a
R*'R
of
R™(R*)',
is
for R'R.
of symmetry i of R) with column
quick proof
of R?
Its i,
that
=|1
produce R™R
not
R.
product R™R
the
the inner
1
2
2
4
=
|
is the
and RR*
opposite order, RR™ is m by m. RR". Equality can happen, but it’s not
=
inner
same
(9)
|
row
i
product, scientific
most
both.
or
|S].
=
In the
n.
of
product
Inmy experience, with R'R or RR!
R end up
matrix
H
R?=
2] and
rectangular
a
product R'R is n by very likely that R'R
The
an
with
start
Toe
R'R.
whichis
j entry is (j,i) entry
(column j of R. The column j with column i. So R'R is.:symmetric. RR‘ is also symmetric, but itis different from R?
problems
R. Then
1
—
symmetric matrix:
square
The transpose
That
Transposes
soll Caf
R, probably rectangular. Multiply R‘ times
any matrix
automatically
Gil
R'R, RA™,and LOL
Products
Symmetric
and
Inverses
if
Even
m
=
it is
2,
normal.
in every subject whose laws are fair. “Each action has The entry a;; that gives the action of i onto 7 is matched symmetry in the next section, for differential equations. Here,
Symmetric matrices appear equal and opposite reaction.” We will
by a;;.
LU misses
this
see
but LDL! captures
the symmetry
it
perieetly intoA - LDU aN Suppose Az_AT can without row¢exchanges factored cebe. Ht
n
~
nto
-
=
a
-
-
o ThenUiis thetranspose. ofL.Thesymmetric factorization A= -LDL*.becomes.
The
of
transpose
triangular
unique (see
with
17), L*
T
L*=UandA=LDL
elimination
When matrices
The lower
on
be identical
must
we
now
have
two
upper triangular. (L’ is is Since the factorization
to U.
1
2
1
T
_
A’,
=
=
lower
ones
Problem
A Since U'D'L'. gives A’ triangular times diagonal times the diagonal, exactly like U.)
LDU
=
of A into
factorizations upper
A
0)
|1
OF
JL
2
T
E sl [a" 0 4 eelferaaae —
_
=
is
applied
stay symmetric
right-hand
as
corner
asymmetric matrix, At elimination proceeds, and we remains symmetric: to
A is
=
a
advantage.
an
work
can
with
The smaller
half the matrix!
Cc
b _
la
ob
oC)
b
d
el
pc
€
FI
b?
+|9
¢-7 e—-—
a
7
from
work both
of elimination sides
of the
is reduced
diagonal,
or
e-=> C2
be 0
The
be
from to store
n° /3 both
to
f-—
n° /6. There LZ and
U.
a
|
is
no
need
to store
entries
Problem 1. Find
Elimination
and Gaussian
Matrices
r1
1.6
Set
(no special system required) of
the inverses
2.
(a) Find
F4
=
P1
A=
»
of the
the inverses
O
2
2
Q
Al
1
(4) (a)
AB
C
=
a
If A is invertible
(b) If A
1
0
0
0
iO
find
AC,
=
example
an
of A’ is B, show
always the
is
as
0
QO}.
1
0,
P*. Show that the 1s
J.
=
find A~! from
prove
quickly
with
AB
that
AC
=
the inverse
that
same
1
PA
C.
B=
but
LU.
=
BAC.
of A is AB.
(Thus
A
is invertible
A?”is invertible.)
whenever
Use the Gauss—Jordan
to invert
method
|
100
|
O
0 Find
2
three
A”
inverses: Show
by
2
A.=]{-1
11,
O
-1
O
fF
—-l},
I
other
than
A
=
J and
0 1]
{0
Azg=
2,
|
2 matrices,
0
1
Ii.
1
bod A
—/J, that
=
their
are
own
TI.
=
E|
A=
that
2
1
A,y=i{1l
7.
P=ji}1
and
for A~!. Also
formula and AB
4 a
=
If the inverse
(4
find
.
‘Oo O
(b) Explain for permutations why P~! in the right places to give PP! are From
ber sono
1
P={|0
—sin@
matrices
permutation
00
3.
cosé
ABS
»
has
inverse
no
by solving
Ax
0, and by failing
=
to solve
salle al=[oah 9.
Suppose
fails because
elimination
is
there
no
pivot 2
A=
3
Show
the third
10.
Find
be invertible.
that A cannot
[0 0
row
‘Oo0
0
2
0
2
0
“=19 400
30
IJ.
=
71 0
0
9
A7!, multiplying A, this impossible?
of
row
Why
5
is
should
give
(in any legal way) of
the inverses
{0
The third
1 O]of A7'A
8
000
|O
3:
6
14
0
.
oe
Missing pivot
in column
of? 0
fr “=
1
_t-3
] |
0
0
0
1
0
0
1
ol? “#8=100 a4 af
0
-2
0
0
-32 1
la
_|ce
10
b
0
O|
d 0 0 O
¢
di
1.6
11.
Give
B such
of A and
examples
case
A7'(A
use
also invertible—and
find
B)B7!
+
formula
a
53
Transposes
that
(a) A+ B is not invertible although A and B (b) A+ Bis invertible although A and B are (c) allof A, B, and A + B are invertible. In the last
and
Inverses
are
invertible.
not
invertible.
B-! + A™! to show that for C~!.
C
=
of A remain true for A~!? invertible, which properties (a) A is triangular. (b) A is symmetric. (c) A is tridiagonal. whole numbers. (e) All entries are fractions (including numbers
B-'
=
+
A!
is
If A is
12.
[7]and
IfA= ,
If Bis
B
that
show
square,
+],and
write
that
means as
the
can
be
B
K'
of
sum
—K.
=
Find
symmetric
a
are
like
2).
B
B' is always
BA’.
and
B™is always symmetric and K
B +
A=
skew-symmetric—which
B=[}
A’B, B™A,AB’,
Be compute
=
(d) All entries
=
these
matrices
matrix
and
a
—
K when
A and
skew-symmetric
matrix. 15.
How
(a)
order How
(b)
(K? 16.
(a) If
=
chosen
in
independently
can be chosen many entries independently in —K) of order n? The diagonal of K is zero!
symmetric
a
matrix
if
A
=
share
the
What
triangular systems
pivots.
same
and
A £,D,U; =
A=
will
give
LDU),
the factorization
A is invertible,
the solution
prove is
that L;
to
Aly
Lz, Dy
=
b?
=
=
Dz, and U;
=
Under
what
conditions
on
their
entries
A=|de
0 0
f 19.
Compute
the
symmetric LDL’ 1 A=
{3
5 20.
Find
the inverse
are
A and
B invertible?
bc
a
=
U). If
unique.
(a) Derive the equation Ly Ly,Dy Di O,U, and explain why one side triangular and the other side is upper triangular. (b) Compare the main diagonals and then compare the off-diagonals. 18.
matrix
skew-symmetric
a
of
LDU, with 1s on the diagonals of L and U, what is the corresponding factorization of At? Note that A and A! (square matrices with no row exchanges)
(b) 17.
entries
many n?
la.
b
0|
Ic
d
Ol.
B=
0;
factorization
0
Oe
of
63S]
1b
12
18
18
30]
a=|f A
and
of I
—y
©
07
Ble= SO Wl Wi = NieNieNie
is lower
rd
21.
22.
—
are
(directly
the inverses
23.
and
of A~!
for the columns
Solve
B(UJ
from
or
=i 4
matrices, show that AB) (J BA)B.
square
from
Start
AB is invertible.
Find
B
If A and
(Remarkable) I
Elimination
and Gaussian
Matrices
—
=
I
if
BA is invertible
—
—
the 2
by
B=
| 4
2
of A, B,C:
formula)
|
and
|; 7
c=
=
y
[1 {yy} {0
20]
10
50|
20
{22
[x]
and
20]
[0
fe] _.
20
S50]
fil
|z|
|
24.
E
that
Show
2
has
If A has
(Important)
by trying
aleloaf
pa 25.
inverse
no
1 +
row
2
row
for the column
to solve
[ol] =
maiome
3, show
row
=
(x, y):
that
A is not
(1, 0, 0) cannot have a solution. (a) Explain why Ax (b) Which right-hand sides (61, b2, b3) might allow a solution (c) What happens to row 3 in elimination?
invertible:
=
26.
2
1 + column
If A has column
solution x (a) Find a nonzero (b) Elimination keeps column third 27.
28.
If the
Find
invertible.
30.
Multiply
31.
(a) What
that
row
row
Find
and you How would
2
by 3. 3. Explain why
a
column
a d c
E has
2, subtract
single 2 to
row
the numbers
row
the
row
Z has
the
a
and
1 to
row
b
that
is the inverse
effect
same
4-1
give
these
as
3, then same
no
a
B. Is the
of each
subtract effect
the inverse
as
matrix
steps? 2 from
row
these
three
reverse
*
| to
4-1-1]
-1
-1
1
-1
and b in the inverse
4
-1|
-1
of 6
~
—
*
bb
_|b ab b
4 eye(5)
—
|b
ba
DI
lb
bb
a!
ones(5,5)?
if ad row
are
4
bc? 1 from
3.
steps?
2.
row
eye(4)
fab
A, B, C
Subtract
row
row
of 5
-1])”°
new
inverse.
an
three
3, then add
row
-1
~1
are
is
there
A~!?
have
cannot
zeros
What
.
| from
matrix
3, add
of
=
[¢ 3] times |_ matrix
you find B~! from
to reach
rows
matrices is invertible, then square for B~! that involves M~! and A and C.
formula with
its first two
exchange
r
What
invertible:
is 3
column
=
not
of three
ABC
=
matrix
a
(b) What 32.
M
product
Prove
0. The matrix
=
1 + column
is invertible
B invertible?
29.
to Ax
3, show that A is
b?
=
pivot.
Suppose A matrix
column
=
to Ax
ones(4,4):
Add
33.
Show
that
34.
There
are
are
35-39
Change
2
ones(4;4) is
—
2 matrices
by
entries
whose
are
J into
A~'
method Gauss—Jordan
the
about
youreduce
as
Os. How
*
ones(4,1).
many
1.0
13
N=) 304
£2
0 solve
to
I}
these
Invert
matrices
lie
JO
O
A
Xy
Tf
1 A=
AA!
=
T:
[1
0
0
0
1
Ol.
10
O
1
|2
=
X3),
Xd
|
|
the Gauss—Jordan
by
1
|
[
0
above and
0 0
bi
‘loa
4
2 1 0 1 O}.
[A
on
0
30
in A. Eliminate
10
0 Gauss—Jordan elimination
A}.
calculating
=|3
by 3 text example but with plus signs JI]to[Z A™!]: pivots toreduce[A
Use
for
141
[A
and
|
0
0
1
3
startingwith
method
‘1 A=j/1
and
1
1]
2
2).
2
3
[A
/]:
|
O
0 39.
Exchange
rows
and continue
|
True
or
false (with
a
1 Gauss—Jordan
with
=|;
|A 40.
counterexample
1
02 >
if false
A7:
to find
0
and
0
ih if
a reason
true):
(a) A4by 4 matrix with a row of zeros is not invertible. (b) A matrix with 1s down the main diagonal is invertible. (c) If A is invertible then A~! is invertible. (d) If A! is invertible then A is invertible. 41.
For
which
three
numbers
c is
this
matrix
invertible,
not ¢
Cl
c
Cl}.
7
C
‘2 A=|c
8 42.
of them
(by row operations):
A to I
[A I]/=|1
38.
and
1s
are
210
37.
A Multiply
invertible:
not
the 3
Follow
the
*eye(4)
sixteen
[A 36.
4
=
invertible?
Problems 35.
A
55
Inverses and Transposes
1.6
Prove
that A is invertible
if
a
+
0 and
|
+ b (find
a
a
Azla
b
a aaa
and why not?
b
b.
the
pivots and A~?):
below
43.
Elimination
and Gaussian
Matrices
er
This
matrix
toa5
by
has
5
inverse.
remarkable
a
matrix”
“alternating
A~! by elimination
Find
its inverse:
and guess
1-1
-
45.
46.
48.
lower
A
no
solution?
=
If b
M~! shows
0
b
QO
0
TT
C
ol
C
D
I
Dt
by
4
to Ax
a
matrix
M~'=J7+uv'/(—v'wu). M-'!
3.
M=]—-UV
and
M7'=1,+U(I, —VU)'V. M!1=A!1+A-'U(W VATU) !VAt.
M=A—UW'V four
identities
and
49-55
Problems Find
A? and A~! and
the 1, 1 block
when
Verify
that
(A7!)!
and
1
case
AB
(AB)' equals B™A'
=
BA
that
A?
=
inverting these
O
ew
for
loc
a=) sk
and also
but those
are
different
possible
but
(A7!)" (U~')! is =
B?:
bg
from
A1A
A’
from
(not generally true!), how do you prove
Ois
matrices:
matrices.
by
(a) The matrix ((AB)~!)™comes (b) If U is upper triangular then Show
(A!)7?
|g 4
mb] In
for transpose
the rules
A=
A.
vIAT).
—
eal
about
are
b has
—-
from
come
A7tuv'’A7/(
A714
=
from
is subtracted
and
=
B
get I:
to
wie
52.
=
tell you that Ax b is found by Ab?
=
when
know) MM
inv(S)
MATLAB
solution to
—
pascal(4). Create inv(A’)* inv(A).
=
2.
The
31.
and test
how does
S
matrix
symmetric
abs(pascal(4,1))
change in A! (useful by carefully multiplying
A
matrices:
block
and
4.
50.
of these
A
4
that
x.
QO
=
0 to show
=
M=I—uv' M=A-—uv'
1
49.
B)x
—
I
ones(4,1),which
=
to
you
rand(4,1),
=
1
order, solve (A
the
part 3
Check
0
-1/
(assuming they exist)
A
triangular
ones(4,4) and
If
0
MATLAB’s
invert
inv(S) to
Use
1
lead
‘
1
0
reverse
the inverses
and check
Find
1-1
A=10
example will
An
invertible.
js not
Pascal’s 47.
of A in
If B has the columns
-1)
1
O
44.
J]. Extend
[A
on
Oisnot
and
that
(B7!)".
B'A?
In what
=
A™B™?
order?
triangular.
possible (unless
A ==
zero
matrix).
53.
(a) The
x! times
vector
row
the column
A times
and
Inverses
1.6
y
57
Transposes
number?
what
produces 0
123
1
_
Aa (b) This (c) This
is the
row
is the
row
x'A x’
times
=
1] times
[0
=
the column
When
34.
the column
Ay
(0, 1, 0).
=
y =
M the result you transpose a block matrix on it. Under what conditions A, B, C, D is the block matrix
55.
Explain why
the inner
(Px)'(Py)
Then
and y
(1, 4, 2),
=
56—60
Problems
=
are
product
of
and
x
says that P'P P to show that choose
x'y
about
y
If
A
—
If
57.
A
symmetric.
a
In matrix
the
loses
certainly symmetric? (d) ABAB.
are
it also
then
exchange, language, PA
row
symmetric?
factorizations.
their
ABA
(c)
—
A! needs
=
and
matrices
=
=
Test
.
=
=
A! and B B', which of these B? B) (a) A* (b) (A + B)(A
56.
=
equals the inner product of Px and Py. I for any permutation. With x (1, 2, 3) to (Px)"y is not always equal x'(P'y).
matrices
symmetric
M7
is
[4 *]
=
needs
column
a
exchange
of A but
symmetry
to
stay the
recovers
___
symmetry. 38.
(a) How many entries of A can be chosen independently, if A A! is 5 by 5? (b) How do L and D (5 by 5) give the same number of choices in LDL"?
59.
Suppose
=
# is
rectangular (m by n)
and A is
symmetric (m by m).
(a) Transpose R'AR to show its symmetry. What shape is this matrix? (b) Show why R'R has no negative numbers on its diagonal. 60.
Factor
these
matrices
symmetric
into
A
LDL’. The matrix
=
D is
diagonal:
r
O|
2-1 a=
The
next
61.
Wires
three
|; 1 problems
A=|,|
and
are
about
[0-1
of
applications
(Ax)!
2
y
x!(ATy).
=
Boston, Chicago, and Seattle. Those cities go between are between cities, the three currents xs. With unit resistances —_
=
—
yBc y=
Ax
is
yos |
|
YBS_
=
.
are
at
voltages
xp,
xc,
in y:
—
1
—-l
OO}
0
1
—l
Fl
O
—1]}
AT y out of the three cities. (a) Find the total currents (b) Verify that (Ax) y agrees with x'(A? y)—six terms 62.
—l/).
2
|-1l
A=
and
“
|xp XC
|}.
[xs
in both.
Producing x, trucks and x2 planes requires x; + 50x, tons pounds of rubber, and 2x, + 50x. months of labor. If the $700 per ton, $3 per pound, and $3000 per month, what are and one plane? Those are the components of Aly.
of steel, 40x,
+
unit
yo, y3
costs
the values
y,,
of
one
1000x2 are
truck
and Gaussian
Matrices
ri
63.
64.
Ax
of steel, rubber, and labor
the amounts
gives
of
Then
(Ax)'y
is the
Here
is
factorization
a new
is
Why
A group inverses
A
from
Start
65.
Elimination
L(U)~
of A into
Its
triangular?
and
in Problem
62. Find
A.
of
symmetric:
L(U')~' Why
U' DU.
times
U'DU symmetric?
is
A~' if it includes
of these
stay in the group.” Which
equals
is all 1s.
diagonal
includes AB
of matrices
A
x
is the value
times
triangular
Then
LDU.
=
x'(A'y)
while
inputs
produce
to
A and
“Products
B.
and
groups? Lower triangular matrices L with 1s on the diagonal, symmetric matrices S, positive matrices M, diagonal matrices P. matrices Invent invertible two matrix more D, permutation groups. sets
are
66.
the numbers If every row of a 4 by 4 matrix contains the matrix be symmetric? Can it be invertible?
67.
Prove
that
of
reordering
no
and
rows
reordering
0, 1, 2, 3 in
of columns
order,
some
transpose
can
can
a typical
matrix. 68.
A square northwest that connects (1,
northwest You
are
or
B is
matrix to
(n, 1).
southeast?
What
7)
to combine
allowed
in the southeast
zero
below
corner,
B™ and B? be northwest
Will
is the
shape of permutations with
BC
matrices?
northwest
=
the usual
the
antidiagonal Will
southeast?
times
L and
U
B-! be
(southwest and
northeast). 69.
Compare tic; inv(A); toc for A rand(500) and says that computing time (measured by tic; toc) =
doubled. 70.
I
A =
=
matrix
71.
random
expect these
rand(1000); B eye(1000); A B. Compare the times for inv(B)
=
the
Do you
in B, while
zeros
with
inv
(Compare
also
Show
L~! has entries
that
the
uses
and
inv(A)
1
L=|*
m.)
A.
space—the nullspace of
vector
73
Spaces and Subspaces
-
s
ofallvectors. ofaa matrix x. “such Thenullspace. thatAx 0. Iti denotedc«onsists of R”. subspace by.N (A). It is a subspaceof R”"just.the.column, space =
is
=
a
was
0 and Ax’ Requirement (4) holds: If Ax 0, then A(x 0 then A(cx) 0. Both requirements (ii) also holds: If Ax is not zero! Only the solutions to a homogeneous equation (b nullspace is easy to find for the example given above; it is as =
=
x’)
0.
Requirement right-hand side 0) form a subspace. The small as possible: +
=
=
fail if the
=
=
_
1
0|
0 1
The first
contains comes
5
4
2
4
||
|Q}.
=
|O
O, and the second equation then forces v equation gives u only the vector (0, 0). This matrix has “independent columns’—a =
=
0. The
nullspace key idea that
soon.
The
situation
is
changed whena
third
is
column
a
of the first two:
combination
—
the vector
1
|5
4
9},
12
4
6
column lies in the plane of Figure 2.1; it is space as A. The new the column vectors started with. But the nullspace of B contains we
column
same
of the two
sum
0
B=
Larger nullspace B has the
1
(1, 1, —1) and automatically contains
Nullspace is
a
any
line
multiple (c,
‘1
0
1]
[ic
5
4
9
c|
2
4
6 Jo
c,
—c):
0] {OQ}.
=
—C
|
-
0 —_
—c. (The line goes through nullspace of B is the line of all points x c, y c, z the origin, as any subspace must.) We want to be able, for any system Ax J, to find 0. C(A) and N(A): all attainable right-hand sides b and all solutions to Ax in the nullspace. We shall The vectors b are in the column x are space and the vectors to generate set of vectors of those subspaces and a convenient compute the dimensions are that them. We hope to end up by understanding all four of the subspaces intimately of and their two to related each other and to A—the column A, space of A, the nullspace perpendicular spaces.
The
=
=
=
=
=
Problem
Set
1. Construct
of the x-y
under
vector
under
scalar
Starting with u and
Hint:
(a)
subset
a
(a) closed (b) closed
(x
2.1
Which The
of the
following
plane of
vectors
plane R?
addition
and
subtraction,
multiplication v, add
subsets
and
of R°
that is
but not
subtract are
but not
under
vector
for (a).
Try
scalar
multiplication.
addition. cu
and
cv
actually subspaces?
(b;, bo, bs) with first component
b;
=
0.
for (b).
(b) (c)
The
b with
of vectors
plane
b;
1.
=
b2b3; 0 (this is the union of two subspaces, the plane 0 and the plane b3 Q). b. of two given vectors (1, 1, 0) and (2, 0, 1). (d) All combinations 0. (01, b2, b3) that satisfy b; bz + 3b; (e) The plane of vectors vectors b with
The
=
=
=
=
—
/3,Describe
the column —-1l
1
a=(4 , 4.
5.
and the
space
and
and
all
both
of
0
smallest subspace of lower triangular matrices? those subspaces?
3
0
|
B=
the
Whatis
of the matrices
nullspace
3
4
,
3 matrices
by
i What is
the
largest to
+ Ze 4 + a=(e-+Ly)4
xt
these
satisfy
0
symmetric
matrices
contained
is
in
eight rules:
oe
suchthatx a-0==X There a unique “zerovector” x _For each x‘thereisa unique. vector-7x«such that.
de=x.
(5 of
subspacethat
is
.
all
0
0
0
c=
that contains
are required andscalarmultiplication
Addition
and
Be
oO
:
a forall.x.
-
aan(~2) =
wa
7
|
aax= exe) =
(a) Suppose addition in R’ adds an extra 1 to each component, so that (3, 1) + (5, 0) equals (9, 2) instead of (8, 1). With scalar multiplication unchanged, which
rules (b) Show
are
that the set of all
equalthe (c)
usual
Let P of the
==
+
(1, yz)
(CX1,CX2), which
real
positive
and x°, is
xy
Suppose(x1, x2) cx
6.
broken?
vector
a
is defined
of the
space. to be
plane in 3-space with equation plane Po through the origin parallel to of the
following
(a) All sequences (b) All sequences
are
What
(x;
0,...)
(x1, X2,...)
with
2y
+ y and
is the “zero
not
+ z
=
P? Are P and
cx
redefined
that include =
x;
0 from
y,). With the
All
6. What
is the
infinitely many zeros. some point onward.
decreasing sequences:
Which of
the
descriptions following ax=
are
correct? XI
111
The
solutions
0
|i ‘1 =1 0
*2
equation
Po subspaces of R*?
—
8.
usual
satisfied?
< x; for each j. x;,; the x; have a limit as j > oo, (d) All convergent sequences: for all j. (e) All arithmetic progressions: x;.; x; is the same (f) All geometric progressions (x1, kx;, k*x1,...) allowing all k and x;.
-(c)
to
vector’’?
of R®?
subspaces
like (1, 0, 1,
+
x
x
+ yz, x2 +
eight conditions are
be the
@7)Which
with
numbers,
x
of
Vector
2.1
75
and Subspaces
Spaces
form
(a) a plane. (b) a line. (c) a point. (d) a subspace. (e) the nullspace of A. (f) the column space of A. Show
that
nonsingular 2 by singular 2 by 2 matrices
that the set of 10.
The
the
matrix
E a”
A=
in this
vector
zero
in the smallest
11.
of
the set
is
a
space,
Ifa
of
the vector
5A,and A
A and
M contains
functions
f(x)
all real functions.
Which 13.
14,
is broken
sum
“zero
vector”
is
rules
that
broken.
Describe
(c) Let
‘1
of the “vectors”
are
x.
f(x) Keep
subspace
0]
0
0.
‘1
1|
and
of the 2
in
by 1
matrices
B=
but not
J?
matrices.
diagonal
nonzero
“vectors”
E JI.
in the vector
be
=
f(g(x)), multiplication cf (x), and
2 matrix 0
to
then
find two
1
0
0
1:
x
[1
1)
+ y
2z
—
that their
oO]
4. The
=
is
sum
1
fo
not
origin (0, 0, 0)
17.
The
of R°? are
is not
in P.
15. What Po is the plane through (0, 0, 0) parallel to the plane P in Problem in Pp and check that their sum is in Po. equation for Pg? Find two vectors
subspaces
the
M that contains
space
and
'
equation
in P and check
is defined
F
F of
space
16.
four types of
are
0 0 ql0 1 k if
(b)
in R° with
vectors
F |
scalar
the usual
,
Write
2 matrices.
—
1
two
by
—A. What
it contain
are
g(x)
(d)
plane
M of all 2
=
no
(x)
1
k |
and
space.
the vector
must
5x
=
3f
0}
P be the
in P! Find
g(x)
==
the smallest
0 0
if
also
Show
space.
4g(x) is the function h(x) multiplying f(x) by c gives the function f (cx)?
combination
The
If the
(a)
15.
rule
x’ and g(x)
=
vector
a
A?
B, (b) subspace Describe a of M that contains (c) subspace 12. The
vector
a
in the space
of M that contains
subspace
a
is not
“vector”
subspace containing
(a) Describe
is not
2 matrices
planes, lines, R? itself,
or
Z containing
is the
only
(O, 0, 0). (a) Describe (b) Describe 18.
(a) The intersection could
be
a
(b) The intersection
probably
subspaces
of R’. types of the five types of subspaces of R‘.
the three
a
of two .
of
planes through (0, 0, 0)
It can’t a
be the
zero
vector
be
but
it
through (0, 0, 0)
is
probably
a
Z!
plane through (0, 0,0) with
but it could
is
a
line
a
SMT (vectors in both (c) If S and T are subspaces of R°, their intersection spaces) is a subspace of R°. Check the requirements on x + y and cx.
sub-
r2
19.
Vector
Spaces
Suppose P vector
20.
True
is
space
for
false
or
M
all 3
=
by
a line through
and L is
plane through (0, 0, 0) containing both P and L a
is either
or
;
(check addition
3 matrices
The smallest
(0, 0, 0).
using
example)?
an
—A) form a subspace. (a) The skew-symmetric matrices in M (with A’ (b) The unsymmetric matrices in M (with A? + A) form a subspace. (c) The matrices that have (1, 1, 1) in their nullspace form a subspace. =
21-30
Problems 21.
the column
Describe
column
about
are
(lines
spaces
C(A)
spaces
B={|0
22.
For
which
and
C=
1]2 OJ.
(OO
OO sides
right-hand
(find
0
condition
a
0]
‘1
2]
0
b;, by, b3)
on
b.
=
matrices:
particular
0]
1
0O} and
A=1!10
equation Ax
the
of these
planes)
or
2]
rf]
and
these
are
systems
solvable?
Ft (a)
|
2
—4
Adding
—2,
two
1
For which
O
0 25.
the
and
same
61) [xy]
1
1
x2);
0
1
men
a
combination
and
systems have
and
|
produces
of the columns
1
C=
a
3
sf
solution?
r1
1
1)
0
1
i
[x;y] xX2|
|O 0) 0]
| b3
2
?
| A
b>
=
1 to column
column
2
Poy
|
|
column
1
B=
2
—4
is also
(b;, bz, b3) do these 1
0]|| [oa bs
2
=
Adding
of
2
; A
vectors
1
B.
Dy
x
1
produces
have
matrices
A=
24.
2
row
4]
|
()
bs|
3
of the columns
C. A combination of A. Which
=|bl.
|
1 of A to
row
rit
by
|x]
4{
8
j—I 23.
OQ] [xy]
64)
x3
[by] by
=
| bs|
|
(Recommended) If we add an extra column b to a matrix A, then the column space Give an example in which the column space gets larger gets larger unless is Ax b and an example in which it doesn’t. solvable when the column Why exactly space doesn’t get larger by including b? .
=
26.
of AB
The columns
are
combinations
of AB is contained example where the column
space
27.
If A is any 8
28.
True
(a) (b) (c) (d)
false
or
by
If
a
column
The column
AB
are
matrix, then its column
not
The column
means:
of
space
A. Give
equal.
space
is
.
Why?
counterexample if false)?
b that
C(A) contains
The
of A and
of A. This
the column
(possibly equal to)
spaces
8 invertible
(with
The vectors
in
of the columns
are
not
in the column
only
the
zero
space C(A) form a subspace. then A is the zero matrix.
space
vector, of 2A equals the column
space
of
A
—
J
equals
space of A. the column space of A.
an
29.
Construct
30.
If the 9
31.
Why
12 system
by
isn’t R?
was
x
matrix
rectangular
a
3
by
Ax
=
b
of
from
can
all solutions
A7'). new
The
1.
Any
contains
the column
b that
make
A 3 We The
4
by
by
Ox
example
=
=
only has
0
Simple,
I admit.
invertible:
y+
There
size. We will write
is
Q,
=
=
and
equation
one
unless
b
many and the
elimination
that
Ax
(multiply
0
=
by
for every b). The and/or the vector
zero
solution
The solutions
x,.
the conditions
need
R™, we
all solutions
down
(so that Ax
space
column
A(x, +4%x,) =).
produce
shows
unknown,
one
0. The
=
solutions.
complete
up to 2
move
b; and 2y + 2z
solution
no
good
b in
every
for b to lie in the column
b,
This
two
of the |
space
=
to Ax
on
=
0.
b is
solvable). possibilities: 1
by
zero
matrix
0.
=
If you z
a
infinitely
is x,
solution
b
Ax, =0
will be
no solution
b has
contains Ox
0x
1 system
pivots.
+ Xn:
contain
=
the
a particular
space doesn’t b solvable.
the conditions
willfind |
Ax
and
§Ax,=—b
=
0
=
than
more
=
When
full set of
a
a solution
b has
B,
=
all vectors:
Jess than
solution
x
to Ax
(not by computing A~').
simplest matrix only
=
in the
x,
Complete 2.
form
nullspace can be added to equations have this form, x x,
vector
all linear
to
U to areduced
solution
one
have
not
may R—the
space is the whole space (Ax appear when the nullspace contains
column space
=__
R??
column
questions
b, then C(A)
for every
immediately. matrix, the nullspace contains
invertible
an
is solvable
possibilities—-U
new
section
For
3 matrix
1) but
(1, 1, 0) and (1, 0, space contains whose column space is only a line.
There was matrices. square invertible That solution was found by elimination
brings
goes onward give. R reveals
column
on
A7~'b.
=
whose
subspace
a
| concentrated
Chapter A
3 matrix
by
(1, 1, 1). Construct
not
and it
3
a
71
and Ax=b
Solving Ax=0
2.2
unless
by 2, =
b,
=
The
solution it’s
more
is
x
x, +
=
interesting.
by usually have 2b,. The column
all
contains
nullspace
x,
=
A
x.
0+
(any x).
The matrix
solution.
no
space
particular
|; |
of A contains
only
is not
those
b’s, the multiples of (1, 2). When
y+z
b, =
contains
A particular solution to 2b, there are infinitely many solutions. 2and 4is x, = 2y + 2z (1,1). The nullspace of A in Figure 2.2 (—1, 1) and all its multiples x, (—c, c): =
Complete solution
=
=
yt
z=2
2y+2z=4
fl
.
is
solved
by
Xp + %n
_ =
—l}
|l-e
Hte| i al 7
er2
Vector
Spaces
solutions
of all
line
Hl
shortest
=
H Az,
nullspace Figure
2.2
The
x
=
+ x,
x,
particular
solution
MATLAB’s
=
zp solution
particular
Ab
Y
~
0
=
of solutions
lines
parallel
Ax,
to
0 and
=
E 1 E| Al =
Form U
Echelon We start
this 3
by simplifying
by
4
matrix, first
U and then
to
further
to R:
—
Basic
A=j|{
example
I
3
3
2
2
6
9
TI.
3
4
—-1
The
pivot
a,,;
1 is
=
this
below
first column
The usual
nonzero.
pivot.
The bad
-3
elementary operations news
in column
appears
will
2
in column
pivot
0) zero
for
a
If
zero.
A were
rectangular matrix,
stop. All twice
nonzero
it is also
below With
a
pivot has entry—intending to
for the second
The candidate
we
can
the second
we
square, must
become
Sow Ww
LS)
OH
6
;
unacceptable. We look below that row exchange. In this case the entry signal that the matrix was singular.
zero:
carry out a this would
expect
in the
2
|0
A-—>
zeros
2:
i
No
produce
trouble
anyway, where the
do is to go on to the next column, from the third, we arrive at U: row
and
there
pivot entry
is
no
is 3.
reason
to
Subtracting
a matrix
Echelon
U
U=
0
0 Strictly speaking, we proceed to the fourth column. A zero is in the third pivot position, and nothing can be done. U is upper triangular, but its pivots are not on the main diagonal. entries of U have a “staircase The nonzero pattern,” or echelon form. For the 5 by 8 case in Figure 2.3, the starred entries may or may not be zero. form U, with zeros below the pivots: We can always reach this echelon 1.
The
2.
Below
3.
Each
pivots each
pivot
pattern,
and
are
the first
pivot
is
a
column
lies to the
right
rows
come
zero
entries
nonzero
of zeros,
of the last.
pivot
in their
rows.
obtained by elimination. in the
row
above.
This
produces
the staircase
Solving Ax=0
2.2
oO
@
k
gq
@
x
x
x
*
*
*
*
*
x
79
Ax=b and
©
*
OF
xX
*
©
*
ox Oro oxOor*x Clem CO1xDOC CooCO.cl OC x
©
*
*
O&O
*
OaOoo CDOOxX oo@x Ooox Ooolx o@x* O 0
|
aU
0
0
val
The entries
2.3
Since A
started
we
of a5
with
meal
8 echelon
by
Co Co
0
0
canal
Figure
©
|
A and
U and its reduced
matrix
U, the
with
ended
~~
form
is certain
reader
R.
Do
ask:
to
we
steps have There is no reason why not, since the elimination not changed. Each step still subtracts a multiple of one row from a row beneath it. The inverse of each step adds back the multiple that was subtracted. These inverses come in the right order to put the multipliers directly into L: have
LU
=
before?
as
Lower
that
Note
Z is square.
2
E=;
triangular
It has the
0
1
O
2
|
of
number
same
0
and
A and
as
rows
A=LU.
U.
only operation not required by our example, but needed in general, 1s row exchange by a permutation matrix P. Since we keep going to the next column when no is no Here is PA = that A is pivots are available, there need to assume LU nonsingular. for all matrices: The
1
|
n
“2 “Forany ;
m byn matrix A thereisa
the second
row
above
zero
best
(the
the
form
3
003
10 0 R
R is rref(A). Of
matrix
=
What is the identity So
contains Ax
will =
echelon
matrix U,
go further than U, to 3, so that all pivots are
can
0
2
3
1
3
0
course
result
reduced
There
is a
full
5
final
result
0 -1
I/=R
1
0
0 0
io on
A. MATLAB
would
0 use
the command
_
R
form of a square set
invertible.
of
again! invertible
matrix?
equalto 1, with the Figure 2.3 shows
pivots,
all
=
0.
|0
The
row.
R:
13
by 8 matrix with four pivots, an identity matrix, in the four pivot rows 0 has quickly find the nullspace of A. Rx a
the
use
echelon form
of elimination
even
a higher
from
—
simpler. Divide pivot row to produce
the matrix
1. Then
0
rref(R)would give
row
make
row
1;
000
the final
a
suchthatPA =LU, :
2
3} —»1001
rref(A) = /, when A is For
we
matrix.
by n
m
by its pivot pivot. This time we subtract we can get) is the reduced row
13
This
an
R. We
comes
a
|
and unit diagonal, Now
P, lower permutation Lwith angular
i
and
the
four same
In that
zeros reduced
R is the
case
above
form
pivot columns. solutions as
and below.
R.It still From Ux
=
R
0 and
Vector
r2
Spaces
and Free Variables
Pivot Variables Our
off all the solutions
is to read
goal
Rx
to
0. The
=
pivots a
.
,
tf.
unknowns that
those
pivots,
correspond and
sou
columns
corresponding to v
two
assign arbitrary y. The pivot variables
to the
with
is
There
is
solution
a
Rx
combination
solutions
special again
PF at
this
to
v
=
0 and
y
=
find all solutions
a
After
2.
Give
for the 3.
form
pivot
Every free of special Within a
Rx
Ax
=
free variable
one
v
1. All solutions to
reaching
solution
complete
(—3, 1, 0,0) has free variables has
call these
in terms
of
variables.
variable solutions
0 is
=
yields
linear
are
the
from
This
is
may and
The
complete
1
|
+y
0
f£ 0 OF
J
0 and Ax
=
(2)
;
—1
£1
special solution special solution (1, 0, ~1, 1) of these two. The best way
other
combinations
=
0. The
solutions:
special solution. produces its own “special solution” form the nullspace—all solutions x
v
(1)
3
—
=v
y
special
we
simply
independent.
|
0)
=
and y:
v
u=—3u+y
O. The
=
Ax
values
w=
y
the
contain
and free variables. 0, identify the pivot variables the value 1, set the other free variables to 0, and solve
the four-dimensional
two-dimensional
1, y
=
to
yields
to Rx
pivot variables,
columns
(or, equivalently,
—y
.
the
O|
up of the free variables, and fourth columns, so
group is made are the second
es
x=
|
group contains first and third
Suppose
determined
y
|
we
OQ}.
wl
of solutions, with v and y free and of two special solutions:
.
look
1
infinity”
all combinations
Please
0
=
variables.
free
Nulispace contains of
0
The
u+3v—y=0 wt+y=0
_
“double
a
pivots.
to
completely
are
=0
Rx
0
One
groups.
and y are free variables. To find the most general solution values
1
pivot variables. The other without pivots. These
the
w are
0
0
ro]
.
}
columns
to
=-1
=|9
Rx
y go into
w,
v,
u,
_
4300
Nullspace of R (pivot columns in boldface)
The
“wh
¥
crucial:
are
Rx
0
=
a
space
of all
subspace—the
possible nullspace
by step to
vectors
of
A.
Ax
x,
In
=
2. The combinations
0.
the solutions
the
to Ax =0
example, N(A)
is
of (—3, 1, 0,0) and (1,0, —1, 1). The combinations special vectors these two vectors produce the whole nullspace. are Here is a little trick. The special solutions especially easy from R. The numbers and 1 lie in the “nonpivot columns” of R. Reverse their signs to find the 3 and 0 and I will put the two special solutions (not free) im the special solutions. pivot variables
generated by
—1
the
from
into
equation (2)
matrix
nullspace
a
N,
so
this neat
see
you
‘-—3 matrix
Nulispace (columns
1 N=
special solutions)
are
not
0
free
hand
variables
side of
determined This has
is
values
1 and
their
0
|
-1]
free
1|
the free
columns
moved
3 and 0 and
—1 and
1 switched
0. When
coefficients
must
be at least
rows
of R reduce
n
than
rows,
m
free
—
to
be
can
variable
zero;
n
>
Since
m.
variables.
but
assigned
any value,
will
be
at
least
what,
leading
some
be free.
must
This
conclusion:
following
least
one
be
must
Q. The A(cx) free variables, the =
has
nullspace
if
variables
20 Tf Ax:ma0has1 thanequations (a:> m),it has at more.unknowns |are.more solutionsthanthetrivialx==0. ‘There - specialsolution:
The
That
sign.
¢
-
There
right-
Suppose a matrix m pivots, there
free
variable
one
the
to
of 4).
most
at
more
even
the
to
hold
can
rows
m
There
matter
no
free
not
equation (2), the pivot variables in the special solutions (the columns the place to recognize one extremely important theorem.
columns
more
free
have
free
~
0
free
pattern:
1]
|
The
81
and Ax=b
Solving Ax=0
2.2
infinitely many solutions, since any multiple cx will also satisfy the line through x. And if there are additional nullspace contains than just a line in n-dimensional more nullspace becomes space. the same “dimension” variables and as the number of free special
solutions. This central
idea—the
the free variables
We count
dimension for the
subspace—is made precise in nullspace. We count the pivot variables
c,and
Ax
of
a
the next
section.
for the column
space!
solving The
case
the
on
Ax
=
b, Ux
=
d
=
0. The 4 0 is quite different from b right-hand side (on Db).We begin with letters b
=
condition—for solutions
For
b to lie in the column
space.
Then
we
operations
row
(b;, b2, b3) choose
b
A must
on
to
find the
(1,5, 5)
=
act
also
solvability and
find all
x.
the
original example
that led from
A to U. The
Ax
result 1
b
=
is
an
=
(by, b2, b3), apply
both
to
Ux
upper triangular system
WwW Ww dv
u
~—
=c
0 0
CO OW WwW
CO
WwW
y
=
operations
c:
west
b,
Uv
Ux
the
sides
=
by 2b, 2b. +
(3)
—
[bs —
,
5b;|
right-hand side, which appeared after the forward elimination steps, is just L~'b as in the previous chapter. Start now c. with Ux It is not clear that these equations have a solution. The third equation is very much in doubt, because inconsistent unless its left-hand side is zero. The equations are 0. Even though there are more unknowns than equations, there may 2b, + 5b; b3 b can be be no solution. We know another question: Ax way of answering the same from the four solved if and only if b lies in the column space of A. This subspace comes
The
vector
c on
the
=
—
=
=
Vector
r2
Spaces
columnsof
A
(not of U!): |
f
A
of
Columns
2|,
“span” the column
Even though
|
third minus
3
|
7].
4
|
their combinations vectors, only fill out a plane in three2 is three times column 1. The fourth column equals the
four
are
space. Column the first. These
dimensional
ones
there
3.
—
2
9],
6],
—I
space
3
3
1
the second
dependent columns,
and
fourth,
are
exactly
the
without pivots.
C'(A) can be described in two different ways. On the one hand, I and 3. The other columns lie in that plane, it is the plane generated by columns and contribute b that satisfy nothing new. Equivalently, it is the plane of all vectors if the system is to be solvable. b; —2b2 + 5b, = 0; this is the constraint Every column we shall see that the vector satisfies this constraint, so it is forced on b! (5, —2, 1) is perpendicular to each column. column
The
space
Geometrically,
If b
belongs
equationin as before.
Ux
c
=
space, the solutions 0. To the free variables
0
is
=
variables
The
of Ax
the column
to
pivot specific example with b3
—
w
and
u
2b,
5b,
+
still
are
and
v
y,
determined
0, choose
=
=
b
are
easy
to
5,5):
(1,
=
find. The last we may assign any values, For a by back-substitution.
b
r
4030320 269
=b
Ax
ri]
:
7)
|5].
=
|
3
-1-3
elimination
Forward
U
produces
the left and
on
1 0
Ux=c
last
equation
is 0
0,
=
2
0
3
3
0
Oj},
=3
u+t3v+3w+2y there
is
a
double
:
expected. Back-substitution
as
3w+3y
Again
the
on
c
infinity
=
or
1
w=
v
right: or =
|3].
0] gives l-y
u=—2—3vu+y.
or
of solutions:
5
y
303
0.0 The
4
and
y
are
free,
u
and
|
_
Pa
_
=)
solution. ‘Complete
|
solution
satisfy a
to Ax
Ax
solution
=
|
0;
0, plus the new x, b. The last two terms with v and to Ax
= 0). Every to Ax = 90:
=
solution
=
Ax
=
1|
J
0
b
is
|
0]
|
1
(—2, 0, 1, 0). That x, is a particular y yield more solutions (because they the sum one of particular solution and =
to
i
|
y
This has all solutions
not:
are
3 0
{uv}
|
w
+ Xnulispace _ -Xcomplete = Xparticular
The
83
and Ax=b
Solving Ax=0
2.2
in
solution
from solving the equation with all free equation (4) comes variables set to zero. That is the only new part, since the nullspace is already computed. When you multiply the highlighted equation by A, you get AXcompiete D + 0. surface—but it is not a Geometrically, the solutions again fill a two-dimensional 0. It is parallel to the nullspace we had before, shifted subspace. It does not contain x by the particular solution x, as in Figure 2.2. Equation (4) is a good way to write the
particular
=
=
answer:
1.
Reduce
2.
With
free variables
3.
Find
the
turn,
is 1.
When
Ax
the
bto
=
=c.
0, find
=
solutions
special Then
equation
Ux
x
Ax
was
but Xparticular 0 solutions, as in equation
pattern,
to Ax
0
=
(or Ux
=
was
to
Ax,
b and
=
Ux,
=.
0). Each free variable, in of special solutions).
0
=
x, + (any combination
=
=
solution
particular
a
x,
Rx
or
=
the
0, the particular solution was not written in equation (2). Now
x
to the
is added
p
It fits the
vector!
zero
nullspace
(4).
Question: How does the reduced form R make this solution even clearer? You will it in our-example. Subtract see equation 2 from equation 1, and then divide equation 2 side, this produces R, as before. On the right-hand side, by its pivot. On the left-hand these operations changec (1, 3, 0) to a new vector d (—2, 1, 0): =
=
-
Reduced
equation
2 1
1
exactly identity the
matrix
n
can
O
0
0
_
the
and free
we
columns
pivot
before
variables.
That
many) has immediately go directly
of
of d
entries
this section,
variables.
free
The
out
Then
ignored.
equation (4). is sitting in
variables r
—
4
(one choice
x, be
summarize
me
pivot
and
in
as
Let
of
solution
2 and
Columns
1}.
>
(5)
wl =
particular
=
_
1,
0
0
Our
u
0-1)
0
=d
Ry
_
P53
If important
|
Pa free have into
0 J
variables u
=
v
y
—2 and
w
0.
=
1,
=
is because
This
x,.
=
the
of R!
working there
are
number
r
reveals example. Elimination are r pivot variables r pivots, there is the rank will be given a name—it a new
the matrix.
Ax= b to.Ux==c andRx:_d,with pivot elimination. reduces “Suppose
9D.
=
r
isr. T helast The rankof those.matrices rowsand r pivot.columns. of U and R are zero,80 here lutiononly.ifthe ii mr of C andd entries J ast ate also.Zero.
-
m
a
2
eg
is solution complete
r
TOWs
3
©
The
—
pt
xy
|
x hasall free 7 solution Oneparticular Xp -
Its pivot.variables pad. arethefirst entries of d,so.RXp variables solutions, with 2Xnarecombinations solutions nullspace. of.n—-Trspecial
~The
r
zero.
onefree.variab
can be qual|to 1.The pivot.variables in that: solution § Special ° foundiin.thecorresponding column of R(withsignreversed). e
You
pivot There
see
the rank
how
m
is crucial.
in the column
columns are
r
—
r
solvability
It counts
the
space. There are conditions on b or
n
c
-
pivot rows in the “row space” and the in the nullspace. r special solutions —
or
d.
Vector
r2
Spaces
W orked
Another The full
picture The
and rank.
3
Example and
elimination
uses
A has rank
4 matrix
by
pivot
is
Ax=b
columns
to find
the column
1x,
+
2x. +3x3+
5x,
=
2x,
+
4x. + 8x3; 4+ 12x,
=
bp
=
b;
3x, + 6x2
7x3
+
13x4
+
b,
4.
cl], to reach a triangular system Ux a solution. on b;, b2, b3 to have Find the condition Describe the column space of A: Which plane in R?? in R*? the nullspace of A: Which special solutions Describe
5.
Find
6.
Reduce[U
1. 2.
3.
The
multipliers
(0, 6, —6) and the complete x,
=
from
d]: Special solutions
c]to[R
the
how
(Notice
to Ax
right-hand
in elimination
are
(6)
c.
=
solution
a particular
Solution 1.
b]to[U
Reduce[A
nullspace,
space,
2:
R and x, from
side is included
2 and 3 and
as
extra
an
+ x,.
d.
column!)
—1, taking [A b]to[U
cl].
/
‘123 2
bl]
5 4
3.6
8
12
7
13
2.
The last
3.
The column
[1
bl|>|0 bs]
3
0
2
0 —2
O
2)b—2b)} —2|b; —3b;
= 0.
5b;
—
That
makes
Ax
Just switch
Ux
=
c
=
=
21b,
0
0
O
0/b3-+
—
2d,
by ~ 5h,
=
0. This
1,
=
x.
so is
=
is in the column
the
for equation O and x,
=
x4
to Ax
Ux
in Rx
0
=
=
the
0,
plane =
x4
1:
0 {i
0
0
—
0
1
anal
0. Elimination 5b, (0, 6, —6), which has b3 + b, with free variables 0: (0, 6, 0). Back-substitute —
=
space.
—2|
1
N=
0
=
=
b
[—2 in
signs
solvable,
=b
—
Back-substitution
b
10
2
—
of A pass this test b3 + bz 5b, (in the first description of the column space). in N have free variables The special solutions
Choose
>
the
shows
Nulispace matrix Special solutions
5.
5b
0
Shy
All columns 4.
;
3
12
0. 0. Then 0 5b; solvability condition b3 + b, is the all combinations of the pivot columns plane containing space of A The Second column all vectors (3, 8, 7). description: space contains
equation
(1, 2, 3) and with b3 + b,
2
takes
=
Ax
=
b to
=
oO
OW free
Oe
Particular
solution
to
Ax,
=
(0, 6, —6)
Xp
free
(0, 6, —6) is (this x,) + (all x,). Inthe reduced R, the third column changes from (3, 2, 0) to (0, 1, 0). The right-hand side c (0, 6, 0) becomes d (9, 3, 0). Then —9 and 3 go into x,: The
6.
complete solution
=
=
=
TY
[Uc
to Ax
0
6 0
12
—> |R d|
=
0
0
01
0
0
2 1
0
0
final
That and
matrix
2 and
| in
nullspace
matrix
1.
Construct
the
@
with
system side.to
A and
3. Find
0
1
|0 1
1
0
complete
4), Carry (“Mx
form
the
(has
steps
same
equations,
solutions
(the
B=|4
0
3
|:
0
2
Find
0.
=
and the
variables,
fk
in the
Change
2
3
5
6].
os
9 solutions:
special |b
ar
_
previous problem
free?
are
all solutions.
solution) when b satisfies form as equation (4).
same
as
Bx
solution.
no
variables
Which
ranks.
1
1
a
but
x,.
their
0 and
=
U, the free
in the
solution
out
special
78
to Ax
b is consistent
=
in the
0
A= _|0
Ax
to find
form,
]
solutions
the echelon
than
and find all solutions
B to echelon
special
unknowns
more
zero
A=
the
0
=
right-hand
Find
2ond numbers
c]). The
d]isrref({A
b]) =rref((U of R have opposite sign the free columns d. N). Everything is revealed by Rx
a
Reduce
[R
85
and Ax=b
Solving Ax=0
2.2
to
b.
=
find the
.
complete
Find
solution
the
of
b:
=
0 2
M=}|,
b=
|,
6 ba:
5.
the
Write
complete
solutions
5.)Describe
x, +
=
x,
the set of attainable
these systems,
fo
right-handsides 1
b
0
0
Se
1
23) the constraints
nation). What 7.
Find
the value
is the
of
as
in
equation
(4):
c
on
rank, and that makes
b that
(in
space) thecolumn
for
rb | be
by finding
oa
sief-—] Baal fef-Gh
pe 4
x
ioe
{
turn
bs | by
the third
,
equation
into
0
=
0 (after
particular solution?
a
it
possible
u+
to solve
Ax
v+2w=2
2u+3v—~
w=2=5
3u+4v+
wc.
=
b, and solve
it:
elimi-
er2
Vector
8.
Spaces
Under
conditions
what
b; and b2 (if any) does Ax
on
A= {fl
Find
9.
in the
vectors
two
(a) Find
the
>
0
4
0
3
+
o=
of A, and the
nullspace
solutions
special
2
to
Ux
1 000 ;
solution?
1b
a
_
to Ax
=
b.
U to R and repeat:
0 =4) *|" 0 1 2) 177}= Jol. 0 of |% 2
Ux=|0
a
complete solution
0. Reduce
=
b have
=
3
J
L
fx)
is
(b) If the right-hand side solutions? 10.
Find
a2
by
3 system
Ax
b whose
=
complete
solution
oa
Ps
j{2| Ll
3
a
by
13.
no
give
minus
the total
The number
of columns
minus
the number
The number
of Is in R.
rows
nonzero
4 matrix
of all Is.
4 matrix
with a;;
4 matrix
with
R for each
of these
2
variables
is the
nullspace
=
=
all
of
of free
and the
solution
no
x,. (Therefore
of A?
rows.
columns.
of these
special
Al
first, the reduced I
F
10
O
~~
N
b3.
x,?
number
B=[A
3
matrix
=
matrices:
(—1)". (—1)/.
0
come
b; + by
but
4 6
R=
What
a;;
x,
and the rank
(block) matrices,
0
A=|0
pivot
forms
echelon
row
when
in R.
of columns
r
are
is
of the rank
The number
the reduced
If the
definition
a correct
00
15.
exactly
of
Find
Find
what
(a,b,0),
L.
The number
(a) (b) (c) (d)
(a) The 3 by (b) The 4 by (c) The 3 by 14,
rules
of these
0
b with many solutions Which b’s allow an
=
~“
Which
to
+w
solutions
with these
3 system
a2 by 2 system Ax @) Write the system has solution.)
12.
(0,0,0)
|
x=
Find
from
changed
containing
R must
solutions:
c= af look
like
Tisrbyr Fisrbyn—r the
special solutions?
16.
all
Suppose
echelon
reduced
variables
pivot
r
form
block
(the
Describe
last.
come
be
B should A
by r):
r
four blocks
the
|
and Ax=b
Solving Ax=0
2.2
in the
m
87
by
n
.
B
r=[e 3h is the
What 17.
matrix
nullspace
N of
all 2
Describe
(Silly problem)
If A has
which
the column
What
the
are
numbers
solutions
special
Ahas rank the
S from
r,
thenithas
pivot
pivot
r
Rx
to
2
31
1
4
5
0
0
0
0
=
3
a
for these 1
2|
0
Ol.
0
R=i0
by
byr submatrix and pivot columns of
Find that
S that is invertible. each
Find
pivot
‘Oo
1
OO
0
0
Of.
1
the ranks
of AB and AM 2
1
A=);A 23.
24.
column
Every sions
25.
two-sided
If A is 2
which
AC
27. SupposeA A to B
28.
by
Every m
inverse.
C
J. Form
=
A
|;
rank
1
r
matrix): lb
4
‘|
15
uv! and B
A=
combination
a
and
|! bel:
m=
wz! gives uz!
=
of the
give rank(AB) B
the
are
columns
the number
times
of A.
Then the dimen-
Problem:
rank(A).
R*
as
be
disguised
be
cannot
the value
we
give
=
—1,c;
C,
|
of the columns
third
equals
Spanning Now
we
A is
a
1 to the free
1. With
=
these
Every
m.
by
m
n
system
=
0
1
2
1
3
;
producing
variable
three
J
weights,
zero
1
solve
we
Ac = 0:
1
c3, then
back-substitution
the first column
in Uc
minus
0
=
the second
gives
plus the
Dependence.
zero:
Subspace
define
what
spanned by
it
means
the columns.
for
of vectors
a set
Their
combinations
to span
a
space.
produce
the
The whole
column
a
wi,...,
space
of
space:
of aH ‘Te,vectorspace Vvconsistsof all linearvin combinations is some combination ofthe theSpace. vector Me these vectors span Every =
Ax
independent:
O If
of 2C:
form
>
n
m.
A=
To find the combination
if
linearly dependent
We,
then w’s:
s
_
Everyv comesfromws
Us =
CyW,
+°
ste Cew,
for some
coefficients
Ci.
2.3
It is
that
permitted
c’s need
include
The
be
not
the
only
column
is
space
gives
a
combination
— (—2, 0,0)
wy
this
span
definition
The
of the columns; vectors
it is
e;,...,
1s
a
also span
for
a
To decide
whereas
plane,
the columns
a
is made
and
w,
w3
span
combination
space. matrix span R". Every identity In this example the weights columns.
from
of those
b
=
bye;
spanned by its columns. The row to order. Multiplying A by any x
Ax in the column
vector
coming
€,
is
+---
the
+
b,e,.
try
to
leads
a
Space combination
of the
columns,
are
we
Ax
solve
crucial
idea of
a
if
b. To decide
=
the column
=
to the
of other
columns
the
But
0. Spanning involves independent, we solve Ax vectors independence involves the nullspace. The coordinate they are linearly independent. Roughly speaking, no vectors in
This
plane (the
a
span
R"!
Vector
if b is
space that
the
exactly
rows.
(b;,..., b,)
=
matrices
and
and
also
vectors
b; themselves: components
the
and
the
spanned by
b
vector
Basis
set
(0,1,0),
=
w. two
of A is
space
The coordinate
are
v. The give the same vector might be excessively large—it could
w’s could
of
spanning
95
and Dimension
all vectors.
even
(1,0,0),
=
the
Independence, Basis,
line.
a
The
or
R°. The first
in
plane)
x-y
vector,
w;
combination
because
unique,
zero
vectors
different
a
Linear
space, é€, span R"”
e;,...,
that set
wasted.
are
basis. 0
1
isa sequence of:vectors: forVis t| woproperties Abasis atonce:oe a shaving " 1 ~The vectorslinearly independent (nottoomany vectors). -oa - They the . . eS Sen toofew (not vectors). aa
rece
are
span
This
combination
in the every vector It also means that the
that
V
space
of
-
is
properties space is
combination
is
bjv; +--+ + byug, then subtraction a; b; plays its part; every coefficient v
=
—
and
only
one
We had
way to write better say at
for R”.
basis
Some
v as
once
a
of the basis
combination
a
in linear
If
unique: gives
0
be
must
combination
that
=
v
the coordinate
a
basis
for
vectors,
algebra. It means because they span. +
+--+
a,v,
avg
and also
b;)v;. Now independence Therefore a; b;. There is one
5°(a;
zero.
of
=
—
=
the basis vectors
vectors.
but not
are
are
e;,...,é@,
things algebra unique, bases. Whenever a square matrix infinitely many different independent—and they are a basis for R”. The two columns are
to linear
fundamental
absolutely
this.
the
A vector
is invertible, of this
not
space its columns
nonsingular
2
Every two-dimensional-vector
is
a
combination
3
of those
|
o
|
(independent!)
has are
matrix
R?:
asl) |
only
columns.
|
r2
Vector
Spaces
The x-y plane in Figure 2.4 is just R*. The vector v; by itself is linearly independent, but v, it fails to span R’. The three vectors v2, v3 certainly span R’, but are not independent. Any two of these vectors, say v; and v2, have both properties—they span, and they are So
independent.
form
they
Notice
basis.
a
that
again
vector
a
does
space
have
not
a
unique basis. ~*~
U3
V2 8
Y
Uy
Figure
2.4
These
four
A
spanning
columns
set
span
v1,
v2,
v3. Bases
the column
space
matrix
Echelon
v1,
v2 and
of U, but
that
are
To summarize:
dent, they we
are
be square
3
2
0
3
1
0
O
0
U=10 0
independent:
|
are
asking
the column
didn’t
space
elimination—but
C(A) before
change.
of any matrix span its column space. If they are indepencolumn space—whether the matrix is square or rectangular.
for the
the columns
to be
a basis
for the whole
space R",
of
A space has these choices.
We have
The
x-y
a
Vector
infinitely
The number
different
of
basis
bases, but
must
/
vectors
is
a
#hereis
something
common
property of the space
itself:
prove this fact: All possible bases plane in Figure 2.4 has two vectors
to
the matrix
/
Space many
then
fo
and invertible.
Dimension
the number
The columns
basis
a
as
é
tors.
not
v3.
many
C(U) is not the same of independent columns
If
are
v2,
possibilities for a basis, but we propose a specific choice: The columns contain pivots (in this case the first and third, which correspond to the basic variables) These columns are a basis for the column space. independent, and it is easy to see they span the space. In fact, the column space of U is just the x-y plane within R°.
There
are
they
r{4 3
|
that
v1, v3 and
contain in every
the
same
number
basis; its dimension
to
of
all of
vec-
is 2.
23°
In three
dimensions
had
is rather
matrix
convention, Here
Proof
2; it
was
because
exceptional,
there
Suppose v’s form
combination
of
are
basis, they
a
the v’s:
Tf wy
=
space space, and its
must
v’s
span
other
(linearly
column
space of U of R°*.” The zero
By
zero.
at a
>
vector.
zero
dimension is
m). We will arrive the space. Every w; can
(n
contradiction.
be written
of
this is the first column
+-+++Gn1Um,
ay,v;
The
subspace contains only the
its column
w’s than
more
in three
or
“two-dimensional
a
the empty set is a basis for that is our first big theorem in linear algebra:
the
Since
dimension
the x-y-z axes space R" is n.
along of the
vectors, The dimension
directions.
independent!) in Example 9
three
need
we
97
Independence, Basis, and Dimension
Linear
as
a
matrix
a
VA:
multiplication
,
W=]|u,
Wo
column A is
a
VAx
=
of A. The 0 which
If
m
n
>
was
a
of
more
a
of vectors
earlier
used
than
You
speak
we
k vectors
“dual”
show
to
w’s need
In fact
point is that a basis losing independence. A and still
a
the w’s
gives zero! The
the space. must notice that
set
can
see
or
maximal
basis
that
every
be
0. Then
=
w’s could
is also
m
not
be
a
general result: independent, and no
big,
independent set. It a minimal spanning
™
in R”
be
In
proof was all about the a subspace of dimension of fewer than k vectors
only
a
cannot
set.
repeat:
1 vectors
set
mention
and end up with
To
to
+
vectors—the
we
= n.
m
basis.
this
of which
too
of
that
proof
in every
be column
can
small
a
w.
every
steps. The only way
same
of vectors
not
theorems,
is
to Ax
solution
nonzero
for
column
a
m.
is the number
that is too
The
>
space
v’s and
span the space. There are other set
of
is
and
v
every
=
A of coefficients. set
0. A combination
for
the v’s and w’s and repeat the to have n. This m completes the
is
This
matrix
row
There
haven
contradiction
proof dependent. The
=
a
shape
m.
>
n
J
of A (it is m by n). The second vector fill the second in that combination coefficients
the
A has
Gm}
exchange
we
of
can
is that
key
cannot
we
a
no
know
we
of the v’s. The
is Wx
The dimension
k,
a;;, but
Short, wide matrix, since
basis—so avoid
each
=
I
Um
...
hee
combination
a
|v,
:
know
w> 1s also
=
Wrl
...
..
We don’t
r
4
-
one.
We
can
must
start
with
basis:
be made It cannot
larger without
be made
smaller
span
about
a
four-dimensional
the
word
is used
“dimensional”
vector,
meaning
a
vector
in two
in R*. Now
different we
ways. We have defined a
er2
Vector
Spaces é
four-dimensional
subspace;
example is
an
the
The members zero. of this are components dimensional vectors like (0, 5, 1, 3, 4, 0). One final note about the language of linear algebra. We matrix”
It
first
and
are
six-
subspace four-dimensional
last
a
in R° whose
of vectors
set
is
a space”
of
“rank
or
of
the dimension
“dimension
or
the column
of
that
space
basis.”
a
These
the rank
equals
the terms
use
never
phrases have of the matrix,
of
“basis
no
meaning.
as
we
prove
in the coming section.
Problems1-10
e yy Show
that
v,,
linear
about
are
v2,
but v;,
independent
are
v3
0
=
2.
the
Find
-+c4v4
+--+
c,v;
{-l
0
1
a
=
0, d
0,
=
f
or
0
=
_
0]
{i
Uy
|
spanned by
0
0
1
|
of A.
among
_
U6
=
0
=
0
{|
it
1)
hee
mu
of the space
is the
that if
Prove
0
9
.
_
u4=
14
vectors |
0
|
_
3=
|
mu 3.
independent 1
0
{|
_
2=
=
number
of
3].
u=
1
v’s go in the columns
0. The
=
1
_
This
Ac
or
number
largest possible 1
UL
0
=
Ld
nd
L-
2
1
30
0 od
dependent:
are
v3, v4
v2,
dependence.
1
]
v=
ls
Solve
linear
1
1 uN
and
independence
the v’s.
(3 cases), the columns
of U
are
dependent:
abe U=
4.
If x
5.
a,d,
the
Decide
vectors
bases
found
for
Y=16 ~
V2
gives
W Zero.
are
that the
only
solution
to
Ux
0 is
=
of
9
9|
0
2?
Ww;
—
84
W2
show
3
10
A=] ~
ate
Do the
choices.
that
4
67
|
vectors,
other
two
same
spaces?
0
0000] =
U. Then make
1)
4
independent
W3, and v3
of
which
067
—
show
Ff]
columns.
columns
independent
for A. You have
=
0
et.
(1, 3, 2), (2, 1,3), and (3, 2, 1). (1, —3, 2), (2, 1, —3), and (—3, 2, 1).
vectors
If w1, Wo, w3
10
independence
or
23
7.
d
all nonzero,
are
independent
dependence
three
(/., Choose
Problem 3 U has
0. Then
=
(a) the (b) the ‘Se
in
f
10
9
o
468
the
dependent. Find
4/1 0
oI
2
differences a
combination
ey
vy
=
wy
—
wW3,
of the v’s that
Linear
2.3
8.
If
wi,
V2
in terms -
Find
two
and
solve
vectors
dependent
are
vectors
v;
v;
and
and
11-18
about
are
0
=
because
the
0 in R*. Then find three 2y —3z—t plane is the nullspace of what matrix?
plane
four?
+
x
This
=
spanned by
space
all linear
Take
vectors.
of
set
a
of the vectors.
Describe
the
vectors
the columns
with
subspace spanned by
The vector
.
or
Find
the dimensions of A,
space
is in the
c
If the
True
false:
or
plane
a
by 5 echelon matrix positive components.
b is in the
to
line
a
of a3
all vectors
The vector
it
R*?) spanned by
or
(1, 1, —1) and (—1, —1, 1). (0, 1, 1) and (1, 1, 0) and (0, 0, 0).
vectors
the three
R? (is
of
subspace
the two
row
the
+ .¢303
be
are
on
v1 + cov2
dependent if dependent because
v2 will
(0, 0, 0)
v;=w.+w3,
sums
R’.
in
vectors
independent vectors independent vectors. Why not
(a) (b) (c) (d)
13.
Find
U3, U4 are
vectors
combinations
12.
+ we
two
Problems
11.
w,
=
v3
of the w’s.
(b) The (c) The 10.
W3, and
SUPPOSEv1, V2, (a) These four
vectors, show that the are independent. (Write c, equations for the c’s.)
independent
are
wW2,w3 W, +
=
99
and Dimension
Independence, Basis,
vector
zero
of
2
pivots.
the columns
of A when
there
is
a
solution
to space of A when there is a solution is in the row are space, the rows dependent. row
space of A, (b) the column space of U, (c) the of U. Which two of the spaces are the same?
(a) the column
the
(d)
with
row
space
ao
C
Choose
(x4,
15.
16.
x
=
X3, X1,
vectors
3
1!
01
-1-
so
x
+
w
and
v
v
+
w
and
v
a
basis
for the
Decide
whether
—
—
The two
w.
0]
2
1/1.
0
0
pairs
the
not
or
+404
following =
v
the
and
w
same
combinations
as
space.
When
are
of
they
they
vectors
linearly independent, by solving
are
0:
span to
of A. How
against independence?
does
for
v3
=
a
i?
0.
R*, by trying be tested
1
0
1? |
0
0
0 v=
[0]
of the columns
Write
w.
1
ol
the vectors
and
of vectors
]
also if
v
like
space?
same
v=
Suppose
U=/0
0
of
combinations
fl
17.
and
1
X4) In R*. It has 24 rearrangements
X3,
are
w
+.C2Uq +.€303
Decide
1
(x2, x1, x3, x4) and 24 vectors, including x itself, span a subspace S. Find specific that the dimension of S is: (a) 0, (b) 1, (c) 3, (d) 4.
(x1, X2, X2). Those
v
CyVy
O|
A=|l1
}3 14.
1
ri
a
ol:
aa
1 to solve
c;v;
independence
the elimination
+--+ are
process
+ c4v4
placed from
into
=
(0, 0, 0, 1). the
rows
A to U decide
instead for
or
ter2
Vector
AS.To
bh
Spaces
= (1, = (1,
w;
w;
19-37
Problems
in the
is
subspace spanned by
of A and try to solve
the columns
(a) (b)
b
whether
decide
1,0), 2,0),
we
the
about
are
1), 0),
= (2, 2, = (2,5,
we
Ax
= b.
What
w3
for
requirements
let the vectors
Wy,
...,
w
be
m
by
for result
the
is
b = (3, 4, 5)? wa= (0, 0, 0), andany
0, 2), 0, 2),
= (0, = (0,
w3
wy),
b?
basis.
a
oy,
19,
If vy, These n
20.
UV, are
...,
vectors
then
matrix,
a a
are
a
(a) (b) (c)
All vectors
whose
All vectors
whose
All vectors
that
Find
for the
Suppose
components components
zero.
and
(1, 1, 0, 0)
to
(1, 0, 1, 1).
nullspace (in R°) of
for the column
of U above.
space
c:
U=
find two
Then
Six vectors
V6 are
in
24.
of A? If
they
Find
a
basis
for the
of that
to the
plane plane.
R”, what
span
plane
x
the
with
,
different
—
x
R”. If they
is the rank?
2y +3z y-plane.
Then
a
have) (might
is the independent/ what what then? for
R°.
find
not
linearly they are a basis
are
If
Oin
=
R‘.
Then basis
find
R”, /
a basis
for the intersec-
perpendicular vectors
for all
i
of
the columns
Suppose
from
vectors
aren
rank
tion
25.
of A
The columns
4
R‘.
(do)(do not)(might not) span R‘. (a) Those vectors (are)(are not)(might be) linearly independent. (b) Those vectors vectors of those (are)(are not)(might be) a basis for (c) Any four (d) Ifthose vectors are the columns of A, then Ax = b (has)(does not have) a solution. 23.
) ,
of U.
space
v1, V2,...,
an
R*:
of
add to
R’)and
(in
of
the columns
equal.
are
perpendicular
are
space
row
has dimension
n.
UPasa
different bases
three
bases
22.
55
for each
basis
than
of these subspaces
Find
(d) The column 21.
aaa fo
is
m
the space they span for that space. If the vectors are
independent, >_
5
a
by
5 matrix
A
are
a
(a) The equation Ax 0 has only the solution bis solvable because R° then Ax (b) If bisin =
basis x
R°.
for
0 because
=
=
26.
Suppose
Its rank
A is invertible.
Conclusion:
S is
a
five-dimensional
subspace
(a) Every basis for S can be extended (b) Every basis for R° can be reduced 27.
U
comes
from
A
is 5.
by subtracting
row
of
R°.
to a basis
True
for for
to a basis
1 from
row
false?
R° by adding one more vector. S by removing one vector.
3:
2
13
or
‘
f1
3
02)
1
1}.
0
OF
Y |
A=|{0 1
Find bases for the two
for the two
nullspaces.
column
1
1]
3
2]
spaces.
and
U=/0
(0 Find bases
for the two
row
spaces.
Find
bases
28.
True
(a) (b) (c) (d) 29.
false
or
Ifthe
(give
good reason)?
a
columns
of
a
matrix
The
column
space
The
column
space of a 2 of a matrix
The columns
For which
numbers
of
2 matrix
is the
2 matrix
has the
are
5)
0 c
2
2] and
0
d
2
pivots,
a
TO 5
each
Express
31,
Find
also
Find
a
Find
matrix
a
that is not
A with
counterexample
vector
32.
column
R*, and
space
to
the
if W is
0
0
0
0
}0
0
0
0
the dimensions
of these
in the basis
vector
1
as
form
U, but
different
a
Suppose
V is known
to have
column
space.
v4 is a basis
statement:
If v;,
v2,
then
subset
of the v’s is
some
columns.
of the basic
combination
a
v3,
a
for the for W.
basis
spaces:
in R* whose components (a) The space of all vectors (b) The nullspace of the 4 by 4 identity matrix. (c) The space of all 4 by 4 matrices.
33.
of
;
following subspace,
a
space.
3)
4
02
this echelon
row
2?
space
0
U=
its
as
B=` ‘|
for the column
basis
space.
space.
rank
|
find
row
dimension
same
have
matrices 0
0
rows. its
as
same
for the column
basis
a
the
are
so
25
0 the
by by
and d do these
c
A=|0
By locating
dependent,
are
2
a
r1
30.
101
Basis, and Dimension
Linear Independence,
2.3
zero.
that
k. Prove
‘dimension
add to
in V form a basis; (a) any k independent vectors that span V form a basis. (b) any k vectors In other
properties 34.
True
a
basis
six vectors
solution
(b) AS by
37.
If A is
W
are
in
is known
of vectors
to
be correct,
either
of the two
the other.
three-dimensional Hint:
common.
subspaces Start
with
of
R°,
then
V and
for the two
bases
W must
subspaces,
in all.
false?
or
(a) If the columns
36.
implies
vector
nonzero
a
making 35.
of
that if V and
Prove have
if the number
words,
a
64
How
many
Find
a
basis
of A
are
for every Db. 7 matrix never has
linearly independent, linearly independent
then
Ax
=
of these
of 3
subspaces
(a) All diagonal matrices. (b) All symmetric matrices (A’ (c) All skew-symmetric matrices
=
by
A).
(A'
=
—A).
5b has
exactly
one
columns.
by 17 matrix of rank 11, how many independent 0? independent vectors satisfy Aly
for each
=
3 matrices:
vectors
satisfy
Ax
=
0?
Vector
ter2
Spaces
38—42
Problems
about
are
the “‘vectors”
in which
spaces
0. (a) Find all functions that satisfy < (b) Choose a particular function that satisfies (c) Find all functions that satisfy 2
38.
functions.
are
=
2
3.
=
=
all combinations space F3 contains 3x. Find a basis for the subspace that has y(0)
cosine
The
39.
C
cos
Find
40.
a
that
Bcos2x
+
+
0.
=
satisfy
dy
(a)
mY dy
b)
(b) 41.
of functions
for the space
basis
y(x)=Acosx
——--=0
y x
Suppose
y;
are
have
1, 2,
could
span
dimension
different
three
(x), y2(x), y3(x)
functions
3. Give
or
of
space they of y;, y2, y3 to show each
example
an
The vector
x.
possibility. Find
42.
a
subspace 43.
the 3
Write
are
44,
gives
entries
combination to
of 3
by
following
are
subspace
of the
a
five matrices
and check
zero,
as
A is 5
by
matrix
[A J]
is invertible.
by 5 singular.
not
how
with
dealt
section
previous is, but
4 with
Suppose
the 5
we
that those
Which
Review:
basis
matrix
identity
of
polynomials p(x)
3
five
of the other
a
basis
for the
matri-
permutation
linearly independent. (Assume a combiprove each term is zero.) The five permutations all equal. matrices sums with row and column are
for R??
bases
to
find
one.
rank
4. Show
Show
definitions
rather
like to compute an explicit basis. Subspaces can be described in two ways.
that
that
Ax
than
Now, starting from
Ax =
b has
=
bis
no
solvable
solution
when
when[A
b]is
We know
constructions.
explicit description of
an
a
what
subspace
would
span the space. (Example: The the vectors be told which conditions
that
consists
that
of all vectors
The
first
description
satisfy may
Ax
include
columns
First,
=
may the column
span
in the space
given a set of vector: space.) Second; we mat satisfy. (Example: The nullspac
we
must
echelon
matrix
U
or
a
reduced
useless
R,
be
0.)
ww!
(dependent columns). The (dependent rows). We can’t write vectors
description may include repeated conditions by inspection, and a systematic procedure is necessary. The reader can guess what that procedure will be. an
< 3. Find
degree
(1, 2,0) and (0, 1, —1). (1,1, —1), @, 3,4), 4, 1, -—1), O, 1, —1). C1, 2, 2), (-1, 2, 1), (, 8, 0). (1, 2, 2), (—1, 2, 1), (, 8, 6).
(a) (b) (c) (d)
The
3
for the
basis
a
Review:
45.
by
show
ces! Then nation
of
for the space with p(1) = 0.
basis
we
will
find
When a
basis
elimination for
each
on
of the
A
secon a
basi
produce subspace
n.
B=(A'A)'A™
the rank
when
x
oo.
formulas
I and AC
We show
invertible. has
=
solution
or
inverses
One-sided
Certainly
is 1
uniqueness may
There
R”
span
is
if there
solutions
will be other
But there
BA J,. This
possible
one
case,
|
os
=
inverse
make
if the rank
exactly
sense
actually
are
ism, and AA‘ the rank
when
found.
are
2:
4=|95 o| Since
r
=
m
=
2, the theorem
guarantees
a ~
right-inverse
C:
7
;
0
ac=|53 offo 4 ]=[oah C31
|
C32|
the last row of C is many right-inverses because but not uniqueness. The matrix A has no of existence
There
completely arbitrary.
are
a case
column
of BA is certain
C31 and
c32 to be zero:
to be zero.
The
left-inverse C
specific right-inverse
=
A'(AA")
right-inverse
=
|0
of A
5 Of
E ‘| LO
pseudoinverse—a way of choosing the best C yields an example with infinitely many /eft-inverses:
is
chooses
rd
0 This
the last
because
A'(AA!')~!
~
Best
This1S
the
in
3
Section
0
f/=C.
o
ol
6.3. The
«
transpose
-
pat=
|) >)
Now
it is the last column
pseudoinverse)
has
b,3'=
Blo 5
3110
of B that is
b23
=
0.
5
01
=| A
completely arbitrary. The best left-inverse This is a “uniqueness case,” when the rank
(also the is
r
=
n.
There You
can
free variables, since n when this example has
no
are
see
4
0
x
*
A
rectangular
n,
we
matrix
haver
cannot
or
solution
a
it will be the
solution:
no
only
one.
-
is solvable
when
exactly
b3
0.
=
[by| both existence
have
cannot
andr
=m
solution
one
|b,
=
E*
OF
0. If there is
=
109
Subspaces
[by
0
QO 5
r
—
Fundamental
TheFour
2.4
and
uniqueness.
If
is different
m
from
=n.
A square matrix is the opposite. If m have one property without cannot n, we the other. A square matrix has a left-inverse if and only if it has a right-inverse. There is A™!. Existence only one inverse, namely B C implies uniqueness and uniqueness =
=
when
implies existence, r
=m
Each
=n.
The columns
span
2.
‘The columns
are
This
list
condition
&
is
The
rows
The
rows
is square. The condition for invertibility is full conditions is a necessary test: and sufficient
R”,
Ax
so
much
equivalent
Ax
so
if
other, and
have
A.
typical application
to
n
a
an
that
polynomial can
eigenvalue of positive definite.
is
Here
exists
point
is that
vanishes
+--+--+x,t"~!matches
all
polynomials P(t)
uniqueness, a polynomial of degree n are we dealing with a square
x1 + xot
with
LDU,
=
n
chapters. Every
1
—
1. The only such degree n 1 other polynomial of degree n —
of
these n
of coefficients
Dn, b;,..., b;. The in P(t) =
=
equations:
Pxy)
oT
#
4
any values values: P(t;)
Given
matrix; the number
Lot
Interpolation
—
interpolating
the number 1
pivots.
of
4, is P(t) =0. No and it implies existence:
at t,,...,
This is
roots.
there
to later
ahead
0.
A is invertible.
that
ensures
PANE can
is not
A: A is
look
we
bD.
R”.
linearly independent. be completed: PA determinant of A is not zero.
Zero
only
rank:
are
Elimination The
for every x = the solution
solution
one
0 has
=
longer, especially
to every
of A span
b has at least
=
independent,
be made
can
the matrix
of these
1.
=
Tb, 2)
x2]
ov
7
P(t)
bj
-
.
PL That
polynomial
be
can
te
ty
by n and passed through any b;
.
°
full
n
at
eee
rank.
distinct
.
.
ET Ax
=
points
Dn |
[on b
always
t;. Later
we
has
shall
a
solution—a
actually
find
zero.
of Rank 1
Matrices
Finally
is
of A; it is not
the determinant
matrix
matrix
Vandermonde
.
comes
with
the easiest rank
case,
when
0). One basic theme
the rank
is
as
of mathematics
possible (except for the zero is, given something complicated,
small
as
Vector
er2
show
to
Spaces
A=
1
the
algebra,
has
of the first row,
multiple
a
matrix
the whole
write
can
is
row
linear
For
simple pieces.
simple pieces
1:
Rank
Every
into
be broken
can
of rank
matrices
are
it
how
the
as
the
so
product of
1.
r=
In fact, space is one-dimensional. column vector and a row vector:
row a
we
f
(column)(row)
=
2
1
I)
iff2
4
2
2
2
product
of
1. At the
rank
column
4
a
,
The
1.
2.
rows row
multiples
If
false:
True
or
then
the
Find
the dimension
=
m
nullspace
-1
space
are
n, then
the
larger
a
3 matrix
by
all
4
—-14 is
A
v',
vector
lines—the
3 matrix.
T
uv
column
=
space of A equals the dimension than _ a
basis
for the four
the
are
all
row.
multiples
of
u.
space. column
subspaces
Ifm
associatedwith
=
}-2 The
1]
1
Lee
with
ee
| .
If the
matrix, AB in the nullspace of A. (Also the
product
contained of B, since
AB is the
each
row
zero
of A
multiplies
=
B to
0, show that row
give
space a
zero
thecolumn
of/A row.)
space of B is is in the left nullspace
The Four Fundamental
24
.
A is
Suppose
an
by
m
matrix
n
of rank
what conditions
Under
r.
111
Subspaces
those
on
numbers
does 1? inverse: AA~! A7!A (a) A have a two-sided Ax b have infinitely many solutions for every b? (b) =
=
=
is there
.
Why
.
Suppose Find
only solution and why? The
1
by 4x3
a
2x2 + 10.
is
11.
y
b
=
QO.Hint:
=
If Ax
O has
=
Find
has
to Ax
3 matrix
Construct
are
of A and write
given
show
that
14
Find
1. What
=
as A =
0
037.
0
0
0
the
O
0
6
If the columns the
16.
nullspace
(A paradox)
of A
b
a= i|=" 7
17.
Find
B
a
matrix
if V is the
to be
solvable for
r
for
0
O
a
0
ww
linearly independent (A
are
the
A has
row
A that has V
as
is
satisfies
its
row
BA
=
m
,
and
J; it is
and
space,
by n), then
is
B. Then
right-inverse
a
But that
space
a
thereexists AB
a
matrix
the rank
=
I
leads
left-inverse. B that has
Whichstep its
V as
=o
Find
a
basis
for each
rO
21,
OF
Of
of the four
123
subspaces 4
A=/01246/;/=1]1
|0
00
1
2
r1
,
8 of 0
O]
1
0
}O tT 1;
[0 000
1
2
;
A'AB
to
m1
1},
is -inverse.
a
subspace spanned by m1
18.
some
ol and w=|1 1} andT=/%7]. 1 J
1
Suppose
fails
right-inverse (when they exist) r1
,
(A‘A)~'A". justified? or
0
=
that
so
g
a
1
is
=
Ay
to
A=|§“él
.
=
15.
+
x,
pivots?
and/or
A=|'
0. What
uv":
and
4 0, choose d
witha
are
a left-inverse
=
nullspace.
same
7
rank
x
R° such that
in
vectors
A'y f A and /.
that
the matrix
3
das
of all
consists
example of
an
20 c
unknowns) is
n
solution, show that the only solution
one
solution,
A=/0 If a, b,
(1, 1, 1)?
linearly
are
with
contain
is the rank?
r1
13.
of A
nullspace
least
a nonzero
f.
(m equations in
0
=
columns
by
at
What
sides
the rank
a3
nullspaceboth
space and
row
whose
0. Find
always
right-hand 12.
3 matrix
=
If Ax
whose
the
is the rank .
matrix
no
3
4)
1
2
,O9o0d 0 0,
A?
=
is not |
nullspace,
ter
2
Vector
Spaces
19.
If A has the
20.
(a) If
7
a
has rank
9 matrix
by
is the
What
fundamental
four
same
Construct
a
matrix
the
with
the dimensions
are
3, what
its column
are
required property,
space
contains
space
has basis
(b)
of
(c) Dimension
|!| | >|
LOW
,
(ec) 22.
Row
|
Without
column
=
space
OW
,
,
of left
nullspace.
H
contains
space
left
nullspace +
space,
;
1
and bases
for the four
and
B=
and
24.
What
bases Write
Un 4
for the four
subspaces
for A,
A].
B=[A
of the four
the dimensions
are
1
~
matrix
by 6
for the 3
also
aad
invertible.
A is
ee
for
5
wi
3 matrix
by
subspaces
f
1 a
the 3
.
nullspace.
-
0
A=
Suppose
can’t.
you
H
3
23.
nullspace?
3
find dimensions
elimination,
subspaces?
| 5.
contains
has basis
1 + ‘dimension
=
(d) Left nullspace contains
and left
explain why
or
Space
nullspace
,
nullspace
of the four
space
1
Column
cB?
=
[fo
1]
(a) Column
B, does A
as
dimensions?
has rank
(b) If a3 by 4 matrix 21.
5, what
of all four
sum
subspaces
C, if I is the 3
and
subspaces for A, B,
by
3
—
and 0 is the 3
matrix
identity
0]
A={I 25.
Which
are
and
I
(a) [A]
26.
If the entries most
27.
all three
that
Prove
the
subspaces
likely
(Important) for which
of
by
Construct
©) have
matrices
four
=
an
m
b has
n
no
solution.
column 29.
Without
matrix
space.
Why
r.
be true 0 has
=
can’t
this
be
a
find bases
basis
0]
A=1!161 9
8
r.
Suppose between
for the
71 0] |0 11 10
0 and
1, what
if the matrix is 3
What
there
right-
are
by
are
the
5?
handsides
b
and r?
m,n,
solution?
a nonzero
for the four
10
sizes?
different
between randomly
(1, 0, 1) and (1, 2, 0)
with
computing A,
rank
same
of rank
must
of
(0).
C=
4 a).
nd
subspaces?
matrix
by
matrices
chosen
of the dimensions
and
ot
A the
Tf
|r |
for these
same
are
Ax
a
I
B=
a3 by 3 matrix
A is
matrix?
zero
and
(a) What inequalities (< or
3
2
of A has
in the
are
‘1
2
of
pivots. nullspace. space then A! = A. of A equals the column space.
column
space
1).
spaces
of A:
Vector
er2
38.
Spaces
If AB — 0,
39.
be
tic-tac-toe
Can
side
neither 40.
rank(A)
that
prove
passed
+
rank(B)
subspace
completed (5 up
a
the
of A. If those
nullspace
vectors
in
are
R”,
n). Its transpose is the
do linear
the
3 go
2 and
Edges
gives information in, edge 5 goes
3
matrix.
Each can
of A. Column
115
and Networks
2.5 Graphs
=
0
bz —b4+bs
=
rows
(1)
0.
equal rows 2 + (1) already produces b; + by
find that in
and
5. But this is
1+ 4
the five components, because the column would come from elimination, but here
space
they
bz
=
+
bs.
nothing
There
has dimension have
a
5
meaning
on
are —
2. the
graph. Loops: Kirchhoff’s add
(x3
to —
arrive
back
Law
—
zero.
X2)
a loop must says that potential differences around Around the upper loop in Figure 2.6, the differences x1) + satisfy (x. the circle lower Those To are differences (x3 x1). b>. b; + b3 loop and
Voltage
=
—
at the
=
same
potential,
we
need
b3 + bs
=
by.
is. Law: space Voliage Kirchhoff’ mRThe forbttobeini thecolumn ‘The ofpotential aroundaloopmustbe differences ’
test
_
sum
zero.
er2
Vector
Left
Spaces
Aly
To solve
Nullspace:
0,
=
find its
we
meaning
the
on
graph.
y ha:
The vector
for each edge. These numbers one five components, flowing along represent currents 4 on those the five edges. Since A’ is by 5, the equations Aly give four conditions in of “conservation” at each node: Flow five currents. equals flow They are conditions
0
=
node:
at every
out
—
—yi
ATy
-
yi
=0
—
¥3
+ 3
y2
Q
=
yg
—
yg +
Total
1 is
to node
current
yg
=
0
to
ys
=
O
to node
3
0
to node
4
=
ys
zero
node 2
beauty of network theory is that both A and A? have important roles. 0 means that do not “pile up” at any node. finding a set of currents Solving Aly are small The traffic keeps circulating, and the simplest solutions currents around around each loop: loops. Our graph has two loops, and we send | amp of current The
=
Loop Each
loop produces a the
whether
current
the
so —
basis
a
are
1
The component +1 The combinations
nullspace. against the arrow.
y2
0
(the dimension
had
1].
-1
—1 indicates
or
of y; and m —r to be
(1, —1, 0, 1, —1) gives the big loop around
=
and left
Part
The
Space:
mension
Two of the
is the rank
row r
=
Rows
combination
finrowspace
are
a
in the column
space
and
related.
y2 =
the outside
first three Rows
basis
1, 2, 4
for the
row
in the
row
(f;, fo, fs, f4)
rows
fi,+ft+ht+tfsA=O
The left
nullspace
satisfy b; b2 + b3; left nullspace are perpendicular! That Theorem of Linear Algebra.” space
=
—
R*, but not independent
in vectors space of A contains will find three 3. Elimination
to the
1, 2, 4
closely
are
“Fundamental
graph. The those edges form a loop). no loops. look
nullspace
vectors
in the column
0: Vectors
=
to become
Row
space
C1, —1, 1, 0, 0), and the
=
y'b
also
and
y;
y.
or
y, =/0
O/] and
0
1
of
graph. The column
y,;
with
goes
=
—
-1
y in the left
vector
the left nullspace, 3 2). In fact y;
fill
5
vectors y,=[1
all vectors. and
rows,
row dependent (row 1 + row 3 independent because edges 1, 2, 4
are
are
=
space.
/n each
space
will have
row
the entries
that
xinnullspace
same
add
to zero.
contains 0. Then is
soon
Its diwe
2,
can
and,
contain
Every
property: x
=c(1,1,1,1)
(2)
Theorem: The row space is perpendicular to the, Again this illustrates the Fundamental 0. nullspace. If f is in the row space and x is in the nullspace then f'x For A!, the basic law of network theory is Kirchhoff’s Current Law. The total flow The numbers into every node is zero. into the nodes. The f;, fo, /3, 4 are current sources source y2, which is the flow leaving node 1 (along edges 1 and 2). f,; must balance —y; That is the first equation in ATy f. Similarly at the other three nodes—conservation of charge requires flow in flow out. The beautiful thing is that A? is exactly the right Law. matrix for the Current =
—
=
=
|
117
Graphs and Networks
2.5
3s
5Current Law: Aty3faatttheodes equations 8 ‘The express Kirchhof? =
e
isZero. node every. i n = Flow. out. into Flow is Thenetcurrent ifFthe TChis sf + fo4 h-+A=0. law. canonlybesatisfied outside: tote current from =
=
s
=
1S
- Withf
0, the law Aly
=
by:a. current that satisfied.
0is
=
goes
around
a
loop.
Spanning Trees and Independent Rows of y, and y in the left of x (1, 1, 1, 1) in the
Every component is true
The
same
The
key point You
can
replaces edge
is that every see it in the 2
by
a new
elimination
first step for edge “1 minus
@
>
v7 2
;
Paa edge
Y-0 That
2”:
edge 1
%
edge
nullspace is 1 or —1 or 0 (from loop flows). nullspace, and all the entries in PA = LDU! step has a meaning for the graph. 2. This our 1 from row matrix A: subtract row
=
elimination
1
2
—
rowl
-1
1
0
0
row2
-l1
O
1
O
1
row
0
2
—
1-1
O
edge “1 2” is just the old edge 3 in The next elimination step will produce in row 3 of the matrix. zeros This shows that rows 1, 2,3 are dependent. Rows are dependent if the corresponding edges contain a loop. Those r edges we At the end of elimination have a full set of r independent rows. form a tree—a 3, and edges 1, 2, 4 form one graph with no loops. Our graph has r possible tree. The full name is spanning tree because the tree “spans” all nodes of the 1 edges if the graph is connected, and including one more graph. A spanning tree has n edge will produce a loop. A. The matrix In the language of linear algebra, m 1 is the rank of the incidence n 1. The spanning tree from elimination row gives a basis for space has dimension that row space—each edge in the tree corresponds to a row in the basis. of the The fundamental theorem of linear the dimensions algebra connects subspaces: edge and creates the opposite direction.
step destroys
an
a new
edge.
Here
the
new
—
=
—
|
—
—
Nullspace:
dimension
1, contains
Column Row
dimension space: dimension r space:
Left
nullspace: dimension
Those
It counts
loops.
four lines
it has
(# of nodes)
n
=
n
=
m
—
—
give Euler’s formula,
zero-dimensional Now
r
a
—
linear
nodes
x
+
n
—
1, independent =
r
m
minus
—n-+
1 columns rows
from
1, contains
independent. any spanning tree. y’s from the loops. are
in topology. way is the first theorem one-dimensional edges plus two-dimensional
which
algebra proof
(# of edges)
1, any
—
1).
(1,...,
=
in
some
for any connected
(# of loops)
=
graph:
(n) —(m) + (m-—n+I1)=1.
@)
Vector
ter2
For
Spaces
10 nodes
each connected
are
10
and
of 10 nodes
single loop
a
edges, the
eleventh
to an
Euler
node
is 10
number
in the center,
11
then
10 + 1. If those
—
—
20 + 10 is still 1.
from space has x' f = f, +--+ f, = 0—the currents add to zero. outside Every vector b in the column space has y'b = O0—the potential all loops. In a moment around add to zero we link x to y by a third law differences in the
f
vector
Every
row
(Ohm’s law for each resistor).
the
At the end of the season,
of
average
all teams
a more
on
first
step is
to
between
them.
The
teams
hundred
nodes
and
The
rank
polls
and it sometimes
opinions,
to rank
want
the matrix
A for
an
that
application
Teams
Ranking of Football
The
stay with
we
but is not.
frivolous
seems
First
college
football
teams.
becomes
vague basis.
after
mathematical the
recognize
graph.
If team
The
the top dozen
j played
edges.
are
the home
mostly an colleges. We is
There
direction
a
is
k, there
team
are the nodes, and the games the few thousand edges—which will be given
a
ranking
edge
an
few
a
are
by an arrow Ivy League,
Figure 2.7 shows part of the and also a college that is not famous serious for big time football. and some teams, Fortunately for that college (from which I am writing these words) the graph is not connected. Mathematically speaking, we cannot prove that MIT is not number 1 (unless it happens to play a game against somebody). the
from
visiting
to
team
team.
Yale
Harvard
Michigan
Texas
USC
-~—-T
r
K
a
“> |
e
I
MIT
y Purdue
Princeton
Figure
2.7.
Part
If football
of the
graph
for
college
perfectly consistent, team v played home
Ohio
@-----
State
Notre
Dame
+
Georgia
Tect
football.
could
“potential” x; to every team Then if visiting team h, the one with higher potential would win. i b in the score would exactly equal the difference.x, x the ideal case, the difference even have to play the game! There would be complet in their potentials. They wouldn’t agreement that the team with highest potential is the best. This method has two difficulties (at least). We are trying to find a number x for ever a few thousand b; for every game. That means team, and we want x; x, equation The equations x, unknowns. a into linear and only a few hundred x, b; go syster has matrix. a with in column Ax +1 b,in which A is an incidence row, Every game were
we
assign
a
—
—
=
—
=
=
-and
—1 in column
v—to
indicate
which
teams
are
in that game. there is no solution.
The scores mu difficulty: If b is not in the column space fit perfectly or exact potentials cannot be found. Second difficulty: If A has nonze in its nullspace, the potentials x are not well determined. In the first case x do vectors case x is not not exist; in the second are unique. Probably both difficulties present.
First
The
nullspace always
contains
of 1s, since
the vector
only at the differences potential to Harvard.
A looks
the
Xy. To determine
119
Graphs and Networks
2.5
potentials‘we can arbitrarily assign zero ({ am speaking mathematically, not meanly.) But if the graph is not connected, every the vector a vector to the nullspace. There is even separate piece of the graph contributes | and all other x; with xvirr 0. We have to ground not only Harvard but one team in each piece. (There is nothing unfair in assigning zero potential; if all other potentials are then the grounded team ranks first.) The dimension below zero of the nullspace is the number of pieces of the graph—and there will be no way to rank one piece against another, since they play no games. The column scores space looks harder to describe. fit perfectly with a set of if Harvard beats Yale, Yale beats Princeton, potentials? Certainly Ax = b is unsolvable and Princeton beats Harvard. More than that, the score in that loop of games differences diave to add to zero: Xp
—
=
=
Which
Kirchhoff’s This
is also
a
law
law
for
of linear
score
algebra.
differences Ax
=
b
buy + byp can
be solved
bpy
+
when
b
=
0.
satisfies
the
same
0. leads to 0 dependencies as the rows of A. Then elimination In reality, b is almost certainly not in the column are not that scores space. Football consistent. To obtain a ranking we can use least squares: Make Ax as close as possible to b. That is in Chapter 3, and we mention only one adjustment. The winner gets a bonus of 50 or even 100 points on top of the score difference. Otherwise winningby 1 is too close to losing by 1. This brings the computed rankings very close to the polls, and Dr. Leake (Notre Dame) gave a full analysis in Management Science in Sports (1976). After writing that subsection, I found the following in the New York Times:
linear
=
(10-2) in the seventh (9-1-2). A few days after publication, packages containing spot above Tennessee fans began arriving at and angry letters from disgruntled Tennessee oranges the Times sports department. The irritation stems from the fact that Tennessee thumped Miami 35-7 in the Sugar Bowl. Final AP and UPI polls ranked Tennessee
In its final
rankings
for
1985, the computer
placed
Miami
fourth, with Miami were
significantly lower. of oranges Yesterday morning nine cartons sent to Bellevue Hospital with a warning
the oranges
A
graph
application
and Discrete
Networks
becomes
that
at
the
the
loadingdock.
quality
and
contents
They of
uncertain.
were
for that
So much
arrived
a
network
be the
of linear
Applied when
algebra.
Mathematics numbers
c,,...,
Cm are
assigned to the edges. The a its stiffness (if it contains numbers go into a diagonal
length of edge i, or its capacity, or Those (if it contains a resistor). spring), or its conductance to the incidence matrix C, which is m by m. C reflects “material properties,” in contrast matrix A—-which about the connections. gives information is c; and the Our description will be in electrical terms. On edge i, the conductance is proportional resistance is 1/c;. Ohm’s Law says that the current y,; through the resistor number
c;
can
Vector
ter2
to the
Spaces
voltage drop
This
Ohm’s
Law
is also
written
equation
all
on
ences
around
E
edges
Aty
Voltage
KCL:
Thecurrents
allows
The
is y = Ce. and Current Law
Law
around
If there
asks
law
are
no
complete the
to
loop
—
current
As
resistance.
add to
vector
a
external
add
to
us
to
x, —
x2) + (x;
of current,
sources
to zero.
x3)
—
the differ-
Then
the nodes.
the currents
framework:
zero.
(and f;) into each node add
y;
sum
a
each
assign potentials x;,..., like (x2 x1) + (%3
to
us
times
current
Law
voltage drops
loop give
everything cancels. multiplication A‘ y. is
Ohm’s
at once,
(conductance) (voltage drop).
=
IR, voltage drop equals
=
The
a
(current)
Cie;
KVL:
law
voltage
=
Yi
Kirchhoff’s
We need
The
e;:
each
into
Kirchhoff’s
=
0, in which node
by
the
Law
Current
=0.
equation is Ohm’s Law, but we need to find the voltage drop e across The multiplication Ax gave the potential difference the nodes. between the resistor. Reversing the signs, —Ax gives the drop in potential. Part of that drop may be due to the resistor: battery in the edge of strength b;. The rest of the drop is e b Ax across other
The
=
Ohm’sLaw
y=C(b-—Ax)
C7ly+Ax=b.
or
a
—
(4)
in
The
central
fundamental equations of equilibrium combine These equations problem of applied mathematics.
is
a
linear y and
currents
the
elimination
For
block
out
A! below
the
pivot.
pivot
is
|
result
C~',
has
e
Co! AT
the
equation
for
5 P| HI
Then y
=
back-substitution
C(b— Ax)
Important grounded, matrix
into
remark
=
One
and the nth column
is what
we
now
mean
A'C,
subtraction
and
knocks
p—
|x|
row,
with
the
Atco)
symmetric
matrix
ATCA:
equation
in the first
Aly
is
b
yl
—ATCA|
Fundamental
f
to
the
(6)
=
multiplier
A
is in the bottom
alone
x
are
A
=
The
unknowns
is
co! 0
a
everywhere:
appear
disappeared. The symmetric block matrix:
the
form
The
into
(3)
which
symmetric system, from the potentials x. You see Block
Kirchhoff
and
equations
Equilibrium That
Ohm
(7
equation produces reach
potential of the
by A; its
Nothing mysterious—substitute
(7).
must
be fixed
original m
y.
—
in advance:
incidence
1 columns
matrix are
x,
=
0. The nth node
is removed.
independent.
The
The square
i:
resultins matri
A'CA,
is the
which
key
to
is
for x,
solving equation (7)
|
121
Graphs and Networks
2.5
invertible
an
of order
matrix
n—l:
n—-lbym
by
m
Ce Suppose
a
4 is
Node
C
source battery b3 and a current grounded and the potential x4,
f> (and five resistors)
04
AVAVAY fo
in
bs
Ro
Rs
Y2 -
The
first
thing
—
—
Yi
equation from adding
—
y3
Ay
law
A'T=/]
has
0
1, 2, 3: -1
O
1
-1
O
O
=
=
=
f—1
fy
=
—
at nodes
O
=
ys
f
=
rg
se
Es
yo
+
yo
No
¥3
WAV AY, Y4
.
is the current
—y1
O
OO 1
ft
for node
-1
the other
three
law is y4 + ys +
the current
4, where
O
f2
b. The potentials x are connected equation is C~! y + Ax the five conductances C contains y by Ohm’s Law. The diagonal matrix for the battery of strength b3; in edge 3. The right-hand side accounts + Ax
0. This follows
=
equations.
The other
C-ly
-I
O
|
is written
to the currents
=
=
Aly
babove
nodes.
four
connect
is fixed.
0
=
Ry
L2
.
[irc
A
YI 6
n-lbyn-1
mbyn—1
m
c;
=
1/R;.
The
form
has
block
f:
=
0
Ry
yy
—-l
1
yo
0
—|]
0
]
V3
b3
0
O
-1}]
ly}
—|] 0
Ro
R3 CT
Ally]
io 5
Ry
_
-1
R;
x|
—]
Oj
0
to
c;
with
0
x1
)
0
O
O
x2
Dp
bs}
£04
-1
1-1
five currents =
{ys]
—I1
A'Cb by 3 system A'CAx 1/R; (because in elimination you
the 3 =
by 8,
0
O
0
|
;
is 8
|
QO —|]
1
system
— |
I
The
|
0.
]
and three —
f.
The
divide
Elimination
of
A'CA contains
the
reciprocals
We also
show
the fourth
potentials. matrix
by
the
pivots).
y’s reduces
Vector
er2
row
Spaces
the
from
and column,
ATCA
=
the 3
outside
by
3 matrix:
|
$03 +5
Cy
|
grounded node, —C}
Cr+
Cy
—c3
C2
en,
—C3
—cs
—C)
0
tes
eg
teq|]
(node 1) (node 2) (mode 3)
cq
|
)
—~C5
—C4
Ca
C
is included,
The first entry is 1+1+1, or cy +c3-+cs5 when touch node 1. The next diagonal entry is 1 + 1 node
diagonal
2. Off the
the
or
c’s appear with minus and column, which are
+cs5
(node 4)
edges 1, 3, 5 the edges touching cy + co, from signs. The edges to the grounded because
deleted when column 4 is removed belong in the fourth row and columns from A (making A'CA invertible). The 4 by 4 matrix would have all rows adding to zero, and (1, 1, 1, 1) would be in its nullspace. that A'CA is symmetric. It has positive pivots and it comes from the basic Notice in Figure 2.8. illustrated of applied mathematics framework node
4
fs
A A
| (Voltage Law)
| (Current
AT
|
Law)
]
Figure
At
€=)~
b—~|
|—G
The framework
2.8
|
for
Ohm's
Law)
equilibrium:
sources
_Y=
b and
f,
three
steps
to
ATCA.
and y become In fluids, the unknowns displacements and stresses. and x is the best least-squares fit to and flow rate. In statistics, e is the error
In mechanics, are
pressure the data. These
x
equations and the corresponding differential equations are in our to Applied Mathematics, and the new Applied Mathematics and Introduction textbook Scientific Computing. (See www.wellesleycambridge.com.) We end this chapter at that high point—the formulation of a fundamental problem in Often that requires more insight than the solution of the problem. applied mathematics. We solved linear equations in Chapter 1, as the first step in linear algebra. To set up the of mathematics, equations has required the deeper insight of Chapter 2. The contribution and of people, is not computation but intelligence. matrix
Wag..
Set
Problem 1.
3-node
the
For
dence
2.5
matrix
nullspace
triangular graph A. Find
of A. Find
a a
solution
solution
in the to
to
Ax
Aly
figure following, OQand
=
=
0
of A.
nullspace 2.
For
the
describe
anddescribe
write
all other
all other
3
the
by 3 inci-
vectors
vectors
in the
in the left
same-3by 3.matrix, ‘showdirectly.from.the columns that everyvector®biin b4 +bo—-b 0. Derivethe same thing from the three column space will satisfy the
=
2.5
node
Graphs
and Networks
123
1
:
2
edge
.
rows——the
equations in
differences
around
Show
Si
directly fo
+
does
.
-
node
fz
+
that
=
a
from
LQ
Ax system!
the
that
rows
0. Derive
the
the
f’s
Write
the
the 6 that
4 incidence
by
satisfy Ay
If that
thing from
about
mean
potential
the three
row
space
will
equations ATy
=
satisfy f. What
into the nodes?
currents
are
matrix
A for the second
0. Find
=
second are
graph represents b,..., bg, when is
potential differences
agree
elimination)
the conditions
Write
the
down
incidence
there
now
three
vectors
can
you
and
a
six games it
possible
with
the b’s?
that make
dimensions
matrix, and
Compute A'A How
|
the
Draw
a
incidence
graph with matrix
numbered
6
that or
the
from
b solvable.
subspaces
for
matrix
C has entries
appear
on
diagonal
the c’s will
directed
this
6
4
by
edges (and
the main
numbered
c,
...,
diagonal
nodes)
cg.
of
whose
0
~1
0
1
9
1
9
0
0-1
|
tree? a
differ-
score
is
A=!
graph a edge produces
and the
assign potentials x;,..., x4 so are finding (from Kirchhoff
Fy]
Is this
four teams,
fundamental
by
where
and
=
subspace.
the 6
graph
=
four
basis for each
in the
You
Ax
of the
A™CA,where
tell from
to
graph
ATCA.
figure. The vector will be m—n-+1 3 independent them to the loops in y and connect
between
A'CA? 10.
that
in the
f
graph.
ences
9.
does
Put the diagonal matrix C with entries C1, C2, and compute C3 in the middle Show again that the 2 by 2 matrix in the upper left corner is invertible.
vectors
8.
vector
every
same
(1, 1, 1, 1) isin the nullspace of A, but
.
b. What
=
ee
loop? the
when
mean
3
(Are the
spanning
rows
tree.
of A
Then
0 —|
1
independent?) Show that removing the last the remaining rows areabasisfor _—s—isé(?
er2
11.
Vector
Spaces
With
the last column
1
the
on
diagonal
removed
from
of C, write
the
the 7
out
Cly
preceding A, by 7 system + Ax
what
network, 12.
If A is
a
12
the
are
potentials there
are many free variables are there in the solution
spanning
from
and currents
connected
a
in the solution
A'y
to
currents
the nodes
In the
graph
14.
If MIT
beats
above
4 nodes
with
and 6
find all 16
edges,
35-0, Yale ties Harvard,
Harvard
in the
differences
3 games
other
For both
(#
18.
of
edge
an
The
there?
are
17.
is
If there
between
has
graph
graphs
nodes)
Multiply
rankings, already built in?
is that
considered—or 16.
free
Ho"
variable to leave
be removed
—
matrices
every n
pair
should
of nodes
If the
below, verify Euler’s (# of edges) + (4 of loops)
A‘
A,and
trees.
Yale
beats
the
of
strength
the
(a complete graph),
does
the
of A?A contain
Why
20.
Why does a complete graph tree connecting all six nodes
nullspace
with
n
=
a
node
itself
to
know
opposition
how
edge: allowed.
many not
are
bi
1.
its entries
from
come
the
6 nodes
have
edges.
m
There
graph:
;
ep...
is its rank?
(1, 1, 1, 1)? What
has
potentiz are
formula: =
how
guess
differences
score
into each node. (a) The diagonal of A'A tells how many (b) The off-diagonals —1 or 0 tell which pairs of nodes are 19.
7-6, whé
for all games.
nodes, and edges from
drawn
to find
spanning
(H-P, MIT-P, MIT-Y) will allow
football
for
method
our
is its rank?
many
and Princeton
that agree with the score differences? for the games in a spanning tree, they are known In
b? How
=
edges?
what
edges must
many
differences
15.
the
on
graph,
Ax
to
f{? How
=
for x1, x2, x3. Soly nodes 1, 2, 3 of tk
—f entering =
tree?
13.
score
at
matrix
7 incidence
by
equations A'CAx
(1, 1, 6). With those
=
f.
=
three
y4 leaves
Eliminating y,, y2, y3, the equations when f
1, 2,
0
=
Aly
the numbers
and with
=
15
edges?
n”-*
are
=
A
spanning
6% spanning
trees!
ai.
adjacency matrix edge (otherwise M;;
of
The
write ‘from
down node
M
a
graph
has
M;;
=
1 if nodes
0). For the graph in Problem and also M*. Why does (M7”);; count
i to node
=
j?
i and
6 with
j
are
connected
6 nodes
the number
of
by an and 4 edges, 2-step paths
space invertible.
its column
That
lie at
which
space, and on those spaces is the real action of A. Itis partly hidden
into
row
What
n-dimensional
Suppose
x
now
space, is an
if A is
R’. The illustrates A=
four
c
OQ
0
¢
by
n
demands
by nullspaces
and left
nullspaces, inside
means
look.
closer
a
100 percent
r
multiplies x, it transforms that Ax. This happens at every point x of the n-dimensional space transformed, or “mapped into itself,’ by the matrix A. Figure 2.9
transformations 1.
That
n.
n-dimensional
into a new vector whole space is
vector
itis dimension
and go their own way (toward zero). is what happens inside the space—which
right angles matters
of
125
Transformations
Linear
2.6
When
vector.
that
from
come
A
matrices:
Stretches,
multiple of the identity matrix, A cl every vector by the same factor c, The whole space expands or contracts (or somehow the origin and out the opposite side, when c is negative). goes through
A
=
awn
A=
1
—
;
Poor :
—_
er
J
°
ee
a
1
#0
y,
matrix
x).
LAreflection
matrix
side
of
_ppposite a
ae
transforms
(4, QO),the
matrix
output is Ayy==
ee
v
leaves
(2,2)
reflectionmatrix
0
axis is the column
nullspace.
is also
(x, y)
a
Figure 2.9
=
part. The
other
to
~
space
oN
space onto The example transforms
point (x, 0) of A. The es
on
y-axis
a
lower-dimensional each
the horizontal
that
projects
axis.
to
.
.
|
(x, y)
vector
A
ot
(0, 0) wn,
fl
That is
ween
ey
N
!| 90° rotation Transfotmations
reflection of the
plane by
(45° mirror) four
matrices.
the fy E -
acer
)>~.
|
tretching
Q2, 9)
+
-
| (—Y,2
ty)
line
permutation matrix! It is alge(y, x), that the geometric picture
oA
ft
the
the
on
45°
is the
(2,2)
==
reverses
the whole
projection matrix takes subspace (not invertible). in the plane to the nearest
O
mirror
image
concealed.
A
A=
its
(—2, 2) = (0, 4).
+
so
O
part and
one
braicallysimple, sending
7
was
1
every
into
vector
thisexample the like combination
to
O
every
In
mirror.
(—2, 2). Ona reversed
,
origin. This point (x, y)
the
2t(90°)
all vectors
turns
That
,
space around whole throu transforming the
turns
,
0]
Ax
[
oe
en
a
ED
a
a
example
na
Q
bE
we
A rotation
0
Foy
&>
i
2.
QO —l
projection
on
axis
Vector
er2
Spaces
to stretch There are matrices examples could be lifted into three dimensions. the plane of the equator (north pole transforming to the earth or spin it or reflect it across south pole). There is a matrix that projects everything onto that plane (both poles to the center). It is also important to recognize that matrices cannot do everything, and some transformations T(x) are not possible with Ax:
Those
It is
(i) (ii)
impossible
If the vector
A(cx)
x
Matrix
origin,
x’, then 2x
since
0 for every matrix. go to 2x’. In general cx must
must
AO
=
cx’,
go to
since
c(Ax).
=
since
to
goes
If the vectors
(iii)
the
to move
x
A(x + y)
y go to x’ and
and
Ax
=
then
y’,
their
sum
x
go to x’ +
+ y must
y’—
Ay.
+
those
multiplication imposes
rules
the transformation.
on
rule contains
The second
0 to get AO 0). We saw rule (iii) in action when (4, 0) was reflected the 45° line. It was across split into (2, 2) + (2, —2) and the two parts were reflected separately. The same could be done for projections: split, project separately, and add the
the first
(take
rules
These
projections.
apply
has importance
Their
linear
called
are
=
=
c
them
earned
A(cx
+
dy)
comes
Transformations
a name:
The rules
transformations.
that
transformation
to any
be combined
can
from a matrix. that obey rules (i)—(ii1) into one requirement:
c(Ax)+ d(Ay).
Any matrix leads immediately to a linear Does every is in the opposite direction:
transformation. linear
The
interesting question
more
lead
transformation
to
matrix?
a
The
of an approach is to find the answer, yes. This is the foundation to linear algebra—starting with property (1) and developing its consequences—that is than the main approach in this book. We preferred to begin directly much more abstract
object
with
section
of this
and
matrices,
we
now
transform
done
by
vectors
an
has
Ax
vector
matrices,
by
m
Having
gone
matrices
R” to the
produce
that
original The linear is
far, there and
addition
in
rule
a
same
different
vector
of
has
x
n
linearity
transformations.
space
R”. Itis
space
R”.
absolutely permitted That is exactly what is
components, is
an@ the transformed
equally satisfied
by rectangular
transformations. reason
no
to
stop. The
scalar
but
operations and
y need
in the
linearity
be column
multiplication, only spaces. By definition, any vector space allows the are x and y, but they may cx + dy—the “vectors” actually be polynomials functions satisfies equation (1), x(t) and y(t). As long as the transformation
(1)
are
combinations or
also
in R”. Those
vectors
The
matrix!
components.
m
they
so
condition
n
linear
they represent
need not go from in R” to vectors
A transformation to
how
see
or
are
not
x
not
the
|
it is linear. We take
degree
space
They
n.
is
examples
as
n
+ |
look
like
the spaces p
=
ap +
P,,, in which ajt
+--+
+
the vectors
a,t",
(because with the constant term, there
and are
are
polynomials p(t)
the dimension m
+ 1
of the vector
coefficients).
of
The
of
operation
differentiation,
d/dt, is
A=
linear:
d
Ap(t) The
of this
nullspace
column
always original
5,(a0 +ayt A is the
The
space.
sum
+a_t”)
+--+
of
+---+na,t"
=a,
space of constants: the right-hand side
one-dimensional
is the n-dimensional
space in that
P,,_; nullity (= 1) and rank space
1.
(2)
0. The dap/dt of equation (2) is =
(= n) is the dimension
of the
P,,.
space
from
Integration
=
127
Linear Transformations
2.6
to t is also
0
(it takes P,,
linear
P,+1):
to
tf
Apt)
6
|
=
(a+
ant") dt
o+-+
ay
agt
=
+o
does term.
time
is
there
no
Ap(t)
Again
polynomial like
fixed
a
=
this transforms
(2+ 3t)(agp P,,
(3)
|
nullspace (except for the zero not produce all polynomials in P,,,;. The right Probably the constant polynomials will be the
Multiplication by
+
n+1
0
This
n
EF
to
P,,.;,
with
left
as
but has
integration constant
no
2 + 3¢ is linear: +
+--+
always!) equation (3) nullspace.
vector, side of
a,t")
=
2anp+---+3a,t"71.
nullspace except
no
p
=
0.
In these
examples (and in almost all examples), linearity is not difficult to verify. It hardly even seems interesting. If it is there, it is practically impossible to miss. it is the most important property a transformation can have.* Of course Nevertheless, most transformations are not linear—for p’), example, to square the polynomial (Ap or to add 1 (Ap t*) 1). It will be (A(t p +1), or to keep the positive coefficients linear transformations, and only those, that lead us back to matrices. =
=
Transformations
Linearity know
has
Represented by Matrices a
crucial
consequence: vector in the entire
Ax
for each X1,...,Xn. Every other vector x is the space). Then linearity determines
Linearity
If
x
The transformation the basis
vectors.
T(x)
=
The rest
and y leads to condition free hand with the vectors the transformation
Invertibility
is
a
for each vector in a basis, then we Suppose the basis consists of the 1 vectors space. of those particular vectors combination (they span
If
we
of every
perhaps
Ax has
no
is determined
(4) for
in second
Ax
then
Ax
vectors
n
x;,...,X,.
(they
=c;(Ax,)
are
The
as
an
what
requirement (1) The
important property.
(4) to do with
for two
transformation
independent).When
is settled.
place
+---+c,(Axy).
left, after it has decided
freedom
by linearity.
in the basis vector
know
Ax:
c¢,xX,;+---+c,x,
=
x
*
=
—
those
does are
vectors
have
settled,
a
Vector
er2
Spaces
takes
transformation
linear
What
and x2 to Ax; and Ax?
x;
2 1
A
x=
Ax;
to
|3);
=
4
multiplication T(x)
be
It must
goes
Ax
=
decide basis
a
For the
basis.
6}. 8
basis
is not
convenient.
The
That
of
Action
“d/dt”
A
same
differentiation
represent of
polynomials
is
=
for
P;
unique (it
degree
pr
is), but
only linear
the
is also
and
integration.
is
natural
3 there
a
First
choice
we
must
for the four
Api =0,
acting exactly space =
p3
like
a
with
choice
some
Apo=pi,
the usual
(0,0, 1,0),
basis—the
pg
and this is the most
0, 1, 2t, 3t?:
are
Ap3=2p2., matrix?
Aaite
Aps
Suppose
coordinate
(0,0, 0, 1).
=
matrix
Differentiation
is necessary
vectors
matrix, but which
pa=t?.
p=’,
=f,
four basis
of those
d/dt
(0,1,0,0), (5):
p=,
never
derivatives
four-dimensional tion
|
vectors:
Basis
p2
8
4]
(1, 1) and (2, —1), this
that
find matrices
we
on
|6}.
=
with
transformation
Next
basis
different
Ax.
the matrix
by
4 a
to
goes
|
A=|3
with
|
H
x2=
[2
Starting
4
0
The
in the usual
were
we
vectors
p,;
(1, 0, 0,0),
=
is decided
matrix
(5)
3p.
=
by
equa-
=
Ap, is its first column, which is zero. Ap2 is the second column, which is p;. Ap3 is is zero). The 2p2, and Ap, is 3p3. The nullspace contains p; (the derivative of a constant of a cubic is a quadratic). The derivative column space contains p1, P2, P3 (the derivative like of a combination 2++t— 1? —?#°is decided by linearity, and there is nothing p =
new
about
that—it
of every The matrix can
derivative
is the way
we
all differentiate.
It would
be crazy
polynomial. differentiate
memorize
|
that
p(t), because f
‘Oo 1
to
0
matrices
in
build
linearity!
Y
Ol
2) |
dp
Wo
00
AP
2
01
J
00
0
3
1/=|3
00
0
O1
|—1
—2
0
—>|
—
2t
2
—
30’.
the
~
129
Linear Transformations
26
In
short, the matrix carries all the essential information. If the basis is known, and the matrix is known, then the transformation of every vector is known. Kes to itself, one The coding of the information a space is simple. To transform basis is A transformation
enough.
the differentiation
For
Its derivative
347
Since
y3
do the
P; into 5
n, or
Py,
4. It
space
3p3
1 is
from
came
basis
a
goes from cubics for W. The natural
AxX;=yo,
[basa
...,
matrix
Integration
Aim
need
integrationare
brings
the differentiation
Aa
O
100
0
02
0003
Differentiation
inverses!
has
zeros
is the
is In
in column
OT
0
0
O
OF.
1
i.
to
that
1 J
0
and
0
Agi Aint
=
1
0004
L
left-inverse of integration. Rectangular 2 cannot the opposite order, AjntAaige
a
=
1. The
derivative
be
integration followed happen for matrices, cubics, which is 4 by 5:
0 0
A will
Or at least
To make
down
0
O
O
operations.
original from quartics
matrix
=f,
4
5 0010
function.
the
=
0
sided
back
inverse
y.
1
0
10
1,
Ax4 = —Yys.
or
|9
=
.
we
3t?.
=
of V:
vector
4
0
1
by
is yj = 4. The matrix
choice
1
‘Oo 0
differentiation
(d/dt)t?
1.
=
p;
quartics, transforming
to
¢*, spanning the polynomials of degree from applying integration to each basis comes
0
and
vector
0, 0, 3, 0. The rule (6)
contained
=
or
for each.
from
came
f
Differentiation
basis
the first basis
last column
f
[sane
a
at a time.
need
we
1
requires
the last column
integration. That
so
to another
The
zero.
Opa,
+
column
for
=
ys
by
+
a
same
W
column
Op
+
0’, yy =,
=
by
m
Op;
one
matrix, column
so
the matrix,
We
V
is zero,
=
constructs
=
from
of
a
constant
identity andthe integral of the derivative
is
zero.
of t” is t”.
matrices be true.
have
cannot
The
In the other
5
by
columns
5
two-
product Ain Aaig
Vector
pter2
Spaces
H
@, Projections P, and Reflections
Rotations
began with 90° rotations,
the 45° line.
Their
matrices were
0
O
1
—-I
0
1
Ab o
o=() a] Pb a
(reflection)
(projection)
(rotation)
through
the x-axis, and reflections
projections onto especially simple:
This section
of the x-y plane are also simple. But rotations underlying linear transformations in other mirrors are other lines, and reflections through other angles, projections onto still linear transformations, provided that the origin almost as easy to visualize. They are Using the natural basis lo. and 0. They must be represented by matrices. is fixed: AO [°|, we want to discover those matrices.
The
=
Figure 2.10
Rotation
1.
basis
the two
on
on
the
(6),
those
it lies
rule
sin).
“@-line”” The
go into
numbers
transformations
one
basis
second
vector
a
(0, 1)
It also
perfect
chance
(we the
to test
whose into
rotates
of the matrix
the columns
Qz is
family of rotations
This
Does
first
The
vectors.
through an angle 0. goes to (cos 6, sin@),
rotation
shows
use
the inverse
the square
of Qo equal Q-» (rotation backward through 0)?
of Qo equal Qoo (rotation through fe
-s|
s the
product of Qo
QO6Qe
[e—s?
-s]_
2es
cos
@ and
between
—
+
sin g
siné
—
sin@ cosy
-
c—s*|
double
|
cos
6
sing
Rotation through 6 (left). Projection
--|
--|
onto
Yes.
4} angle)?
Yes.
|cos26
—sin2é6
|sin20
cos26]
_
~~
6 then
Qy equal Qo+9 (rotation through
and
@ cosy
a
-2cs
|
cl
cells
cos
Figure 2.10
for
and matrices:
c
Does
s
correspondence
2] [5 t= avo+=[o Does
length is still 1; (—sin6, cos@). By and
c
the effect
shows
|cos@+g) |sin(@+@)
the 6-line
(right).
~)? ++
-+
Yes. +>
eet
The
last
case
when
appears
they give were
contains
the first two.
¢ is +0.
All three
yes.
to the
For A gis Aint,the For
constants).
The
inverse
multiplicationis defined exactly
Matrix
corresponds
appears when g is —0, and the square questions were decided by trigonometric identities (and those identities). It was no accident that all the answers
way.to remember
anew
product of
composite
the
the
that
so
product of
the matrices
transformations.
transformation
the order
rotations,
131
Linear Transformations
2.6
of
is the x-y plane, and QgQ, is the order makes a difference.
was
does
multiplication
same
as
all
identity (and AjntAaireannihilated
the
Q, Qs.
For
not
a
and
rotation
a
V
U=
Then
matter.
W
=
the
reflection,
matrices, we need bases for V and W, and then for U and V. By keeping the same basis for V, the product matrix goes correctly from the basis in U to the basis in W. If we distinguish the transformation A from its matrix (call that [A]), then the product rule 2V becomes extremely concise: [AB] [A][B]. The rule for Technical
To construct
note:
the
=
multiplying matrices match the product of
in
Chapter
linear
1
totally
was
determined
by
this
transformations.
Projection Figure 2.10 also shows the projection of (1,0) onto @. Notice cos that the point of projection length of the projection is c I mistakenly thought; that vector has length 1 (it is the rotation), so we by c. Similarly the projection of (0, 1) has length s, and falls at s(c, s) gives the second column of the projection matrix P:
2.
gs
;
Projection matrix
has
perpendicular the 6-line and
P*
are =
projected onto projected to themselves! are
is not
3.
course
the
c?
cs
cs
Ss
has
origin; that line Projecting twice
7
stays where the mirror.
(cs, s”). That
|.
is the
nullspace
is the
same
as
c* + s?
=
Cc? -
cs
21
a
cos*@ + sin? 6
(ce? +57)
es(c* +57) les(c? +57) s2(c7 +57) |
Points
on
of P. Points
projecting
the on
once,
=
1. A
projection
matrix
_p 7
equals
Reflection
the reflection
=
as
multiply
must
inverse.
no
(c, s),
P:
p= Of
P=
the transformation
inverse, because
no
line
@-line
onto
The
the 6-line.
=
This
must
requirement—it
its
own
square.
Figure 2.11 shows the reflection of (1, 0) in the 6-line. The length of equals the length of the original, as it did after rotatton——but here the 0-line it is. The perpendicular line reverses direction; all points go straight through Linearity decides the rest. Reflection
matrix
H
2c? —1-
2s
|
2cs
—-2s*
—
1]
Vector
er2
Spaces
|2c*-1
1]
c
S| a] |
ate
7
i
|
7
2cs
2c2~-1
roc Image
= Bs*
2cs
original
+
2
=
2cs 1
~
projectic
X
He+2=2P2x
_ Figure
This
2.11
but less clear
from
projections: equals twice
H
is its
the
One
transformations Each
in the exercises.
this
the matrix
depends
and the second
(i)
The
vector
column
comes
2P
(iii)
For
—
I when
the
as
+x
matrix
relationship of reflections tc 2Px—the image plus the origina:
=
H*
that
question
I:
=
=I,
the
length
to
represent
vector
negative,
to
basis
.
basis that
gives
of x;
it—which
produce is used
is not
for
changed.
This
matrix
=
E mr
H and
Those
The
column.
second
vector
is
on
is constructed
(projected projected to zero.
is
this
and
and
shearing are is the main point of we emphasize that
stretching
vector
H
P? =P.
since
of
E 5|
=
the first
basis
same
P
to
same
that its
k”
the
the basis
rotations, the matrix
Q=
a
from
onto
reflected
has
comes
(ii) For reflections,
confirms
increase
is back
from
Hx
can
matrix
its first column
that
the
through
choosing a basis, choice of basis. Suppose the first basis is perpendicular:
the
on
basis
projection
Ax
is also
is
=4P?-4P+I1
example
But there
section.
approach
means
H? =(2P—1)* Other
=
It also
projection.
and the matrix.
=
J. This
—
the geometry
J. Two reflections bring back thi property H? H inverse, H~', which is clear from the geometry
own
the matrix. 2P
=
the 0-line:
through
A reflection
original.
1287-1
1}
the remarkable
H has
2cs
|
—
45),
Reflection
matrix
0}
°]
96)
to
second
The
basis
stays the
same.
The
matrix
a
P.
lines
A (or QOor by different
still rotated
are
P
——
or
A)
matrices
lead |
the best basis.
A is still
through 6,
and
before.
is represented single transformation accounted for by S). The theory of eigenvectors will
Thus
is
vector
matrix
whole
transformation
always:
as
itself). The second
question of choosing the best basis is absolutely central, and we to it in Chapter 5. The goal is to make the matrix diagonal, as achieved for P are since all real vectors rotated. make O diagonal requires complex vectors, of a change of basis, while here the effect on the matrix We mention The
the 0-line
to
is altered
come
back
and H. To the to
linear
S“1AS.
(via different bases, this formula S~!AS, and to
Set
Problem 1.
2.6
Whatmatrixhas the result
ing
133
Linear Transformations
2.6
followed
the effect
the
onto
of
What
x-axis?
by projection
the
onto
through 90° and then projectrepresents projection onto the x-axis
rotating every
vector
matrix
y-axis?
s
2.
or
3.
a
productof
the
Does
A
y?
1 and
=
A. What
=
8 rotations
of the x-y
around
remains
y, show
stretching
in the x-direction.
3 matrices
fay project
that
plane produce a
the
on
y)
x,
rotation
linear
transformation.
between
Ax and Ay.
a
the circle
Draw
from
result
that
multiplication If
is
z
x-axis, by indicating what happens
halfway
the
leaves
which
shearing transformation,
a
to
y-axis
(1, 0) and
axis is transformed.
the whole
the transformations
represent
that
the x-y plane? through the x-y plane? onto
vector
every
points
straight after Az is halfway
unchanged. Sketch its effect (2, 0) and (—1, 0)——andhow 3 by ©) What “oO
it the
E a yields
A=
a
that curve?
is
line
x
matrix
produces
sketch
shape
straight @ Every and between The
F|
matrix
x* +
5.
and
reflection?
The
by
5 reflections
(b) reflect every vector (c) rotate the x-y plane through 90°, leaving the z-axis alone? (d) rotate the x-y plane, then x-z, then y-z, through 90°? (e) carry out the same three rotations, but each one through 180°?
7)
the4
Ps of cubic. polynomials, what matrix represents d*/d12?.Construct by 4 matrix from the standard basis 1, t, t”, t°. Find its nullspace and column
From
the cubics
On-the
space
space.What do theymean
.
P3
2 -- 3t?
multiplication by the transformation .
The
1, r,
to to the
solutions
the
to
terms of polynomials?
in
linear
With initial
values
in Problem
9 solves
What
linear.
u
11.
Verify directly
12.
Suppose A~' is
13.
A is
a
c?
linear
u? This
+
s?
product (AB)C
u
Cx.
Then
rule
2V
applies
a
form
applying
vector
space
independent solutions,
x
=
1,
If
from
A
is
from y
=
initial
O and
the
x
=
plane
x-y
vectors
values
to solution
is
0, y
1 as
basis
for
=
satisfy H*
matrices
represented by
to
the matrix
=
J.
itself.
Why does M, explain why
i
transformations AB
of basis
0, what combination
transformation
transformation
of linear
=
1 that reflection
=
=
u
=
Find two
solutions).
y att
=
(using
A7'x + Avty? y) represented by M~!. +
The =
=
represents
from
come
space.
by 2 matrix for W)?
from
still
are
is its 2
V, and your basis
Avl(x
u”
equation d*u/dt?
differential
anddu/dt
x
=
A
matrix
t7, £3.
of solutions (since combinations to give a basis for that solution 10.
what
fourth-degree polynomials P,, The columns of the 5 by 4 matrix
to
u
starts
and reaches
with
a
(AB)Cx.
vector
x
and
produces
ter
2
Vector
Spaces
(a) Is this result the (b) Is the result the
as
same as
same
law sary and the associative This is the best proof of the
14.
Prove
15.
The
7? is
that
of all 2
space
by
A? Parentheses
unneces-
are
transformations.
law for matrices.
same
2 matrices
by
A?
A(BC) holds for linear
=
transformation
linear
a
(AB)C
B then
C then
separately applying applying BC followed
(from R°
if T is linear
has the four
R°).
to
“vectors”
basis
oof [oo fro foa) basis. 16.
the 4
Find
by
the 4
Find
by
In the vector
is the effect A that
3 matrix
polynomials A nonlinear
transformation
for
b. The
positive
b and
7
a1.
numbers
dx
do + ayx
=
0.
=
(x)
(x2,
for
R!
if
S is
T(x)
negative
to
Ax
=
A~!.
=
S be the subset
subspace and
find
a
b has
one
solution
exactly
basis.
for
x? b has two solutions following transformations
invertible?
are
of
=
of the
R')
numbers
a3x?, let
+
because
b. Which
None
linear,
are
x3,
=
angle
for the transformation
must
leave
the
vector
zero
w
=
of these
23.
If § and
24,
Suppose T(v) satisfies
linear
are
(b) T(v) (d) T(v)
T (cv)
of these
=
=
with
S(v)
linear?
is not
transformations
=
T
fixed:
T(0)
x2,
=
it also
Prove
.
Q. Prove
=
=
=
=
T(v)
x3)
from
this the
input
is
v
(vj, v2).
=
(vy,v4). (, =
except that T(0, v2) cT(v) but not Tv + w) v,
transformations
The
1).
v, =
=
then
S(T (v))
=
v
v2?
or
(0, 0). Show that this transformation T(v) + Tw).
satisfy T(v-+w)
=
T(v)+T(w),
and which
T(cv) =cT(v)? (a) T(v) (c) T(v)
(x1,
=
(v2, v1). = (, v1).
(a) T(v) (c) T(v)
takes
that
X;)?
=
Which
a
=
invertible
not
to the real
ax?
+
T(v) + T(w) by choosing T(v + w) cT(v) by choosing c requirement T (cv)
25.
x4) is transformed
x3,
(b) T(x) =e”. cosx. (d) T(x)
transformation
A linear
Which
x2,
that A?
that
Verify
x’ is
=
and the rotation
is the axis
from
22.
this
to
(c).
even
What into
respect
a
represents
p(x)
solution
no
(x1,
A*? Show
of
is invertible
(a) T(x) =x?. (c) T(ix).=x+11. 20.
all
{, p(x)
example
the real
(from not
A with
to right shift: (x1, x2, x3) is transformed B the left shift matrix from R* back to R*, transforming X4). What are the products AB and BA?
Psof
space with
every
matrix:
cyclic permutation
X1). What
X4,
x3,
4
(0, x1, X2, x3). Find also (X1, X2, X3, X4) tO (x2, x3,
19.
find its matrix
transposing,
=
(x2, 17.
of
linear transformation 1? Why is A?
the
For
=
v/|lvll.
=
(v4, 2v2, 3v3).
(b) T(v) (d) T(v)
=
vy + v2 + v3.
=
largest component
of
v.
satisfy
26.
27.
For
(a) Tv) (b) Tv) (c) T(v)
&
(d) T(v)
=
The T
28.
of
these transformations
W
to
R’,
=
find
T(T(v)).
—v.
+d,
=v
1).
90° rotation
=
(—v, v;).
=
"1
projection
“cyclic”
(=- cm - =).
=
transformation
(T (T(v)))?
Find
R?
V=
135
Linear Transformations
2.6
Y
What
T
by
(v1,
v2,
v3)
(V2, v3, V;). What
=
kernel(those
are
for the column
words
new
space
and
of T.
(a) T(vy, v2) (c) T (v1, v2) 29.
=
A linear
transformation
is all of
W and
nullspace)
(b) T (vy, v2, v3) (V4, V2). (d) T(vz, v2) (v4, v1).
(v2, v1). (0, 0).
=
is
T!(v)?
is
and
the range
T is defined
=
=
from
the kernel
V to
W has
contains
only
an
v
=
inverse
from
0.
are
Why
the range
W to V when
transformations
these
not
invertible?
(a)
= (v9, 0)
T
(v1, v2) (b) T(z, v2) (c) T(yy,u)=v 30.
Suppose
Problems
M is any 2
=
Suppose
(6b)
What
A
E ‘|.
The
M such
(a) (b) (c) (d)
Show that
extra
T?
=
AM
from
transposes
a
T is defined
that
J is not
zero
T is linear?
in the range
of T. Find
gives
AM
Is this
a
matrix
is
linear. definitely
matrix
of
is in the range of T. is impossible.
7(M)
(a) What
linear
Which
Find
=
0
(the kernel
of
a
matrix
with
T(M)
7) dnd all output matrices
#
0. Describe
about
changing
transforms
the basis.
(1, 0) into (2, 5) and transforms
all
T(M) (the range
of T).
36.
M™
=
matrix.
matrices
36-40 are
by
transformation.
=| ||m|[6 ?|.
Problems
M.
matrix?
every
Suppose T(M) with
by 2 matrices
1s zero.
true?
are
of T is the
Every matrix —M T(M)
show
identity matrix
T(M)
come
T that
= identity =
that
doesn’t
properties
The kernel
multiplication
C,b).
transformation
The linear
that the
all 2
V contains
space
(0, 0). Find T(v) when
(dd) v=
v=(I1,1).
(c)
E Ar
=
to
every matrix M. Try to find a matrix A that that no matrix A will do it. Zo professors:
transformation
these
35.
Show
(2, 2) and (2, 0)
to
input
of matrix
rules
T transposes Suppose
M. every transformation
The
and A
AM.
for
34.
v=@G,1)..
2 matrix
=
(1, 1)
be harder.
matrix
a nonzero
33.
by
W=R?.
+2)
vo,v,
T transforms
31-35may
T(M) 32.
linear
a
R’.
=
W=R'.
v=(2,2).
(a)
31.
= (1,
W
(0, 1)
to
(1, 3)?
(2, 5) to (1, 0) and (1, 3) to (0, 1)? (b) What matrix transforms (2, 6) to (1, 0) and (1, 3) to (0, 1)? (c) Why does no matrix transform 37.
38.
39.
40.
41.
(a) What (b) What (c) What
N transforms
condition
on
(b)
If you
keep
the
matrix
their
lengths,
basis
same
M is
make
part (b)
yield
the matrix
to
matrix
What
that transforms
(1, 1) and (1, 3)
(a, c)
a,5
=
atx
linear
Every invertible just choose w;
7
=
input
transformation
can
is reflection
the
v
by
and
(1, 2). Find
of two
ST
the rotation
4 Hadamard
coefficients
we
have
b; in
v
two one
A;v; fori
=
=
1, 2, 3.
the v’s?
are
For the output basis
its matrix.
as
reflection
across
S(T (v))? Find
a
S is
S(T (v)) and
reflections
=
T
basis
a
reflection
y-axis. The simpler description the
the
across
(S(v)). This shows
rotation.
that
y-axis. If generally
Multiply
these
reflection
sin2a
is
and
entirely +1
sin 2a
2a
cos
—cos26
—cos2a|—1:
£4
oe41 1
1
—1
1
1
—I
1
—-1
—l
1
—1
=
(7, 5, 3, 1) bases
is
sin 26
26
H
Suppose
). Howare
,
angle:
matrix
H~! and write
J
(x, y) whatis
=
‘tT
Find
have
the 45° line, and S is
sin20 4
,
“
product
to find
If
7'(v;)
means
and output bases
the x-axis
across =
cos
The
=
T be invertible?
must
across
plane.
Suppose T is reflection (2, 1)then 7 (v) ST ATS. that
the
(v;). Why
V is
for 7. This
eigenvectors
for T when
is the matrix
Show
M
equations
v1, V2, v3 are
Suppose 7
(, f)
for A, B, C ifthe parabolaY = A+ Bx + Cx’ equals 4 = b, and 6 at x = c? Find the determinant of the 3 by 3 matrix. For a, b, c will it be impossible to find this parabola Y?
numbers
Suppose
to
(0, 2)?
to
transforms
that
the three
are
matrices
48.
impossible?
matrix.
a
=
47.
to
but put them in a different order, the change-ofIf you keep the basis vectors in order but change
vectors
matrix.
the x-y of the product ST.
46.
37
(7, t) and (s, u)? (1, 0) and (0, 1)?
to
(1,0) and (0,1) to (1, 4) and (1,5) is The combination a(1, 4) + (1, 5) that equals (1,0) has (a,b) =( of (1, 0) related to M or M~!? coordinates those new The
domain
45.
c, d will
(2, 5)
M isa
basis
(1, 0) and (0, 1) (a, c) and (8, d)
N in Problem
M and
do
b,
a,
and (b, d) to (s, u)? What matrix transforms
What
44.
matrix
How
which
43.
M transforms
(a)
atx
42.
matrix
as
a
and
c;
in the
1
combination and
1;,...,v,
—-1{
other
w;,...,
of the columns Ww, for
basis, what
R”.
is the
of H. If
a
vector
has
change-of-basis
matrix
in
b
Mc?
=
from
Start
byuyy+--+ Your
True
50.
false:
or
If
(v) for
T
Suppose all
transformed
different
n
vectors
in the unit
x
Find
a
basis
know
we
for the
0 < x,
square
less than
~
and denoted
product or dot product, product and keep the notation
90°
by (x, y)
xy.
hc onal ¥vectors. l 90°. greatertthan
:
the squared
arty
angle 100°, dashed angle 30°.
25. Dotted
=
the scalar
inner
name
90°
than
greater
~~]
_
ot
ata
(a™b)*—andthen
b satisfy
Schwarz
(a 4°)’
the Schwarz
(b™b)(a"a) (a")? —
Ja=
~
take
we
la*b|
roots:
square
inequality, which
inequality
0.
|
(ala)
is
| cos 6| < 1in R’: (6)
|la|| |||).
Remark
The mathematics
of least
lines.
squares
In many experiments there is no be crazy to look for one. Suppose we b will are
be the
holding
a
reading mixture
[De
aM
|
a
on
a
of two
Geiger
reason are
counter
chemicals,
7
37
is not to
limited
expect
handed
some
at various and we may
a
to
linear
fitting the data by straight relationship, and it would
radioactive times
t.
material. We
knowtheir
The output
may know that half-lives (or rates
we
of
but
decay), amounts
like
not
exponentials (and
two
the
D, then
C and
are
how
know
do not
we
a
much
the
practice,
at times
4,
f),...,
If these
hands.
our
two
behave
readings would
Geiger counter straight line): b=Ce™
In
is in
of each
unknown
like the
sum
+ De.
(8)
1s not exact. Instead, we make readings b;,..., Geiger counter and equation (8) is approximately satisfied: Ce" —~At +
Dew!—puty
is
Ax=bD
of
ow &
Dy,
cw &
by.
Dy,
:
,
—At
Ce*m
—phty
Dew
+
readings, m > 2, then in all likelihood we cannot solve C and D. But the least-squares principle will give optimal values C and D. would be completely different if we knew the amounts The situation C and D, and least trying to discover the decay rates 4 and ys. This is a problem in nonlinear If there
for
were
it. But
A and
two
still form
We would
it is harder.
and squares, and minimize
optimal
than
more
are
its derivatives
setting
yj. In the
exercises,
E
*, the
will not
to zero
stay with linear
we
sum
least
of the squares of the errors, give linear equations for the squares.
Weighted Least Squares A
simple least-squares problem
vations two
x
=
b, and
equations
in
x
b2. Unless
=
b;
=
bp,
we
|= to
now,
E?
minimized
accepted b;
we
(x
=
are
a
patient’s weight
faced
with
from
inconsistent
an
two
obser-
system of
unknown:
one
1
Up
x of
is the estimate
—
1b
fa}
_
b as equally reliable. &« do):
and
b,)* +
We looked
for the value
x that
—
dE’
b;
~
0
in
at
+
bo
x=
a
The same conclusion comes from A'AX A'D. In fact optimal x is the average. A'A isa 1 by 1 matrix, and the normal equation is 2x b; + dp. are not trusted to the sam°ree. The value Now suppose the two observations x scale—or, in a statistical problem, from a b, may be obtained from a more accurate b2. Nevertheless, if b2 contains some information, we are not larger sample—than x willing to rely totally on b,. The simplest compromise is to attach different weights w? the weighted sum of squares: the xw that minimizes and choose and w%,
The
=
=
=
=
Weighted
E’-=
error
If w; > w2, More importance is attached tries harder to make (x b,)* small:
wy(x to
—
b,)* + w5(x
z
3
5
=
2/wi(x
—
b)) + wz(«
—
by)|
=
by)’.
b,. The minimizing process
—
dE?
—
0
at
(derivative =Q)
Instead data.
of the average of b; and b, (for w; = This average is closer to b; than to bp.
1), Xw is
=
wz
weighted
a
of the
average
The
b from changing Ax ordinary least-squares problem leading to Xw comes to the new Wb. This changes the solution system WAx from x to Xw. The matrix W'W turns up on both sides of the weighted normal equations: =
=
What
happens in the
the
point meaning length of is
Perpendicularity
projection Axy last
matrices
W.
0. The
=
and the
paragraph They involve and
b
longer W'W
—
describes
only
y*Cx.
W. The
no
matrix
error
projected
is closest
space that the length involves
(Wy)'(Wx) That
of b
picture
column
when Wx.
the
to
Ax? The
to
of
weighted length
y'x
means
symmetric combination W orthogonal matrix
C
=
the
new
a
new
ordinary the test
the
sense,
invertible
from
come
is still
has
system
new
In this
middle.
AXw are again perpendicular. all inner products: They the
equals
x
0; in the
=
in the
appears
“closest”
word
the
b. But
to
projection Axy
W'W.
The
inner
Q, when this combination J, the inner product is not new is C or different. Q'Q Rotating the space leaves the inner product unchanged. Every other W changes the length and inner product.
product of
x
=
y is
invertible
matrix
W is
possible %
O—are In
define
a
=
inner
new
=(Wy)"(Wx)
(x, yw
invertible,
inner
rules
W, these
Weighted byW
y
an
=
For any
Since
For
product
and
and
length: (10)
—|[x|lw |W]. =
assigned length zero (except the zero vector). products—which depend linearly on x and y and are positive when in this way,
found
from
the
is the choice in
b—although of the square
We may also know the average in the b; are independent of each If the errors be zero!
of C. The best know
We may
=
x
W'W.
C=
matrix
some
important question statisticians, and originally from Gauss. That is the “expected value” of the error practice,
All
is
vector
no
answer
from
comes
is zero. error the average is not really expected to the error that
that is the variance.
of the error;
other, and their
variances
are
o7, then a smaller
which means measurement, 1/0;. Amore accurate variance, gets a heavier weight. In addition to unequal reliability, the observations may not be independent. If the are are not independent of those errors for Senator, coupled—the polls for President The best and certainly not of those for Vice-President—then W has off-diagonal terms. C matrix W'W is the inverse of the covariance unbiased matrix—whose i, j entry is the expected value of (error in b;) times (error in b;). Then the main diagonal of C7'
the
right weights
are
w;
=
-
=
contains
the variances
Suppose they hold. expected
two
07, which
are
the average
of
(error in D;)?.
(after the bidding) the total number For each guess, the errors —1, 0, 1 might have equal probability is zero and the variance error is 2: 3
bridge partners
both
E(e)
E(e*)
=
=
of
guess
3(—1)+ $) +401) =0 3(-1? + 5(0)?+ 5(1)?
=
§.
7
spades
Then
the
ter3
Orthogonality
The two
identical, both
too high
E(e,e2)
are
guesses because
dependent, because they are looking at
both
or
the inverse
and +(—1),
=
based
are
different
the
on
bidding—but
same
hands.
Say the chance zero, but the chance of opposite errors of the covariance matrix is WW:
is
low
too
they
that
re
is
=.
n
a
The
~
This
2.
E(eiex)) E(e})
matrix
imized?
Check
column
(3, 4).
b
least-squares
solution
that
through
_the
normal
weighted
the
b;
the
=
x
error
vector
| and
by
D
3x,5
—
the
Verify that
+1) Write
E*
b
error
=
||Ax
p is
perpendicular to
ertoast
eo
the
of A.
geny,
cacb
Pe
PS
to
and
u
Sta MORIA ct
tin
v, if
ng
1) b
{3].
=
bee nd
A'b, confirming
=
Find
the solution
that
calculus
as
x and the projection
=
=
The
Meet
c=
1,
resulting equations with A'Ax well as geometry gives the normal equations. Ax. Why is p b? p 5.
columns
to the
with respect zero its derivatives
_
Compare
th
0)
1g 1
to
{11.
b=
SrA
A=1|0
perpendicular
1
ll,
b||? and set
—
is
E? is min
error
=
11 —
What
= 1 and ft, 2 are fitted by a lin and sketcl = 7 by least :squares,
t, | and 2D
=
4x)
—
0
A=|0
out
=C=W'W.
equations.
10, 4x =5.
=
7 at times
=
origin. Solve
3x
to
(10
1
7:
1
=
line.
best
.
=
C+
The
q, and
orthonormal
onto
to zero, average fitting a straight line leads to orthogonal = 3. Then the attempt to fit y = C + Dt leads to —3, t2 O, and t; t; in two unknowns: =
equations
0.
times
the measurement
columns.
Ply}
=ly}.
Z|
of projections
sum
=
and
=c+d(t
case.
If
the
the time —1f).The
best
line is the same!
As in the
=U =
example,
P+
ite.
¢ is
the mean,
and
the
off-diagonal entries shift is an example of
+++
m
and
5“ t;,
Gn
+
get a
also
we
—
ml NG S(t; for
formula
by 7
the time
shifting
Dy, —f)?
7
7)?
convenient
the Gram-Schmidt
tn
7
Ge=D [n+ (ty —0)?
The best
th 4 7?
(a=)
7
find
we
fm
179
Orthogonal Bases and Gram-Schmidt
3.4
these
made
which
process,
d. The
®) A? A had
earlier entries
This
zero.
the situation
orthogonalizes
in advance.
Orthogonal matrices are crucial to numerical linear algebra, because they introduce no instability. While lengths stay the same, roundoff is under control. Orthogonalizing has become vectors an essential second only to elimination. technique. Probably it comes And it leads to a factorization A LU. OR that is nearly as famous as A =
=
The Gram-§
Suppose easy.
To
vector
v
project
plane
of a, b,
c,
products av,
b'v,
you and
orthonormal.’
Now
we
g;: it
that
of the first two, But
to
given
b has any component component has to be subtracted: vector
Figure
of
3.10
At this
this
a
way
a,
b,
c
true, to make
we
and
want
we
a.
in the direction
B=b—(qgib)q,
vector
to
point g; and not be in the plane of g; component in that plane,
q2
are
life is
orthonormal,
require only the inner forced to say, “Jf they are
are
them
of g;
and
q,.
The g, component
are
(a'v)a. To project the same (a'v)a + (b' v)b.To project onto
orthonormal.
of b is removed; set.
The
third
a
so
that gq;
If the
of a),
(9)
q
direefof, andnot
to q;
the resultis
C
=
in the
for qo.
the direction
and B normalized
(If
problem a/|a|| isa
no
=
orthogonal to q,. (which is the direction
orthogonal direction starts plane of a and b. However,
and gz, which is the and that has to be subtracted.
is
qj, q2, g3. There
It is the part of b that goes in a new In Figure 3.10, B is perpendicular to gq . It sets
orthogonal
direction
they
compute
you
you
make
to find are
If
c.
just add projections. All calculations
three
c'v.
b,
a,
of a. We divide by the length, go in the direction The real problem begins with g.—which has to be
Second B is
the first one,
onto
add
vectors
can
unit vector. second
v
independent
propose is simple. We
The method
with
three
vector
a
the
onto
the span
given
are
you
and qp.
with
c.
it may 0, this
It will have
a
signals
ter3
Orthogonality
that a, b,
want,
the part that is in
This
is the
idea
one
place.) What perpendicular to
direction
a new
C=c-—(qioq
vector
Third
in the first
independent
not
c were
of the whole
in the directions its components and over again.* When there is a fourth
that
directions
and
Ce
—
Gram-Schmidt
vector
is left is the component the plane:
already settled.
are
subtract
we
d
C/|IC|l.
=
to subtract
process,
vector,
g3;
from idea
That
every
ne
is used
ov
its components
away
C
in tl
of g1, G2, q3.
Gram-Schmidt
the
Suppose
independent
~~
vectors
are
b,
a,
—_—
c: ny
—
2 b=
|Q],
a=
{0},
a the first
To find gi, make the second vector
{1}.
c=
0] into
vector
a
Au
unit vector:
a/ /2.
=
gq;
To find g,
subtract
fron
in the first direction:
its component
r
a}
=
B=b—(qibq.
|0|
.
The
g2 is B divided
normalized
its
by
|
0
-—=|
J/2 v2}
DY
yaa
4]
fi/v2])
length,
to
produce
O}.
==]
2
unit
a
LoL vector:
1//27 q2
0
=
;
1/2 To find g3, subtract
from
c
its components
C=c-—(qicq
|
along
g; and q>:
(qicq
—
r
1
2) =
0]
This is
already
of square vectors
a
roots
|1}—-V2} 1
unit vector,
(the painful
g1, 92, g3, Which
//27 |-v2
0
0
//2.|
itis g3. | went to desperate part of Gram-Schmidt). The the columns
]1}.
|=
S—1//2}
so
go into
fol
1//2)
of
an
lengths
to cut
is
result
orthogonal
i
[0
matrix
basis
Q=
1%
©
=
Gl J
*
If Gram
thought
of it first, what
was
set
the number
of orthonormal
Q:
1
fi/v2 Orthonormal
a
down
left for Schmidt?
0
1/72
0]
0
1}.
LW/v2 -1/v2
04
3.4
OrthogonalBases
and
Gram-Schmidt
18
-
Ponents inthe directions qy,...,q;-
Ay
Remark
=
@lajqi
—
a;
the calculations
on
thatare alreadySettled:
1 think
forcing their lengths to equal one. dividing by those lengths. The example
roots.Notice +
square
the
PL) B=
whose
columns
A and a
with
Q
are
8 EL
A—
by that
is
a
combination
the
as
factors
sum =
the
Schmidt the
A=la
6
cl=
diagonal (and
the letter
qib =1//2
and
factor U is
qic
=
fl A=1!10
1
2)
0
1
JP
0
0}
qT:
=, 0}.
LL
La]
We ended
c.
between
of the
Q,
The matrices and there has to be
space,
know
b in
vector
what
Figure 3.10
combination
it is:
(qrb)qo.
+
Similarly c
(93c)q3.
A
matrix
a
those matrices?
g’s. The
we
with
=
If
we
is the
express
sum
that in
OR:
. OG
qi
i
plane as a, b. The third The QR factorization is like second
C, without
|
in the last matrix! done. The first vectors
The
from
of its qi and gz components. (qic)qi + (q2c)go+
del,
same
columns.
(qib)q
zeros
was
B and
same
{0} —92
~—
b,
a,
g; and q2, and
factorization
new
;
Notice
5
in m-dimensional
are
combinations
of the orthonormal
jo OR
the
«fa
/1|
were
is the relation
vectors
n
the a’s
we have the
orthogonal a, B, C, only at the end, when
enter
them.
connects
Every vector in the plane is the of its 91, G2, g3 components: c form
C=
columns
b=
matrix
of
Ol
A, whose
when
The idea is to write
Square would have
aTb/aTa instead
andthen
41> 92; 93. What n
roots
above
the
compute
OR
matrix
are
m
third matrix
a
Then
to
2]
QO;
The Factorization
(q?_,a;)q;-. a
=
r1)
0
OP
We started
from
+.
it is easier
without
using
—
eee
is called
qs
TL
ot
R is upper a and qi fell
vectors
A=
92
c
LU
and
©
hy
triangular the
on
q3 were
because
Gramline. Then 91, g. were in involved until] Step 3.
same
not
xcept that the first factor R, because the nonzeros are »
already taken). The off-diagonal entries
qic
=
/2,
fisv2 0
Lt/v2
found
above.
2
The whole
V2 0
1
-1//3
0
of the way
Q has orthonormal to
of R
the are
factorization
1/2
right
the numbers is
JF
1//2 /2| 1]
=
of the
OR.
er3
Orthogonality
vectors q1, 2, 43, lengths of a, B, C on the diagonal of R. The orthonormal which are the whole object of orthogonalization, are in the first factor Q. Maybe QR is not as beautiful as LU (because of the square roots). Both factorizato the tions are vitally important to the theory of linear algebra, and absolutely central If LU is Hertz, then QR is Avis. calculations. The entries 7;; ga; appear in formula (11), when |jA ,||q; is substituted for A:
You
the
see
=
(qiaj)q
=
aj
Q times column jofR.
(q;_4;)qj-1+ | Ajlla;
Se
=
(13)
|
|
3 J) Every m byn matrixwith independent 7 T he columns ~
When
m
I must
problem
Q
the main
forget
not
are
R is upper
Q becomes
square,
A
factoredinto
QR.
=
triangular and invertible. an orthogonal matrix.
point of orthogonalization. equations are still correct,
b. The normal
=
and
orthonormal,
are
and all matrices
n
=
Ax
of
canbe
columns
It but
simplifies
the
least-squares
A' A becomes
easier:
A'A=R'QO'OR=R'R. equation A’
The fundamental
A'‘D simplifies
Ax =
R'Rx=R'Q'b of
Instead
solving QRx which
Gram-Schmidt, The
idea
same
R is
because
just back-substitution
are
The
triangular.
to find
needed
Rx=Q"D.
be done,
can’t
Q and
of orthogonality applies
triangular system:
to a
or
b, which
=
(14)
R
to
solve
we
real
functions.
Rx
Q'b,
=
is the mn?
cost
in the first
(15) which
is
operations
of
place. The
sines
cosines
and
are
of sines the powers 1, x, x are not. When f (x) is written as a combination Each term is a projection onto a line—the series. line in and cosines, that is a Fourier or sinnx. It is completely parallel to the function space containing multiples of cosnx
orthogonal;
vector
and very of x and
case,
the powers
Function This 1.
2.
is
brief
and
the ideas
to extend
of
length
Fourier
4. 5.
to find
to
We will
the best
try
are
a
of
number
infinite-dimensional
famous
recognize the orthogonal “‘columns” to apply Gram-Schmidt
3.
but it has
optional section, the most
to introduce
for Schmidt:
To
orthogonalize
Series
and Fourier
Spaces a
important. And finally we have a job produce the Legendre polynomials.
and
inner
series the sines
as
vector
product a
and
sum
good intentions:
from
of
(Hilbert space); f (x); projections (the
space
vectors
v
to functions
one-dimensional
cosines);
orthogonalization to the polynomials 1, x, x”, approximation to f(x) by a straight line.
to follow
this outline,
which
opens
up
a
range
of
new
and
...;
applications
for
|
linear
algebra,
in
a
systematic
way.
After studying R”, it is natural to think of the space HilbertSpace. of components. v all vectors (vj, V2, V3,...) with an infinite sequence actually too big when there is no control on the size of components v;. of length, using a sum of squares, definition idea is to keep the familiar
1.
=
R©. It contains space is A much better This
and
to
include
only
those
that
vectors
have
infinite
series
Jul]?
converge with finite
so
space time to
infinite-dimensional and
v
(||v
+
w
space. It is the celebr of dimens:vus vecome to let the number
vector
a
way
w
viw
Orthogonality
This
be added
can
leaves
This
sum.
184
infinite,
keep the geometry of ordinary Euclidean space. Ellipses become ellipsoids, and perpendicular lines are recognized exactly as before. are orthogonal when their inner product is zero:
same
The vectors
v2 toy+05 +
=
finite
a
length
form
they
is the natural
Hilbert
_
and at the
to
must
(1,1, 1,...). Vectors multiplied by scalars,
Che
finite length:
a
Length squared The
185
idt
and G
Orthogonal Bases
3.4
=
=0.
+ U3W3 +--+.
+ vpw2
vw,
it still obeys the Schwarz and for any two vectors guaranteed to converge, inequality |v’ w| < |u| ||w||. The cosine, even in Hilbert space, is never larger than 1. There is another remarkable thing about this space: It is found under a great many different disguises. Its “vectors” can turn into functions, which is the second point.
is
sum
2.
This
the whole
Inner
and
‘Lengths f is like
with
a vector
interval.
Products.
Suppose f(x)
To find the
length
of such
of the components becomes impossible. inevitable way, by integration:
onthe
a
fi?
of function
(sinx)?dx
=
Hilbert
way
to measure
length—just the integral The
of
space has become a function their length,and the space in
(16). equation 1/x? is infinite.
as
of
idea
same
= sinx
f(x)
and
The
space.
contains not
replacingsummation
of
functions:If
two
It does
x
along
the squares and natural
(17)
=z.
0
0
Our
< 27.
x
20
(f (x))* dx
=
of sin
adding replaced, in a
20
Length || f ||
be
the
given
0.
u(¢), with
is A:
MAR [3].a=|53).
the vector
uo)
equation
=
we
want:
(2)
Eigenvalues and Eigenvectors
ter5
This is the basic derivatives
the
appear—and A is
matrix
do
How
would
of the
statement
be
find to
If there
u(t)?
It also
equation—no higher
has
coefficie
constant
|
one
unknown
scalar
instead
only
were
have
We would
answer.
the unknowns.
in
first-order
a
of time.
independent
we
easy
it is linear
that it is
Note
problem.
a
instead of
two,that
of
question
equation:
vector
a
d
—
Single equation The
to this
solution
equation
with
=au
is the
thing
one
initial
At the
required
time
t
a,
so
factor
u(t)
that
(3)
e”u(0).
=
1,
=
u
=0.
att
know:
to
equals u(O) because e Thus the initial au. du/dt
0,
=
need
you
Pure exponential
u=u(0)
(4)
The
condition
=
of e”
derivative and the
equation
satisfied.
both
has
the are
|
of
behavior
for
times.
The
equation is unstable ifa > 0, neutrally stable if a 0, or stable if a < 0; the factor e” approaches infinity, remains bounded, a a If a the were then same to zero. a tests or goes + if, complex number, wouldbe e!4’ cos Bt+i sin Bt. part produces oscillations applied to the real part «. The complex Decay or growth is governed by the factor e™. So much for a single equation. We shall take a direct approach to systems, and look for solutions with the same exponential dependence on t just found in the scalar case: Notice
the
u
large
=
=
=
:
v(t)
w(t) or
in vector
ey
=
At
=e”
(5) z
notation |
u(t) This
the whole
is
key to Substituting
solutions.
The
e*’ is
factor
for
reason
common
assuming the
ex,
=
(6)
du/dt equations differential and e”’z into the ey
v
w
=
to same
Ney
=
Nez
=
every
=
4e“y
Ze“ y and
term,
exponent
—
—
Au: Look
=
equation,
for
pure
exponential
find
we
5ez
3e**z.
can
A for both
be removed.
This
cancellation
is the
unknowns; it leaves
Eigenvalue problem
equation. Thatiis the Cigenvalue
use
into
u
=
e'x—a
du/dt
=
In matrix
e”™that grows Au gives X}e'x = Ae“x. number
(7)
form or
decays
The
Eigenvalue equation
it is Ax
times
= Ax. a
You
fixed
cancellation of e
“Ax= Axie
can
vector
see x.
it
again if
we
Substitutin
produces (8)
Lo!
Now
and
Itis
x.
have the fundamental
we
(lambda) Our goal
is
of Ax
Notice
=
then
that
Ax
chapter. equations can
for
x
a
nonlinear
would
A
equation;
be linear.
to the left
over
It involves
be
unknowns
1
The number
A
two
forgotten!
is the associated
x
and x’s, and to
eigenvector. them.
use
°x
=
Ax is
equation
this term
bring
this
algebra problem, and differential an eigenvalue of the matrix A, and the vector to find the eigenvalues and eigenvectors, 4’s
an
is
The Solutions
the
equation of
235
Introduction
S54
¢
In fact
multiplies
write
could
we
x.
If
we
AJ
x
discover
could in
A,
of Ax, and
place
side:
(9) The
matrix
identity
shorter, but mixed The vector
x
The number
Of
course
you see Ax =x,
of
out
matrices
up. This
is the
is in the
matrix
has
straight; problem:
to the
A
A
equation (A
—
)x
O is
=
XI.
—
AI has
—
the
a
nullspace.
to suggest otherwise, but nullspace. It was ridiculous 0 always satisfies nonzero eigenvector x. The vector x equations. The goal is to build u(t) solving differential we are interested only in those particular values x for
a a
it is useless
in
exponentials e’x,
and
but
key
that
so
We want
point.
and vectors
nullspace of
xXis chosen
every
the
keeps
=
aetna
contain
vectors
other
than
zero.
For this, the determinant
In
our
example,
shift
we
A
by
In short, A gives a conclusive —
AJ to make
Subtract AJ Note
that A is subtracted
only from
Determinant This
it
A-Al=
the main
be
AJ
cast
singular.
test-—~
singular:
*
~A
—3—-—
2
diagonal (because
it
|A—AI|=(4—A)(—3—A)+10
is the characteristic
eigenvalues. They factoring into A?
polynomial. Its roots, where from the general formula for 2 (A+ 1)(A 2). That is
come
A
—
=
—
ee 14
—
I multiplies J). A*7—-A-—2.
or
the determinant of
the roots zero
if
A
=
a
is zero,
quadratic,
—1
or
A
=
the
are or
2,
from as
the
er5
Eigenvalues
and
Eigenvectors
confirms:
general formula
—bt/b? A=
Eigenvalues There A
—
AI has
The
eigenvalues, because A? (and no higher power
two
are
A matrix
with
nullspace.
—1 and A
A=
values
determinant
zero
In fact the
nullspace
2
=
is
e
solution
has
lead to
computation
singular,
whole
a
is any
The
second
eigenvector
is any
2
Every
[5
or
nonzero
eigenvectors;
-5]
by
2 matrix
(A —Al)x vectors
it is
a
0.
=
in its
x
subspace!
fo
fy]
multiple x,
Ax
=
be
must
line of
nonzero
_
of x;:
H
=
,
separately: (A—-Al)x
Myg=2:
—] and 2.
=
roots.
of Ax
there
so
_
for Az is done
two
solution
a
Eigenvector for A The
ae
A) in its determinant.
contains
(the first eigenvector)
V9
=
oo
The
lt
a
quadratic
a
of
—4
=
5 3 P| |
multiple
nonzero
for
Eigenvector
=
of x»:
A,
;
X27 =
,
A;J give x2, and the columns of A Az/ might notice that the columns of A 2 for 2 is matrices. of This x;. by special (and useful) multiples of x equal to 1 and solve (A —AJ)x In the 3 by 3 case, I often set a component if x is an eigenvector then so is 7x and so is —x. for the other components. Of course in the nullspace of A AJ (which we call the eigenspace) will satisfy Ax vectors In our example the eigenspaces are the lines through x, (1, 1) and x2 (5, 2). the differential to we back Before going application (the equation), emphasize Ax: Ax steps in solving You
are
—
—
=
—
=
=
0
All Ax.
=
the
=
1.
2. 3.
AI. With A subtracted of A along the diagonal, with (—A)”. is a polynomial of degree n. It starts determinant Find the roots of this polynomial. The n roots are the eigenvalues of A. 0. Since the determinant For each eigenvalue solve the equation (A XI)x other than x 0. Those are the eigenvectors. there are solutions zero,
Compute the determinant
—
=
—
this
is
=
equation, this produces the special solutions Au. Notice e~ and e”: exponential solutions to du/dt
In the differential pure
u
=
e*'x. They
=
u(t) =e"
x, =e
H I
and
u(t)
==
e! x5 e =
Br 2
are
the
These
two
numbers
solutions
special and
c;
and
cj,
equation du/dt
Au,
=
they so
is
just
as
can
be
complete solution. They added together. When u,
their
Now
of solutions
solution
u(t)
be chosen
Satisfy
up
the linear
coe**x9
x1 +
(12)
equations (homogeneous and linear) The nullspace is always a subspace, and
differential
to
Ax
0.
=
|
two
that
hope
to
they
=
Poy
condition
The constants
are
c;
+ Cox,
CX,
=
3 and cp
the two
components
Solution
1,
=
u(t)
Writing
and
free parameters c, and c2, and it is reasonable to satisfy the initial condition u u(Q) at t = 0:
have
we
Initial
ce"
=
still solutions.
are
by any multiplied
be
can
uv; + uo:
sum
superposition, and it applies it applied to matrix equations
combinations
can
the
does
Complete This
give
2al
Introduction
5.1
separately, v(t)
8
je =
2
1 to the
(13)
,
5
C2
original equation
is
H+e Br have
we
3e7' +:
=
or
and the solution
3e%
=
u(O)
=
v(O)
5e”,
=
w(t)
(14)
8 and
w(O)
3e'
=
5:
=
2e”,
+
eigenvalues A and eigenvectors x. Eigenvalues are important in themselves, and not just part of a trick for finding u. Probably the homeliest example is that of soldiers going over a bridge.* Traditionally, they stop marching and just walk If they happen to march at a frequency equal to one of the eigenvalues of the across. bridge, it would begin to oscillate. (Just as a child’s swing does; you soon notice the natural frequency of a swing, and by matching it you make the swing go higher.) An engineer tries to keep the natural frequencies of his bridge or rocket away from those of a stockbroker the wind or the sloshing of fuel. And at the other extreme, spends his life trying to get in line with the natural frequencies of the market. The eigenvalues are the most important feature of practically any dynamical system. The
key
in the
was
Summary and Examples To summarize,
when
matically tions
u
ex;
=
develops
this
at
introduction
has
solving du/dt
=
shown
Au.
Such
the
how
A and
x
appear has pure
equation growth or decay,
an
eigenvalue gives the rate of this rate. The other solutions will be
mixtures
naturally
and
of these
and
auto-
exponential soluthe eigenvector x
pure
solutions,
and
adjusted to fit the initial conditions. Ax Ax. Most vectors The key equation was x will not satisfy such an equation. They change direction when multiplied by A, so that Ax is not a multiple of x. This means that only certain 2Xare special numbers eigenvalues, and only certain special vectors x are eigenvectors. We can watch the behavior of each eigenvector, and then the mixture
is
=
*
One
which
I
never
really believed—but
a
bridge
did crash
this way
in 1831.
ter5
Eigenvalues
combine
modes”
“normal
these
the
way,
and Eigenvectors
matrix
underlying
by computing that. Symmetric matrices eigenvectors, so they are start
we
with
We start
3
when
0
Other A
like
multiplies
x
and x2 it
x;
by
is Ax for its
The
of
a
a
Vie1 NIE
4
|
like
mixtures
are
times
the
4.
1 when
=
x
times
projects
to
with
full
of
set
but
we
0
a=|t)
3x; and Ax2
=
eigenvectors,
two
3 and A2
=
=
2x».
and when
2:
Lol
3x,;+10x%=
But the action
of A is determined
0!
or
n=l|
itself, and
Az
0 when
A=
eigenvectors, and | is repeated r
so
=
0
with
=| | —1
projects to the zero vector. the nullspace. If those spaces and A 0 is repeated n —r
x
is
times
=
—
n
1
are
= with
space of P is filled with 7 and n dimension r, then A
(always
a
avoid
to
discussed,
,
=
eigenvector.
an
The column have
lack
to be
Ai, ==2
5x2 of the
+
18
°
We have
have
identity: Ax;
eigenvalues A;
matrix
A,;=1
of the x;
x,;+5x.
projection
|p)
n=
multiple
a
produces
has
shortcut
no
matrices”
]
with
typical vector x—not eigenvectors and eigenvalues. of
is
there
matrix:
diagonal
a
eigenvalues
equations, Fibonacci equations. In every example,
matrices.
particularly good
is
(1,5)
in another
the book.
A; =3.
A acts =
A
This
A
thing
same
to difference
especially easy. “Defective diagonalizable. Certainly they
over
has
eigenvector
vectors
diagonalized. 5.2 will be applied
|
iq A={9 On each
not
examples
is clear
Everything
are
to take
them
allow
will not
To say the
solution.
and also to differential processes, the eigenvalues and eigenvectors;
and Markov
numbers,
be
can
in Section
diagonalization
The
find the
to
=
X’s):
1 {0
Four
eigenvalues allowing repeats
0
0
0
0
0
0
0
0
of
0
0
Ty
Das
_
P=1o 0
4=1,10,0. _
9. Like every other number, zero nothing exceptional about X might be an eigenvalue and it might not. If it is, then its eigenvectors satisfy Ax Ox. Thus x is in the nullspace of A. A zero eigenvalue signals that A is singular (not invertible); its
There
is
=
=
determinant
The
is
zero.
eigenvalues
are
matrices
Invertible
on
the main
1-vA
det(A-AN=|
0
have
diagonal
all
when
4
5
F-A
6
1
¢
0.
A
is
triangular:
|=(1—A)(Z-A)(4—-A).
5.1
The determinant ;
main
the
just eigenvalues
A= + the This
is
were
product of already
in which
example,
the
the
diagonal
along sitting
eigenvalues
the main
if A =
zero
1,
4
=
3, or 4
diagonal.
by inspection, points to one diagonal or triangular matrix without
be found
can
of the
theme
It is
entries.
239
Introduction
chapter: To transform A into a A changing eigenvalues.We emphasize once more that the Gaussian factorization LU is not suited to this purpose. The eigenvalues of U may be visible on the diagonal, but they are not the eigenvalues of A. For most matrices, there is no doubt that the eigenvalue problem is computationally more difficult than Ax b. With linear systems, a finite number of elimination steps in a finite time. (Or equivalently, Cramer’s rule gave an exact produced the exact answer for the solution.) No such formula formula can give the eigenvalues, or Galois would turn in his grave. For a5 by 5 matrix, det (A AJ) involves A>. Galois and Abel proved that there can be no algebraic formula for the roots of a fifth-degree polynomial. All they will allow is a few simple checks. have been on the eigenvalues,after they computed,and we mention two good onesy ns
RH
och Tentrete
EM
atne
its
=
=
—
‘gum)and, product.
The
projection with
agrees
matrix,
matrix
| + 0
with
as
P had
diagonal
it should.
So does has
determinant,
}, +and
entries the
which
determinant,
is 0-1
Then
0.
eigenvalues 1, =
of its
to
0. A
5+ 3
singular
eigenvalues equal diagonal entries and the eigenvalues. between For a triangular matrix they are the same—but that is exceptional. Normally the pivots, diagonal entries, and eigenvalues are completely different. And for a 2 by 2 matrix, the tell us everything: and determinant trace zero
should
There
be
no
one
or
more
confusion
zero.
the
|
bA
has trace
+
a
“
det
(A
AJ)
—
det
=
Those
eigenvalues
A’s add up to the trace;
two
A?
=
trace
The
are
1
ad
d, and determinant
+
—
(trace)A
[(trace)?
—
4
=
—
bc
+ determinant
det}'/?
5
Exercise
9
gives )~ A;
=
trace for
all matrices.
Eigshow There a
2
move
is
by
a
MATLAB
2 matrix.
around
It starts
the unit
(just type eigshow), displaying the eigenvalue problem for with the unit vector makes this vector x (1,0). The mouse
demo
circle.
=
At the
same
time
the
screen
shows
Ax, in color
and also
Eigenvalues and Eigenvectors
ter5
of
moving. Possibly Ax is ahead to x. At that parallel moment,
4 y
Ao
Ax
=
[08
03
{0.2
0.7
Ax
is
parallel
~
(0,1)
=
Possibly Ax is behind x. Sometimes Ax (twice in the second figure).
x.
~~
~
~
=.3,
Ay
0.7) |
1 f
Ax
C4
Az's,
(0.8, 0.2)
=
> x
The
built-in
SN
A is the
eigenvalue
length
of Ax, when
three
for A illustrate
choices
eigenvector x is parallel. 2 real eigenvectors.
the unit
possibilities: 0, 1,
or
eigenvectors. Ax stays behind or eigenvalues and eigenvectors are complex, as they are There is only one line of eigenvectors (unusual). The This happens for the last 2 by 2 meet but don’t cross. There are eigenvectors in two independent directions.
1.
There
2.
3.
are
eigenvector A is
Suppose stay
eigenvector You
mentally
Ax,
=
x
back
crosses
Its column
around.
circles
follow
does
When
and where?
x
when
appears
can
and it
x,,
singular (rank 1).
while
line
that
on
real
no
first
at the
of x’s
“circle
_-
_
(1,0)
=
One
0. Zero
is
an
eigenvector eigenvalue
instead
of
This
x.
for the rotation
moving This
and
Ax
x
typical! Ax crosses eigenvector x3.
x
is
of
a
six matrices.
Q.
directions
is
line.
a
the
means
below.
matrix
second
is
space
and Ax for these
Ax go clockwise,
at the
ahead
The
The
has
Ax
vector
x
to
along the line. Another singular matrix. How many eigenvectors
of counterclockwise
with
x?
*=[o4) lo a) tof Lao} Lha}[oa] Problem
Set
5.1 7
1. Find trace
the
equals
2. With
the
What
are
If
shift
Pr
we
eigenvalues the
same
to
matrix pure
A
~—
to those
eigenvalues,
of the matrix
exponential are
the
A
=
and the determinant
A, solve the differential
77, what
~
related
eigenvectors
of the
sum
the two
and
equation
E 1 Verify equals
du/dt
=
their
that
product.
Au, u(0)
=
solutions? and
eigenvalues
eigenvectors and
of A?
|-6
—I
B=A-11=|“31: 7
how
the
are
Ht they
(4,
Solve
du/dt
Pu, when
=
isa
P
of
Part
5.
Find
the
i
dt
increases
u(0)
2
eigenvalues
6.
Give
that an
and
row
one
while
the
2
0
0
0
is subtracted
that
from
the
another.
Br
=
fixed.
nullspace part stays
of
TO
and
1
show
u(O)
«(OT
4
A={|0
to
with
eigenvectors
4, + A2 +A3 equals the
example
|u
exponentially
3
Check
DO] Nye
2
_
—
projection:
1
du
241
Introduction
5.1
O 2] 2 0 /2 0 0!
B=ji0
and A,A2A3
trace
eigenvalues Why is a
the determinant.
equals
changed eigenvalue
zero
multiple of changed by the
when
be
can
not
a
steps of elimination?
«7s, Suppose
(a)
that A is
Show
This
an
that this
should
same
A
is
x
confirm
(b) Assuming
of A, and
eigenvalue
that
show
Ax
eigenvector:
Ax.
=
A—7I, and find the
B=
eigenvalue.
3.
Exercise
4 0,
of
eigenvector
an
is its
x
is also
x
of A~'—and
find
the
eigenvalues by imagining
that
eigenvector
an
eigenvalue. 8.
Show
that the determinant
equals the product polynomial is factored into
the characteristic
det (A —AI)
(Ay
=
of the
A)(A2
—
A)
—
(An
++
(16)
A),
—
.
and
9.
making
Show
that
coefficient
a
clever
the trace of
choice
of i.
the
equals
on (—A)"—!
the
of the
sum
right
side of
ayy —%
ay2
a2
det
(A
—
AJ)
of
10.
(—A)"~'.They
(—A)"~!and
a22
terms
in
Ain
vee
Ase
—
find the
Ad
det
=
=Eni
|
that involve
eigenvalues, in two steps. First, equation (16). Next, find all the
all
from
come
set
Qn2
the main
r
Ann
coefficient
that
Find
diagonal!
compare.
2 by 2 matrices such that the eigenvalues of (a) Construct of the eigenvalues of A and B, and the eigenvalues of
of the individual
AB
not
are
A + B
are
the not
products the
sums
eigenvalues.
(b) Verify, however, that the sum the individual eigenvalues of
eigenvalues of A + B equals the sum B, and similarly for products. Why
of the A and
of all
is this
true?
eigenvalues of A equal the eigenvalues of A‘. equals det (A? AJ). That is true because eigenvectors of A and A’ are not the same.
{f) The
This .
»
rok
i ( ¢
Jee!
ae
t
wae
%
}See+ 447 +5A
—
solution.
no
A* of this matrix
_[8
OF
-
and the
eigenvalues
5 with
nullspace
solution
=u
©
“companionmatrix”
polynomial |A
particular that
and the checker-
ones
—
find the
has eigenvalues 0, 3, basis
9
eigenvalues?
A=1/0
that its characteristic
eigenvalues 7, 8,
Or oro or
CO
re
‘O
so
of
Fe
re
|
to nonzero
of ones,
row of the
the third
the matrix
=
4 matrix
by
for both
5
eigenvalues when A and C in the previous O is repeated n r times. eigenvalue A
and
the rank
are
and D has
i
A=
1]
eigenvectors correspond
What
6 matrix
by
eigenvalues
1
a= ar
eigenvalues 4, 5, 6,
of the 6
eigenvalues
[|
Which
C has
eigenvalues 1, 2, 3,
If B has
b
a
|
and
=
3(A+
A®) from
matrices: 2
4
HE A+T=}3 eigenvalues
are
by
1.
21.
the
Compute
and
eigenvalues
A™!:
of A and
eigenvectors
243
Introduction
5.1
ans ain PE12] a= [22] A7' has the has 22.
eigenvectors
A. When
as
the
Compute
and
eigenvalues
eigenvectors
and Aq, its inverse
A’ has the
(a) If (b) If What
same
you
know
you
know
do you
do
x is A is
an
Ax
to
Ax, in order
=
A
2_| 7 -3
AF
=
|-2
eigenvalues A,
and 42, A” has
eigenvalues
the way to find A is to the way to find x is to
eigenvector, eigenvalue,
an
and
A has
A. When
as
A’:
of A and
3
4 4a=15
24.
eigenvalues 1,
eigenvalues
_{-1
23.
A has
to
prove
(a), (b), and (c)?
(a) A? is an eigenvalue of A’, as in Problem 22. (b) A~' is an eigenvalue of A~!, as in Problem 21. 20. 4+ 11s an eigenvalue of A + J, as in Problem (c) 25.
the
From
unit
vector
(4,2, 2,2),
u=
P=uu',
the
construct
rank-1
matrix
projection
Pu = u. Then u is an (a) Show that eigenvector with A 1. 0. Then A zero vector. (b) Ifv is perpendicular to u show that Pv 0. (c) Find three independent eigenvectors of P all with eigenvalue A =
=
=
=
26.
Solve
det
(Q
AI)
—
0
=
cos@
|
@= Find 27.
the
the
cosé
matrix
i’s
more
for these
Find 29.
A 3 to
Ay
three
by
4
=
matrices
3 matrix
find three
and
A, 5, =
that have
B is known
of these:
(a) the rank of B, (b) the determinant
of
x
—
A
AI)x
0. Use
=
i?
@ +i
cos
=
by the xy-plane
angle
sin@:
@.
—1.
=
1) unchanged. Then
(1,1,...,
=
reach
to
A
1. Find
=
permutations:
}1
Ahas
the
rotates
leaves
P=j,0
If
|
eigenvectors of Q by solving (Q
‘Oo
28.
quadratic formula,
—sing
sind
Every permutation two
by
B'B,
1
(0
0
1
0 then
Oo and
P=j{0
det(A
trace to have
a
1
}1
0] + d
0
=
,
0]
AI) 5) (A—4)(A determinant 9, 20, and =
—
1
0
eigenvalues 0, 1,
—
2. This
=
A727 9A —
A
information
=
4+20.
4, 5. is
enough
Eigenvalues and Eigenvectors
r5
(c) the eigenvalues of B’B, and (d) the eigenvalues of (B + )~°. 30.
Choose
the second
31.
Choose
a,
b,
0
of A
row
*
so that det (A
c,
|
=
AI)
—
9A
=
A has
that
so
4 and 7.
eigenvalues
A’. Then the eigenvalues
—
—3, 0, 3:
are
i
fo A=|0
ja 32.
Construct
Ife
1
0]
0
I}.
bc. =
=
=
of M.
:
add to 1.
by 3 Markov matrix M: positive entries down each column e. By Problem 11, A 1 is also an eigenvalue : (1, 1, 1), verify that Me has eigenvalues Challenge: A 3 by 3 singular Markov matrix with trace 5 any 3
|
|
A=
33.
is 34.
2
Find three
matrix
is
thathave
2 matrices A
The matrix
zero.
This
by
might with
singular
not
A2 0. The trace is be 0 but check that A” =0. A,
[2
1 A=|2/[2
1
2
1 2])=}|4
2
4
|2
1
2
1 35.
the
Find
(Review)
O
}0 37.
When
+ b
a
c
=
}3
show
that
(2
oO 1
|0
B=
6
+d,
vector
a
indepen-
combination
of A, B, and C:
TO
4 51,
A=|0
Any
is
x |
34
2
B. Reason:
same
the
is Bx?
What
is Ax?
eigenvalues
f1
A
=
eigenvectors:
with
eigenvalues A1,...,An
same
Xn. Then
+ ep X_. What
CyX1 bees
36.
the
Suppose A and B have dent eigenvectors X1,...,
-
|
4’s and three
three
1. Find
rank
and the determinant
zero
=
=
2
O|,
0
0]
(1, 1) is
an
a
2
C=/2
and
2 2 }2
eigenvector
2]
2
and find both
2. eigenvalues:
b
s=[*4] 38.
When P exchanges Find
eigenvectors
1 and 2 and columns
rows
of A and PAP for A
ri
40.
There
of P?
four
are
this
What numbers
|
r6
by
3
3
3]
1
Il
4
4]
3
8
4
}8
real
by 2 matrix (other than 1) with A? 3 They can be e*7"/ and e~2"4/3, What
give?
a =
J.
and
PAP=j;2
2
Construct
change.
LI:
=
=
I?
Its
and
trace
A.
permutation matrices P. be pivots? What can numbers be eigenvalues of P? can
six 3
don’t
6
Challenge problem: Is there eigenvalues must satisfy A? would
eigenvalues
11
14
determinant
and 2, the
2
A=1|3
39.
1
What
numbers
numbers
can
can
be the
be the determinants trace
of P?
What
5.2
aN
DIAGONALIZATION OF A
5.2
We start
in every
used
We
off with. the
right
of this
It is computation.
The
chapter.
“eigenvector
matrix’
and
because
of the small
lambdas
for the
Proof
Put the
eigenvectors
245
Matrix
x; in the
perfectly simple
eigenvectors diagonalize a
S the
call
lambda
section
a
MATRIX
essential
one
Diagonalization of
matrix:
A the
“eigenvalue matrix’”—using eigenvalues on its diagonal. of S$,and compute
columns
will be
and
AS
by
capital
a
columns:
,
AS
e,
A
=
X{
X92
see
Xn
Then
the trick
is
matrix
this last =
=
into
MnXn
see
-
L
-
split
to
AyXy AoX2
—
7
“
a
"
~
a
quitedifferent _
_
SA:
product
_
=
Aj
do AyXy
AgX.
mL
MpXy |
+++
XE
Xn
te
OX
|
It is crucial
to
these
keep
in the
matrices
If A
order.
right
An,
LA
i
ot
1
;
after), then A; would multiply the entries in the first first Column. As it is, SA is correct. Therefore,
row.
,
came
before
We want
A;
pen,
won ja
bihentny,
S to
(instead
of
in the
appear
Me
Me ,
(2) S is invertible,
because
its columns
were
assumed
before
or
applications.
We add four remarks
Remark
I
If the
distinct-——then are Therefore
any
Remark
plied by
2 a
matrix its
n
(the eigenvectors) giving any examples
to
be
independent.
repeated eigenvalues—the numbers eigenvectors are automatically independent (see A has
2
A 1,...,
no
An
5Dbelow).
matrixwithdistincteigenvalues.can bediagonalized. The
constant,
diagonalizing and remains
matrix an
S is not
unique.
eigenvector.
We
can
An
eigenvector x can multiply the columns
be multiof S
by
ter5
Eigenvalues
any
nonzero
even
more
is
and
Eigenvectors
always diagonal (A 3
Remark
Other
the
same.
All
just /).
S will
new
=
diagonal A. Suppose the first colthe first column of SA is Ay. If this is to agree with the first by matrix multiplication is Ay, then y mustbe an eigenvector: of the eigenvectors in S and the eigenvalues in A is automatically
of S is y. Then column of AS, which =
is
matrices
umn
Ay
diagonalizing S$| Repeated eigenvalues leave J, any invertible S$ will do: S~'/S example A vectors are eigenvectors of the identity.
and produce a constants, in S. For the trivial freedom
A1y. The order
produce
not
a
-
Remark
4
matrices
are
Not
all matrices
linearly independent eigenvectors, so example of a “defective matrix” is
n possess The standard
diagonalizable.
0
all
not
1
a=[94) Its
eigenvalues are A;
A2
=
0, since it is triangular with
=
I}
—}
det(A All
eigenvectors
of this A
21)
=
0 is
=
plicity
double
a
is 1—there
Here A would
is
a more
have
of the vector
(1, 0):
=f
eigenvalue—its algebraic multiplicity is is only one independent eigenvector. We direct
to be the
proof
zero
that
matrix.
this
A is not
But if
.
A=
Repeated eigenvalues Their
3 and
of
are
eigenvalues are 3, eigenvectors—which
1, 1. They
needed
that A a
3
=
result
is
0. It
=
not
singular!
needs
to
be
we
came
A=
The
S.
Since
no
1
and
geometric multi-
construct
0, then
=
of X
| 5
for S$. That
can’t
0. There
0
are
the
2. But
diagonalizable. S~'AS
A=
postmultiply by S~', to deduce falsely That failure of diagonalization was not
and
49
0
0 o}t=[o i.
diagonal:
det| 4)=%.
_
multiples
are
the
on
zeros
A;
A2
=
=
premultiply by
invertible
S
S.
from
A;
2
—-l
the
Az:
=
F q
problem is
0,
,
shortage
emphasized:
Diagonalizability of A depends on enough eigenvectors. Invertibility of A depends on nonzero eigenvalues. There
is
connection
between
diagonalizability (n independent eigenvectors) and invertibility (no zero eigenvalues). The only indication given by the eigenvalues is this: Diagonalization can fail only if there are repeated eigenvalues. Even then, it does not 1 but itis already diagonal! There always fail. A J has repeated eigenvalues 1, 1,..., is no shortage of eigenvectors in that case. no
=
|
5.2
The
is to
test
check, for
of
ideas,
247
Matrix
are that is repeated p times, whether p other words, whether A AJ has rank n p. To complete show that distinct eigenvalues present no problem.
there
—
—
have
we
a
eigenvalue
an
independent eigenvectors—in that circle
of Diagonalization
to
If eigenvectors x) » Xpcomespond. se“then: thoseeigenvectorsare linearly: tndependen |
D.
neem,
first Suppose C1X1
the
+
CoxX2
that
2, and that O. Multiplying by A, we
=
kK
the vector
previous equation,
x.
of x,
combination
some
=
find c,)A,x; + coA2x2
=
and x2 produces zero: O. Subtracting Az times
disappears:
C1(Ay Ag)x, 0. =
~
Since
Similarly
and the two 4 0, we are forced into c; = 0. c= O, vectors are independent; only the trivial combination gives zero. combinato any number of eigenvectors: If some This same argument extends tion produces zero, multiply by A, subtract A; times the original combination, and x; which produces zero. x,-1, disappears—leaving a combination of x), By repeating the same steps (this is really mathematical induction) we end up with a multiple of x; that 0, and ultimately every c; = 0. Therefore eigenvectors produces zero. This forces c; that come from distinct eigenvalues are automatically independent. A matrix with n distinct eigenvalues can be 1Is the typical case.
A; ~ A» and
x,
.
_
...,
=
This diagonalizedy’ we at Oe
ee
“s
Examples of Diagonalization The main point its
of this section
matrix eigenvalue
A
S~'AS
(diagonal).
= A.
We
§
EP
matrix S converts eigenvector
The
this for
see
eS
ay
ae
is
ye
i
?
S
projections
A into
and rotations.
|
re
The
projection
A
into
the columns
=
i |
has
1
1
last
The
equation
eigenvalues
can
a
can
we
must
have
two
You
dy
= —i.
not
are
be rotated
for the
roots—but
The
at a
zero
the way
those
roots
out.
The
eigenvectors
are
so
; and
clear
0
=
for
|
Ku.
a
eigenvectors
go
O
S~'AS
=
A.
rotation:
has
still have which
vector,
du/dt
glance. Therefore
QO -l
K=
be able to solve
see
c 3:
,
The
15 AS=SA=|
and
be verified
themselves
vector
can’t—except
=
11
|} m
90° rotation
How
A
of S:
s=
That
eigenvalue matrix
det(K
its direction
—Al)=A°
+1.
it unchanged? Apparently
But there must be eigenvalues, characteristic polynomial A? + 1 should
is useless. The
are not real.
still
i and A, imaginary numbers, real. Somehow, in turning through90°, they are
eigenvalues also not
and
of K
are
=
Eigenvalues and Eigenvectors
ter5
i
multiplied by
or
—1:
_[-i
(K —AyL)x1
-1]
AgI)x2
—
—-1]
_
eigenvalues
1
They go
distinct,
are
ly]
and
xj
and
x2
[0
_
=
=
L..
The
_
“_
1
[i (K
[0
[fy]
oh> A 7:2} (9
—
if
even
and the
imaginary,
[1
i a,
=
fi _ =
eigenvectors
are
independent.
of S:
into the columns
|
i
1
S=
|
with
faced
are
an
S'KS
and
i
—i
We
’
=
_
=60 O
-i
ir
are needed even complex numbers eigenvalues, there are always n complex the imaginary part is zero.) If there are too
that
inescapable fact,
If there are too few real for real matrices. eigenvalues. (Complex includes real, when few eigenvectors in the real world R°, or in R”, we look in C? or C”. The space with complex components, and it has new definitions vectors all column contains length and inner product and orthogonality. But it is not more difficult than R”, and to the complex case. 5.5 we make an easy conversion Section
is
There
d2,..., 2,
exactly
are
Ax
from
start
in which
situation
more
one
dx, and
=
and
2 is
by
A leaves
The
eigenvalue
an
the direction
result
same
matrix
The
eigenvalues
of A~'
are
=A?x.
=AAx
(3)
A”, with
of
does the second. unchanged, from diagonalization, by squaring S~!'AS
This
this rule
1/4;.
AAx
eigenvalues of A” eigenvector of A?. We
easy. A is also an
of
That
the
eigenvector
same
then
x
of A?
squared.
If A is invertible values
=
same
continues
also can
to hold
applies be
S,
seen
A?
so
the
eigenvectors
without
are
Ax=Ax
then
x=AA7!x
(the power
k
=
diagonalizing: 1
and
{*
=
A:
=
A?.
unchanged.
The
of A:
“=
if
multiplication
Ss S'A?S
or
for any power
to its inverse even
If the first
x.
so
(S~'AS)(S-AS)=
A? is diagonalized by the are
in
The
are
eigenvector of multiply again by A:
comes
Eigenvalues
the calculations
every
A’x Thus
of
A and AB
and Products:
Powers
C”
=
A7'x,
—1). The eigen-
Diagonalization of
5.2
If K is rotation
through 90°, then through —90°:
K~' is rotation
—-!1
_|0
K=| At eigenvalues of K —i and 1/(—i) 1/i
The
=
K 4
For won't
what here
{1
K 2
are
i and
=i.
Then
0)
=| 4}. 0
—i; their
K* is
O
=| 1
and
rotation
complete
a
A 4
also
—1 and
are
squares
_
=
{i
The mistake do not.
lies in
We could
A and
that
assuming
have
ABx
matrices
two
=
B share
with O
Aux
=
zero
the
5
are
360°: _]1 O
|
=
=
Ax.
eigenvector eigenvalues, while AB
x.
same
0
1/;0
=| 5
the
WAx
=
1
O
eigenvalues of AB—but we same reasoning, hoping to prove A and py is an eigenvalue of B,
product of two matrices, we can ask about It is very tempting to try the get a good answer. is not in general true. If X is an eigenvalue of is the false proof that AB has the eigenvalue wi: proof
through O
—/) and
means
—1; their reciprocals
{5(—i)*|
a
False
K {|
and
249
Matrix
through 180° (which
K? is rotation
{rl
a
1
In
has
general, they 1
=
1:
O
100}[10] [09} =
eigenvectors of this A and B are completely different, which is typical. For the same the eigenvalues of A + B generally have nothing to do with 4 + w. reason, for A and This false proof does suggest what is true. If the eigenvector is the same B, then the eigenvalues multiply and AB has the eigenvalue A. But there is something more important. There is an easy way to recognize when A and B share a full set of eigenvectors, and that is a key question in quantum mechanics: The
the
same
«
matrixif andonlyif 7 matrices s share igenvector oFDiagonalizablen Proof
Ifthe
in either
order:
AB
Since
=
same
S
both
diagonalizes
SA,S'SA,S7!
=
SA, A,S7!
A=
and
SA,S~!
and
multiply
wecan
SA,A,S™'.
=
BA. A2A, (diagonal matrices always commute) we have AB In the opposite direction, suppose AB BA. Starting from Ax Ax, we have
A; Az
=
=
ABx x
and Bx
are
both
for convenience
assume
one-dimensional—then of B
SA,S~',
=SA,S7!SA,S7!
BA
=
=
Thus
B=
as
well
as
A. The
=
BAx
=
Bhx
=
iXBx.
0). If we eigenvectors of A, sharing the same A (or else Bx that the eigenvalues of A are distinct—the eigenspaces are all Bx must be a multiple of x. In other words x is an eigenvecto proofwith repeated eigenvalues is a little longer. =
Eigenvalues and Eigenvectors
ter5
uncertainty Heisenberg’s
principle
from
noncommuting matrices, like is skew-symmetric, Q. Position is symmetric, momentum position P and momentum J. The uncertainty principlefollows and together they satisfy QP PQ directly of Section 3.2: < from the Schwarz || Qx|||| Px|| inequality (Ox)'(Px) =
—
“|x|?
x™x
=
The product of {|Ox||/||x|| wave
you
is x—is
function
try
the
to measure
and
x™(QP PQ)x
=
of
position
position
when
errors,
small, because get both errors change its momentum.
impossible particle you
a
and
momentum
It is
.
2||Ox||||Px|l.
©0,
matrix?
Markov
Asia, and Europe have assets of $4 trillion. and $2 trillion in Europe. Each year in the Americas
are
and
stays home that
;
and
s is
sent
of Asia
goes to each to the Americas.
and
Europe.
5
For Asia
gives
Americas|
| Americas A
=
Asia |
|}.
=
Ho
approach a
to be a
A have
stays home,
(a) Find the matrix
process?
Markov
a
in the Americas,
companies
and Europe,
corresponding
Uoo
b
a
for any a and b. uz 4 and b does
Does
is the limit?
Multinational
following equation
and b is the
a
SA*S~'uo (b) Compute u, on (c) Under what condition and what
state
_
1.
=
mr
14.
half stay where they and Los Angeles. Boston
other
and in Los
Set up the 3 the eigenvalue 4
Every month half
trucks. Move-It-Yourself
for
centers
Angeles go to Chicago, the trucks in Chicago are split equally between and find the steady by 3 transition matrix A,
and the
are,
major
three
are
Europe
Europe
|
k+1
year
4
Asia
|
a
year
k
and eigenvectors of A. (b) Find the eigenvalues of the $4 trillion as the world ends. (c) Find the limiting distribution of the $4 trillion at year k. (d) Find the distribution the sum that the sum of the components of Ax equals 15. If A is a Markov matrix, show of the 24x with A 4 1, the components if Ax of the components of x. Deduce that =
eigenvector add 16.
The a
solution
circle:
and centered
(F)
ung.
(B)
Ung,
(C)
Unyi
Find
the
difference
—
—
—
du/dt
to =
u
to zero.
(cost,
=
Au
u
we
Suppose
sint).
—i) goes around in by forward, backward,
i and
(eigenvalues ; | approximate du/dt
=
F, B, C:
differences Un
=
Aun
Un
=
AUn4+1OF
Un
=
OF
Upp
(I + A)u,
=
Unt
(I
=
+ Uy) OF 5A(Un+1
eigenvalues of equation does
J + A,
UZ
the solution
—-
Un+l —
=
stay
(J and on
is Euler’s
method).
(backward Euler).
A)~'u,
A)!, uv,
(this
_
Lay”(J + +A)Un. (I Lay (I + 3A).For —
a
circle?
which
Difference
5.3
17.
What
18.
Find
of
values
the
largest
in vj.)
produce instability
aw
b,
a,
for which
c
these
matrices
sal 19.
term
Multiplying
A)~'.
represents
(I
finite
the condition
sum;
that it
—
equals (J
It is
A={0
10
21.
A
sum
agrees
with
increase
What
,
(I
fax
23-29
J
0
1
O
0
k
as
2"
6
8]
—>
25.
The
|O}’
square 26.
If A and their
27.
6
8
6
8]
=
the
same
A and
SA,S~'.
to prove
(a) When (b) When
do all
5
+1
and
of A from
R
for B*:
[3* 3% —24 ak
9, the eigenvalues of B 4
;
—1 and 9:
are
5
4
B=` if S./A S~!. Why
=
|:
5* 41
this formula
|
k
is there
no
real
matrix
.
B have
do the
[5k
B=2
has
i’s
with
into
Prove
A
A*:
for
this formula
915-1]
SA*S~'
| and
root
matrix”
SA*S7}
=
rap
has
| 5
square of B?
B have
their
-
1}?
A*
and
1
are
2)"
1
3
of A
factorizations
Suppose B
28.
root
that
explicitly
A
SA‘ $~! to prove
compute
matrix
0.
2)"[0
rap | a
a
series, and confirm
A
SAS!
A=
5
Find
it has
(the steady states) of the following?
00
B and compute
eigenvalues
nonnegative, provided
has Amax =
which
fh
about
B= {3
A is
series
This
=I.
why increasing the “consumption A, (and slow down the expansion).
2
Diagonalize
when
A+ A?+---)
+
A* (including A°) and show
A=) | 24.
A)(I
~
economics
or
=
A and
Diagonalize
(J
A)7?.
—
A
are
1
find the powers
the limits
are
Problems 23.
=
Explain bymathematics must
22.
k 3
For
that
nonnegative
rO
20.
stable:
neutrally
or
for that is Ama, < 1. Add up the infinite for the consumption matrix
A)~',
—
stable
are
A(V_,+ Wp)?
=
Wng1
lo oh LSel check
by term,
@(V_ + Wn),
=
265
A‘
and Powers
Equations
that
are
the
=
same
the
same.
full
same
AB
the
set
of
full
set
So A
=
of
independent eigenvectors,
B.
eigenvectors,
so
that
A=
SA,S™! and
BA.
eigenvectors for the eigenvectors
4
for
=
0 span the nullspace 4 34 0 span the column
N(A)? space
C(A)?
Eigenvalues and Eigenvectors
r5
if all |A;| < 1, and they blow up if any zero The powers A* approach in his book Linear Algebra. Lax gives four striking examples 7 ? _[5 Al 5 _|?
29.
Py 1)A
1074
|
a
(71024
4
—
=
e” of
B and
_[
|| D104|
_C
69
5
bs
Bi =JandC?
that
C to show
1. Peter
>
10778
eG
|A;|
Seg
matrix theory has of equations, rather than a single equation, you find a system = A*ug depended on the powers equations, the solution u,z a part to play. For difference solution u(t) = e“’u(0) depends on the exponential of A. For differential equations, the it, we turn right away to an to understand of A. To define this exponential, and
Wherever
example:
Differential ifferential
The
step is always
first
to
find the
t
77
du _ay=
equati tion
u
1
a
the
eigenvalues (—1 and —3) and
aL
Af=cof]
1 (1)
—2|4
=
eigenvectors:
j-ca
the general solution lead to u(t). Probably the best is to match 0. u(0) att to the initial vector are These of pure exponential solutions. is a combination ~The general solution where A is an eigenvalue of A and x is its eigenvector. solutions of the special form ce'x, = A(ce*'x). differential equation, since d/ dt(ce“x) These pure solutions satisfy the of the chapter.) In this 2 by 2 to eigenvalues at the start ‘ntroduction
approaches
several
Then
|
=
(They were
our
example, there
two
are
pure
exponentials to
ce
xy+ oex,
be combined:
u(t)
Solution
=
of
1
—t
Lyle
! u=
-l
é
1)
34
(2)
C2
.
At time
zero,
when
as
recognize S, they were for
solution
u(O)
the matrix
difference
=
1, u(O) determines
=
+ 2X2
cx,
=
c,
and c>:
F 1< ~~
eigenvectors. The constants equations. Substituting them of
c
Sc.
=
2
=
back
S~!u(0) are the same into equation (2), the
is
u(t) Here
exponentials are e°
condition
Initial You
the
is the
=
F4 a |
+3,722
=
same
makes
the
three
exponents
the two
methods
appear:
A
=
the
consistent;
equations change form.
It
0, A to
seems
1,
=
growth us
that
quicker.
du/dt
=
differential
4
4
equation
u
is easy
describes
to
explain a
process
and at the of
same
diffusion.
5.4
oncentration —_—
OQ
V
269
Equations and e*
Differential
0
W
Fe
=
So
Figure
5.1
Divide
an
contain
of diffusion
A model
infinite
pipe
concentrations
between
rate
Sy
segment,
two
into four segments v(Q) and w(0)
is continuous
in time
remains
four
of
Attimet
each
difference the unknowns
are
S$;and Sp. The concentration v(t) in S; is changing in two ways. and into or out of S,. The net rate of change is dv/dt, and
inner
segments
time
t, the diffusion in concentrations. Within each
(zero in the infinite
in space;
0, the middle
=
At
chemical.
a
uniform
but discrete
segments.
(Figure 5.1).
is the
adjacent segments
the concentration
53
Sg
between
ae
sam
segments). The process v(t) and w(f) in the two
segments
There
is diffusion
dw/dt
into
So,
is similar:
d
Flow
rate
Flow
rate into
into
—
S,
d oe
S$,
law of diffusion
exactly
matches
wh
=
(O-w)+(v—
example du/dt
our
dt
w).
=
Au:
-2
—2u+w] |v—-2wi
du
and
yall
(w—-v)+(O0-—v)
dt
—
This
=
1
|
1
-—2|
1
°
eigenvalues —1 and —3 will govern the solution. They give the rate at which the concentrations decay, and 4; is the more important because only an exceptional set of can lead to “superdecay” at the rate e~°’. In fact, those conditions starting conditions must from the eigenvector (1, —1). If the experiment admits come only nonnegative concentrations, superdecay is impossible and the limiting rate must be e~’. The solution the two that decays at this slower rate corresponds to the eigenvector (1, 1). Therefore —> oo. will become concentrations nearly equal (typical for diffusion) as t comment on One more this example: It is a discrete approximation, with only two diffusion described unknowns, to the continuous by this partial differential equation: The
2
Heat
ou Ot
equation
equation is approached by length 1/N. The discrete system
=
That heat
dividing
of
with
uy
Un
—2
the
pipe
NV unknowns
1
ou ax?
into smaller is
governed by Uy}
1
-—2
and smaller
Un
segments,
matrix
This is the finite difference In the
tions
limit of
Those
N
as
—>
eigenvectors sinnax u(x)
with
=
are
Ax
=
—n?x*. Then the solution
A=
side Au
right from
equation du/dt exponentials, but now there Ax, we have eigenfunctions
of pure
from
heat
the
reach
we
00,
combinations
still
are
Instead
d*u/dx,
derivative
the second
with the 1, —2, 1 pattern. The after a scaling factor N* comes
approaches problem.
the flow
0?u/dx’.
=
are
from
Its
solu-
infinitely many.
d?u/dx*
to the heat
Au.
=
equation
is
OO
yeep vet
u(t)
oe
sin
Scene
=
nix.
n=l
The
constants
vectors
as
As
by
of Differential
the initial the
u(x), because
functions
are
Stability Just
determined
are
c,
condition.
The
is continuous
problem
novelty
is that
and not
discrete.
the
Equations
equations, the eigenvalues decide how u(t) behaves as be diagonalized, there will be m pure exponential solutions A can equation, and any specific solution u(t) is some combination
for difference
long
as
differential
u(t)
eigen-
Ser*S tu
=
=
x,
ce
Hee
tener
t
>
oo.
to
the
xy.
e*'’. If they all approach zero, then u(t) approaches if they all stay bounded, then u(t) stays bounded; if one of them blows up, then zero; the solution will blow up. Furthermore, the except for very special starting conditions size of e*’ depends only on the real part of A. It is only the real parts of the eigenvalues
Stability
that govern
et This
stability: =
decays
part is
factors
governed by those
is
ete! for
a
—
If
X=
a
+ ib, then
andthe
e“(cosbt +isinbt)
|e”|
magnitudeis
=e”.
for a 0, and it explodes for a > 0. The imaginary 0, itis constant from the real part. oscillations, but the amplitude comes =
:
|
0 If
or
0
0
0
Ur=}
-l/77-1
U;y'(U;'AU;)U2=
then
9
My
9 At the last step, an eigenvector of the 2 into a unitary M3, which is put into the
by corner
2 matrix
in the lower
°
~I
—ly7-l
= U;*(UyU;'AU;U2)U3=|
O
This We
could
lemma use
=
U,U2U3 is still
applies it to prove
to
a
O
Ap
*
1
0
ws
|}O
O
#*
right-hand
unitary matrix,
and
*K An2
9g
| O
U The product
* x
x
corner
goes
of U3:
dy
Triangular
*
*K
*
x
(
rx 0
U-'AU
all matrices, with no assumption A* approach zero that the powers
*K
0 =
that
=P
*
T. A is
when
all
diagonalizable.
Ai|
Diagonalizing Symmetricaiid Hermitian This
of
dividing by .
10.
11.
cos
=
one,
finish
and
make
A
the
the basis
(1, 4)? The columns
of M@
V;
—
from
come wad | N
a
same
also
d,;v, + dyv2. Check
as
the vector
two bases, express
For the
—sin@
0|
coséd
OQ}.
ee M~'AM.
(3, 1) entry of
numerically If V;
and
that
in
calculation
different
a
its
eigenvalues, we AJ) by using only the square see
would roots
(1, 4)
=
to the
basis
exercise:
the last
If the transformation
T is
a
=
reflection
combinations Vi expressing / Vaas 4 and “
a
Re
BM
(3,9)
#
Oc
=
asa
4
c,V, +c. combination
M connects
m,v,
across
that
= (2, 5),
v;
c
tod:
Mc
=
V2 and
d.
and V. + m2,v2 + m2v2, mj2v, = = d>, the vectors c; V; +c¢2 V2 and d, v1 dy and 191C,+M2C2 m11C, +M42C2 Mc = d. This is the “change of basis formula” the same. are
Confirm
be
|
(1, 1), V2
=
>
the v’s.
plane:
eigenvalue
diagonal
polynomial det (A @—and that is impossible. changes
mijv;of
—C.
so
could
we
1-2
in the
zero
to
plus—minus pairs. —A(Dx).
ee
of the
M
matrix
What vo
if
the roots
main
the
|
the rotations that produce easy to continue, because We have to leave one spoil the new zero in the corner.
is not
d and A will
Otherwise,
way.
in =
Mz=+=J|sin@0
produce
2
—]
C is similar
fcos@
il
@ to
11
>
4
that
M in the
fl, ig
Choose
—l1
to AB.
=
a
2
—]
(and D 1s invertible), show that the eigenvalues of C must come Ax, then C(Dx) directly that if Cx
any A and
-1
B=
to
—DC
=
of them.
and find two
,
i
invertible) that
=
that
to show
|
B is
Show
r |
2
1
Deduce
—Is,
Similar
121
If CD
to
B
(Let
J?
to
Loo.
is
i
(a) (b) (c)
similar
A.
to
A + J.
to
up of Is and
made
are
2
12
(if
similar
are
C is similar
that
1
A=
Show
matrices
that
similar
A is never
B, show
to
Which
all matrices
diagonal M,
a
similar
A and
to
and
=
the 45° line in the
plane, find
+d2v2
its matrix
respect to the standard basis vj = (1, 0), v2 = (0, 1), and also with respect are similar. V, = (1, 1), V2 C1, —1). Show that those matrices
with
to
=
12.
The
identity
transformation
takes
every
vector
to
itself:
Tx=x.
Find
the
if the
corresponding matrix, ond basis
13.
is
= (1,0),
w,;
derivative
The
of
(a) Writethe
3
is v;
= (1,2),
cx?
is not
is b + 2cx
D such
=
v2
Ox?,
+
and
(3,4) the identity matrix!)
=
3 matrix
by
basis
= ©, 1). (It
wo
+ bx +
a
first
(
sec-
»
as
?
@
that
the
aK
~
v8”
|
fa D
|
b
b|
_
}2c}.
=
0.
na j
results
in terms of (b) Compute D° and interpret the (c) What are the eigenvalues and eigenvectors of D?
14.
Show
is
an
eigenvalue f aT f x)= eigenvalues (here —co
= fo f (0)dt has space of 2 by 2 matrices, Find the and eigenvalues
T f(x)
no
HN
pie
Vrecusrscreitensnnacere
15.
number
that every
On the matrix.
derivatives.
the Gf /d transformatio /dxybutt
=
to 1;
and then
_
to
M will do it:
diagonal
a
_
1
we
0
L. be
determinants
fli
J
1] f1 O]fo 1] fi
[9
0 1{ and 00
can
to
only
2171
is to transpose
job
012
a
ead
1 with
2 to 1, and
1
pBP=|t
is
a
hi (tOEIL 6
4
Zero
1 and
change
to
job _
‘tpp
A={0
| || The
that.
0
=
ru
(A)
{1
_
eigenvalues
now
—]
(B)
B=
1 and1+1=2. diagonal) are 2. The eigenvalues satisfy 1-1 J, which are triangular, the eigenvalues are on the diagonal. We want matrices are similar—they all belong to the same family.
From
(T)
have
and
the main
7, B, and
For
a
matrices
four
These
consists
a=
(8)
block, and A
B has the additional
/,. The matrix
eigenvector (0, 1, 0), and its Jordan As for J; is J, with two blocks. zero matrix, it is in a family by itself; the only M~'0M 0. A count of the eigenvectors will determine similar to J; is J when is nothing more complicated than a triple eigenvalue. to
=
=
Application to difference and differential equations (powers SAS™! are easy: AY can be diagonalized, the powers of A we have Jordan’s similarity A MJ M7, so now we need the =
=
=
A*® (MJM™')(MJM"!).--(MJM7) =
=
and
exponentials). SA*S—. In every
powers
of J:
MJ*¥ mM,
If A case
NA J is
This
and the powers
block-diagonal,
block
fA
1
OS
VJjyi=]O (0
A
1}
0
AY
J, will
when
enter
is in the solution
exponential
of each
block
Fak
be taken
can
kak Ak
={O
separately:
hoLeekre} =ae
(9)
.
a
60
}O0
301
Similarity Transformations
5.6
|
a
triple eigenvaluewith a single eigenvector. corresponding differential equation:
A is
to the
7
|
eM
e7’=
Exponential
Its
t
10 0
er e*
L t2
60
et
eit
te
(10)
|.
|
Here
J +
Jjt
(Jjt)?/2!
+
---
+
produces
1 +
At
A*t?/2!+
+
e*
=
---
on
the
diagonal. The
column
third
of this
~
=
43|
|
1
O
1/0 A
Il
0
Xr
A
uy{
ai
-
=
uy
d
exponential
we
0
i
directly
comes
from
solving du/dt
_
-
0
Uj
starting
fu
3
2
Jiu:
=
from
up
=
|
0}. 1
|
(since J; is triangular). The last equation by back-substitution + u3, and its yields u3=e™. The equation for uz is duz/dt=Auy du3/dt=)u3 solution is te’. The top equation is du; /dt Au, + ug, and its solution is nee, When 1 times. A has multiplicity m with only one eigenvector, the extra factor t appears m These powers and exponentials of J are a part of the solutions uz and u(t). The matrix J: other part is the / that connects the original A to the more convenient This
can
be
solved
=
—
if
Up,
if
du/dt
Whenand M
J
Sections5.3
how the
and
Jordan
=
then
uy
= Au
then
u(t) = e“u(0) =
are
S and
5.4.
Appendix
form
can
Akuyg MJ*M~'ug
Au,
A
=
=
(the diagonalizable B returns
be reached.
I
to
hope
the the
Me’ M~ '4(0).
case) those
are
nondiagonalizable following table will
formulas
the
be
shows
and
case, a
convenient
summary.
Whit circle: Tet ‘ont .
r
of
Eigenvalues and Eigenvectors
ter5
Problem 1.
5.6
Set
If ‘B is similar
M-'AM
and
Describe
in words
C
Find
C is N7!BN.)
=
Pe
A=
|
1
2
—1s,
to show
that oa
Looe.
similar
1s
121
(if
B=
to
2
—-il
—]
2
—Il
27
1
2
11 2
—|]
|
invertible) that BA is similar
B is
=
of them.
and find two
,
B
[?
to
4]
5
to
(Let
A + J.
to
|]
1
Show
similar
A.
to
“
2
;
.
C is similar
that
are
similar
are
up of 1s and
made
diagonal M,
a
matrices
that
similar
never
B, show
to
Which
all matrices
A is
Explain why
similar
A and
to
|
to AB.
(a) If CD —DC (and D is invertible), show that C is similar to —C. in plus—minus pairs. (b) Deduce that the eigenvalues of C must come Ax, then C(Dx) (c) Show directly that if Cx —(Dx). =
=
Consider
A and
any
a
=
“Givens
rotation”
fa
c|
b
A=|de
Note
This
place of diagonal below way.
.
What v2
matrix
So Mi; .
10.
same
also
dv,
as
Confirm
V, 12.
The
respect =
0
1 |
M~'AM.
in the (3, 1) entry of
zero
and
is
impossible.
of M
dyv2. Check
to the =
is the
that
numerically
“change
my,v,
=
basis
+
v;
across =
takes
every
,
combination Vi +c.
asa
c1
c
m2 v2 and vectors
formula”
Mc
tod:
vo
=
matrices vector
Mc
=
V> and
d.
+ mo2V2, and V. mj2v1 C1 Vi +c2 V2and d,v;-+d,v2 =
=
d.
the 45° line in the
(1, 0),
C1, —1). Show that those
identity transformation
~
M connects
d>, the
==
of basis
T is a reflection
standard
=
Oc Vv Wa)
[| = the vector (3,9)
dy and m71C,-+™M22C,
This
V2
Nh
a)
If Vj
exercise:
=
(1, 1), V2
from |
ar
(1, 4) to the basis v, = (2, 5), V, and V2as combinations expressing
= (1, 1),
V; come
two bases, express +
0
the rotations that produce easy to continue, because We have to leave one spoil the new zero in the corner.
the basis
v’s.
If the transformation
with
that
changes
the last
MyiCr+MyC2 are the same. 11.
M
of the
For the
Oj}.
|
produce
one,
(1, 4)? The columns
=
cos@
—
0—and
cos
0)
finish the eigenvalue calculation in a different if we could make A diagonal and see its eigenvalues, we would be of the polynomial det (A AJ) by usingonly the square roots that
the main
the roots
determine
—sind
so
d and h will
Otherwise,
finding
is not
“zeroing”
in
zero
@ to
plane:
M=+jsin@
At]
rotationangle
the
1-2
'cos6
f|, fg
Choose
M in the
plane,
find its matrix
(0, 1), and also with respect are
to
to
similar. itself:
Tx=x.
Find
the
if the
corresponding matrix, ond basis
13.
is w;
of
derivative
The
(1, 0),
=
w2
basis
is vy;
(0, 1). (It is
=
cx? is
+ bx +
a
first
b + 2cx
=
not
and
(3,4) (1,2), v2 the identity matrix!) =
Ox?.
+
303
SimilarityTransformations
5.6
3
3 matrix
by
D such
that
+f
ˇ
g De
Writethe
sec-
_
|
(a)
the
ye
wv
/
a,
@
|
ree D
|
|b
.
}2c}.
=
aan
sg
0
i”
(b) Compute D? and interpret the results in terms of derivatives. (c) What are the eigenvalues and eigenvectors of D? |
14.
Show
that every
Tf (x)
16.
is
dt has
an
no
=
On the space of 2 by 2 matrices, Find the eigenvalues and matrix.
(a) Find
=
SA so Diagonalizable:
=
I
1
orthogonal
x/x;
orthogonal
x;
orthogonal
¥;
orthogonal
x;
x;
x;
x;
=
0
=
0
x(B)
=
Mx(A) nullspace
space;
A(A) +¢
eigenvectors
of A
Amax he
—
0
=
of A
|A|
0
=
eigenvectors
I
3].
throughout Chapter 5. question is fundamental be helpful. For each class of matrices, here are A; and eigenvectors x;.
Symmetric: A’
B’.
Eigenvalues and Eigenvectors
of
the
are
to
to
eigenvalues stay
Properties
A? is similar
be similar
If we exchange
(e)
B, then
to
is similar
(C)
true?
statements
=
I
Steady
eotik/n
diagonal
of A
0
state
(1,Xk
columns
0
x >
of S
Met)
Lees
independent
are
:
OAgr Symmetric: Jordan:
Every
J
matrix:
=
M~
ort.
1AM
A=
UXV!
of J diagonal
each
rank(A)
eigenvectors
=
rank(2)
block
gives of
1eigenvector
A'A, AA’
in V, U
Review
3.1
Find
the
and
eigenvalues
0].
1
a=) :
$
5.2
Find
B=!4, At 2)
a=s[’52] If A has
0 and
eigenvalues
1, corresponding
p) you tell in advance What is A?
how
5.4
5.5
In the
What
is the relation
Does
there
exist
matrix
a
numbers
Solve for
Would
annually 5.8
True
prefer
you
to
are
its trace
and
and
determi-
eigenvectors
of A??
real
with
matrix
and then
find
1
is invertible
for all
have
u(O)=
interest
A + c/
for all real
invertible
A +r/J
r.
eAt. 1
if
u
family
A
andif
compounded
0
u(Q) =
quarterly
at
H ;
40%
per
year,
or
at 50%?
false
or
a
F5
7
eigenvalues
that the entire
A such
values 3
ee
eigenvectors
What
symmetric?
be the
will
to the
Lip
ot
A is
ot
to A?
c? Find
initial
both
A?
of
du
3.7
what
previous problem,
complex 5.6
that
can
nant?
S, for
2
7
and
afar
5.3
matrix
diagonalizing
and A~! if
of A
the determinants
and the
eigenvectors,
307
Exercises
if
(with counterexample
false):
(a) If B is formed from A by exchanging two rows, then B is similar to A. (b) Ifa triangular matrix is similar to a diagonal matrix, it is already diagonal. (c) Any two of these statements imply the third: A is Hermitian, A is unitary, A? =I.
(d) 5.9
What
If A and
happens
F_, related 5.10
Find
B
the
diagonalizable,
are
to
the Fibonacci
sequence
F,? The law Fxy4o
to
Fyii
=
general solution
to
is AB.
so
du/dt
we
is
=
Au if
=
rO A=
in time, and how go backward 1. + F; is still in force, so F_; if
—1
0
il
QO —l}.
0
1
0
|
Can you initial
find
value
a
time
u(0)?
T at which
the
solution
u(T) is guaranteed
to return
to
the
er5
5.11
Eigenvalues
and
Eigenvectors
that
If P is the matrix in S is
and
eigenvector,
an
(Note the connection that
R” onto
projects so
P?
to
=
is every P, which
of order
matrix
subspace S, explain why
a
in St.
vector
1 is the
5.12
Show
5.13
equation (a) Show that the matrix differential X(t) =e“ X(O)e™. of dX/dt that the solutions Prove (b) = AX
every
>
A*
that
means
What
of two
sum
dX/dt
5.15
of A
If the
eigenvalues to du/dt solutions the
Find
and
the
—XA
keep
=
find
the
1
for the
il.
-i
0] and is it true?
eigenvectors,
solve
to
By trying
eigenvalues
0
-i
(0
5.16
same
of
eigenvectors
do you expect
the solution
=
A={i
property
B has
eigenvectors (5,2) and (2, 1), Aug, starting from u (9, 4).
0
What
matrices.
3 with
Au and u;z,,;
=
eigenvalues
1 and
are
:
AX +X
=
eigenvalues?
i.)
=
singular
for all time.
5.14
the
are
vector
every
calle al=[oo)=4 no
root.
square
Change
the
entries
diagonal
(a) Find the eigenvalues and eigenvectors of
A
1
=
4
=
;
=
=
=
ast
True
a
Hl
Au starting from u(O) (100, 100). (b) Solve du/dt to stockbrokers and income =income If w(t) v(t) (c) 4w anddw/dt each other by du/dt iu, what does
5.18
of A to 4 and find
root.
square
5.17
A has
that
show
—>
=
client, and they help
the ratio
v/w approach
oo?
false, with
or
to
if true
reason
and
if false:
counterexample
Au starting from u(0) (a) For every matrix A, there is a solution to du/dt 1). (d,..., matrix can be diagonalized. invertible (b) Every (c) Every diagonalizable matrix can be inverted. the signs of its eigenvalues. (d) Exchanging the rows of a 2 by 2 matrix reverses 0. (e) If eigenvectors x and y correspond to distinct eigenvalues, then x" y =
=
=
5.19
onal matrix.
5.20
show
askew-symmetric matrix,
If K is
If
K"
tors
are
(a) (b)
How
=
How
Find Q if
K
=
the
(skew-Hermitian), orthogonal. that that
O
=
(I
K)(I
—
+
K)~!
is
an
orthog-
Ls |.
—K
do you know do you know
that
K K
—
=
/
eigenvalues
are
imaginary
is invertible?
UAU®
for
a
unitary
U?
and the
eigenvec-
Review
309
Exercises
aleat
(c) Why is e“* unitary? (d) Why is e*‘ unitary? 5.21
diagonal matrix with entries eigenvalues in the following case? 1
Az=i1
Fl 5.22
If A* n
—J, what
=
5.23
If Ax
5.24
A variation
Aly
A,x and
=
hoy (all
=
the Fourier
on
Verify —1
5.25
S'
that
=.sin26_—s
1
1
oy
| sin36
x! y
by
show
matrix
n
that
0.
=
sin 30 x
sin6@
sin66
sin96 the
are
m
its
are
matrix’:
sin46
S—!. (The columns
=
1
is the “sine
S=—=|sin20
V2
1
real), show that
matrix
'sind
1
1
of A? If A is areal
eigenvalues give an example.
and
be even,
must
the
are
M~' AM? What
d, d?, d°, what is
If M is the
6
with
=
7
of the
eigenvectors
tridiagonal —1, 2,
matrix.)
matrix N such that N? 0. (a) Find a nonzero Ax, show that 2 must be zero. (b) If Nx (c) Prove that N (called a “nilpotent” matrix) cannot =
=
5.26
(a) Find the matrix P a (2,1, 2). is the only What (b)
aa'/a‘a
=
that
be
symmetric. the line
onto
any vector
projects
through
=
of P, and
eigenvalue
nonzero
is the
what
corresponding
eigenvector? (c) Solve
uzi;
5.27
Suppose
the first
5.28
(a) For which
Pug, starting from
=
up
of A is 7, 6 and its
row
numbers
and
c
(9, 9, 0).
=
eigenvalues A have
d does
real
i, —i. Find A.
are
eigenvalues
and
orthogonal
eigenvectors?
(b)
For which
c
and d
of the columns 5.29
If the vectors
eigenvectors
x;
can
we
orthonormal
find three
that
vectors
combinations
are
(don’t do it!)? and
x
are
in the
columns
of S, what
are
the
eigenvalues
of
a=s|; so
and
B=S;|S k
5.30
What
is the limit
as
k
—
oo
(the Markov
steady state)
of
P "| A
?
and
VELax
JF}
=,
Os
=
{>
yikes,
ludh 84
“Ob
“|
q
OF ne qv _t
sete,
%,
SH,
saenteston ne
ee
7 we
2
DO,
f
i
~
a
OT
g
ve
4 og
a
irePa ts OARIUANTT Ia o
Jl.
RIE
RLS
y Gye
g
aE
g
4
—
t Base i.
ese
i
saemnnstaiurasaueniaye
/ en
‘ a
eee
Many RHE
ey.
if
Smee
|
hb ¢C,
‘Atae
OA >O
Re,
dolar
OLD
ack?
nate
O
eA
TAMERS ALIMENT
do
a
(
pa
ay
have
hardly thought about the signs of the eigenvalues. We couldn’t A was positive before ask whether it was known to be real. Chapter 5 established that matrix has real can be eigenvalues. Now we will find a test that applied every symmetric to all without its those that A, directly computing eigenvalues, which will guarantee eigenvalues are positive. The test brings together three of the most basic ideas in the book—pivots, determinants, and eigenvalues. The signs of the eigenvalues are often crucial. For stability in differential equations, we needed negative eigenvalues so that e*’ would decay. The new and highly important problem is to recognize aminimum point. This arises throughout science and engineering and every problem of optimization. The mathematical the second problem is to move derivative test F” > 0 into n dimensions. Here are two examples:
Up
to
we
now,
|
2x + y)*—y sin y x3
F(x, y)=7+ Does
either
Remark
F(x, y)
I
or
The zero-order
They simply
answer.
Remark
2
minimum,
f (x, y)
Ox
or
the first derivatives
=4(x
+
y)
lower
Ox
7 and
=
of F
necessary vanish at x
=
y
point
f(0, 0) and f/f.
=
Ay =(
OF dy
ay
and
move
away
=
is whether
0. The
question
from
the tangency
point
the x
=
graphs y
=
y?.
+
0?
=
0 have
effect
no
on
of
chance
any
the
a
;
=——=4(x+y)— 42y
=
z
y
OF and
ycosy—snny=0O
=0. All The
=
plane
=
=
4xy
0:
(x, y) (0, 0) is a stationary poinkfor both functions. is tangent to the horizontal 7, and the surface z plane z Thus
x
+
To have
dy
4
2x?
=
condition:
a
0
OFLay
the
at
F(0,0) the graphs
must
3x? =0
—
minimum
a
give
terms
OF —
terms
raise
linear
The
have
f(x,y)
0.
go above
=
zero.
surface
z
=
is tangent
f(x, y) those planes
or
not,
F(x, y) to the as
we
|
Definite Matrices
Positive
r6§
(0, 0)
at
derivatives
The second
3
Remark
decisive:
are
2 2
Ox?
Ox?
0° f
0° F
0° F
_4
_
ar
4
=
tysiny
2cosy
—
a’ f
2
=
_4
dydx
axdy
axdy dydx 0°F
0° f
_
=
—
y
the
near
way show
contain
the
same
origin.
F has
a
must
f, they
4
Remark
but
minimum,
—x?
term
The
all the action
or
sooner
is at
it from
prevent
can
later
in F have
terms
higher-degree
they
must
if and only if f
minimum
they
the
are
has
a
for
same
exactly
in
behave
two functions
The
answer.
Since
answer.
I
minimum.
am
the
F and
same
going
to
don’t!
functions
that those
the
4, 4, 2 contain
derivatives
second
These
pull
being
F toward
a
effect
no
minimum.
global
—oo.
the
on
f(x, y),
For
question of a local In our example the
with
higher
no
(0, 0).
Every quadratic form f
ax?
=
cy’ has
2bxy +
+
a
point
stationary
at
the
terms,
: -
—
:
origin,
©
— would also be a global minimum. local minimum df/dy =0.A like a bowl, resting on the origin (Figure 6.1). z surface f(x, y) will then be shaped the to use a, y B, the only change would be If the stationary point of F is atx at a, B: second derivatives
where
0f/0x
The
=
=
=
=
|
Quadratic of F
part
This
f(x, y)
third
The to
give
Figure
a
behaves
near
derivatives
definite
6.1
f(x,y
decision.
A bowl
anda
x
B)+xy
9) Qx2
(0, 0) in the
are
0° F
arr (a,
~
saddle:
way
andy
that
F
+
y? °F 2
ay?
(x, y) behaves
into
Definite
(1)
(a, B).
near
(@, 8).
problem when the second derivatives is singular. For happens when the quadratic part
drawn
That
same
B)
fail
the
A
=
lo |
and indefinite
A=
a
true
: ne
f is allowed to points (the bowl
minimum, at all other
Definite
goes
second
second
for the condition
derivative
decides
derivatives:
determine
What
(as well a, b, and
F
not
or
conditions
on
2bxy + cy* is positive definite? (i) Ifax* We look
+
at
x
+
1,
y
=
back
Translating direction.
x
2bxy
these
f(x, y)
=
A
a
large
f(1,
is
x
=
In
the
Here
Again
the
a
derivatives
minimum
complete
the
then
f(x, y)
necessarily
in the y direction c
1
=
4
was
the
condition
to a and
for
both >
a
is
ax*+
=
0.
>
a
where
be
positive.
go up in the
f (0, y)
cy’:
=
positive.
0 and
c
negative
O
>
But
is not
that
ensure
the line
on
f
The
positive?
x
positive f(x, y)
y, because
=
a minimum? positive. Does this ensure no importance! Even though its second positive definite. Neither F nor f has a
c, that
—1.
The
definiteness.
positive
We
be controlled.
must
now
want
simplest technique
a
is to
the square: 2
Express f(x,y), using squares The
first term
But
this
on
a
f =ax*+2bxy the
can square has coefficient (ac coefficient be must
(iii) If ax?
must
0.
>
are
conditions
=
three
is easy:
cy’ is equal to a. This must 0?F/dx* > 0. The graph must
this function
2b
of b, compared
and sufficient
have
we
(like 4,4, 2)
numbers
quadratic
condition
sign of b is of are positive, 2x* + 4xy + y? is not at (0, 0) because =2-—4+1= f(1, —1)
It is the size
necessary
three
that
landc
=
But
coefficient
is no;
answer
Now
maximum.
a
>
—8. The
=
variable, the sign of
one
minimum.
a
necessary
that
c
term
1)
only
is the
y, what
and
x
0 guarantee that f(x, y) is always 2bxy can pull the graph below zero.
O and
>
cross
the
original f
our
has
ensure
0 and look
=
strictly positive
ax” + 2bxy +
and y axes. a and c. —10 overwhelms
positive
One
or
Fyy. These
f) c
means
x
x*—10xy+y’. on
minimum
0? With
positive definite, then necessarily
is
definite, because b
fix
a
positive definite,
F’, that
to
conditions
is no.
answer
is
0, where
=
Similarly,
(ii) If f(x, y) Do
cy’
F/0x?>
as
variables
of two
0° and
Fy,,
=
function
a
between
Fyy
F,,,
whether
For
is
Saddle
versus
to this:
down
problem comes correct replacement
=
=
versus Indefinite: Bowl
The the
0. When f(x, y) y only at x up), it is called positive definite.
vanish
313
Points
Minima, Maxima, and Saddle
6.1
+
2bxy
right be
—
is
zero,
b*)/a.
never
and The
+cy’
a
=atx+
negative,
when
second
term
the last
b 3)
requirement
2
+
cy? stays positive,
then
PY 7)
(2)
the square is multiplied by a then be positive. That must for
positive
definiteness
positive: +
4c
necessarily
ac
>
b*.
is that
>
0.
term
this
Positive Definite Matrices
er6
The conditionsa minimum: and 0. The right side of (2) is positive, >
Test fora c
>
0 and
b?
are
found
a
>
ac
just right. They
guarantee |
we
have
minimum:
Tae] and.
bao
|)
]°
a°F
[ar >
|
oo
f has a minimum, we just leaves ac > b? unchanged: The quadratic the signs of a, b, and c. This actually reverse for if ac > b’. The same change applies form is negative definite if and only v4 of F(x, y). a maximum Since
maximum:
Test fora
has
f
whenever
maximum
a
—
a< O-and
|
: :
-
:
the
to leave only The second term in equation (2) disappears Singular when a > 0, or negative semidefinite, first square—whichis either positive semidefinite, will at the the possibility that f can equal zero, as it when a < 0. The prefix semi allows ac
case
point For f
=
ac
—
point
if
also
occurs
one
direction
surface
In
negative. and
a
c
a
have
along
the line
x
+ y
F(x) has
dimension,
opposite signs.
f increases,
in the other
(0,0)
fi=2xy
Then
two
a
minimum
maximum,
a
or
The
b dominated
when
ofr
combination a
and
c.
It
give opposite results—in
directions It is useful
it decreases.
valley.
a
0.
=
examples,
in both
occurred
into
bowl
a
important possibility still remains:
very
This
from
f(x, y) degenerates
=
z
runs
one
dimensions,
be
may
b*:
FyxFyy > F3, fora the second derivatives fit the diagonal. The cross on below. A quadraticf(x, y)
0 and
-
6.1
/
>
ah,
,
tC
Ry
TF Ab
4e°
AA2TACAxAw~EE
from
directly
comes
a
2
symmetric
Ax in R?
x!
ax?
When
the
matrix
symmetric
variables
+cy*
it
out) is the key
it is
x1,...,
=
Ax
is
a
into
°
x'Ax
[xy
R”
in
Xn
°
XQ
4
Gii
412+
a 21
a 22
chapter. It generalizes and for studying maxima
column
a
vector
quadratic form f (x1,
pure
-
|X
4 an
xX2
) i=]
The
entries
diagonal Then
20;;x;x;. There
f
aj;
ax?
=
4n1
=Gn2
multiply x?
to
a,,
+
2ay9X1X.
+
Ann
+ Ann
ee
..~.5
j=l
into
combines
pair a;;=a,;
X?.
higher-order terms or lower-order terms—only second-order. atx tion is zero are zero. The tangent (0,..., 0), and its first derivatives a stationary point. We have to decide if x 0 is a minimum or a maximum x! Ax. point of the function f are
(5)
AjpXiX;-
LX,
|
x2. The
to
any
Xn):
noon
=
|
For
x.
orpo4
—Ain | -
(4)
whole
to the
theygo
Xn,
FE4 )
shorthand
perfect
a
b
|x y|
2bxy
product x*
the
A,
are
Mx,
+
identity (please multiply immediately to n dimensions, and
315
Points
Saddle
2 matrix!
by
This
minima.
and Maxima,
Minima,
a
The func-
no
=
=
is flat; this is or
a
saddle
=
”y in
ia,
ay
2x?
f=
4xy
+
f =2xyandA
A is 3
by
y* and
+
2x?
&,
=
; |
=
3 for
A
2x5 2
f= [x
x3) |
x.
all first
0°
derivatives
2]
minimum
at (0,0,0).
{x3
in the
approached
A is the
zero.
>
|x.|
same
way.
matrix”
“second-derivative
a stationary
At
with
entries
=
control
terms
Taylor Ata
point
a;; Then
0° F/x;0x;, so A is symmetric. F'/dx;0x;.This automatically equals aj; a minimum when the pure quadratic x" Axis positive definite. These second-order
has
F
the
near
series
Xo instead
the
F(x)
F
F(0)
=
+x‘(grad
F) +
higher sxTAy +
order
(6)
terms.
of zeros. The second 0F /0x,) is a vector (0F /0x,,..., in x! Axtake the graph up or down (or saddle). If the stationary point is at of O, F(x) and all derivatives are computed at xo. Then x changes to x xp F
=
—
right-hand
The
=
stationary point:
stationary point, grad
derivatives
on
are
[x
-1]
-1L is
2x3:
0]
2
-1
F(x;,...,X,)
function
2X9x3 +
—
—1|
O
Any
point.
saddlepoint.
—
2X1X_+
—
saddle
—>
next
goes up from definite—which
side.
section x
=
contains
the tests
to
decide
0). Equivalently, the tests decide is the main goal of the chapter.
whether
whether
x’ Ax is positive (the bowl the matrix
A is
positive
ter
Positive Definite Matrices
6
.
The
quadratic f
x” + 4xy +
=
that its coefficients
2y” has Write
positive.
are
f
as
point at the origin, despite difference of two squares.
a
definiteness
against the positive x! Ax: corresponding f for
Decide
saddle
a
or
of these
and
matrices,
the fac
write
the
out
=
o[f3} oft} The determinant .
.
.
2
Ifa
by equation
2
(b) is
in
det
(A
line is f(x,
what
along
zero;
«ft 3
matrix
symmetric AJ)
—
ef
the tests passes 0 and show that both
=
y)
0,
a >
0?
=
b’,
>
ac
eigenvalues
saddle
between
(a) (b)
F=—1+4(e* —x) —5xsiny + 6y’ at thepointx y with cos atx (x* 2x) stationary point y, =1,y=7.
maximum,
minimum,
or
for the
point
=
quadratic
following functions.
=
F
the
positive.
are
Decide
a
solve
0.
=
—
(a) For which
1
b is the matrix
numbers
|} |
A=
positive
_
definite?
LDL! when b is in the range for positive definiteness. A (b) Factor value of 5(x* + 2bxy + 9y*) (c) Find the minimum y for b in this range. if b 3? (d) What is the minimum =
—
=
.
Suppose Find
an
positive example that
(a) What
(b) Show
equal (c) IfA
Factor =
coefficients
the
3
by
that
3
has
symmetric fi
=
fo
=
is
f;
a
11.
(a)
LL’.
Az into
5 |
{A=
is
write
2x4x3 + 2x 9x3
—
square
and
not
positive
x'A2x
as
sum
of three
—
—
definite.
a
f.
(d) Are the matrices
A~!
a
Where
is
/;
? ‘| for
=
squares.
in
for
ac
that
it is
(complex b), find x¥ Ax. Now x# +
>
clx2|?
|b|* ensure
F7 TT
and
=
=
[X, | +
that
A is
ey
positive positive
A, factor
unless
and determinant.
be
can
(b/a)x|*
alxy
Ay
pivots
matrix
definite
positive
its
definiteness.
positive
4x? is positive. Find its D and L to 3, 2, 4 in f.
R* and check
+ 2RebX1x2
0 and
test
the entries
is Hermitian
>
=
3(4, + 2x7)?+
=
out
the square
a|x;|° that
Write
positive definite,
; |
(b) Complete
(c) Show
2b.
>
f; and f, ?
to
2xX1x3 4XnXx3.
single perfect
+c
definite.
positive
2X1X_
—
a
to 0?
? a, singular.
If R=
2X 1X2
—
x? + 2x3 + 11x3
quadratic f (x1, x2) into LDL", and connect
10.
is not
that
sense
A; and Az correspond
x3
+
b in the
dominate
c
the matrix
so
matrices
x? + x
The it
b’,
+
y? for
the square
quadratic f to write
f
a
as
sum
of
1
2
one
two
or
all x,
y?
b’,
>
ac
|
,
[1
10
otf | 100|A= {149
a=
+ cy” for
ax* +2bxy
=
y
0 and
a >
10)
1
a=(é | =|" “3 the
1 (after
=
=
definiteness.
positive
-1
6
x
at that point).
Aq has two positive eigenvalues? Test eigenvalues.Find an xso that x™A,x 0, so the eigenvalues
when
are
are
and
a>0O
Their
product 4,A>2 is the deterpositive or both negative. They
positive. either
ac—b*>0.
both
positive because their sum is the trace a +c > 0. b’, it is even possible to spot the appearance Looking at a and ac They turned up when we decomposed x‘ Ax into a sum of squares: must
x?
vectors
nonzero
hope to find all of signs of eigenvalues, but
and
question,
hints
some
>
be
of the
—
ax?
Sum ofsquares S
+
ee
ST
eens
b°
+ey=a(x+t-y)
5
+
ebxy
b*)/a
ac—b?
pivots.
(1)
y*.
a
a
pivots for a 2 by 2 matrix. For larger x! Ax stays positive the pivots still give a simple test for positive definiteness: matrices are when n independent squares multiplied by positive pivots. ‘The two parts of this book were remark. linked by the chapter One more preliminary we Therefore ask what part determinants on determinants. play. It is not enough to c 1 —1 and b 0, then det A of A is positive. If a require that the determinant test is applied not only to A itself, A —/J but negative definite. The determinant a in the upper left-hand corner. b* > 0, but also to the 1 by 1 submatrix giving ac of A: The natural generalization will involve all n of the upper left submatrices coefficients
Those
and
a
(ac
—
are
the
=
=
=
=
=
=
—
A; =[ay],
Here
A,.=
a
an
a2;
a22
»
theorem
is the main
on
Azg=
Qi,
a2
ay
1a
G22
G3)|,...,
43,
432
433
and
positive definiteness,
a
A,=A.
reasonably
detailed
proof:
“positive determinants
.
Re
Proof
Condition
eigenvalue
will be
I defines
positive definite
positive
definite
matrix.
en
Our
first
step shows
positive: If
A
a
eal
Ax=Ax,
matrix
has
then
x'Ax
=x!ax
positive eigenvalues,
since
=Al[x|l?. x' Ax
>
0.
that
each
%
Now
go in the
we
of the
Cc,Ax,
=
Ax
=
=
4;
ee
0, then
>
AXy
Cr
x/x;
orthogonality x
every
have
we
have
(cix{Trt
oh
CiAyHe
+
TY(cya COAn.
full
0
for
set
of
Then
1,
=
t+CyAnXn)
eee
nx,
(2)
that x'Ax
shows
equation (2)
x}x;
0, and the normalization
=
a
>
CrAnXn-
CyA4X4ie
=
x'Ax
to prove
*
Ax
If
0,
>
(not just the
x
orthonormal
Because
If all 4;
eigenvectors).Since symmetric matrices + CnXn. eigenvectors, any x is a combination cyx1 +++
vector
every
direction.
other
319
Definiteness
Tests for Positive
6.2
II
condition
0. Thus
>
implies
|
condition
I. |
condition
I holds,
If eigenvalues.
the
But
positive.
have
also
at all nonzero
look
to
deal
A; is positive definite. is their
determinant
If
condition
last
Its
product,
II
holds,
with
[xp 0}
=
III:
I holds,
whose
vectors
x'Ax Thus
condition
if condition
And
we
does
so
n
—
The
know
already
we
every upper left k components
| :A
the
eigenvalues (not
A;,. The trick is
submatrix
to
zero:
xpApxy
=
these
that
are
product of eigenvalues are
of A is the
determinant
A;!)
same
0.
>
must
be
Its
positive.
are all upper left determinants positive. IV: Accordingto Section 4.4, the kth does condition
so
pivot pivots.
so
are all positive, so are the dj, is the ratio of det A; to det A,_,. If the determinants If condition IV holds, so does condition I: We are given positive pivots, and must > that x'Ax 0. This is what we did in the 2 by 2 case, deduce by completing the
The pivots were To the numbers the squares. outside square. for symmetric matrices of any size, we go back to elimination
on
that
happens
symmetric matrix:
a
LDL’.
A=
Positive
pivots 2,
5, and #:
11 Tf.
r
2
—1
{| I want
to
O
-t
split x’
oolf
-l}/=]/-;
2
A=|{-l
2]
0-5
|
x'LDL'
Ax into
Ifx=
is
a
sum
~3
1
860C
the
Ax
=
(L'x)'D(L'x)
Those positive pivots
in D
dition IV implies condition It is beautiful
Elimination
removes
that x,
=2
»
from
1.
all later
SU sw
—
—
Ll, coefficients: 2
2
3
4
(u 5°) 5 (. 50) 4) +
—
multiply perfect I, and the proof
elimination
|v
=
LY wy
2
x?
vj
2, 99 3, and 4 as
pivots
Tu
fw
07
(O with
0
x:
thenL'x=|0
of squares
—2]|=ZDL".
1
s{ 10
— |v],
07
-4
5
1
|
x! Ax
jf. 0
1
7
So
how
see
squares is
to
make x’ Ax positive. Thus
con=
complete.
completing the square equations. Similarly, the
and
+
—
are
actually
first square
the
accounts
same.
for
er6
Definite Matrices
Positive
in x! Ax
all terms
£;;
| You
inside
are
involving
of squares and
sum
has the
—2
5
the numbers
see
can
The
x,.
be
inside
As
pivots
outside.
The
multipliers example. however,
the squares in the from the examples,
we know positive. to look only at the diagonal entries. itis far from sufficient with the eigenvalues. For a typical positive The pivots d; are not to be confused sets of positive numbers. In our 3 by definite matrix, they are two completely different test is the easiest: 3 example, probably the determinant
Every diagonal entry
Determinant
must
a;;
test
detA;
pivots are the ratios dj longest computation. For this The
Even
for theoretical
test
know
we
to
detA.=3,
Each
and Least
detA3;=detA
the A’s
Ordinarily the eigenvalue are all positive: Ag =2,
4.
=
7
=
J2, to
apply
purposes.
Matrices
Definite
Positive
7. d,
=
Ay=2-
it is the hardest
though
useful
A
test
Eigenvalue
_
2, d,
=
2,
=
is the
test
J2.
Az=2+
single matrix, eigenvalues test is enough by itself. a
be the most
can
Squares
for
It is already close. We positive definiteness. connected (Chapter 4), and positive definite matrices to pivots (Chapter 1), determinants eigenvalues (Chapter 5). Now we see them in the least-squares problems of Chapter 3, coming from the rectangular matrices of Chapter 2. b. It The rectangular matrix will be R and the least-squares problem will be Rx has m equations with m > n (square systems are included). The least-squares choice x is A R'R is not only symmetric but positive R*b. That matrix the solution of R' Rx definite, as we now show—provided that the n columns of R are linearly independent: I
hope
you
will
allow
more
one
test
=
=
=
columnssuch that There is amatrixRwithindependent -—(V)
A=
R™R.
(Rx)'(Rx). This squared length ||Rx||* is key is to recognize x' Ax as x'R'Rx then Rx 0), because R has independent columns. (If x is nonzero positive (unless x is nonzero.) Thus x'R' Rx > 0 and R'R is positive definite. A R'R. We have almost done this twice already: It remains to find an R for which The
=
=
=
This
Cholesky decomposition
A third There
possibility
is
many other R by a matrix
R™IR
are
R
pivotssplit
has the
=
Q/AQ",
choices,
Q with
between
R=
JDL’.
L and
square or orthonormal
L'.
SotakeR=VJAQ™.
symmetric positive definite square rectangular, and we can see why. If columns,
another
choice.
Applications of positive duction to Applied Mathematics
definite
matrices
A.
So take
the
Therefore QR is
=
evenly
QAQ™=(QVA)(VAQ").
A=
Eigenvalues
any
LDL™=(LVD)(J/DL").
A=
Elimination
and also
the
are new
then
(QR)'(QR)
=
root
you
(3) of A.
multiply
R'O'OR
=
developed in my earlier book Introand Scientific Applied Mathematics
Tests for Positive
6.2
321
Definiteness
AMx arises that Ax We mention Computing (see www.wellesleycambridge.com). constantly in engineering analysis. If A and M are positive definite, this generalized matrix for the finite Ax, and A > 0. M is amass problem is parallel to the familiar Ax =
=
element
Semidefinite The tests
Matrices will relax
for semidefiniteness The main
to appear.
zeros
6.4.
in Section
method
is to
point
x' Ax the
see
0,4
>
0,d
>
the
with
analogies
>
0, and det
positive
0,
>
definite
to allow case.
cipal submatrices hav
The
r, there
only two
noveltyis
The
QAQ'
=
A’s and
r nonzero
are
Note
that
leads
perfect
r
II’
condition
exchange
row
with
the
-1
-1|
comes
‘i
2
2
A=|-l
—1
2
-l
—1
in
is
all the
to
zero:
; 4
—
is
exchange
semidefinite.
negative
to maintain
positive semidefinite,
by
symmetty.
all five tests:
|
(x1 x2)? + (x (I) x Ax x3)? + 2 —¥3)7 0,A42 A3 =3 (a zero UII’) The eigenvalues are A, are (III’) det A 0 and smaller determinants positive. =
principal submatrices, not could not distinguish between
we
all
were
rank
=
Otherwise,
column
same
=
applies
and
positive semidefinite,
x'QAQ'x y'Ay. If A has y’Ay Ayyz+--+ +A,y?.
=
squares
in the upper left-hand corner. matrices whose upper left determinants
is
x'Ax
to
those
i Hl A
A
diagonalization
—
=
=
=O
(zero if
xy
=
x2
=
x3).
eigenvalue).
=
|
2
A=]{-1
qv)
-1
(V’)
A
=
R'R
with
0
90.
3 3]
-1]>]9
2
}-1
[2
-1]
-1
2]
|0
-$
dependent columns
2 >]o
§]
[0
0
0
2
0}
0
OJ
(missing pivon).
in R:
[
r
2-1 —|
2
-1)7
1
-li=i
O
Of
-1
1
-—l1]/]—-1
1
0 1
-17 Oj;
(1,1, 1) in the nullspace.
Positive
ter6
Definite Matrices
for semidefiniteness
The conditions
Remark
I-V for definiteness
conditions
definite
by
the
trick:
following
Then let
A + ¢/.
matrix
could
also be deduced Add
a
small
from
multiple
original
of the
identity,
Since the determinants
approach giving a positive on ¢, they will be positive and eigenvalues depend continuously 0 they must still be nonnegative. At « moment. €
the
zero.
until
the
very
last
use
that
=
My
often
class
about unsymmetric positive definite matrices. definition is that the symmetric part S(A+ A‘)
One reasonable
term.
definite.
That
If Ax
has A
x# Ax
in
(ReA)x™x> 0,
=
The
eigenvalues
E|
should
be
positive.
But
positive it is not
is indefinite.
AxEy.
=
so
are
never
that Red
0.
>
nm Dimensions
x’ Ax,
we
ellipsoid
in
and its
matrix
dimensions,
of
AxEx and xHAEx
Throughout this book, geometry duced a plane. The system Ax a perpendicular projection. The definite
the
S(A+Al)=
O but
>
=
5x"(A+ A®)x
Adding,
Ellipsoids
i |
Ax, then
=
the real parts
that
guarantees
necessary: A=
I
asks
and
an
has
finally get n
is
1. This
algebra. A of planes.
intersection
determinant
to consider
equation
the matrix
helped gives an
D
=
is the volume
of
figure that
is curved.
a
box.
a
linear
equation
pro-
Least
squares gives Now, for a positive
It is
an
ellipse
in two
dimensions.
x'
Ax
=
1. If A is the
identity matrix, this simplifies “unit sphere” in R". If A 4/,
is the
equation of the 1. Instead of + 4x? the sphere gets smaller. The equation changes to 4x7 + The center is at the origin, because it goes through (G,O,...,0). if x (1,0,...,0), —x. The important step is to go from x' Ax 1, so does the opposite vector satisfies the identity matrix to a diagonal matrix:
tox?
+
x3 +.-+--+x?
=
=
=
---
=
|
.
Ellipsoid
= For
A
4 I
=
,
the
equation
is
x"Ax
i
=
4x7+ x7 + 4x3
1.
=
9
unequal (and positive!) the sphere changes to an ellipsoid. is x One solution (4,0,0) along the first axis. Another is x (0, 1, 0). The (0, 0, 3). It is like a football or a rugby ball, but major axis has the farthest point x make them not quite—those are closer to x? + x3 + 4x3= 1. The two equal coefficients in the x,-x2 circular plane, and much easier to throw! the final step, to allow nonzeros Now comes away from the diagonal of A. Since
the entries
are
=
=
=
P ;|
A=
and
x'Ax
=
5u?
+ 8uv
+
5v’
=
1. That
ellipse
is centered
at
u
=
v
=
0,
off-diagonal 4s leave the matrix positive definite, but axes they rotate the ellipse—its axes no longer line up with the coordinate (Figure 6.2). We will show that the axes of the ellipse point toward the eigenvectors of A. Because A Al, those eigenvectors and axes are orthogonal. The major axis of the ellipse corresponds to the smallest eigenvalue of A. but the
=
axes
are
not
so
clear.
The
323
Definiteness
Tests for Positive
6.2
U
A
tt
ca
ae
wen
h
—
| t
2
1
~
i
Cy
6.2
Figure
ellipse x! Ax
The
To locate
the
ellipse
=
of the
axes
+ 8uv
4;
compute
we
(1, —1)/ J/2 and (1, 1) /./2. Those up with the
5u?
are
5v*
at 45°
A,
A
=
from
unit
9. The
=
ww
eigenvectors are and they are lined
axes,
is
axes.
"
1] and
Su*
squares
9
dA =
+
outside
are
the square
completing
The first square axis is one-third as
8uv+v?
=
2 v2 5) +9(2 (=.-
the to
v
u
are
the ee with squares. 5(u + 24y)°+ 2v’,
By
equals 1 at (1/./2, —1/ 2) at the long, since we need (4)°to cancel
1:
=
2
v
u
x' Ax
rewrite
to
2
|
New
principal
angles with the u-v the ellipse properly
see
«oe.
4
(1.
1 and its
=
1 and
=
The way to
ellipse.
+
Mery
oy
_
|
—
ch
‘|
)
+
V2
inside.
This
=—1,
(4)
is different
pivots outside.
the
end of the
major
the 9.
1 can be simplified in the same Any ellipsoid x'Ax way. We straightened the picture by rotating the Q AQ". diagonalize A the change to y Q'x produces a sum of squares: =
=
The minor
axl,
The
key step
is to
Algebraically,
axes.
=
x’
Ax
=
(x' Q)A(Q'x)
=
y
Ay
=
Ay?
+++
+
Any,
1.
=
(5)
eigenvector with the smallest eigenvalue. The other axes are along the other eigenvectors. Their lengths are 1/./Ay, 1/./A,. Notice that the 4’s must be positive—the matrix must be positive definite— a these square in trouble. An indefinite 1 describes or roots are equation y? 9y? hyperbola and not an ellipse. A hyperbola is a cross-section through a saddle, and an through a bowl. ellipse is a cross-section The change from x to y Q'x rotates the axes of the space, to match the axes of we can see the equation that it is an ellipsoid, because the ellipsoid. In the y variables so manageable: becomes The
major
axis
has y;
=
1/,/A; along
the
...,
—
=
=
Positive
er6
Matrices
Definite
The Law of Inertia and
matrices
eigenvalues,
become
simpler by elementary operations. The essential properties thing stay unchanged. When from another, the row is subtracted row a multiple of one space, nullspace, rank, and the same. For eigenvalues, the basic operation was a similarity all remain determinant A — S~!AS (or A > transformation M~'AM),. The eigenvalues are unchanged (and also the Jordan form). Now we ask the same question for symmetric matrices: What are the elementary operations and their invariants for x' Ax? A new vector The basic operation on a quadratic form is to change variables. y is related to x by some nonsingular matrix, x Cy. The quadratic form becomes y'C'ACy. This shows the fundamental operation on A: elimination
For
which
is to know
of the matrix
=
The
of
symmetry
A is
since
preserved,
other properties are shared
What
law
A-—>C'AC
transformation
Congruence
by
C'AC
remains
C’AC?
A and
(6)
forsomenonsingularC. symmetric.
The
is
answer
The real
question is, given by Sylvester’s
of inertia.
the"same has. CTAC umber ofpositive eigenvalu cigenvalues, negative oe Ae oe as andZeroeigenvalues a
SE
a
The signs of the eigenvalues (and not the In the proof, we transformation. congruence
C'AC we
nonsingular, and there work with the nonsingular A
is also
can
are
eigenvalues themselves) will suppose
eigenvalues
zero
no
that
+ «J and A
—
€J,
A is
to worry
preserved by nonsingular. Then about. (Otherwise are
and at the end let
0.)
—
e
from
trick
topology. Suppose C is linked to an orthogonal chain of nonsingular matrices = matrix 0 and? 1, C(t). Att Q by a continuous AC(t) will change gradually, C(0) = C and C(1) = Q. Then the eigenvalues of C(t)' to the eigenvalues of Q'AQ. as t goes from 0 to 1, from the eigenvalues of C'AC Because C(t) is never singular, none of these eigenvalues can touch zero (not to mention the number of eigenvalues to the right of zero, and the number cross over it!). Therefore C'AC as for Q' AQ.And A has exactly the same eigenvalues for to the left, is the same
Proof
a
to borrow
We want
=
as
the similar One
O-'AQ
matrix
good
choice
for
Q'AQ.
=
Q
is to
apply
Gram—Schmidt
to
the
columns
of C. Then
tQ + (1 —t)QR. The family C(t) goes QR, and the chain of matrices is C(t) slowly through Gram—Schmidt, from QR to Q. It is invertible, because Q is invertible and the triangular factor tf + (1 t)K has positive diagonal. That ends the proof. C
=
=
—
J. Then CTAC Suppose A eigenvalues, confirming the =
If
A=
F 4]then det
Then C'AC
must
C'C is positive definite.
one
=
Both
J and
C'C have
law of inertia.
CTAC has
C'TAC have
=
(det
negative determinant:
a
C')(det A)(det C)
positive
and
one
=
—(detC)*
0 (and
therefore
are
—4]
s
—4
—4
5
t
and
positive definite.
B=1{13 |
;
0
O
¢t
4
4
ft ;
*
x
|
=___
andb=
__
definite!
positive definite)?
3
(2)? (=)
lengthsa
If it
I.
=
25. -You may have seen the equation for an ellipse as + = as Ax? + A2y* 1? The when the equation is written
with
).
,
all the ’s.
diagonal matrix with positivediagonal entries is positive definite. might not be positive symmetric matrix with a positive determinant
s
half-axes
,
is true:
statements
For
than
and would
eigenvalues diagonal.
Every positive definite matrix is invertible. The only positive definite projection matrix A
on
0:
>
cannot
(a) (b) (c) (d)
A
Ax
negative number)
|
the main
on
x'
a
worse,
not positive when (x1,%2,x%3)=(
have
would
hasa
—a,;I
1s
symmetric
a
to have
even
|
|x2|
|
diagonal entry a,;;
were,
23.
|
x3
fx
(or
zero
b
=
1, What
ellipse 9x?
+
are
16y?
a =
and b 1 has
-dests
6.2
26
ellipse x? + xy + y? eigenvalues of the corresponding
the
27
the tilted
Draw
With
positive pivots in D, the (Square roots of the pivots give factorization A CC", which
is
28.
In the
Cholesky
pivots
are
I
factorization
the
on
2
‘9
The
E| |
Ix y]
side is
The left-hand The
30.
second
Without
For the semidefinite
CC™,with
g
95
(lower triangular)
Cholesky
find
the square for
C.
of the
roots
“
ry,
and
the
1 1 2 2). p12 7)
A=|1
that x'Ax
x'LDL'x:
A=
LDL"
y]
bi | F “wal6 a sh
the
means
=
(ac
Test
square! —sin@
@
cos
8
L/D,
C=
ax?-+2bxy-+cy”.The right-hand A
4
A=| |
From
C
L4/D yields
=
LU”:
8
sin 0
with
2, b
=
cos
| E | ae
ee
a a(x+2y)
side is
a
O
2
=
of A. (a) the determinant (c) the eigenvectors of A.
31.
A=
2
from
axes
L/DJ/DL’.
LDL" becomes
=
QO
=|"
pivot completes
multiplying
A.
2
factorization
symmetric
find
0
A=1!101 }0 29.
A
“symmetrized
of C. Find
diagonal
of its
half-lengths
./D./D.)ThenC
D=
| |
c=
A.
329
Definiteness
-
0
3
From
1 and find the
=
factorization
=
for Positive
4,
=
sin
0
@
cos
|
c
y?.
10.
=
find
(b) the eigenvalues of A. (d) areason why A is symmetric positive definite.
matrices
fr
-1)
2-1 A=|-1
x'
write 32.
Axas
a
any three
Apply
—1]
2 -1
p-1 sum
of two
tests
to’ each
I
33.
For
C
whether
they
E4
=
and
A
=
Test
pivots on
the
of
a
matrix
1
1
tridiagonal
one
as
square.
1
1
1
O
212 B=1/1
and
2
i ‘|:
confirm
that
of
nonsingular
to
construct
all greater than —1, 2, —1 matrices. are
(rank),
of the matrices
signs as A. Construct a chain orthogonal Q. Why is it impossible the identity matrix? If the
x! Bx
and
1
Ij,
1
2|
positive definite, positive semidefinite,
are
same
34.
1]
111
squares
A=j{1
decide
1
B=]1
and
(ank2)
2)
1
to
111
1,
are
a
the
C'AC
has
or
indefinite.
eigenvalues
of the
matrices
C(t) linking C to nonsingular chain linking C
eigenvalues
all greater
than
an
to
1?
er6
35.
Matrices
Definite
Positive
Use the
pivots
A
of
whether
to decide
J
—
A has 3
O
3
95
7
0
7
75.
saad
36.
|,
eigenvectors algebraic proof of the law of inertia starts with the orthonormal > to the orthonormal Of A 0, A; eigenvalues eigencorresponding and X1,...,;Xp, < to 0. C'AC of vectors corresponding eigenvalues pu; y1,..., Yq the
that
To prove
that
assume
combination
(b)
deduce
gives
+ apX_p =D,
t-++
;
and b’s
the a’s
p-+gq
2
four vectors
(c) Which Problems 3.
Find
3-5
ask
the SVD
give SVD
for the from
8|
the
Fibonacci 4.
Use
the
SVD
part of the
5.
Compute A’
A and AA’,
MATLAB
Problems 6.
6—13
Suppose
the
Construct u
=
bring
the
out
u),...,uU,
and
the
+(2,2,
8.
Find
9,
Explain
matrix
U
f
v; into u;
rank
1 that
1). Its only singular value
UXV!
A
expresses A
as
=
OjU,V,
of the
page
graphically. for
eigenvectors,
SVD. bases
give Avy
to
Av
has
=
=
for R”.
AU,
u1,...,
for
12u
v
Construct =
(1,
=
1,1,1)
__
w1,...,
+++
of
r
Wy
of
matrices
rank-1
+ O,U;,U
lengths
T re
o1,...,
in
the
Up.
=
sum T
course
A.
is 0)
a
the
on
:
orthonormal
UXV! if A has orthogonal columns how
4
recover
VU, are
vj,...,
Java
QO
underlying ideas each
with
XV"to
.
and unit
1
1
oju;:
0
1
v; and v2
eigenvalues
=
F1
eigshow (or
vectors
:
N(A),C(A‘), N(A')?
C(A),
ATA and Av; A=
same
and their
matrices
A that transforms
matrix 7.
the three
2.
demo
A=
Multiply
of rank
matrix
to find
web.mit.edu/18.06)
for
v1, v2 of
eigenvectors
E v|
0
bases
orthonormal
of matrices
q
0}
op.
equation (3):
and
ter 6
10.
11.
Positive
Definite
Matrices
is
Suppose A eigenvalues
2
a
};
are
Suppose
A is
possible
to
2
by
3 and A
=
invertible
produce
a
matrix
with
—2, what
are
symmetric =
(with 0, > singular matrix
.
Find
12.
(a) If A changes to 4A, what (b) What is the SVD for A‘
is the
Find
the SVD
and the
pseudoinverse 0*
15.
Find
the SVD
and the
pseudoinverse VX*U!
just
1],
matrix Q
o 2
17.
Diagonalize A’ to find its positive decomposition A QS:
A
of the
by
m
n
|c
as
change: v2
|
.
matrix.
zero
of
and c=
|( ; 5
{95:
columns, what is Ot?
has orthonormal
If
n
use
B=
16.
by
V do not
matrix
a
% + J?
14.
m
small
as
in the SVD?
change
doesn’t
an
If its
T
|
ur
Why
11
U and 0;
13.
A={1
up.
and for A~'?
for A + J
the SVD
and
u,
U, ©, and V"?
Ag. Hint:
fa
A=
Ag from
eigenvectors
0). Change A by
>
o2
unit
definite
S
root
square
V=1/*V"
=
and its
polar
=
1
10
| 4
A=
|
18.
What
is the
Ax={1
| 1 You
can compute At, that is in the
solution
or find row
the
0
O|
fc]
0
O|
|D|
1
1
| E
|
general
of A. This
space
8
x*
solution
minimum-length least-squares ‘1
0
|
/10
A*b
=
(a) If A (b) If A In both
has has
{2}.
=
2
7
solution
ATAX
to
problem
|
21.
Is
A
22.
Removing
=
(AB)*
B*
=
that xt
into its
of A and the
fits the best
plane 1).
the
C + Dt + Ez
and ATAxt
space,
=
AT™D.
polar decomposition Q'S’. for
true
r
row
I believe
pseudoinverses?
A
=
where
LU,
rows of U span the
the
row
space.
at the front?
Why
r
not.
columns of Z span the Then A* has the explicit
U"(U U")}(L™L)“"L’.
is A*b in the row that xt = ATD satisfies
Explain why AA* What
A*b is in the
of U leaves
Why 23.
=
reverse
At always rows
zero
column space formula
verify
UXV'
Split
ATb and choose
=
(A7A)7! A?is A™. independent columns, its left-inverse its right-inverse A'(AA‘)7! is At. independent rows,
cases,
20.
following?
fol
tob =Oandalsob=2att=z=O(andb=2att=z= 19.
to the
fundamental
and
space with the normal
A*A
subspaces
are
do
U"
equation
projection they project
as
does
ATAAtb
=
A‘b,
so
it should?
matrices onto?
(and therefore
symmetric).
6.4
>
in the x1-x,
RAE
NCIPLES
6.4 In this section
be
given
by
a
as
we
-tion
linear equations.
for the first time from
escape
to Ax
the solution
b
=
341
Mfinimum Principles
Ax
or
ix.
=
Instead, the
plane. L(x, y) =
The unknow.
vector
x
will be det
minimum
principle. astonishing how many fact that heavy liquids
minimum
principle. is a consequence minimizing their potential energy. And when you sit on a chair or lie on a bed, the springs adjust themselves A straw looks bent because is minimized. so that the energy in a glass of water light reaches highbrow examples: your eye as quickly as possible. Certainly there are more of total energy.” of structural The fundamental principle engineering is the minimization We have to say immediately that these “energies” are nothing but positive definite of a quadratic is linear. We get back to the quadratic functions. And the derivative Our first goal in linear familiar to zero. equations, when we set the first derivatives is to find the minimum this section b, and the principle that is equivalent to Ax Xx. We willbe doing in finite dimensions minimization exactly what equivalentto Ax 0” the theory of optimization does in a continuous problem, where “first derivatives equation. In every problem, we are free to solve the linear equation gives a differential the quadratic. or minimize The first step is straightforward: We want to find the “parabola” P(x) whose miniD. If A is just a scalar, that is easy to do: when Ax mum occurs It is
the
Just
natural
sink
laws
be
can
expressed
as
of
to the bottom
=
=
=
=
1
The
of
graph
P(x)
A~'b point x upward (Figure 6.4). paraboloid). To assure be positive definite ! This
=
=
5
will be In
a
has
minimum
of
slope
zero
if A is
minimum dimensions
more
a
dP
Ax —bx
this
P(x),
when
Then
positive. parabola
not
a
turns
maximum
Ax —-b=0.
7x the
parabola P(x) opens into a parabolic bowl (a or a saddle point, A must
Minimum at
r
Prin
Figure
*
T am
6.4
The
convinced
=
=
A-1b 7
—36°/A
graph
that
2
of
plants
a
positive quadratic P(x)
and
people
also
develop
Prain is
a
parabolic
be
new
—
367 Ab
bowl.
with minimum
in accordance
Perhaps civilization is based on a law of least action. There must principles) to be found in the social sciences and life sciences.
=
laws
principles.
(and minimum
10.
11.
is
Suppose A eigenvalues
3 and A)
=
matrix
with
—2, what
are
symmetric
is invertible (with produce a singular
to
possible
2
by
4,
are
A
Suppose
2
a
=
>
0,
A
Find Ap from
A
U and
| vy
5 2
12.
(a) If A changes to 4A, what is the change in the SVD? (b) What is the SVD for A‘ and for A~!?
13.
Why
doesn’t
14.
Find
the SVD
and the
pseudoinverse OT
15.
Find
the SVD
and the
pseudoinverse VX*
A=[1 m
for A + J
the SVD
just
matrix Q
of the
has orthonormal
17.
A Diagonalize A’ to find its positive definite decomposition A QS:
n
by
m
;
If
by
matrix
a
as
change: v2
|
;
n
matrix.
zero
U! of
|
16.
an
small
as
& + J?
use
1], B=
11
If its
T
Lu ur |
=
by
V do not
O07}
.
up.
U, X, and V"?
Ao. Hint:
matrix
uv, and
eigenvectors
0). Change
>
o2
unit
5
S
root
square
Qt?
is
what
columns,
c= 5
and
VX!/*V"
=
and its
polar
=
6
4 18.
is the
What
9 |
Ax=]|]1 1
can compute
At,
that is in the
solution
find the
or row
0
O|
fc
0
O}
|D|
1
1]
[Ez
general
Atb
=
to
the
following?
0 }2}.
=
2
solution
of A. This
space
x*
solution
minimum-length least-squares (1
You
8]
/190 | O
A’AX
to
problem
A‘b and choose
=
fits the best
the
C + Dt + Ez
plane
tob=OQandalsob=2att=z=O(andb=2att=z=1). 19.
(ATA)~!ATis At. (a) If A has independent columns, its left-inverse its (b) If A has independent rows, right-inverse A'(AA‘)~! is At. In both
20.
Split
21.
Is
22.
Removing
A
of A and the
that x*
in the row AD satisfies
Explain why AAT What
fundamental
row
ATAx+
and
space,
=
A‘D.
I believe
pseudoinverses?
A
where
LU,
=
rows of U span the
the
row
space.
the front?
Why
r
not.
columns
of
L span
the
Then A* has the explicit
U*)- (LPL) 'L".
is Atb =
for
true
r
is in the
polar decomposition QS’.
of U leaves
rows
zero
Atb
=
reverse
Bt At always
=
column space formula UT
23.
that x+
UXV? into its
=
(AB)*
Why
verify
cases,
and
space with the normal
ATA
subspaces
are
do
U’
at
equation projection they project
as
does
A'AA*b
=
A"D, so
it should?
matrices onto?
(and therefore
symmetric).
In this section be
given
as
x will not escape for the first time from linear equations. The unknown = = will be x solution to Ax b or Ax determined Ax. Instead, the vector
we
the
minimum
principle. It is astonishing how many Just the fact that heavy liquids potential energy. And when you
by
a
minimum
principles. of minimizing their sink to the bottom is a consequence sit on a chair or lie on a bed, the springs adjust themselves so that the energy is minimized. looks bent because A straw in a glass of water light reaches highbrow examples: your eye as quickly as possible. Certainly there are more of total energy.* The fundamental principle of structural engineering is the minimization We have to say immediately that these “energies” are nothing but positive definite We get back to the of a quadratic is linear. quadratic functions. And the derivative Our first goal in to zero. linear equations, when we set the first derivatives familiar this section is to find the minimum b, and the principle that is equivalent to Ax dx. We will be doing in finite dimensions minimization exactly what equivalentto Ax 0” the theory of optimization does in a,continuousproblem, where “first derivatives gives a differential equation. In every problem, we are free to solve the linear equation the quadratic. or minimize The first step is straightforward: We want to find the “parabola” P(x) whose minimum when Ax occurs b. If A is just a scalar, that is easy to do: natural
laws
be
can
expressed
as
=
=
=
=
The
of
graph
P(x)
A~'b point x upward (Figure 6.4). paraboloid). To assure be positive definite ! This
=
will be In
dP
1»Ax 5
=
—
if
dimensions
slope
zero
when
A is positive.Then this
of P(x),
minimum
s
has
minimum
a
more
a
bx
turns
parabola
not
a
maximum
Ax —b=0.
in the
parabola P(x) opens into a parabolic bowl (a or a saddle point, A must
:
positive balP(x)= symmetric oH‘IfAiisatthe
ene = AtthatpointPain
minimum pointwhereX= A,
P(x)
$Azx? br
=
—
Minimum at
r
Pmin
Figure6.4
*
T am
The
that
civilization to be
of
A71b
a
on
positive quadratic P(x)
a
law of least
in the social
Prin
Ly
plants and people also develop
is based
found
=
—5b7/A
graph
convinced
Perhaps principles)
=
7
sciences
action.
is
a
parabolic
must
and life sciences.
be
new
—sb'Ab
bowl.
with minimum
in accordance
There
=
laws
principles.
(and minimum
Positive Definite Matrices
ter6
Proof
Suppose Ax
b. For any vector
=
1
PQ)
P(y)-
5y Ay
=
]
y'b
_
show
we
y,
5x
=5
This
509—*)
Minimize the
Linear Ax
=
P(x)
x?
=
—
/dx,
2x,
=
—
+
x,x2
derivatives
partial
OP
AY
negative since A is positive points P(y) is larger than P(x),
all other
set
T
x3
This
to zero.
gives
this
algebra recognizes b gives the minimum.
it is
Ax
Substitute
—
x
In
Pain
applications,+x" Axis
tox automatically20es
=
5
= 0.
At S
x.
—4i
Xj}
knows
is to
b,
_
that
immediately 1
—(A'b)'b
(A~'b)'A(A~'D)
energy and —x"b where the total energy
A~'b,
x
A~'b into P(x):
=
the internal
=
—
approach, by calculus,
x™b, and
1 value
Minimum
at
occurs
y
b:
=
$x’ Ax
as
if
only
zero
2
P(x)
Ax)
=
(1)
—-b, =0
x,
(set b
The usual
bx.
—
P(x):
>
x).
—
the minimum
so
b,x,
—
Ax
5x
definite—and
be
can’t
+
—
i =
1
Ay y'Ax
y
P(y)
Ax +x'b
—
1
that
is
=
the external
P(x)
is
a
5b AD.
—
(3)
The system
work.
minimum.
Minimizing with Constraints d on top of the minimization problem. Many applications add extra equations Cx We minimize to the extra These equations are constraints. P(x) subject requirement b andalso @ extraconstraints d. Cx d. Usually x can’t satisfy n equations Ax Cx unknowns. We have too many equations and we need £ more unknowns Those new ye are called Lagrange multipliers. yj,..., They build the into a function constraint L(x, y). This was the brilliant insight of Lagrange: =
=
=
=
1
L(x, y) That
term
in L is chosen
of L
the derivatives
yl (Cx
P(x) +
=
to
so
that
have
n
exactly zero,
we
Constrained first
represent.
involve
equations
(which only
allows
x
when
Cx
Ax
se
dL
=
=
d. When
=
+ € unknowns
Cx
x
set
we
and y:
unknowns
y. You
(4)
—d
might
well
the constrained y tell how much d) exceeds the unconstrained Pyin
=
Pe ymin
$x?+ }xj. Its
=
=
Cx
ask what
minimum
(allowing
they Pc /min
all
x):
1
mintmum
Suppose P(x1, x2) strained problem has x;
back
yl.
—
Ax+CTy=b
|
Sensitivity of
just gives
—x'b+x'cly
0 brings /dy equations for n
+ £
mysterious
unknowns”
“dual
Those
the
=
gL/ax=0: aL/dy =0:
minimization The
d)
~
n
0 and x2
2, A
=
=
0.
=
Poin +
=
value
smallest
I, and b
=
0. So
5 yi(CA'!b—d)> is
certainly Pmin 0. This the minimizing equation =
(5)
Phin.
uncon-
Ax
=
D
Minimum
6.4
add
Now
constraint
one
old minimizer
The
Sx}+ 5x4+
=
x;
x2
y(cyX1 +
0 is not
dL/ax, 0L/0X2 OL /dy
Substituting
=
x;
and x.
—c,y
£
+
=
=
0
Xp +
CQY
=
0
Cy X1 -
The
=
of P 1
Po/min This
equals
syd as
—
line 2x; is x solution
x
ona
=
and
—
x2
in
—x?
5.
=
d,
(6)
equation gives —c/y cy —
cod
+2
——! 9
=
+402
d.
point:
dd?
1
(8)
(c? c3)’ 20+)
linear
=
7)7
;
solution
at that
reached
+
equation (5),
problem the We are looking
X27 a
==
Lctd?+cd* 2
~
272
what =
0
since
algebra
b
0 and
Pain
0.
=
has solved, if the constraint
for the closest
(2, —1). We expect this shortest
=
vector
point to be
x
(0, 0)
to
this
on
perpendicular
line. to the
keeps The
line,
right.
are
we
predicted
6.5 shows
Figure
5xTxis 1
+ ~x? 2
=
=
cid x (2
ci+c
minimum
constrained
CoX2
into the third
=
>
=
=0
cy
—cyy
plane. L(x, y)
The
xy +
=
line in the x,-x2
a
on
x
0
=
—d
Solution
puts
Lagrangian function 2+ 1 partial derivatives:
the line.
on
d) hasn
—
Cox.
d. This
+ cyx2*=
c)x, =
341
Principles
P
(a2
=
ad
+23)
LTT
ed
Figure
6.5
Least
Squares Again
[ will
all
x
on
the constraint
line 2x,
—
x2
5.
=
big application is least squares. The best x is the vector that minithe squared error E? ||Ax b||*. This is a quadratic and it fits our framework! highlight the parts that look new:
In minimization,
mizes
3||x||°for
Minimizing
our
=
E?
Squarederror Compare
with
5x"'Ax
[A changes The constant
changes,
A
—
to
=
x"b
—
(Ax at
ATA]
—2x'A™b+b".
—b)'(Ax —b) =x'A™Ax
the start
of the section,
[b changes
to
which
[b"b
ATH]
b'D raises the whole graph—this has no effect to A’A and D to A'D, give a new way to reach
led to Ax
on
is
=
b:
added].
the best x. The other the
(9)
two
least-squares equation
Positive
ter6
Matrices
Definite
The
(normal equation).
minimizing equation
needs
The
Rayleigh
Our
second
whole
a
book.
§ A’Ax
Alb.
=
it is pure
We stop while
the
(10)
linear
algebra.
The Rayleigh Quotient

Our second goal is to find a minimization problem equivalent to the eigenvalue problem Ax = λx. That problem is not so easy to turn into a minimization. The function cannot be a quadratic, or its derivative would be linear—and the eigenvalue problem is nonlinear (λ times x). The trick that succeeds is to divide one quadratic by another one:

    Rayleigh quotient    Minimize    R(x) = xᵀAx / xᵀx.   (11)

If we keep xᵀAx = 1, then R(x) is a minimum when ‖x‖² = xᵀx is as large as possible. We are looking for the point on the ellipsoid xᵀAx = 1 farthest from the origin—the vector x of greatest length. From our earlier description of the ellipsoid, its longest axis points along the first eigenvector. So R(x) is a minimum at x₁.

Algebraically, we can diagonalize the symmetric A by an orthogonal matrix: QᵀAQ = Λ. Then set x = Qy and the quotient becomes simple:

    R(x) = (Qy)ᵀA(Qy) / (Qy)ᵀ(Qy) = yᵀΛy / yᵀy = (λ₁y₁² + ⋯ + λₙyₙ²)/(y₁² + ⋯ + yₙ²).

The minimum of R is λ₁, reached at the point where y₁ = 1 and y₂ = ⋯ = yₙ = 0. At all points, the Rayleigh quotient in equation (11) is never below λ₁ (the smallest eigenvalue). Its minimum is at the eigenvector x₁.
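A quick numerical check (my own sketch) that R(x) never falls below λ₁ and attains it at the first eigenvector:

```python
import numpy as np

# The Rayleigh quotient R(x) = x^T A x / x^T x of equation (11)
A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # symmetric positive definite

def rayleigh(x):
    return (x @ A @ x) / (x @ x)

lam, Q = np.linalg.eigh(A)        # eigenvalues 1 and 3, orthonormal columns
print(lam[0])                     # lambda_1 = 1.0
print(rayleigh(Q[:, 0]))          # R at the first eigenvector = 1.0

rng = np.random.default_rng(0)
samples = [rayleigh(rng.standard_normal(2)) for _ in range(1000)]
print(min(samples) >= lam[0] - 1e-12)   # True: R(x) >= lambda_1 always
```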
…the product H₁H₂ ⋯ Hₙ₋₁, which can be stored in this factored form (keep only the v's) and never computed explicitly. That completes the preparation by Gram–Schmidt.

2. The singular value decomposition UᵀAV = Σ. The diagonal matrix Σ has the same shape as A, and its entries (the singular values) are the square roots of the eigenvalues of AᵀA. Since Householder transformations can only prepare for the eigenvalue problem, we cannot expect them to produce Σ. Instead, they stably produce a bidiagonal matrix, with zeros everywhere except along the main diagonal and the one above.

The first step toward the SVD is exactly as above: Hx is a multiple of the first coordinate vector when x is the first column of A, and H₁A has zeros below the pivot in the first column. The next step is to multiply on the right by an H⁽¹⁾, which will produce zeros as indicated along the first row:

    A → H₁A = [*  *  *  *]        H₁AH⁽¹⁾ = [*  *  0  0]
              [0  *  *  *]    →             [0  *  *  *]
              [0  *  *  *]                  [0  *  *  *]
              [0  *  *  *]                  [0  *  *  *]

The multiplication on the right leaves the first column unchanged. Then two final Householder transformations quickly achieve the bidiagonal form:

    H₂H₁AH⁽¹⁾ = [*  *  0  0]     and then     H₂H₁AH⁽¹⁾H⁽²⁾ = [*  *  0  0]
                [0  *  *  *]                                  [0  *  *  0]
                [0  0  *  *]                                  [0  0  *  *]
                [0  0  *  *]                                  [0  0  0  *]
The QR Algorithm for Computing Eigenvalues

The QR algorithm is almost magically simple. It starts with A₀, factors it by Gram–Schmidt into A₀ = Q₀R₀, and then reverses the factors: A₁ = R₀Q₀. This new matrix A₁ is similar to the original one, because Q₀⁻¹A₀Q₀ = Q₀⁻¹(Q₀R₀)Q₀ = A₁. So the process continues with no change in the eigenvalues:

    All Aₖ are similar    Aₖ = QₖRₖ   and then   Aₖ₊₁ = RₖQₖ.   (5)

This equation describes the unshifted QR algorithm, and almost always Aₖ approaches a triangular form. Its diagonal entries approach its eigenvalues, which are also the eigenvalues of A₀. If there was already some processing to obtain a tridiagonal form, then A₀ is connected to the absolutely original A by Q⁻¹AQ = A₀.

As it stands, the QR algorithm is good but not very good. To make it special, it needs two refinements:

1. The Shifted Algorithm. The matrix Aₖ should be shifted immediately by a multiple αₖI of the identity:

    Shifted QR    Aₖ − αₖI = QₖRₖ   and then   Aₖ₊₁ = RₖQₖ + αₖI.   (6)

The matrix Aₖ₊₁ is similar to Aₖ (always the same eigenvalues). The popular choice for the shift αₖ is the entry in the lower right-hand corner of Aₖ. In practice this produces quadratic convergence, and in the symmetric case even cubic convergence, to the smallest eigenvalue. After three or four steps of the shifted algorithm, the matrix Aₖ looks like this:

    Aₖ = [*  *  *]
         [*  *  *]      with ε nearly zero.
         [0  ε  λ′]

The corner entry λ′ is then an extremely accurate approximation to an eigenvalue.
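A short sketch (not from the book) of the unshifted iteration in equation (5), applied to the −1, 2, −1 matrix:

```python
import numpy as np

# Unshifted QR algorithm: A_{k+1} = R_k Q_k after factoring A_k = Q_k R_k.
# Every A_k is similar to A_0, and the diagonal approaches the eigenvalues.
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

Ak = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ak)   # factor A_k = Q_k R_k
    Ak = R @ Q                # reverse the factors: a similar matrix

print(np.round(np.diag(Ak), 6))           # near 2 + sqrt(2), 2, 2 - sqrt(2)
print(np.round(np.linalg.eigvalsh(A), 6)) # the true eigenvalues, for comparison
```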
7F  The iterative method Sxₖ₊₁ = Txₖ + b of equation (1) is convergent if and only if every eigenvalue λ of S⁻¹T satisfies |λ| < 1. Its rate of convergence depends on the maximum size of the eigenvalues:

    Spectral radius ("rho")    ρ(S⁻¹T) = max |λᵢ|.

Remember that a typical solution to eₖ₊₁ = S⁻¹Teₖ is a combination of eigenvectors:

    Error after k steps    eₖ = c₁λ₁ᵏx₁ + ⋯ + cₙλₙᵏxₙ.   (4)

The largest |λᵢ| will eventually be dominant, so the spectral radius ρ = |λ_max| will govern the rate at which eₖ converges to zero. We certainly need ρ < 1.

Requirements 1 and 2 above are conflicting. We could achieve immediate convergence with S = A and T = 0; the first and only step of the iteration would be Ax₁ = b. In that case the matrix S⁻¹T is zero, its eigenvalues and spectral radius are zero, and the rate of convergence (usually defined as −log ρ) is infinite. But Ax₁ = b may be hard to solve; that was the reason for a splitting. A simple choice of S can often succeed, and we start with three possibilities:

1. S = diagonal part of A (Jacobi's method).
2. S = triangular part of A (Gauss–Seidel method).
3. S = combination of 1 and 2 (successive overrelaxation or SOR).

S is also called a preconditioner, and its choice is crucial in numerical analysis.

Example 1 (Jacobi)  Here S is the diagonal part of A:

    A = [ 2  −1]     S = [2  0]     T = [0  1]     S⁻¹T = [0   ½]
        [−1   2]         [0  2]         [1  0]            [½   0]

If the components of x are v and w, the Jacobi step Sxₖ₊₁ = Txₖ + b is

    2vₖ₊₁ = wₖ + b₁                vₖ₊₁ = ½wₖ + ½b₁
    2wₖ₊₁ = vₖ + b₂      or       wₖ₊₁ = ½vₖ + ½b₂.

The decisive matrix S⁻¹T has eigenvalues ±½, which means that the error is cut in half (one more binary digit becomes correct) at every step. In this example, which is much too small to be typical, the convergence is fast.
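A minimal sketch of that Jacobi step in code (mine, with an arbitrary right side b):

```python
import numpy as np

# Jacobi iteration S x_{k+1} = T x_k + b with S = diagonal part of A.
# For A = [[2, -1], [-1, 2]] the error is halved at every step (rho = 1/2).
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])                 # true solution is x = (1, 1)

S = np.diag(np.diag(A))                  # Jacobi splitting: S = D
T = S - A                                # so that A = S - T

x = np.zeros(2)
for k in range(30):
    x = np.linalg.solve(S, T @ x + b)    # one diagonal solve per step

print(x)                                                    # close to [1. 1.]
print(max(abs(np.linalg.eigvals(np.linalg.solve(S, T)))))   # 0.5
```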
For a larger matrix A, there is a very practical difficulty. The Jacobi iteration requires us to keep all components of xₖ until the calculation of xₖ₊₁ is complete. A much more natural idea, which requires only half as much storage, is to start using each component of the new xₖ₊₁ as soon as it is computed; xₖ₊₁ takes the place of xₖ a component at a time. Then xₖ can be destroyed as fast as xₖ₊₁ is created. The first component remains as before:

    New x₁    a₁₁(x₁)ₖ₊₁ = (−a₁₂x₂ − a₁₃x₃ − ⋯ − a₁ₙxₙ)ₖ + b₁.

The next step operates immediately with this new value of x₁, to find (x₂)ₖ₊₁:

    New x₂    a₂₂(x₂)ₖ₊₁ = (−a₂₁x₁)ₖ₊₁ + (−a₂₃x₃ − ⋯ − a₂ₙxₙ)ₖ + b₂.

And the last equation in the iteration step uses new values exclusively:

    New xₙ    aₙₙ(xₙ)ₖ₊₁ = (−aₙ₁x₁ − aₙ₂x₂ − ⋯ − aₙ,ₙ₋₁xₙ₋₁)ₖ₊₁ + bₙ.

This is called the Gauss–Seidel method, even though it was apparently unknown to Gauss and not recommended by Seidel. That is a surprising bit of history, because it is not a bad method. When the terms in xₖ₊₁ are moved to the left-hand side, S is seen as the lower triangular part of A. On the right-hand side, T is strictly upper triangular.

Example 2 (Gauss–Seidel)  The same A with its lower triangular part in S:

    A = [ 2  −1]     S = [ 2  0]     T = [0  1]     S⁻¹T = [0   ½]
        [−1   2]         [−1  2]         [0  0]            [0   ¼]

A single Gauss–Seidel step takes the components vₖ₊₁ and wₖ as

    2vₖ₊₁ = wₖ + b₁
    2wₖ₊₁ = vₖ₊₁ + b₂.

The eigenvalues of S⁻¹T are ¼ and 0. The error is divided by 4 every time, so a single Gauss–Seidel step is worth two Jacobi steps. Since both methods require the same number of operations—we just use the new value instead of the old, and actually save on storage—the Gauss–Seidel method is better.

This rule holds in many applications, even though there are examples in which Jacobi converges and Gauss–Seidel fails (or conversely). The symmetric case is straightforward: When all aᵢᵢ > 0, Gauss–Seidel converges if and only if A is positive definite.
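The same loop works for Gauss–Seidel; only the splitting changes (a sketch, mine):

```python
import numpy as np

# Gauss-Seidel: S = lower triangular part of A (diagonal included), T = S - A.
# For this A the error drops by 1/4 per step: one Gauss-Seidel step
# is worth two Jacobi steps.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])

S = np.tril(A)                        # D + L
T = S - A                             # -U

x = np.zeros(2)
for k in range(15):
    x = np.linalg.solve(S, T @ x + b)

print(x)                                                    # close to [1. 1.]
print(max(abs(np.linalg.eigvals(np.linalg.solve(S, T)))))   # 0.25
```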
It was discovered during the years of hand computation (probably by accident) that convergence is faster if we go beyond the Gauss–Seidel correction xₖ₊₁ − xₖ. Roughly speaking, those approximations stay on the same side of the solution x. An overrelaxation factor ω moves us closer to the solution. With ω = 1, we recover Gauss–Seidel; with ω > 1, the method is known as successive overrelaxation (SOR). The optimal choice of ω never exceeds 2. It is often in the neighborhood of 1.9.

To describe overrelaxation, let D, L, and U be the parts of A on, below, and above the diagonal, respectively. (This splitting has nothing to do with the A = LDU of elimination. In fact we now have A = L + D + U.) The Jacobi method has S = D on the left-hand side and T = −L − U on the right-hand side. Gauss–Seidel chose S = D + L and T = −U. To accelerate the convergence, we move ω into the splitting:

    Overrelaxation    [D + ωL]xₖ₊₁ = [(1 − ω)D − ωU]xₖ + ωb.   (5)

Regardless of ω, the matrix on the left-hand side is lower triangular and the one on the right-hand side is still upper triangular. Therefore xₖ₊₁ can replace xₖ, one component at a time, as soon as it is computed. A typical step is

    aᵢᵢ(xᵢ)ₖ₊₁ = aᵢᵢ(xᵢ)ₖ + ω[(−aᵢ₁x₁ − ⋯ − aᵢ,ᵢ₋₁xᵢ₋₁)ₖ₊₁ + (−aᵢᵢxᵢ − ⋯ − aᵢₙxₙ)ₖ + bᵢ].
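A compact sketch of the splitting in equation (5); with ω = 1 this code is exactly Gauss–Seidel (my example, reusing the 2 by 2 matrix):

```python
import numpy as np

# SOR splitting of equation (5): [D + wL] x_{k+1} = [(1-w)D - wU] x_k + w b
def sor_matrices(A, w):
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    return D + w * L, (1 - w) * D - w * U

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])
w = 4 * (2 - np.sqrt(3))            # the optimal w ~ 1.07 found below

S, T = sor_matrices(A, w)
x = np.zeros(2)
for k in range(15):
    x = np.linalg.solve(S, T @ x + w * b)

print(x)                                                    # close to [1. 1.]
print(max(abs(np.linalg.eigvals(np.linalg.solve(S, T)))))   # ~ w - 1 = 0.07
```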
If the old guess xₖ happened to coincide with the true solution x, then the new guess xₖ₊₁ would stay the same, and the quantity in brackets would vanish.

Example 3  For the same A, each overrelaxation step is

    [ 2  0] xₖ₊₁ = [2(1−ω)    ω   ] xₖ + ωb.
    [−ω  2]        [  0    2(1−ω)]

If we divide these two matrices by ω, they are the S and T in the splitting A = S − T, and the iteration is back to Sxₖ₊₁ = Txₖ + b. The crucial matrix is L = S⁻¹T:

    L = [ 2  0]⁻¹ [2(1−ω)    ω   ]  =  [  1−ω         ω/2    ]
        [−ω  2]   [  0    2(1−ω)]     [ω(1−ω)/2   1−ω+ω²/4 ]

The whole point of overrelaxation is to discover the optimal ω, the choice that makes the largest eigenvalue of L (its spectral radius) as small as possible. The product of the eigenvalues equals det L = det T / det S:

    λ₁λ₂ = det L = (1 − ω)².

Always det S = det D, because L lies below the diagonal, and det T = (1 − ω)ⁿ det D, because U lies above the diagonal. Their ratio is det L = (1 − ω)ⁿ. (This explains why we never go as far as ω = 2. The product of the eigenvalues would be too large, and the iteration could not converge.) We also get a clue to the behavior of the eigenvalues: At the optimal ω the two eigenvalues are equal. They must both equal ω − 1, so that their product will match det L. This optimal value is easy to compute, because the sum of the eigenvalues always agrees with the sum of the diagonal entries (the trace of L):

    Optimal ω    λ₁ + λ₂ = (ω_opt − 1) + (ω_opt − 1) = 2 − 2ω_opt + ¼ω_opt².   (6)

This quadratic equation gives ω_opt = 4(2 − √3) ≈ 1.07. The two equal eigenvalues are approximately ω − 1 = .07, which is a major reduction from the Gauss–Seidel value λ = ¼ at ω = 1. In this example, the right choice of ω has again doubled the rate of convergence, because (¼)² ≈ .07. If ω is further increased, the eigenvalues become a complex conjugate pair—both have |λ| = ω − 1, which is now increasing with ω.

The discovery that such an improvement could be produced so easily, almost as if by magic, was the starting point for 20 years of enormous activity in numerical analysis. The first problem was solved in Young's 1950 thesis—a simple formula for the optimal ω. The key step was to connect the eigenvalues λ of L to the eigenvalues μ of the original Jacobi matrix D⁻¹(−L − U). That connection is expressed by

    Formula for ω    (λ + ω − 1)² = λω²μ².   (7)

This is valid for a wide class of finite difference matrices, and if we take ω = 1 (Gauss–Seidel) it yields λ² = λμ². Therefore λ = 0 and λ = μ², as we found in Example 2, where the Jacobi eigenvalues were μ = ±½. All the matrices in Young's class have eigenvalues μ that occur in plus–minus pairs, and the corresponding λ come from equation (7).

The important problem is to choose ω so that λ_max will be minimized. Fortunately, Young's equation (7) is exactly our 2 by 2 example! The best ω makes the two roots of λ both equal to ω − 1:

    (ω − 1) + (ω − 1) = 2 − 2ω + ω²μ²,  which specifies ω.

For a large matrix, this pattern will be repeated for a number of different pairs ±μᵢ, and we can only make a single choice of ω. Since our goal is to make the largest eigenvalue as small as possible, that extremal pair is ±μ_max: the largest μ gives the best choice of ω:

    Optimal ω    ω_opt = 2(1 − √(1 − μ²_max))/μ²_max   and   λ_max = ω_opt − 1.   (8)
7G  The splittings of the −1, 2, −1 matrix of order n yield these eigenvalues of S⁻¹T:

    Jacobi (S = 0, 2, 0 matrix):            S⁻¹T has |λ|_max = cos πh
    Gauss–Seidel (S = −1, 2, 0 matrix):     S⁻¹T has |λ|_max = cos² πh
    SOR (with the optimal ω):               |λ|_max = (cos πh / (1 + sin πh))²

Here h = 1/(n + 1). That is a striking result, and it can only be appreciated by example. Suppose A is of order 21, which is even moderate. Then cos πh = .99, and the Jacobi method is very slow; cos² πh = .98 means that even Gauss–Seidel will require a great many iterations. But since sin πh = .14, the optimal overrelaxation factor is ω_opt = 2/1.14 = 1.75, and the convergence factor is λ_max = ω_opt − 1 = .75. The error is reduced by 25% at every step, and a single SOR step is the equivalent of 30 Jacobi steps: (.99)³⁰ = .75.

Overrelaxation is a simple idea, but its real applications are not one-dimensional problems like −uₓₓ = f. A tridiagonal system Ax = b is already easy. It is for partial differential equations that iterative methods are important. Changing to −uₓₓ − u_yy = f leads to the "five-point scheme." The entries −1, 2, −1 in the x direction combine with −1, 2, −1 in the y direction to give a main diagonal of +4 and four off-diagonal entries of −1. The matrix A does not have a small bandwidth! There is no way to number the N² mesh points in a square so that each point stays close to all four of its neighbors. That is the curse of dimensionality, and parallel computers will partly relieve it.
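A quick numerical check (mine) of the comparison in 7G for order n = 21:

```python
import numpy as np

# Convergence factors from 7G for the -1, 2, -1 matrix of order 21
n = 21
h = 1.0 / (n + 1)

jacobi = np.cos(np.pi * h)                 # 0.99
gauss_seidel = jacobi ** 2                 # 0.98
w_opt = 2 / (1 + np.sin(np.pi * h))        # 1.75
sor = w_opt - 1                            # 0.75

print(round(jacobi, 2), round(gauss_seidel, 2), round(w_opt, 2), round(sor, 2))
# (0.99)^30 ~ 0.74, matching the text's rough count of 30 Jacobi steps per SOR step
print(round(jacobi ** 30, 2))
```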
If the ordering goes a row at a time, every mesh point must wait a whole row for the neighbor above it to turn up. The −1, 2, −1 in x and y gives the "five-point matrix" with bandwidth N: the diagonal entries are +4, the horizontal neighbors contribute −1 next to the diagonal, and the vertical neighbors contribute −1 a full N columns away.

This matrix has had more attention, and been attacked in more different ways, than any other linear equation Ax = b. The trend now is back to direct methods, based on an idea of Golub and Hockney: certain special matrices will fall apart when they are dropped the right way. (It is comparable to the Fast Fourier Transform.) Before that came the iterative methods of alternating direction, in which the splitting separated the tridiagonal matrix in the x direction from the one in the y direction.

A new idea is incomplete LU: S = L₀U₀, in which the true factors of A are approximated—small entries are set to zero while factoring, so the factors stay sparse. This preconditioner can be terrific. We cannot completely close without mentioning the conjugate gradient method, which looked dead but is suddenly very much alive (Problem 33 gives the steps). It is direct rather than iterative, but unlike elimination, it can be stopped part way. And needless to say, a completely new idea may still appear and win. But it seems fair to say that it was the change from .99 to .75 that revolutionized the solution of Ax = b.
Problem Set 7.4

1. For this matrix A with eigenvalues 2 − √2, 2, and 2 + √2, find the Jacobi matrix J = D⁻¹(−L − U):

       A = [ 2  −1   0]
           [−1   2  −1]
           [ 0  −1   2]

2. Show that the vector x₁ = (sin πh, sin 2πh, …, sin nπh) is an eigenvector of J with eigenvalue λ₁ = cos πh = cos π/(n + 1).

3. In Problem 2, show that xₖ = (sin kπh, sin 2kπh, …, sin nkπh) is an eigenvector of A. Multiply xₖ by A to find the corresponding eigenvalue αₖ. Verify that in the 3 by 3 case these eigenvalues are 2 − √2, 2, 2 + √2.

   Note  The eigenvalues of the Jacobi matrix J = ½(−L − U) = I − ½A are λₖ = 1 − ½αₖ = cos kπh. They occur in plus–minus pairs, and λ_max is cos πh.

Problems 4–5 require Gershgorin's "circle theorem": Every eigenvalue of A lies in at least one of the circles C₁, …, Cₙ, where Cᵢ has its center at the diagonal entry aᵢᵢ. Its radius rᵢ = Σ_{j≠i} |aᵢⱼ| is equal to the absolute sum along the rest of the row.

    Proof  Suppose xᵢ is the largest component of x. Then Ax = λx leads to

        (λ − aᵢᵢ)xᵢ = Σ_{j≠i} aᵢⱼxⱼ,   so   |λ − aᵢᵢ| ≤ Σ_{j≠i} |aᵢⱼ| |xⱼ|/|xᵢ| ≤ rᵢ.

4. A matrix is called diagonally dominant when every |aᵢᵢ| > rᵢ. Show that zero cannot lie in any of the circles of a diagonally dominant matrix, and conclude that A is nonsingular.

5. Write the Jacobi matrix J for the diagonally dominant matrix of Problem 4, and find the Gershgorin circles for J. Show that all |λ| < 1, so the Jacobi iteration converges.

6. Elimination gives LUx₀ = b, but because of roundoff LU is not exactly A. Compute the residual r = b − Ax₀ (everything in double precision) and add the correction from LUy = r: x₁ = x₀ + y. Show that this is the iteration Sxₖ₊₁ = Txₖ + b with S = LU; only one new vector r is needed at each step.

7. For a 2 by 2 matrix, find the Jacobi iteration matrix S⁻¹T = −D⁻¹(L + U) and its eigenvalues μᵢ. Find also the Gauss–Seidel matrix −(D + L)⁻¹U and its eigenvalues λᵢ, and decide whether λ_max = μ²_max.

8. Change Ax = b to x = (I − A)x + b. What are S and T for this splitting? What matrix S⁻¹T controls the convergence of xₖ₊₁ = (I − A)xₖ + b?

9. If λ is an eigenvalue of A, then ___ is an eigenvalue of B = I − A. The real eigenvalues of B have absolute value less than 1 if the real eigenvalues of A lie between ___ and ___.

10. Show why the iteration xₖ₊₁ = (I − A)xₖ + b does not converge for A = [2 −1; −1 2].

11. Why is the norm of Bᵏ never larger than ‖B‖ᵏ? Then ‖B‖ < 1 guarantees that the powers Bᵏ approach zero (convergence). This is no surprise, since the spectral radius |λ|_max never exceeds ‖B‖.
…where all inequalities are strict: x > 0 from the "interior," where 0 becomes > 0. The result is a great way to get help from the dual problem in solving the primal problem.

One key to this chapter is to see the geometric meaning of linear inequalities. An inequality divides n-dimensional space into a halfspace in which the inequality is satisfied, and a halfspace in which it is not. A typical example is x + 2y ≥ 4. The boundary between the two halfspaces is the line x + 2y = 4, where the inequality is "tight." Figure 8.1 would look almost the same in three dimensions. The boundary becomes a plane like x + 2y + z = 4, and above it is the halfspace x + 2y + z ≥ 4. In n dimensions, the "plane" has dimension n − 1.

Figure 8.1  Equations give lines and planes. Inequalities give halfspaces. (The lines shown are x + 2y = 4 and x + 2y = 0.)

Another constraint is fundamental to linear programming: x and y are required to be nonnegative. This pair of inequalities x ≥ 0 and y ≥ 0 produces two more halfspaces. Figure 8.2 is bounded by the coordinate axes: x ≥ 0 admits all points to the right of x = 0, and y ≥ 0 is the halfspace above y = 0.
The Feasible Set and the Cost Function

The important step is to impose all three inequalities at once. They combine to give the shaded region in Figure 8.2. This feasible set is the intersection of the three halfspaces x + 2y ≥ 4, x ≥ 0, and y ≥ 0. A feasible set is composed of the solutions to a family of linear inequalities like Ax ≥ b (the intersection of m halfspaces). When we also require that every component of x is nonnegative (the vector inequality x ≥ 0), this adds n more halfspaces. The more constraints we impose, the smaller the feasible set.

It can easily happen that a feasible set is bounded or even empty. If we switch our example to the halfspace x + 2y ≤ 4, keeping x ≥ 0 and y ≥ 0, we get the small triangle OAB. By combining both inequalities x + 2y ≥ 4 and x + 2y ≤ 4, the set shrinks to a line where x + 2y = 4. If we add a contradictory constraint like x + 2y ≤ −2, the feasible set is empty.
The algebra of linear inequalities (or feasible sets) is one part of our subject. But linear programming has another essential ingredient: It looks for the feasible point that maximizes or minimizes a certain cost function like 2x + 3y. The problem in linear programming is to find the point that lies in the feasible set and minimizes the cost.

The problem is illustrated by the geometry of Figure 8.2. The family of costs 2x + 3y gives a family of parallel lines. The minimum cost comes when the first line intersects the feasible set. That intersection occurs at B, where x* = 0 and y* = 2; the minimum cost is 2x* + 3y* = 6. The vector (0, 2) is feasible because it lies in the feasible set, it is optimal because it minimizes the cost function, and the minimum cost 6 is the value of the program. We denote optimal vectors by an asterisk.

Figure 8.2  The feasible set with flat sides, and the family of costs 2x + 3y, touching first at the corner B.

Note  If the cost happened to be x + 2y, the whole edge between B and A would be optimal. The minimum cost is x* + 2y*, which equals 4 for all these optimal vectors; there would be no single optimal solution. In contrast, a maximum problem has no solution at all on this set: the cost can go arbitrarily high inside the feasible set, and the maximum cost is infinite.

Every linear programming problem falls into one of three possible categories:

1. The feasible set is empty.
2. The cost function is unbounded on the feasible set.
3. The cost reaches its minimum (or maximum) on the feasible set: the good case.

The empty and unbounded cases should be very uncommon for a genuine problem in economics or engineering. We expect a solution.

With a feasible set that has flat sides, the minimum must occur at a corner. The lines of constant cost move steadily up until they intersect the feasible set, and that first contact occurs along its boundary! The "simplex method" will go from one corner of the feasible set to a different corner until it finds the corner with lowest cost. In contrast, "interior point methods" approach that optimal point from inside the feasible set. We expect the geometry to be harder with more unknowns, because the minimum must be found among many more corners.
Slack Variables

There is a simple way to change the inequality constraint x + 2y ≥ 4 into an equation. Just introduce the difference as a slack variable w = x + 2y − 4. This is our equation! The old constraint x + 2y ≥ 4 is converted into w ≥ 0, which matches perfectly the other inequality constraints. Then we have only equations and simple nonnegativity constraints x ≥ 0, y ≥ 0, w ≥ 0. The variables w that "take up the slack" are now included in the unknown vector x:

    Primal problem    Minimize cx, subject to Ax = b and x ≥ 0.

The row vector c contains the costs; in our example, c = [2 3 0]. The condition x ≥ 0 puts the problem into the nonnegative part of Rⁿ. Those inequalities cut back on the solutions to Ax = b. Elimination is in danger, and a completely new idea is needed.
The Diet Problem and Its Dual

Our example with x + 2y ≥ 4 can be put into words. It illustrates the "diet problem" in linear programming, with two sources of protein—say peanut butter and steak. Each pound of peanut butter gives a unit of protein, and each steak gives two units. At least four units are required in the diet. Therefore a diet containing x pounds of peanut butter and y steaks is constrained by x + 2y ≥ 4, as well as by x ≥ 0 and y ≥ 0. (We cannot have negative steak or peanut butter.) This is the feasible set, and the problem is to minimize the cost. If a pound of peanut butter costs $2 and a steak is $3, then the cost of the whole diet is 2x + 3y. Fortunately, the optimal diet is two steaks: x* = 0 and y* = 2.

Every linear program, including this one, has a dual. If the original problem is a minimization, its dual is a maximization. The minimum in the given "primal problem" equals the maximum in its dual. This is the key to linear programming, and it will be explained in Section 8.3. Here we stay with the diet problem and try to interpret its dual.

In place of the shopper, who buys enough protein at minimal cost, the dual problem is faced by a druggist. Protein pills compete with steak and peanut butter. Immediately we meet the two ingredients of a typical linear program: The druggist maximizes the pill price p, but that price is subject to linear constraints. Synthetic protein must not cost more than the protein in peanut butter ($2 a unit) or the protein in steak ($3 for two units). The price must be nonnegative or the druggist will not sell. Since four units of protein are required, the income to the druggist will be 4p:

    Dual problem    Maximize 4p, subject to p ≤ 2, 2p ≤ 3, p ≥ 0.

In this example the dual is easier to solve than the primal; the constraint 2p ≤ 3 is the tight one that is really active, and the maximum price of synthetic protein is p = $1.50. The shopper ends up paying the same for natural and synthetic protein: maximum income equals minimum cost. That is the duality theorem.
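Both sides of the diet problem can be handed to a library LP solver; this sketch is mine, not the book's. scipy's linprog minimizes, so the druggist's maximization is done through a sign change, and its ≤ convention means the protein constraint is multiplied by −1.

```python
from scipy.optimize import linprog

# Primal (shopper): minimize 2x + 3y subject to x + 2y >= 4, x, y >= 0
primal = linprog(c=[2, 3], A_ub=[[-1, -2]], b_ub=[-4], bounds=[(0, None)] * 2)

# Dual (druggist): maximize 4p subject to p <= 2, 2p <= 3, p >= 0
dual = linprog(c=[-4], A_ub=[[1], [2]], b_ub=[2, 3], bounds=[(0, None)])

print(primal.x, primal.fun)   # [0. 2.]  6.0  -- two steaks, cost $6
print(dual.x, -dual.fun)      # [1.5]    6.0  -- pill price $1.50, income $6
```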
Typical Applications

The next section will concentrate on solving linear programs. This is the time to describe two practical situations in which we minimize or maximize a linear cost function subject to linear constraints.

1. Production Planning. Suppose General Motors makes a profit of $200 on each Chevrolet, $300 on each Buick, and $500 on each Cadillac. These get 20, 17, and 14 miles per gallon, respectively, and Congress insists on an average of 18. The plant can assemble a Chevrolet in 1 minute, a Buick in 2 minutes, and a Cadillac in 3 minutes. What is the maximum profit in 8 hours (480 minutes)?

    Problem    Maximize the profit 200x + 300y + 500z, subject to
               20x + 17y + 14z ≥ 18(x + y + z),   x + 2y + 3z ≤ 480,   x, y, z ≥ 0.

2. Portfolio Selection. Federal bonds pay 5%, municipals pay 6%, and junk bonds pay 9%. We can buy amounts x, y, z not exceeding a total of $100,000. The problem is to maximize the interest, with two constraints: (i) no more than $20,000 can be invested in junk bonds, and (ii) the portfolio's average quality must be no lower than municipals, so that z ≤ x.

    Problem    Maximize 5x + 6y + 9z, subject to
               x + y + z ≤ 100,000,   z ≤ 20,000,   z ≤ x,   x, y, z ≥ 0.
Problem Set 8.1

1. Sketch the feasible set with constraints x + 2y ≥ 6, 2x + y ≥ 6, x ≥ 0, y ≥ 0. What points lie at its three "corners"?

2. (Recommended) On the preceding feasible set, what is the minimum value of the cost function x + y? Draw the line x + y = constant that first touches the feasible set. Which point minimizes the cost function 3x + y?

3. Show that the feasible set constrained by 2x + 5y ≤ 3, −3x + 8y ≤ −5, x ≥ 0, y ≥ 0, is empty.

4. Show that the following problem is feasible but unbounded, so it has no optimal solution: Maximize x + y, subject to x ≥ 0, y ≥ 0, −3x + 2y ≤ 0.

5. Add a single inequality constraint to x ≥ 0, y ≥ 0 such that the feasible set contains only one point.

6. What shape is the feasible set x ≥ 0, y ≥ 0, z ≥ 0, x + y + z = 1, and what is the maximum of x + 2y?

7. The constraints x ≥ 0, y ≥ 0, z ≥ 0 cut three-dimensional space down to an eighth of the space (the positive octant). How is this cut by the planes from two additional constraints, and what shape is the feasible set? Show that, with only these two constraints, there will be only two kinds of optimal solution.

8. Solve the General Motors production problem of the text: which cars give the maximum profit?

9. (Transportation problem) Suppose Texas, California, and Alaska each produce a million barrels of oil; 800,000 barrels are needed in Chicago, at distances 1000, 2000, and 3000 miles from the three producers, respectively; and 2,200,000 barrels are needed in New England, at distances 1500, 3000, and 3700 miles. If shipments cost one unit for each barrel-mile, what linear program with five equality constraints must be solved to minimize the shipping cost?
8.2 The Simplex Method

This section is about linear programming with n unknowns x ≥ 0 and m constraints Ax ≥ b. In the previous section we had two variables, and one constraint x + 2y ≥ 4. The full problem is not hard to explain, and not easy to solve. The best approach is to put the problem into matrix form. We are given A, b, and c:

1. an m by n matrix A,
2. a column vector b with m components, and
3. a row vector c (cost vector) with n components.

To be "feasible," the vector x must satisfy x ≥ 0 and Ax ≥ b. The optimal vector is the feasible vector of least cost—and the cost is cx = c₁x₁ + ⋯ + cₙxₙ:

    Minimum problem    Minimize the cost cx, subject to x ≥ 0 and Ax ≥ b.

The condition x ≥ 0 restricts x to the positive quadrant in n-dimensional space. In R² it is a quarter of the plane; in R³ it is an eighth of the space. A random vector has one chance in 2ⁿ of being nonnegative. The constraint Ax ≥ b produces m additional halfspaces, and the feasible vectors must satisfy all of the m + n conditions. In other words, x lies in the intersection of m + n halfspaces. This feasible set has flat sides; it may be unbounded, and it may be empty.

The cost function cx brings to the problem a family of parallel planes. One plane, cx = 0, goes through the origin. The planes cx = constant give all possible costs. As the cost varies, these planes sweep out the whole n-dimensional space. The optimal x* (lowest cost) occurs at the point where the planes first touch the feasible set.

Our aim is to compute x*. We could do it (in principle) by finding all the corners of the feasible set, and computing their costs. In practice this is impossible. There could be billions of corners, and we cannot compute them all. Instead we turn to the simplex method, one of the most celebrated ideas in computational mathematics. It was developed by Dantzig as a systematic way to solve linear programs, and either by luck or genius it is an astonishing success. The steps of the simplex method are summarized later; first we try to explain them.

The Geometry: Movement Along Edges

I think it is the geometric explanation that gives the method away. Phase I simply locates one corner of the feasible set. The heart of the method goes from corner to corner along the edges of the feasible set. At a typical corner there are n edges to choose from. Some edges lead away from the optimal but unknown x*, and others lead gradually toward it. Dantzig chose an edge that leads to a new corner with a lower cost. There is no possibility of returning to anything more expensive. Eventually a special corner is reached, from which all edges go the wrong way: The cost has been minimized. That corner is the optimal vector x*, and the method stops.

The next problem is to turn the ideas of corner and edge into linear algebra. A corner is the meeting point of n different planes. Each plane is given by one equation, just as three planes (front wall, side wall, and floor) produce a corner in three dimensions. Each corner of the feasible set comes from turning n of the n + m inequalities Ax ≥ b and x ≥ 0 into equations, and finding the intersection of these n planes.
One possibility is to choose the n equations x₁ = 0, …, xₙ = 0, and end up at the origin. Like all the other possible choices, this intersection point will only be a genuine corner if it also satisfies the m remaining inequality constraints. Otherwise it is not even in the feasible set, and is a complete fake. Our example with n = 2 variables and m = 2 constraints has six intersections, illustrated in Figure 8.3. Three of them are actually corners P, Q, R of the feasible set. They are the vectors (0, 6), (2, 2), and (6, 0). One of them must be the optimal vector (unless the minimum cost is −∞). The other three, including the origin, are fakes.

Figure 8.3  The corners P = (0, 6), Q = (2, 2), R = (6, 0), and three fake intersections.

In general there are (n + m)!/n!m! possible intersections. That counts the number of ways to choose n plane equations out of n + m. The size of that binomial coefficient makes computing all corners totally impractical for large m and n. It is the task of Phase I either to find one genuine corner or to establish that the feasible set is empty. We continue on the assumption that a corner has been found.

Suppose one of the n intersecting planes is removed. The points that satisfy the remaining n − 1 equations form an edge that comes out of the corner. This edge is the intersection of the n − 1 planes. To stay in the feasible set, only one direction is allowed along each edge. But we do have a choice of n different edges, and Phase II must make that choice.

To describe this phase, rewrite Ax ≥ b in a form completely parallel to the n simple constraints xⱼ ≥ 0. This is the role of the slack variables w = Ax − b. The constraints Ax ≥ b are translated into w₁ ≥ 0, …, wₘ ≥ 0, with one slack variable for every row of A. The equation w = Ax − b, or Ax − w = b, goes into matrix form:

    Slack variables give m equations    [A  −I] (x, w) = b.

The feasible set is governed by these m equations and the n + m simple inequalities x ≥ 0, w ≥ 0. We now have equality constraints and nonnegativity. The simplex method notices no difference between x and w, so we simplify:

    [A  −I] is renamed A,  the vector (x, w) is renamed x,  and [c  0] is renamed c.

The equality constraints are now Ax = b, and the n + m inequalities become just x ≥ 0. The only trace left of the slack variable w is in the fact that the new matrix A is m by n + m, and the new x has n + m components. We keep this much of the original notation, leaving m and n unchanged as a reminder of what happened. The problem has become: Minimize cx, subject to x ≥ 0 and Ax = b.

Example  The problem in Figure 8.3 has constraints x + 2y ≥ 6 and 2x + y ≥ 6, and cost x + y. The new system has four unknowns (x, y, and two slack variables):

    A = [1  2  −1   0],    b = (6, 6),    c = [1  1  0  0].
        [2  1   0  −1]
The Simplex Algorithm

With equality constraints, the simplex method can begin. A corner is now a point where n components of the new vector x (the old x and w) are zero. These n components of x are the free variables in Ax = b. The remaining m components are the basic variables or pivot variables. Setting the n free variables to zero, the m equations Ax = b determine the m basic variables. This "basic solution" x will be a genuine corner if its m nonzero components are positive. Then x belongs to the feasible set.

8A  The corners of the feasible set are the basic feasible solutions of Ax = b. A solution is basic when n of its n + m components are zero, and it is feasible when it satisfies x ≥ 0. Phase I of the simplex method finds one basic feasible solution. Phase II moves step by step to the optimal x*.

The corner P in Figure 8.3 is the intersection of x = 0 with 2x + y − 6 = 0:

    Corner    x = (0, 6, 6, 0):  two zeros, and positive nonzeros.

Which corner do we go to next? We want to move along an edge to an adjacent corner. Since the two corners are neighbors, m − 1 basic variables will remain basic. Only one variable at a time moves from free to basic (the entering variable), and one moves from basic to free (the leaving variable). The other components stay as they are: the basic variables are computed by solving Ax = b, and the free components of x stay at zero. The choice of entering and leaving variables (see the example below) is the heart of a simplex step.

Example

    Minimize    7x₃ − x₄ − 3x₅,   subject to x ≥ 0 and
                x₁ + x₃ + 6x₄ + 2x₅ = 8
                x₂ + x₃        + 3x₅ = 9.

Start from the corner x = (8, 9, 0, 0, 0), where x₁ and x₂ are basic and the cost is zero. At that corner the cost may not be minimal, and we are trying to lower it. It would be foolish to make x₃ positive, because its cost coefficient is +7 and the cost would move up. We choose x₅ because it has the most negative cost coefficient, −3. The entering variable will be x₅.

With x₅ entering the basis, x₁ or x₂ must leave. In the first equation, increase x₅ and decrease x₁ while keeping x₁ + 2x₅ = 8. Then x₁ will be down to zero when x₅ reaches 4. The second equation keeps x₂ + 3x₅ = 9. Here x₅ can only increase as far as 3. To go further would make x₂ negative, so the leaving variable is x₂. The new corner has x = (2, 0, 0, 0, 3). The cost is down to −9.

Quick way to decide the leaving variable: In Ax = b, the right sides divided by the coefficients of the entering variable tell which variable hits zero first, and must leave. We consider only positive ratios, because if the coefficient of x₅ in the second equation were −3, then increasing x₅ would actually increase x₂. (At x₅ = 10 the second equation would give x₂ = 39.) The smallest ratio, 9/3, says that the second variable leaves; it also gives x₅ = 3. If all coefficients of x₅ had been negative, this would be an unbounded case: we could make x₅ arbitrarily large, and bring the cost down toward −∞.

The current step ends at the new corner x = (2, 0, 0, 0, 3). The next step will only be easy if the basic variables x₁ and x₅ stand by themselves (as x₁ and x₂ originally did). Therefore, we "pivot" by substituting x₅ = ⅓(9 − x₂ − x₃) into the cost function and the first equation. The new problem, starting from the new corner, is:

    Minimize    −9 + x₂ + 8x₃ − x₄,   subject to
                x₁ − ⅔x₂ + ⅓x₃ + 6x₄ = 2
                     ⅓x₂ + ⅓x₃ + x₅ = 3.

The next step is now easy. The only negative coefficient −1 in the cost makes x₄ the entering variable. The ratio test on the x₄ column makes x₁ the leaving variable (x₄ appears only in the first equation). The new corner is x* = (0, 0, 0, ⅓, 3). The new cost −9⅓ is the minimum.

In a large problem, a departing variable might reenter the basis later on. But the cost keeps going down—except in a degenerate case—so the m basic variables can't be the same as before. No corner is ever revisited! The simplex method must end at the optimal corner (or at −∞ if the cost turns out to be unbounded). What is remarkable is the speed at which x* is found.

Summary  The cost coefficients 7, −1, −3 at the first corner and 1, 8, −1 at the second decided the entering variables. (These numbers go into r, the crucial vector defined below. When they are all positive we stop.) The ratios decided the leaving variables.

Remark on degeneracy  A corner is degenerate if more than the usual n components of x are zero. More than n planes pass through the corner, so a basic variable happens to vanish. The ratios that determine the leaving variable will include zeros, and the basis might change without actually moving from the corner. In theory, we could stay at a corner and cycle forever in the choice of basis. Fortunately, cycling does not occur; it is so rare that commercial codes ignore it. Unfortunately, degeneracy is extremely common in applications—if you print the cost after each simplex step, you see it repeat several times before the simplex method finds a good edge. Then the cost decreases again.
The Tableau

Each simplex step involves decisions followed by row operations—the entering and leaving variables have to be chosen, and they have to be made to come and go. One way to organize the step is to fit A, b, c into a large matrix, or tableau:

    Tableau is m + 1 by m + n + 1    T = [A  b]
                                         [c  0]

At the start, the basic variables may be mixed with the free variables. Renumbering if necessary, suppose that x₁, …, xₘ are the basic (nonzero) variables at the current corner. The first m columns of A form the square basis matrix B for that corner, and the last n columns give an m by n matrix N. The cost vector c splits into [c_B  c_N], and the unknown x splits into (x_B, x_N).

At the corner, the free variables are zero: x_N = 0. There, Ax = b turns into Bx_B = b:

    x_N = 0,    x_B = B⁻¹b,    current cost = c_B B⁻¹b.

The tableau at this corner is

    T = [B    N    b]
        [c_B  c_N  0]

To reach the fully reduced row echelon form R = rref(T), multiply the top block row by B⁻¹, and subtract c_B times that row from the bottom row:

    Fully reduced    R = [I   B⁻¹N              B⁻¹b     ]
                         [0   c_N − c_B B⁻¹N   −c_B B⁻¹b]

Let me review the meaning of each entry in this tableau, and also call attention to the underlying algebra. The constraint Ax = b is Bx_B + Nx_N = b, so the basic variables are x_B = B⁻¹b − B⁻¹Nx_N, and the cost cx = c_B x_B + c_N x_N has been turned into

    Cost at this corner    cx = (c_N − c_B B⁻¹N)x_N + c_B B⁻¹b = r x_N + c_B B⁻¹b.   (2)

Every important quantity appears in the fully reduced tableau R. We can decide whether the corner is optimal by looking at r = c_N − c_B B⁻¹N in the middle of the bottom row. If any entry in r is negative, the cost can still be reduced: we can make r x_N negative, at the start of equation (2), by increasing a component of x_N. That will be our next step. But if r ≥ 0, the best corner has been found. This is the stopping test, or optimality condition:

8B  The corner is optimal when r = c_N − c_B B⁻¹N ≥ 0. Its cost is c_B B⁻¹b. Negative components of r correspond to edges on which the cost goes down. The entering variable xⱼ corresponds to the most negative component of r.

The components of r are the reduced costs—the cost in c_N to use a free variable, minus what it saves. Computing r is called pricing out the variables. If the direct cost (in c_N) is less than the saving (from reducing the basic variables), then it pays to increase that free variable.

Suppose the most negative reduced cost is rᵢ. Then the ith component of x_N is the entering variable, which increases from zero to a positive value α at the next corner (the end of the edge). As it increases, the components of x_B change, to maintain Ax = b. The first basic component that drops to zero gives the leaving variable, which changes from basic to free. The new corner is feasible, because we still have x ≥ 0. It is basic, because we again have n zero components, and Ax = b still holds.
w
+
w
in this
cost
in
Figure 8.3,
and
x
+
3y. Verify
that
will
—
the
=
enter meet
never
y goes
c
to —ow.
simplex
method
R is optimal.
corner
problem
u should
column you
(a corner). After changing signs
b
=
of
minimizing
b. Whenever
=
is
vector
rearrangement
that
its
step.
optimal.
sign
feasible solution to Ax 0, consider the auxiliary problem
x
after
which
Compute
the cost
P is
that
so
y,
from
to
the next
on
corner
decide
basic
a
the minimum
solution,
show
the cost
—
and
r
and
climbing
are
x
leave?
rearrangement,
therefore,that
Compute
Example 3, change
takes
is 3x + y. With
> 0 and,
r
P.
corner
compute
corner.
Again
that
for and decide
prepare
function in Example 3 is
cost
Then
another .
preceding simplex step,
Example 3, suppose (0, 1,3, 0). Show
the
—
of x1, x2, x3 should enter the basis, and which of x4, x5 should pair of basic variables, and find the cost at the new corner.
Which
391
Simplex Method
The
8.2
Ax
+ wz +---+
w;
b has
=
a
Wm,
nonnegative
zero—withw™* 0.
will be
=
Dbis both basic and x 0, w problem, the corner feasible. Therefore its Phase Lis already set, and the simplex method can proceed in to find the optimal pair x*, w™. If w* 0, then x* is the required corner the original problem. (b) With A=[1 —1] and b [3], write out the auxiliary problem, its Phase I of the feasible set vector x Find the corner 0, w b, and its optimal vector. 3, x1; > x2 > O, and draw a picture of this set. X1 Xp
(a) Show that, for this
new
=
=
=
|
=
=
=
—
.
If
wanted
we
what
to maximize
would
be the
N to make .
Minimize
Verify Then
=
basic and 2x; +
subject
Suppose
is the correct
we
basis matrix
2X1 4x2 + 3x4 + 6x5 —
Starting
from
x
of zero?
How
far
At that
point,
and what
r,
of B to make
to x;
+ x2 >
show
can
what
cx
=
x,
new
—
=> 0), of the column x
free?
xy +
3x2 > 12,
that BE
has
Bu
—xX2 >
x, =
u
0,x
in its kth
= 0.
column.
E~!B™ is its inverse, and
stop,
+ X45
x?
until
X2,
subject
to
6
—
x3
it be increased is the
b and
correctly.
(0, 0, 6, 12), should
=
4,
=
choose
would
rules
for the next
matrix
to minimize
want
on
equation (5), and
in
E~' updates the basis 10.
test
the column
x2,
the inverse
BE
stopping
(with Ax
the cost
of minimize
instead
(all X45
12 x;
or
the
x.
X2,X3,
be increased
equations
force
X4 >
from x3
or
0). its current x4 down
value to zero?
Linear Programming and Game Theory
ter8
P =]
For the matrix
11.
Px
The
x.
=
x
that if this
under
is in the
x
nullspace
of A, then
projection.
c'x =5x, + 4x2 + 8x3 on the plane x; + x) + x3=3, by P, Q, RK,where the triangle is cut off by the requirement
the cost the vertices
testing
show
nullspace stays unchanged
(a) Minimize
12.
A'(AA')~'A,
—
> 0.
(b) Project
c=
step
mum
Elimination
s
solve
can
the
(5, 4, 8) onto that keeps e
Ax
=
A=[1 nullspace of s Pc nonnegative.
—
b, but the four fundamental
1
1], and find the maxi-
showed
subspaces
that
differ-
a
deeper understanding is possible. It is exactly the same for linear programming. will solve a linear program, of the simplex method The mechanics but duality is really of the underlying theory. Introducing the dual problem is an elegant idea, at the center for the applications. We shall explain as much as we time fundamental and at the same ent
and
understand.
The
theory begins
(P) ~ : _ ~~ Primal The dual
problem
dual
=
y is
unknown
the
and
a
row
of Ax > b. In short, the dual
of
given primal problem:
On . Minimize
from
function
is in the cost
c
starts
with
the
subject
cx,
A, b, and
same
c, and
is in the constraint.
b
with
vector
reverses
In the
Ax
> b.
In the
everything.
dual, b and
c
and the feasible
components,
m
> O and
to x
primal,
switched.
are
set
has
The
yA
Oand
to
yA
Now
y > 0:
b forces
c; is the cost of the jth food, then cost is to be minimized. In
the
j contains
exceed
cannot
this
constraint
vitamin
for
the on
a
total
+
+--+
¢,x,
is
=
at least cx
b; of that vitamin.
is the cost
of the diet.
If
That
selling vitamin pills at prices y; > 0. Since food in the amounts equivalent a;;, the druggist’s price for the vitamin That is the in c. < jth constraint yA grocer’s price c;. Working within vitamin b; of each prices, the druggist can sell the required amount of y,b1 + income + ¥nbm yb—to be maximized.
dual, the vitamins
cyx,;
the diet to include
druggist
+++
=
393
The Dual Problem
8.3
The feasible sets for the primal and dual problems look completely different. The first is a subset of Rⁿ, marked out by x ≥ 0 and Ax ≥ b. The second is a subset of Rᵐ, determined by y ≥ 0 and yA ≤ c. The whole theory of linear programming hinges on the relation between primal and dual. Here is the fundamental result:

8D  Duality Theorem  When both problems have feasible vectors, they have optimal vectors x* and y*. The minimum cost cx* equals the maximum income y*b. If optimal vectors do not exist, there are two possibilities: Either both feasible sets are empty, or one is empty and the other problem is unbounded (the maximum is +∞ or the minimum is −∞).

This duality theorem settles the competition between the grocer and the druggist: The result is always a tie. We will find a similar "minimax theorem" in game theory. The customer has no economic reason to prefer vitamins over food, even though the druggist guarantees to match the grocer on every food—and even undercuts on expensive foods (like peanut butter). We will show that expensive foods are kept out of the optimal diet, so the outcome can be (and is) a tie.

This may seem like a total stalemate, but I hope you will not be fooled. The optimal vectors contain the crucial information. In the primal problem, x* tells the purchaser what to buy. In the dual, y* fixes the natural prices (shadow prices) at which the economy should run. Insofar as our linear model reflects the true economy, the essential decisions can be made from x* and y*. Equally important, any food in the optimal diet can be replaced by its vitamin equivalent, with no change in cost: at the optimum, the grocer's price equals the druggist's price.

We want to prove that cx* = y*b. It may be the first proof where the inequality constraints, and not only the equations, must enter. The easier half holds for any feasible vectors:

8E  If x and y are feasible in the primal and dual problems, then yb ≤ cx.

Proof  Since the vectors are feasible, they satisfy Ax ≥ b and yA ≤ c. Because x ≥ 0 and y ≥ 0, those inequalities can be multiplied without reversing (multiplying by negative numbers would reverse them):

    yb ≤ y(Ax) = (yA)x ≤ cx.   (1)

This one-sided inequality is weak duality; the duality theorem says that equality is actually reached. The proof uses the alternative for linear inequalities. The slack variables w = Ax − b turn Ax ≥ b into an equation:

    [A  −I] (x, w) = b,   with (x, w) ≥ 0.

8F  Either Ax ≥ b has a solution with x ≥ 0, or there is a y satisfying

    y[A  −I] ≥ [0  0]   and   yb < 0.

It is this result that leads to the duality theorem.
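A tiny numerical illustration of weak duality (my own sketch, reusing the diet data): any feasible pair satisfies yb ≤ yAx ≤ cx, with equality only at the optimum.

```python
import numpy as np

# Weak duality yb <= yAx <= cx for the diet problem:
# minimize cx with Ax >= b, x >= 0; maximize yb with yA <= c, y >= 0
A = np.array([[1.0, 2.0]])      # one vitamin: peanut butter 1 unit, steak 2
b = np.array([4.0])
c = np.array([2.0, 3.0])

x = np.array([4.0, 1.0])        # feasible: x >= 0 and Ax = 6 >= 4
y = np.array([1.0])             # feasible: y >= 0 and yA = (1, 2) <= (2, 3)

print(y @ b, y @ A @ x, c @ x)            # 4.0 <= 6.0 <= 11.0
print(1.5 * b[0], c @ np.array([0.0, 2.0]))  # the optimal tie: 6.0 == 6.0
```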
Problem Set 8.3

1. What is the dual of the following problem: Minimize x₁ + x₂, subject to x₁ ≥ 0, x₂ ≥ 0, 2x₁ ≥ 4, x₁ + 3x₂ ≥ 11? Find the solution to both this problem and its dual, and verify that minimum equals maximum.

2. What is the dual of the following problem: Maximize y₂, subject to y₁ ≥ 0, y₂ ≥ 0, y₁ + y₂ ≤ 3? Solve both this problem and its dual.

3. Suppose A is the identity matrix (so that m = n), and the vectors b and c are nonnegative. Explain why x* = b is optimal in the minimum problem, find y* in the maximum problem, and verify that the two values are the same. If the first component of b is negative, what are x* and y*?

4. Construct a 1 by 1 example in which Ax ≥ b, x ≥ 0 is unfeasible, and the dual problem is unbounded.

5. Starting with the 2 by 2 matrix A = [1 0; 0 −1], choose b and c so that both of the feasible sets Ax ≥ b, x ≥ 0 and yA ≤ c, y ≥ 0 are empty.

6. If all entries of A, b, and c are positive, show that both the primal and the dual are feasible.

7. Show that x = (1, 1, 1, 0) and y = (1, 1, 0, 1) are feasible in the primal and dual, with the 4 by 4 matrix A and the vectors b and c as given. Then, after computing cx and yb, explain how you know they are optimal.

8. Verify that the vectors in the previous exercise satisfy the complementary slackness conditions in equation (2), and find the one slack inequality in both the primal and the dual.

9. For the given A, b, and c, find the optimal x and y, and verify the complementary slackness conditions (as well as yb = cx).

10. If the primal problem is constrained by equations instead of inequalities—Minimize cx subject to Ax = b and x ≥ 0—then the requirement y ≥ 0 is left out of the dual: Maximize yb subject to yA ≤ c. Show that the one-sided inequality yb ≤ cx still holds. Why was y ≥ 0 needed in equation (1) but not here? This weak duality can be completed to full duality.

11. (a) Without the simplex method, minimize the cost 5x₁ + 3x₂ + 4x₃, subject to x₁ + x₂ + x₃ ≥ 1, x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0.
    (b) What is the shape of the feasible set?
    (c) What is the dual problem, and what is its solution y*?

12. If the primal problem has a unique optimal solution x*, and then c is changed a little, explain why x* still remains the optimal solution.

13. Write the dual of the following problem: Maximize x₁ + x₂ + x₃, subject to 2x₁ + x₂ ≤ 4, x₃ ≤ 6. What are the optimal x* and y* (if they exist!)?

14. Describe the cone of nonnegative combinations of the columns of A. If b lies inside that cone, say b = (3, 2), then Ax = b has a solution x ≥ 0. If b lies outside, say b = (0, 1), what vector y will satisfy the alternative?

15. In three dimensions, can you find a set of six vectors whose cone of nonnegative combinations fills the whole space? What about four vectors?

16. For your example in Problem 14, show directly that exactly one alternative holds: either Ax ≥ b has a solution x ≥ 0, or some y satisfies y[A −I] ≥ [0 0] and yb < 0.
0, yb
0 going with the arrows. flow x6; (dotted line), we maximize return the total flow into the sink. to
the main
their solution =
cut capacity 2+34+1
A
Figure
3.5
A 6-node
network
with
edge capacities: the maximal
flow
problem.
equals
node
each
into
is still to be heard
constraint
Another
the
flow
That
out.
It is the “conservation
from.
is Kirchhoff’s
law:
current
Sox —S0xj,=0
Currentlaw
for
law,” that the flow
j=1,2,...,6.
(1)
:
7
|
j from earlier nodes 7. The flows xj, leave node j to later 0, where A is a node— equation (1) can be written as Ax matrix (the transpose of Section 2.5). A has a row for every node and a for every edge:
node x;; enter in k. The balance
flows
The nodes
edge incidence +1, —1 column
=
fr
—1|
|
J
1
—]
1
2
—]
1
A=
Incidence
|
1
3
_]
Matrix
4
1
—]
—|
5
1
—j]
—]
12
Trial and error is convincing. A flow of 2 can go on the path 1-2-4-6-1. A flow of 3 can go along 1-3-5-6-1. An additional flow of 1 can take the lowest path 1-3-4-6-1. The total flow is 6, and no more is possible.

How do you prove that the maximal flow is 6 and not 7? Trial and error is convincing, but mathematics is conclusive. The key is to find a cut in the network, across which all capacities are filled. That cut separates nodes 5 and 6 from the others. The edges that go forward across the cut have total capacity 2 + 3 + 1 = 6—and no more can get across! Weak duality says that every cut gives a bound to the total flow, and full duality says that the cut of smallest capacity (the minimal cut) is filled by the maximal flow.

8K  Max flow–min cut theorem. The maximal flow in a network equals the total capacity across the minimal cut.

A "cut" splits the nodes into two groups S and T (source in S and sink in T). Its capacity is the sum of the capacities of all edges crossing the cut (from S to T). Several cuts might have the same capacity. Certainly the total flow can never be greater than the capacity across the minimal cut. The problem, here and in all of duality, is to show that equality is achieved by the right flow and the right cut.

Proof that max flow = min cut  Suppose a flow is maximal. Some nodes might still be reached from the source by additional flow, without exceeding any capacities. Those nodes go with the source into the set S. The sink must lie in the remaining set T, or it could have received more flow! Every edge across the cut must be filled, or extra flow could have gone further forward to a node in T. Thus the maximal flow does fill this cut to capacity, and equality has been achieved.

This suggests a way to construct the maximal flow: Check whether any path has unused capacity. If so, add flow along that "augmenting path." Then compute the remaining capacities and decide whether the sink is cut off from the source, or additional flow is possible. If you label each node in S by the previous node that flow could come from, you can backtrack to find the path for extra flow.
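Here is a compact code sketch of that construction: breadth-first labeling, then backtracking along the augmenting path. The capacities below are stand-ins (the figure's exact numbers are not reproduced here), chosen so that the maximal flow is again 6.

```python
from collections import deque

def max_flow(cap, s, t):
    """Augmenting paths by BFS labeling, as described in the text."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # Label each reachable node with its predecessor (the set S)
        prev = {s: None}
        q = deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v in range(n):
                if v not in prev and cap[u][v] - flow[u][v] > 0:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return total          # sink cut off from source: flow is maximal
        # Backtrack to find the augmenting path and its unused capacity
        path, v = [], t
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        add = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += add
            flow[v][u] -= add     # backward flow may cancel existing flow
        total += add

# Hypothetical capacities on a 6-node network (nodes 0..5, source 0, sink 5)
cap = [[0, 2, 5, 0, 0, 0],
       [0, 0, 0, 3, 0, 0],
       [0, 0, 0, 1, 3, 0],
       [0, 0, 0, 0, 0, 4],
       [0, 0, 0, 0, 0, 3],
       [0, 0, 0, 0, 0, 0]]
print(max_flow(cap, 0, 5))   # 6
```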
The Marriage Problem

Suppose we have four women and four men. Some of those sixteen couples are compatible, others regrettably are not. When is it possible to find a complete matching, with everyone married? If linear algebra can work in 20-dimensional space, it can certainly handle the trivial problem of marriage.

There are two ways to present the problem—in a matrix or on a graph. The matrix contains aᵢⱼ = 0 if the ith woman and jth man are not compatible, and aᵢⱼ = 1 if they are willing to try. Thus row i gives the choices of the ith woman, and column j corresponds to the jth man:

    Compatibility matrix    A = [the 4 by 4 matrix of 0s and 1s; in this example it has six 1s, with a₁₄ = 0 and with zeros filling rows 3, 4 of columns 1, 2, 3].

The graph in Figure 8.6 shows the same problem. Ignoring the source s and sink t, it has four women on the left and four men on the right. The edges correspond to the 1s in the matrix, and the capacities are 1 marriage. There is no edge between the first woman and fourth man, because the matrix has a₁₄ = 0.

Figure 8.6  Two marriages on the left, three (maximum) on the right. The third is created by adding two new marriages and one divorce (backward flow).

It might seem that a node already matched can't be reached by more flow—but that is not so! The extra flow on the right of Figure 8.6 goes backward to cancel an existing marriage. This extra flow makes 3 marriages, which is maximal. The minimal cut is crossed by 3 edges.

A complete matching (if it is possible) is a set of four 1s in the matrix. They would come from four different rows and four different columns, since bigamy is not allowed. It is like finding a permutation within the nonzero entries of A. On the graph, this means four edges with no nodes in common. The maximal flow is less than 4 exactly when a complete matching is impossible.

In our example the maximal flow is 3, not 4. We can take two marriages to start (and several other pairs are allowed), but there is no way to reach four. The minimal cut separates the two women at the bottom from the three men at the top: those women have only one man left to choose—not enough. The capacity across the cut is only 3.

That impossibility can be expressed in several other ways:

1. (For Chess) It is impossible to put four rooks on squares with 1s in the matrix, so that no rook can take any other rook.

2. (For Marriage Matrices) The 1s in the matrix can be covered by three horizontal and vertical lines. The maximum number of marriages equals the minimum number of covering lines.

3. (For Linear Algebra) Every matrix with the same zeros as A is singular. Remember that the determinant is a sum of 4! = 24 terms. Each term uses all four rows and columns. The zeros in A make all 24 terms zero.

A block of zeros is preventing a complete matching! The 2 by 3 submatrix in rows 3, 4 and columns 1, 2, 3 of A is entirely zero. The general rule for an n by n matrix is that a p by q block of zeros prevents a matching if p + q > n. Here women 3, 4 could marry only the man 4. If p women are allowed to marry only n − q men, and p > n − q (which is the same as a zero block with p + q > n), then a complete matching is impossible.

The mathematical problem is to prove the converse: If every set of p women likes at least p men, a complete matching is possible. That is Hall's condition. No block of zeros is too large. Each woman must like at least one man, each two women between them must like at least two men, and so on, up to p = n.

The proof is simplest if the capacities are n, instead of 1, on all edges across the middle. The capacities out of the source and into the sink are still 1. If the maximal flow is n, all those edges from the source and into the sink are filled—and the flow produces n marriages. When a complete matching is impossible, and the maximal flow is below n, some cut must be responsible. That cut will have capacity below n, so no middle edges cross it. Suppose p nodes on the left and r nodes on the right are in the set S with the source. The capacity across that cut is n − p from the source to the remaining women, and r from these men to the sink. Since the cut capacity is below n, the p women like only the r men and no others. But the capacity n − p + r is below n exactly when p > r, and then Hall's condition fails.
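The same augmenting-path idea settles matching questions directly. A minimal sketch (mine): the recursive search below moves existing partners when necessary, exactly the "divorce" in Figure 8.6. The compatibility matrix is hypothetical, built so that women 3 and 4 both like only man 4; the zero block forces a maximum of 3.

```python
def max_matching(compat):
    """Bipartite matching by augmenting paths (a tiny max-flow in disguise)."""
    n = len(compat)
    match_of_man = [None] * n        # match_of_man[j] = woman married to man j

    def try_woman(i, seen):
        for j in range(n):
            if compat[i][j] and j not in seen:
                seen.add(j)
                # Man j is free, or his partner can be moved (a "divorce")
                if match_of_man[j] is None or try_woman(match_of_man[j], seen):
                    match_of_man[j] = i
                    return True
        return False

    return sum(try_woman(i, set()) for i in range(n))

# Hypothetical compatibility matrix: rows 3 and 4 like only man 4
A = [[1, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1]]
print(max_matching(A))   # 3 -- the zero block (p = 2, q = 3, p + q > 4) blocks 4
```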
Spanning Trees and the Greedy Algorithm

A fundamental network model is the shortest path problem—in which the edges have lengths instead of capacities. We want the shortest path from source to sink. If the edges are telephone lines and the lengths are delay times, we are finding the quickest route for a call. If the nodes are computers, we are looking for the perfect message-passing protocol.

A closely related problem finds the shortest spanning tree—a set of n − 1 edges connecting all the nodes of the network. Instead of getting quickly between a source and a sink, we are now minimizing the cost of connecting all the nodes. There are no loops, because the cost to close a loop is unnecessary. A spanning tree connects the nodes without loops, and we want the shortest one. Here is one possible algorithm:

1. Start from any node s, and repeat the following step: Add the shortest edge that connects the current tree to a new node.

In Figure 8.7, the edges would come in the order 1, 2, 7, 4, 3, 6. The last step skips the edge of length 5, which closes a loop. The total length is 23—but is it minimal? We accepted the edge of length 7 very early, and the second algorithm holds out longer.

Figure 8.7  A network and a shortest spanning tree of length 23.
2. Accept edges in increasing order of length, rejecting edges that complete a loop.

Now the edges come in the order 1, 2, 3, 4, 6 (again rejecting 5), and 7. They are the same edges—although that will not always happen. Their total length is the same—and that does always happen. The spanning tree problem is exceptional, because it can be solved in one pass.
In the language of linear programming, we are finding the optimal corner first. The spanning tree problem is being solved like back-substitution, with no false steps. This general approach is called the greedy algorithm. Here is another greedy idea:

3. Build trees from all n nodes, by repeating the following step: Select any tree and add the minimum-length edge going out from that tree.

The steps depend on the selection order of the trees. To stay with the same tree is algorithm 1; to sweep through all the trees in turn is essentially algorithm 2. This sounds easy, but for a large problem the data structure becomes critical. With a thousand nodes, there might be nearly a million edges, and you don't want to go through that list a thousand times.
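One common data structure for algorithm 2 is a union-find structure, so that "rejecting edges that complete a loop" costs almost nothing per edge. A sketch (not from the text; the node and edge data are whatever the caller supplies):

    # Sketch of algorithm 2: accept edges in increasing length,
    # rejecting any edge whose two endpoints are already connected.
    def spanning_tree_length(n, edges):   # edges: (length, i, j), nodes 0..n-1
        parent = list(range(n))
        def root(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path compression
                i = parent[i]
            return i
        total = 0
        for w, i, j in sorted(edges):           # increasing order of length
            ri, rj = root(i), root(j)
            if ri != rj:                        # accepting closes no loop
                parent[ri] = rj
                total += w
        return total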
Further Network Models

There are important problems related to matching that are almost as easy:

1. The optimal assignment problem: a_ij measures the value of applicant i in job j. Assign jobs to maximize the total value—the sum of the a_ij on assigned jobs. (If all a_ij are 0 or 1, this is the marriage problem; a solver sketch follows this list.)
2. The transportation problem: Given supplies at n points and demands at n markets, choose shipments x_ij from suppliers to markets that minimize the total cost Σ C_ij x_ij. (If all supplies and demands are 1, this is the optimal assignment problem—sending one person to each job.)
3. Minimum cost flow: Now the routes have capacities c_ij as well as costs C_ij, mixing the maximal flow problem with the transportation problem. What is the cheapest flow, subject to capacity constraints?
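For the optimal assignment problem, SciPy ships a ready-made solver. A sketch, with a made-up value matrix (the function name is SciPy's, not the book's):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    value = np.array([[3, 1, 2],          # value[i, j]: applicant i in job j
                      [2, 4, 6],
                      [5, 2, 1]])
    rows, cols = linear_sum_assignment(value, maximize=True)
    print(list(zip(rows, cols)))          # the optimal assignment
    print(value[rows, cols].sum())        # the maximized total value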
A fascinating part of this subject is the development of algorithms. Instead of a theoretical proof of duality, we use breadth-first search or depth-first search to find the optimal assignment or the cheapest flow. It is like the simplex method, in starting from a feasible flow (a corner) and adding a new flow (to move to the next corner). The algorithms are special because network problems involve incidence matrices.

The technique of dynamic programming rests on a simple idea: If a path from source to sink is optimal, then each part of the path must be optimal. The solution is built backwards from the sink, with a multistage decision process. At each stage, the distance to the sink is the minimum of a new distance plus an old distance:

    Bellman equation    x–t distance = minimum over y of (x–y + y–t distances).
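The Bellman recursion is easy to run on a small example. A sketch (the nodes, edge lengths, and the backward processing order are invented for illustration; it works as written when the stages feed forward, with no cycles):

    # Sketch: dist[x] is built backwards from the sink t as the
    # minimum of (edge x-y) + dist[y], one stage at a time.
    length = {('s', 'a'): 2, ('s', 'b'): 4,     # hypothetical edge lengths
              ('a', 'b'): 1, ('a', 't'): 7, ('b', 't'): 3}
    order = ['t', 'b', 'a', 's']                # from the sink backwards
    dist = {'t': 0.0}
    for x in order[1:]:
        dist[x] = min(w + dist[y] for (u, y), w in length.items() if u == x)
    print(dist['s'])                            # 6.0: s -> a -> b -> t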
I wish there were space for more about networks. They are simple but beautiful.

Problem Set 8.4

1. In Figure 8.5, add 3 to every capacity. Find by inspection the maximal flow and minimal cut.
2. Find a maximal flow and minimal cut for the following network: (network figure)
3. If you could increase the capacity of any one pipe in the network above, which change would produce the largest increase in the maximal flow?
4. Draw a 5-node network with capacity |i − j| between node i and node j. Find the largest possible flow from node 1 to node 4.
5. In a graph, the maximum number of paths from s to t with no common edges equals the minimum number of edges whose removal disconnects s from t. Relate this to the max flow–min cut theorem.
6. Find a maximal set of marriages (a complete matching, if possible) for

        [1 1 0 0 0]          [0 0 1 0 1]
        [1 1 0 1 1]          [1 1 0 1 0]
    A = [0 1 1 0 1]   and B = [0 0 1 1 0]
        [1 0 0 0 0]          [0 0 1 0 0]
        [0 1 0 0 0]          [1 0 1 0 1]

   Sketch the network for B, with heavier lines on the edges in your matching.
7. For the matrix A in Problem 6, which rows violate Hall's condition—by having all their 1s in too few columns? Which p by q submatrix of zeros has p + q > n?
8. How many lines (horizontal and vertical) are needed to cover all the 1s in A in Problem 6? For B? For any matrix, explain why this weak duality is true: If k marriages are possible, then it takes at least k lines to cover all the 1s.
9. (a) Suppose every row and every column contains exactly two 1s. Prove that a complete matching is possible. (Show that the 1s cannot be covered by less than n lines.)
   (b) Find an example with two or more 1s in each row and column, for which a complete matching is impossible.
10. If a 7 by 7 matrix has 15 1s, prove that it allows at least 3 marriages.
11. For infinite sets, a complete matching may be impossible even if Hall's condition is passed. If the first row is all 1s and then every a_{i,i−1} = 1, show that any p rows have 1s in at least p columns—and yet there is no complete matching.
12. If Figure 8.5 shows lengths instead of capacities, find the shortest path from s to t, and a minimal spanning tree.
13. Apply algorithms 1 and 2 to find a shortest spanning tree for the network of Problem 2.
14. (a) Why does the greedy algorithm work for the spanning tree problem?
    (b) Show by example that the greedy algorithm could fail to find the shortest path from s to t, by starting with the shortest edge.
15. If A is the 5 by 5 matrix with 1s just above and just below the main diagonal, find
    (a) a set of rows with 1s in too few columns.
    (b) a set of columns with 1s in too few rows.
    (c) a p by q submatrix of zeros with p + q > 5.
    (d) four lines that cover all the 1s.
(a) a set of rows with 1s in too few columns. (b) aset of columns with 1s in too few rows. of zeros with p+ q (c) a p by g submatrix all the 1s. (d) four lines that cover 16.
The ence
maximal between
program.
flow
problem capacities and
has
slack
flows.
>
variables
State
the
5.
w;;
=
problem
c;; of
—
x;; for
Figure
8.5
differ-
the as
a
linear
8.5 Game Theory

The best way to explain a two-person zero-sum game is to give an example. It has two players X and Y, and the rules are the same for every turn: X holds up one hand or two, and so does Y. If they make the same decision, Y wins $10. If they make opposite decisions, X wins $10 for one hand and $20 for two:

    Payoff matrix            one hand by X   two hands by X
    (payments to X)
      one hand by Y              −10              20
      two hands by Y              10             −10
1
=
can
opponent
you lose” is what to expect. X
strategy,
y, and y)
hand
one
put up
can
with
frequency
and both
x;
hands
with
is random. this decision At every turn Similarly Y can pick 1 y,. None of these probabilitiesshould be 0 or 1; otherwise
x,.
—
independent of the previous turns. If there is some take advantage of it. Even the strategy “stay with obviously fatal. After enough plays, your opponent
be
must
turn
the
exactly
mixed
a
Y
stick every time, Y will copy him and win. Similarly Y cannot X will do the opposite. Both players must a mixed use strategy,
at every
historical
SIs
A
-
thing
same
and the choice
a
alee
X _ (Payments: x)
If
be
=
a
=
—
Y would be losing $20 too often. (He the opponent adjusts and wins. If they equal would lose $20 a quarter of the time, $10 another quarter of the time, and win $10 half than necessary.) But the more Y moves loss of $2.50. This is more the time—an average
ee
toward
a
The fundamental problem is to find the best mixed strategies. Can X choose probabilities x1 and x2 that present Y with no reason to move (and vice versa)? Then the average payoff will have reached a saddle point: a maximum as far as X is concerned, and a minimum as far as Y is concerned. To find such a saddle point is to solve the game.

X is combining the two columns with weights x1 and 1 − x1 to produce a new "mixed" column. Weights 3/5 and 2/5 would produce this column:

    Mixed column    (3/5)[−10]  +  (2/5)[ 20]  =  [2]
                         [ 10]         [−10]      [2]
Against this mixed strategy, Y will always lose $2. This does not mean that all strategies are optimal for Y! If Y is lazy and stays with one hand, X will change and start winning $20. Then Y will change, and then X again. Finally, since we assume they are both intelligent, they settle down to optimal mixtures. Y will combine the rows with weights y1 and 1 − y1, trying to produce a new row which is as small as possible:

    Mixed row    y1[−10  20] + (1 − y1)[10  −10] = [10 − 20y1   −10 + 30y1].

The right mixture makes the two components equal, at y1 = 2/5. Then both components equal 2; the mixed row becomes [2  2]. With this strategy Y cannot lose more than $2. Y has minimized the maximum loss, and that minimax agrees with the maximin found by X. The value of the game is minimax = maximin = $2.
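These two mixtures are easy to check numerically. A short NumPy sketch (the strategies are the ones just computed; the code itself is not from the text):

    import numpy as np
    A = np.array([[-10, 20], [10, -10]])  # payments to X
    x = np.array([3/5, 2/5])              # X's mixed strategy on columns
    y = np.array([2/5, 3/5])              # Y's mixed strategy on rows
    print(A @ x)                          # [2. 2.]: Y loses $2 whatever row he picks
    print(y @ A)                          # [2. 2.]: X wins $2 whatever column he picks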
The optimal mixtures might not always have equal entries! Suppose X is allowed a third strategy of holding up three hands, to win $60 when Y puts up one hand and $80 when Y puts up two. The payoff matrix becomes

    A = [−10   20   60]
        [ 10  −10   80].

X will choose the three-hand strategy (column 3) every time, and win at least $60. At the same time, Y always chooses the first row; the maximum loss is $60. We still have maximin = minimax = $60, but the saddle point is over in the corner.

In Y's optimal mixture of rows, which was purely row 1, $60 appears only in the column actually used by X. In X's optimal mixture of columns, which was column 3, $60 appears only in the row that enters Y's best strategy. This rule corresponds exactly to the complementary slackness condition of linear programming.
Matrix Games

The most general "m by n matrix game" is exactly like our example. X has n possible moves (the columns of A). Y chooses from the m rows. The entry a_ij is the payment when X chooses column j and Y chooses row i. A negative entry means a payment to Y. This is a zero-sum game: whatever one player loses, the other wins.

X is free to choose any mixed strategy x = (x1, …, xn). These xi give the frequencies for the n columns, and they add to 1. At every turn X uses a random device to produce strategy i with frequency xi. Y chooses a vector y = (y1, …, ym), also with yi ≥ 0 and Σ yi = 1, which gives the frequencies for selecting rows.

A single play of the game is random. On the average, the combination of column j for X and row i for Y will turn up with probability xj yi. When it does come up, the payoff is a_ij. The expected payoff from this combination is a_ij xj yi, and the total expected payoff from each play of the game is ΣΣ a_ij xj yi = yAx:

    yAx = (y1, …, ym) [a11 … a1n; … ; am1 … amn] (x1, …, xn)ᵀ = a11 x1 y1 + … + amn xn ym.

It is this payoff yAx that X wants to maximize and Y wants to minimize.
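The expected payoff can be seen in a simulation: play many random turns and average the results. A sketch (strategies from the 2 by 2 example above; not part of the text):

    import numpy as np
    rng = np.random.default_rng(0)
    A = np.array([[-10, 20], [10, -10]])
    x, y = [3/5, 2/5], [2/5, 3/5]
    cols = rng.choice(2, size=100_000, p=x)   # X's random columns
    rows = rng.choice(2, size=100_000, p=y)   # Y's random rows
    print(A[rows, cols].mean())               # close to yAx = 2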
Suppose A is the n by n identity matrix, A = I. The expected payoff becomes yIx = x1 y1 + … + xn yn. X is hoping to hit on the same choice as Y, to win a_ii = $1. Y is trying to evade X, to pay $0. If X chooses any column more often than another, Y can escape more often. The optimal mixture is x* = (1/n, 1/n, …, 1/n). Similarly Y cannot overemphasize any row—the optimal mixture is y* = (1/n, 1/n, …, 1/n). The probability that both will choose strategy i is (1/n)², and the sum over i is the expected payoff to X. The total value of the game is n times (1/n)², or 1/n:

    y*Ix* = (1/n, …, 1/n) I (1/n, …, 1/n)ᵀ = n(1/n)² = 1/n.

As n increases, Y has a better chance to escape. The value 1/n goes down.
matrix, the game fair. A skew-symmetric andi X by Y Then a choice of strategyj by for Y (because by Y andi by X wins the same amount and the expected payoff must be the same, x* and make
J did not symmetric matrix A a completely fair game. —A, means =
The
tet
f=
m
|
1? ~)
1? (-})
1/n|
a
[1/n]
|
1
anda choice of j a;; for X, y* —a;;).The optimal strategies 0. The value of the game, when be y*Ax*
A’
=
—A, is
=
the strategy
But
zero.
is still to be found.
    Fair game    A = [ 0  −1  −1]
                     [ 1   0  −1]
                     [ 1   1   0].

In words, X and Y both choose a number between 1 and 3. The smaller choice wins $1. (If X chooses 2 and Y chooses 3, the payoff is a32 = $1; if they choose the same number, we are on the diagonal and nobody wins.) No good strategy involves 2 or 3. The pure strategies x* = y* = (1, 0, 0) are optimal—both players choose 1 every time. The value is a11 = 0.

The matrix that leaves all decisions unchanged has equal entries, say α. This simply means that X wins an additional amount α at every turn. The value of the game is increased by α, but there is no reason to change x* and y*.

The Minimax Theorem

Put yourself in the place of X, who chooses the mixed strategy x = (x1, …, xn). Y will eventually recognize that strategy and choose y to minimize the payment yAx. An intelligent player X will select x* to maximize this minimum:

    X wins at least    min over y of yAx* = max over x of (min over y of yAx).    (1)

Player Y does the opposite. For any chosen strategy y, X will maximize yAx. Therefore Y will choose the mixture y* that minimizes this maximum:

    Y loses no more than    max over x of y*Ax = min over y of (max over x of yAx).    (2)

I hope you see what the key result will be, if it is true. We want the amount in equation (1) that X is guaranteed to win to equal the amount in equation (2) that Y cannot lose more than. Then the game will be solved: X can only lose by moving from x*, and Y can only lose by moving from y*. The existence of this saddle point was proved by von Neumann:

    Minimax theorem    max over x of min over y of yAx = min over y of max over x of yAx = value of the game.
For us, the most striking thing about the proof is that it uses the same mathematics as linear programming: the strategies are chosen from the "feasible set" of probability vectors, xi ≥ 0, Σ xi = 1 and yi ≥ 0, Σ yi = 1. What is amazing is that even von Neumann did not immediately recognize the two theories as the same. (He proved the minimax theorem in 1928, linear programming began before 1947, and Gale, Kuhn, and Tucker published the first proof of duality in 1951—based on von Neumann's notes!) We are reversing history by deducing the minimax theorem from duality.

Briefly, the minimax theorem can be proved as follows. Let b be the column vector of m 1s, and c be the row vector of n 1s. These linear programs are dual:

    (P)  minimize cx subject to Ax ≥ b, x ≥ 0        (D)  maximize yb subject to yA ≤ c, y ≥ 0.

The main point is that the optimal x* and y* give maximin = minimax = 1/S, where S = cx* = y*b. Division by S changes the sums to 1—and the resulting mixed strategies x*/S and y*/S are optimal. For any other strategies x and y, Ax* ≥ b implies yAx* ≥ yb = 1, and y*A ≤ c implies y*Ax ≤ cx = 1.
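This duality argument translates directly into code: solve (P) with a linear programming routine, and S gives the value 1/S. A sketch using scipy.optimize.linprog; the positive shift is an assumption added here so that S > 0 (as noted above, adding a constant to every entry changes the value but not the strategies):

    import numpy as np
    from scipy.optimize import linprog

    def game_value(A):
        A = np.asarray(A, dtype=float)
        shift = 1 - A.min()               # make every entry positive
        B = A + shift
        # (P): minimize cx subject to Bx >= b, x >= 0   (b, c are all-1s vectors)
        res = linprog(c=np.ones(B.shape[1]), A_ub=-B, b_ub=-np.ones(B.shape[0]))
        S = res.fun
        return 1 / S - shift, res.x / S   # value of the game, optimal mixture x*

    print(game_value([[-10, 20], [10, -10]]))   # value 2.0, x* near (0.6, 0.4)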
Appendix A  Intersection, Sum, and Product of Spaces

    Both directions    A2D = (A ⊗ I) + (I ⊗ A) = [ A+2I   −I     0  ]
                                                 [  −I   A+2I   −I  ]
                                                 [   0    −I   A+2I ].

The sum (A ⊗ I) + (I ⊗ A) is the 9 by 9 matrix for Laplace's five-point difference equation (Section 1.7 was for 1D, and Section 7.4 mentioned 2D). Away from the boundary, row 5 of this 9 by 9 matrix shows all five nonzeros from the five-point molecule:

    [ 0  −1   0]
    [−1   4  −1]
    [ 0  −1   0].

(The Fourier matrix in 2D)  The one-dimensional Fourier matrix F is the most important complex matrix in the world. The Fast Fourier Transform in Section 3.5 is a quick way to multiply by F. So the FFT transforms the "time domain" to the "frequency domain" for a 1D audio signal. For 2D images we need the two-dimensional transform F2D = F ⊗ F: transform along each row, then down each column.

The image is a two-dimensional array of pixel values. It is transformed by F2D into a two-dimensional array of Fourier coefficients. That array can be compressed and transmitted and stored. Then the inverse transform brings us back from Fourier coefficients to pixel values. We need the inverse rule for Kronecker products:

    The inverse of the matrix A ⊗ B is A⁻¹ ⊗ B⁻¹.

The FFT also speeds up the 2D inverse transform! We just invert in one direction, followed by the other direction. We are adding Σ Σ c_kl e^{ikx} e^{ily} over k and then l.
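The row-then-column description is exactly how a 2D FFT can be computed. A NumPy sketch on a made-up 8 by 8 "image" (not part of the text):

    import numpy as np
    rng = np.random.default_rng(2)
    pixels = rng.standard_normal((8, 8))        # a made-up 8 x 8 image
    rows_done = np.fft.fft(pixels, axis=1)      # transform along each row
    both_done = np.fft.fft(rows_done, axis=0)   # then down each column
    print(np.allclose(both_done, np.fft.fft2(pixels)))   # True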
The Laplace difference matrix A2D = (A ⊗ I) + (I ⊗ A) has no simple inverse formula. That is why the equation A2D u = b has been studied so carefully. One of the fastest methods is to diagonalize A2D by using its eigenvector matrix (which is the Fourier sine matrix S ⊗ S, very similar to F2D). The eigenvalues of A2D come immediately from the eigenvalues of A1D:

    The n² eigenvalues of (A ⊗ I) + (I ⊗ B) are all the sums λi(A) + λj(B).
    The n² eigenvalues of A ⊗ B are all the products λi(A) λj(B).

If A and B are n by n, the determinant of A ⊗ B (the product of its eigenvalues) is (det A)ⁿ (det B)ⁿ. The trace of A ⊗ B is (trace A)(trace B). This appendix illustrates both "pure linear algebra" and its crucial applications!
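These eigenvalue rules are easy to confirm numerically with numpy.kron on small random matrices (a sketch, not part of the text):

    import numpy as np
    rng = np.random.default_rng(1)
    n = 3
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    I = np.eye(n)
    lamA, lamB = np.linalg.eigvals(A), np.linalg.eigvals(B)
    sums = np.sort_complex((lamA[:, None] + lamB[None, :]).ravel())
    prods = np.sort_complex((lamA[:, None] * lamB[None, :]).ravel())
    lam_sum = np.sort_complex(np.linalg.eigvals(np.kron(A, I) + np.kron(I, B)))
    lam_prod = np.sort_complex(np.linalg.eigvals(np.kron(A, B)))
    print(np.allclose(lam_sum, sums), np.allclose(lam_prod, prods))   # True True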
Problem Set A

1. Suppose S and T are subspaces of R¹³, with dim S = 7 and dim T = 8.
   (a) What is the largest possible dimension of S ∩ T?
   (b) What is the smallest possible dimension of S ∩ T?
   (c) What is the smallest possible dimension of S + T?
   (d) What is the largest possible dimension of S + T?
2. What are the intersections of the following pairs of subspaces?
   (a) The x-y plane and the y-z plane in R³.
   (b) The line through (1, 1, 1) and the plane through (1, 0, 0) and (0, 1, 1).
   (c) The zero vector and the whole space R³.
   (d) The plane S perpendicular to (1, 1, 0) and perpendicular to (0, 1, 1) in R³.
   What are the sums of those pairs of subspaces?
3. Within the space of all 4 by 4 matrices, let V be the subspace of tridiagonal matrices and W the subspace of upper triangular matrices. Describe the subspace V + W, whose members are the upper Hessenberg matrices. What is V ∩ W? Verify formula (8).
4. If V ∩ W contains only the zero vector, then equation (8) becomes dim(V + W) = dim V + dim W. Check this when V is the row space of A and W is the nullspace of A.
5. Give an example in R³ for which V ∩ W contains only the zero vector, but V is not orthogonal to W.
6. If V ∩ W = {0}, then V + W is called the direct sum V ⊕ W. If V is spanned by (1, 1, 1) and (1, 0, 1), choose a subspace W so that V ⊕ W = R³. Explain why any vector x in the direct sum V ⊕ W can be written in one and only one way as x = v + w (with v in V and w in W).
7. Find a basis for the sum V + W of the space V spanned by v1 = (1, 0, 1, 0), v2 = (0, 1, 0, 1) and the space W spanned by w1 = (1, 1, 0, 0), w2 = (0, 0, 1, 1). Find also the dimension of V ∩ W and a basis for it.
8. Prove from equation (8) that rank(A + B) ≤ rank(A) + rank(B).
9. The intersection C(A) ∩ C(B) matches the nullspace of [A B]. Each y = Ax1 = Bx2 in the column spaces of both A and B matches x = (x1, −x2) in the nullspace, because [A B]x = Ax1 − Bx2 = 0. Check that y = (6, 3, 6) matches x = (1, 1, −2, −3), and find the intersection C(A) ∩ C(B), for

        [1 5]            [0 2]
    A = [3 0]   and  B = [0 1].
        [2 4]            [3 0]

10. Multiply A⁻¹ ⊗ B⁻¹ times A ⊗ B to get AA⁻¹ ⊗ BB⁻¹ = I ⊗ I.
11. What is the 4 by 4 Fourier matrix F2D = F ⊗ F, for the 2 by 2 Fourier matrix F?
12. Suppose Ax = λ(A)x and By = λ(B)y. Form a long column vector z with n² components: x1 y, then x2 y, and eventually xn y. Show that z is an eigenvector: (A ⊗ I)z = λ(A)z and (A ⊗ B)z = λ(A)λ(B)z.
13. What would be the "three-dimensional" seven-point Laplace matrix for −uxx − uyy − uzz = f? It is built from Kronecker products using I and A1D.
Appendix B  The Jordan Form

Given a square matrix A, we want to choose M so that M⁻¹AM is as nearly diagonal as possible. In the simplest case, A has a complete set of eigenvectors and they become the columns of M—this M is otherwise known as S. The Jordan form is then a diagonal matrix, constructed entirely from 1 by 1 blocks Ji = λi, and the goal of a diagonal form is completely achieved. In the more general and more difficult case, some eigenvectors are missing and a diagonal form is impossible. That case is now our main concern.

We repeat the theorem that is to be proved:

    If a matrix A has s linearly independent eigenvectors, then it is similar to a matrix that is in Jordan form, with s square blocks on the diagonal.

An example of such a Jordan matrix is

        [8 1 0 0 0]
        [0 8 0 0 0]   [J1      ]
    J = [0 0 0 1 0] = [   J2   ].
        [0 0 0 0 0]   [      J3]
        [0 0 0 0 0]
The double eigenvalue λ = 8 has only a single eigenvector, in the first coordinate direction e1 = (1, 0, 0, 0, 0); as a result, λ = 8 appears only in a single block J1. The triple eigenvalue λ = 0 has two eigenvectors, e3 and e5, which correspond to the two Jordan blocks J2 and J3. If A had 5 eigenvectors, all blocks would be 1 by 1 and J would be diagonal.

The key question is this: If A is some other 5 by 5 matrix, under what conditions will its Jordan form be this same J? When will there exist an M such that M⁻¹AM = J? As a first requirement, any similar matrix A must share the same eigenvalues 8, 8, 0, 0, 0. But the diagonal matrix with these eigenvalues is not similar to J—and our question really concerns the eigenvectors.

To answer it, we rewrite M⁻¹AM = J in the simpler form AM = MJ:

    A[x1 x2 x3 x4 x5] = [x1 x2 x3 x4 x5] J,   with J as displayed above.

Carrying out the multiplications a column at a time,
    Ax1 = 8x1   and   Ax2 = 8x2 + x1                        (10)
    Ax3 = 0x3   and   Ax4 = 0x4 + x3   and   Ax5 = 0x5.     (11)

Now we can recognize the conditions on A. It must have three genuine eigenvectors, just as J has. The one with λ = 8 will go into the first column of M, exactly as it would have gone into the first column of S: Ax1 = 8x1. The other two, which will be named x3 and x5, go into the third and fifth columns of M: Ax3 = Ax5 = 0. Finally there must be two other special vectors, the generalized eigenvectors x2 and x4. We think of x2 as belonging to a string of vectors, headed by x1 and described by equation (10). In fact, x2 is the only other vector in the string, and the corresponding block J1 is of order 2. Equation (11) describes two different strings, one in which x4 follows x3, and another in which x5 is alone; the blocks J2 and J3 are 2 by 2 and 1 by 1.

The search for the Jordan form of A becomes a search for these strings of vectors, each one headed by an eigenvector:

    For every i,   either   Axi = λi xi   or   Axi = λi xi + x_{i−1}.    (12)

The vectors xi go into the columns of M, and each string produces a single block in J. Essentially, we have to show how these strings can be constructed for every matrix A. Then if the strings match the particular equations (10) and (11), our J will be the Jordan form of A.
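For a specific matrix, a computer algebra system can produce M and J directly. A sketch with SymPy's jordan_form (not the construction the text is about to give; the 2 by 2 example is made up):

    from sympy import Matrix

    # One double eigenvalue 2 with a single eigenvector, so the
    # Jordan form is a single 2 by 2 block.
    A = Matrix([[3, 1], [-1, 1]])
    M, J = A.jordan_form()        # columns of M are the string x1, x2
    assert A == M * J * M.inv()   # AM = MJ, as in equation (12)
    print(J)                      # Matrix([[2, 1], [0, 2]])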
I think that Filippov's idea makes the construction as clear and simple as possible.* It proceeds by mathematical induction, starting from the fact that every 1 by 1 matrix is already in its Jordan form. We may assume that the construction is achieved for all matrices of order less than n—this is the "induction hypothesis"—and then explain the steps for a matrix of order n. There are three steps, and after a general description we apply them to a specific example.

Step 1  If we assume A is singular, we can look only at its column space, whose dimension r is below n. For that smaller space, the induction hypothesis guarantees that a Jordan form is possible—suppose p of its blocks correspond to …

*A. F. Filippov, Moscow Univ. Math. Bull., volume 26 (1971).
Solutions to Selected Exercises

5. Pivots are ratios of determinants, so the tenth pivot is near 10⁻⁷/10⁻⁶: very small.

Determinant sequences: det A = −6; det L = 1 and det U = −6; det A² = 36 and det A⁻¹ = −1/6. The determinants are 56, 144, 320, …

21. det A⁻¹ = (da − bc)/(ad − bc)² = 1/(ad − bc), so (det A)(det A⁻¹) = 1.

23. det(AᵀA) = (det Aᵀ)(det A) = (det A)² ≥ 0; it equals 0 exactly when A is singular.

33. Exchanging the block rows gives the sign (−1)ⁿ: the determinant is (−1)ⁿ(det D)(det C), and the formula det = (det A)(det D) − (det C)(det B) is wrong in general.

Fibonacci determinants: expanding along row 1, the 1, 1 cofactor is F_{n−1}, and the 1, 2 cofactor has a 1 in column 1 with cofactor F_{n−2}; the two sign changes cancel. So the determinants satisfy F_n = F_{n−1} + F_{n−2}, the usual Fibonacci recursion.

Cofactor expansion: det = 4(3) − 4(1) … ; (c) (n − 1)n! (each term n − 1 …).
11. A = RᵀR with R = √A > 0. Then |xᵀAy|² = |xᵀRᵀRy|² = |(Rx)ᵀ(Ry)|² ≤ (by the ordinary Schwarz inequality) (xᵀRᵀRx)(yᵀRᵀRy) = ‖Rx‖²‖Ry‖² = (xᵀAx)(yᵀAy).

13. A = [3 √2; √2 2] has λ = 1 and 4; the axes of the ellipse xᵀAx = 1 point along the eigenvectors, with half-lengths 1/√λ.

15. Negative definite matrices: five equivalent tests.
    (I) xᵀAx < 0 for all nonzero x.
    (II) The eigenvalues of A satisfy λi < 0.
    (III) det A1 < 0, det A2 > 0, det A3 < 0.
    (IV) All pivots (without row exchanges) are negative.
    (V) There is a matrix R with independent columns such that A = −RᵀR.

False (Q must contain the eigenvectors of A); True (same eigenvalues as A); True (Q⁻¹AQ = QᵀAQ is similar to A); True (eigenvalues of e⁻ᴬ are e⁻λ > 0).

17. Start from aij = (row i of Rᵀ)(column j of R). Then det A is bounded by the product of the lengths of all the columns of R.

19. A = [2 −1 0; −1 …