Calculus BLUE Vol I Vectors & Matrices [Vol 1, 3 ed.] 9781944655037



English Pages 433 Year 2019


Table of contents :
BLUE 1 INTRO
BLUE 1 COVER
Title Page
Table of Contents
Instructions
LET’S GO!
SONG 17
BLUE 1 PROLOGUE
PROLOGUE title
CALCULUS!
CHORUS
Multivariate machines
CHORUS
Surfaces in 3-d
Coordinates in n-d
CHORUS
LINEAR ALGEBRA!
Vectors & matrices
BUT SO WHAT?
CASE: machine learning
CASE: statistics
CASE: geometry
CASE: linear systems
CHORUS
SO MUCH MORE!
ONWARD!
Chapter 1 - lines and planes
Chapter 1 – lines & planes
CHORUS
Remember…
FORMULAE: lines
EXAMPLE: lines
CHORUS
FORMULAE: planes
EXAMPLE: planes
CHORUS
Parametrized lines
EXAMPLE: parametrized lines
CHORUS
I wonder…
The BIG IDEA
BONUS! Hyperplanes
BONUS! Hyperplane classifiers
BONUS! Nonlinear classifiers
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 2 - curves surfaces
Chapter 2 – curves & surfaces
CHORUS
Planar curves
CHORUS
Put on your 3-d glasses!
CHORUS
Surfaces in 3-d
EXAMPLE: surface parametrizations
CHORUS
Let’s see some…
EXAMPLE: spheres
EXAMPLE: ellipsoids
EXAMPLE: hyperboloids
EXAMPLE: hyperboloids
EXAMPLE: elliptic paraboloids
EXAMPLE: hyperbolic paraboloids
EXAMPLE: cones
EXAMPLE: cylinders
RELAX!!!
CHORUS
The BIG PICTURE
PROBLEMS
PROBLEMS
Please sign…
Chapter 3 - coordinates
Chapter 3 – coordinates & points
CHORUS
Coordinates in 2-d
Coordinates in 3-d
But what about…
Coordinates in n-d
BUT SO WHAT?
CHORUS
CASE: robot kinematics
CASE: wireless signals
CASE: customer profiles
CHORUS
Distance in n-d
EXAMPLE: distance in 3-d
EXAMPLE: distance in 8-d
EXAMPLE: distance in 49-d
CHORUS
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 4 - vectors
Chapter 4 - vectors
CHORUS
Vectors
DEFINITION: vectors
Vector components
NOTATION: vectors
CHORUS
Vector algebra
Vector visualization
Vector lengths
CHORUS
Vector parametrizations
CHORUS
Basis vectors
EXAMPLE: basis vectors
EXAMPLE: vector length
CAUTION
CHORUS
FORESHADOWING: vector calculus
The BIG PICTURE
PROBLEMS
PROBLEMS
Please sign…
Chapter 5 - dot products
Chapter 5 – dot product
ALGEBRA!
CHORUS
DEFINITION: dot product
CHORUS
Dot product geometry
EXAMPLE: dot products
Orthogonality
CHORUS
Orthogonal projection
+/- dot products & orientation
EXAMPLE: projections
CHORUS
Hyperplane equations
Hyperplane classifiers
Hyperplane classifiers
CHORUS
LOVE & the dot product
Computational love theory
BONUS!
FORESHADOWING: Fourier theory
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 6 - cross products
Chapter 6 – cross product
CHORUS
DEFINITION: cross product
CHORUS
Orthogonality
Use the RIGHT hand!
EXAMPLE: cross product & planes
CHORUS
Cyclic diagram
Basis vector cross products
EXAMPLE: computing cross product
Cross product geometry
Distances to lines & planes
CHORUS
DEFINITION: scalar triple product
That’s how I spell it…
CHORUS
BONUS!
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 7 - intro to vector calculus
Chapter 7 – vector calculus
CHORUS
EXAMPLE: vectors in physics
HEY WAIT!
CHORUS
Calculus & curves
The derivative as a vector
Tangent vectors
EXAMPLE: velocity & acceleration
CHORUS
EXAMPLE: arclength integrals
EXAMPLE: arclength computation
CHORUS
Vector differentiation rules
EXAMPLE: gravitation
CHORUS
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 8 - vector calculus and motion
Chapter 8 – vector calculus & motion
CHORUS
EXAMPLE: integrating for position
CHORUS
Acceleration: tangents & normals
EXAMPLE: unit tangents & unit normals
Decomposition of acceleration
CHORUS
Curvature: definition & computation
The osculating circle
EXAMPLE: curvature in 2-d
CHORUS
The binormal vector & torsion
BONUS!
CHORUS
YOU NEED MORE!
The BIG PICTURE
PROBLEMS
PROBLEMS
Acknowledgements
BLUE 1 INTERLUDE
THE BEGINNING?
PAUSE...
CHORUS
Curves in the nth dimension
Time series data
…in the nth dimension
Dimension of "physical" systems
CHORUS
It's not obvious
Rates of change?
It's the question
ONWARD!
Chapter 9 - matrices
Chapter 9 - matrices
What is the matrix?
DEFINITION: matrix
Matrix sizes
Matrix forms
Zero & identity matrices
CHORUS
CASE: stress matrix
CASE: digital images
CASE: social networks
CASE: genetic correlations
CASE: Markov chains
CASE: darn you autocomplete!
CHORUS
DEFINITION: transpose
Symmetry of transpose
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 10 - matrix algebra
Chapter 10 – matrix algebra
ALGEBRA!
Matrix addition
Scalar multiplication
CHORUS
Matrix multiplication
EXAMPLE: matrix vector multiplication
CHORUS
Rotation matrix J
A complex representation
CHORUS
Noncommutativity
First underpants…
EXAMPLE: transpose & multiplication
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 11 - matrix equations
Chapter 11 – matrix equations
CHORUS
Linear systems
Ax = b
CHORUS
CASE: chemical reactions
CASE: circuit mesh analysis
CASE: network traffic flow
CASE: commodity production
CASE: spring-mass systems
CHORUS
How to solve?
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 12 - row reduction
Chapter 12 – row reduction
Ax = b
How hard is it?
CHORUS
EXAMPLE: back-substitution
CHORUS
Augmented matrix
The BIG IDEA
Row operation 1
Row operation 2
Row operation 3
EXAMPLE: 3-by-3 system
…continued
CHORUS
Tips & tricks: pivoting
The BIG PICTURE
PROBLEMS
PROBLEMS
Acknowledgements
Chapter 13 - inverse matrices
Chapter 13 – inverse matrices
FAKE!!!
DEFINITION: inverse
Matrix inverse
CHORUS
Noninvertibility
EXAMPLE: 2-by-2 inverse
BUT SO WHAT?
Solving equations
CHORUS
Computing inverses
…continued
Complete row reduction
CHORUS
EXAMPLE: the geometric series! Hooray!
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 14 - linear transformations
Chapter 14 – linear transformations
CHORUS
The BIG IDEA
Linear transformations
Basis vectors suffice
Let’s see some…
CASE: rescaling
CASE: projection
CASE: shearing
CASE: rotation
CHORUS
IMPORTANT!
Composition is matrix multiplication
CHORUS
EXAMPLE: Euler angles
EXAMPLE: Euler angles
EXAMPLE: a 4-d transform
BONUS!
The BIG PICTURE
PROBLEMS
PROBLEMS
Acknowledgements
Chapter 15 - coordinate transformations
Chapter 15 – coordinate transforms
CHORUS
Where’s the best deli?
Nonlinear coordinates
CHORUS
Basis vectors
The BIG IDEA
EXAMPLE: changing coordinates
CHORUS
EXAMPLE: change of basis
Coordinate changes
Coordinate changes
Coordinate changes
CHORUS
EXAMPLE: rotations
EXAMPLE: rotations
FORESHADOWING: nonlinear coordinates
The BIG PICTURE
PROBLEMS
PROBLEMS
Please sign…
Acknowledgements
Chapter 16 - algebraic determinants
Chapter 16 – algebraic determinants
DEFINITION: determinant
Determinants: 2-by-2
CHORUS
Determinants: 3-by-3
CHORUS
Scalar triple & cross products
CHORUS
Minor expansion
…continued
EXAMPLE: 3-by-3 determinant
CHORUS
EXAMPLE: 4-by-4 determinant
Sign convention
CHORUS
TEASER
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 17 - geometric determinants
Chapter 17 – geometric determinants
CHORUS
Determinants & area
Determinants & volume
CHORUS
Cross product & determinants
Cross product & areas
CHORUS
Determinants & n-volumes
WATCH THIS!
Determinant = change in n-volume
CHORUS
Composition & determinants
CHORUS
Theorems on determinants
The BIG PICTURE
PROBLEMS
PROBLEMS
Chapter 18 - computing determinants
Chapter 18 – computing determinants
Computing determinants: O(n!) ?
CHORUS
Triangular is better…
The BIG IDEA
CHORUS
Row operation 1
Row operation 2
Row operation 3
SUMMARY
EXAMPLE: row reduction & determinants
EXAMPLE: determinants & row exchange
CHORUS
ALGORITHM: determinants
CHORUS
Transpose & determinant
BONUS!
The BIG PICTURE
PROBLEMS
PROBLEMS
BLUE 1 EPILOGUE
BLUE 1 EPILOGUE
SO MUCH MORE!
CHORUS
LEARN LINEAR ALGEBRA!
EIGENVALUES
DEFINITION: eigenvalues
What eigenvalues mean
CHORUS
VECTOR SPACES
DEFINITION: vector spaces
BUT SO WHAT?
EXAMPLES: vector spaces
CHORUS
SO MUCH MORE!
BLUE 1 FORESHADOW
BLUE 1 FORESHADOW
CHORUS
How many rates?
CHORUS
It’s a matrix!
CHORUS
Chain Rule!
CHORUS
Gradients!
CHORUS
Taylor series!
CHORUS
Max/min!
CHORUS
Lagrange!
The BIG PICTURE
LET’S GO!
BLUE 1 CLOSE
SONG 18
BLUE 1 COVER
About the Author
REFERENCES
Where Credit is Due
Publisher of Beautiful Mathematics


CALCULUS BLUE MULTIVARIABLE VOLUME 1 : VECTORS & MATRICES ROBERT GHRIST 3rd edition, kindle format Copyright © 2019 Robert Ghrist All rights reserved worldwide Agenbyte Press, Jenkintown PA, USA ISBN 978-1-944655-03-7 1st edition © 2016 Robert Ghrist 2nd edition © 2018 Robert Ghrist

prologue chapter 1: lines & planes chapter 2: curves & surfaces chapter 3: coordinates chapter 4: vectors chapter 5: the dot product chapter 6: the cross product chapter 7: intro to vector calculus chapter 8: vector calculus & motion INTERLUDE chapter 9: matrices

chapter 10: matrix algebra chapter 11: matrix equations chapter 12: row reduction chapter 13: inverse matrices chapter 14: linear transformations chapter 15: coordinate changes chapter 16: algebra of determinants chapter 17: geometry of determinants chapter 18: computing determinants epilogue foreshadowing: derivatives

enjoy learning! use your full imagination & read joyfully… this material may seem easy, but it’s not! it takes hard work to learn mathematics well… work with a teacher, tutor, or friends and discuss what you are learning. this text is meant to teach you the big ideas and how they are useful in modern applications; it’s not rigorous, and it’s not comprehensive, but it should inspire you to do things with math… exercises at chapter ends are for you to practice. don’t be too discouraged if some are hard… keep working! keep learning!

the starry floor the watry shore

presses forward with the analysis of multivariate functions

what does that involve?

multivariate functions can have multiple inputs & multiple outputs

we're going to relearn all of calculus for such functions…

multivariate functions are everywhere

these are simple… & low-dimensional

ℝⁿ = { ( x1 , x2 , … , xn-1 , xn ) }

requires several new tools

we're going to sketch a few ideas from

in order to do calculus in dimensions greater than one…

Since much of calculus involves approximating nonlinear functions with linear functions, it makes sense to adopt the mathematics of linear multivariable functions. this subject begins with…

& We will use vectors to encode multiple rates of change

We will use matrices to manipulate vectors & solve equations

why learn about vectors & matrices?

besides being crucial for calculus, these ideas have a life & utility of their own, impacting…

vectors & dot products are useful in machine learning when classifying data

this is most commonly used in fairly high dimensions…

trying to find a meaning within data?

finding a best-fit line or plane is called linear regression & is very useful!

we'll need to learn matrix algebra in order to explore some basic examples of regression

distance, area, volume… hypervolume?

we're going to need matrices & determinants to help compute volumes…

hard work now will reward us later when we want to do high-dimensional integrals…

we will learn how to solve systems of linear equations using vectors & matrices

such systems are ubiquitous in applications, including economics, stoichiometry, traffic control, & mechanics

with a bit of review

You learned about lines & planes in geometry?

let’s review a bit & prepare for what comes next…

y = mx + b

slope-intercept form

y - y0 = m ( x – x0 )

point-slope form

x/a + y/b = 1

intercept form

lines in the plane: what are the equations of the lines with the following features?

slope is change in y over change in x: slope = ( 4 − 0 )/( 1 + 1 ) = 2
( y − 4 ) = 2( x − 1 ) or… ( y − 0 ) = 2( x + 1 ) or… y − 2x = 2

orthogonal slope is the negative reciprocal: slope = −1/2
( y − 5 ) = −( x − 3 )/2 or… y + x/2 = 13/2

( y − 3 ) = −4( x − 2 ) or… y + 4x = 11
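The arithmetic above is easy to check by machine. Here is a minimal Python sketch (the helper name `line_through` is ours, not the text's) recovering the slope-intercept form of the line through (1, 4) and (−1, 0):

```python
# line through two points: slope m = (y2 - y1)/(x2 - x1), intercept b = y1 - m*x1
def line_through(p, q):
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b  # the line y = m*x + b

m, b = line_through((1, 4), (-1, 0))
print(m, b)  # slope 2, intercept 2, i.e. y - 2x = 2
```

This matches the third answer above, y − 2x = 2.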

are the natural analogue of lines in 2-d…

It’s Time for some

for a plane in standard coordinates, the points ( x, y, z ) on the plane satisfy…

nx( x − x0 ) + ny( y − y0 ) + nz( z − z0 ) = 0

x/a + y/b + z/c = 1

parallel planes the plane given by

nx(x - x0) + ny( y - y0) + nz(z - z0) = 0 passes through the point ( x0 , y0 , z0 )

& has a type of “slope” determined by the coefficients ( nx , ny , nz )

EXAMPLE: the plane 2( x − 0 ) − 4( y + 1 ) + 3( z − 5 ) = 0 passes through the point ( 0, −1, 5 ); simplified to 2x − 4y + 3z = 19, it is visibly parallel to the plane 2x − 4y + 3z = 8

are most easily expressed parametrically

Parameterize each coordinate as a function of some auxiliary variable

x(r) = 3r − 5    y(r) = r + 3    z(r) = −4r + 1

x(s) = 3s − 8    y(s) = s + 2    z(s) = −4s + 5

x(t) = 9t − 5    y(t) = 3t + 3    z(t) = −12t + 1

Each of these gives a line passing through the point (-5, 3, 1) & with the same “direction”

Parametrized lines in 3-d

base the line at (4, 2, 5) compute directions as changes in coordinates the coefficients of the equation of a plane give the direction of the orthogonal line

x(t) = ( −1 − 4 ) t + 4 = −5t + 4
y(t) = ( 3 − 2 ) t + 2 = t + 2
z(t) = ( 3 − 5 ) t + 5 = −2t + 5

x(t) = 2t + 4
y(t) = −t + 2
z(t) = 3t + 1
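A parametrized line is just “base point plus parameter times direction.” The sketch below (names are illustrative) builds the line through (4, 2, 5) and (−1, 3, 3) used above, with t = 0 at the first point and t = 1 at the second:

```python
# parametrized line through P and Q: x(t) = P + t*(Q - P)
def param_line(P, Q):
    direction = [q - p for p, q in zip(P, Q)]  # Q - P, componentwise
    def x(t):
        return [p + t * d for p, d in zip(P, direction)]
    return x

line = param_line((4, 2, 5), (-1, 3, 3))
print(line(0), line(1))  # [4, 2, 5] and [-1, 3, 3]
```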

the interesting “linear” objects are lines & planes

what happens in higher dimensions?

“DON’T BLINK!”

if you eventually learn multivariate statistics, you will use hyperplanes to do linear regression. if you eventually take machine learning, you will use hyperplanes in data mining & more!

In a high dimensional space of data… a “Support vector machine” is a hyperplane that optimally separates two types of data points

These data points correspond to pictures of dogs

These data points correspond to pictures of cats

Classify: Given a new point, on which side of the hyperplane does it lie?

It is unusual for data to be perfectly separated by a linear classifier…

Usually, things are more complicated… And nonlinear!

We need more than lines & planes…

the big picture

lines & planes are the basic linear objects in 3-d, & we will use them extensively as we pass to nonlinear mathematics

1

what are the equations of the lines in the plane that satisfy the following: a) slope = −1/2 & passing through the point (−2, 5); b) intersects the x-axis at 4 and the y-axis at −3; c) passing through the points (9, 0) and (4, −3)

2

what are the equations of the planes in 3-d that satisfy the following: a) parallel to the plane 2x + y − z = 5 & passing through the point (0, 1, 3); b) intersects the x-axis at −5, the y-axis at 3, & the z-axis at 7; c) passing through the origin and orthogonal to the line connecting the points (4, −5, 0) and (2, −3, 1)

3

you have to be careful with some of these formulae when the lines/planes are parallel to an axis. what are the equations for the following objects? a) line parallel to the x-axis & passing through the point (−3, −4); b) plane orthogonal to the x-axis and passing through the point ( 7, 0, −2 )

4

what are all possible ways a plane and a line can intersect in 3-d? explain.

5

For what value(s) of a are the planes given by 3ax +16y + az = 5 and 12x + ay + 4z = 17 parallel?

6

Challenge: For what value(s) of a, b are the planes given by 2ax -by + 12z = 15 and bx + 2ay + 3bz = -3 parallel? (soon, this will be easy for you…)

7

Give a parametrization of the line that passes through the PAIR OF points ( 2,-3, 5 ) AND (4, 1, 7) .

8

give a parametrization of some line orthogonal to the plane 4x + 12y − 5z = 6. how many such lines are there?

9

give a parametrization of a line passing through the point (5, −1, 4) and parallel to the plane given by 4x + 3y − z = 3. what choices did you need to make?

10

at what point does the line given by the parametrization x(t) = 2t − 1, y(t) = 3t + 2, z(t) = 4t intersect the plane given by 2x − 3y + z = 10?

arise in two different ways

The solutions to an equation Yields a curve in the plane

Specifying coordinates as a Function of a parameter

y = x² − 3        x(t) = t, y(t) = t² − 3

x² + y² = 4        x(t) = 2 cos t, y(t) = 2 sin t

is not all there is

WE’RE GOING TO WORK IN 3-D

Though perhaps not as familiar…

The solutions to an equation Yield a SURFACE in 3-D

z = 3x² + y² − 5

x² + y² + z² = 4

Specifying coordinates as a Function of TWO parameterS

x(s,t) = s    y(s,t) = t    z(s,t) = 3s² + t² − 5

x(s,t) = 2 cos s sin t    y(s,t) = 2 sin s sin t    z(s,t) = 2 cos t

Surface parametrizations: find a parametrization for the surface given by

2x³ − y + 3z² = 5

one answer: x = s, z = t, y = 2s³ + 3t² − 5

Find an implicit equation for the surface given by x = s, y = t, z = √( s² − 3t ):

z² = x² − 3y, i.e., 3y = x² − z²

Must be careful with bounds on the parameters! Especially with square roots! (here z ≥ 0, so only part of the implicit surface is covered)

Are among the nicest to work with

A sphere with radius r and Center (x0, y0, z0) is given by

( x − x0 )² + ( y − y0 )² + ( z − z0 )² = r²

or, equivalently,

( x − x0 )²/r² + ( y − y0 )²/r² + ( z − z0 )²/r² = 1

the axis-aligned ellipsoid With semiaxis “radii” a, b, & c > 0 and Center (x0, y0, z0) is given by

( x − x0 )²/a² + ( y − y0 )²/b² + ( z − z0 )²/c² = 1

If a = b < c this is a prolate spheroid If a = b > c this is an oblate spheroid what happens If a = b = c ?

When one of the signs in the equation of an ellipsoid is negative, one has a 1-sheeted hyperboloid:

( x − x0 )²/a² + ( y − y0 )²/b² − ( z − z0 )²/c² = 1

When two of the signs in the equation of an ellipsoid are negative, one has a 2-sheeted hyperboloid:

−( x − x0 )²/a² − ( y − y0 )²/b² + ( z − z0 )²/c² = 1

An elliptic paraboloid opens along one axis & is an ellipse in cross-section to that axis:

z − z0 = ( x − x0 )²/a² + ( y − y0 )²/b²

A hyperbolic paraboloid is parabolic along one axis & is a hyperbola in cross-section to that axis:

z − z0 = ( x − x0 )²/a² − ( y − y0 )²/b²

A cone opens along one axis & is an ellipse in cross-section to that axis:

( z − z0 )²/c² = ( x − x0 )²/a² + ( y − y0 )²/b²

This can be viewed as a Degenerate hyperboloid

elliptic

parabolic

hyperbolic

KNOWING HOW TO DRAW PICTURES OF SURFACES IS NOT A “CORE TRUTH” YOU NEED TO HAVE SEEN THE BASIC SURFACE FORMULAE & RECOGNIZE THEM WHEN THEY ARISE LATER...

But not until later in calculus

the big picture

nonlinear objects like curves & surfaces are the motivation for

calculus

in the following, ignore technical issues of parameter ranges

1

Give parametrizations of the following curves in the plane: A) y2 - x = 5 B) xy + x = 2 C) x2 + 2x – y = 7

2

determine an implicit formulation of the following parametrized planar curves: A) x(t) = 2t − 1, y(t) = t + 2; B) x(t) = 2 cos t, y(t) = 3 sin t; C) x(t) = t² − 2t + 1, y(t) = t

3

Give parametrizations of the following surfaces, using parameters s and t: A) x² + z² = 4; B) 2x − y² + z = 9; C) 3x + 2y − z = 8

4

can you find an implicit formulation of the following parametrized surfaces? x(s, t) = s x(s, t) = 2s - t x(s, t) = s A) y(s, t) = 3  t B) y(s, t) = 3t C) y(s, t) = s + t z(s, t) = 3s2 – 2t2 z(s, t) = 3  t z(s, t) = 4s + t

6

parametric & implicit objects have distinct advantages. given the curve defined by y3 – 4x2 – 3xy2 + 2yx = 7, is the point (-2, 0) on the curve? find a point that is.

7

parametric & implicit objects have distinct advantages. given the curve defined by x(t) = et t – t2 ; y(t) =  (t/2) +  (1+t2), is the point (-2, 0) on the curve? now, how hard is it to find a point that is? hint: pick a t.

8

what are the centers and semiaxis radii of the following ellipsoids? (you may need to “complete the square” to determine this…) A) (x−1)² + 4y² + 6z² = 36; B) x² + 2y + y² + 2z² = 15; C) x² + 7y² + z² − 4z = 45

9

identify the following quadratic surfaces. you may need to do a little algebra to get them into a “normal” form. A) x + 4y² − 3z² = 2; B) x² + 6x − y² + 4y + z = 8; C) x² + 7y² + z² − 4z = 45; D) x² + y² − 2y − 5z² = 10; E) x² + 2y + y² − z² = −1; F) −x² + y − y² − 3z² = −9

The relationship between implicit and parametrized surfaces is subtle & deep. These point to a number of larger ideas in Mathematics. Curves (in 1-d) and surfaces (in 2-d) are the images of parametrized spaces, with one or two parameters respectively. What happens when you look to higher dimensions? What is a “surface” in higher dimensions? Images of parametrizations with n inputs are called “manifolds” or “n-manifolds”.

Manifolds are among the most interesting and fundamental spaces that mathematicians (topologists and geometers) investigate. It is only recently that we have classified 3-manifolds (this is a long story), and there are still significant open problems (conjectures) about 4-manifolds. The universe is a strange and wonderful place. Even more interesting is the implicit setting. What happens here in higher dimensions?

If you write down some polynomial equations in several variables, what does the space of solutions to these equations look like? Is it a manifold? What is its dimension? Such spaces are called varieties, and, though not always manifolds, they are still very “nice”. This leads to the subject of algebraic geometry, which has had so much impact in modern physics. There’s so much more to learn!

I have read the above completely and agree to abide by these terms

to higher dimensional spaces…

ℝ² = { ( x, y ) }

ℝ³ = { ( x, y, z ) }

ℝⁿ = { ( x1 , x2 , … , xn-1 , xn ) }

a point is an element of ℝⁿ… …a location in space

Why bother with higher dimensions?

After all, most calculus books mention only dimensions 1, 2, & 3…

are a natural occurrence

A typical industrial robot arm has several rotational joints whose configuration is specified by a sequence of joint angles

φ1 , φ2 , φ3 , … , φn


the wireless receiver in your phone reads the signal strengths from all the nearby wireless access points in a building

x1 , x2 , x3 , … , xn

( & physical space too! )

building a customer profile from survey data gives coordinates in a preference space: each customer is characterized by preferences

p1 , p2 , p3 , … , pn

durian espresso fortnite sushi plini dante cohomology powerpoint Bollywood

with coordinates of points in space?

in 2-d, the distance from P to Q is

√( ( Q1 − P1 )² + ( Q2 − P2 )² )

in 3-d, it is

√( ( Q1 − P1 )² + ( Q2 − P2 )² + ( Q3 − P3 )² )

in n-d, it is

√( ( Q1 − P1 )² + ( Q2 − P2 )² + … + ( Qn − Pn )² ) = √( Σi ( Qi − Pi )² )

distance formulae in 3-D what is the distance between a point and a line in 3-d space?

P = ( 2, 3, −1 )

x(t) = 4t − 3
y(t) = −2t + 1
z(t) = 5t − 4

compute the distance from a point P to a point on the line at parameter t using the formula for distance:

D(t) = √( ( x(t) − 2 )² + ( y(t) − 3 )² + ( z(t) + 1 )² )

this is now a problem of single-variable calculus... differentiate, solve for the minimum & verify… is there a better way? there is… see chapter 6…

distance in the 8th dimension… what is the distance between these two configurations of points in the plane?

each configuration of points is specified by eight coordinates… …four pairs of (x, y)

D = √( 0² + (−1)² + 3² + 0² + (−1)² + 1² + (−1)² + 2² ) = √17
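The same formula runs in any dimension. A minimal Python sketch (standard library only), reusing the eight coordinate differences from the configuration example above:

```python
import math

# Euclidean distance: square root of the sum of squared coordinate differences
def dist(P, Q):
    return math.sqrt(sum((q - p) ** 2 for p, q in zip(P, Q)))

# the eight coordinate differences from the example above
diffs = [0, -1, 3, 0, -1, 1, -1, 2]
D = math.sqrt(sum(d * d for d in diffs))  # sqrt(17)
print(D)
```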

distance in the 49th dimension

a ball of unit radius has diameter 2…

…while the unit cube in ℝ⁴⁹, with sides of length 1, has corner-to-corner diameter √49 = 7

though the unit cube seems smaller than a ball of unit radius, the cube has corner-to-corner diameter that can be big!
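The corner-to-corner diameter of the unit cube in n dimensions is √n (there are n coordinate differences of 1 each), which grows without bound; a sketch:

```python
import math

# corner-to-corner diameter of the unit cube in R^n: sqrt(1^2 + ... + 1^2) = sqrt(n)
def cube_diameter(n):
    return math.sqrt(n)

print([cube_diameter(n) for n in (1, 4, 49)])  # [1.0, 2.0, 7.0]
```

Already at n = 4 the cube's diameter matches the diameter 2 of a unit-radius ball, and at n = 49 it is 7.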

so, when do we get to the calculus stuff?

the big picture

2-d and 3-d space is not all there is… higher dimensional spaces can be coordinatized and put to good use!

1

what is the distance between the points ( 2, −1, 4 ) and ( 3, 1, 2 ) in ℝ³ ?

2

what is the distance between the points ( 0, 1, 2, 3, 4 ) and ( 0, 2, 4, 6, 8 ) in ℝ⁵ ?

3

consider a unit ball inscribed in a cube in ℝⁿ. what is the distance between a corner of the cube and the closest point of the inscribed ball? careful! what do you know about the diameters of the cube and the ball?

4

a challenge: consider a unit ball inscribed in a cube in ℝⁿ. inscribe a second ball in a corner of the cube so it is tangent to the first ball. what is the radius of the second ball as a function of dimension? are you surprised?

5

how many coordinates are needed to describe the position and orientation of a simple robot in the plane with disc-like footprint (think “Roomba” or “dalek”) ? How many coordinates would it take to describe a fleet of five such robots?

6

7

consider the following planar linkages: mechanical devices with fixed-length rods and joints that can rotate in the plane. Assuming that one edge is “fixed” or held down, how many joint angles are needed to specify the shape of the linkage?

(a)

(B)

using signals as coordinates: consider a plane with several fixed beacons, whose locations are known to you on a map. assume you carry a physical device which gives you the exact distances to each beacon. a) if there are two beacons, how much can you say about your location based on the distance readings? b) how does your answer to a) change if you are in 3-dimensional space? how many beacons would you need to completely determine your location?

are not the only (or even most) important thing

a “difference” between two points in ℝⁿ

the vector from P to Q equals the vector from R to S:

only the difference matters – not the points themselves

abstract “things” that can be added…

and rescaled…

so as to obey certain laws

“three units right, & two units up” : the vector with components ( 3, 2 )

“two units right, one unit back, & two units down” : the vector with components ( 2, −1, −2 )

a vector v from a point P to a point Q is specified by its components:

v = ( v1 , v2 , … , vn-1 , vn )

begins with addition & rescaling

for u = ( u1 , u2 , … , un ) & v = ( v1 , v2 , … , vn ), addition & scalar multiplication act componentwise:

( u + v )i = ui + vi        ( c u )i = c ui

only vectors with the same number of components can be added!

u+v =v+u commutativity

u+0 = u

zero

u - v = u + (-v ) subtraction

you can think of vector addition as concatenation of vectors you can “see” commutativity

multiplication by a “scalar” (number) changes length: 2v points the same way as v, but is twice as long

subtraction is “adding a negative”, reversing the vector direction

| v | = √( Σi vi² )

“triangle inequality”: | u + v | ≤ | u | + | v |

zero length: | v | = 0 ⟺ v = 0

rescaling: | c v | = |c| | v |

a “unit vector” is a vector of length = 1

have nice parametrizations using vectors

use the parameter(s) to rescale “tangent” vectors

a line through (x0, y0): 1 parameter, 1 vector: x(t) = x0 + t v

a plane through (x0, y0, z0): 2 parameters, 2 vectors: x(s,t) = x0 + s u + t v

in general you have to make sure these vectors are “independent” (e.g., not collinear)

Are a useful representation

in n, there are standard basis vectors:

0 j= 1 0

0 k= 0 1

v = v 1 i + v2 j + v3 k

0 ek = 1 0 k = 1…n



1 i= 0 0

0 0



in 3-d, it’s common to decompose vectors into three summands via a set of basis vectors

0 0

th

k

term

using basis vectors

u = ( 2, −1, 0, 5, 1, −3, 9, 0, 4 ) = 2e1 − e2 + 5e4 + e5 − 3e6 + 9e7 + 4e9

v = ( 1, −7, 5 ) = i − 7j + 5k = e1 − 7e2 + 5e3

Algebra & geometry of vectors: compute u − 2v & its length, for

u = ( 4, 0, −3 ) = 4i − 3k        v = ( 1, 1, −2 ) = i + j − 2k

u − 2v = ( 4 − 2 , 0 − 2 , −3 + 4 ) = ( 2, −2, 1 )

| u − 2v | = √( 2² + (−2)² + 1² ) = √( 4 + 4 + 1 ) = 3
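Componentwise vector algebra is one-line code. A minimal sketch verifying the example above:

```python
import math

# the example vectors u = 4i - 3k and v = i + j - 2k, as component lists
u = [4, 0, -3]
v = [1, 1, -2]

w = [ui - 2 * vi for ui, vi in zip(u, v)]      # u - 2v, componentwise
length = math.sqrt(sum(wi ** 2 for wi in w))   # Euclidean length
print(w, length)  # [2, -2, 1] and 3.0
```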

first, we need a lot more vector algebra & geometry

DIFFERENTIATion ?

the big picture

points are locations, but vectors point: they can be added & rescaled, as algebraic & geometric entities

for the questions below, use the following:

t = ( 1, −2, 3 )    u = ( 1, 0, −5 )    v = ( −3, 4, 0 )    w = ( 2, 0, −1 )

x = ( 1, 4, −2, 3 )    y = ( 0, 1, 2, 5 )    z = ( 7, 1, 3, 0 )

1

write each vector out in terms of standard basis vectors { ek } .

2

compute the following linear combinations: A) t - u + v ; B) 5x + 3y ; C) 3u – 5v + w ; D) x – 2y + 3z

3

compute the lengths of the following vectors: A) t - 2v ; B) x - y ; C) z - 3y + 2x

4

consider a string of binary data (0 & 1 bits) of length n as a binary vector in ℝⁿ (with each entry either 0 or 1). what is the “expected” (or average) length of such a vector? how does your answer depend on n as n → ∞?

5

find a vector that is tangent to the line y = 3x - 2 and rewrite the line in a parameterized form using that vector. can you also find a vector that is orthogonal to the line?

6

find a pair of non-parallel vectors tangent to the plane 2x + y - 3z = 5 . use these to give a parametrization of the plane with parameters s and t.

7

Give a vector-based parametrization of the line that passes through the PAIR OF points ( 3,-1, 6 ) AND (2, 0, 8) .

8

using vectors for parameterized lines works very well in dimension > 3. write an equation for a line passing through points ( 0, −1, 7, 2, −3 ) AND ( 1, 2, 5, −4, 8 ) in ℝ⁵.

9

parametrize the plane in ℝ³ PASSING THROUGH THE POINTS (3, 1, 4), (-1, 2, 3), AND (5, 1, 0). can you also find an implicit equation for this plane? can you parametrize the line which is the intersection of planes 3x + y + z = 0 and x - y + 4z = 6 ? hint: start by setting x(t) = t & substituting…

10

The treatment of vectors here given is for the bolstering of intuition and computation skills and is by no means a rigorous or comprehensive survey. It is often said that vectors are quantities that possess both magnitude and direction. This is suboptimal: there are “things” with magnitude and direction that are not vectors and vectors that have no implicit direction. A more proper treatment of vectors awaits you in a linear algebra course.

As a sample of what you would learn in such a course... A vector space V is a collection of objects along with: (1) a binary operation, +, which satisfies for all u, v, and w in V: (a) u+v = v+u (b) (u+v)+w = u+(v+w) (c) there is a “0” in V with u+0 = u for all u (d) there is, for each v in V an inverse “-v” such that v + (-v) = 0 in addition…

(2) a scalar multiplication * of the reals R on V such that for all a, b in R and u, v in V: (a) a*(b*v) = (ab)*v (b) a*(u+v) = a*u + a*v (c) (a+b)*u = a*u + b*u (d) 1*v = v

The “Euclidean” vectors presented here form a vector space, but so do many other things. Examples include polynomials, solutions to linear ODEs, oriented loops in electrical circuits, and much more. You can work with Euclidean vectors: just know that more is “out there”.

I have read the above completely and agree to abide by these terms

is an important operation on vectors

for u = ( u1 , u2 , … , un ) & v = ( v1 , v2 , … , vn ):

u * v = u1v1 + u2v2 + … + unvn = Σi ui vi

only compatibly-sized vectors can be multiplied!

u * v = v * u    commutativity

u * 0 = 0    zero

v * v = | v |²    lengths

is the initial reason to investigate dot products

given two vectors u and v in ℝⁿ, they span some plane in which their angle θ is well-defined:

u * v = | u | | v | cos θ

this is really useful!

dot products & angles: let

u = ( 1, 0, −1, 4 )    &    v = ( −1, 2, 3, 2 )

u * v = | u | | v | cos θ
u * v = −1 + 0 − 3 + 8 = 4
| u |² = 1 + 0 + 1 + 16 = 18
| v |² = 1 + 4 + 9 + 4 = 18
cos θ = ( u * v ) / ( | u | | v | ) = 4/18 = 2/9
θ = 1.3467… ≅ 77.2°
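A minimal sketch of the angle computation above (pure standard library; the helper names are ours):

```python
import math

# angle between vectors via u.v = |u||v|cos(theta)
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

u = [1, 0, -1, 4]
v = [-1, 2, 3, 2]
theta = math.acos(dot(u, v) / (norm(u) * norm(v)))
print(theta, math.degrees(theta))  # approx 1.3467 rad, approx 77.2 degrees
```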

assume u , v ≠ 0

u ⊥ v ⟺ cos θ = 0 ⟺ u * v = 0

in the plane, the vectors u = ( cos θ , sin θ ) & v = ( −sin θ , cos θ ) are always orthogonal

the basis vectors { e1 , e2 , … , en } in ℝⁿ are mutually orthogonal

is an important interpretation of the dot product

u = unit vector u * v = oriented, projected length of v along the “u axis”

positive dot product

negative dot product

be careful with the orientation! the sign of the dot product matters…

projection of vectors: let

a = ( 5, −1, 2 )    &    b = ( 0, 3, −4 )

component of a along b = a * ( b / | b | ) = ( a * b ) / | b |

| b | = √( 0 + 9 + 16 ) = 5
a * b = ( 5 · 0 ) + ( −1 · 3 ) + ( 2 · −4 ) = −11
component = −11/5 = −2.2
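The projected-length computation is a one-liner once the dot product is in hand; a sketch of the example above:

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))

a = [5, -1, 2]
b = [0, 3, -4]
# oriented length of a along the unit vector in the direction of b
component = dot(a, b) / math.sqrt(dot(b, b))
print(component)  # -2.2
```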

have nice implicit equations

recall the (vague) notion of a hyperplane as a flat separating subset of a space

the hyperplane passing through a point x0 and orthogonal to a vector n is…

a 1-d line in 2-d: ( x − x0 , y − y0 ) * ( nx , ny ) = 0

a 2-d plane in 3-d: ( x − x0 , y − y0 , z − z0 ) * ( nx , ny , nz ) = 0

a hyperplane in arbitrary dimension: ( x − x0 ) * n = 0

this is useful in machine learning

say you have a “Support vector machine” a hyperplane that optimally separates two types of data points

problem: Given a new point, on which side of the hyperplane does it lie?

( x - x0 ) * n > 0 ( x - x0 ) * n < 0 all you need to know is the location of one point on the hyperplane and a normal vector to the hyperplane. the rest is as easy as taking a dot product
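A sketch of this classification test. The hyperplane here (through the origin with normal (2, −4, 3)) is an illustrative choice of ours, not from the text:

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))

# classify a point x by the sign of (x - x0) . n
def side(x, x0, n):
    s = dot([xi - x0i for xi, x0i in zip(x, x0)], n)
    return "positive" if s > 0 else "negative" if s < 0 else "on the hyperplane"

print(side((1, 0, 0), (0, 0, 0), (2, -4, 3)))  # positive
print(side((0, 1, 0), (0, 0, 0), (2, -4, 3)))  # negative
```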


dot products are incredibly useful!

consider the “vector space” of possible interests or likes, each with an axis of intensity ranging from negative (hate it!) to positive (love it!)

the waste land napoleon dynamite legend of zelda porcupine tree jamon iberico

both you and that prospective “special someone” define vectors in this preference space… so, to measure compatibility, take your dot product!

large, positive dot product means lots of likes/dislikes in common

near-zero dot product means a mixture of similar & dissimilar interests

large, negative dot product means lots of anti-interests

if you go on to learn harmonic analysis or fourier theory, you will learn how to think of functions as “infinite-dimensional” vectors! the algebra & geometry of these are very important in signal processing, medical imaging & more…

integration as a dot product: say “f ⊥ g” when

∫_{x=-π}^{π} f(x) g(x) dx = 0

the big picture

dot products give a comparison between two vectors, blending algebra & geometry

1

in what direction does v = 3i – 4j + 5k point? Convert it to a unit vector.

2

which pair of the following vectors is “closest”? and which pair is “farthest”? (as measured by dot product) a = 2i – 5j + 7k : b = i + 3j - 3k : c = -5i + 2j + k : d = 4i + 3j -2k

3

consider the vectors u = 3i + 4j and v = -2i + j in the plane. Determine a decomposition of v into v = x + y where x is parallel to u and y is orthogonal to x . (hint: start by finding x ).

4

consider a 3-dimensional unit cube. what is the angle between the “grand diagonal” (connecting opposite corners) and an incident side edge? hint: use a dot product. now do this for higher-dimensional cubes!

5

compute the projection of the 4-d vector v = 8e1 + 2e2 - 5e3 + 6e4 onto the unit vector in the direction of w = e1 - e2 + e3 - e4 .

6

recall the equation of a line in 2-d passing through point P = ( x0, y0 ) and having slope m. What is the vector n orthogonal to this line?

7

what is the vector n orthogonal to the plane 2x – 3y + 7z = 9 in ℝ³ ?

8

for which constant(s) λ is the plane 2λx – y + λ²z = -6 orthogonal to the plane x + 5λy - 3z = 12 in ℝ³ ?

9

in chapter 4, problem 3, you computed the expected length of an n-bit binary vector. what is the expected dot product of two such binary vectors in ℝⁿ ?

10

consider the following vectors in ℝ⁴ : u = ( 4, -5, 2, -2 ) and v = ( 0, 3, -4, 0 ). compute the projected length of u onto v. compute the projected length of v onto u. are these the same or different?

and that is a very special world…

for u = ( u1, u2, u3 ) and v = ( v1, v2, v3 ):

u × v = ( u2v3 - u3v2 , u3v1 - u1v3 , u1v2 - u2v1 )

u × v = -v × u : anti-commutativity

u × 0 = 0

(this also works in 2-d if you set the third component to zero)

v × v = -v × v = 0

zero

Has a strongly geometric meaning

u × v ⊥ u ( and v ) :

u · ( u × v ) = u1 ( u2v3 - u3v2 ) + u2 ( u3v1 - u1v3 ) + u3 ( u1v2 - u2v1 )
             = u1u2v3 - u1u3v2 + u2u3v1 - u2u1v3 + u3u1v2 - u3u2v1 = 0
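the component formula and this orthogonality identity can be checked directly in Python (the vectors below are arbitrary test values):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1, -2, 3), (4, 0, -5)
w = cross(u, v)
print(w, dot(u, w), dot(v, w))   # both dot products vanish: w is orthogonal to u and v
```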

cross products & planes in 3-d

P = ( 0, -1, 1 ) ,  Q = ( 4, -1, 2 ) ,  R = ( 2, 1, 2 )

PQ = ( 4, 0, 1 )   PR = ( 2, 2, 1 )

the cross product is orthogonal to the plane spanned:

PQ × PR = ( 0 - 2 , 2 - 4 , 8 - 0 ) = ( -2, -2, 8 )

-2(x - 0) - 2(y + 1) + 8(z - 1) = 0

you can simplify this…
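the same computation as a Python sketch: the cross product of two edge vectors gives the plane’s normal.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

P, Q, R = (0, -1, 1), (4, -1, 2), (2, 1, 2)
n = cross(sub(Q, P), sub(R, P))   # PQ x PR, normal to the plane
print(n)   # (-2, -2, 8), so the plane is -2(x-0) - 2(y+1) + 8(z-1) = 0
```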

it sure seems hard to remember

using standard basis vectors makes it easy to remember

i × j = k    j × k = i    k × i = j
j × i = -k   k × j = -i   i × k = -j

i = ( 1, 0, 0 )   j = ( 0, 1, 0 )   k = ( 0, 0, 1 )


computing cross products

the cross product u × v for u = ( 4, 0, 2 ) = 4i + 2k and v = ( 0, 3, -1 ) = 3j - k :

u × v = ( 4i + 2k ) × ( 3j - k ) = 12 i×j - 4 i×k + 6 k×j - 2 k×k = -6i + 4j + 12k

the length of u × v gives information about how u and v lie in the plane they span:

| u × v | = | u | | v | sin θ , where θ = angle between the vectors
          = area of the parallelogram spanned by the vectors

the distance from a point P to a line containing point Q parallel to a vector v :

dist = | QP × v | / | v |

the distance from a point P to a plane containing point Q normal to a vector n :

dist = | QP · n | / | n |

is used all the time in 3-d

u · ( v × w ) = u1v2w3 - u1v3w2 + u2v3w1 - u2v1w3 + u3v1w2 - u3v2w1

u·(v×w) = v·(w×u) = w·(u×v) = -u·(w×v) = -w·(v×u) = -v·(u×w)

symmetric under cyclic permutation; antisymmetric if the cycle is reversed

this is called the scalar triple product

volume of the parallelepiped spanned by u, v, w = | u · ( v × w ) |  (absolute value)
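a quick Python check of the triple product, its cyclic symmetry, and the volume interpretation (unit cube):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triple(u, v, w):
    return dot(u, cross(v, w))

# the standard basis spans a unit cube: volume 1
print(abs(triple((1, 0, 0), (0, 1, 0), (0, 0, 1))))   # 1

# cyclic symmetry on arbitrary test vectors
u, v, w = (1, -2, 3), (-3, 4, 0), (2, 0, -5)
print(triple(u, v, w), triple(v, w, u), triple(w, u, v))   # all equal
```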

it’s kind of special…

unless you are in the 7th dimension… there is an unusual vector product that works only on ℝ⁷. it’s cool! but, no, I’m not telling you more about octonions

the big picture

in 3-d, there are several specialized vector products (cross, scalar-triple) that help with geometry

1

compute the cross product ( ( i × 3k ) - ( 2j × i ) ) × 4k . can you do it in your head?

2

what is the equation of the plane in 3-d that passes through the points P = ( 1, -2, 1 ), Q = ( 3, -1, 1 ), and R = ( 2, -1, 3 ) ?

3

if a force of 12 newtons is applied to the end of a 15 cm long wrench at an angle in the plane that is 10 degrees away from perpendicular to the wrench, how much torque is applied? (compute the length of the torque vector)

4

what is the equation of the plane in 3-d that passes through the point P = ( 0, -2, 5 ) and contains the vectors 3i + 2k and i - j + k ?

5

Compute the volume of the 3-d parallelepiped determined by 3 i + 2 j , 2 i + j + 2 k , and 3 j + 4 k .

6

prove that the cross product of parallel vectors is zero.

7

if you know that u , v , and w are all mutually orthogonal, what can you say about u × v × w ?

8

what is the distance from the point P = ( 4, -3, 1 ) to the line through the points Q = ( 4, 2, 3 ) and R = ( 0, -1, 2 ) ?

9

use the cross product to determine the area of the triangle in the plane passing through the points P = ( 2, -3 ), Q = ( -1, 2 ), and R = ( 4, 5 ).

10

compute the scalar triple product u · ( v × w ) of u = ( 1, -2, 3 ), v = ( -3, 4, 0 ), w = ( 2, 0, -5 ). Verify that: A) u · ( v × w ) = - v · ( u × w ) ; B) u · ( u × w ) = 0

11

For what value(s) of a are the planes given by ax - y + 2az = 5 and ax + 3ay - z = 11 orthogonal?

are used all over the place in applications

vectors in elementary physics

W = F · d

τ = r × F

F = ma

work is the dot product of force and displacement vectors

torque is the cross product of position and force vectors

force is the product of mass (a scalar) & the acceleration vector

lead quickly to calculus with vectors

for a curve in ℝ³ :

γ(t) = ( x(t), y(t), z(t) )  =  r , the position vector

γ'(t) = ( x', y', z' )  =  v , the velocity vector

γ''(t) = ( x'', y'', z'' )  =  a , the acceleration vector

the definition is exactly what you think it is…

v = r' = lim_{h→0} ( r(t+h) - r(t) ) / h

The velocity vector is tangent to the curve, oriented by increasing parameter. The magnitude of the velocity vector is the speed at which a point moves along the (time-)parametrized curve.

velocity & acceleration γ(t) =

γ '( t ) =

γ ''( t ) =

t tt 2 t t 2 t - 2 t -2 t  t

- t -4  t  t 2 2 t -2 2 t

describe the position, r(t) , velocity, v(t) , and acceleration, a(t) as a function of time t

can be useful in calculus

arclength & integration

given a parametrized curve γ : ℝ → ℝⁿ , the arclength element is dl

in the plane, recall:  l = ∫ dl

dl = √( dx² + dy² ) = √( (dx/dt)² + (dy/dt)² ) dt

dl = | γ' | dt

The LENGTH of the velocity vector

arclength & integration

what is the length of the helix in ℝ³ given by γ(t) = ( 4 cos t , 4 sin t , 3t ) , t = 0…6π ?

γ'(t) = ( -4 sin t , 4 cos t , 3 )

| γ' | = √( 16 sin²t + 16 cos²t + 9 ) = 5

l = ∫ dl = ∫_{t=0}^{6π} | γ' | dt = ∫_{t=0}^{6π} 5 dt = 30π
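the helix computation can be sanity-checked numerically: integrate the speed |γ'| with a simple midpoint Riemann sum and compare to 30π.

```python
import math

def speed(t):
    # gamma(t) = (4 cos t, 4 sin t, 3t)  =>  gamma'(t) = (-4 sin t, 4 cos t, 3)
    return math.sqrt((-4*math.sin(t))**2 + (4*math.cos(t))**2 + 9)

N = 10000
a, b = 0.0, 6*math.pi
h = (b - a) / N
length = sum(speed(a + (i + 0.5) * h) for i in range(N)) * h
print(length, 30*math.pi)   # both approximately 94.2478
```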

to a limited extent, Yes

for vector functions u(t), v(t) :

( u · v )' = u' · v + u · v'
( u × v )' = u' × v + u × v'

some celestial mechanics: Newton’s law of gravitation gives

F = m a = - ( GMm / |r|² ) ( r / |r| )   so   a = - GM r / |r|³

d/dt ( r × v ) = ( r' × v ) + ( r × v' ) = ( v × v ) + ( r × a ) = 0 + r × ( - GM r / |r|³ ) = 0

to many applications of vector calculus

the big picture derivatives of parametrized curves yield velocity & acceleration vectors… vector algebra assists with doing calculus on curves

1

compute the velocity and acceleration functions of a particle with position:
A) r(t) = t² i – t³ j + eᵗ k
B) r(t) = ( x0 + u0 t ) i + ( y0 + v0 t - g t²/2 ) j
C) r(t) = a cos t e1 + a sin t e2 + b cos t e3 + b sin t e4

2

recall: we computed the velocity and acceleration of this curve: t a) when are the position and velocity orthogonal? γ(t) =  t t b) when are the velocity and acceleration orthogonal? 2 t c) is there any way to obtain your answer for part b) without direct computation? (i.e., using derivative rules…)

3

consider a particle moving along the curve given by r(t) = et i – t3 j +  t k . Is there ever a time when the acceleration and velocity vectors are parallel?

4

What is the work done by the force vector F = i - 5j + 3k in moving an object from point (1, 2, 4) to (0, -3, 7) along a straight line?

5

Compute the following arc lengths of the curves with position r(t):
A) r(t) = cos 2t i + 3t j + sin 2t k , for t = 0…4π
B) r(t) = 4t^{3/2} i + 2 cos 3t j + 2 sin 3t k , for t = 0…1
C) r(t) = i + 3 j + 7t² k , for t = 0…T

6

consider 2n coordinatized as { (xi , yi ) : i = 1…n }. Let γ be a curve in which projects to a circle of radius Ri in each (xi , yi ) plane (say, centered at each origin). A) parameterize γ = γ(t) as simply as possible. B) guess what the arclength is. then compute it via an integral.

7

Prove that if a particle in the plane has position vector always orthogonal to velocity, then it is moving along a circular path. (Hint: consider the dot product of position with itself). Does your argument work in 3-d?

8

use the ordinary product rule for derivatives to prove that  d/dt ( f(t) r(t) ) = f'(t) r(t) + f(t) v(t) , for f a scalar function of t

was an early motivation for vector calculus

integrating for position

a particle starts at r(0) = ( 2, 4, -1 ) & moves with acceleration a(t) = ( -cos t , 3e⁻ᵗ , 6t ) and initial velocity v(0) = ( -1, 0, 2 )

where is the particle at time t ?

r(t) = r(0) + ∫_{s=0}^{t} ( v(0) + ∫_{u=0}^{s} a(u) du ) ds

the inner integral: v(s) = v(0) + ∫_{u=0}^{s} a(u) du = ( -sin s - 1 , -3e⁻ˢ + 3 , 3s² + 2 )

the outer integral:

r(t) = ( 2, 4, -1 ) + ∫_{s=0}^{t} v(s) ds = ( cos t - t + 1 , 3e⁻ᵗ + 3t + 1 , t³ + 2t - 1 )
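the closed-form answer can be checked against a crude numerical integration of the acceleration (forward Euler with small steps, using the same r(0), v(0), and a(t) as the example):

```python
import math

def accel(t):
    return (-math.cos(t), 3*math.exp(-t), 6*t)

def r_exact(t):
    return (math.cos(t) - t + 1, 3*math.exp(-t) + 3*t + 1, t**3 + 2*t - 1)

r = [2.0, 4.0, -1.0]   # r(0)
v = [-1.0, 0.0, 2.0]   # v(0)
dt = 1e-4
steps = int(1.0 / dt)  # integrate up to t = 1
for k in range(steps):
    a = accel(k * dt)
    for i in range(3):
        r[i] += v[i] * dt
        v[i] += a[i] * dt

print(r)            # close to r_exact(1)
print(r_exact(1.0))
```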

is especially worth focusing on

has a magnitude

and a direction

what do these mean for a curve?

given a parametrized curve γ : ℝ → ℝⁿ …

velocity, v(t), & acceleration, a(t), are often expressed as a combination of two orthogonal unit vectors for each time t :

T(t) = v / | v | , the unit tangent vector to the curve at time t

N(t) = T'(t) / | T'(t) | , the unit normal vector at time t

T · N = 0 : these are orthogonal unit vectors

a

unit tangent & normal vectors

this curve is a helix: r(t) = ( cos 3t , 4t - 1 , sin 3t )

compute the unit tangent & normal vectors:

v(t) = ( -3 sin 3t , 4 , 3 cos 3t )

T(t) = v / | v | = ( -3 sin 3t , 4 , 3 cos 3t ) / 5

a(t) = ( -9 cos 3t , 0 , -9 sin 3t )

T'(t) = ( -9 cos 3t , 0 , -9 sin 3t ) / 5

N(t) = T'(t) / | T'(t) | = ( -cos 3t , 0 , -sin 3t )   (after normalization)

ahha!

one decomposes the acceleration vector into its “tangential” and “normal” components these are useful!

a

tangential acceleration indicates the rate of change of speed

normal acceleration indicates how the curve bends away from tangency

a = aT T + aN N = ( d|v|/dt ) T + ( κ |v|² ) N

tells you how “tightly” you turn

given a parametrized curve γ : ℝ → ℝⁿ , the curvature, κ(t), measures how much the curve “bends”…

curvature is the angular rate of change of the tangent vector with respect to arclength

κ(t) = | dT/dl | = | dT/dt | / ( dl/dt )

not so easy to use in practice…

small curvature means a “flatter” curve

for a curve in 3-d, this simplifies

κ(t) = | v × a | / | v |³ = ( a · N ) / ( v · v )

large curvature means a “tighter” curve…

the “best fit circle” to a curve at a point lies in the normal-tangent plane and has radius the reciprocal of the curvature; its center is at

r(t) + ( 1/κ(t) ) N(t)

this is the osculating circle ( “osculating” : from the latin for “kiss” )

computing curvature in the plane

compute the curvature of the hanging cable given by the graph of y = cosh x (remember these?)

r(t) = ( t , cosh t )   v(t) = ( 1 , sinh t )   a(t) = ( 0 , cosh t )

2-d curves are simple… in 2-d, the cross product gives a scalar function:

κ(t) = | v × a | / | v |³ = cosh t / ( 1 + sinh²t )^{3/2} = cosh t / cosh³t = sech²t
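a numerical check of κ = sech² t for the catenary, using the planar cross product v_x a_y - v_y a_x :

```python
import math

def curvature(t):
    # r(t) = (t, cosh t) : v = (1, sinh t), a = (0, cosh t)
    cross2d = 1 * math.cosh(t) - math.sinh(t) * 0
    speed = math.hypot(1, math.sinh(t))
    return abs(cross2d) / speed**3

for t in (0.0, 1.0, 2.0):
    print(curvature(t), 1 / math.cosh(t)**2)   # the two columns agree
```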

because the acceleration vector is also changing

this “twist” is along the unit binormal vector B = T × N , and the strength of twisting out of the osculating plane is given by the torsion:

τ(t) = -N(t) · dB/dl

torsion is an analogue of curvature out of the osculating plane

the frenet-serret equations are a set of coupled differential equations that dictate how the tangent, normal, & binormal vectors evolve:

dT/dl = κ N
dN/dl = -κ T + τ B
dB/dl = -τ N

there are various formulae for computing torsion based on a parametrization of a curve

τ = ( ( r' × r'' ) · r''' ) / | r' × r'' |²

such formulae are not essential to calculus, but, if you like this, check out differential geometry…

do you wonder what happens in ℝⁿ ?

the big picture derivatives of parametrized curves lead to notions of curvature, torsion, & more these are the beginnings of differential geometry & lots of applications in 3-d

1

FOR EACH OF THE FOLLOWING CURVES, compute the unit tangent, T(t), unit NORMAL, N(t), and binormal, B(t), VECTORS at the given parameter t:
A) r(t) = t² i – t³ j + eᵗ k , at t = 1
B) r(t) = 6t i – (8 cos t) j - (8 sin t) k , for all t
C) r(t) = t² i + (cos t) j + t k , at t = 1
D) r(t) = (cos 2t) i + (sin 2t) j + t k , at t = 0

2

a formula for curvature in ℝ³ using velocity, v(t), and acceleration, a(t), IS:

κ(t) = | v(t) × a(t) | / | v(t) |³

Use this to compute the curvature of the curves:
A) r(t) = t i + t² j + t³ k at the point (1, 1, 1)
B) r(t) = (eᵗ sin t) i + 2eᵗ j + (eᵗ cos t) k at the point (0, 2, 1)
C) r(t) = t eᵗ i + e²ᵗ j – e⁻³ᵗ k at the point (0, 1, -1)

3

Compute the tangential & normal components of acceleration of the curves with position r(t):
A) r(t) = (cos 2t) i + (sin 2t) j + eᵗ k , at t = 0
B) r(t) = t² i - 2t j + (ln t) k , at t = 1
C) r(t) = (cos t + t sin t) i + (sin t - t cos t) j , for all t
can you do this without computing the unit tangent & normal vectors explicitly? hint: start by computing speed

4

using the definition of the unit normal vector N (the normalized derivative of the unit tangent vector) prove that N * T = 0. Hint: differentiate (T * T)

5

use the curvature formula from problem 2 to show that, for a planar curve of the form r(t) = f(t) i + g(t) j , the curvature equals

κ(t) = | f' g'' - f'' g' | / ( ( f' )² + ( g' )² )^{3/2}

6

use the result of problem 5 to compute the equation of the osculating circle for the ellipse r(t) = (3 cos t) i + (4 sin t) j . what do you notice about your answer?

for curves in all ℝⁿ ?

consider what happens when you are given data that comes as a function of time…

x

this is a 1-dimensional output : one variable changes

…but what happens with multiple time series?

x1 , x2 , x3 , x4 , …

assuming a consistent time, one can think of all the time series together as a time-parametrized curve in a space whose dimension equals the number of time series

what are some realistic cases in which multiple time series occur? & what does this tell us about the dimensionality of “physical” systems?

robotics

finance

neuroscience

it takes six variables to assign position and orientation data of a single flying drone… what about a swarm of them?

keeping track of the S&P500 companies? the time-series of those stock prices gives a curve in a 500 dimensional space!

each neuron has a time series of electrical activity. how many neurons do you have? that’s high dimensional!

surfaces? or other “things” in higher-dimensional space?

Even in simple cases where we can graph a 2-d surface… it’s not obvious!

What is the derivative? the slope? the rate of change?

for an n-dimensional “surface” in ℝᵐ ?


A MATRIX IS A 2-D ARRAY of things, such as numbers, variables, or functions

with columns indexed 1, 2, …, n and rows indexed 1, 2, …, m :

A =

1  0  1  3
-1  2  4  7
1  8  -1  0
-3  2  6  0

Aij = entry in the ith row & jth column ; an m × n matrix has m rows and n columns

a 1 × n matrix is a single row ; an m × 1 matrix is a single column, such as ( 1, 4, -1, 5 )ᵀ

square matrices have equal numbers of rows and columns the nonzero terms can sometimes be fit into triangular or diagonal form

Z

“zero” matrix: any size all zeros

0 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

I “identity” matrix: square, all zeros but ones on the diagonal

1 0 0 0 0 0

0 1 0 0 0 0

0 0 1 0 0 0

0 0 0 1 0 0

0 0 0 0 1 0

0 0 0 0 0 1

being data structures, are everywhere

the CAuchy stress matrix fix coordinate axes xi at a point in a substance

σij = stress along the xj axis acting in the plane orthogonal to the xi axis

σ=

σ11 σ12 σ13 σ21 σ22 σ23 σ31 σ32 σ33

Digital images as matrices

i Aij =

Greyscale intensity Of pixel at ( i , j )

j

social networks & binary relations

i 1 : if i friends j Aij = 0 : otherwise

j 1 0 0 0 0 1 1 0 1 0 1 0 1 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0

0 0 0 0 0 1

correlation in biological systems gen-7 gen-6 gen-5 gen-4 gen-3 gen-2 gen-1

Aij =

correlation between genetic traits i and j

gen-1 gen-2 gen-3 gen-4 gen-5 gen-6 gen-7

markov chains & probability

Aij = probability that state j changes to state i at the next time step

A =

0.7  0.1  0.1
0.1  0.6  0.4
0.2  0.3  0.5

word-following frequencies

list all the words used in a fixed piece of text

Aij = probability that word j follows word i

can be interchanged…

( Aᵀ )ij = Aji

A =

2  -3  5
-1  0  6
8  12  -4
3  0  10

Aᵀ =

2  -1  8  3
-3  0  12  0
5  6  -4  10

transpose is involutive: ( Aᵀ )ᵀ = A for all A ; it undoes itself

a symmetric matrix, Aᵀ = A , has flip symmetry about the diagonal; symmetric matrices include: identity matrices, correlation matrices, facebook “friend” matrices, stress matrices

the big picture

matrices are a type of data structure that shows up everywhere in mathematics & its applications

in the questions below, use the following matrices: 8 -3 5 2 9 3 -3 5 5 6

1 -3 7 0 0 -2 0 12 -4 -2 -4 10

9 2 -1 8 -2

-1 6 0 2 0

2 6 -7 7 5

8 3 4 7 16 -5 -3 0 1 6

1

for each matrix above, state its size in terms of “m-by-n”.

2

state which of the above matrices are: A) square; B) triangular; or C) symmetric.

3

for each matrix above, write down its transpose.

1

5 0 -7 0 2 -1 0 0 3

4 6 -7 11 -2

the following problems are more interpretive & based on these matrices:

A=

1 1 1 0 1 0

1 1 1 0 0 0

1 1 1 1 1 1

0 0 1 1 0 0

1 0 1 0 1 0

0 0 1 0 0 1

1 0.2 0 B= 1 0.9 0

0.2 1 0.2 0.8 0 0.9

0 1 0.2 0.8 1 0.15 0.15 1 0 0.1 1 0.15

0.9 0 0 0.1 1 0

0 0.9 1 0.15 0 1

4

assume the matrix A encodes “who is friends with whom” on a social network. how well can you Rank the people (person 1, 2, …, 6) in terms of “popularity”.

5

assume the matrix B gives correlations between product sales, in that people who buy the product of row i also buy a product of row j with probability Bi,j. How would you put the products into “groups” for a recommendation engine?

(A + B)ij = Aij + Bij 1 -3 1 -1 0 4 1 -1 2 2 -1 5 1 -3 -1 -3 6 -9 2 0

+

5 1 -1 5

-2 1 3 1 2 4 0 8 6 1 4 -1 0 3 7 -3

=

6 -5 5 3 -2 11 2 6

2 2 1 3 2 10 2 1 -2 6 9 -3

this is just like scalar or vector addition : easy!

(cA)ij = cAij

3*

5 1 -1 5

-2 1 3 1 2 4 0 8 6 1 4 -1 0 3 7 -3

=

15 3 -3 15

-6 6 18 0

3 12 3 18

9 3 0 24 12 -3 21 -9

this is just like scalar or vector multiplication : easy!

is the most important operation

( A B )ij = Σ_{k=1…n} Aik Bkj

( A B )ij = ( ith row of A ) · ( jth column of B )

only compatibly-sized matrices can be multiplied!

A

1 0 1 3

m×n

4 2 0 2 8 -1 6 1 3 -1 5 0 3 -1 0 -1 1 2 8 4 -1 7 0

10 38 -5 5

-4 2 12 26 26 1 48 7

-2 -2 15 27

9 -2 4 17

B

n×p

AB

m×p
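here is the row-times-column rule as a short Python sketch (the 2-by-2 inputs are small test matrices, which also preview the fact that AB ≠ BA):

```python
def matmul(A, B):
    """(AB)ij = sum_k Aik Bkj : rows of A against columns of B."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "sizes must be compatible"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[0, 1], [2, 3]]
B = [[2, 1], [1, 0]]
print(matmul(A, B))   # [[1, 0], [7, 2]]
print(matmul(B, A))   # [[2, 5], [0, 1]]
```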

matrix-vector multiplication : matrix -times- column vector = column vector

0  1  -1  0        2         1
1  -1  2  5    ×   1    =    -4
2  1  -3  0        0         5
                   -1

equivalently, the product is a weighted combination of the matrix’s columns:

2 ( 0, 1, 2 )ᵀ + 1 ( 1, -1, 1 )ᵀ + 0 ( -1, 2, -3 )ᵀ - 1 ( 0, 5, 0 )ᵀ = ( 1, -4, 5 )ᵀ

are a lot like numbers…

J =

0  -1
1  0

the powers of J cycle: J² = -I ,  J³ = -J ,  J⁴ = I

since J² = -I , J plays the role of the imaginary unit i , a “square root” of -I

we have a “representation” of the complex numbers in the space of real 2-by-2 matrices:

a + bi  ↦

a  -b
b  a

you can check that addition & multiplication “work the same” in each representation…
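one such check, sketched in Python: multiply two complex numbers directly, and multiply their 2-by-2 matrix representatives; the results match.

```python
def to_matrix(z):
    """Represent a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = complex(1, 2), complex(3, -1)
print(z * w)                                  # (5+5j)
print(matmul(to_matrix(z), to_matrix(w)))     # [[5.0, -5.0], [5.0, 5.0]]
```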

oh, we’ve just begun…

AB ≠ BA : matrix multiplication does not commute!

A =

0  1
2  3

B =

2  1
1  0

AB =

1  0
7  2

BA =

2  5
0  1

remember u × v ≠ v × u ? order matters here, too:

“free sugar” is not really the same as “sugar free”

first socks then shoes! oh no, not again…

transpose & multiplication

( A B )ᵀ = Bᵀ Aᵀ

( (A B)ᵀ )ij = ( A B )ji = Σ_{k=1…n} Ajk Bki

( Bᵀ Aᵀ )ij = Σ_{k=1…n} (Bᵀ)ik (Aᵀ)kj = Σ_{k=1…n} Bki Ajk

you may want to remember this… it will be useful later!

the big picture

matrices are like numbers… you can do algebra with them, and they can surprise you with their form & nature

1

compute the following matrix products, if it is possible! 2 3 0 -1 1 3 3 1 5 4

2

-1 7 2 -1

0 3 2 0 -4 -1 8 0 1

0 -3 5 1 5 7

8 3 -3 0 16

-1 1

3 5 9 4

4 -3

7 9

try to do the following matrix-vector products in your head: 4 0 7 5

3

5 0 2 1 2 1 0 3 -4

-1 2

5 0 7 1 2 11 0 2 13

1 2 0

2 0 1 1 0 3 2 -1 0 0 7 5

find a counterexample to the claim that (AB)T = ATBT.

2 -1 0 1

2 6 7 11 -1

4

computing powers of square matrices can be… difficult. try to guess what

λ  1
0  λ

raised to the nth power is, for λ a constant and n > 0

5

here is a technique that justifies what you did above using the binomial theorem: write the matrix as

λ  0        0  1
0  λ    +   0  0

and expand the nth power binomially; the first two terms are λⁿ I plus n λⁿ⁻¹ times the second matrix. what happens to the higher-order terms in this series?

6

give an example of a pair of distinct and non-diagonal 2-by-2 matrices that do commute under multiplication.

7

is it the case for n×n matrices A and B that ( A + B )2 = A2 + 2AB + B2 ?

8

is it possible for a square matrix to satisfy A2 = 0 but with A ≠ 0 ? Explain.

can be expressed in matrix-vector form

x – 3y + 3z = 8
5x + y - 2z = 7
-2x + y - z = -6

this system has a unique solution: ( x, y, z ) = ( 2, -1, 1 )

in matrix-vector form, A x = b :

A =

1  -3  3
5  1  -2
-2  1  -1

x = ( x, y, z )ᵀ ,  b = ( 8, 7, -6 )ᵀ

is ubiquitous in applications

balancing chemical reactions remember doing this “ad hoc” in chemistry class?

C5H8 + O2 → CO2 + H2O

a1 C5H8 + a2 O2 → a3 CO2 + a4 H2O

balancing gives a linear equation for each element…

H : 8a1 = 2a4
C : 5a1 = a3
O : 2a2 = 2a3 + a4

8  0  0  -2       a1       0
5  0  -1  0   ×   a2   =   0
0  2  -2  -1      a3       0
                  a4
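the balancing system can be solved by substitution; a sketch in Python with exact rational arithmetic (fixing a1 = 1, then scaling to integers if needed):

```python
from fractions import Fraction

# a1 C5H8 + a2 O2 -> a3 CO2 + a4 H2O
# C: 5a1 = a3 ;  H: 8a1 = 2a4 ;  O: 2a2 = 2a3 + a4
a1 = Fraction(1)
a3 = 5 * a1
a4 = 8 * a1 / 2
a2 = (2 * a3 + a4) / 2
print(a1, a2, a3, a4)   # 1 7 5 4 : C5H8 + 7 O2 -> 5 CO2 + 4 H2O
```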

R1

Kirchhoff’s laws lead to equations

i1 = i2 + i3 i3 = i4 + i5 R1i1 + R2i2 = E R2i2 - R3i3 – R4i4 = 0 R4i4 – R5i5 = 0

in matrix form:

1   -1  -1   0   0        i1       0
0   0   1   -1  -1        i2       0
R1  R2  0    0   0    ×   i3   =   E
0   R2  -R3  -R4 0        i4       0
0   0   0    R4  -R5      i5       0

never learned Kirchhoff’s laws? don’t worry… move on!

mesh analysis for circuits

network traffic flow the same conservation laws apply to internet data flow

xi = “data flow rate”

x1

x3

F

across wire i

x1 = x3 + x4 x2 + x3 = x5 F = x1 + x2 x4 + x5 = F

x4

x2

1  1  0   0   0        x1       F
1  0  -1  -1  0        x2       0
0  1  1   0   -1   ×   x3   =   0
0  0  0   1   1        x4       F
                       x5

commodity production processes

p1 p2 p3 p4 p5

i1 i2 i3 i4 i5

a linear system relates ingredient amounts to product amounts

1 0 0 0 3

2 0 2 3 0

2 0 0 0 0

2 0 1 4 0

2 1 1 3 0

p1 p2 p3 p4 p5

=

i1 i2 i3 i4 i5

linear spring-mass systems u1

F1

u2

F2

u3

F3

κ4 κ1

κ2

κ3

κ1+κ2+κ4   -κ2      -κ4          u1       F1
-κ2        κ2+κ3    -κ3      ×   u2   =   F2
-κ4        -κ3      κ3+κ4        u3       F3

given a collection of (linear) springs and masses, hooke’s law gives a system of linear equations

but we are left with a question

can we actually solve any of these interesting problems? what if the matrices are very large?

the study of solutions to large linear systems belongs to the subject of “numerical linear algebra” and is very useful

the big picture

matrix-vector multiplication lets us express a variety of interesting linear systems in a common framework

1

rewrite the following systems of linear equations in the form Ax = b , carefully identifying the individual terms A, x, & b :

a)
2x – y + 3z = 2
5x + z = 10
x + y – 2z = 4

b)
a + c = 7
b – 5c = -4
c + 2a – 3b = 5

c)
x1 – x2 + x3 = 3
x2 – x3 + x4 = -1
x3 – x4 + x1 = 6
x4 – x1 + x2 = 7

2

convert the following chemical reactions into linear systems for balancing: a) Au2S3 + H2 → Au + H2S  B) HClO4 + P4O10 → H3PO4 + Cl2O7

3

assume the following recipes for baking: (units are in g = grams) cookies: 500g flour, 225g sugar, 200g butter, 80g eggs bread: 850g flour, 500g water, 15g sugar, 10g yeast bagels: 475g flour, 225g water, 50g eggs, 10g yeast write out the system Ax = b, where b = amount of ingredients used. what is the interpretation of x?

4

linear systems arise in forcebalancing problems in statics. for each of the loaded-cable scenarios shown, use the fact that forces sum to zero in equilibrium to set up systems for the tensions in the wires.

60•

a)

T1

90•

30•

45•

T1

B)

T2

60•

T2 T3

45•

T4

25lb

10lb

5

use kirchhoff’s laws to derive equations of the form Ax = b for these circuits:

20Ω

a) i1 100V

50Ω i2

b) i3

10Ω

30Ω i1

30Ω

220V

i2 10Ω

20Ω i5

i3 i4 40Ω

20Ω

is especially interesting…

solving a triangular system

given an upper-triangular linear system:

1  3  1  -5  0        x1       8
0  -4  0  2  1        x2       5
0  0  6  4  -1    ×   x3   =   1
0  0  0  -2  -3       x4       1
0  0  0  0  2         x5       6

start from the last equation… solve, substitute, and repeat:

5 :  2x5 = 6  ⟹  x5 = 3
4 :  -2x4 - 3x5 = 1  ⟹  -2x4 - 9 = 1  ⟹  x4 = -5
3 :  6x3 + 4x4 - x5 = 1  ⟹  6x3 - 20 - 3 = 1  ⟹  x3 = 4
2 :  -4x2 + 2x4 + x5 = 5  ⟹  -4x2 - 10 + 3 = 5  ⟹  x2 = -3
1 :  x1 + 3x2 + x3 - 5x4 = 8  ⟹  x1 - 9 + 4 + 25 = 8  ⟹  x1 = -12

x = ( -12, -3, 4, -5, 3 )
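the solve-substitute-repeat loop above is exactly back-substitution; a Python sketch run on the same system:

```python
def back_substitute(U, b):
    """Solve Ux = b for an upper-triangular U, from the last row up."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[1, 3, 1, -5, 0],
     [0, -4, 0, 2, 1],
     [0, 0, 6, 4, -1],
     [0, 0, 0, -2, -3],
     [0, 0, 0, 0, 2]]
b = [8, 5, 1, 1, 6]
print(back_substitute(U, b))   # [-12.0, -3.0, 4.0, -5.0, 3.0]
```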

every system were triangular

1 0 0 0 0

2 0 0 2 0

0 0 6 3 0 0

3 2 4 2 -1 0

0 1 -2 -1 4 2

x1 x2 x3 x4 x5

0 -2 = 2 4 3 10

1 0 0 0 0

2 0 0 2 0

0 0 6 3 0 0

3 2 4 2 -1 0

0 0 1 -2 -2 -1 2 4 4 3 2 10

if you exchange the rows of the augmented matrix... you do not change the solutions : it’s the same system!

you can do things to an augmented matrix… & not change the solutions

1 2 1 5 1

-3 -4 5 0 -1

1 0 1 7 3

-1 2 4 -2 4

2 8 -1 -9 0

1 0 -1 0 3

R2 ↔ R5

1 1 1 5 2

-3 -1 5 0 -4

1 3 1 7 0

-1 4 4 -2 2

2 0 -1 -9 8

1 3 -1 0 0

1 2 1 5 1

-3 -4 5 0 -1

1 0 1 7 3

-1 2 4 -2 4

2 8 -1 -9 0

1 0 -1 0 3

(1/2) R2

1 1 1 5 1

-3 -2 5 0 -1

1 0 1 7 3

-1 1 4 -2 4

2 4 -1 -9 0

1 0 -1 0 3

1 2 1 5 1

-3 -4 5 0 -1

1 0 1 7 3

-1 2 4 -2 4

2 8 -1 -9 0

1 0 -1 0 3

R2 - 2R1

1 0 1 5 1

-3 2 5 0 -1

1 -1 -2 4 1 4 7 -2 3 4

2 4 -1 -9 0

1 -2 -1 0 3

solving a 3-by-3 system : do row reduction to get simplified form

1  3  1  | 2
2  3  -1 | -5
-4  0  2 | -2

R2 - 2R1 :

1  3  1  | 2
0  -3  -3 | -9
-4  0  2 | -2

R3 + 4R1 :

1  3  1  | 2
0  -3  -3 | -9
0  12  6 | 6

notice how nice it is to have a “1” in the upper left-hand corner…

R2/3 , R3/6 :

1  3  1  | 2
0  -1  -1 | -3
0  2  1  | 1

R3 + 2R2 :

1  3  1  | 2
0  -1  -1 | -3
0  0  -1 | -5

once you have an upper-triangular matrix, you can back-substitute: ( x, y, z ) = ( 3, -2, 5 )
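elimination plus back-substitution is easy to automate; a compact Python sketch (with partial pivoting, run on the same 3-by-3 system):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]        # row exchange
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve([[1, 3, 1], [2, 3, -1], [-4, 0, 2]], [2, -5, -2]))
# close to [3.0, -2.0, 5.0]
```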

to perform row reduction

0 3 5 -9 1

4 2 3 0 5

-1 6 0 1 3

4 8 0 -2 7

-1 9 2 3 -7

6 -5 12 -1 0

pivot first, then zero out the first column… repeat with the next column until simplified

the big picture

row reduction simplifies matrices and, thus, systems of linear equations

1

use back-substitution to solve the following systems easily:

A)
3  2  -5      x       3
0  3  -1  ×   y   =   -8
0  0  -2      z       2

B)
2  0  -3  -1      x1       -7
0  1  4  -3   ×   x2   =   1
0  0  2  4        x3       8
0  0  0  3        x4       15

2

use row reduction of an augmented matrix to solve the following systems of equations:

A)
x – y + 3z = 2
5x + z = 10
x + y – 2z = 4

B)
u – 2v + 3w = 5
2u + 3v - w = 17
-3u + 4v + 2w = -8

C)
x – y + z – u + v = 1
x + z – u + v = 2
x – y + u - v = 3
x + y - z + v = 4
x – y + z – u = 5

3

Challenge: the approach to row reduction given here uses three elementary operations. is that necessary? Can you express one of the operations in terms of a sequence of the other two?

4

interesting things can happen when solving linear systems… what happens when you row-reduce to solve this? how many solutions do you get?

5

u–v+w = 2 B) 5u + v + w = 6 7u – v + 3w = 9

more interesting things can happen when solving linear systems… what happens when you row-reduce to solve this? how many solutions do you get?

6

2x + 2y -4z = 5 - 4x + 7x = 3 A) 2x - 2y + 3z = 4 3x - y + z = 2 A) x – 2y + z = 10 5x - 5y + 3z = 22

B)

u–v = 7 v-w=5 u + v – 2w = 17

one (useful!) way to think about row-reduction operations is VIA matrices. for example, rescaling the third row of a 4-by-4 matrix can be obtained by multiplying on the left by the matrix

1  0  0  0
0  1  0  0
0  0  c  0
0  0  0  1

can you find matrices that perform the other two operations? think!

A x = b  ⇏  ( A x ) / A = b / A  ⇏  x = b / A

the inverse of a matrix “undoes” how a matrix acts on a vector

you cannot divide by a matrix, but you can multiply by something like a reciprocal…

an inverse of A is a matrix, A⁻¹ , satisfying

A A⁻¹ = I = A⁻¹ A

the inverse is unique (if it exists):

if AB = I = BA and AC = I = CA , then B = I B = (CA) B = C (AB) = C I = C

cannot be taken for granted!

the inverse of

A =

a  b
0  0

does not exist for any a, b

if Ax = 0 has a solution for x ≠ 0, then A⁻¹ does not exist:

if Ax = 0 and A⁻¹ exists, then

x = I x = ( A⁻¹ A ) x = A⁻¹ ( A x ) = A⁻¹ 0 = 0

this implies that any matrix which can be row-reduced to have an all-zero row or column is not invertible

a general 2-by-2 matrix inverse

the inverse of

A =

a  b
c  d

is

A⁻¹ = 1/(ad - bc) ×

d  -b
-c  a

check:

a  b      d  -b       ad-bc  0
c  d   ×  -c  a   =   0  ad-bc

this works whenever ad - bc ≠ 0
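the 2-by-2 formula as a Python sketch, with exact rational entries via fractions:

```python
from fractions import Fraction

def inverse2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0 : not invertible")
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

A = [[2, -1], [-1, 2]]
print(inverse2(A))   # [[2/3, 1/3], [1/3, 2/3]]
```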

what are inverse matrices good for?

inverse matrices automatically solve linear systems

A x = b
A⁻¹ ( A x ) = A⁻¹ b
( A⁻¹ A ) x = A⁻¹ b
I x = A⁻¹ b
x = A⁻¹ b

this may be useful when solving over & over with different right-hand sides

but how do you compute them?

computing the inverse A⁻¹ is like solving n instances of Ax = b , since A A⁻¹ = I means A sends the jth column of A⁻¹ to the jth column of I :

1  3  1       a11       1       1  3  1       a12       0       1  3  1       a13       0
2  3  -1  ×   a21   =   0   ,   2  3  -1  ×   a22   =   1   ,   2  3  -1  ×   a23   =   0
-4  0  2      a31       0       -4  0  2      a32       0       -4  0  2      a33       1

so: augment A by the identity and row-reduce [ A | I ] until the left block is the identity, leaving [ I | A⁻¹ ] :

1  3  1  | 1  0  0
2  3  -1 | 0  1  0
-4  0  2 | 0  0  1

R2 - 2R1 , R3 + 4R1 :

1  3  1  | 1  0  0
0  -3  -3 | -2  1  0
0  12  6 | 4  0  1

R3 + 4R2 , then -R2/3 , -R3/6 :

1  3  1  | 1  0  0
0  1  1  | 2/3  -1/3  0
0  0  1  | 2/3  -2/3  -1/6

R1 - R3 , R2 - R3 , then R1 - 3R2 :

1  0  0  | 1/3  -1/3  -1/3
0  1  0  | 0    1/3   1/6
0  0  1  | 2/3  -2/3  -1/6

A⁻¹ =

1/3  -1/3  -1/3
0    1/3   1/6
2/3  -2/3  -1/6

there are nice formulae for inverses

perturbations of the identity

( I - A )⁻¹ = I + A + A² + A³ + A⁴ + A⁵ + …   (when the series converges)

check:

( I - A ) ( I + A + A² + A³ + A⁴ + A⁵ + … )
  = ( I + A + A² + A³ + A⁴ + A⁵ + … ) - ( A + A² + A³ + A⁴ + A⁵ + … )
  = I

( & vice versa )

oh… hello it has been too long, my old friend…
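for a numeric sanity check of this geometric series, take a small 2-by-2 perturbation A (chosen here so the powers of A shrink) and sum enough terms:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.1, 0.2], [0.05, -0.1]]   # "small" perturbation: the series converges

S = I                 # partial sum I + A + A^2 + ...
P = A
for _ in range(50):
    S = madd(S, P)
    P = matmul(P, A)

IA = [[0.9, -0.2], [-0.05, 1.1]]   # I - A
print(matmul(IA, S))               # very close to the identity
```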

the big picture

you cannot “divide” by a matrix, but you can multiply by the inverse… if it exists!

1

compute the inverses of the following matrices using the 2-by-2 formula A)

3 0 12 -1

B)

1 3 1 4

C)

3 2 12 8

D)

2 -1 -1 2

2

invert the following matrices by row-reducing the identity-augmented matrix:

A)
1  2
3  1

B)
0  1  1
2  3  1
-1  0  1

C)
3  1  2
1  1  -2
2  3  -1

3

use a 2-by-2 matrix inverse to give general solutions to these systems (finding x and y as functions of a and b):
A) 2x – y = a ; 5x + y = b
B) 4x + 3y = a ; x + y = b
C) 3x + 4y = a ; 2x + 3y = b

4

what is the inverse of a diagonal matrix? when is such a matrix invertible?

5

assume that A and B are both invertible n×n matrices. Prove that ( AB )-1 = B-1A-1 (use the definition of the inverse. this is not a hard problem!)

6

a permutation matrix is a square matrix P in which each row and column has a single “ 1 ”, all other entries being “0”. prove that for such a matrix P, its inverse is equal to its transpose: P-1 = PT

7

the following matrix is called a rotation matrix: (for reasons to be seen soon…)

Rθ =

 θ - θ θ θ

compute its inverse: what pattern do you notice?

8

assume that a square matrix A is invertible and satisfies A² = A. what can you conclude about A ?

9

assume that a square matrix A is invertible. what is the inverse of Aᵏ for natural numbers k > 0 ? are all such powers invertible?

IS VERY POWERFUL, BUT…

A MATRIX Is a linear transFormation, sending vectors to vectors

Every matrix is really a function on vectors

for A an m × n matrix, A : ℝⁿ → ℝᵐ

A ( cx ) = c Ax A ( x + y ) = Ax + Ay

A diagonal matrix acts by rescaling each axis independently

2 0 0 -1/2

Because it is a diagonal matrix, The axes remain “invariant”, though stretched, compressed, or flipped

Note that negative scalings reverse direction, just as with vectors

Taken to an extreme, a rescaling can send one dimension to zero, acting as a projection

0 0 0 1

IN HIGHER DIMENSIONS A PROJECTION CAN SEND ALL OF ℝⁿ TO A LOWER-DIMENSIONAL “SUBSPACE”

Some matrices can project along non-axis-aligned directions, e.g.,

[ 0 1 ; 0 1 ]

Off-diagonal terms act to “slide” or “shear” in certain directions:

[ 1 1 ; 0 1 ]

You might mistake this for a rotation, but it’s not: horizontal lines are kept horizontal.

A vertical shear can be enacted by the transpose:

[ 1 0 ; 1 1 ]

Special matrices rotate rigidly by some angle θ:

R_θ = [ cos θ  -sin θ ; sin θ  cos θ ]

Note how this matches with what should happen when the angle is zero or π.

Compare with that wonderful matrix J that represented the imaginary i:

J = [ 0 -1 ; 1 0 ]

ok, Matrices are really geometric transformations

Do this… [ 1 1 ; 0 1 ]   then this… [ 0 -1 ; 1 0 ]

The composition is… [ 0 -1 ; 1 1 ]

It’s matrix multiplication!

[ 0 -1 ; 1 0 ] [ 1 1 ; 0 1 ] = [ 0 -1 ; 1 1 ]
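This composition rule is easy to verify numerically. A minimal Python sketch (not from the text; `mat_mul` is a hypothetical helper):

```python
# Sketch: composing "shear, then quarter-turn" equals multiplying the
# matrices, with the transformation applied second on the left.

def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

shear = [[1, 1], [0, 1]]   # do this...
J = [[0, -1], [1, 0]]      # ...then this (the quarter-turn)

assert mat_mul(J, shear) == [[0, -1], [1, 1]]   # the composition above
```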

this extends to full generality

Euler angles & 3-d Rotation matrices

Consider an object (vehicle, robot, or your favorite sword) and fix a coordinate frame… then rotate about each axis (ccw) by an angle. These angles are often called Euler angles. The matrices are 3-d:

R_z:α = [ cos α  -sin α  0 ; sin α  cos α  0 ; 0  0  1 ]

R_y:β = [ cos β  0  sin β ; 0  1  0 ; -sin β  0  cos β ]

R_x:γ = [ 1  0  0 ; 0  cos γ  -sin γ ; 0  sin γ  cos γ ]

Euler angles & 3-d Rotation matrices

the rotation is counterclockwise about each axis:

R_x:γ = [ 1  0  0 ; 0  cos γ  -sin γ ; 0  sin γ  cos γ ]
R_y:β = [ cos β  0  sin β ; 0  1  0 ; -sin β  0  cos β ]
R_z:α = [ cos α  -sin α  0 ; sin α  cos α  0 ; 0  0  1 ]
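The three matrices above translate directly into code. A minimal Python sketch (not from the text; angles in radians):

```python
import math

# Sketch: the three Euler rotation matrices as functions of an angle,
# plus a composition helper for chaining them.

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Ry(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rx(g):
    c, s = math.cos(g), math.sin(g)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# compose: rotate about x, then y, then z
A = mat_mul(Rz(0.1), mat_mul(Ry(0.2), Rx(0.3)))

# sanity check: a quarter-turn about z sends i = (1,0,0) to j = (0,1,0)
v = [row[0] for row in Rz(math.pi / 2)]   # first column = image of i
assert all(abs(a - b) < 1e-12 for a, b in zip(v, [0, 1, 0]))
```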

transforming in the 4th dimension (!)

A = [ cos α  -sin α  0  0 ; sin α  cos α  0  0 ; 0  0  1  1 ; 0  0  0  1 ]

notice how A breaks up the 4-d (x1 , x2 , x3 , x4) space into 2-by-2 blocks with independent actions along planes:

rotate by α in the x1-x2 plane & a simple shear in the x3-x4 plane

this type of matrix is called block-diagonal: it’s a matrix of matrices!

there’s a lot more to learn in linear algebra about linear transformations:

any linear transformation R^n → R^n is a composition of scalings, shears, rotations, and projections

the composition of two rotations R^3 → R^3 is again a rotation about some axis

the big picture

“matrices are verbs as well as nouns” — they act on vectors & vector spaces as composable transformations

1

consider the picture below and draw its image in the plane under the following linear transformations:

A) [ 1 0 ; 1 2 ]   B) [ -1/2 0 ; 0 -2 ]   C) [ 0 2 ; -1 0 ]   D) [ 0 0 ; 1 2 ]

note the position of the origin and be careful in your drawings

2

give a 2-by-2 matrix that does the following to the plane: 1) flips along the vertical axis 2) rotates counter-clockwise by a quarter-turn now, do the same thing, but reverse the order of the operations (rotate then flip). did you get the same matrix? have you seen this before?

3

give a 2-by-2 matrix that projects the plane onto the diagonal y = x. note: there are multiple ways to do this!

4

assume that A is a 2-by-2 linear transformation that has the effect shown. based on this, draw similar cartoon pictures for the following matrices: A) 2A   B) A²   C) A^-1

5

there are many, many settings in which one would use rotation matrices to specify/describe/control rotation of an object (for example, a virtual reality headset). list at least 6 other examples you can think of. be creative!

6

assume that A and B act on the plane as follows: draw a cartoon of the effect of AB A B

bases are really useful for getting around

a basis is a minimal set of vectors that generates coordinates

a basis is orthogonal if the basis vectors are mutually orthogonal to one another

a basis is orthonormal if it is orthogonal and the basis vectors are all of unit length

given a basis B = { u_k } k=1…n, any vector can be uniquely expressed as

v = c1 u1 + c2 u2 + c3 u3 + … + cn un

where the { ci } are the coordinates of v in B

if you have an orthonormal basis, coordinates are easy to compute:

ck = v * uk
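The dot-product recipe can be sketched in a few lines of Python (not from the text; the sample basis is a frame rotated 60 degrees about the z-axis):

```python
import math

# Sketch: in an orthonormal basis, the k-th coordinate of v is the dot
# product v * u_k, and summing c_k u_k reassembles v exactly.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# a frame rotated 60 degrees about the z-axis (an orthonormal basis)
u1 = [0.5, math.sqrt(3) / 2, 0]
u2 = [-math.sqrt(3) / 2, 0.5, 0]
u3 = [0, 0, 1]

v = [2, -4, 5]
c = [dot(v, u) for u in (u1, u2, u3)]   # coordinates of v in this basis

# reassemble: c1 u1 + c2 u2 + c3 u3 recovers v
w = [sum(ck * uk[i] for ck, uk in zip(c, (u1, u2, u3))) for i in range(3)]
assert all(abs(a - b) < 1e-12 for a, b in zip(v, w))
```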

changing coordinates

v = [ 2 ; -4 ; 5 ] = 2i – 4j + 5k   (coordinates in the “old” i, j, k basis)

in the coordinate frame that rotates the standard frame 60 degrees about the z-axis, the new basis is

u1 = [ 1/2 ; √3/2 ; 0 ]   u2 = [ -√3/2 ; 1/2 ; 0 ]   u3 = [ 0 ; 0 ; 1 ]

since this basis is orthonormal, the new coordinates are dot products:

c1 = v * u1 = 1 – 2√3
c2 = v * u2 = -√3 – 2
c3 = v * u3 = 5

thus v = ( 1 – 2√3 ) u1 + ( -√3 – 2 ) u2 + 5 u3

change of basis is enacted via a square matrix

EXAMPLE: the point with standard coordinates x = 3, y = 2 can be written in the basis { [ 2 ; 1 ], [ 1 ; 1 ] }:

[ 3 ; 2 ] = u [ 2 ; 1 ] + v [ 1 ; 1 ]   with   u = 1 & v = 1

let’s redo this graphically, thinking in terms of linear transformations: this point has coordinates x = 3, y = 2, and it’s obvious from a picture that the transformed coordinates are u = 1, v = 1.

square matrices transform coordinate systems (“change of basis”):

[ u ; v ] = A^-1 [ x ; y ]    [ x ; y ] = A [ u ; v ]

A = [ 2 1 ; 1 1 ]    A^-1 = [ 1 -1 ; -1 2 ]

columns of A = new basis vectors in old coordinates

the old “standard” basis is transformed to u & v coordinates by A^-1 = [ 1 -1 ; -1 2 ]
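As a sketch (Python, not from the text), the coordinate change for this worked example:

```python
# Sketch: converting standard (x, y) coordinates to (u, v) coordinates
# with the inverse of the basis matrix, and back again.

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A     = [[2, 1], [1, 1]]     # columns = new basis vectors in old coordinates
A_inv = [[1, -1], [-1, 2]]   # its 2x2 inverse (det A = 1)

uv = mat_vec(A_inv, [3, 2])  # the point x = 3, y = 2
assert uv == [1, 1]          # u = 1 & v = 1

assert mat_vec(A, uv) == [3, 2]   # going back: [x; y] = A [u; v]
```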

thinking in terms of a linear transformation is helpful…

coordinate changes are crucial in all kinds of applications

rotations in a reference frame:

R_z:α = [ cos α  -sin α  0 ; sin α  cos α  0 ; 0  0  1 ]
R_y:β = [ cos β  0  sin β ; 0  1  0 ; -sin β  0  cos β ]
R_x:γ = [ 1  0  0 ; 0  cos γ  -sin γ ; 0  sin γ  cos γ ]

composing all three gives

A = R_z:α R_y:β R_x:γ =
[ cos α cos β   cos α sin β sin γ - sin α cos γ   cos α sin β cos γ + sin α sin γ ;
  sin α cos β   sin α sin β sin γ + cos α cos γ   sin α sin β cos γ - cos α sin γ ;
  -sin β        cos β sin γ                       cos β cos γ ]

sometimes called a rotation matrix

note that the order of composition matters: A = R_z:α R_y:β R_x:γ differs in general from R_x:γ R_y:β R_z:α

the big picture

invertible matrices encode changes of coordinates from one basis to another

1

which of the following bases are orthogonal? 1 0 2

2 -1 4

2 0 -1

3 B) -2 -1

1 0 3

6 7 -2

4 0 1 0 -4 1 2 2 -10 4 1 A) C) 5 D) -2 2 0 -4 1 -4 2 4 -1 -2 5 4 10 1 0 12 convert any such orthogonal bases into orthonormal bases by rescaling.

-2 3 -1 0

2

a square matrix Q is said to be an orthogonal matrix if Q^-1 = Q^T. Show that the columns of an orthogonal matrix Q form an orthonormal basis.

3

show that the product of two orthogonal matrices is again orthogonal. (recall how products work with transposes and inverses…)

4

given a pair of vectors, u, v, in R^3 satisfying u * v = 0, how would you modify these & choose a third vector w to make the collection an orthonormal basis?

5

what are the coordinates of the vector 3i - 4j in the coordinate system with basis u = 2i – j & v = i + 3j ?

6

what are the coordinates of the vector i - 2j + 3k in the coordinate system with basis u = 2i + k , v = i - j , & w = i + j + k ?

7

what are the coordinates of the vector ( 3, 5, 0 ) in the coordinate system with orthonormal basis as follows: A)

2

1 3 -1

2

8

1 5

1 2 0

1 35

-4 2 5

B)

3

1 5 0

4

-4

1 25 5

3

20 25 2 25 -15 1

write down a coordinate change that transforms the planar vector 3i +5j in “standard” coordinates to the “new coordinates” vector -2i + 3j. Give the coordinate change as a 2-by-2 matrix A. is such a matrix uniquely defined?

The use of terms like “basis” and “coordinate change” is suggestive rather than precise. Proper definitions have not been given yet are crucial to further study.

A collection of vectors is linearly independent if no element of the collection can be expressed as a linear combination of the other elements in the collection.

Likewise, a collection of vectors in R^n is said to span R^n if every vector in R^n can be expressed as a linear combination of elements in the collection.

In brief, a collection of n vectors in R^n forms a basis if they are linearly independent and span R^n. A basis for R^n, therefore, is a minimal collection of vectors that span R^n, or, equivalently, a maximal collection of vectors that are linearly independent. In this context, the number of elements of a basis is equal to the dimension of the vector space they span. The number of vectors in any basis for R^n must be exactly n: fewer will not span; more will not be independent.

The use of this terminology is critical to linear algebra, in which one speaks not only of a basis for R^n, but for an abstract vector space or a subspace thereof.

Further progress in linear algebra requires mastery of these and other basic definitions. For more information, please take a linear algebra course as soon as possible.

I have read the above completely and agree to abide by these terms

a determinant is something like a “mass” of a square matrix

|[a]| = a

for A = [ a b ; c d ] the determinant is

det A = | A | = ad - bc

notice that 1-by-1 and 2-by-2 matrices are invertible if & only if

det(A) ≠ 0
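A minimal Python sketch (not from the text) of the 2-by-2 determinant and the inverse formula it powers:

```python
# Sketch: the 2x2 determinant ad - bc, and the invertibility test det != 0.

def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

def inv2(A):
    """2x2 inverse via (1/det) [d -b; -c a]; fails if det == 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[d / det, -b / det], [-c / det, a / det]]

assert det2([[2, -1], [-1, 2]]) == 3
assert inv2([[2, 1], [1, 1]]) == [[1.0, -1.0], [-1.0, 2.0]]
assert det2([[3, 2], [12, 8]]) == 0   # singular: no inverse exists
```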

what happens in dimension three?

for A = [ a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 ],

|A| = ( a11 a22 a33 + a12 a23 a31 + a13 a21 a32 ) - ( a13 a22 a31 + a12 a21 a33 + a11 a23 a32 )

this is often remembered by means of a cyclic rewriting of the matrix, using triples of diagonals: the “+” terms come from down-right diagonals, the “-” terms from down-left diagonals.

that formula looks familiar…

u * ( v × w ) = det [ u ; v ; w ]   (rows u, v, w)

v × w = det [ i j k ; v ; w ]

To compute determinants in 3-d

the determinant can be computed via “minor expansion”

A = [ a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 ]

| A | = a11 |M11| - a12 |M12| + a13 |M13|
      = a11 det [ a22 a23 ; a32 a33 ] - a12 det [ a21 a23 ; a31 a33 ] + a13 det [ a21 a22 ; a31 a32 ]

these submatrices are called “minors”

expansion about the top row…

the determinant can be computed via “minor expansion” minors are defined via row/column deletion

Mij =

delete row i & column j from A

the determinant is an alternating sum of minor determinants along a row or column, with the checkerboard of signs

[ + - + ; - + - ; + - + ]

alternating sign of weighted minors:

|A| = a11 |M11| - a12 |M12| + a13 |M13|
    = a31 |M31| - a32 |M32| + a33 |M33|
    = -a12 |M12| + a22 |M22| - a32 |M32|

this works for any row (or column) by using an alternating sign…
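Minor expansion along the top row translates directly into a recursive routine. A Python sketch (not from the text):

```python
# Sketch: determinant by minor expansion along the top row, applied
# recursively until the 1x1 base case.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 1 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2
assert det([[11, 8, 3], [0, 2, 0], [-1, 15, 0]]) == 6
```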

3-by-3 determinants via minors

A = [ 11 8 3 ; 0 2 0 ; -1 15 0 ]

expanding along the top row:

| A | = 11 det [ 2 0 ; 15 0 ] - 8 det [ 0 0 ; -1 0 ] + 3 det [ 0 2 ; -1 15 ] = 0 - 0 + 3 ( 0 + 2 ) = 6

expanding along the third column instead:

| A | = 3 det [ 0 2 ; -1 15 ] - 0 + 0 = 6

you can compute determinants inductively

4-by-4 determinants:

| A | = a11 |M11| - a12 |M12| + a13 |M13| - a14 |M14|

this all is fairly unintuitive & confusing, but a few facts help:

det A^T = det A
det (AB) = det A det B
A is invertible if and only if det A ≠ 0

the big picture

the determinant is an algebraically-complicated notion of “mass” for a square matrix

1

2

3

compute the determinants of the following matrices (A, B, C):

10 -3 1 0 6 -2 5
0 0 7 -2 5 0 2 1 3 -2 2 1 -1 0 1

for large matrices, choosing the right expansion makes a difference. try computing the determinants of these matrices & be careful…

A)

2 0 -3 -1 3 1 4 -2 1 0 2 4 0 1 0 3

D)

b)

2 0 6 2 1 2

1 2 3 4 5 6 7 8 9 7 -3 0 2 0 -3 0 -1 4 -3 0 1

2 -1 0 -1 0 -1

3 0 0 3 1 0

0 0 1 0 5 0

knowing what you know about minor expansions, argue that if a square matrix has two rows equal, then the determinant is zero. if you can’t see how to get started, try some explicit 3-by-3 examples…

4

It might seem like it is difficult to compute the determinant of a large matrix, but if the structure is right, it’s really easy. Using what you know, compute the following determinants:

A)

5

2 1 6 2 9 21 5 15

0 0 3 0 0 -3 0 -1 4 -3 0 1 3 16 -7 11

0 0 0 0 0 0 -1 0 6 1 13 0 -1 7 -1 -5

0 0 0 0 0 4 12 4

0 0 0 0 0 0 -1 9

0 0 0 0 0 0 0 2

b)

3 1 0 0 0 0 0 0

4 0 0 2 0 0 0 -3 -5 0 -1 -1 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 1 1 0 0

0 0 0 0 0 0 0 0 6 0 4 0 0 -1 0 -2

0 0 0 0 0 0 4 7

It is true (but not obvious) that det A = det A^T. Show that this is true for a matrix A that is upper-triangular.

what is a determinant, really?

for a parallelogram spanned by (a, b) and (c, d): embed it in the rectangle of width a + c and height b + d, then subtract the triangles from the rectangle:

area = (a+c)(b+d) - (a+c) b - (b+d) c = ad - bc

area = | det [ a c ; b d ] |   (absolute value)

recall the scalar triple product from blue 1 chapter 6:

volume = | det [ u ; v ; w ] |   (absolute value)

the determinant relates to area & volume

u × v is orthogonal to both u and v. how much 3-d volume is in a planar parallelepiped? zero: the dot product gives

u * ( u × v ) = det [ u ; u ; v ] = 0 = v * ( u × v )

the parallelogram in R^3 spanned by v and w has area = | v × w |. let u = ( v × w ) / | v × w |, an orthogonal unit vector. the scalar triple product then gives the volume of the parallelepiped spanned by u, v, w, whose height over the base is 1:

volume = base * height = area * 1
       = u * ( v × w )
       = ( ( v × w ) / | v × w | ) * ( v × w )
       = | v × w |² / | v × w |
       = | v × w |
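The triple-product computation can be sketched in a few lines of Python (not from the text; `cross` and `dot` are hypothetical helpers):

```python
# Sketch: the 3-d volume spanned by u, v, w is | u * (v x w) |.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v, w = [1, 2, 0], [0, 1, 1], [1, 0, 1]
assert dot(u, cross(v, w)) == 3   # oriented volume of the parallelepiped
assert dot(u, cross(u, v)) == 0   # a planar parallelepiped has zero volume
```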

in higher dimensions?

the parallelopiped in R^n spanned by the n columns v1, v2, v3, …, vn of A has n-volume = | det(A) |

this can be hard to visualize…

the unit cube is sent to the parallelopiped: the unit vectors (area = 1) are sent to the matrix columns (area = | det |)

this geometric viewpoint matters because it leads to an immediate conclusion: linear transformations are linear! you can compose them… & when you do, what happens to volumes is multiplicative:

det( AB ) = det( A ) det( B )

all the “mysterious” results become clear:

if A is invertible, then det A^-1 = 1 / det A
(hint: use composition and what you know about the identity matrix…)

A is invertible if & only if det(A) ≠ 0
(hint: use what you know about determinant as a volume. invertibility means no nonzero vector is sent to zero…)

the big picture

the determinant is an “oriented volume” of the shape built from the column vectors

1

compute the 3-d volume of the parallelopiped spanned by: u = 3i + 4j + 5k v = -2i + j + 3k & w = i+j

2

repeat problem 1 above with the vectors u = 3i - 2j + k v = 4i + j + k & w = 5i + 3j are you surprised that you get the same answer as problem 1 ?

3

what is the area of the parallelogram in the plane with vertices at (2, 3), (3, 5), (7, 4), & (8, 6)

4

what is the area of the parallelogram in 3-d with vertices at (1, 2, 3), (1, 1, 1), (3, 3, 5), & (3, 4, 7)

5

for what constant c do the following three vectors lie within a plane? u = [ 2c ; 3 ; -1 ]  v = [ c ; 1 ; 2 ]  w = [ 1 ; 0 ; -2 ]  hint: think in terms of the volume of the parallelepiped they span…

6

what is the area of the triangle in 3-d with vertices at (3, 1, -2), (5, 3, 1), and (3, -1, 0) ? hint: a triangle is half of a parallelogram…

7

four non-coplanar vertices in R^3 generate a tetrahedron (also known as a 3-simplex). FACT: any such tetrahedron has volume 1/6th that of the parallelepiped of which it is a “corner”. given this, compute the volume of the 3-simplex with vertices at (2, -1, 3), (3, 4, 7), (-1, 2, 3), & (0, 3, -4). does it matter which of the 4 vertices is chosen to be the corner?

8

now, try to argue that the factor of 1/6 from problem 7 is correct. this can be done with some geometric reasoning: try working with the figure above. your answer may not be rigorous, & that’s ok if you get to the “aha” stage.

9

challenge: can you generalize the situation of problems 7/8 to an n-dimensional simplex in R^n ? what does the 1/6 become? think about the 2-d case for a hint…

computing determinants has a simple resolution:

for A triangular, |A| = product of diagonal terms

so row reduce A to a triangular U, tracking the row operations:

A = A0 → (R1) → A1 → (R2) → A2 → (R3) → … → (Rm) → U = Am

Careful! Multiply on the left…

Rm * … * R2 * R1 * A = U

det( Rm ) … det( R2 ) det( R1 ) det( A ) = det( U )

row reduction IS effected bY MATRIX MULTIPLICATION

R is the identity matrix with two rows switched: det R = -1

[ 0 1 0 0 ; 1 0 0 0 ; 0 0 1 0 ; 0 0 0 1 ] [ 4 -3 1 -1 ; 1 -4 0 1 ; 0 5 1 4 ; 5 0 7 -2 ] = [ 1 -4 0 1 ; 4 -3 1 -1 ; 0 5 1 4 ; 5 0 7 -2 ]

switching rows (or columns) reverses the “orientation” of the volume

R is the identity matrix with a “c” multiple on the diagonal: det R = c

[ 1 0 0 0 ; 0 1 0 0 ; 0 0 3 0 ; 0 0 0 1 ] [ 4 -3 1 -1 ; 1 -4 0 1 ; 0 5 1 4 ; 5 0 7 -2 ] = [ 4 -3 1 -1 ; 1 -4 0 1 ; 0 15 3 12 ; 5 0 7 -2 ]

tripling the length of one side triples the entire volume…

R is the identity matrix with a “c” multiple of another row added in: det R = 1

[ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 -5 0 1 ] [ 4 -3 1 -1 ; 1 -4 0 1 ; 0 5 1 4 ; 5 0 7 -2 ] = [ 4 -3 1 -1 ; 1 -4 0 1 ; 0 5 1 4 ; 0 20 7 -7 ]

adding a multiple of another row acts like a “shearing”, leaving the volume the same

the three row operations correspond to matrix multiplication (on the left):

exchange ( R1 ↔ R4 ):   [ 0 0 0 1 ; 0 1 0 0 ; 0 0 1 0 ; 1 0 0 0 ]
rescale ( R3 → C R3 ):   [ 1 0 0 0 ; 0 1 0 0 ; 0 0 C 0 ; 0 0 0 1 ]
combine ( R4 → R4 + C R2 ):   [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 C 0 1 ]

aha! To get the matrix corresponding to a particular row operation, simply apply that operation to the identity!

row-reduction for determinants

det [ 1 3 1 ; 2 3 -1 ; -4 0 2 ] = det [ 1 3 1 ; 0 -3 -3 ; -4 0 2 ] = det [ 1 3 1 ; 0 -3 -3 ; 0 12 6 ] = det [ 1 3 1 ; 0 -3 -3 ; 0 0 -6 ] = (1)(-3)(-6) = 18

this may not seem “easy”, but for large matrices, this is a lifesaver!

determinants and row exchange

what is the determinant of…

[ 0 2 -1 0 3 ; 0 0 0 2 -1 ; 3 -1 0 4 7 ; 0 0 0 0 -1 ; 0 0 4 5 -2 ]

exchange rows to move it toward triangular form:

[ 3 -1 0 4 7 ; 0 0 0 2 -1 ; 0 2 -1 0 3 ; 0 0 4 5 -2 ; 0 0 0 0 -1 ]

[ 3 -1 0 4 7 ; 0 2 -1 0 3 ; 0 0 4 5 -2 ; 0 0 0 2 -1 ; 0 0 0 0 -1 ]

each switch contributes a -1 to the determinant, so be careful to count these correctly!

you can compute determinants “quickly”

Often, you can avoid rescaling and switching rows… in general:

GIVEN: square matrix A
RowReduce: A → triangular matrix U
DET(A) = product of DIAGONALS of U
         TIMES -1 for each row switch
         DIVIDED BY c for each row rescale
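The recipe above can be sketched as code. A Python version (not from the text) that uses only row swaps and row combinations, so no division by rescale factors is needed:

```python
# Sketch: eliminate to upper-triangular form with row swaps and
# "subtract a multiple of another row" operations, then take the
# product of the diagonal, with a sign flip for each swap.

def det_by_elimination(A):
    A = [row[:] for row in A]        # work on a copy
    n, sign = len(A), 1
    for col in range(n):
        # find a row with a nonzero pivot and swap it up (det *= -1 per swap)
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0                 # a zero column: the matrix is singular
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign
        # clear the entries below the pivot (these ops leave det unchanged)
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [a - factor * p for a, p in zip(A[r], A[col])]
    prod = sign
    for i in range(n):
        prod *= A[i][i]
    return prod

assert det_by_elimination([[1, 3, 1], [2, 3, -1], [-4, 0, 2]]) == 18
```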

Helps not only with computation… here is why det A^T = det A:

it’s true for triangular matrices…
it’s true for each row-reduction matrix R…
determinant is multiplicative…

U^T = ( Rm * … * R2 * R1 * A )^T = A^T * R1^T * R2^T * … * Rm^T

you don’t need to understand them… yet! but they will become very useful if you take differential equations, dynamical systems, linear algebra, or optimization…

the big picture

computing determinants is best done via row reduction to a triangular form

1

compute the determinants of the following matrices via row reduction: it’s not easy, but is it easier than expansion via minors?

A) [ 2 13 3 ; 0 3 -1 ; 1 0 -3 ]

B) [ 2 0 -3 7 ; 0 1 5 -3 ; 1 0 3 4 ; 0 0 0 3 ]

C) [ 1 2 0 1 1 ; 0 -3 1 -1 2 ; -2 0 3 2 -1 ; 2 0 4 -1 0 ; 5 3 5 0 1 ]

2

compute the determinant of the following matrix, written as a product:

[ 2 -6 9 ; 0 5 17 ; 0 0 -1 ] [ 2 0 0 ; 9 1 0 ; 7 13 4 ]

hint: you might not want to multiply those two matrices together first…

3

if any row of a matrix is a multiple of another row, then the determinant is zero. give two different arguments for this proposition: one based on geometry and one based on row reduction and algebra.

4

compute the following determinant using intelligent row reduction. can you guess at a pattern here?

det [ 1 1 1 ; x y z ; x² y² z² ] = (x – y) (y – z) (z – x)

5

find an explicit counterexample to the claim that det (A+B) = det A + det B.

6

compute the determinants of the following block-diagonal matrices:

A)

5 2 0 0 0

3 0 1 0 0 -2 0 -3 0 0

0 0 4 7 0

0 0 0 0 5

B)

0 4 2 0 0

1 3 5 -2 -1 5 0 0 0 0

0 0 0 3 5

0 0 0 1 2

C) [ 3 6 0 0 0 0 ; 2 10 0 0 0 0 ; 0 0 5 6 0 0 ; 0 0 9 11 0 0 ; 0 0 0 0 2 4 ; 0 0 0 0 -1 3 ]

using what you know about row-reduction, you should see both a pattern and a quick way to compute…

this is just the beginning…

a square matrix A has eigenvalue λ if & only if

Av = λ v

for some nonzero v, which is an eigenvector. to find all the eigenvalues λ, solve the equation

det ( A – λI ) = 0

since determinant zero means not invertible. an n-by-n matrix has n eigenvalues…

EXAMPLE: for A = [ 2 1 ; 0 1 ]:

| A – λI | = det [ 2-λ 1 ; 0 1-λ ] = 0   ⇒   λ² – 3λ + 2 = 0   ⇒   λ = 1, 2

EXAMPLE: for J = [ 0 -1 ; 1 0 ]:

| J – λI | = det [ -λ -1 ; 1 -λ ] = (-λ)² – (-1) = 0   ⇒   λ² = -1   ⇒   λ = ±i
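For 2-by-2 matrices the characteristic equation is a quadratic, so the eigenvalues can be sketched directly (Python, not from the text; `eig2` is a hypothetical helper):

```python
import cmath

# Sketch: eigenvalues of a 2x2 matrix from its characteristic polynomial
# det(A - lI) = l^2 - (trace) l + det = 0, via the quadratic formula.

def eig2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)   # complex sqrt handles l = +/- i
    return (tr + disc) / 2, (tr - disc) / 2

l1, l2 = eig2([[2, 1], [0, 1]])
assert {l1, l2} == {2, 1}                  # the first example above

l1, l2 = eig2([[0, -1], [1, 0]])           # the matrix J
assert {l1, l2} == {1j, -1j}               # eigenvalues +/- i
```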

the notion of a vector space is the key to linear algebra

a set V of elements is a vector space if there are operations + : V × V → V and * : R × V → V satisfying, for every u, v, w in V and every a, b in R:

u + v = v + u
u + ( v + w ) = ( u + v ) + w
there is a “0” in V with u + 0 = u
there is a “-u” in V with u + (-u) = 0
a * ( u + v ) = a * u + a * v
a * ( b * u ) = (ab) * u
(a + b) * u = a * u + b * u
1 * u = u

Why bother with vector spaces? so much abstraction!

After all, most applications seem to use explicit coordinates…

let = set of polynomials of degree ≤ n can these can be added, subtracted, and multiplied by a scalar? is there a “0”?

let = set of solutions to a linear ODE

ax** + bx* + cx = 0

can these can be added, subtracted, and multiplied by a scalar? is there a “0”?

do these vector spaces have notions of…

is not merely interesting vector spaces

derivatives are, of course, rates of change

rates of change are interpreted as vectors…

the derivative of f at a is a linear transformation [Df]a, sending rates of change h at a to rates of change [Df]a h at f(a)

derivative rules follow familiar patterns…

gradients arise as the derivative of a scalar field: the gradient is the vectorized derivative of a scalar function, whose length indicates the rate of change… & whose direction is that of “steepest increase”

Taylor expansion is used to approximate functions about x = 0:

f( x ) = Σ ( 1 / α! ) [ D^α f ]0 x^α , a sum over “multi-indices” α

with derivatives comes the ability to optimize… at a critical point,

f ( a + h ) = f ( a ) + [ Df ]a h + ( 1 / 2! ) h^T [ D²f ]a h + O( | h |³ )

THE SECOND DERIVATIVE DOMINATES LOCAL BEHAVIOR

constrained optimization will take us to the boundary of what we can do:

LAGRANGE EQUATIONS

[Df] = λ [DG] , G = 0

to optimize f(x) constrained to the level set of G(x)

the big picture functions with multiple inputs & outputs require matrices to organize the derivative data & matrix algebra to do calculus

by Robert Ghrist, the Andrea Mitchell Professor of Mathematics and Electrical & Systems Engineering at the University of Pennsylvania

He’s an award-winning researcher, teacher, writer, & speaker. No awards for art, though… that’s done just for fun.

Good textbooks on calculus that use vectors, matrices, & matrix algebra:
Colley, S. J., Vector Calculus, 4th ed., Pearson, 2011.
Hubbard, J. and Hubbard, B. B., Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach, 5th ed., Matrix Editions, 2015.

Good introduction to applied linear algebra:
Boyd, S., and Vandenberghe, L., Introduction to Applied Linear Algebra – Vectors, Matrices, & Least Squares, Cambridge, to appear, 2018.

Good rigorous introduction to linear algebra & determinants:
Treil, S., Linear Algebra Done Wrong, web site, http://www.math.brown.edu/~treil/papers/LADW/LADW.html

all writing, design, drawing, & layout by prof/g [Robert Ghrist]. prof/g acknowledges the support of Andrea Mitchell & the fantastic engineering students at the University of Pennsylvania. during the writing of calculus blue, prof/g’s research was generously supported by the United States Department of Defense through the ASDR&E Vannevar Bush Faculty Fellowship.