Industrial Mathematics
Modeling in Industry, Science, and Government
Charles R. MacCluer
Michigan State University
PRENTICE HALL, Upper Saddle River, New Jersey 07458
Library of Congress Cataloging-in-Publication Data
MacCluer, C. R.
  Industrial Mathematics : modeling in industry, science, and government / Charles R. MacCluer.
    p. cm.
  Includes bibliographical references (p.
  ISBN: 0-13-949199-6
  1. Mathematical models. I. Title.
  QA401.M33 2000                                         99-27073
  511'.8-dc21                                            CIP
Acquisitions Editor: George Lobell
Editorial/Production Supervision: Rick DeLorenzo/Bayani Mendoza de Leon
Editor-in-Chief: Jerome Grant
Assistant Vice President of Production and Manufacturing: David W. Riccardi
Senior Managing Editor: Linda Mihatov Behrens
Executive Managing Editor: Kathleen Schiaparelli
Manufacturing Buyer: Alan Fischer
Manufacturing Manager: Trudy Pisciotti
Marketing Manager: Melody Marcus
Marketing Assistant: Amy Lysik
Art Director: Jayne Conte
Assistant to the Art Director: Bruce Kenselaar
Editorial Assistant: Gale Epps
Cover photo: From Murphy/Jahn Architects, Inc., "The Master Architect Series, Murphy/Jahn, Selected and Current Works," copyright 1995 Images Publishing Group Pty Ltd., render/photograph by Michael Budilovsky.

©2000 by Prentice-Hall, Inc.
Upper Saddle River, New Jersey 07458

All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Printed in the United States of America

10 9 8 7 6 5 4 3 2
ISBN 0-13-949199-6
Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Prentice-Hall (Singapore) Pte. Ltd.
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro
Dedicated to my mother and father, Colen and Theron.
Contents

Preface  ix
Acknowledgments  xii

1 Statistical Reasoning  1
  1.1 Random Variables  1
  1.2 Uniform Distributions  5
  1.3 Gaussian Distributions  6
  1.4 The Binomial Distribution  7
  1.5 The Poisson Distribution  10
  1.6 Taguchi Quality Control  12
  Exercises  16

2 The Monte Carlo Method  21
  2.1 Computing Integrals  21
  2.2 Mean Time Between Failures  23
  2.3 Servicing Requests  24
  2.4 The Newsboy Problem (Reprise)  27
  Exercises  28

3 Data Acquisition and Manipulation  31
  3.1 The z-Transform  31
  3.2 Linear Recursions  34
  3.3 Filters  36
  3.4 Stability  39
  3.5 Polar and Bode Plots  40
  3.6 Aliasing  46
  3.7 Closing the Loop  47
  3.8 Why Decibels?  51
  Exercises  53

4 The Discrete Fourier Transform  59
  4.1 Real Time Processing  59
  4.2 Properties of the DFT  61
  4.3 Filter Design  63
  4.4 The Fast Fourier Transform  66
  4.5 Image Processing  70
  Exercises  74

5 Linear Programming  77
  5.1 Optimization  77
  5.2 The Diet Problem  80
  5.3 The Simplex Algorithm  81
  Exercises  86

6 Regression  89
  6.1 Best Fit to Discrete Data  89
  6.2 Norms on Rⁿ  93
  6.3 Hilbert Space  94
  6.4 Gram's Theorem on Regression  97
  Exercises  101

7 Cost-Benefit Analysis  105
  7.1 Present Value  105
  7.2 Life-Cycle Savings  106
  Exercises  108

8 Microeconomics  111
  8.1 Supply and Demand  111
  8.2 Revenue, Cost, and Profit  113
  8.3 Elasticity of Demand  115
  8.4 Duopolistic Competition  116
  8.5 Theory of Production  118
  8.6 Leontiev Input/Output  119
  Exercises  121

9 Ordinary Differential Equations  123
  9.1 Separation of Variables  123
  9.2 Mechanics  127
  9.3 Linear ODEs with Constant Coefficients  130
  9.4 Systems  135
  Exercises  142

10 Frequency-Domain Methods  149
  10.1 The Frequency Domain  149
  10.2 Generalized Signals  153
  10.3 Plants in Cascade  157
  10.4 Surge Impedance  159
  10.5 Stability  161
  10.6 Filters  164
  10.7 Feedback and Root Locus  169
  10.8 Nyquist Analysis  173
  10.9 Control  179
  Exercises  184

11 Partial Differential Equations  191
  11.1 Lumped versus Distributed  191
  11.2 The Big Six PDEs  192
  11.3 Separation of Variables  194
  11.4 Unbounded Spatial Domains  213
  11.5 Periodic Steady State  215
  11.6 Other Distributed Models  217
  Exercises  223

12 Divided Differences  231
  12.1 Euler's Method  231
  12.2 Systems  234
  12.3 PDEs  235
  12.4 Runge-Kutta Method  240
  Exercises  240

13 Galerkin's Method  243
  13.1 Galerkin's Requirement  243
  13.2 Eigenvalue Problems  247
  13.3 Steady Problems  249
  13.4 Transient Problems  250
  13.5 Finite Elements  252
  13.6 Why So Effective?  259
  Exercises  262

14 Splines  265
  14.1 Why Cubics?  265
  14.2 m-Splines  267
  14.3 Cubic Splines  269
  Exercises  274

15 Report Writing  277
  15.1 The Formal Technical Report  277
  15.2 The Memo  282
  15.3 The Progress Report  284
  15.4 The Executive Summary  284
  15.5 The Problem Statement  285
  15.6 Overhead Projector Presentations  286
  15.7 Approaching a Writing Task  287
  15.8 Style  287
  15.9 Writer's Checklist  291

References  293
Index  299
Preface

About This Book

Mathematics is unreasonably effective in resolving seemingly intractable problems. The process proceeds in three steps: model the external world problem as a mathematical problem, solve the mathematical problem, then interpret the results. A mathematician in government or industry will be involved at all three steps.

This book is for students about to enter the workforce. They may be well grounded in the fundamentals of mathematics but not in its practice. Although changing of late through the efforts of COMAP, SIAM, and NSA, the graduating student has little experience in modeling or in the particular extensions of mathematics useful in industrial problems. They may know power series but not the z-transform, orthogonal matrices but not factor analysis, Laplace transforms but not Bode plots. Most certainly they will have no experience with problems incorporating the unit $. Mathematicians in industry must be able to see their work from an economic viewpoint. They must also be able to communicate with engineers using their common dialect, the dialect of this book.

Each chapter begins with a brief review of some relevant mathematics which may require further elaboration by the instructor. Then the industrial extension of this same material is introduced via typical applications. The routines which occur in the flow of text are not merely enrichment but instead are an integral part of the text itself. One central thrust of this book is to demonstrate the power of interweaving analytic with computing methods during problem solving. Many exercises require the student to experiment with, or to modify, the MATLAB routines provided. Tedious retyping of the routines is unnecessary since all routines will be available at our anonymous ftp site math.msu.edu down the path pub/maccluer/mm. Other exercises ask the student to generate code themselves. A certain number of exercises are in fact projects, requiring data collection, experimentation, or consultation with industrial experts.

This book is aimed at the senior undergraduate or Master's student of mathematics, engineering, or science. The writing style is by design sparse and brief.
To the Instructor

Let me tell you how I use this book. I feel it is crucial that students obtain experience in group project development. The nearly universal opening gambit of job interviewers is to ask student applicants to describe their group project experience.¹ To provide this experience I require three projects to be completed during the course — the first two done by groups of size two or three, the last solo. The "deliverable" from each project is a formal technical report described in Chapter 15.

The projects are chosen with my advice and consent. Often, student groups will propose their own project or a variation on a project from the book. I have had to maintain an open door policy for frequent consultations with the groups as they develop their projects. I return flawed reports and ask that they be resubmitted. But in the end I have been astonished by the quality of many of the finished reports. Once reports have been completed, I ask each group to select a member to deliver their report to the class as a 12-minute overhead slide presentation (after reading the do's and don'ts of such talks in Chapter 15). This experience is transformational for the student.

I strongly believe in weekly homework — lots of it. I encourage my students to work in study groups but require write-ups to be done individually. Above all, each student must do simulation and numerical experimentation individually. The worst case is for one member of a study group to do all the computer work for the group. Simulation homework is in the form of a memo, plus source code, plus data. I point out that source code is more individual than fingerprints and have been able to head off this problem from the start. A major objective of my course is for each student to develop a symbiosis with their silicon-based helpmate.

Students lack experience in taking "first cuts," at biting off pieces of a problem. I constantly urge them to cut the problem down to solvable size, estimate, do a special case. I ask, "What could you have on my desk before 5:00?" I encourage running off to the computer. Insisting on elegant analytic solutions is not cost-effective and is not in the spirit of this course (or of industry).

¹The second question is usually about how they handled group members who did not carry their load.
Ideally, this course would be followed by an Industrial Projects tutorial course, where local industrial representatives would propose problems, then serve (with a faculty member) as liaison to a student project group as they develop a project during the term. At term's end the group would present a formal technical report to both the participating faculty group and to the industrial group. This is patterned after a very successful summer program developed by H. T. Banks and H. T. Tran of North Carolina State University in cooperation with the National Security Agency.
Chapter Interdependence

This book is in large part a collection of independent topics, a survey of the mathematics essential to an industrial mathematician. The only iron-clad dependence is Chapter 2 on Chapter 1. Chapter 10 depends on some basic notions from a sophomore differential equations course reviewed in Chapter 9. Inner product notation is introduced in Chapter 6 and used in Chapters 11-14, but could be introduced as needed. The numerical methods described in Chapters 12-14 could be taught without covering Chapters 9 and 11 by relying only on student experiences from previous courses.
The Symbol *

A single asterisk (*) beside an exercise indicates that the exercise requires some result or technique from an advanced senior or first-year graduate course, or that it may be a bit more difficult. A double asterisk signals that the exercise is quite difficult. On several occasions an asterisk is used to indicate a section or a proof at an advanced level.
Close the Loop

Please let me know about your successes and failures with this book. Above all, tell me about successful projects not suggested in the book. I will post your projects on my Web site: http://www.math.msu.edu/maccluer/index.html
Acknowledgments

This book grew from conversations with George Lobell, mathematics editor of Prentice Hall, but many others contributed to its final form. I am grateful for expert advice from R. Aliprantis, T. V. Atkinson, M. C. Belcher, R. V. Erickson, D. Gilliland, E. D. Goodman, Jon Hall, Frank Hatfield, T. J. Hinds, F. C. Hoppensteadt, R. J. LaLonde, D. Manderscheid, L. Y. Manderscheid, Mark S. McCormick, R. Narasimhan, G. L. Park, R. E. Phillips, Jacob Plotkin, P. A. Ruben, W. E. Saul, V. Sisiopiku, Lee Sonneborn, V. P. Sreedharan, G. C. Stockman, T. Volkening, and John Weng.

I am also indebted to the four reviewers who provided suggestions for improving the material covered in this book. They are Lester F. Caudill, Jr., University of Richmond; Gary Ganser, West Virginia University; Reinhard Illner, University of Victoria; Anne Morlet, Argonne National Laboratory; and Alan Struthers, Michigan Technological University. Clark J. Radcliffe was an especially good resource for engineering matters. Special thanks to Karen Holt, Technical Writing Consultant of the Naval Undersea Warfare Center Division, for assistance with Chapter 15.

Many undergraduate students helped shape this book; I thank M. Alexander, M. S. Bakker, Richelle Brown, C. Crews, R. L. Ennis, J. M. Foster, P. Franckowski, M. C. Hilbert, Matt Johnson, D. Karaaslanli, J. A. Leikert Jr., K. S. Little, Shana Ostwald, D. M. Padilha, Faisal Shakeel, J. K. R. Shirley, J. R. Shoskes, R. D. Smith, K. A. Tillman, and B. R. Tyszka. But much credit belongs to a talented group of graduate students: E. Andries, Reena Chakraborty, K. L. Eastman, M. B. Hall, E-J Kim, J-T Kim, F. Kivanc, S. A. Knowles, J. R. Lortie, T-W Park, and R. L. Rapoport. Good luck to you in your industrial careers.
C. R. MacCluer
Chapter 1
Statistical Reasoning

Individual behavior may be erratic, but aggregate behavior is often quite predictable.
Statistics is second only to differential equations in the power to model the world about us. Statistics enables us to predict the behavior on average of systems subject to random disturbances. This chapter begins with a review of random variables and their cumulative distributions. It is followed by four sections defining the four most useful distributions: the uniform, Gaussian, binomial, and Poisson. The power of statistical reasoning will be apparent in the models constructed in each section. The concluding section, §1.6, is a brief introduction to the Taguchi off-line quality control approach that has revolutionized Japanese industry.
§1.1 Random Variables

An experiment is performed repeatedly, say a person is selected from a large population. A measurement X is taken, say the person's height. After all experimentation is done, we have statistical data about the population — the random variable X gives rise to a cumulative probability distribution function F(x):

    F(x) = probability that X ≤ x    (1.1)

such as shown in Figure 1.1. We assume that the populations are large and that measurements are made with infinite precision so that we are justified in replacing bins of measurement intervals, each with its own incidence of occurrence, with a continuum of probabilities as shown in Figure 1.1. Thus

    prob(a ≤ X ≤ b) = ∫_a^b dF(x) = F(b) − F(a⁻)    (1.2)
and more generally,

    prob(X ∈ S) = ∫_S dF(x).    (1.3)
Figure 1.1. Continuous cumulative distribution function.

Example. What is the life span T (in hours) of a typical 60 W hotel hallway light bulb? The associated cumulative distribution function F(t) is the percentage of the bulbs that have burned out at or before time t. We can with confidence anticipate that F(t) is 0 before t = 0, rising slowly at first, then more quickly as we near and pass the typical life span (1000 hours), thereafter leveling off to approach 1 asymptotically from below. Our intuition is confirmed by the following mortality data supplied by General Electric:

    hrs     400   500   600   700   800   900   1000   1100   1200   1300   1400
    fail     0%    2%    5%   10%   20%   30%    50%    70%    80%    90%    95%
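As a quick illustration of (1.2) — my own sketch, not part of the text — the table can be read as an empirical cumulative distribution in MATLAB. The variable names below are assumptions; the numbers simply transcribe the GE data, and the chance of failure between 800 and 1100 hours is just a difference of two tabulated values of F.

hrs = [400 500 600 700 800 900 1000 1100 1200 1300 1400];
F   = [0 2 5 10 20 30 50 70 80 90 95]/100;     % empirical F(t) at the tabulated hours

% Fraction of bulbs failing between 800 and 1100 hours, per (1.2):
p = F(hrs == 1100) - F(hrs == 800)             % 0.70 - 0.20 = 0.50

% Linear interpolation estimates F between tabulated times, e.g. at 950 hours:
F950 = interp1(hrs, F, 950)                    % about 0.40

Since this F is continuous, the endpoint convention in (1.2) is immaterial here.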
The dF(x) in (1.3) reflects the intuition that since X takes on some values more often than others, it gives rise to a new measure on intervals: the new length of the interval [a, b] is the probability that the values of X fall into [a, b], thus the equation (1.2). Employing this new measure of intervals, one constructs the (Stieltjes) integral as before but now with the infinitesimal dF(x) rather than dx ([Cramer]). For any function Y = g(X) of the random variable X, say the conversion from British to metric units, the expected value of the random variable Y = g(X) is

    E[Y] = ∫_{−∞}^{∞} g(x) dF(x).    (1.4)

In particular,
    μ = E[X] = ∫_{−∞}^{∞} x dF(x)    (1.5)

is the mean or center of mass of the random variable X, while

    ∫_{−∞}^{∞} (x − μ)² dF(x) = E[X²] − μ²    (1.6)

is the variance or moment of inertia about the center of mass, and the quantity σ = √(E[X²] − μ²) is the standard deviation of X. Examples are given below.
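To make (1.5) and (1.6) concrete — again my own sketch, not from the text — the bulb data above can be treated as a discrete distribution whose jumps dF sit at the right endpoint of each 100-hour bin; the 5% of bulbs outlasting 1400 hours are folded in by renormalizing.

hrs = [400 500 600 700 800 900 1000 1100 1200 1300 1400];
F   = [0 2 5 10 20 30 50 70 80 90 95]/100;
dF  = diff([0 F]);                  % jumps of the empirical CDF, playing the role of dF(x)
dF  = dF/sum(dF);                   % renormalize so the jumps sum to 1

mu     = sum(hrs .* dF)             % Stieltjes sum for (1.5): roughly 1000 hours
sigma2 = sum(hrs.^2 .* dF) - mu^2   % E[T^2] - mu^2 as in (1.6)
sigma  = sqrt(sigma2)               % standard deviation, a couple hundred hours

The crudeness is deliberate: coarser or finer binning shifts the numbers slightly, but the pattern of the calculation is exactly (1.5) and (1.6).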
If F(x) has no jumps and is in fact differentiable, the derived function

    f(x) = F′(x)    (1.7)

is called the probability density function, so that

    prob(g(X) ∈ S) = ∫_S g(x) dF(x) = ∫_S g(x) f(x) dx.    (1.8)
Example. Due to a fundamental problem of measurement, the location and momentum of a very small particle cannot both be predicted with certainty. All that is knowable is its wave function ψ and hence a probability density function f(x) = |ψ|². (Think of crimes making up a crime wave.) This wave function ψ is a time-weighted superposition of eigenfunctions (stationary states) of the Hamiltonian operator H, the instrument that observes total energy. For example, the state of lowest energy (ground state) of a quantum particle trapped in the potential-free interval [0, 1] is ψ₀ = √2 sin πx [MacCluer, 1994, p. 134]. Thus the expected (mean) position of a particle in this state is

    ∫_{−∞}^{∞} x dF(x) = ∫_{−∞}^{∞} x f(x) dx = 2∫_0^1 x sin² πx dx = ··· = 0.5.
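The omitted steps are routine; filling them in (my own verification, not in the text), write sin² πx = (1 − cos 2πx)/2 and integrate by parts:

    2∫_0^1 x sin² πx dx = ∫_0^1 x (1 − cos 2πx) dx = 1/2 − ∫_0^1 x cos 2πx dx,

    ∫_0^1 x cos 2πx dx = [x sin 2πx/(2π)]_0^1 + [cos 2πx/(4π²)]_0^1 = 0 + 0 = 0,

so the mean position is indeed 0.5, the midpoint of the interval, as symmetry suggests.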
You cannot hope to dodge some working knowledge of quantum mechanics since it has become routine in industrial applications as mundane as measuring NOₓ in engine emissions. More on this will appear in Chapter 11.

At the other extreme, a cumulative distribution F may be discrete, consisting of steps of height m_i at x = x_i so that

    prob(g(X) ∈ S) = ∫_S g(x) dF(x) = Σ_{x_i ∈ S} g(x_i) m_i,    (1.9)
as shown in Figure 1.2.

Figure 1.2. Discrete cumulative probability distribution function.
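A small MATLAB sketch (mine, not the text's) of such a staircase distribution: a fair die has jumps m_i = 1/6 at x_i = 1, ..., 6, and taking g ≡ 1 in (1.9) reduces the probability of an event S to the sum of the step heights falling in S.

xi = 1:6;                        % jump locations x_i
mi = ones(1,6)/6;                % step heights m_i for a fair die
F  = cumsum(mi);                 % value of F just to the right of each x_i

stairs([0 xi], [0 F])            % staircase plot in the spirit of Figure 1.2
xlabel('x'), ylabel('F(x)')

S = [1 2];                       % the event "roll a 1 or a 2"
p = sum(mi(ismember(xi, S)))     % sum of step heights in S, here 1/3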
Example. Write out π in base 2. It is true but difficult that 0 and 1 occur with equal frequency in this expansion. Thus the cumulative distribution F(x) of the random variable X that returns the entry in a random digit of the expansion is

    F(x) =