
Annals of Mathematics Studies Number 52

ADVANCES IN GAME THEORY

R. J. AUMANN, L. D. BERKOVITZ, M. DAVIS, W. H. FLEMING, A. R. GALMARINO, O. GROSS, J. C. HARSANYI, M. HEBERT, J. R. ISBELL, G. JENTZSCH, S. M. JOHNSON, M. MASCHLER, K. MIYASAWA, J. MYCIELSKI, E. D. NERING, H. NIKAIDO, G. OWEN, B. PELEG, H. RADSTROM, R. A. RESTREPO, J. L. ROSENFELD, C. RYLL-NARDZEWSKI, R. SELTEN, L. S. SHAPLEY, R. E. STEARNS, L. E. ZACHRISSON

EDITED BY
M. Dresher, L. S. Shapley, A. W. Tucker

PRINCETON, NEW JERSEY
PRINCETON UNIVERSITY PRESS
1964

Copyright © 1964, by Princeton University Press All Rights Reserved L. C. Card 63-9985 ISBN 0-691-07902-1 Second Printing, 1971

Printed in the United States of America

PREFACE

This Study carries on the tradition of the four volumes entitled Contributions to the Theory of Games;* it is similar in format, but has a different reason for existence. The field of game theory is now well established and widely diffused through the mathematical world, thanks in part to the success of the earlier volumes; papers on game theory regularly appear in many scientific journals. But with no single, specialized journal as focal point, there remains a serious problem of communication within the field. It is often difficult, for isolated workers and especially for newcomers, to get abreast of developments, find out where the significant new problems lie, and avoid needless duplications of effort.

Accordingly, this Study was conceived as a one-volume cross section of current activity in the theory of games. Contributions were invited from a widely distributed and, it is hoped, representative group of investigators, and twenty-nine papers were finally accepted. Most of these might otherwise have appeared in one or another of the regular mathematics, economics, or operations research journals. It is hoped that the authors will agree that publication of their work in the present volume is a step toward improving the accessibility of the expanding frontiers of the theory.

The scope of this collection was arbitrarily drawn to exclude theoretical essays of primarily nonmathematical content, as well as mathematical applications of primarily nontheoretical interest. Also, no purely expository articles were solicited. Within these limits, however, the editors welcomed variety, not only in subject matter, but also in the level of the mathematical techniques invoked and in the level of abstractness or particularity in the approach. As the browsing reader will quickly discover, the contents of this book are far from homogeneous either in style, in prerequisites, or in pertinence to any one area of interest.

* Annals of Mathematics Studies 24 (1950), 28 (1953), 39 (1957), 40 (1959).

As for subject matter, the editors can point to a good balance, achieved without premeditation, on many different accounts:

two-person vs. n-person games; finite vs. infinite games; extensive vs. normal form; side-payment vs. no-side-payment games; cooperative vs. noncooperative games; existing theory vs. new departures; etc. This evidence of activity on all fronts speaks well for the vitality of the field as a whole, as does the large international authorship of this volume.

The contributions are arranged not by order of receipt, but by subject. It no longer seems appropriate, however, to organize the book into sections, as in earlier volumes, nor to attempt in the Preface to survey the contents from a unified viewpoint. Nevertheless, a brief description of the subjects that are covered will be given here to provide a bird's-eye view of the book as a whole and to supplement the bare titles in the Table of Contents.

The first thirteen papers deal with two-person games. In the first, SHAPLEY explores several topics growing out of the theory of matrix games, particularly their ordinal properties. Papers 2 and 3, by RESTREPO and JOHNSON, concern the solving of particular two-person games; the first is an abstract poker model, the second is a mathematical form of "hide and seek". Papers 4 and 5, by GROSS and NIKAIDO, apply the minimax theorem to point-set topology and mathematical economics, respectively. In Paper 6, ROSENFELD investigates the repeated play of a game matrix in which the values of some of the entries are not known to the players in advance. In Papers 7 and 8, DAVIS and MYCIELSKI carry on the study of infinite games of perfect information begun by Gale and Stewart in Study 28; the second paper makes an application of that discrete-time model to what could be called positional games with continuous time. This type of game is treated systematically in Paper 9 by RYLL-NARDZEWSKI, using general topological and set theoretical notions, and in Papers 10, 11, and 12 by BERKOVITZ and FLEMING, using techniques from the calculus of variations. Finally, ZACHRISSON, in Paper 13, considers games in which the strategies of the two players generate a Markov process; again information is essentially perfect and time may be a continuous parameter.

The last sixteen papers are in the field of n-person games. In Paper 14, ISBELL determines the existence, for various values of n, of a particular kind of n-person simple game. Papers 15 and 16, by SHAPLEY and OWEN, study the solutions of a class of games, mostly simple, that exhibit a certain compound structure. The next two papers, by GALMARINO and HEBERT, study two aspects of the solutions of all four-person constant-sum games. STEARNS, in Paper 19, obtains all solutions of all three-person games in the no-side-payment theory. In Paper 20, JENTZSCH considers the general theory of cooperative n-person games, with and without side payments. Papers 21, 22, and 23, by AUMANN and MASCHLER, MASCHLER, and PELEG, develop and apply a new solution concept of "bargaining set" for cooperative games in characteristic-function form. RADSTROM, in Paper 24, introduces a solution concept leading to certain "basic" imputations, each being associated with a hierarchical coalition structure of a certain kind. NERING, in Paper 25, advances still another solution concept for characteristic function games, based on the merging of players into successively larger coalitions. A common feature of these new solution concepts is that they all generalize the celebrated three-point classical solution to the zero-sum three-person game. MIYASAWA, in Paper 26, formulates a bargaining model for general cooperative games in normal form that leads to a one-point solution, or "value". SELTEN, in Paper 27, axiomatizes the valuation of n-person games in the classical theory, basing his considerations primarily on the extensive form. AUMANN, in Paper 28, considers infinite games in extensive form, in which continua of choices at each move as well as plays of infinite length are permitted; his results on behavior strategies would apply most directly to the noncooperative theory. Finally, HARSANYI, in the concluding paper, undertakes to analyze and refine the equilibrium-point concept with the goal of providing a unique solution to noncooperative games.

*  *  *

The editing of this volume was done primarily at The RAND Corporation, which provided the services of two of the editors, as well as secretarial help. The Office of Naval Research supported the work of the third editor through its Logistics project at Princeton University. Thanks are also due that project for providing the English translation of Paper 18, and to The RAND Corporation for making available its translation of Paper 20. The burden assumed by several other authors in supplying texts in English, not their native language, should not be overlooked. A host of anonymous referees (including a majority of the contributors as well as several others) gave willing assistance; many of them performed beyond the call of duty. The typing of the master copy has been the painstaking and efficient work of Mrs. Margaret Wray. Finally, the Princeton University Press, through its Science Editor Jay Wilson, has been constantly interested and helpful throughout the preparation of this volume. To all those who have given their assistance, the editors offer their sincere thanks.

NOTE

A few of the papers in this Study were among the 39 papers presented at the Princeton University Conference on Recent Advances in Game Theory, October 4-6, 1961, directed by Oskar Morgenstern and A. W. Tucker. Proceedings of this Conference were prepared by Michael Maschler for the Econometric Research Program, Department of Economics, Princeton University, and were privately printed for members of The Princeton University Conference.

M. Dresher
L. S. Shapley
A. W. Tucker

December 1963

CONTENTS

Preface

1. Some Topics in Two-Person Games, by L. S. Shapley
2. Games With a Random Move, by Rodrigo A. Restrepo
3. A Search Game, by Selmer M. Johnson
4. The Rendezvous Value of a Metric Space, by O. Gross
5. Generalized Gross Substitutability and Extremization, by Hukukane Nikaido
6. Adaptive Competitive Decision, by Jack L. Rosenfeld
7. Infinite Games of Perfect Information, by Morton Davis
8. Continuous Games of Perfect Information, by Jan Mycielski
9. A Theory of Pursuit and Evasion, by C. Ryll-Nardzewski
10. A Variational Approach to Differential Games, by Leonard D. Berkovitz
11. A Differential Game Without Pure Strategy Solutions on an Open Set, by Leonard D. Berkovitz
12. The Convergence Problem for Differential Games, II, by Wendell H. Fleming
13. Markov Games, by Lars Erik Zachrisson
14. Homogeneous Games, III, by J. R. Isbell
15. Solutions of Compound Simple Games, by L. S. Shapley
16. The Tensor Composition of Nonnegative Games, by Guillermo Owen
17. On the Cardinality of Solutions of Four-Person Constant-Sum Games, by Alberto Raul Galmarino
18. The Doubly Discriminatory Solutions of the Four-Person Constant-Sum Game, by Michael H. Hebert
19. Three-Person Cooperative Games Without Side Payments, by R. E. Stearns
20. Some Thoughts on the Theory of Cooperative Games, by Gerd Jentzsch
21. The Bargaining Set for Cooperative Games, by Robert J. Aumann and Michael Maschler
22. Stable Payoff Configurations for Quota Games, by Michael Maschler
23. On the Bargaining Set M0 of m-Quota Games, by Bezalel Peleg
24. A Property of Stability Possessed by Certain Imputations, by Hans Radstrom
25. Coalition Bargaining in n-Person Games, by Evar D. Nering
26. The n-Person Bargaining Game, by Koichi Miyasawa
27. Valuation of n-Person Games, by Reinhard Selten
28. Mixed and Behavior Strategies in Infinite Extensive Games, by Robert J. Aumann
29. A General Solution for Finite Noncooperative Games Based on Risk-Dominance, by John C. Harsanyi

ADVANCES IN GAME THEORY

SOME TOPICS IN TWO-PERSON GAMES

L. S. Shapley

INTRODUCTION

This note reports on half-a-dozen loosely related excursions into the theory of finite, two-person games, both zero-sum and non-zero-sum.

The

connecting thread is a general predilection for results that do not depend upon the full linear structure of the real numbers.

Thus, most of our theor­

ems and examples are invariant under order-preserving transformations applied to the payoff spaces, while a few (in § 1) are invariant under the group of transformations that commute with multiplication by

— 1.

We should make it clear that we are not interested in "ordinal" utility, as such, but rather the ordinal properties of "cardinal" utility. The former would require a conceptual reorientation which we do not wish to undertake here.

Nevertheless, the ordinalist may find useful ideas in this

paper. Rather than summarize the whole paper here, we shall merely take a sample; for a more synoptic view the reader is invited to scan the section headings.

Consider first the matrix game shown in the margin. The solution is easily found as soon as we recognize that the game is symmetric in the players. Indeed, if we map each player's i-th strategy into the other's (i+1)st (mod 4), we merely reverse the signs of the payoffs. It follows that the value is 0 and that there is a solution of the form (a, b, a, b), (b, a, b, a). (See § 1.)

[Marginal 4-by-4 payoff matrix]

Next, consider the class of matrix games in which the payoffs are ordered like those in the matrix below. In all these games, player I's third strategy is never playable, although it is not dominated in the usual sense. To verify this, observe that if the value of the 2-by-2 subgame in the upper left corner is greater than "3" the third row is dominated by a linear combination of the first two rows, while if it is less than "7" the third column can be dropped, and then the third row. (See § 3.)

    10   0   8
     1   9   7
     2   3   6
     3   4   5
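As a quick numerical check, not part of the original paper, the particular matrix displayed above can be handed to a linear-programming solver to confirm that the third row receives zero probability in the optimal strategy even though no single row dominates it. The sketch assumes scipy is available.

```python
# Sketch: solve the 4-by-3 game above by LP and inspect the optimal row mixture.
import numpy as np
from scipy.optimize import linprog

A = np.array([[10, 0, 8],
              [ 1, 9, 7],
              [ 2, 3, 6],
              [ 3, 4, 5]], dtype=float)
m, n = A.shape

# Row player maximizes v subject to (x^T A)_j >= v for every column j, x a distribution.
c = np.zeros(m + 1); c[-1] = -1.0                      # linprog minimizes, so use -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - (x^T A)_j <= 0
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i x_i = 1
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)], method="highs")
x, v = res.x[:m], res.x[-1]

print(v)                 # 5.0 -- the value of the 2-by-2 upper-left subgame
print(np.round(x, 3))    # [0.444 0.556 0.    0.   ]: the third row is never played
print(any(np.all(A[i] >= A[2]) for i in (0, 1, 3)))    # False: no single row dominates row 3
```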

Finally, consider the non-zero-sum game with outcome matrix as shown at right. Player I rates the outcomes A > B > C; player II rates them B > A > C. If we apply the algorithm of "fictitious play" to this game, a strange thing happens. Rather than converging to the unique equilibrium point (at which all probabilities are equal), the sequence of mixed-strategy pairs generated by the algorithm oscillates around it, keeping a finite distance away. (See § 5.)

    A   C   B
    B   A   C
    C   B   A

The five main sections of this paper are essentially independent, both logically and topically. Our reason for combining them into a single paper is the hope that they will appeal to a single audience. Much of this work has already appeared in short RAND Memoranda [15], and some of it has been cited in the published literature [5], [11]. In reworking this material, however, we have added many new results.

§ 1. SYMMETRIC GAMES

1.1. Discussion

It is easy to see that a two-person zero-sum game can be symmetric in the players without having a skew-symmetric payoff matrix. "Matching Pennies" is a simple example; another is shown at the right; and another is given in the Introduction. The point is, of course, that an automorphism of the game that permutes the players can simultaneously shuffle the labels of the pure strategies.

     1   1  -1
    -1   0  -1
    -1   1   1

It would be interesting to know something about the abstract structure of such automorphisms.* As a first step, we shall show that the matrix of a symmetric game can be decomposed into an array of square blocks in such a way that (a) each block has constant diagonals (in one direction), (b) the array as a whole is skew-symmetric in a certain sense, and (c) the size of each block is a power of 2. This is illustrated at left for the 3-by-3 example given above.

[Block decomposition of the 3-by-3 example]

The "power of 2" property, (c), is quite interesting. It tells us, for example, that the 6-by-6 game illustrated below, which is obviously symmetric and has constant diagonals, can nevertheless be decomposed into smaller blocks. The 4-by-4 and 8-by-8 analogues of this matrix, on the other hand, do not decompose.

[6-by-6 example matrix with constant diagonals]
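Before the formal development in § 1.2, the symmetry notion can be made concrete computationally. The sketch below is not part of the paper; it assumes the generalized condition recoverable from this section (A is symmetric in the players when A = -P A^T Q^T for some permutation matrices P and Q) and brute-forces P, Q for the 3-by-3 example of § 1.1, which is symmetric without being skew-symmetric.

```python
# A small, assumed illustration: brute-force search for permutations P, Q with
# A = -P A^T Q^T, using the 3-by-3 example of Section 1.1.
from itertools import permutations
import numpy as np

A = np.array([[ 1,  1, -1],
              [-1,  0, -1],
              [-1,  1,  1]])

def symmetry_witness(A):
    """Return row/column permutations (p, q) with A == -P A^T Q^T, or None."""
    n = A.shape[0]
    for p in permutations(range(n)):           # rows of -A^T permuted by p
        for q in permutations(range(n)):       # columns of -A^T permuted by q
            if np.array_equal(A, -A.T[np.ix_(p, q)]):
                return p, q
    return None

print(np.array_equal(A, -A.T))   # False: the matrix is not skew-symmetric
print(symmetry_witness(A))       # ((0, 1, 2), (2, 1, 0)): symmetric after relabeling
```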

1.2. The Main Theorem

Let A, B, ... denote n-by-n game matrices (n fixed throughout), and let P, Q, R, ... denote permutation matrices of the same size. Primes will denote transposition, which is equivalent to inversion for permutation matrices. We define the following matrix properties:

equivalence: A

=

B A = PBQ'

symmetry:

A

e

2 A == —A 1 .

conjugates:

P

Certain subclasses of

Q

P = RQR'

These subclasses exhaust

If

P, Q.

for some R.

2, will also be of interest:

A e 2 (P,Q) B = — P B 1Q 1

LEMMA 1.

for some

for some

2 , but are not disjoint.

PQ w RS

then

B = A.

Indeed, we have

2(P,Q) = 2(R,S).

4

SHAPLEY PROOF.

that since

Given

B = — RB'S'.

PQ = TRSTr

and

A = — PA'Q1, we require

As it happens,, B = T'AP'TR

B = A

serves the purpose.

such

Indeed,

P r = QTS'R'T', we have

B = T '(— PA'Q') (QTS1R'T')TR = -T'PA'TS' = - (T1A P 1T) 'S 1 = -RB 'S'.

Note that where

I

Z(P,Q)

depends only on

PQ.

If we define

2(P) = 2(P,I),

is the identity, then Lemma 1 may be restated:

LEMMA l 1.

If

P ~ Q

then

Z(P) = 2 (Q).

It can be shown that the converse is valid.

Since two permutations

are conjugate if and only if their cyclic factors have matching periods, there is thus a one-to-one correspondence between the classes partitions of

n.

LEMMA 2.

PROOF. all k. fore

But

If

Q

Suppose

is an odd power of

A = — PA' .

P

A = — P(— P A 1)' = P A P 1; hence

and the

2(Q) 3 2 (P)-

A e Z(P

21,4-1

)

A = PkAP'k = - Pk+1A ' P ,k.

for There­

A € Z(Pk+1, Pk ) = Z(p2k+1).

THEOREM 1.1.

Every symmetric game

A e 2

to a game

satisfying

for some permutation

PROOF. k

c2 , with

A e 2(PC ).

1.3.

then

We must show that

B

B = — RB'

R, the order of which is a power of

form

2(P)

This stronger result is not used in what follows.

Thus

Let c

A e 2(P). odd.

Pc

2.

The order of

Then the order of

will serve as the

R

is equivalent

P P

c

can be represented in the will be

k 2 .

of the theorem.

By Lemma 2,

Q.E.D.

Block Decomposition The decomposition into blocks can now be described.

By proper choice

TWO-PERSON GAMES ofB, we can give

5

R the form:

R = (1 2 3 A^)(A j+l . . . X j+7v2) ••• (••• n),

where of

the periods

A^-by-A^

A^ are powers of

blocks

ant*

by) thestructure set forth

2.

B

the equation

now breaks up into B = — RB'

in the following theorem.

a square array

implies (and The proof

is

implied

is straight­

forward.

THEOREM 1.2.

A = min(A

, A ). N l~i v'

Let

has constant diagonals, . ..,

exist such that for

b^j

(If

A^

sub-blocks of size block

B

B

|jv

ij

in that block,*

if i — j = h (mod A ) .

=

A^, then

The block

in the sense that numbers

B

breaks

A .)

In

up into identical square

thesymmetrically located

the same numbers appear; we have

by

= -P A_ h+1

if

i - j = h (mod A ) .

In particular, along the main diagonal of the array (\d = v ) ,

we have

COROLLARY 1. exist only for

COROLLARY 2.

for

h = 1, . . ., A.

Indecomposable symmetric n

a power of

n-by-n games

2.

Every symmetric game of odd size has a

zero in its payoff matrix.

1.4.

Solutions To solve a symmetric game we may (a)

TF--------------We write ij

for the ordered pair

replace the

(i, j).

B^v

by their

6

SHAPLEY

average values,

(b)

solve the resulting skew-symmetric matrix,* and (c)

distribute the mixed-strategy probabilities for each block equally among its constituent pure strategies.**

Not every solution of the

original can be obtained in this way, however. at the right, for example,

(0, 2/3, 1/3, 0)

In the game

-1

-2

-2

2

2

4 -4 -4

4

Symmetric Nonzero-sum Games There is a direct extension to nonzero-sum games.

matrix pairs A^ = PB-^Q1 and

1

2 -2

is a basic

(extreme) optimal strategy of each player.

1.5.

2 -2

1 -1

(A^, A 2) and

( A ^ A^)

and

(B^, B 2)

A 2 = PI^Q' •

equivalent if, for some

Let us call

are equivalent.

Let us call the

(A^, A

P, Q, both

symmetric if

(A^,

Then the following counterpart to Theorem 1.1

can be established by essentially the same proof:

THEOREM 1.3.

Every symmetric nonzero-sum games (A-^, A

is equivalent to a game for some permutation

of the form

(B^, B 2)

R of order a power of

=(RB^ RB^)

2.

The description of the block decomposition remains much as before, though the "main diagonal" loses some of its special significance.

Corollary

1 remains valid, but not Corollary 2.

§ 2.

2.1.

SOME THEOREMS ABOUT SADDLEPOINTS

A Condition for the Existence of a Saddlepoint

THEOREM 2.1.

If

A

is

the matrix of a zero-sum two-

person game, and if every 2-by-2 submatrix of saddlepoint, then

PROOF.

Let

A

A

has a

has a saddlepoint.

val[A] = v.

Let

j

be the index of a column having

the minimum number, n , of entries greater than

v.

Suppose

tt

> 0; then, for

See Kaplansky [10] and Gale, Kuhn, and Tucker [7]. **

See for example, Gale, Kuhn, and Tucker [8 ], application (e).

TWO-PERSON GAMES some

i

have

a^.. , < v

7r

we have

a^

> v.

for some

entries greater than

maining entries

> v

a i'j' ^ v — a i'j’

has no saddlepoint,

Since the value of the game is only j '.

But the column indexed by

2.2.

jf

v, we must

has at least

v, too many to be paired off against the

of the other column.

Thus., for some

i'

7T— 1

re­

we have

Since the 2-by-2 submatrix: j

j'

i

>V

i'

0

v.

was incorrect.

Hence there is a

Similarly there is a row with no

Q.E.D.

Detached Rows and Columns The hypothesis of Theorem 2.1 actually imposes a very special struc-

ture on the matrix

A.

Let us say that the p

max a . < max

j Similarly,

the

th

row of

A

is detached if

min a ...

PJ “ it*

1J

j

column is detached if

min a . > min i lq J

max a ... i 1J

Detachment obviously implies domination.

For 2-by-2 matrices,

the existence

of a detached row or column is equivalent to the existence of a saddlepoint.

THEOREM 2.2.

If every 2-by-2 submatrix of

saddlepoint, then

PROOF. saddlepoints.

A

By Theorem 2.1, both

val[A'].

If

A

and

Hence there is a column of

val[A], as well as a column of

A

has a

has a detached row or column.

A

A

(row of

A'

(itstranspose)

have

with noentries greater

than

A 1) with no entries less than

val[A] < val[A'], then these columns are distinct, and the

latter is a detached column.

Similarly,

if

val[A] > val[A'], there is a

8

SHAPLEY

detached row.

If

val[A] = val[A'] = v, then there is either a detached row

or column, or a saddlepoint case, a^j = a ^

= v, all

obtained by deleting row either is

p

> v

2.3.

or or

q

pq

common to both

A

and

A 1.

In the latter

i,j, and we can use the fact that the submatrix p

and column

q

has a saddlepoint to show that

is detached, depending on whether the value of the submatrix

< v.

Q.E.D.

A Generalization Theorem 2.1 can be generalized, after a fashion.

Let us say that a

matrix is "in general position" if no two collinear entries are equal.

THEOREM 2.3.

Let

position,

and let

submatrix

of A

A

be an m-by-n matrix in general

2 < r < m, 2 < s < n. has a saddlepoint, then

If every r-by-s A

has a saddle-

po int.

PROOF.

It suffices to prove the case where

r = m > 2

s = n— 1 > 2; the rest will follow by induction and symmetry.

Let

and A4

denote

___ A

with the q

of

A4

jq's

column deleted.

(which is unique, since are distinct, for

Let A

i j be the location of the saddlepoint q q is in general position). If all of the

q = 1, ..., n, then every

column of

A

will contain

one of the points

i j. Since each a. . is a column maximum, one of them QQ 1 1 44 qJ q will be the maximum of the whole matrix. On the other hand, it must also be the (strict) minimum of its row in This impossibility implies that the q f

t-

A.

Q.E.D.

A 4 , which contains at least two entries. Jq's

Then it is apparent that the point

are nQt a H

distinct.

iqjq = itj t

At right we illustrate what can happen if two collinear entries are equal.

Every 3-by-2 submatrix has a saddlepoint,

but not the full matrix.

Definitions.

= j

1 0 - 1 -2

0

2

2 - 1 3

§ 3.

3.1.

Let

is a saddlepoint of

ORDER MATRICES

The Saddle

By a line of a matrix we shall mean either a row or a column.

Two

TWO-PERSON GAMES

9

numerical matrices of the same size will be called order equivalent if the elements of corresponding lines are ordered alike.

An order matrix,

an equivalence class of order-equivalent numerical matrices.

ft, is

Abstractly,

it

may be regarded as a partial ordering pairs

< on the set I (ft) of all index Ct ij, with the property that collinear points are always comparable,

while noncollinear points are never comparable, except as a result of trans­ itivity. If

K c

I (ft)

is a set of index pairs, we shall

set of first members, ^ In other words, columns, and

for the set of

second members, and

is the smallest set of rows,

K

write K-^ K

for the

for

K-^ x

the smallest set of

the smallest submatrix, that covers

K.

If

K = K

then

K

is rectangular. A generalized saddle point gular

set

K c= I (ft)

(GSP) of an order matrix

such that (1)for each

i ^ K-^

ft is a rectan­

there is a

pe

pj > ij for all j e K 9, and (2) for each j ^ K 9 there is a q e K 9 Ct __ z z with iq < ij for all i e K-. .* (Note the strict inequalities.) A GSP that Ct contains no other GSP is called a saddle. with

THEOREM 3.1.

Every order matrix has a unique saddle.

The proof consists in showing that the intersection of two GSP's is a GSP.

Let

K

L

and

matrix

K

and

belonging to

thereis a

L

be two GSPs of

K 0 L f , since

Then certainly

p

in

ft.**It will suffice to

K-^ ft

with

find a

p 1 e K-^ with that

we can

find a

thenp ,M e K-j^

p" e L-^

pj > ij

property, since with

p"j > p 1j

showthat for all K

j e ^

is a GSP.

for all

n L-^

that majorizes row

The saddle of

ft

sequenceterminates. i

on

will be denoted

for each H L2 * If

p'

j e K2 H

can be found bearing the same relation to

strict inequalities ensure that the row in

ft.

must both carry the optimal mixed strategies ofany numerical

L2 . by S(ft).

p", etc.

i ^ K-^ But we can £ L-^, then

If

p" £ K^, The

That is, there

is a

Q.E.D. It contains

everything

Compare [5], pp. 41-42. ** Bass [1] found a way to prove to optimal strategies.

K fl L =f (|)

directly, without reference

10

SHAPLEY

relevant to the solutions of the numerical games particular,

A

belonging to

ft.

In

it contains Bohnenblust1s "essential" submatrix* and the Shapley-

Snow "kernels."** We may define a weak GSP by using nonstrict inequalities

>, ,

where

y '(uu)

is like

y(uu)

except for

y^(uu) = 0

and

y^ (uu) = yg (^) + y^C^)-

Hence, by (3.2)

val[A(uu)] > val[A^] + P||y* — y 1(uo)|| — 4a^/uu,

andy '(uu) y (uj) But

converges toy*,

- y*> 0.

A(uu) e Cfc

Thus both

by Lemma 4. r

and

for all positive

LEMMA 6 .

If

®

uu.

Since

yq(w) "* 0* we

s are active in Hence

is obtained from

A(uu)

rs e C(Ct).

have

for large

uu.

Q.E.D.

Ct by the deletion (as

in Lemma 5) of a strictly majorized row or a strictly minorized column, then

PROOF.

C((B) £ C(Ct).

A strictly dominated strategy can never be active.

THEOREM 3-4.

If

Ct

is in general position, then

R# (Ct) c C(O).

PROOF.

The theorem follows directly from Lemma 5 (and its "row"

counterpart), Lemma 6 , and the definition of restricted residual.

3.7.

The Antisaddle We close this part with a simple result concerning antisaddlepoints,

i.e., saddlepoints of the negative matrix.

THEOREM 3.5.

A strict antisaddlepoint of

be in the center, unless

Ct

is 1-by-l.

Ct

cannot

TWO-PERSON GAMES PROOF.

Suppose

pq e C(A), and let

A

S(- v

and

xM(v)y < v.

y

solutions v < v.

Let

x

be optimal in M(v)for player

Subtracting, we obtain

v - v < x (M(v) - M (v ) )y = xQy0 (v - v) .

Hence

X q = Yg = I j and

x

and

y

are pure.

Thus, for all

i f 0, j f 0,

a0j > v > v > a.0 .

This contradicts

(4.1).

Existence. ing.

The function

F(a) = val[M(a)]

is monotonic nondecreas­

Hence, by (4.2), we have

F(min aQ .) < F ( v J ; j+o °J 1 in other words

v^ —

is continuous; hence

COROLLARY.

F(v^) < 0.

Similarly, v ^ — F C ^ ) > 0.But

it has a zero in

a

— F(a)

[v^, ^ 2 ]’ ’ Q.E.D.

In the determinate case,

v is equal to

the true v alue.

We note that if (4.1) is not satisfied, then all values in the interval

[max a^Q, min a Q ^ ] From the fact that

creasing functions,

are solutions of (4.4), and no others. F(a)

and

a — F(a)

it follows that the sequence

are both monotonic nonde­ b, F(b), F(F(b)),

arbitrary) converges monotonically to a solution of (4.4). ful in making sharper numerical estimates; e.g., we have

...

(b

This can be use­ F(v^) < v < F(v2)j

24

SHAPLEY

etc. In the first indeterminate game of § 4.2 we have In the second example we also have

v = 0, by symmetry.

v = 0; thus, the first player's advantage

in this game, such as it is, vanishes if time is made discrete.

This feature,

also present in the order-equivalent variants, shows that strict inequality need not hold in Theorem 4.5 for indeterminate games.

§ 5.

5.1.

FICTITIOUS PLAY IN NON-ZERO-SUM GAMES

Discussion The method of fictitious play (FP) resembles a multistage learning

process.

At each stage, it is assumed that the players choose a strategy

that would yield the optimum result if employed against all past choices of their opponents.

Various conventions can be adopted with regard to the first

move, indifferent alternatives, simultaneous vs. alternating moves, and weighting of past choices.

The method can meaningfully be applied to any

finite game, and to many infinite games as well.

(See [3],

[4],

[5] pp. 82-

85, and [14].) It was once conjectured that the mixed strategies defined by the accumulated choices of the players would always converge to the equilibrium point of the game, or, in the event of nonuniqueness, compatible equilibrium points.

to a set of mutually

This is the natural generalization of

Robinson's theorem [14] for the zero-sum two-person case; it was recently verified by Miyasawa [12] for the special case of two players with two pure strategies apiece. The trouble begins, as we shall see, as tegy for

each player.

soon as we add a third stra­

It appears, intuitively, that this size is

to produce enough variety;

if

FP

necessary

is to fail, the game must contain elements

of both coordination and competition.

Our counterexamples include a whole

class of order-equivalent games, and thus do not depend on numerical quirks in the payoff matrices; nor are they sensitive to the minor technicalities of the

FP

algorithm.

It is clear that games with more players, or with more

strategies per player, can exhibit the same kind

of misbehavior.
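To see the misbehavior concretely, the following sketch (not part of the paper) runs simultaneous fictitious play on one numerical instance of the game analyzed in § 5.2. The payoffs 2 > 1 > 0 standing for a_i > b_i > c_i, and the analogous values for player II, are an assumption; only the ordering matters.

```python
# Simultaneous fictitious play on one instance of the Section 5.2 game.
# The numerical payoffs (2, 1, 0) are an assumption; only their ordering matters.
import numpy as np

A1 = np.array([[2, 0, 1], [1, 2, 0], [0, 1, 2]], dtype=float)  # player I rates A > B > C
A2 = np.array([[1, 0, 2], [2, 1, 0], [0, 2, 1]], dtype=float)  # player II rates B > A > C

counts1 = np.array([1.0, 0.0, 0.0])   # both players start with their first strategy
counts2 = np.array([1.0, 0.0, 0.0])
for _ in range(100000):
    x = counts1 / counts1.sum()       # empirical mixture of player I so far
    y = counts2 / counts2.sum()       # empirical mixture of player II so far
    counts1[np.argmax(A1 @ y)] += 1   # I best-replies to II's past choices
    counts2[np.argmax(x @ A2)] += 1   # II best-replies to I's past choices

print(np.round(counts1 / counts1.sum(), 3),
      np.round(counts2 / counts2.sum(), 3))
# The empirical mixtures stay a finite distance from the equilibrium (1/3, 1/3, 1/3),
# cycling through longer and longer runs of pure-strategy pairs.
```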

5.2. A Class of Nonconvergent Examples

We shall elaborate slightly on the game described in the Introduction, to eliminate any question of "degeneracy."

tion, to eliminate any question of "degeneracy.” shown at right. ai ^

We assume that

^ ^i* ^or

a^ > b^ > c^

i B 1) 2, 3.

game is not constant-sum.

The payoff matrices are

and

It follows that the

It is easily shown that

the equilibrium point must be completely mixed (all II strategies active), and hence unique. For simplicity, we shall assume that the

FP

ultaneously, and that the first choice pair is 11. of 1

11

in the

again,

FP

sequence.

choices are made sim­

Consider any occurrence

The next choice of player I will certainly be

since that strategy will have become more desirable.

either stay with

1, or shift to

3, since

3.

Eventually,

Thus, after each run of

11

in fact, he must shift to we will find a run of

By a similar argument, this will be followed by a run of of

32, 22, 21, 11,

...,

Player II will

33, and then runs

in a never-ending cycle.

Suppose that a run of

11

is just about to begin.

Let

the current "accumulated-choices" vector for player I (thus, of times he has chosen row H

13.

i), and let

Y

X

represent

is the number

be the same for player II.

Let

denote the current "comparative-payoffs" vector for player I (thus, H-^ =

Yia i + '^■ H-j^ — H p

and we have

> Hp

But

< Hp

26

SHAPLEY

al ~ C1 r 13 ^ a 3 - b 3 r ll*

Let

r 33

be the length of the

33

run that follows.

By analogous reasoning

we have

al ” Y1

r33 ^ a3 - P3 r13"

Repeating the argument four times more, we obtain / 3 (5-1)

rh

where

r^

(5.1)

is greater than

- Viix /c,

(■)

(04 , 04 , 02; e5, 03, e)/ce, etc.,

is a normalizing constant.

(O)

The unique equilibrium point is

(1, 1, 1; 1, 1, 1 )/3•

Figure 2

BIBLIOGRAPHY

[1]

BASS, H., Order Matrices, Junior Paper, Princeton University, 1954.

[2]

BOHNENBLUST, H. F., KARLIN, S. and SHAPLEY, L. S., "Solutions of discrete, two-person games," Annals of Mathematics Study No. 24 (Princeton, 1950), pp. 51-72.

[3]

BROWN, G. W . , "Iterative solution of games by fictitious play," Activity Analysis of Production and Allocation, John Wiley and Sons, 1951, pp. 374-376.

[4]

DANSKIN, J. M . , "Fictitious play for continuous games," Naval Research Logistics Quarterly, Vol. 1 (1954), pp. 313-320.

[5] DRESHER, M., Games of Strategy: Theory and Applications, Prentice Hall, 1961.

SHAPLEY

[6]

EVERETT, H., "Recursive games," Annals of Mathematics Study No. 39, (Princeton, 1957), pp. 47-48.

[7]

GALE, D., KUHN, H. W. and TUCKER, A. W . , "On symmetric games," Annals of Mathematics Study No. 24 (Princeton, 1950), pp. 81-87.

[8 ]

GALE, D., KUHN, H. W. and TUCKER, A. W . , "Reductions of game matrices," Annals of Mathematics Study No. 24 (Princeton, 1950), pp. 89-96.

[9]

GALE, D. and SHERMAN, S., "Solutions of finite Annals of Mathematics Study No. 24 (Princeton,

two-person games," 1950), pp. 37-49.

[10]

KAPLANSKY, I., "A contribution to von Neumann's theory of games," Annals of Mathematics, Vol. 46 (1945), pp. 474-479.

[11]

KARLIN, S., Mathematical Methods and Theory in Games, Programming and Economics, Addison-Wesley, 1959.

[12]

MIYASAWA, K., On the Convergence of the Learning Process in a 2 x 2 Non-zero-sum Two-person Game, Economic Research Program, Princeton University, Research Memorandum No. 33 (1961).

[13]

NASH, J. F., "Non-cooperative games," Annals of Mathematics, Vol. 54 (1951), pp. 286-295.

[14]

ROBINSON, J., "An iterative method of solving a game," Annals of Mathematics, Vol. 54 (1951), pp. 296-301.

[15a]

SHAPLEY, L. S., The Noisy Duel: Existence of a Value in the Singular Case, The RAND Corporation, Rm-641, July, 1951.

[15b]

SHAPLEY, L. S., Order Matrices. September, 1953.

I, The RAND Corporation, RM-1142,

[15c]

SHAPLEY, L. S., Order Matrices. September, 1953.

II, The RAND Corporation, RM-1145,

[15d]

SHAPLEY, L. S., A Condition for the Existence of Saddle-points, The RAND Corporation, RM-1598, September, 1955.

[15e] SHAPLEY, L. S., Symmetric Games, The RAND Corporation, RM-2476, June, 1960.

[15f] SHAPLEY, L. S., On the Nonconvergence of Fictitious Play, The RAND Corporation, RM-3026, March, 1962.

[16]

SHAPLEY, L. S., "Stochastic games," Proceedings of the National Academy of Sciences, Vol. 39 (1953), pp. 1095-1100.

[17]

SHAPLEY, L. S. and SNOW, R. N., "Basic solutions of discrete games," Annals of Mathematics Study No. 24 (Princeton, 1950), pp. 27-35.

[18]

SION, M. and WOLFE, P., "On a game without a value," Annals of Mathematics Study No. 39 (Princeton, 1957), pp. 299-306.

[19]

VON NEUMANN, J. and MORGENSTERN, 0., Theory of Games and Economic Behavior, Princeton, 1944.

L. S. Shapley’ The RAND Corporation

GAMES WITH A RANDOM MOVE Rodrigo A. Restrepo

§ 1.

INTRODUCTION

In some models of poker investigated by von Neumann [1],

[2], Gillies

[2], Mayberry [2], Karlin [3 ] and others, the strategies are functions of the outcome of a random move.

This move corresponds to the dealing of a hand

from a deck of cards, each possible hand being identified with some point in the real line.

It seems desirable to include some of these examples in a

more general class of games to which a general method of attack is applicable. This paper considers a class of games that includes the first model in [3].

Here the random move is described by two uniformly distributed

random variables with values

x

and

y

in the unit interval.

and minimizing strategies are n-tuples any

g(y) = (g-j_(y), g2 (y)^

• ••.» gn (y))

f(x) = (f-^(x), f2 (x), respectively.

The maximizing . .., fn (x))

These strategies are

designed to fit a situation where, having received some information about the outcome of the random move, each player may select one of several courses of action from a finite set of alternatives,, the number of alternatives being the same for both players.

The number

the first player, knowing only the value Consequently

f

f^(x)

will be the probability that

x, will select the i*"*1 action.

will satisfy the restriction

0 < f^(x) < 1, and if the

different courses of action are mutually exclusive, additional restrictions must be imposed on the strategies. strategies.

Similar remarks apply to the minimizing

Thus, the following strategy spaces may be considered:

29

RESTREPO

30

JF

= {f | 0 < fi (x) < 1,

all

x

and

i}

g

= lg I 0 < g i (y) < 1 ,

all

y

and

i}

n Z f (x) < 1, i=l

all

x]

9 ° = £g € S I -r2 g-f(y) < 1* i=l ±

all

y} .

i = 1 - d-T^T

^

it is

'

A CHARACTERIZATION OF THE OPTIMAL STRATEGIES

Two strategies max P(f, g*) f

fL

31

f*

and

g*

are optimal ifand only

P(f*, g*) = min P(f*, g ) . g

P(f*j g*) = —

2 b. G . (1) + max i=l 1 1 f

if

P(f*, g*) =

Explicitly,

Z f f .(x)cp.(x, g*)dx i=l J 0 1 1

and

(4)

P(f*, g*) =

Z a. F?(l) + min i=l 1 1 g

Z

i=l

J 0 g.(y)Y .(y, 1 1

f*)dy

where

(5)

^(x,

g*) =

+ (Ci - d i )Gi (l) + 2 d iG 1 (x)

and

(6 )

Y i (y, f*) = - b ± + (Ci + d i )Fi (l) - 2 d j F ^ y )

Thus, if the strategy space is

9r, then the strategy

f*

.

satisfies the

following conditions:

0
0

.

if the strategy space is SF°, then the conditions are as follows

32

RESTREPO f •(x) = 0 0 < f^x)

0 J

if

or

cp.(x, g*) > 0

£ *,

These conditions characterize

and

cp.(x, g*) < max cp. (x, g*) j^i J

cp. (x, g*) > max cp. (x, g*) 1 j^i J

and the minimizing strategy

characterized in a similar manner.

g*

can be

With this characterization (due to

S. Karlin), any knowledge about one of the optimal strategies yields informa­ tion about the optimal strategies for the other player. iterative, and the coefficients

cp^

and

The process is

play a fundamental role.

Some

of their properties are given in the next two lemmas; the proofs are simple computations which are omitted.

LEMMA 2.1.

For each

x,

max cp^(x,g) = (1—x) max (a^, a^+c^—d^) + x max (a^, a^-fc^-kL) g 1 min cp^(x,g) = (1—x) min (a^, a^+c^—d^) + x min (a^, a^+c^-kL) g min £

Y . ( y , f ) = (1—y) min (—b . b . +c . +d .) 1 1 1 1 1

+ y min (—'b 1. , —b1. +c1 .— d .) 1

max Y^( y , f ) = (1—y) max (—b ^ ,—b^+c^-kL ) + y max (—b^ ,—b^+Cj—d^)

LEMMA 2.2.

Let

of the set

E.

x(E) If

denote the characteristic function

d^ > 0, then

cpi (x, X([yi. 1 ])) = (di - c i) (yi - a i) + max [0 , 2di (x - y ±) ] Yi (y. x( [x±, 1 ])) = - ( d ± + c ±) { x ± -

§ 3.

GAMES DEFINED BY

- max [0 , 2di (y - x ±) ].

(P,

$ , S )

In these games the components of each strategy are uncorrelated, and in order to solve the game it is sufficient to find the for each index

i.

a^b^ > 0, for each

minmax

strategies

For simplicity it will be assumed in this section that i.

Furthermore,

if

a^

and

b^

are negative, reversing

GAMES WITH A RANDOM MOVE

33

the roles of the two players one obtains an equivalent game where the corresponding coefficients are positive.

If

a^ > 0,

> 0

then

reversing the orientation of the unit interval (i.e.,

(x —

y)

cp^

and

by sign q

al > 0 ,

(y — x ) )one obtains a game with

> 0 * d^ > 0

and

d^ > 0 ;

f, and

|c^| > d^, then either

q(y> Since

where

a^ + c^ — d^ > 0

c^

if or

f) > 0 g*

f*.

for

is

The case

is equally trivial.

consider now the games where

— d^ < 0 and

Finally,

f.

the characterization of Section 2 yields the optimal

One must

d^ = 0 , then

that

is uniformly best against any

known,

a^ +

if

— b^ + c^ + d^ < 0

t*ie f^rst case Lemma 2.1 shows

g*(y) - 1

< 0,

replacing sign

are constant functions and the game is trivial.

a i + c i~ ^i — all

but

— b^ + c^ + d^ > 0.

a^ > 0 ,

b^

These are games

> 0,

!c^| < d^,

where

cu and

q

are in the unit interval; their solutions are given in the following theorem:

THEOREM 3.1.

Under the given assumptions the game has

solutions of the form

f* = m.x([0 , x ± ]) + X ([x±, 1 ])

s i = n i^(^0, y i ]) + X([yi. 1]) ,

where E.

x(E)

Furthermore, iqiq = 0, and

PROOF. = y^ = q , but

n^

g* minimizes

q(cu,

denotes the characteristic function of the set

f*) = 0.

= ou

or

and

g*

Consider any strategies

f*

= 0

By Lemma 2-2

and

0
0

argument similar to that of Lemma 4.1. completely determined by

can be established by a perturbation To show that the solution is

X*, it is sufficient to observe that setting

GAMES WITH A RANDOM MOVE *

X*

y± = G&i + ^ — - c ^

one completely

can be determined by means of the

determines

37

g* and

cp^(x, g*) .

characterization of Section 2.

Then

f*

In the

indeterminate case where f X*

cp^(x, g*) = max cp^ (x, g*) > 0 , the indeterminacy of J * can be eliminated by the conditions f ' = 0 f°r i- Viewing as a parameter,

these conditions constitute a system of equations whose *

unknowns are

X*

and the values of

f^

in the indeterminate regions.

is no loss of generality in assuming that each

f^

There

is a constant in each of

these intervals. The previous result can be improved slightly. X* = cp^(o* g*)j Lemma 2.1 implies

that

values of

of

X*, the relative order

Indeed, since

a^ > X* >a^ + c^ — d^. y^, ..., yn can

For these

be deduced

from the

following theorem:

THEOREM 4-3.

Let

a^ > 0,

be arranged so that

a^ > a2 > ••• > an *

a^ > X > a^ + c^ — d^ then

for all

y 1 < y2 < ••• < yn

a^ + c^ — d^

|c^| < d^, and let the indices If

i, and if X

for all

y^ = sl^ +

d. - c. ’

if and only if

is a monotone decreasing function of

i

i, — x

PROOF.

By definition, y. = 1 — ^— — i

, and the inequality i

y. > y. J

is equivalent to

d . - Cj > ^

If X;

aj -

X

, the right hand side of (9 ) is a monotone decreasing function of X = a^;

it achieves its minimum value (zero) when

value

when

X =

+ c^ — d^

it achieves its maximum

and in this case the inequality (9)

becomes

ai + ci - di * aj + cj - dj-

EXAMPLE. c^ = — 1

and

The poker model considered

d^ = A^ + 1

Theorem 4.3 are satisfied,

with

in [3 ] has

2 < A^ < ...

X* = 0

< An -

and the optimal

Ai ?i = A T T ~ 2



a^ = 2, b^ = 0,

Theassumptions of g*

has

RESTREPO

38 Then

cp_^(x, g*) = \ * + 2d^G*(x), and the optimal

f*

can be deduced

immediately from the characterization given in Section 2.

BIBLIOGRAPHY

[1]

von NEUMANN, J. and MORGENSTERN, O., Theory of Games and Economic Behavior, Princeton, 1944.

[2]

GILLIES, D. B., MAYBERRY, J. P., and von NEUMANN, J., "Two variants of poker," Annals of Mathematics Study No. 28 (Princeton 1953), pp. 13-50.

[3]

KARLIN, S., and RESTREPO, R., "Multi-stage poker models," Annals of Mathematics Study No. 39 (Princeton, 1957), pp. 337-363.

[4]

KARLIN, S., "Operator treatment of the minmax principle," Annals of Mathematics Study No. 24 (Princeton 1950), pp. 133-154-

Rodrigo A. Restrepo

The University of British Columbia

A SEARCH GAME Selmer M. Johnson

§ 1.

INTRODUCTION

The following search game was first suggested to the author by Melvin Dresher several years ago. Blue chooses h, an integer, from the set of integers 1 to n (a region to hide). Red guesses an integer from 1 to n, is told whether he is too high or too low, and repeats until he guesses h. The payoff to Blue is one unit for each guess by Red (including the last guess h). This game is illustrated in [1], pages 32-35.

Progress was reported by the author in a 1958 internal RAND document which presented the solution for n ≤ 11 using a special notational device to describe Red strategies. Recently Gilbert [2] discussed the same game, along with related problems, and gave its solution for n ≤ 6. He stated that the calculations, even for n = 6, were quite lengthy. Later, in [3] Konheim computed the number of "bisecting" strategies for Red. This does not pertain directly to solving the game.

We present both the improved notational device for describing Red strategies and some recent theorems concerning necessary conditions for optimality which greatly reduce the size of the game matrix. This makes the solutions for n ≤ 10 quite simple. The case n = 11 is more complicated, and incidentally exhibits a qualitative feature of Blue's optimal strategy conjectured by Gilbert in [2], namely that Blue ought to avoid n/4 and 3n/4 as well as n/2.

§ 2.

NOTATION

At first glance it would appear that this is a multimove game for Red, since after each try he has new information and makes his next move accordingly.

However, a better formulation is to describe a hunting strategy

as a complete pattern of moves which takes care of all possible hiding places for Blue.

Thus,

if

1 < j < n

each with probability integers, denoted by meaning of the

= {S^}-

n

numbers Blue can choose from, n

The following example will illustrate the

S..:

1

2

3

4

5

6

7

Pj

Pi

P2

P3

P4

P5

P6

P7

s. . IJ

2

3

1

3

4

2

3

j

Here Red guesses

3

first.

if too low, he tries the guess when

are the

p^ , then a strategy for Red is an ordered set of

j

6

If too high, he tries

1

for his second guess;

for his second guess, etc., S ^

is the number of

is tried.

Red will play a mixture of these strategies lity

t^.

The payoff,

if Red plays

S^, each with probabin and Blue plays {p^}^ is 2 S^.p.. J j J J

The value of the game is

V = min

max

Z) Z / S . .p .t . = max

{ t j {Pj} i

§ 3.

j

ljP j 1

min

Z/ S

{Pj} {t.} j

i

S ..p .t ..

^

1

PROPERTIES OF OPTIMAL BLUE STRATEGIES

First it is clear from the definition of the game that Blue may play symmetrically about the center of the interval.

a)

Also it is clear that all

Thus, we may assume

Pj = Pn-j+X-

p^

are positive in an optimal Blue strategy.

A SEARCH GAME

41

THEOREM 1.

(2)

px > p2 -

PROOF.

If

p^ < p 2, then Red would always play 2 before 1, so that

these two column payoffs in the game matrix could not be equal. consider any where matches

where

2

1

is played before

2.

Compare

is played at the k t^1 guess rather than

To see this with a related

1, and the rest of

in the relative order of play:

P_a____

pl

p2

k

k+m

t

k+m— 1

r

k— 1

k+1

k

t-1

k+m— 1

r

k— 1

(Note that the values of 2 < j < a, where

(3)

will be reduced by

S^a = S^2 — 1*)

£ j

_______________Pb

(S.. - S..)p.

It can be shown that

1

in the interval

Then

= - p, + m p 2 + £ p. > 0. 1 2 5.

Other properties of

optimal Blue strategies can be conjectured from the list of solutions in Sec. 5, but seem to be difficult to prove in general. holds for

5 < n < 11, and

conjecture that

p^ = 2p2

holds for

p^ > p ^ , 1 < j < n, for

§ 4.

For instance, 7 < n < 11.

p-^ = P 2 + P3

Also, one might

n > 4.

PROPERTIES OF OPTIMAL RED STRATEGIES

In this section we greatly reduce the list of possible optimal Red strategies against any trial Blue strategy.

THEOREM 2.

Suppose at a given stage that Red, playing

S^, has located that

S^

h

on the interval

k < j < m,

and

calls for next playing at

a, left of

the

42

JOHNSON median of the hider's frequency distribution on this interval, and if to the right of

a a.

optimality of

is too small, next playing at Then a necessary condition for

against

(4)

b

{Pj}

is that

S p. > E p.. k 2 p., so that if 2 p. < -~ 2 p. j =k 3 j =b 3 j =k ^ j =k ^ Thus Theorem 3 is proved.

§ 5.

SOLUTIONS FOR

then

a b— 1 2 p . < 2 p., k 3 a+1 3

n < 11

In this section we exhibit optimal strategies and the game value for n < 11.

From (1) it follows that Red can play a given

counterpart which

S^

S^ = 1

equally often. for

j
and

v (2) = 3/2. Case

n = 3-

= (2, 3, 1)

Here only

= (2, 1, 2)

are undominated.

and

S2 = (1> 3, 2)

with

By the remarks concerning symmetry, the

3 by 3 game matrix can be reduced to an equivalent 2 by 2 matrix.

\^Blue

\Blue

Red^\ S1

Thus

P

P2

Pi

2

1

2

1

Case

and n

2

3

= {pj} = {-4, S2

Red\

2

1

3

4

1

2

3

3

becomes

S2

between

Pi

.2,

.4}, t-^

of course. = 4.

= .6, and

t2 = -4, the latter split

Here there

are only

3

undominated strategies

The reduced game matrix is 3 by 2.

\Blue Rech\ S1 S2 S3

equally

V(3) = 9/5.

\Blue Pi

P2

P2

Pi

Red^\^

1

1

2

1

2

3

0

5

3

2

1

3

2

1

4

4

1

3

4

2

0

3

7

S^.

A SEARCH GAME A solution gives p^ = 1 / 4 played with probability Case

n = 5.

1/2

for each

each, and

45 j, while

S2

and

are

V(4) = 2.

The list of undominated strategies

and the reduced

game matrix are as follows:

\B lue

\Blue

R e d \ S1 S2 S3 S4 S5 S6 S7

P1

p2

p3

p2

pl

2

3

1

3

2

3

2

1

2

3

2

1

3

2

3

2

1

4

3

2

each with

4/9, 1/18 Case

5

3

2

4

4

6

1

40

6

4

1

40

4

5

3

3

40

1

4

4

4

40

2

1

3

4

2

4

5

3

41

1

4

3

4

2

3

8

3

45

1

3

5

4

2

3

7

5

46

40

40

20

Thus Blue plays with frequency bability

R e o \

and

(5, 3, 2, 3, 5) and Red plays

each with

probability.

2/9

probability and

p2

p3

p3

V(5) = 20/9.

p2

P1 6

5

4

7

4

49

5

5

4

48

5

7

3

52

4

5

7

49

4

4

9

50

5

3

7

48

3

8

8

55

3

9

7

56 56

4

3

7

11

48

48

48

with pro­

and

n = 6.

P1

310

S-^

48

46

JOHNSON

Blue plays a frequency distribution (5, 3, 2, t^ = .6, ty = . 2 , Case

and

n = 7.

2,

3, 5) „ Red plays

The solution for

n = 7, giving optimal

only,

5

23

6

23

5

23

5

23

46

46

46

with frequency distributions as indicated for each player. Case

t-^ = .2,

V(6) = 2.4.

23

V(7) = 23/9-

n = 8.

27 27 27 27

with

27

27

27

V(8) = 2.7. Case

n = 9.

2

1 1 1 1

6

4

6

8

1

31

6

4

8

6

1

31

5

8

5

4

4

31

6

4

7

4

4

31

6

6

5

5

3

31

5

7

3

8

3

31

62 62 62 62 31

is

A SEARCH GAME Here the number of

47

strategies is larger than necessary, but suffices.

V(9) = 31/11. Case

n - 10.

Here

V(10) * 35/12.

2

1

1

1

1

1

1

1

1

2

3

2

3

4

1

4

3

4

2

3

3

2

3

4

1

4

3

2

4

3

\Blue Red\^ S1 S2 s3 S4 S5

2

1

1

1

1

2

6

4

7

7

5

35

1

6

6

5

7

5

35

6

6

6

3

2

4

3

1

4

3

2

4

3

4

6

5

35

2

3

4

1

4

3

4

2

4

3

2

5

7

6

5

7

35

3

2

3

1

4

3

4

2

4

3

3

6

6

5

5

7

35

70 70 70 70 70

Case

n = 11.

Here

V(ll) = 3 -g^.

OO

58 29 29 34 25 22 1

1122

8

1

1122

7

4

4

1122

5

5

7

5

1122

5

7

3

8

3

1122

6

8

3

8

3

1122

54

6

6

6

6

6

6

8

4

6

87

6

6

6

18

6

6

6

7

15

6

The computation appears to get more difficult for yond.

accomplished by the techniques of this paper. total number of Red pure strategies,

(8) with

n = 12

and be ­

Nevertheless, considerable reduction in the list of Red strategies is

f (n)

For instance,

if

f(n)

is the

the recursion relation

S f(k — 1) f(n — k) k=l

f(0) = f(l) = 1, etc., gives

f(ll) = 58,786

and

f(12) = 208,012.

One could also use a simple explicit formula ((2) of reference

[2]) for

48

JOHNSON

computing

f(n), namely,,

(9)

BIBLIOGRAPHY

[1]. DRESHER, Melvin, Games of Strategy: Hall (1961).

Theory and Application,

Prentice

[2]

GILBERT, E. N ., "Games of identification or convergence," SIAM Review Vol. 4, No. 1, January 1962, pp. 16-24.

[3]

KONHEIM, A. G-, "The number of bisecting strategies for the game (N)," SIAM Review, Vol. 4, NO. 4, October 1962, pp. 379-384.

Selmer M. Johnson The RAND Corporation

THE RENDEZVOUS VALUE OF A METRIC SPACE 0. Gross

§ 1.

INTRODUCTION AND THEOREMS

It is the purpose of this short note to prove a few theorems regard­ ing average (arithmetic mean) distances of points in a finite collection from some point in a compact connected metric space.

To each such space,

in

fact, corresponds a unique constant which we shall call its rendezvous value. We shall not use the term in the sequel, but introduce it solely for its connotative value in those instances in which the space is equipped with suitable geodesics.

At any rate it adds spice to the title.

Perhaps the theorems are well known, but since they seem not to be to the few people polled (expressions ranged from surprise to total disbelief), it would seem advisable to jot down short proofs of them. A stronger motivation or raison detre for this paper, however,

is

furnished by the not too well exploited fact that game theory can be used as a tool in other branches of mathematics and physics, in this instance.

including geometry, as

In fact, well known results in game theory render the

proofs of the three theorems almost trivial. The theorems we wish to prove can be phrased as follows:

THEOREM 1.

Relative to a compact connected metric space

there exists a unique constant

K

with the property that

given any finite collection of points one can find a point P

such that the average distance of the points in the

collection from

THEOREM 2.

If

P

K

is equal to

K.

is the constant according to Theorem 1

49

50

GROSS of a compact connected metric d

j

then

THEOREM 3.


V. P n i=l 1 can use the same strategy, and upon exploiting the

symmetry of the metric, we obtain

RENDEZVOUS VALUE

51

n (2)

min ^

S

D(xp

P) < V.

At this point connectedness plays a role. P

Since the functions of

in (1) and (2) are really the same function and

since the range of a

real-valued continuous function on a connected set is connected, there exists a

P

where equality is achieved: n i E D(x., P) = V. n i=l 1

Thus,

existence is proved.

To see that no other constant

assume we have another such

K.

(than V) works,

Assume for the moment that

K > V.

Now, for

a game on a compact space it is known that one can approximate to an optimal strategy by a finite mixture, provided the payoff is continuous. this instance, given {xp

..., xR}

that for all

e > 0

Thus in

there exists a finite collection of points

and positive weights

{Xp

• ••* Xn}

summing to unity such

P n E X . D(x., P) < V + e; i=l

the strategy being employed, say, by the minimizing player. We remark that the theorem is true if the space consists of a single point.

But if otherwise, since the space is connected, there are infinitely many points in every neighborhood; in particular we can obtain a cluster of points about each of the x_i above. Thus, if the λ_i were rational numbers with common denominator N and numerators n_i, we could select n_i different points clustered near x_i, i = 1, ..., n (so that N points are chosen in all), to obtain an unweighted arithmetic average approximating an optimal strategy. This follows by the continuity of the metric. Since, however, the rational numbers are dense in the reals, we see that this can be done in any event, so that given any ε > 0 there exists a finite collection of points {x_1, ..., x_N} such that for all P

(1/N) Σ_{i=1}^N D(x_i, P) < V + ε.

But since K > V, we can choose ε sufficiently small so that the value K cannot be achieved by any point P for this collection. The case K < V can obviously be treated in the same manner, and uniqueness follows. This concludes the proof of Theorem 1.
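As an aside not drawn from the paper, the constant of Theorem 1 can be written down for the circle of circumference 1 with the arc-length metric: by rotational symmetry the uniform distribution is optimal for both players, so K = 1/4. A minimal Monte Carlo check, assuming Python with NumPy:

```python
import numpy as np

def arc_distance(a, b):
    """Arc-length distance on a circle of circumference 1."""
    d = np.abs(a - b) % 1.0
    return np.minimum(d, 1.0 - d)

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=200_000)   # the uniform (rotation-invariant) strategy

# Against the uniform strategy the expected payoff is the same from every point,
# so this common value is the game value and hence the constant K of Theorem 1.
for P in (0.0, 0.17, 0.5):
    print(P, arc_distance(samples, P).mean())    # each estimate is close to 1/4
```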

§ 3. PROOF OF THEOREM 2

By the arguments used to prove Theorem 1, the K involved in the statement of Theorem 2 is the value of a two-person zero-sum game in which each player independently picks a point in the space, with the payoff being the distance between their choices. To show that the value of the game satisfies the lower inequality, we observe first that the compactness of the space guarantees the existence of a pair of diametrically distant points x_1 and x_2. Let the maximizer play such a pair, each point with probability 1/2. Whether this strategy is optimal or not, we have

K ≥ min_P ( (1/2) D(x_1, P) + (1/2) D(x_2, P) ).

But by the triangle inequality, etc.,

D(x_1, P) + D(x_2, P) ≥ D(x_1, x_2) = d;

whence K ≥ d/2.

To see that K ≤ d, we observe first that any pure strategy for the minimizer guarantees that K ≤ d, since d is a bound on the payoff. We may assume, however, that d > 0, and it remains to show that K ≠ d. By assumption, without loss in generality, therefore, take d = 1. However, the assumption that K = d = 1 leads to the existence of an infinite sequence of points in the space all one unit apart from each other (by repeated application of Theorem 1). But such a sequence cannot contain a convergent subsequence, and the space is therefore not compact, contrary to hypothesis. Thus K < d. This concludes the proof of Theorem 2.

§ 4. PROOF OF THEOREM 3

The hypothesis of Theorem 3 is not satisfied unless d > 0, so we can without loss in generality assume that d = 1. If we rephrase the theorem in game-theoretic language, as we can in view of the previous arguments, we are required to find a suitable compact connected metric space such that the game value is a given number K on the interval [1/2, 1). In fact, the space we shall select will be homeomorphic to the closed unit interval.

Consider the closed unit interval [0, 1]. One verifies that if we select, as the payoff,

D_λ(x, y) = (λ + 1)|x − y| / (λ|x − y| + 1),    λ ≥ 0,

then D_λ is a metric; since |x − y| is continuous, etc., the new space is still connected and compact and has diameter 1, for any choice of λ ≥ 0. If λ = 0, D_λ evaluates to the Euclidean metric and the value of the game is 1/2, and it is an easy exercise to verify that as λ → ∞ the value of the game tends to 1. Since the value of the game is readily shown to be a continuous function of the game parameter λ, with the appropriate choice thereof we can obtain a game with the prescribed value. This concludes the proof of Theorem 3.
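The family of payoffs used in this proof can also be explored numerically by discretizing [0, 1] and solving the resulting finite matrix game as a linear program. The sketch below is ours, assumes Python with NumPy and SciPy, and only approximates the value of the continuous game through the discretization; the helper game_value and the grid size are our own choices.

```python
import numpy as np
from scipy.optimize import linprog

def metric(lam, x, y):
    """The payoff D_lambda of Section 4 on the unit interval."""
    d = np.abs(x - y)
    return (lam + 1.0) * d / (lam * d + 1.0)

def game_value(A):
    """Value of the zero-sum matrix game with payoff A to the row (maximizing) player.

    Variables are the row mixture x and the value v: maximize v subject to
    sum_i x_i A[i, j] >= v for every column j, sum_i x_i = 1, x >= 0.
    """
    n_rows, n_cols = A.shape
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                    # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])  # v - sum_i x_i A[i, j] <= 0
    b_ub = np.zeros(n_cols)
    A_eq = np.ones((1, n_rows + 1))
    A_eq[0, -1] = 0.0                               # the mixture sums to one
    b_eq = np.ones(1)
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

grid = np.linspace(0.0, 1.0, 81)
for lam in (0.0, 1.0, 10.0, 100.0):
    A = metric(lam, grid[:, None], grid[None, :])
    print(lam, round(game_value(A), 4))   # moves from about 1/2 toward 1 as lam grows
```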

BIBLIOGRAPHY

[1] GLICKSBERG, I., "Minimax theorem for upper and lower semi-continuous payoffs," The RAND Corporation Research Memorandum RM-478, October 1950.

O. Gross
The RAND Corporation
Santa Monica, California

GENERALIZED GROSS SUBSTITUTABILITY AND EXTREMIZATION

Hukukane Nikaido

§ 1. INTRODUCTION

This paper is a sequel to the two previous papers by this writer [15, 16]. It may also be regarded as an answer to the question how far we can go without fixed point theorems when handling systems of inequalities which generalize those arising in game theory.

Ever since recent contributions [1, 6, 7, 8, 10, 13, 14], with the aid of fixed point theorems or their equivalent propositions, gave general conditions for the existence of competitive equilibrium, attention has shifted to the detailed observation of competitive economic systems under rather special conditions such as gross substitutability. Several interesting results have been quite recently obtained along this line (e.g., [2, 3, 4, 9, 11, 12]). The purpose of this paper, which is also in accordance with the line above, is to give an elaborated version of the existence theorem of competitive equilibrium in the case of generalized gross substitutability. A criterion, in the form of a boundary condition, for the existence of equilibrium will be given, so powerful that it can still be effective even when the known fixed point techniques break down. This criterion is also useful to locate equilibrium within a given, possibly small, open subset of the whole price set. A rearrangement of the argument also entails the unique determination of some system of functions by their boundary values, a situation similar to those observed in the theories of analytic functions and harmonic functions. Generalized gross substitutability is a unified generalization of skew symmetricity and gross substitutability. Finally, the dynamic implication of generalized gross substitutability will be made clear for the Brown-von Neumann differential equation, which was originally devised to solve zero-sum two-person games.


Our method is extremely elementary and consists of the extremization of the sum of the squared positive excess demands of goods. It takes advantage of a simple fact that some quadratic form is negative definite, which was observed by McKenzie [9]. This fact can be derived from the following well-known basic theorem on matrices with nonpositive off-diagonal elements, which reads as follows: For any n × n matrix B = [b_ij], b_ij ≤ 0 (i ≠ j), the following three conditions are equivalent:

(I) Bx > 0 for some nonnegative vector x ≥ 0.

(II) B has the nonnegative inverse B⁻¹ ≥ 0.

(III) The Hawkins-Simon condition holds, i.e., all the principal minors of B are positive.

As is well known, this basic theorem can be directly proved by bringing B into a triangular matrix through elimination, without any reference to the Frobenius theorem on nonnegative matrices.

2.1We shall goods, labeled

be concerned with a competitive economy where all

i = 1,

2, ..., n (n^ 2), are generalized weak

substitutes in the sense as defined below. by a nonnegative n-vector attention to a set are defined.

P

The price system will be denoted

p = (p^, P2 , ••., Pn ) > 0.

We shall pay special

of price vectors in which the excess demand functions

In the sequel, all topological considerations will be done with

regard to the relative topology of the nonnegative orthant induced by

gross

of

Rn , as

Rn , unless otherwise indicated.

The basic assumptions throughout this paper are as follows: (a) origin

0

P

i.s a nonempty subcone of the nonnegative orthant

deleted, and it is open in (B )

R™, the

R^.

The excess demand functions

E^(p)

(i = 1 , 2 ,

..., n)

are

single-valued continuous functions and have continuous partial derivatives BE.

in

P, which will be denoted by (y)

E^(p).

Positive homogeneity of degree

Ei (Xp) = \m E i (p)

for

X > 0, p e P.

m, 0 £ m 0 (i ^ j)

2.2

i.e. ,

%

(6)

everywhere in

P-jE^p) = 0

identically in that is,

P.

In the terminology of economists,

gross substitutability

prevails when

E ^ (p) ^ 0 (i ^ j).

for all

when the Jacobian matrix of the system of functions

i, j

skew symmetric.

2.3

In the both

On the other hand, E ^ (p) 4- Ej^(p) = 0

cases condition

(e)

E^(p)

is

is clearly true.

The central problem at issue here is the possibility and

stability of a competitive equilibrium. price vector,

P.

p € P

is said to be an equilibrium

if

(2.1)

Ei (p) 0.

theorem on the existence of competitive

equilibrium premises only the continuity of the excess demand functions and the Walras law, and not any special condition such as gross substitutability. But it heavily relies upon the simpler topological structure of the domain of the excess demand functions such as convexity or acyclicity.

Compared with

it, the novelty of our result lies in the fact that at the cost of assuming a type of substitutability the domain of

E^(p)

complicated nature than as is usually assumed.

is allowed to be of more P

need neither be convex nor

even acyclic, so that our result seems to be more than an alternative proof. The figure on the right illustrates the intersecting portion of a the normalizing hyperplane

P n 2

with p. = 1.

1=1 1

Hol es

58

NIKAIDO 2.4

[9]

As a preliminary step, a lemma, which is observed by McKenzie

for the case of

LEMMA.

m = 0 , will be given for a variety of values of

Under conditions

(a ),

(S ),

(y ),

(6 )

m.

and ( e )

the following results are true: [i]

When

for any p e P.

(2.3)

0 o|

Then, the quadratic form

£ E..(p)?.§. i*jeN(p) J J is negative definite, provided [ii]

When

m = 1, the

(2.4)

N(p) ^ ip.

quadratic form

£ E..(p)?.?. i,j=l 1 J is negative semi-definite at every

PROOF. our case.

p e P.

The proof will be done by adapting that of McKenzie [9] to

Upon differentiation of equation

2 p^E^(p) = 0

given in (6) we

have

+ E. (p) = 0 i=l 1 1J On the other hand,

(j - 1, 2 , ..., n).

3

(y) together with (B) implies, by virtue of the Euler

theorem on homogeneous functions, n £ Pn-E ii (p) - “ E, (p) = 0 W 1 J1 J

(j * 1 > 2 i •••. »)•

Hence summing up these equations gives

(2.5)

n £ e . . ( p ) p . + (l-m)E.(p) = 0

i=l where

(j = 1, 2,

. .. ,

n).

3

e^j (p) = E ^ (p) + E ^ ( p ) .

The subsequent portions of the argument

will be separately developed for [i] and [ii].

SUBSTITUTABILITY AND EXTREMIZATION The case of [i].

(2.6)

By

£ ieN(p)

1J

59

Relation (2.5) can be rearranged to

1

— e . .(p)p. = (I—m)E. (p) + £ e..(p)p J i^N(p) 1J 1

(jeN(p)).

(e ),

for

e . .(p) £ 0 for i£ N(p), j e N(p), sothat £ e..(p)p. >0 1J “ itN(p) 1J 1 = j e N(p) in view of being nonnegative. Furthermore, (1—m)Ej (p) > 0

for

j e N(p), since

for any

1 > m.

Thus

the right-hand side of (2.6)is positive

j e N(p), that is

(2.7)

£ -e..(p)p. > 0 , ieN(p) 1J 1

Since the off-diagonal elements of non-positive by this matrix.

(e),

condition (I)

Therefore,

p. ^ 0 1 "

(ij jeN(p)).

the matrix

[— e ^ ( p ) ] (i,jeN(p))

are

in the foregoing section is satisfied by

the matrix satisfies also the Hawkins-Simon

condition (III), so that all the principal minors of

[— e ^ ( p ) ]

are positive.

This proves that (2.3) is negative definite, because (2.3) = i £ el i (p)?i§ r 2 i.jeN(p) 1J 1 J The case of [ii]. Since

e > 0

5^

be an arbitrary positive number.

reduces to

n).

We rearrange (2.8) to

(j = 1, 2 ,

. .

n ),

are Kronecker's deltas.

By ( a ) ,

P

is open in

R^.

Therefore, P

of whose components are positive, and furthermore, regarded as a limit point of the former. definite at any (8)

(2.5)

(j = 1, 2 ,

n £ (eSj. - e. .(p))p. = ep i=l ij iJ J

(2.9)

where

1,relation

n £ e..(p)p. = 0 i=l 1J 1

(2 .8 )

Let

m =

p e P

with

in the same way

every point of

P

can be

Thus, once (2.4) is negative semi-

p > 0, the continuity of

assures that the same is true everywhere in Let us

contains some vectors all

e^j(p)'s

implied by

P.

now suppose that p^ > 0 (j = 1, 2, ..., n)

in (2.9).

Then,

as in [i], it can be shown that the principal minors of the

60 n x n

NIKAIDO matrix fe5ij — e -jj (p) ]

are

positive.

Hence

£ (e..(p) — 0 6 ..)?.?. 0, in P. z i,j=l 1J 1 3" 2.5

e > 0,

this

Q.E.D.

Next we shall give a boundary condition for the existence of a

competitive equilibrium in

P.

Let n

( 2 . 10 )

f ( P) = S

1=1

e . ( P) 2 , 1

where

(2.11)

0i (p) = max [Ei (p), 0]

(i = 1, 2, ..., n ) .

Throughout this paper, as in [5, 15, 16], the function

$ (p)

will play an

important role. Boundary Condition p € P, k P,

(*)

on

$ (p).

For any nonzero boundary point

if any exists, and for any sequence

there exists some

q e P, depending possibly on n

(2 .12)

£

i=l

(2.13)

{pV 3 p

in

and

P

converging to

Ip ]

such that

n p. =

1

£

i=l

q

1

$ (q) < lim sup $(pV ). v -» 00

THEOREM. condition

If ( a ) , (*)

(3 ),

(y ),

(e)

and the boundary

are fulfilled, then there is an

equilibrium price system (2 .1)

(6 ),

p

in

P

which satisfies

and (2 .2 ).

PROOF. Let S be the intersection of n 2 p. = 1 . Clearly we have

hyperplane

i=i1 inf $ (p) = 6 ^ 0 . peS

P

with the normalizing

p,

SUBSTITUTABILITY AND EXTREMIZATION Take a

sequence

may assume that naturally

{pv }in {pV ]

p £ P.

(*), there is a

S

such that

lim $ (pV ) = 5.

itself converges to a limit p.

On the other hand, p e P. q e S

for these

But this yields a contradiction:

p

61

and

Since If

p i S, then

Thus, by the boundary condition

{pV }

such that (2.13) is valid.

6 £ $ (q) < lim $ (pV ) = 6 .

p e P,

and continuity implies $ (p) = 6.

p = p

subject to the condition

pV e S, we

Therefore,

v ~,co

That is, $ (p)is minimized at

p e S.

Since

P is open in

R^,

the

following marginality conditions hold:

(2.14)

[||-] ^ £ pk [|i-l ^ U p jJP=P " k=l k U Pk Jp=p

Since

$ (p)

0

- 1- 2,

is positively homogeneous of degree

n).

2m, the right-hand side of

(2.14) equals

2m$(p), by the Euler theorem on homogeneous functions. On the x n other hand, in view of (2.10) and (2.11), we have -r— =2 I 0-(p)E..(p) 3Pj i=l 1 (j = 1, 2, ..., n), so that (2.14) reduces to n £ S-rCpOE..^) ^ m $ ( p ) i=l 1 1J

(2.15)

Multiplying the

(j = 1, 2,

j*"*1 inequality of (2.15) by

0j(p)* we have

n £ e . .(p)e.(p)e.( p ) ^ m$(p) i,j-i

(2.16)

Now, if

1 > m ^ 0, supposing that the set

..., n).

n £ e.(p). j=i J

N(p) = {j | E^(p) > 0}

nonempty, and in view of [i] in the foregoing lemma,

is

(2.16) leads to a

contradiction n

o >

Also,

if

£

,

i)jeN(p)

e . . ( p ) ei ( p ) e i (p) ^ m#(p)

xj

x

j

-

£

j=1

e . ( p ) ^ o. J

m = 1, by [ii] in the lemma, the left-hand side of (2.16) is n so that $ (p) 2 0.(p) £ 0. Therefore, in all the cases we have j=i j (2.2) is an immediate consequence of (2.1) and (6). Q.E.D.

nonpositive, (2il).

2.6

The following corollaries give some special but important

cases on which the boundary condition

(*)

works well.

NIKAIDO

62 COROLLARY 1.

Suppose that a system of functions

which satisfies

(a),

( 8) ,

(v),

( 5 ) and ( e )

in

also fulfills the following boundary condition For any nonzero boundary point an

i

lim E.(p) = + 00• p-p equilibrium price vector in P.

The

validity of

nonzero boundary point of

(*)

P (**):

p e P, ( P, there is

such that

PROOF:

E^(p)

Then, there is an

(**) implies that

lim $ (p) = + 00

- , P"*P p e P, f P, ifany.This surely means

so that thetheorem applies to

COROLLARY 2. Suppose that

the

for any

validity

the present case.

Pas well as

its closure

minus the origin is contained in a larger set where E^(p)'s

are still defined and continuous.

Suppose,

furthermore, that the

system of functions

which satisfies

(8), (y), (6) and (e) in

(a),

E^(p) P

also fulfills the following boundary condition (***): For any nonzero boundary point some

q € P, depending on

p e P, ^ P, there is na n p, such that 2 p. = 2 q. i=i 1

and

$ (q) < $ (p).

vector in

PROOF-

i=i

1

Then, there is an equilibrium price

P.

Since this time

E^(p)'s

the boundary (minus the origin) of

P, (*)

are defined and continuous even on is equivalent to

Corollary 1 includes the most familiar case in which totality of all positive vectors.

(***). P

equals the

Corollary 2 gives rise to the location of

an equilibrium within a possibly thin neighborhood cone

P.

It is also

noted that a version of the existence theorem as considered by Arrow and Hurwicz [4], in which

E^(p)'s

are allowed to take an infinite value, can be

easily reduced to Corollary 1, if generalized gross substitutability is assumed.

63

SUBSTITUTABILITY AND EXTREMIZATION § 3.

3.1 functions values.

DETERMINATION OF FUNCTION VALUES BY BOUNDARY VALUES

In this section it will be observed to what extent a system of

E^(p)

satisfying (a), (S) and (y)

is determined by their boundary

As before, all the topological concepts should be understood in the

relative topology of

R^.

Denote by

normalizing hyperplane with

THEOREM.

=

and

S

PROOF.

the intersections of the

P, respectively.

E^(p)'s

0 ^ m < 1.

and

satisfy

(a), (6), (y),

Then, unless

1, 2, ..., n) everywhere in of (2.10) is maximized subject

E^(p) =

P, the function to

p

Suppose that a maximizing point

p

point of

e

S

at no

P.

(3), (y )j (6) and (e)

hold in

[tpj]p=P ^

(3-1)

and

Suppose that

(6) and (e)

0 (i $ (p)

P

S

lies in

Pk[fpplp=p

(J

~

Z ’

equality holds in the j*"*1 relation in (3.1) unless that

$ (p) = 0.

In fact, if

$ (p) > 0, the set

Since (2.7) holds for

immediate that

p. > 0

for

Since (a)*

P, we have the marginality condition

■ ■ ■ ’

which is the counterpart of (2.14) in the maximizing case.

nonempty.

P.

p = p

and



pK = 0.

n>’ It is noted that We wish to show

N(p) = {j j Ej (p ) > o}

is

(p) ^ 0 (i ^ j), it is

j e N(p), so that

[% V p = up pk[tyP=p

£ n

Z)

A e. .( p ) e . i.jeN(p)

Therefore, 0 = § (p) = max $ (p) (a),

(/3 ) ,

(y) and

(b),

( p ) e . (p)

over all

implies that

= m$(p)

n £ j=l

e.(p)

3

>

o.

p e S, which, combined with

E^(p) = 0 (i = 1, 2, ..., n)

everywhere

64 in

NIKAIDO P.

3.2

The following corollaries are some of the immediate consequence

of the theorem.

COROLLARY 1. G^(p)

Let

F^p)

(i = 1, 2,

(i = 1, 2,

..., n)

and

satisfying

(a ),

common

Suppose furthermore that they are defined

P.

(R )

. . ., n)

be two systems of functions

and continuous in

(y )

and

with

m < 1

in a

S, and

n

n p.F.(p) = E p.G.(p) i=l 1 1 i=l

E

(3-2)

(3-3)

(i * 3 )

everywhere in (i = 1, 2,

P.

..., n)

Then, for

if

F^(p) = G^(p)

p e S, ^ S, we have

(i = 1, 2,

Fi (p) = G.(p)

everywhere in

PROOF. corresponding

Let

identically in (i = 1, 2,

P.

Ei (p) = F j.(p) - G ^ p )

$ (p)

is continuous on S

maximized subject to

p e S

P, clearly

..., n)

. . ., n)

for

at some $ (p) > 0.

(i = 1, 2,

n).

which is compact, sothat

p.

Unless

E^(p) = 0 (i

p e S, i S, this

all the assumptions

of the theorem in P.

(i = 1, 2,

identically in P.

COROLLARY 2.

$

(p)is

= 1, 2, ...,

n)

Since, by assumption, E^(p) = 0 p

must lie in

result is in contradiction to the above theorem, because

..., n)

The

Therefore

P.

But this

E^(p)'s

satisfy

F^(p) — G^(p) *E^(p)

Q.E.D.

Except for the system of identically

vanishing functions, there is no system of functions

=0

SUBSTITUTABILITY AND EXTREMIZATION Ei (p)

satisfying

0 £ m < 1 vectors

PROOF.

and

P

(a ),

(8 ),

(y ),

(6 )

if

consists of all nonzero nonnegative

p > 0.

Since this time

S = S c p, the corresponding

p e S

takes on a maximum subject to

It should be noted, however, that if

P

p > 0, conditions

and ( e ) ,

(a ),

(8 ),

(y ),

p e P.

at some point

the above theorem, E ^ p ) = 0 (i = 1, 2, . . ., n)

functions.

(e ),

and

65

(6 )

$ (p)

really

Therefore, by

identically in

P.

Q.E.D.

consists of all positive vectors are met by systems of nontrivial

The following system of functions provides us with an example.

E, (p) = 1- 2 aiiP}+ m _ Pi (i = 2< n )» where a., £ 0 (i + j), n i j=l 2 a. . = 1 (j = 1, 2, . . ., n) and 1 m ^ 0. On the other hand, the most i=l 1J familiar example of nontrivial systems satisfying these conditions for m = 1 on the totality of all nonnegative vectors

p > 0

is undoubtedly that

arising from the skew symmetric payoff matrix J n person game: E.(p) = 2 a..p. (i = 1, 2, ..., n ) . 1 j=l J § 4.

4.1

p(t) e S (t ^ 0)

has a global solution p(0).

(8),

p(0) = p° e S

P

(in

p^.

Since the mapping

is assumed to be open in Rn ) .

R^,

Furthermore, in view of

p - |p| : the set and

(a)

(B),

satisfy the Lipschitz condition in any compact

lying entirely in

Q.

Therefore, by virtue of the

Cauchy-Lipschitz existence theorem, the modified equation

(4.2)

= ei(IPl) ” Pi

has a local solution positive number.

Next, let t

> 0.

p(t)

t ^ 0), with

^

= l ’ 2’

p(0) = p°, where

a

This means that this

p(t)

is a p(t) e S

It is readily seen, similarly as in [16], that

t e [0, ct].

for

p(t) (cr

ek ( l p D

is a local solution of (4.1).

be any local solution of (4.1) defined in

[0,

t

],

where

Then, upon differentiation and taking into account the relations

p(t) e S, ( y )

and

9i (p)2 = ei (p)Ei (p), we have n

n

(4.3)

in

[0,

t

]-

By the lemma in Section 2, the right-hand side is nonpositive,

SUBSTITUTABILITY AND EXTREMIZATION so that

$(p(t))

bound of all

t

continued to

[0,

to for

is nonincreasing in

[0,

t

t

].

Suppose that

Let

pV g T

and

p g S, that is, r

Since

p(t)

lim pV = p

g

be the least upper

p(t)

g S.

can be

can be continued

r = |p |p

_

is closed in

w

p(t) (cr ^ t ^ 0)

uu < + 00.

V-co

implies

Let

for which the local solution

[0, uu), the above result assures that t g [0, uu).

].

67

g

S, $ (p) £ $(p(0))}

Then, assumption

(“ )

_

S.

Since

S

is compact, r

Thus, in the light of the compactness of F, we can let n I0•(p) ~ P* 2 9i (p)I ^ K on T for some K > 0. Then, for any 1 1 k=l k t2 e [0, uu), we have

is also

compact.

t2 lpi (t2 ) - P ^t^)!

n Iei (p) - P ±

£

t,, 1

0k (p)|dt £ K|tx - t2 |,

the extreme right-hand side of which tends to zero as

t^, t2 -* w.

Hence

lim p(t) exists and the limit belongs to F. This implies the possibility t-uu to continue the solution beyond uu, so that uu must be 4- 00. Now take any global solution

p(t).

Then,

(4.3) holds over time,

and we have

2 ft

*

^ E . .(p) 0 • (p) 0 • (p) i,jeN(p)

t - + 00. p(t)

1 > m > 0)

(if

m = 1).

— 2§(p)^2

This proves,

(if

in the light of the lemma in Section 2, that

Since

p(t) e r

over time, and

r

$(p(t)) - 0

is compact, the distance from

to the set of all equilibrium vectors converges to zero as

infinity. nonnegative

Q.E.D.

It is finally noted that, if

p > 0, assumption

dispensed with, because

(«>)

as

P

t

tends to

consists of all nonzero

does not become effective and can be

S = S.

BIBLIOGRAPHY

[1]

ARROW, K. J. and DEBREU, G-, "Existence of an equilibrium for a competitive economy,11 Econometrica, Vol. 22, No. 3 (1954)* PP- 265-290.

[2]

ARROW, K. J., BLOCK, H. D. and HURWICZ, L., "On the stability of the competitive equilibrium, II,11 Econometrica, Vol. 27, No. 1 (1959), pp. 82-109.

NIKAIDO

68 [3]

ARROW, K. J. and HURWICZ, L., "Competitive stability under weak gross substitutability: The 'Euclidean distance' approach," International Economic Review, Vol. 1, No. 1 (1960) pp. 38-49.

[4]

ARROW, K. J. and HURWICZ, L., "Some remarks on the equilibria of economic systems," Econometrica, Vol. 28, No. 3 (1960), pp. 640-646.

[5]

BROWN, G. W. and von NEUMANN, J.,"Solution of games by a differential equation," Annals of Mathematics Studies, No. 24, (1950), pp. 73-79.

[6]

DEBREU, G., Theory of Value,

[7]

GALE, D., "The law of supply and demand," Math. pp. 155-169.

[8]

MCKENZIE, L. W. , "On equilibrium in Graham's model of world trade and other competitive systems," Econometrica, Vol. 22 (1954), pp. 147-161.

[9]

MCKENZIE, L. W. , "Matrices with Dominant Diagonals and Economic Theory," Proceedings of the First Stanford Symposium, (Stanford University Press, 1960), pp. 47-62.

(John Wiley,

1959). Scand. 3 (1955),

[10]

MCKENZIE, L. W . , "On the existence of general equilibrium for a competitive market," Econometrica, Vol. 27 (1959), pp. 54-71.

[11]

MCKENZIE, L. W- , "Stability of equilibrium and the value of positive excess demand," Econometrica, Vol. 28, No. 3 (1960), pp. 606-617.

[12]

MORISHIMA, M . , "On the three Hicksian laws of comparative The Review of Economic Studies, Vol. XXVII, 3 (I960)* pp.

[13]

NIKAIDO, H., "On the classical multilateral exchange problem," Metroeconomica, Vol. 8, No. 2 (1956), pp. 135-145.

[14]

NIKAIDO, H . , "Coincidence and some systems of inequalities," Journal of the Mathematical Society of Japan, Vol. 11, No. 4 (1959) pp. 354-373.

[15]

NIKAIDO, H . , "On a method of proof for the minimax theorem," Proceedings of the American Mathematical Society, Vol. 10, No. 1 (1959), pp. 205-212.

[16]

NIDAIDO, H., "Stability of equilibrium by the Brown-von Neumann differential equation, ' Econometrica, Vol. 27, No. 4 (1959), pp. 654-671.

statics," 195-201.

Hukukane Nidaido The Institute of Social and Economic Research Osaka University

ADAPTIVE COMPETITIVE DECISION Jack L. Rosenfeld

§ 1.

INTRODUCTION

During recent years, much interest has been shown in a class of problems called by the author general adaptive processes.

In these processes

some measure of performance, or "return," is increased as the experimenter gathers more information about the process. however,

The characteristic feature,

is that the actions taken by the experimenter determine both the

actual return to him and the type of information gathered by him. armed bandit problem described by Bradt et al.

The two-

f1 ] is a. good example.

The

question posed is how to decide which of two slot machines to use at each of n

trials in order to maximize the total return.

nothing or one dollar when played.

Both machines pay either

The a priori probability distributions of

the payoff odds of each machine are available to the experimenter. problem is a finite length general adaptive process.

This

Other general adaptive

processes are described in [2] to [6]. An adaptive competitive decision process consists of an person,

zero-sum game that is to be played an infinite number of times.

each step (a single play of the

m x n

selected by his opponent.

After

game), the payoff is made and each

player is told what alternative (pure strategy of the

advance.

m x n, two-

m x n

game) was

The payoff matrix is not completely specified in

The nature of the uncertain specification of the matrix and the

process by which the uncertainty can be resolved form the heart of the adaptive decision process.

Some of the payoffs of the matrix are unknown to

the players when the game is begun.

Unknown payoffs are selected initially

according to a priori probability distributions, these probability distributions.

If

is one of the unknown payoffs,

players do not learn the true value of *

and players are told only

a^j

until player

A

the

(the maximizing

Agencies supporting this research are listed at the end of the paper. 69

70

ROSENFELD

player) uses alternative alternative B loses

j

i

and player B (the minimizing player) uses

at some step of the infinite process.

Then A receives

a ij» and both players are told the true value of

no longer unknown.

a±y

a ^ ;

so that it is

Of course, when all the unknown payoffs have been

received, the process is reduced to the repeated play of a conventional m x n, two-person, zero-sum game. (One can visualize permanently recorded. matrices.

a stack of matrices, each with all its payoffs

Some of these payoffs are hidden by covers on all

A probability is assigned to each matrix in the stack.

players know this probability distribution.

The

One of the matrices is chosen

according to the probability distribution by a neutral referee, and that matrix is shown to the players with the covers in place.

The game is played

repeatedly until a pair of alternatives corresponding to one of the covered payoffs is used.

The cover is then removed, and the play resumes until the

next cover must be removed, and so on. remains.

This process continues until no cover

The completely specified game is then repeated indefinitely.) There are two subclasses of adaptive competitive decision:

the

equal information case, in which A and B are given the same a priori knowledge about the unknown payoffs; and the unequal information case, in which their a priori data are different.

(The ’’stack of matrices” illustration is not

valid in certain unequal information cases.)

One example of the unequal

information subclass is

the situation in which Bknows the true value of

some payoff that A does

not know.

Adaptive competitive decision processes belong to a class of problems involving the repeated play of certain game matrices; these processes are all infinite games that have supinf solutions [7, 8, 9].

§ 2.

MEASURE OF PERFORMANCE

It appears that A should play so as to receive a large payoff, learn the payoffs that are unknown to him, and prevent B from learning the payoffs that are unknown to B.

Also, A should extract information about the payoffs

unknown to him by observing the alternatives that B has chosen during previous steps, while not divulging to B, by the alternatives A chooses, any information about the payoffs unknown to B.

A quantitative measure of

ADAPTIVE COMPETITIVE DECISION performance has been found, such that

A

71

can essentially satisfy these goals

by selecting a strategy that minimizes the measure of performance. measure is called the mean loss.

This

Player B should select a strategy that

maximizes the mean loss. Before the mean loss is defined,

it must be noted that the players

of adaptive competitive decision processes lose no flexibility by restricting their strategies to "behavior strategies."

A player is said to be using a

behavior strategy if at each step he selects a probability distribution over his tive.

m

(or

n) alternatives and uses that distribution to select an alterna­

The distribution he chooses may depend upon his knowledge

of the

history of the process, which consists of the alternatives selected by both players and payoffs received at all preceding steps.

(The analysis used by

Kuhn [10] is easily applied to adaptive competitive decision processes, which are infinite games of perfect recall,

to reach this conclusion.)

Because

behavior strategies are completely general for adaptive competitive decision processes, the following discussion will assume that both players restrict themselves to the use of behavior strategies. The probability distributions used by players

A

and

B

at the k

th

step are respectively denoted by

Here

p^ (q^) is the probability that x.U at the k step. Hence,

Note that

k

A (B)

is a superscript— not a power.

selects alternative

1c

In general, p

and

i (j )

q

k

depend

upon the past history of the process. The value of the payoff matrix is denoted by maximum expected return that

A

the unknown payoffs were known. values of the unknown payoffs. step when

A

uses

p

k

and

B

v; it represents the

could guarantee himself at one step, if all Of course, v

is a function of the true

The expected return to player uses

q

k

is denoted by

k

r :

A

at the

k*1*1

72

ROSENFELD

k

5?

r Since some of the payoffs



a^j

?

A

kk

pi V i J

are unknown, r

and the values of the unknown payoffs.

' 1c

1c Ic p , q ,

is a function of

The term

Tk k L = v — r

is called the single-step loss at the k*"*1 step; it is the difference between the expected payoff that

A

could guarantee himself if the values of the

unknown payoffs were known and the expected payoff

A

does receive.

ic. single-step loss, L , represents the expected loss to

A

at the

k

The

ttl.

step

of the process due to his lack of data about the unknown payoffs. The loss for

N

steps of the process, called the N-truncated loss,

LN' is Ln W

h

N k 2 Lk . k=l

The N-truncated loss is a function of the true values of the unknown payoffs as well as the strategies used by both players.

The mean value of the

N-truncated loss, with respect to the probability distribution of the unknown payoffs is

%

where

P

- J%dP

,

denotes the cumulative distribution function of the unknown payoffs.

is a function only of the strategies of

A

and

B.

The mean N-truncated

loss can also be written

rn (1)

T"N = imNv — L N

where kth

r

k

and



v

2 yr^ k=l

,

are the mean values of the expected return to

step and the value of the payoff matrix, respectively.

were to continue for

N

steps and then terminate,

A

at the

If the process

an optimum strategy for

ADAPTIVE COMPETITIVE DECISION player

A

73

would be the one that maximizes the mean value of the sum of his

expected returns at each step.

According to Equation 1, this is also the

strategy that minimizes the mean N-truncated loss. strategy maximizes

Conversely, B ’s optimum

L^.

Because the adaptive competitive decision process involves an infinite repetition of the game,

the mean N-truncated loss is useful only if

it can be extended to an infinite number of plays. approaches infinity of performance that

A

exists,

If the limit as

N

then this quantity is the measure of

should try to minimize and

B

to maximize.

This

measure is called the mean loss, L:

L = lim L,, . N -o o

in

It will be shown for the equal information case that a supinf solution exists:

(2)

where

L

and

respectively,

Sg

opr

= inf sup L = sup inf L , o o c q A B B A

represent all possible mixed strategies of

A

and

for the infinite game of adaptive competitive decision.

B, LQpt

may be negative as well as positive or zero. Measures of performance other than the mean loss (e.g., the mean sum of discounted expected returns) can be used.

The strategies resulting

from the optimization of these other measures will differ from those arrived at by optimizing the mean loss.

However,

the mean loss has been found to be

a very useful measure.

§ 3.

EQUAL INFORMATION —

SINGLE UNKNOWN PAYOFF

The following intuitive argument presents the essence of the more formal proof that follows.

This matrix is used as an illustration.

game is to be played repeatedly. B

1

a1 h 1 2

0 - 2

2

0

Pr (an =

=

k

Pr (an = ~1^ = I

The

74

ROSENFELD

Thus

v = -1,

Consider the auxiliary game defined by the matrix

v ~ a 21 + ^

v ~ a 22 + ^

1.5

L - 1

L - 1

L + 1

What relationship does this auxiliary matrix have to the original process? Let

L

be the mean loss associated with the competitive decision process,

a mean loss exists. for that step is not received.

if

When one step of the process is played, the mean loss

v — a i f

a-^

is received, and

v — a^

if

a-^

is

In the former case, the process terminates with no further

loss, because both players should use optimum strategies from the second step on.

If

a-Q

is not received,

the mean loss from the second step on is

L,

because the situation faced by the players is identical to the situation faced by them before the first step.

This reasoning leads to the auxiliary

matrix. Players

A

and

B

should be as willing to make one play of the

game specified by the auxiliary matrix as they are to participate in the original process.

Furthermore, the minimax value of the auxiliary matrix

should equal the mean loss for the adaptive competitive decision process upon which that matrix is based

(y(L )

y(L) = L .

(3)

Note that

A

is the minimizing player of the auxiliary matrix and

maximizing player, The value of L

denotes the value of the auxiliary matrix):

L

since the payoff represents a loss to

A

B

is the

and a gain to

that satisfies Equation 3 is called the optimum mean loss,

The minimax value of the auxiliary matrix for the example is as

follows: -CL)2 + 3 •5L + 0.5 y(L) =

if

L < 2-5

if

L > 2. 5 .

-L + 4-5 L - 1

B.

ADAPTIVE COMPETITIVE DECISION Therefore, L

75

. = i. opt ^

Because neither player's information about the process changes from one step to the next,

it seems reasonable that each player should repeatedly

use one probability distribution for his alternatives before the unknown payoff is received, and afterwards repeatedly use the minimax distribution for the completely known matrix.

Strategies that consist of the repeated use

of one distribution until an unknown payoff is received, and then another repeated distribution until the next unknown payoff is received, and so on, are called "piecewise-stationary."

The first segment of

A's

optimum

piecewise-stationary strategy is the distribution that minimizes the expected payoff from the auxiliary matrix, and expected payoff. -Jr = LQ pt

B's

first distribution maximizes the

These strategies, of course, are functions of

is substituted for

L

L.

When

in the auxiliary matrix for the example,

the

minimax strategies are found to be

_ ,1 po “ ^2’

_ ,1 In qo ~ (2’ 2) m

1x 2'

(It will be shown later that the strategy

qQ

is not truly optimum.)
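The fixed-point condition y(L) = L of Equation 3 can be checked numerically. The sketch below is ours and assumes Python with NumPy; it uses the 2 × 2 auxiliary matrix of the example, with entries 1.5, L − 1, L − 1, L + 1 as reconstructed from the display above, and bisection recovers L_opt = 1/2, in agreement with the minimax strategies (1/2, 1/2) just found.

```python
import numpy as np

def value_2x2(M):
    """Minimax value of a 2 x 2 zero-sum game, rows maximizing."""
    maximin = M.min(axis=1).max()
    minimax = M.max(axis=0).min()
    if maximin == minimax:                         # pure saddle point
        return maximin
    a, b = M[0]
    c, d = M[1]
    return (a * d - b * c) / (a + d - b - c)       # standard mixed-strategy value

def y(L):
    """Value of the example's auxiliary matrix as a function of the mean loss L."""
    return value_2x2(np.array([[1.5, L - 1.0],
                               [L - 1.0, L + 1.0]]))

# Locate the optimum mean loss as the fixed point y(L) = L by bisection;
# y(L) - L is continuous and changes sign on the bracket below.
lo, hi = -5.0, 5.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if y(mid) > mid:
        lo = mid
    else:
        hi = mid

L_opt = 0.5 * (lo + hi)
print(L_opt)            # approximately 0.5 for these entries
```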

The complete proof of the preceding solution is given in [11] and is briefly outlined here; it adheres closely to the proof given by Everett for "simple stochastic games" [9].

Consider an adaptive competitive decision

process with a single unknown payoff

an-

The total loss for

N

steps of

the process is

and the mean value of the total loss for

(5)

_ L

= iN

where

N

steps is

N k— 1 2 (v - r ) J] (1 - P?q£) k=l t=0 1 1

is defined equal to

0.

based upon the assumption that once

>

The validity of these expressions is a^

is received the players will

repeatedly use optimum strategies for the remaining known game; thus the loss will equal zero from that point on.

Equations 4 and 5 are valid only for

"memoryless" sequences of distributions, p^,

..., pN

and

q^,

..., qN .

ROSENFELD

76

Memoryless sequences are those in which

p

k

and

q

k

are independent of the

actual alternatives used by the players at steps 1, 2, • ••, k— 1.

However, the

use of Equations 4 and 5 will be restricted to cases in which one may assume with complete generality that the sequences are memoryless. Next an auxiliary matrix is constructed such that the (1, 1) is

v —

and all other entries are

minimizing player and

B

a^j + x.

qx -

A

is the

is the maximizing player, this single-step game has

a minimax solution with value denoted by and

If player

entry

y(x)

and optimum strategies

The value and minimax strategies are functions of variable

px x.

The

auxiliary matrix can be used to divide the set of all real numbers into two subsets,

S-^

and

X

S2 :

G

y(x) < x

and

x < 0, or

y(x) < x

and

x > 0;

y(x) > x

and

x > 0, or

y(x) > x

and

x< 0 .

if

St

(6) X

(The point If

G

x = 0 x

played, and if

S0

can be in both subsets.) S-^, if

g

A

N

steps of the adaptive competitive process are

uses distribution

or until the number of steps reaches

px

repeatedly until

a-^

N, then an upper bound to

is received Ljg

can be

found by substituting into Equation 5 inequalities based upon the definition of

S-^

in Equation 6.

(Equation 5 can be used despite the restriction to

memoryless sequences, since it has been demonstrated [11] that if player uses a given piecewise-stationary strategy, player

B

a memoryless strategy as he can with any more general strategy.) resultant bound to

Lx,

N x “ Ln


anci a s s i g n

>

and a s s i g n

.

tha t no

go

is u n d e t e r m i n e d .

(i)

and assign cr^,

itself

97

1:

(iii) such

but wh i c h

INFORMATION

Consider

point of P^

to

C-^.

P 1 S ^

If t h e r e

has

exists

yet been

a n d go o n

a

assigned

Step a ;

to

cr^

consistent with

to

ST, choose J-

if t h e r e

doesn ot

one

C-^ such

exist

a\, J-

such a

Step a directly.

to

Step a : (i)

We n o t e

able n u m b e r of points >

t

not

yet

that

any countable

assigned

assigned

to

to

ST 1

° ol

S^;

step

t h e r e h as

therefore,

(since

P

been only

there m u s t

a count­

exist

is u n c o u n t a b l e ) .

sa -
m(pn, en ) — h, u u

under the same conditions.

PROOF. assume

that

point

Pq

The necessity of

(a1) holds, a motion for the pursuer is fixed.

(a1)

e(-)GAME

ph (0) = p0 for

F u rth er, as

and

k = 0, 1, 2,

h

is quite obvious.

is given,

Wechoose a small

and, step by step, construct a function

(10)

and (£')

p^(')GAMP

Let

and some initial positive number

h

satisfying the conditions

M(ph ( t k+1L e ( t k+1)) .

denoted by

strategy in our sense. than a positive

e(i)

(t ),

If

123

for all times

t >

t

,

pursuer corresponding to the new

The class of all motions

p(')

obtained

A, and it is not hard to verify that it is a

h-^

and

h^

are sufficiently small, say less

6, then by the second supposition of Theorem

e(T)) < e

and consequently V(p0 , eQ ) < T(A) < t + 0 < V(pr

e1 ) + e .

Hence we have proved the inequality

V(p0 , e0 ) _ V(px, P2 ) < e

for all

pairs of initial positions

< Pq , e^ >

sufficiently close, one to the other.

THEOREM 9.

Then

and

V

< p^, e-^ > which are

is uniformly continuous.

If for all initial positions

< p^, e^ >

the pursuer has at least one successful strategy and the upper value

V(p, s)

tends to zero as

to the set of final positions neighborhood of the set

F

F

< p, s >

tends

(i.e., in an appropriate

the evader can be caught in a

short time), then the game is determined and its upper value

V

(in this case

V = V) is simultaneously a minor

and major function.

PROOF. the upper value

In view of Theorem's k s 6, and 8 it is enough to verify V(p, e)

satisfies both conditions

Let us assume that eeD(eQ , h ) . e (h) = e.

D(pQ, h) x D(e^, h) and

There exists a motion

e(-)eAME

Let us take an arbitrary motion

initial values

h +

< p^, e^ >).

(o')

Of course

F

such that

and

(P1).

are disjoint and e(0) = eQ

and

p(*)GA^Pt:

(corresponding to

p(h)eD(pQ, h)

and consequently

min V(p, e) < V(p(h), e) + h < V(pn , en ) . peD(p0 ,h) ~ 0 0

that

RYLL-NARDZEWSKI

124

Hence min V(p, e) < V(p0 , h0 ) — h peD(p0 ,h)

for all

Obviously the last inequality is equivalent to Now let us fix an arbitrary point A to

as follows: p

in the time interval

(a1).

peD(pQ, h) , and define a strategy

< 0, h >

the pursuer moves from

p^

along an arbitrary segment independently from the behavior of the

evader.

At the moment

e(h)eD(eQ,h ) , then

h

some

position

the pursuer

T(A) < h +

max V(p, e). eeD(en ,h) peD(p0 ,h):

isobtained, where

A

0

max

eeD(e0 ,h)

It is

guarantees the time

Hence, by the definition of

v (Prv hn) < h +

0

< p, e(h) >

applies one of his optimal strategies.

easy to see that the described strategy

all

eeD(e0 , h ) .

V, we have for

V(p, e).

This concludes the proof. It is obvious that,

if at least one major function exists,

assumptions of Theorem 9 are satisfied.

THEOREM 10.

then both

Hence we have

If there exists some major function,

then

the game is determined.

REMARK 5.

This theorem implies for example the determinateness of

a game of pursuit and evasion of two points in a circle since the function

V

is a major function (where Vp > Vg ) .

V , Vg

P

- V

e

are maximal velocities of the players and

However we do not have an efficient way to compute the value of

this quite simple game. The following example shows that a game can be determined but its value is a discontinuous function, consequently some condition of continuity in Theorem 9 is really necessary.

A THEORY OF PURSUIT AND EVASION EXAMPLE. d(e^,e2 ) = limited

pM

< 0, 1 > —

|e -^ — e2 1

by 1). It

E, d(pr

p£ ) =

125

|Pj_ - p2 1

and

(i.e., the maximal velocities of both players are

is

easy to see that this game is determined and itsvalue

is given by the formula

V(p, e)

and

V

is not continuous.

§ 5-

GENERALIZATIONS AND PROBLEMS

Let us mention some possibilities of generalization of the above theory which are for some applications, e.g., to replace compactness by local compactness or to introduce more general payoff functions of the form T

J

cp(p (t ), e(t))dt,

0 where

t

=

min |t:

valued function. PROBLEM.




€f|

and

cp

is a given continuous real

Functionals of this type are used in optimal control theory. Let us assume that a game is determined and its value is

a continuous function.

A strategy can be called nice if it is univalent,

optimal, defined for all initial positions, all admissible motions of the enemy.

and continuous with respect to

It seems that such nice strategies of

evasion exist only in exceptional cases when the space "barriers."

E

is without

On the other hand we suspect that they exist more often for the

pursuer, e.g.,

in the pursuit of one point by another in a simply connected

domain with a sufficiently smooth boundary, under some natural restrictions on velocities.

BIBLIOGRAPHY

[1]

KELENDZERIDZE, D. L . , "Theory of optimal pursuit strategy,” Steklov Mathematical Institute, January 13, 1961.

RYLL-NARDZEWSKI

126 [2]

ZIEBA, A,, "An example in pursuit theory," Studia Math. 22 (1962), pp. 1-6.

[3]

MYCIELSKI, J., "Continuous games with perfect information," The RAND Corporation, P-2591, June, 1962.

C. Ryll-Nardzewski University of Wroclaw Wroclaw, Poland

A VARIATIONAL APPROACH TO DIFFERENTIAL GAMES Leonard D. Berkovitz § 1.

INTRODUCTION

A differential game is a two-person zero-sum game that can be described roughly as follows.

Let

t

denote time, let

be a vector in real Euclidean

n

(t, x)

or state, x(t)

space.

The position,

space, and let

(R

x = (x^,

. .., xn )

be a fixed region of

of the game is determined by

a system of first-order differential equations

(1.1)

where

|f~ = G ^ t i X ^ , z),

y = (y^,

player I and The choice of

•••, y°)

..., n,

is a vector chosen at each instant of time by

1

z = (z , ..., z y

i = 1,

s

is governed

) is a vector similarly chosen by player II. by a vector-valued

function of position and

time, Y(t, x) = (YI (t, x )j .

defined on

(R; similarly, the choice of

Y°(t, x)),

z

is governed by a vector-valued

function of position and time,

Z (t,

defined on

(R.

x) = (Z1 (t) x ) ,

The functions

strategies, and the variables

Y(t, x) y

Zs ( t ,

and

and

z

x)),

Z(t, x)

are called pure

are called strategic variables.

Each player selects his strategy from a class of permissible functions before the start of play.

The game is one of perfect information:

know the past history; they know how the game proceeds ential equations (1.1)]; and at time game.

Therefore,

t

Both players

[the system of differ­

they know the state

x(t)

of the

in selecting a pure strategy, a player selects a set of 127

128

BERKOVITZ

instructions for choosing his strategic variable in all possible situations. Play begins at some initial time and position whenever

t

and the position vector

x(t)

point of a previously specified surface If

(T, x^)

starting at

(to*

xq

)

are such that

3

i-n

(ft* anc^ terminates (t, x(t))

is a.

contained in the boundary of

(ft.

denotes the point of termination of the play of the game (tQ,

xq

)* then the payoff to player I is given by

g(T, xT ) +

J

T f(t, x(t), y(t), z(t))dt,

t0 where

g

is a real-valued function on

(t, x, y, z)

3, f is a real-valued function on

space, y(t) = Y(t, x(t)), and

of player I is to select a strategy of player II is to select a strategy

z(t) = Z(t, x(t)).

Y(t, x)

The objective

that maximizes the payoff; that

Z(t, x)

that minimizes the payoff.

The study of differential games was initiated by Rufus Isaacs [6]. Isaac's treatment of the problem was quite formal and heuristic. very little additional work has been done on the problem.

Nevertheless,

Fleming [4],

[5],

and Scarf [7] have considered discrete approximations to certain types of differential games.

In [1], Fleming and the present author studied a special

class of differential games in the plane by means of techniques and results from the calculus of variations. In this paper, we shall study a much wider class of differential games than those considered in [1], again using techniques and results from the calculus of variations.

The present investigation, however,

complete, even for the special case corresponding to [1]. describe the class of games to be studied.

is more

First, we shall

Necessary conditions that must

hold along a path resulting from the employment of optimal pure strategies by the two players will then be deduced for the present class of games.

This

will be done by relating the problem of determining these necessary conditions to a problem of Bolza with differential inequalities as added side conditions. The theory of such Bolza problems will then be used to obtain the desired necessary conditions.

The continuity and differentiability properties of the

value will be studied next, and we shall show that the value satisfies an analogue of the Hamilton-Jacobi equation. shall develop a sufficiency theory that,

In the last part of the paper, we in principle, enables us to construct

DIFFERENTIAL GAMES optimal pure strategies. carried out.

In some

examples,

Some of our results

129

this construction can actually be

were obtained by Isaacs [6] in

a formal

manner or under more restrictive assumptions. We conclude this introductory section by listing certain definitions and conventions that we shall use throughout the paper. notation will generally be used. single letters.

Vector matrix

Vectors and matrices will be denoted by

Superscripts will be used to denote the components of a

vector; subscripts will be used to distinguish vectors.

Vectors will be

written as matrices consisting of either one row or one column. We shall not use a transpose symbol to distinguish between the two usages, as it will be clear from the context how the vector, is to be considered. of two n-dimensional vectors

X

and

Thus, the product

G, say, will be written as

^G.

All

scalars that occur will be real and all vectors will have real components. A vector will be called positive if each of its components is positive. Negative, nonpositive and nonnegative vectors are defined similarly.

The

length

of a vector

x

will be denoted by

llxll .

Let

X (t,x,y, z) = (x (t,x, y, z ), •••} X (t,x.,y,z))

be a vector-valued function defined and differentiable on a region of (t, x, y ,

z)

space.

If

z

is an s-dimensional vector, we can form a matrix

of partial derivatives in which the (i,j)-th element is

—3X— 1 dXJ The symbol

Xz

1• = ,1,

. . ., m,and

will denote this matrix.

,

, j. = 1, ...,

Note that when

reduces to the usual notation for a partial derivative. and

Xy

will have

will

be denoted

similar meanings.

by det

M.

Thedeterminant

s.

m = 1, s = 1, this The symbols

Xx

of a square matrix

M

130

BERKOVITZ The term region will mean, as usual, an open connected set.

The

closure of a region 3D will be denoted by 3D, and

3D will be called a (k) closed region. A real-valued function will be said to be of class C' 3D if it is

on

those of order

k

3) and all of its derivatives up to and including

have continuous extensions to 3D.

will be said to be of class on

3D.

on

on

A vector-valued function

3D if each of its components is

A region will be said to have a piecewise smooth boundary if its

boundary consists of a finite number of manifolds with boundary. The letter

t

will denote time, and the operator

(d/dt)

will be

denoted by a prime.

§ 2.

2.1

The Functions Let x

vector,

and let

(t, x, y, z) into

and

g

of

(t,x)

space and a

We assume that

space.

g

f(t, x, y, z)

with range contained in Euclidean §.

bounded region

§

of

is contained in the projection of

G(t,x,y,z) = (G1 (t,x,y,z),

on

be an s-dimensional

As we indicated in Section 1, we shall be concerned

with a real-valued function

of class

let y

be an s-dimensional vector. We shall be concerned with

space.

(t, x)

G

an n-dimensional vector,

z

a bounded region

§

f

be

FORMULATION OF THE GAME

n

and a vector-valued function

..., Gn (t,x,y,z))

space.

We assume that

f

and

G

are

We may write the system (1.1) in vector notation as

(2.1)

x 1 = G(t,x,y,z),

where the prime denotes differentiation with respect to time. 2.2

The Constraints We now discuss functions

of strategic variable

K(t, x, z)

z, and functions

K(t, x, y)

that will constrain the

choice of strategic variable

y.

into

K(t, x, z) be a vector-valued

(t, x, z)

space.

Let

Let

that will constrain the choice

denote the projection of function

§ of class

DIFFERENTIAL GAMES C^2 ^

§>(Z K

on

131

having range on p-dimensional space and satisfying the

following constraint conditions:

(2.2)

(i)

The set of points that

(ii)

If

(t, x, z)

K(t, x, z) > 0

in

is nonvoid.

p > s, then at each

(t, x, z)

at most

s

vanish.

The matrix with elements

of the components of

formed from those components vanish at

(t, x, z)

such

K1

§>,

in K

of

can

K

that

has maximum rank at

(t, x, z).

A third constraint condition will be stated presently. Let containing

w

be a naturally ordered subset of the integers

at most

s

elements.

For each such

w

1,

. .., p

consider the system of

equations

(2-3)

K l"(t, x, z) = 0,

where the symbol of

K

having indices that belong to

the natural order. holds.

denotes a vector whose components are those components

Let

S^2^

ws and the order of the components

denote the subset of

It follows from (2.2) (ii) that

S^2^

is

on which (2.3)

is either the null set or a

differentiable manifold, and the solutions of (2.3) can be used to obtain a system of local coordinates for

S^2 ^*

We can now state our third constraint

condition:

(iii)

Each manifold

S^2^

can be represented

by a finite number of overlapping coordinate systems obtained by using solutions of (2.3) that always include

(t, x)

in the set of

independent variables.

These solutions of (2.3) will be referred to as the coordinate solutions.

132

BERKOVITZ The constraints on the choice of

function

K(t, x, y)

of class

y

are furnished by vector-valued

on

having range in a p-dimensional

space, and satisfying (2.2) and (2.3) with appropriate changes in notation. An important special case, which also serves to illustrate (2.3), is that of

separable constraints.

not necessarily disjoint, of the set on j

in

g

for

S2 •If

k

i

in

x) > B^(t, x)

that

(t, x, z) is in

hold, be nonvoid.

on

S^, and let

g.

and S2

1, ..., s. B^(t, x)

is an integer in both

A^(t,

Let

S-^

For each

Let

A x (t, x)

be of class on

S2 , we suppose

(t, x)

in

g

for

that

g, let the set of

z

such

and such that the inequalities

A X (t, x) — z1 > 0,

i in

S-^,

7? — BJ (t, x) > 0,

j in

S2 ,

Then we can form a vector-valued function

that satisfies

(2.2) and (2.3) by taking the functions

ZJ

as the components of

_ B^(t, x)

be two subsets,

be of class

and

(2-2) and

K

K(t, x, z)

A 1 (t, x) — z1

and

and, if necessary, relabeling the

superscripts. 2-3

Terminal Manifolds For each

i = 1, ..., a ,

let

i £L

be an n-dimensional manifold of

(2 )

class

Cv

that lies in

(2.4)

g

and that is given parametrically by equations

t = t^a)

where

cr = (a^,

..., a11)

x = x t (a),

ranges over an open cube

CKL

in n-dimensional r

space.

We select a connected submanifold

CL

of each

3^.

Let

a

u

3 =

i=l We shall call belong to

3

81

g(t, x)

g(T, xT ) shall call

the terminal surface of

will be denoted by

Let Let

3

3.. 1 the game.

be a subregion of

g

such that

3

be a real-valued function of class

is therefore defined on g(T, x^)

Points

(t, x)

that

(T, x,p) .

3

and

the terminal payoff

is of class function.

is contained in on

8]_*

8-^*

The function

on each

3^

We

DIFFERENTIAL GAMES

133

3^

We remark that the assumption that all of by parameters

is made to simplify the subsequent exposition. (2 ) The essential assumption is that £L is a C K manifold.

2.4

cr

can be coordinatized

in

2.4 Strategies

A notion that will be used in our discussion of strategies and in other subsequent work is that of a decomposition of a region. A finite collection of subregions 𝒟_1, ..., 𝒟_a of a region 𝒟 will be said to constitute a decomposition of 𝒟 whenever the following conditions hold: (i) each 𝒟_i, i = 1, ..., a, is connected and has a piecewise-smooth boundary, (ii) 𝒟_i ∩ 𝒟_j = ∅ if i ≠ j, and (iii) 𝒟̄ = 𝒟̄_1 ∪ ... ∪ 𝒟̄_a. A real-valued function defined on 𝒟 will be said to be piecewise C^(k) on 𝒟 if there is a decomposition of 𝒟 such that on each 𝒟_i the function agrees with a function that is of class C^(k) on 𝒟̄_i. A vector-valued function will be said to be piecewise C^(k) if each component is piecewise C^(k).

Let ℛ be a region with closure ℛ̄ contained in ℰ and such that the terminal surface 𝒯 forms a part of the boundary of ℛ. Further, we assume that 𝒯 can be imbedded in an n-dimensional manifold that separates ℰ_1 into two parts and that ℛ always lies on the same side of this manifold. The region ℛ will be the region in which the play of the game takes place. Note that since ℛ̄ is contained in ℰ and ℰ is bounded, it follows that ℛ̄ is compact.

Let 𝒴 denote the class of functions Y that are piecewise C^(2) on ℛ and satisfy the following conditions: (i) the values y = Y(t, x) lie in an s-dimensional space; (ii) for every (t, x) in ℛ, the point (t, x, Y(t, x)) satisfies

    K̄(t, x, Y(t, x)) ≥ 0 ;

that is, the functional values satisfy the y-constraints. Similarly, let 𝒵 denote the class of functions Z that are piecewise C^(2) on ℛ and satisfy the following conditions: (i) the values z = Z(t, x) lie in an s-dimensional space; (ii) for every (t, x) in ℛ, the point (t, x, Z(t, x)) satisfies

    K(t, x, Z(t, x)) ≥ 0 .

The classes 𝒴 and 𝒵 are nonvoid.

Since a function Y in 𝒴 is piecewise C^(2) on ℛ, there is associated with it a decomposition of ℛ, and the same is true of a function Z in 𝒵. Hence a pair of functions (Y, Z), with Y in 𝒴 and Z in 𝒵, also has associated with it a decomposition of ℛ, say ℛ_1, ℛ_2, ... .
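The following sketch shows, for invented data, what membership in such a class amounts to computationally: a strategy given by a different smooth formula on each of two subregions separated by the hypersurface x^1 = 0, together with the check that its values satisfy the y-constraints. The decomposition, the formulas, and the constraint function standing in for K̄ are all hypothetical.

    # Sketch of a piecewise-C^(2) strategy Y on a region split into two
    # subregions by the hypersurface x^1 = 0, with the admissibility check
    # Kbar(t, x, Y(t, x)) >= 0.  All concrete formulas are hypothetical.
    import numpy as np

    def Y(t, x):
        if x[0] >= 0.0:              # one smooth formula on the first subregion
            return np.array([np.tanh(x[0]), 0.5 * x[1]])
        else:                        # another smooth formula on the second subregion
            return np.array([0.0, -0.5 * x[1]])

    def Kbar(t, x, y):               # hypothetical y-constraints: |y^i| <= 1
        return np.concatenate([1.0 - y, y + 1.0])

    def admissible(t, x):
        return bool(np.all(Kbar(t, x, Y(t, x)) >= 0.0))

    print(admissible(0.0, np.array([0.3, -0.8])), admissible(0.0, np.array([-0.3, 0.8])))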

It follows from standard existence theorems for ordinary differential equations and the hypotheses concerning the functions G, Y, and Z that, corresponding to a given point (t_0, x_0) interior to some ℛ_i, there exists a unique solution

(2.5)    x(t) = x(t; t_0, x_0)

of the system of differential equations

(2.6)    x' = G(t, x, Y(t, x), Z(t, x)),

defined on some interval about t_0 and satisfying the initial condition

(2.7)    x(t_0) = x(t_0; t_0, x_0) = x_0 .
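A numerical sketch of (2.5)–(2.7): inside a single region of the decomposition, where the right-hand side is smooth, the trajectory through (t_0, x_0) can be produced with a standard ODE solver. The particular G, Y, and Z below are invented smooth stand-ins, not the functions of the text.

    # Integrate x' = G(t, x, Y(t, x), Z(t, x)) from the initial point (t0, x0).
    # G, Y, Z are hypothetical smooth choices, so the solution is unique as in (2.5).
    import numpy as np
    from scipy.integrate import solve_ivp

    def Y(t, x):                      # hypothetical strategy for player I
        return np.clip(x, -1.0, 1.0)

    def Z(t, x):                      # hypothetical strategy for player II
        return -0.5 * x

    def G(t, x, y, z):                # hypothetical dynamics
        return -x + y + z

    def rhs(t, x):
        return G(t, x, Y(t, x), Z(t, x))

    t0, x0 = 0.0, np.array([1.0, -0.5])
    sol = solve_ivp(rhs, (t0, t0 + 2.0), x0, max_step=0.01)
    print("x at the final time:", sol.y[:, -1])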

What we shall need, however, is the assurance that either (i) x(t; t_0, x_0) can be continued across the boundary of ℛ_i so as to reach other regions ℛ_j, or (ii) x(t; t_0, x_0) extends to a point on the boundary of ℛ_i that is also a point of 𝒯. In terms of the game, we need to be assured that a play starting at (t_0, x_0) will terminate. Further, if (t_0, x_0) is both an interior point of ℛ and a boundary point of ℛ_i and ℛ_j for some pair of indices i, j, then several possibilities exist for solutions of (2.6) satisfying the initial conditions (2.7). Thus, to ensure that play can take place, we need to impose restrictions on the functions Y and Z chosen as strategies by the players. The restrictions presently to be made are imposed partly for this purpose, and partly to enable us to carry out our analysis.

We define a regular decomposition of a region ℛ to be a decomposition in which the constituent subregions can be designated ℛ_{ij}, i = 1, ..., a, j = 1, ..., j_i, in such a way (see Fig. 1) that the following conditions are satisfied:

(i) The regions ℛ_i, defined for each i = 1, ..., a by the formula

    ℛ_i = ∪_{j=1}^{j_i} ℛ_{ij} ,

constitute a decomposition of ℛ.

(ii) For each i = 1, ..., a, the region ℛ_{i j_i} always lies on the same side of 𝒯̂_i, and ℛ_{ij} ∩ 𝒯_i = ∅ whenever j ≠ j_i.

(iii) For each i = 1, ..., a, ℛ_i ∩ 𝒯_k = ∅ whenever i ≠ k.

(iv) For each i = 1, ..., a and j = 1, ..., j_i − 1, the set 𝔐_{ij} = ℛ̄_{ij} ∩ ℛ̄_{i,j+1} is a connected and oriented manifold of dimension n and class C^(2). We suppose that 𝔐_{ij} can be described by equations

(2.8)    t = t_{ij}(σ),    x = x_{ij}(σ),

where σ = (σ^1, ..., σ^n) ranges over a cube in Euclidean n-space.

(v) Each manifold 𝔐_{ij} divides ℛ_i into two disjoint regions such that each region lies entirely on one side of 𝔐_{ij}. Furthermore, 𝔐_{ij} ∩ 𝔐_{ik} = ∅ for i = 1, ..., a and j, k = 1, ..., j_i − 1 with j ≠ k.

(vi) For each subset i_1, ..., i_k of the integers 1, ..., a, the set 𝔑_{i_1 ... i_k}, defined by the formula

    𝔑_{i_1 ... i_k} = ℛ̄_{i_1} ∩ ℛ̄_{i_2} ∩ ... ∩ ℛ̄_{i_k}

if

Z*1 ^ ,

= 0

if

B1 (t, x) < Z*x (t, x) < A 1 (t, x),

< 0

if

Z ^(t, x) = A^(t, x ) .

statements involving neither

(t, x)

If

i

i

deleted.

F ^ = 0.

belongs to

S-^

and

If

i

is in

Similarly,

if

S2 , then at every

interior to some

if

Y i (t, x) = A i (t, x ) ,

, = 0

if

B X (t, x) < Y*x (t, x) < A 1 (t, x

< 0

if

Y #i (t, x) = B ^ t ,

is not

modifications

If

S^(S2 ), then (4 .13) holds with

0

" > Fyi

x) = B1 (t, x),

A 1 (B1 )

nor S2 , then

i(i = 1, . .., s)

(4.14)

S2 , then at every

interior to some

does not belong to

point

and

If

in both S-^and

K; if

component of

K.

i

S2 , then the appropriate

hold, as in the case of

belongs to

component of

x).

Sp

let

j^(i)

belongs toS2 , let

Suppose that

i

F . z-1-

denote the corresponding j2 (i)

belongs to

denote the corresponding

S^ and

and the definition of separable constraints, we get

S2 .

Then

from (4.4)

_ 1

DIFFERENTIAL GAMES

157

P i ; Ji(i) F . = 2 LLJ KJ i = - L i + M 2 j=l The conclusion (4.13) now follows from (4-5)* since

A

i

(4-6)* and the observation that,

i > B , one and only one ofthe three conditions

B^ < Z ^ < A^

can hold at a point.

Z

*i

The statement following

i = A ,

*i i Z = B ,

(4.13) is a

of (4.5), (4.6), and the observation that if ibelongs only jpW JiCi) S-^(S2 ), then i~i =0(d =0). Similar arguments yield (4.14).

consequence

to
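The conditions (4.13) are recognizable as the first-order conditions for minimizing F with respect to z^i over the interval B^i ≤ z^i ≤ A^i. The sketch below, with purely hypothetical numbers, checks them at a single point for one component i that belongs to both S_1 and S_2.

    # Check of the sign conditions (4.13) at one point for a single index i.
    # F_z^i would come from (4.4) along a candidate path; the numbers are hypothetical.
    def satisfies_4_13(Fz_i, Zstar_i, A_i, B_i, tol=1e-9):
        if abs(Zstar_i - B_i) <= tol:          # lower bound attained
            return Fz_i >= -tol
        if abs(Zstar_i - A_i) <= tol:          # upper bound attained
            return Fz_i <= tol
        if B_i < Zstar_i < A_i:                # interior of the interval
            return abs(Fz_i) <= tol
        return False                           # Z*^i violates the constraints

    print(satisfies_4_13(Fz_i=0.7, Zstar_i=-1.0, A_i=1.0, B_i=-1.0))   # True
    print(satisfies_4_13(Fz_i=0.7, Zstar_i=0.2,  A_i=1.0, B_i=-1.0))   # False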

Another special case in which a simplification of Theorem 1 can be effected is that in which the constraints are independent of the state variable x.

COROLLARY 2. If K is independent of x, i.e., K = K(t, z), then at every (t, x) interior to some ℛ_{ij},

(4.15)    F_z Z*_x = 0 .

If K̄ is independent of x, i.e., K̄ = K̄(t, y), then at every (t, x) interior to some ℛ_{ij},

(4.16)    F_y Y*_x = 0 .

If both K and K̄ are independent of x, then along E*,

(4.17)    λ̇(t) = − F_x .

Equation (4.15) follows from (4.10) and the observation that, since K is independent of x, we have K_x = 0. Equation (4.16) is established similarly. Equation (4.17) follows from (4.3), (4.15), and (4.16).

If the constraints are separable and the functions A^i, B^i, Ā^i, B̄^i are all constants, or depend at most on t, then the conclusions of both corollaries apply.

4.4 Further Necessary Conditions

The next necessary condition is most easily stated in terms of a game that we now describe. Let E* be an optimal path and let λ(t) be the function described in Theorem 2. At every point Q = (t, x*(t)) of E* with t ≠ t_1, we construct a game Γ(t, x*(t)) in the following manner. The pure strategies for player I consist of (a) all vectors y = Y(t, x*(t)) for some Y in 𝒴 that is continuous at Q, and (b) all vectors y = Y⁺(t, x*(t)) for some Y in 𝒴 that is discontinuous at Q. The pure strategies for player II consist of (a) all vectors z = Z(t, x*(t)) for some Z in 𝒵 that is continuous at Q, and (b) all vectors z = Z⁺(t, x*(t)) for some Z in 𝒵 that is discontinuous at Q. The payoff is F(t, x*(t), y, z, λ(t)), where F is the function defined in (4.1). If t = t_1, the game is constructed as before, except that we take the payoff to be F(t, x*(t), y, z, λ⁺(t)).

THEOREM 3. At every point Q = (t, x*(t)) of an optimal path E*, with t ≠ t_1, the game Γ(t, x*(t)) has value F(t, x*(t), y*(t), z*(t), λ(t)), where y*(t) and z*(t) are given by (2.15). An optimal pure strategy for player I is y*(t) and an optimal pure strategy for player II is z*(t). At t = t_1, Γ(t, x*(t)) has value F(t, x*(t), y*(t)⁺, z*(t)⁺, λ⁺(t)); an optimal pure strategy for player I is y*(t)⁺, and an optimal pure strategy for player II is z*(t)⁺.

This theorem follows from Condition II of Theorem 1 when that theorem is applied to the problem of maximizing against Z* in the class 𝒴 and to the problem of minimizing against Y* in the class 𝒵. The following theorem is a consequence of Condition III, applied to the preceding maximization and minimization problems.

THEOREM 4. Let E* be an optimal path, and let (t, x*(t)) be a point of E* with t ≠ t_1. Let K̃ denote the vector formed from K by taking those components of K that vanish at that point, and let e = (e^1, ..., e^s) be a nonzero solution vector of the linear system K̃_z e = 0. Then

    e((F + μK)_zz)e ≥ 0

at this point. Similarly, let ē be an s-dimensional nonzero solution vector of the corresponding linear system formed from those components of K̄ that vanish at this point. Then

    ē((F + μ̄K̄)_yy)ē ≤ 0

at this point.
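Theorem 3 asserts that (y*(t), z*(t)) is a saddle point of the pointwise payoff F(t, x*(t), y, z, λ(t)) over the admissible values. A crude numerical test of this saddle-point property on a grid is sketched below; the payoff function, the admissible intervals, and the candidate pair are all hypothetical stand-ins chosen only to illustrate the check.

    # Grid test of the saddle-point property at one point:
    #   F(y, z*) <= F(y*, z*) <= F(y*, z)  for all admissible y and z,
    # since player I maximizes in y and player II minimizes in z.
    import numpy as np

    def F(y, z):                     # stands in for F(t, x*(t), y, z, lambda(t))
        return -0.5 * y**2 + y * z + 0.5 * z**2 + 0.3 * y - 0.1 * z

    y_box = np.linspace(-1.0, 1.0, 201)      # admissible y-interval (hypothetical)
    z_box = np.linspace(-1.0, 1.0, 201)      # admissible z-interval (hypothetical)

    # Candidate pair found by brute force for this concave-convex illustration.
    y_star = max(y_box, key=lambda y: min(F(y, z) for z in z_box))
    z_star = min(z_box, key=lambda z: max(F(y, z) for y in y_box))

    ok = all(F(y, z_star) <= F(y_star, z_star) + 1e-9 for y in y_box) and \
         all(F(y_star, z) >= F(y_star, z_star) - 1e-9 for z in z_box)
    print("candidate saddle point:", y_star, z_star, "verified:", ok)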

§5. THE VALUE W(t, x)

5.1 The Function λ(t, x)

The principal purpose of this section is to study the differentiability properties of the value W and to deduce a partial differential equation that W satisfies in its regions of differentiability. For this purpose, it is convenient to introduce a function λ(t, x), related to the multiplier λ, and to derive some of its properties.

Consider the regular decomposition of ℛ associated with (Y*, Z*), and consider a subregion ℛ_i, where ℛ_i is as in (i) of the definition of regular decomposition (see Sec. 2.4). In ℛ_i we now consider ℛ_{i j_i}. To simplify notation, we set j_i = m. Through each point of ℛ_{im} there passes a unique optimal path terminating at 𝒯_i. Conversely, each point

(5.1)    (T, x_T) = (t_i(σ), x_i(σ))

of 𝒯_i is the terminal point of precisely one optimal path. Thus an optimal path in ℛ_{im} can be written as

    x*(t, σ) = x*(t; t_i(σ), x_i(σ)) .

Since x*(t, σ) is the solution of (2.14) with x(t_i(σ)) = x_i(σ), it follows that x*(t, σ) is defined on the region 𝒰_{im} of (t, σ)-space determined by the requirements σ ∈ 𝒦_i and t_{i,m−1}(σ) ≤ t ≤ t_i(σ). Moreover, x*(t, σ) is of class C^(2) on 𝒰_{im}. The argument to establish these statements uses (i) the continuity properties of G, Y*, and Z*, (ii) extensions of Y*_{im} and Z*_{im} to functions that are of class C^(2) in a region containing ℛ̄_{im} in its interior, (iii) the assumption that t_i(σ) and x_i(σ) are of class C^(2), and (iv) standard theorems concerning the dependence of solutions of differential equations on initial conditions.

solutions of differential equations on initial conditions. We now show that for (t, cr)

such that

(5.2)

Let the vector

cr

in

3C^

(t, cr) v and

det

* xt

in *11. , where lm

t^ ^ ^ f 07)
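The parametrization x*(t, σ) of optimal paths by their terminal points can be realized numerically by integrating the closed-loop system backward in time from the terminal condition x(t_i(σ)) = x_i(σ). In the sketch below the dynamics and the terminal data are hypothetical stand-ins for (2.14) and (2.4).

    # Backward integration from the terminal manifold: the map (t, sigma) -> x*(t, sigma).
    # The closed-loop dynamics and the terminal data t_i, x_i are hypothetical.
    import numpy as np
    from scipy.integrate import solve_ivp

    def closed_loop(t, x):            # stands in for G(t, x, Y*(t, x), Z*(t, x))
        return -x + 0.1 * np.sin(t)

    def t_i(sigma):                   # hypothetical terminal time t_i(sigma)
        return 1.0 + 0.2 * sigma[0]

    def x_i(sigma):                   # hypothetical terminal state x_i(sigma)
        return np.array([sigma[0], sigma[1]])

    def x_star(t, sigma):
        # Optimal path through the terminal point (t_i(sigma), x_i(sigma)), at time t.
        T, xT = t_i(sigma), x_i(sigma)
        sol = solve_ivp(closed_loop, (T, t), xT, max_step=0.01)   # integrate backward
        return sol.y[:, -1]

    sigma = np.array([0.4, -0.3])
    print(x_star(0.2, sigma))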