Game theory
Do the following questions: 11.6, 11.7, 11.8, 11.9, 13.10, 13.11, 13.12, 13.13, 13.14, 13.15
Part One Introduction 1
Chapter 1 A First Look at the Applications 3
2 A First Look at the Theory 17
Part Two Strategic Form Games: Theory and Practice 33
3 Strategic Form Games and Dominant Strategies 35
4 Dominance Solvability 49
5 Nash Equilibrium 63
6 An Application: Cournot Duopoly 75
7 An Application: The Commons Problem 91
8 Mixed Strategies 103
9 Two Applications: Natural Monopoly and Bankruptcy Law 121
10 Zero-Sum Games 139
Part Three Extensive Form Games: Theory and Applications 155
11 Extensive Form Games and Backward Induction 157
12 An Application: Research and Development 179
13 Subgame Perfect Equilibrium 193
14 Finitely Repeated Games 209
15 Infinitely Repeated Games 227
16 An Application: Competition and Collusion in the NASDAQ Stock Market 243
17 An Application: OPEC 257
18 Dynamic Games with an Application to the Commons Problem 275
Part Four Asymmetric Information Games: Theory and Applications 291
19 Moral Hazard and Incentives Theory 293
20 Games with Incomplete Information 309
21 An Application: Incomplete Information in a Cournot Duopoly 331
22 Mechanism Design, the Revelation Principle, and Sales to an Unknown Buyer 349
23 An Application: Auctions 367
24 Signaling Games and the Lemons Problem 383
Part Five Foundations 401
25 Calculus and Optimization 403
26 Probability and Expectation 421
27 Utility and Expected Utility 433
28 Existence of Nash Equilibria 451
Index 465
CONTENTS
Preface XXI
A Reader's Guide XXIX
Part One Introduction 1
Chapter 1 A First Look at the Applications 3
1.1 Games That We Play 3
1.2 Background 7
1.3 Examples 8
Summary 12
Exercises 12
Chapter 2 A First Look at the Theory 17
2.1 Rules of the Game: Background 17
2.2 Who, What, When: The Extensive Form 18
2.2.1 Information Sets and Strategies 20
2.3 Who, What, When: The Normal (or Strategic) Form 21
2.4 How Much: Von Neumann-Morgenstern Utility Function 23
2.5 Representation of the Examples 25
Summary 27
Exercises 28
Part Two Strategic Form Games: Theory and Practice 33
Chapter 3 Strategic Form Games and Dominant Strategies 35
3.1 Strategic Form Games 35
3.1.1 Examples 36
3.1.2 Equivalence with the Extensive Form 39
3.2 Case Study The Strategic Form of Art Auctions 40
3.2.1 Art Auctions: A Description 40
3.2.2 Art Auctions: The Strategic Form 40
3.3 Dominant Strategy Solution 41
3.4 Case Study Again A Dominant Strategy at the Auction 43
Summary 44
Exercises 45
Chapter 4 Dominance Solvability 49
4.1 The Idea 49
4.1.1 Dominated and Undominated Strategies 49
4.1.2 Iterated Elimination of Dominated Strategies 51
4.1.3 More Examples 51
4.2 Case Study Electing the United Nations Secretary General 54
4.3 A More Formal Definition 55
4.4 A Discussion 57
Summary 59
Exercises 59
Chapter 5 Nash Equilibrium 63
5.1 The Concept 63
5.1.1 Intuition and Definition 63
5.1.2 Nash Parables 64
5.2 Examples 66
5.3 Case Study Nash Equilibrium in the Animal Kingdom 68
5.4 Relation Between the Solution Concepts 69
Summary 71
Exercises 71
Chapter 6 An Application: Cournot Duopoly 75
6.1 Background 75
6.2 The Basic Model 76
6.3 Cournot Nash Equilibrium 77
6.4 Cartel Solution 79
6.5 Case Study Today's OPEC 81
6.6 Variants on the Main Theme I: A Graphical Analysis 82
6.6.1 The IEDS Solution to the Cournot Model 84
6.7 Variants on the Main Theme II: Stackelberg Model 85
6.8 Variants on the Main Theme III: Generalization 86
Summary 87
Exercises 88
Chapter 7 An Application: The Commons Problem 91
7.1 Background: What is the Commons? 91
7.2 A Simple Model 93
7.3 Social Optimality 95
7.4 The Problem Worsens in a Large Population 96
7.5 Case Studies Buffalo, Global Warming, and the Internet 97
7.6 Averting a Tragedy 98
Summary 99
Exercises 100
Chapter 8 Mixed Strategies 103
8.1 Definition and Examples 103
8.1.1 What Is a Mixed Strategy? 103
8.1.2 Yet More Examples 106
8.2 An Implication 107
8.3 Mixed Strategies Can Dominate Some Pure Strategies 108
8.3.1 Implications for Dominant Strategy Solution and IEDS 109
8.4 Mixed Strategies are Good for Bluffing 110
8.5 Mixed Strategies and Nash Equilibrium 111
8.5.1 Mixed-Strategy Nash Equilibria in an Example 113
8.6 Case Study Random Drug Testing 114
Summary 115
Exercises 116
Chapter 9 Two Applications: Natural Monopoly and Bankruptcy Law 121
9.1 Chicken, Symmetric Games, and Symmetric Equilibria 121
9.1.1 Chicken 121
9.1.2 Symmetric Games and Symmetric Equilibria 122
9.2 Natural Monopoly 123
9.2.1 The Economic Background 123
9.2.2 A Simple Example 124
9.2.3 War of Attrition and a General Analysis 125
9.3 Bankruptcy Law 128
9.3.1 The Legal Background 128
9.3.2 A Numerical Example 128
9.3.3 A General Analysis 130
Summary 132
Exercises 133
Chapter 10 Zero-Sum Games 139
10.1 Definition and Examples 139
10.2 Playing Safe: Maxmin 141
10.2.1 The Concept 141
10.2.2 Examples 142
10.3 Playing Sound: Minmax 144
10.3.1 The Concept and Examples 144
10.3.2 Two Results 146
10.4 Playing Nash: Playing Both Safe and Sound 147
Summary 149
Exercises 149
Part Three Extensive Form Games: Theory and Applications 155
Chapter 11 Extensive Form Games and Backward Induction 157
11.1 The Extensive Form 157
11.1.1 A More Formal Treatment 158
11.1.2 Strategies, Mixed Strategies, and Chance Nodes 160
11.2 Perfect Information Games: Definition and Examples 162
11.3 Backward Induction: Examples 165
11.3.1 The Power of Commitment 167
11.4 Backward Induction: A General Result 168
11.5 Connection With IEDS in the Strategic Form 170
11.6 Case Study Poison Pills and Other Takeover Deterrents 172
Summary 174
Exercises 175
Chapter 12 An Application: Research and Development 179
12.1 Background: R&D, Patents, and Oligopolies 179
12.1.1 A Patent Race in Progress: High-Definition Television 180
12.2 A Model of R&D 181
12.3 Backward Induction: Analysis of the Model 183
12.4 Some Remarks 188
Summary 189
Exercises 190
Chapter 13 Subgame Perfect Equilibrium 193
13.1 A Motivating Example 193
13.2 Subgames and Strategies Within Subgames 196
13.3 Subgame Perfect Equilibrium 197
13.4 Two More Examples 199
13.5 Some Remarks 202
13.6 Case Study Peace in the World War I Trenches 203
Summary 205
Exercises 205
Chapter 14 Finitely Repeated Games 209
14.1 Examples and Economic Applications 209
14.1.1 Three Repeated Games and a Definition 209
14.1.2 Four Economic Applications 212
14.2 Finitely Repeated Games 214
14.2.1 Some General Conclusions 218
14.3 Case Study Treasury Bill Auctions 219
Summary 222
Exercises 222
Chapter 15 Infinitely Repeated Games 227
15.1 Detour Through Discounting 227
15.2 Analysis of Example 3: Trigger Strategies and Good Behavior 229
15.3 The Folk Theorem 232
15.4 Repeated Games With Imperfect Detection 234
Summary 237
Exercises 238
Chapter 16 An Application: Competition and Collusion in the NASDAQ Stock Market 243
16.1 The Background 243
16.2 The Analysis 245
16.2.1 A Model of the NASDAQ Market 245
16.2.2 Collusion 246
16.2.3 More on Collusion 248
16.3 The Broker-Dealer Relationship 249
16.3.1 Order Preferencing 249
16.3.2 Dealers Big and Small 250
16.4 The Epilogue 251
Summary 252
Exercises 252
Chapter 17 An Application: OPEC 257
17.1 Oil: A Historical Review 257
17.1.1 Production and Price History 258
17.2 A Simple Model of the Oil Market 259
17.3 Oil Prices and the Role of OPEC 260
17.4 Repeated Games With Demand Uncertainty 262
17.5 Unobserved Quota Violations 266
17.6 Some Further Comments 269
Summary 270
Exercises 271
Chapter 18 Dynamic Games with an Application to the Commons Problem 275
18.1 Dynamic Games: A Prologue 275
18.2 The Commons Problem: A Model 276
18.3 Sustainable Development and Social Optimum 278
18.3.1 A Computation of the Social Optimum 278
18.3.2 An Explanation of the Social Optimum 281
18.4 Achievable Development and Game Equilibrium 282
18.4.1 A Computation of the Game Equilibrium 282
18.4.2 An Explanation of the Equilibrium 284
18.4.3 A Comparison of the Socially Optimal and the Equilibrium Outcomes 285
18.5 Dynamic Games: An Epilogue 286
Summary 287
Exercises 288
Part Four Asymmetric Information Games: Theory and Applications 291
Chapter 19 Moral Hazard and Incentives Theory 293
19.1 Moral Hazard: Examples and a Definition 293
19.2 A Principal-Agent Model 295
19.2.1 Some Examples of Incentive Schemes 297
19.3 The Optimal Incentive Scheme 299
19.3.1 No Moral Hazard 299
19.3.2 Moral Hazard 299
19.4 Some General Conclusions 301
19.4.1 Extensions and Generalizations 303
19.5 Case Study Compensating Primary Care Physicians in an HMO 304
Summary 305
Exercises 306
Chapter 20 Games with Incomplete Information 309
20.1 Some Examples 309
20.1.1 Some Analysis of the Examples 312
20.2 A Complete Analysis of Example 4 313
20.2.1 Bayes-Nash Equilibrium 313
20.2.2 Pure-Strategy Bayes-Nash Equilibria 315
20.2.3 Mixed-Strategy Bayes-Nash Equilibria 316
20.3 More General Considerations 318
20.3.1 A Modified Example 318
20.3.2 A General Framework 320
20.4 Dominance-Based Solution Concepts 321
20.5 Case Study Final Jeopardy 323
Summary 326
Exercises 326
Chapter 21 An Application: Incomplete Information in a Cournot Duopoly 331
21.1 A Model and its Equilibrium 331
21.1.1 The Basic Model 331
21.1.2 Bayes-Nash Equilibrium 332
21.2 The Complete Information Solution 336
21.3 Revealing Costs to a Rival 338
21.4 Two-Sided Incompleteness of Information 340
21.5 Generalizations and Extensions 341
21.5.1 Oligopoly 341
21.5.2 Demand Uncertainty 342
Summary 343
Exercises 343
Chapter 22 Mechanism Design, the Revelation Principle, and Sales to an Unknown Buyer 349
22.1 Mechanism Design: The Economic Context 349
22.2 A Simple Example: Selling to a Buyer With an Unknown Valuation 351
22.2.1 Known Passion 351
22.2.2 Unknown Passion 352
22.3 Mechanism Design and the Revelation Principle 356
22.3.1 Single Player 356
22.3.2 Many Players 357
22.4 A More General Example: Selling Variable Amounts 358
22.4.1 Known Type 359
22.4.2 Unknown Type 359
Summary 362
Exercises 362
Chapter 23 An Application: Auctions 367
23.1 Background and Examples 367
23.1.1 Basic Model 369
23.2 Second-Price Auctions 369
23.3 First-Price Auctions 371
23.4 Optimal Auctions 373
23.4.1 How Well Do the First- and Second-Price Auctions Do? 375
23.5 Final Remarks 376
Summary 377
Exercises 378
Chapter 24 Signaling Games and the Lemons Problem 383
24.1 Motivation and Two Examples 383
24.1.1 A First Analysis of the Examples 385
24.2 A Definition, an Equilibrium Concept, and Examples 387
24.2.1 Definition 387
24.2.2 Perfect Bayesian Equilibrium 387
24.2.3 A Further Analysis of the Examples 389
24.3 Signaling Product Quality 391
24.3.1 The Bad Can Drive Out the Good 391
24.3.2 Good Can Signal Quality? 392
24.4 Case Study Used Cars: A Market for Lemons? 394
24.5 Concluding Remarks 395
Summary 396
Exercises 396
Part Five Foundations 401
Chapter 25 Calculus and Optimization 403
25.1 A Calculus Primer 403
25.1.1 Functions 404
25.1.2 Slopes 405
25.1.3 Some Formulas 407
25.1.4 Concave Functions 408
25.2 An Optimization Theory Primer 409
25.2.1 Necessary Conditions 409
25.2.2 Sufficient Conditions 410
25.2.3 Feasibility Constraints 411
25.2.4 Quadratic and Log Functions 413
Summary 414
Exercises 415
Chapter 26 Probability and Expectation 421
26.1 Probability 421
26.1.1 Independence and Conditional Probability 425
26.2 Random Variables and Expectation 426
26.2.1 Conditional Expectation 427
Summary 428
Exercises 428
Chapter 27 Utility and Expected Utility 433
27.1 Decision Making Under Certainty 433
27.2 Decision Making Under Uncertainty 436
27.2.1 The Expected Utility Theorem and the Expected Return Puzzle 437
27.2.2 Details on the Von Neumann-Morgenstern Theorem 439
27.2.3 Payoffs in a Game 441
27.3 Risk Aversion 441
Summary 444
Exercises 444
Chapter 28 Existence of Nash Equilibria 451
28.1 Definition and Examples 451
28.2 Mathematical Background: Fixed Points 453
28.3 Existence of Nash Equilibria: Results and Intuition 458
Summary 460
Exercises 461
Index 465
PREFACE
This book evolved out of lecture notes for an undergraduate course in game theory that I have taught at Columbia University for the past six years. On the first two occasions I took the straight road, teaching out of available texts. But the road turned out to be somewhat bumpy; for a variety of reasons I was not satisfied with the many texts that I considered. So the third time around I built myself a small bypass; I wrote a set of sketchy lecture notes from which I taught while I assigned a more complete text to the students. Although this compromise involved minimal costs to me, it turned out to be even worse for my students, since we were now traveling on different roads. And then I (foolishly) decided to build my own highway; buoyed by a number of favorable referee reports, I decided to turn my notes into a book. I say foolishly because I had no idea how much hard work is involved in building a road. I only hope I built a smooth one.
The Book's Purpose And Its Intended Audience
The objective of this book is to provide a rigorous yet accessible introduction to game theory and its applications, primarily in economics and business, but also in political science, the law, and everyday life. The material is intended principally for two audiences: first, an undergraduate audience that would take this course as an elective for an economics major. (My experience has been, however, that my classes are also heavily attended by undergraduate majors in engineering and the sciences who take this course to fulfill their economics requirement.) The many applications and case studies in the book should make it attractive to its second audience, MBA students in business schools. In addition, I have tried to make the material useful to graduate students in economics and related disciplines (Ph.D. students in political science, Ph.D. students in economics not specializing in economic theory, etc.) who would like to have a source from which they can get a self-contained, albeit basic, treatment of game theory.
Pedagogically I have had one overriding objective: to write a textbook that would take the middle road between the anecdotal and the theorem-driven treatments of the subject. On the one hand is the approach that teaches purely by examples and anecdotes. In my experience that leaves the students, especially the brighter ones, hungering for more. On the other hand, there is the more advanced approach emphasizing a rigorous treatment, but again, in my experience, if there are too few examples and applications it is difficult to keep even the brighter students interested.
I have tried to combine the best elements of both approaches. Every result is precisely stated (albeit with minimal notation), all assumptions are detailed, and at least a sketch of a proof is provided. The text also contains nine chapter-length applications and twelve fairly detailed case studies.
Distinctive Features Of The Book
I believe this book improves on available undergraduate texts in the following ways.
Content: a full description of utility theory and a detailed analysis of dynamic game theory
The book provides a thorough discussion of the single-agent decision theory that forms the underpinning of game theory. (That exercise takes up three chapters in Part Five.) More importantly perhaps, this is the first text that provides a detailed analysis of dynamic strategic interaction (in Part Three). The theory of repeated games is studied over two and a half chapters, including discussions of finitely and infinitely repeated games as well as games with varying stage payoffs. I follow the theory with two chapter-length applications: market-making on the NASDAQ financial market and the price history of OPEC. A discussion of dynamic games (in which the game environment evolves according to players' previous choices) follows, along with an application to the dynamic commons problem. I believe many of the interesting applications of game theory are dynamic (student interest seems always to heighten when I get to this part of the course), and I have found that every other text pays only cursory attention to many dynamic issues.
Style: emphasis on a parallel development of theory and examples
Almost every chapter that introduces a new concept opens with numerical examples, some of which are well known and many of which are not. Sometimes I have a leading example and at other times a set of (small) examples. After explaining the examples, I go to the concept and discuss it with reasonable rigor. At this point I return to the examples and analyze the just-introduced concept within the context of the examples. At the end of a section (a set of chapters on related ideas) I devote a whole chapter, and sometimes two, to economic applications of those ideas.
Length and Organization: bite-sized chapters and a static-to-dynamic progression
I decided to organize the material within each chapter in such a fashion that the essential elements of a whole chapter can be taught in one class (or a class and a half, depending on level). In my experience it has been a lot easier to keep the students engaged with this structure than with texts that have individual chapters that are, for example, over fifty pages long. The topics evolve in a natural sequence: static complete information to dynamic complete information to static incomplete information. I decided to skip much of dynamic incomplete information (other than signaling) because the questions in this part of the subject are a lot easier than the answers (and my students seemed to have little stomach for equilibrium refinements, for example). There are a few advanced topics as well; different instructors will have the freedom to decide which subset of the advanced topics they would like to teach in their course. Sections that are more difficult are marked with the symbol . Depending on level, some instructors will want to skip
these sections at first presentation, while others may wish to take extra time in discussing the material.
Exercises
At the end of each chapter there are about twenty-five to thirty problems (in the Exercises section). In addition, within the text itself, each chapter has a number of questions (or concept checks) in which the student is asked to complete a part of an argument, to compute a remaining case in an example, to check the computation for an assertion, and so on. The point of these questions is to make sure that the reader is really following the chapter's argument; I strongly encourage my students to answer these questions and often include some of them in the problem sets.
Case Studies and Applications
At the end of virtually every theoretical chapter there is a case study drawn from real life to illustrate the concept just discussed. For example, after the chapter on Nash equilibrium, there is a discussion of its usage in understanding animal conflicts. After a chapter on backward induction (and the power of commitment), there is a discussion of poison pills and other take-over deterrents. Similarly, at the end of each cluster of similar topics there is a whole chapter-length application. These range from the tragedy of the commons to bankruptcy law to incomplete information Cournot competition.
An Overview And Two Possible Syllabi
The book is divided into five parts. The two chapters of Part One constitute an Introduction. Part Two (Chapters 3 through 10) covers Strategic Form Games: Theory and Practice, while Part Three (Chapters 11 through 18) concentrates on Extensive Form Games: Theory and Applications. In Part Four (Chapters 19 through 24) I discuss Asymmetric Information Games: Theory and Applications. Finally, Part Five (Chapters 25 through 28) consists of chapters on Foundations.
I can suggest two possible syllabi for a one-semester course in game theory and applications. The first stresses the applications end while the second covers all the theoretical topics. In terms of mathematical requirements, the second is, naturally, more demanding and presumes that the students are at a higher level. I have consequently included twenty chapters in the second syllabus and only eighteen in the first. (Note that the numbers are chapter numbers.)
Syllabus 1 (Applications Emphasis)
1. A First Look at the Applications
3. Strategic Form Games and Dominant Strategies
4. Dominance Solvability
5. Nash Equilibrium
6. An Application: Cournot Duopoly
8. Mixed Strategies
9. Two Applications: Natural Monopoly and Bankruptcy Law
11. Extensive Form Games and Backward Induction
12. An Application: Research and Development
13. Subgame Perfect Equilibrium
15. Infinitely Repeated Games
16. An Application: Competition and Collusion in the NASDAQ Stock Market
17. An Application: OPEC
19. Moral Hazard and Incentives Theory
20. Games with Incomplete Information
22. Mechanism Design, the Revelation Principle, and Sales to an Unknown Buyer
23. An Application: Auctions
24. Signaling Games and the Lemons Problem
Syllabus 2 (Theory Emphasis)
2. A First Look at the Theory
27. Utility and Expected Utility
3. Strategic Form Games and Dominant Strategies
4. Dominance Solvability
5. Nash Equilibrium
6. An Application: Cournot Duopoly
7. An Application: The Commons Problem
8. Mixed Strategies
10. Zero-Sum Games
28. Existence of Nash Equilibria
11. Extensive Form Games and Backward Induction
13. Subgame Perfect Equilibrium
14. Finitely Repeated Games
15. Infinitely Repeated Games
17. An Application: OPEC
18. Dynamic Games with an Application to the Commons Problem
20. Games with Incomplete Information
21. An Application: Incomplete Information in a Cournot Duopoly
22. Mechanism Design, the Revelation Principle, and Sales to an Unknown Buyer
23. An Application: Auctions
Prerequisites
I have tried to write the book in a manner such that very little is presumed of a reader's mathematics or economics background. This is not to say that one semester each of calculus and statistics and a semester of intermediate microeconomics will not help. However, students who do not already have this background but are willing to put in extra work should be able to educate themselves sufficiently.
Toward that end, I have included a chapter on calculus and optimization, and one on probability and expectation. Readers can afford not to read the two chapters if they already have the following knowledge. In calculus, I presume knowledge of the slope of a function and a familiarity with slopes of the linear, quadratic, log, and the square-root functions. In optimization theory, I use the first-order characterization of an interior
optimum, that the slope of a maximand is zero at a maximum. As for probability, it helps to know how to take an expectation. As for economic knowledge, I have attempted to explain all relevant terms and have not presumed, for example, any knowledge of Pareto optimality, perfect competition, and monopoly.
Acknowledgments
This book has benefited from the comments and criticisms of many colleagues and friends. Tom Gresik at Penn State, Giorgio di Giorgio at La Sapienza in Rome, Sanjeev Goyal at Erasmus, Matt Kahn at Columbia, Amanda Bayer at Swarthmore, Rob Porter at Northwestern, and Charles Wilson at NYU were foolhardy
enough to have taught from preliminary versions of the text, and I thank them for their courage and comments. In addition, the following reviewers provided very helpful comments:
Amanda Bayer, Swarthmore College
James Dearden, Lehigh University
Tom Gresik, Penn State
Ehud Kalai, Northwestern University
David Levine, UCLA
Michael Meurer, SUNY Buffalo
Yaw Nyarko, NYU
Robert Rosenthal, Boston University
Roberto Serrano, Brown University
Rangarajan Sundaram, NYU
A second group of ten referees provided extremely useful, but anonymous, comments.
My graduate students Satyajit Bose, Tack-Seung Jun, and Tsz-Cheong Lai very carefully read the entire manuscript. Without their hawk-eyed intervention, the book would have many more errors. They are also responsible for the Solutions Manual, which accompanies this text. My colleagues in the community, Venky Bala, Terri Devine, Ananth
Madhavan, Mukul Majumdar, Alon Orlitsky, Roy Radner, John Rust, Paulo Siconolfi, and Raghu Sundaram, provided support, sometimes simply by questioning my sanity in undertaking this project. My brother, Prajjal Dutta, often provided a noneconomist's reality check. Finally, I cannot sufficiently thank my wife, Susan Sobelewski, who provided critical intellectual and emotional support during the writing of this book.
A READER'S GUIDE
Game theory studies strategic situations. Suppose that you are a contestant on the quiz show "Jeopardy!" At the end of the half hour contest (during Final Jeopardy) you have to make a wager on being able to answer correctly a final question (that you have not yet been asked). If you answer correctly, your wager will be added to your winnings up to that point; otherwise, the wager will be subtracted from your total. The two other contestants also make wagers and their final totals are computed in an identical fashion. The catch is that there will be only one winner: the contestant with the maximum amount at the very end will take home his or her winnings while the other two will get (essentially) nothing.
Question: How much should you wager? The easy part of the answer is that the more confident you are in your knowledge, the more you should bet. The difficult part is, how much is enough to beat out your rivals? That clearly depends on how much they wager, that is, what their strategies are. It also depends on how knowledgeable you think they are (after all, like you, they will bet more if they are more knowledgeable, and they are also more likely to add to their total in that case). The right wager may also depend on how much money you have already won, and how much they have won.
For instance, suppose you currently have $10,000 and they have $7,500 each. Then a $5,001 wager, and a correct answer, guarantees you victory. But that wager also guarantees you a loss, if you answer incorrectly, against an opponent who wagers only $2,500. You could have bet nothing and guaranteed victory against the $2,500 opponent (since the rules of "Jeopardy!" allow all contestants to keep their winnings in
the event of a tie). Of course, the zero bet would have been out of luck against an opponent who bet everything and answered correctly. And then there is a third possibility for you: betting everything . . .
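For readers who like to see the arithmetic laid out, here is a small illustrative sketch in Python (it is not part of the original text, and the function and variable names are mine) that tabulates your final total for a few wagers against the rival scenarios just discussed:

```python
def final_total(current, wager, correct):
    """Final Jeopardy total: the wager is added on a correct answer and subtracted otherwise."""
    return current + wager if correct else current - wager

# The scenario above: you hold $10,000, each rival holds $7,500.
you, rival = 10_000, 7_500

# The best any rival can end up with, over the wagers considered in the text.
best_rival = max(final_total(rival, w, c) for w in (2_500, rival) for c in (True, False))

for my_wager in (0, 5_001, 10_000):
    for my_correct in (True, False):
        mine = final_total(you, my_wager, my_correct)
        # A tie lets everyone keep their winnings, so matching the best rival is enough.
        verdict = "cannot be beaten" if mine >= best_rival else "can be beaten"
        print(f"wager {my_wager:>6}, correct={my_correct}: total {mine:>6} ({verdict})")
```

Run as written, it confirms the point of the example: only the $5,001 wager with a correct answer beats every rival scenario considered, while the zero wager is safe only against the cautious $2,500 bettor.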
As you can see the problem appears to be quite complicated. (And keep in mind that I did not even mention additional relevant factors: estimates that you have about answering correctly or about the other contestants answering correctly, that the others may have less than $5,000, that you may have more than $15,000, and so forth.) However, game theory has the answer to this seemingly complicated problem! (And you will read about it in Chapter 20.) The theory provides us with a systematic way to analyze questions such as: What are the options available for each contestant? What are the consequences of various choices? How can we model a contestant's estimate of the others' knowledge? What is a rational wager for a contestant?
In Chapter 1 you will encounter a variety of other examples (from real life, from economics, from politics, from law, and from business) where game theory gives us the tools and the techniques to analyze the strategic issues.
In terms of prerequisites for this book, I have attempted to write a self-contained text. If you have taken one semester each of calculus, statistics, and intermediate microeconomics, you will find life easier. If you do not have the mathematics background,
it is essential that you acquire it. You should start with the two chapters in Part Five, one on calculus and optimization, the other on probability and expectation. Read them carefully and do as many of the exercises as possible. If the chapter on utility theory, also in Part Five, is not going to be covered in class, you should read that carefully as well. As for economic knowledge, if you have not taken an intermediate microeconomics class, it would help for you to pick up one of the many textbooks for that course and read the chapters on perfect competition and monopoly.
I have tried to write each chapter (and each part of the book) in a way that the level of difficulty rises as you read through it. This approach facilitates jumping from topic to topic. If you are reading this book on your own (and not as part of a class) then a good way to proceed is to read the foundational chapters (25 through 27) first and then to read sequentially through each part. At a first reading you may wish to skip the last two chapters within each part, which present more difficult material. Likewise you may wish to skip the last conceptual section or so within each chapter (but don't skip the case studies!). Sections that are more difficult are marked with the symbol ; you may wish to skip those sections as well at first reading (or to read them at a more deliberate pace).
PART ONE INTRODUCTION
Chapter 1 A First Look At The Applications
This chapter is organized in three sections. Section 1.1 will introduce you to some applications of game theory while Section 1.2 will provide a background to its history and principal subject matter. Finally, in Section 1.3, we will discuss in detail three specific games.
1.1 Games That We Play
If game theory were a company, its corporate slogan would be "No man is an island." This is because the focus of game theory is interdependence, situations in which an entire group of people is affected by the choices made by every individual within that group. In such an interlinked situation, the interesting questions include
What will each individual guess about the others' choices?
What action will each person take? (This question is especially intriguing when the best action depends on what the others do.)
What is the outcome of these actions? Is this outcome good for the group as a whole? Does it make any difference if the group interacts more than once?
How do the answers change if each individual is unsure about the characteristics of others in the group?
The content of game theory is a study of these and related questions. A more formal definition of game theory follows; but consider first some examples of interdependence drawn from economics, politics, finance, law, and even our daily lives.
Art auctions (such as the ones at Christie's or Sotheby's where works of art from Braque to Veronese are sold) and Treasury auctions (at which the United States Treasury Department sells U.S. government bonds to finance federal budget expenditures): Chapters 3, 14, and 23, respectively
Voting at the United Nations (for instance, to select a new Secretary General for the organization): Chapter 4
Animal conflicts (over a prized breeding ground, scarce fertile females of the species, etc.): Chapter 5
Sustainable use of natural resources (the pattern of extraction of an exhaustible resource such as oil or a renewable resource such as forestry): Chapters 7 and 18
Random drug testing at sports meets and the workplace (the practice of selecting a few athletes or workers to take a test that identifies the use of banned substances): Chapter 8
Bankruptcy law (which specifies when and how much creditors can collect from a company that has gone bankrupt): Chapter 9
Poison pill provisions (that give management certain latitude in fending off unwelcome suitors looking to take over or merge with their company): Chapter 11
R&D expenditures (for example, by pharmaceutical firms): Chapter 12
Trench warfare in World War I (when armies faced each other for months on end, dug into rival trench-lines on the borders between Germany and France): Chapter 13
OPEC (the oil cartel that controls half of the world's oil production and, hence, has an important say in determining the price that you pay at the pump): Chapter 17
A group project (such as preparing a case study for your game theory class)
Game theory: A formal way to analyze interaction among a group of rational agents who behave strategically.
Game theory is a formal way to consider each of the following items:
group: In any game there is more than one decision-maker; each decision-maker is referred to as a "player."
interaction: What any one individual player does directly affects at least one other player in the group.
strategic: An individual player accounts for this interdependence in deciding what action to take.
rational: While accounting for this interdependence, each player chooses her best action.
Let me now illustrate these four concepts (group, interaction, strategic, and rational) by discussing in detail some of the examples given above.
Examples from Everyday Life
Working on a group project, a case study for the game theory class: The group comprises the students jointly working on the case. Their interaction arises from the fact that a certain amount of work needs to get done in order to write a paper; hence, if one student slacks off, somebody else has to put in extra hours the night before the paper is due. Strategic play involves estimating the likelihood of freeloaders in the group, and rational play requires a careful comparison of the benefits of a better grade against the costs of the extra work.
Random drug testing (at the Olympics): The group is made up of competitive athletes and the International Olympic Committee (IOC). The interaction is both between the athleteswho make decisions on training regimens as well as on whether or not to use drugsand with the IOC, which needs to preserve the reputation of the sport. Rational strategic play requires the athletes to make decisions based on their chances of winning and, if they dope, their chances of getting caught. Similarly, it requires the IOC to determine drug testing procedures and punishments on the basis of testing costs and the value of a clean-whistle reputation.
Examples from Economics and Finance
R&D efforts by pharmaceutical companies: Some estimates suggest that research and development (R&D) expenditures constitute as much as 20% of annual sales of U.S. pharmaceutical companies and that, on average, the development cost of a new drug is about $350 million. Companies are naturally concerned about issues such as which product lines to invest research dollars in, how high to price a new drug, how to reduce the risk associated with a new drug's development, and the like. In this example, the group is the set of drug companies. The interaction arises because the first developer of a drug makes the most profits (thanks to the associated patent). R&D expenditures are strategic and rational if they are chosen to maximize the profits from developing a new drug, given inferences about the competition's commitment to this line of drugs.
Treasury auctions: On a regular basis, the United States Treasury auctions off U.S. government securities.1 The principal bidders are investment banks such as Lehman Brothers or Merrill Lynch (who in turn sell the securities off to their clients). The group is therefore the set of investment banks. (The bidders, in fact, rarely change from auction to auction.) They interact because the other bids determine whether a bidder is allocated any securities and possibly also the price that the bidder pays. Bidding is rational and strategic if bids are based on the likely competition and achieve the right balance between paying too much and the risk of not getting any securities.
Examples from Biology and Law
Animal behavior: One of the more fascinating applications of game theory in the last twenty-five years has been to biology and, in particular, to the analyses of animal conflicts and competition. Animals in the wild typically have to compete for scarce resources (such as fertile females or the carcasses of dead animals); it pays, therefore, to discover such a resource, or to snatch it away from the discoverer. The problem is that doing so can lead to a costly fight. Here the group of "players" is all the animals that have an eye on the same prize(s). They interact because resources are limited. Their choices are strategic if they account for the behavior of competitors, and are rational if they satisfy short-term goals such as satisfying hunger or long-term goals such as the perpetuation of the species.
Bankruptcy law: In the United States once a company declares bankruptcy its assets can no longer be attached by individual creditors but instead are held in safekeeping until such time as the company and its creditors reach some understanding. However, creditors can move the courts to collect payments before the bankruptcy declaration (although by doing so a creditor may force the company into bankruptcy). Here the interaction among the group of creditors arises from the fact that any money that an individual creditor can successfully seize is money that becomes unavailable to everyone else. Strategic play requires an estimation of how patient other creditors are going to be and a rational choice involves a trade-off between collecting early and forcing an unnecessary bankruptcy.
At this point, you may well ask what, then, is not a game? A situation can fail to be a game in either of two cases: the one or the infinity case. By the one case, I mean contexts where your decisions affect no one but yourself. Examples include your choice about whether or not to go jogging, how many movies to see this week, and where to eat dinner. By the infinity case, I mean situations where your decisions do affect others, but there are so many people involved that it is neither feasible nor sensible to keep track of what each one does. For example, if you were to buy some stock in AT&T it is best to imagine that your purchase has left the large body of shareholders in AT&T entirely unaffected. Likewise, if you are the owner of Columbia Bagels in New York City, your decision on the price of onion bagels is unlikely to affect the citywide (not to speak of the nationwide) onion bagel price.
1These securities are Treasury Bonds and Bills, financial instruments that are held by the public (or its representatives, such as mutual funds or pension funds). These securities promise to pay a sum of money after a fixed period of time, say three months, a year, or five years. Additionally, they may also promise to pay a fixed sum of money periodically over the lifetime of the security.
Although many situations can be formalized as a game, this book will not provide you with a menu of answers. It will introduce you to the methodology of games and illustrate that methodology with a variety of examples. However, when faced with a particular strategic setting, you will have to incorporate its unique (informational and other) features in order to come up with the right answer. What this book will teach you is a systematic way to incorporate those features and it will give you a coherent way to analyze the consequent game. Every one of us acts strategically, whether we know it or not. This book is designed to help you become a better strategist.
1.2 Background
The earliest predecessors of game theory are economic analyses of imperfectly competitive markets. The pioneering analyses were those of the French economist Augustin Cournot (in the year 1838)2 and the English economist Francis Edgeworth (1881)3 (with subsequent advances due to Bertrand and Stackelberg). Cournot analyzed an oligopoly problemwhich now goes by the name of the Cournot modeland employed a method of analysis which is a special case of the most widely used solution concept found in modern game theory. We will study the Cournot model in some detail in Chapter 6.
An early breakthrough in more modern times was the study of the game of chess by E. Zermelo in 1913. Zermelo showed that the game of chess always has a solution, in the sense that from any position on the board one of the two players has a winning strategy.4 More importantly, he pioneered a technique for solving a certain class of games that is today called backward induction. We will study this procedure in detail in Chapters 11 and 12.
The seminal works in modern times are a paper by John von Neumann that was published in 1928 and, more importantly, the subsequent book by him and Oskar Morgenstern titled Theory of Games and Economic Behavior (1944). Von Neumann was a multi-faceted man who made seminal contributions to a number of subjects including computer science, statistics, abstract topology, and linear programming. His 1928 paper resolved a long-standing puzzle in game theory.5 Von Neumann got interested in economic problems in part because of the economist Oskar Morgenstern. Their collaboration dates to 1938 when Morgenstern came to Princeton University, where Von Neumann had been a professor at the Institute for Advanced Study since 1933. Von Neumann and Morgenstern started by working on a paper about the connection between economics and game theory and ended with the crown jewel, the Theory of Games and Economic Behavior.
In their book Von Neumann and Morgenstern made three major contributions, in addition to formalizing the concept of a game. First, they gave an axiom-based foundation to utility theory, a theory that explains just what it is that players get from playing a game. (We will discuss this work in Chapter 27.) Second, they thoroughly characterized the optimal solutions to what are called zero-sum games, two-player games in which
2See Cournot's Researches Into the Mathematical Principles of the Theory of Wealth (especially Chapter 7).
3See Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences.
4That, of course, is not the same thing as saying that the player can easily figure out what this winning strategy is! (It is also possible that neither player has a winning strategy but rather that the game will end in a stalemate.)
5The puzzle was whether or not a class of games called zero-sum games (which are defined in the next paragraph) always have a solution. A famous French mathematician, Emile Borel, had conjectured in 1913 that they need not; Von Neumann proved that they must always have a solution.
one player wins if and only if the other loses. Third, they introduced a version of game theory called cooperative games. Although neither of these constructions is used very much in modern game theory, they both played an important role in the development of game theory that followed the publication of their book.6
The next great advance is due to John Nash who, in 1950, introduced the equilibrium (or solution) concept which is the one most widely used in modern game theory. This solution conceptcalled, of course, Nash equilibriumhas been extremely influential; in this book we will meet it for the first time in Chapter 5. Nash's approach advanced game theory from zero-sum to nonzero-sum games (i.e., situations in which both players could win or lose). As mentioned above, Nash's solution concept built on the earlier work of Cournot on oligopolistic markets.7 For all this he was awarded the Nobel Prize for Economics in 1994.
Which brings us to John Harsanyi and Reinhard Selten, who shared the Nobel Prize with John Nash. In two papers dating back to 1965 and 1975, Reinhard Selten generalized the idea of Nash equilibrium to dynamic games, settings where play unfolds sequentially through time.8 In such contexts it is extremely important to consider the future consequences of one's present actions. Of course there can be many possible future consequences and Selten offered a methodology to select among them a "reasonable" forecast for future play. We will study Selten's fundamental idea in Chapter 13 and its applications in Chapters 14 through 18.9
In 1967-1968, Harsanyi generalized Nash's ideas to settings in which players have incomplete information about each other's choices or preferences. Since many economic problems are in fact characterized by such incompleteness of information, Harsanyi's generalization was an important step to take. Incomplete information games will be discussed in Chapter 20 and their applications can be found in Chapters 21 through 24.10
At this point you might be wondering why this subject, which promises to study such weighty matters as the arms race, oligopoly markets, and natural resource usage, goes by the name of something quite as fun-loving as game theory. Part of the reason for this is historical: Game theory is called game theory because parlor games (poker, bridge, chess, backgammon, and so on) were a convenient starting point to think about the deeper conceptual issues regarding interaction, strategy, and rationality, which form the core of the subject. Even as the terminology is not meant to suggest that the issues addressed are light or trivial in any way, it is also hoped that the terminology will turn out to be somewhat appropriate and that you will have fun learning the subject.11
1.3 Examples
To fix ideas, let us now work though three games in some detail.
1. Nim and Marienbad.
These are two parlor games that work as follows. There are two piles of matches and two players. The game starts with player 1 and thereafter the
6In this book we will study zero-sum games in some detail in Chapter 10. We will not, however, look at cooperative game theory.
7John Nash wrote four papers on game theory, two on Nash equilibrium and two more on bargaining theory (and he co-authored three others). Each of the four papers has greatly influenced the further development of the discipline. (If you wish, perhaps at a later point in the course, to read the paper on Nash equilibrium, look for "Equilibrium Points in N-Person Games," 1950, Proceedings of the National Academy of Sciences.) Unfortunately, health problems cut short what would have been a longer and even more spectacular research career.
8The Selten papers are "Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit" (1965), Zeitschrift für die gesamte Staatswissenschaft, and "Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games" (1975), International Journal of Game Theory.
9Many interesting applications of game theory have a sequential, or dynamic, character to them. Put differently, there are few game situations where you are sure that you are never going to encounter any of the other players ever again; as the good game theorist James Bond would say, "Never say never again." We will discuss, in Chapters 15 and 16, games where you think (there is some chance) that you will encounter the same players again, and in an identical context. In Chapters 17 and 18, we will discuss games where you think you will encounter the same players again but possibly in a different context.
players take turns. When it is a player's turn, he can remove any number of matches from either pile. Each player is required to remove some number of matches if either pile has matches remaining, and he can only remove matches from one pile at a time.
In Nim, whichever player removes the last match wins the game. In Marienbad, the player who removes the last match loses the game.
The interesting question for either of these games is whether or not there is a winning strategy, that is, is there a strategy such that if you used it whenever it is your turn to move, you can guarantee that you will win regardless of how play unfolds from that point on?
Analysis of Nim.
Call the two piles balanced if there is an equal number of matches in each pile; and call them unbalanced otherwise. It turns out that if the piles are balanced, player 2 has a winning strategy. Conversely, if the piles are unbalanced, player 1 has a winning strategy.
Let us consider the case where there is exactly one match in each pile; denote this (1,1). It is easy to see that player 2 wins this game. It is not difficult either to see that player 2 also wins if we start with (2,2). For example, if player 1 removes two matches from the first pile, thus moving the game to (0,2), then all player 2 has to do is remove the remaining two matches. On the other hand, if player 1 removes only one match and moves the game, say, to (1,2), then player 2 can counter that by removing a match from the other pile. At that point the game will be at (1,1) and now we know player 2 is going to win.
More generally, suppose that we start with n matches in each pile, n > 2. Notice that player 1 will never want to remove the last match from either pile; that is, he would want to make sure that both piles have matches in them.12 However, in that case, player 2 can ensure that after every one of his plays, there is an equal number of matches in each pile. (How?)13 This means that sooner or later there will be one match in each pile.
If we start with unbalanced piles, player 1 can balance the piles on his first play. Hence, by the above logic, he has a winning strategy. The reason for that is clear: once the piles are balanced, it is as if we are starting afresh with balanced piles but with player 2 going first. However, we know that the first to play loses when the piles are balanced.
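As a concrete illustration of the balancing argument, here is a minimal sketch in Python (not part of the original text; the function name and output labels are mine) that returns the balancing move when one exists:

```python
def nim_winning_move(pile_a, pile_b):
    """A winning move in two-pile Nim: remove enough matches from the larger
    pile to balance the piles. Returns None from a balanced position, where
    the player to move loses against correct play."""
    if pile_a == pile_b:
        return None
    if pile_a > pile_b:
        return ("pile a", pile_a - pile_b)
    return ("pile b", pile_b - pile_a)

# From the unbalanced position (5, 3), player 1 balances to (3, 3) and then
# mirrors every subsequent move by player 2 on the other pile.
print(nim_winning_move(5, 3))   # ('pile a', 2)
print(nim_winning_move(4, 4))   # None
```

The code only reports the first balancing move; thereafter the winning player simply mirrors the opponent, as described above and hinted at in footnote 13.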
CONCEPT CHECK Are there any other winning strategies in this game? What do you think might happen if there are more than two piles? Do all such games, in which players take turns making plays, have winning strategies? (Think of tic-tac-toe.)14
Similar logic can be applied to the analysis of Nim's cousin, Marienbad. Remember, though, in working through the claims below that in Marienbad the last player to remove matches loses the game.
10The original 1967-1968 Harsanyi papers are "Games with Incomplete Information Played by Bayesian Players," Management Science. Do not (as David Letterman would say after a Stupid Human Tricks segment) try them at home, just yet!
11There are several books that I hope you will graduate to once you are finished reading this one. Two that I have found very useful for their theoretical treatments are Game Theory by Drew Fudenberg and Jean Tirole (MIT Press) and An Introduction to Game Theory by Martin Osborne and Ariel Rubinstein (MIT Press). If you want a more advanced treatment of any topic in this book, you could do worse than pick up either of these two texts. A book that is more applications-oriented is Thinking Strategically by Barry Nalebuff and Avinash Dixit (W. W. Norton).
12Else, player 2 can force a win by removing all the matches from the pile which has matches remaining.
13Think of what happens if player 2 simply mimics everything that player 1 does, except with the other pile.
14These three questions have been broken down into further bite-sized pieces in the Exercises section.
CONCEPT CHECK ANALYSIS OF MARIENBAD We claim that: If the two piles are balanced with one match in each pile, player 1 has a winning strategy. On the other hand, if the two piles are balanced, with at least two matches in each pile, player 2 has a winning strategy. Finally, if the two piles are unbalanced, player 1 has a winning strategy. Try proving these claims.15
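If you would rather check these claims mechanically before proving them, the following brute-force sketch (in Python; not part of the original text, and the names are mine) labels each small two-pile position as a win or a loss for the player about to move, under both winning conventions:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(a, b, last_match_wins):
    """True if the player about to move wins from piles (a, b).
    last_match_wins=True is Nim; last_match_wins=False is Marienbad."""
    if a == 0 and b == 0:
        # No matches left: the previous player removed the last one.
        return not last_match_wins
    moves = [(a - k, b) for k in range(1, a + 1)] + \
            [(a, b - k) for k in range(1, b + 1)]
    # The mover wins if some move leaves the opponent in a losing position.
    return any(not mover_wins(x, y, last_match_wins) for x, y in moves)

# Nim: the player who moves first wins exactly when the piles are unbalanced.
assert all(mover_wins(a, b, True) == (a != b)
           for a in range(7) for b in range(7))

# Marienbad: player 1 wins from (1, 1) and from unbalanced piles (both nonempty),
# but loses from balanced piles with at least two matches in each.
assert mover_wins(1, 1, False)
assert all(not mover_wins(n, n, False) for n in range(2, 7))
assert all(mover_wins(a, b, False)
           for a in range(1, 7) for b in range(1, 7) if a != b)
print("All claims verified for piles of up to six matches.")
```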
Note, incidentally, that in both of these games the first player to move (referred to in my discussion as player 1) has an advantage if the piles are unbalanced, but not otherwise.
2. Voting.
This example is an idealized version of committee voting. It is meant to illustrate the advantages of strategic voting, in other words, a manner of voting in which a voter thinks through what the other voters are likely to do rather than voting simply according to his preferences.16
Suppose that there are two competing bills, designated here as A and B, and three legislators, voters 1, 2, and 3, who vote on the passage of these bills. Two outcomes are possible: either A or B gets passed, or the legislators choose to pass neither bill (and stay with the status quo law instead). The voting proceeds as follows: first, bill A is pitted against bill B; the winner of that contest is then pitted against the status quo which, for simplicity, we will call "neither" (or N). In each of the two rounds of voting, the bill for which the majority of voters cast their vote wins. The three legislators have the following preferences among the available options:
voter 1: A ≻ N ≻ B
voter 2: B ≻ A ≻ N
voter 3: N ≻ A ≻ B
(where A ≻ B should be read as "bill A is preferred to bill B")
Analysis.
Note that if the voters voted according to their preferences (i.e., truthfully) then A would win against B and then, in round two, would also win against N. However, voter 3 would be very unhappy with this state of affairs; she most prefers N and can in fact enforce that outcome by simply switching her first-round vote to B, which would then lose to N. Is that the outcome? Well, having started down this road, we might also note that, anticipating this possibility, voter 2 can switch her vote and get A elected (which is preferable to N for this voter).
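This back-and-forth is easy to check by brute force. The sketch below (in Python, not part of the original text; all names are mine) uses the preference orderings listed above and resolves the second round by letting each voter support whichever of the two remaining options she prefers (the next paragraph explains why truthful second-round voting is the natural benchmark):

```python
# Preference orderings, best to worst, for the three voters described above.
prefs = {1: ["A", "N", "B"], 2: ["B", "A", "N"], 3: ["N", "A", "B"]}

def majority(x, y, votes):
    """Return x if at least two of the three votes are for x, otherwise y."""
    return x if sum(v == x for v in votes) >= 2 else y

def outcome(first_round_votes):
    """Round 1 pits A against B; the winner then faces N in round 2, where
    each voter supports whichever of the two options she prefers."""
    w = majority("A", "B", first_round_votes)
    round2 = [w if prefs[i].index(w) < prefs[i].index("N") else "N" for i in (1, 2, 3)]
    return majority(w, "N", round2)

print(outcome(("A", "B", "A")))  # truthful first-round votes: A passes
print(outcome(("A", "B", "B")))  # voter 3 switches to B: N prevails
print(outcome(("A", "A", "B")))  # voter 2 switches to A in response: A passes
```

The three printed lines reproduce the chain of reasoning just described: truthful voting passes A, voter 3's switch delivers N, and voter 2's counter-switch restores A.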
There is a way to proceed more systematically with the strategic analysis. To begin with, notice that in the second round each voter might as well vote truthfully. This is because, by voting for a less preferred option, a legislator could only help pass that option, which would clearly be worse than blocking its passage. Therefore, if A wins in the first round, the eventual outcome will be A, whereas if B wins, the eventual outcome will be N. Every
15Again you may prefer to work step by step through these questions in the Exercises section.