Behavioral Game Theory:
Thinking, Learning, and Teaching
Colin F. Camerer 1
California Institute of Technology
Pasadena, CA 91125
Teck-Hua Ho
Wharton School, University of Pennsylvania
Philadelphia PA 19104
Juin Kuan Chong
National University of Singapore
Kent Ridge Crescent
Singapore 119260
November 14, 2001
1 This research was supported by NSF grants SBR 9730364, SBR 9730187 and SES-0078911. Thanks to many people for helpful comments on this research, particularly Caltech colleagues (especially Richard McKelvey, Tom Palfrey, and Charles Plott), Monica Capra, Vince Crawford, John Duffy, Drew Fudenberg, John Kagel, members of the MacArthur Preferences Network, our research assistants and collaborators Dan Clendenning, Graham Free, David Hsia, Ming Hsu, Hongjai Rhee, and Xin Wang, and seminar audience members too numerous to mention. Dan Levin gave the shooting-ahead military example. Dave Cooper, Ido Erev, and Bill Frechette wrote helpful emails.
1 Introduction
Game theory is a mathematical system for analyzing and predicting how humans behave
in strategic situations. Standard equilibrium analyses assume all players: 1) form beliefs
based on analysis of what others might do (strategic thinking); 2) choose a best response
given those beliefs (optimization); 3) adjust best responses and beliefs until they are
mutually consistent (equilibrium).
It is widely accepted that not every player behaves rationally in complex situations, so assumptions (1) and (2) are sometimes violated. For explaining consumer choices and other decisions, rationality may still be an adequate approximation even if a modest percentage of players violate the theory. But game theory is different. Players' fates are intertwined. The presence of players who do not think strategically or optimize can therefore change what rational players should do. As a result, what a population of players is likely to do when some are not thinking strategically and optimizing can only be predicted by an analysis which uses the tools of (1)-(3) but accounts for bounded rationality as well, preferably in a precise way. 2
It is also unlikely that equilibrium (3) is reached instantaneously in one-shot games.
The idea of instant equilibration is so unnatural that perhaps an equilibrium should not
be thought of as a prediction which is vulnerable to falsification at all. Instead, it should
be thought of as the limiting outcome of an unspecified learning or evolutionary process
that unfolds over time. 3 In this view, equilibrium is the end of the story of how strategic
thinking, optimization, and equilibration (or learning) work, not the beginning (one-shot)
or the middle (equilibration).
This paper has three goals. First we develop an index of bounded rationality which
measures players' steps of thinking and uses one parameter to specify how heterogeneous a
population of players is. Coupled with best response, this index makes a unique prediction
of behavior in any one-shot game. Second, we develop a learning algorithm (called
Functional Experience-Weighted Attraction Learning (fEWA)) to compute the path of
2 Our models are related to important concepts like rationalizability, which weakens the mutual consistency requirement, and the behavior of finite automata. The difference is that we work with simple parametric forms and concentrate on fitting them to data.
3 In his thesis proposing a concept of equilibrium, Nash himself suggested equilibrium might arise from some "mass action" which adapted over time. Taking up Nash's implicit suggestion, later analyses filled in details of where evolutionary dynamics lead (see Weibull, 1995; Mailath, 1998).
equilibration. The algorithm generalizes both fictitious play and reinforcement models and has shown greater empirical predictive power than those models in many games (adjusting for complexity, of course). Consequently, fEWA can serve as an empirical device for finding the behavioral resting point as a function of the initial conditions.
Third, we show how the index of bounded rationality and the learning algorithm can be
used to understand repeated game behaviors such as reputation building and strategic
teaching.
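The first goal can be made concrete with a small sketch. In the spirit of the thinking-steps model described above, suppose the frequency of players using k steps of thinking follows a Poisson distribution with mean tau (the single heterogeneity parameter), step-0 players randomize uniformly, and step-k players best-respond to the normalized mixture of lower-step players. The payoff matrix and the value of tau below are purely illustrative, and the exact specification is our reading of the model, not a verbatim reproduction of the paper's:

```python
import math

def poisson_weights(tau, k_max):
    """Poisson frequencies f(k) = e^-tau * tau^k / k! for k = 0..k_max."""
    return [math.exp(-tau) * tau**k / math.factorial(k) for k in range(k_max + 1)]

def thinking_steps_prediction(payoffs, tau=1.5, k_max=8):
    """Predicted population choice frequencies in a symmetric one-shot game.

    payoffs[i][j]: row player's payoff for strategy i against strategy j.
    Step-0 players randomize; step-k players best-respond to the
    renormalized mixture of steps 0..k-1.
    """
    n = len(payoffs)
    f = poisson_weights(tau, k_max)
    step_choice = [[1.0 / n] * n]  # step 0: uniform randomization
    for k in range(1, k_max + 1):
        total = sum(f[:k])
        # belief about opponents: normalized mixture of lower steps
        belief = [sum(f[h] * step_choice[h][j] for h in range(k)) / total
                  for j in range(n)]
        expected = [sum(payoffs[i][j] * belief[j] for j in range(n))
                    for i in range(n)]
        best = max(range(n), key=lambda i: expected[i])
        step_choice.append([1.0 if i == best else 0.0 for i in range(n)])
    # population prediction: Poisson-weighted mixture over all steps
    norm = sum(f)
    return [sum(f[k] * step_choice[k][i] for k in range(k_max + 1)) / norm
            for i in range(n)]

# Hypothetical 3x3 symmetric game (not a game from the paper)
game = [[0, 10, 4],
        [8, 0, 6],
        [5, 5, 5]]
prediction = thinking_steps_prediction(game, tau=1.5)
```

Because the weighted mixture of steps is a proper probability distribution, the output is a unique, unimodal prediction for any one-shot game, which is exactly the property the index is meant to deliver.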
Our approach is guided by three stylistic principles: precision, generality, and empirical discipline. The first two are standard desiderata in game theory; the third is a cornerstone of experimental economics.
Precision: Because game theory predictions are sharp, it is not hard to spot likely deviations and counterexamples. Until recently, most of the experimental literature consisted of documenting deviations (or successes) and presenting a simple model, usually specialized to the game at hand. The hard part is to distill the deviations into an alternative theory that is as precise as standard theory and can be widely applied. We favor specifications that use one or two free parameters to express crucial elements of behavioral flexibility, because people are different. We also prefer to let data, rather than our intuition, specify parameter values. 4
Generality: Much of the power of equilibrium analyses, and their widespread use, comes from the fact that the same principles can be applied to many different games, using the universal language of mathematics. Widespread use of the language creates a dialogue that sharpens theory and cumulates worldwide know-how. Behavioral models of games are also meant to be general, in the sense that the same models can be applied to many games with minimal customization. The insistence on generality is common in economics, but is not universal. Many researchers in psychology believe that behavior is so context-specific that it is impossible to have a common theory that applies to all contexts. Our view is that we can't know whether general theories fail until they are broadly applied. Showing that customized models of different games fit well does not mean there isn't a general theory waiting to be discovered that is even better.
4 While great triumphs of economic theory come from parameter-free models (e.g., Nash equilibrium), relying on a small number of free parameters is more typical in economic modeling. For example, nothing in the theory of intertemporal choice pins a discount factor δ to a specific value. But if a wide range of phenomena are consistent with a value like .95, then as economists we are comfortable working with such a value despite the fact that it does not emerge from axioms or deeper principles.
 
It is noteworthy that in the search for generality, the models we describe below are typically fit to dozens of different data sets, rather than one or two. The number of subject-periods used when games are pooled is usually several thousand. This doesn't mean the results are conclusive or unshakeable. It just illustrates what we mean by a general model.
Empirical discipline: Our approach is heavily disciplined by data. Because game theory is about people (and groups of people) thinking about what other people and groups will do, it is unlikely that pure logic alone will tell us what will happen. 5 As the physicist Murray Gell-Mann said, "Think how hard physics would be if particles could think." It is even harder if we don't watch what "particles" do when interacting.
Our insistence on empirical discipline is shared by others, past and present. Von Neumann and Morgenstern (1944) thought that

the empirical background of economic science is definitely inadequate...it would have been absurd in physics to expect Kepler and Newton without Tycho Brahe, and there is no reason to hope for an easier development in economics.
Fifty years later Eric Van Damme (1999) thought the same:
Without having a broad set of facts on which to theorize, there is a certain danger of spending too much time on models that are mathematically elegant, yet have little connection to actual behavior. At present our empirical knowledge is inadequate and it is an interesting question why game theorists have not turned more frequently to psychologists for information about the learning and information processes used by humans.
The data we use to inform theory are experimental because game-theoretic predictions are notoriously sensitive to what players know, when they move, and what their payoffs are. Laboratory environments provide crucial control of all these variables (see Crawford, 1997). As in other lab sciences, the idea is to use lab control to sort out which theories
5 As Thomas Schelling (1960, p. 164) wrote, "One cannot, without empirical evidence, deduce what understandings can be perceived in a nonzero-sum game of maneuver any more than one can prove, by purely formal deduction, that a particular joke is bound to be funny."
 
work well and which don't, then later use them to help understand patterns in naturally-occurring data. In this respect, behavioral game theory resembles data-driven fields like labor economics or finance more than analytical game theory. The large body of experimental data accumulated over the last couple of decades (and particularly the last five years; see Camerer, 2002) is a treasure trove which can be used to sort out which simple parametric models fit well.
While the primary goal of behavioral game theory models is to make accurate predictions where equilibrium concepts do not, they can also circumvent two central problems in game theory: refinement and selection. Because we replace the strict best-response (optimization) assumption with stochastic better-response, all possible paths are part of a (statistical) equilibrium. As a result, there is no need to apply subgame perfection or propose belief refinements (to update beliefs after zero-probability events where Bayes' rule is helpless). Furthermore, with plausible parameter values the thinking and learning models often solve the long-standing problem of selecting one of several Nash equilibria, in a statistical sense, because the models make a unimodal statistical prediction rather than predicting multiple modes. Therefore, while the thinking-steps model generalizes the concept of equilibrium, it can also be more precise (in a statistical sense) when equilibrium is imprecise (cf. Lucas, 1986). 6
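The stochastic better-response idea can be illustrated with a logit (softmax) rule, under which higher expected payoffs are chosen more often but every strategy keeps strictly positive probability, so no path is a zero-probability event. The response-sensitivity parameter lam and the payoffs in the example are illustrative, not values estimated in the paper:

```python
import math

def logit_response(expected_payoffs, lam=2.0):
    """Logit stochastic better-response over a list of expected payoffs.

    Higher payoff -> higher choice probability, but every strategy gets
    strictly positive probability, so Bayes' rule never has to condition
    on a zero-probability event.
    """
    m = max(expected_payoffs)  # subtract the max to stabilize the exponentials
    weights = [math.exp(lam * (u - m)) for u in expected_payoffs]
    total = sum(weights)
    return [w / total for w in weights]

# As lam grows, this approaches strict best response;
# lam = 0 gives uniform randomization.
probs = logit_response([1.0, 2.0, 0.5], lam=2.0)
```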
We make three remarks before proceeding. First, while we do believe the thinking, learning, and teaching models in this paper do a good job of explaining some experimental regularities parsimoniously, lots of other models are being actively explored. 7 The models in this paper illustrate what most other models also strive to explain, and how they are
6 Lucas (1986) makes a similar point about macroeconomic models. Rational expectations often yields indeterminacy, whereas adaptive expectations pins down a dynamic path. Lucas writes (p. S421): "The issue involves a question concerning how collections of people behave in a specific situation. Economic theory does not resolve the question...It is hard to see what can advance the discussion short of assembling a collection of people, putting them in the situation of interest, and observing what they do."
7 Quantal response equilibrium (QRE), a statistical generalization of Nash, almost always explains the direction of deviations from Nash and should replace Nash as the static benchmark that other models are routinely compared to (see Goeree and Holt, in press). Stahl and Wilson (1995), Capra (1999), and Goeree and Holt (1999b) have models of limited thinking in one-shot games which are similar to ours. There are many learning models. fEWA generalizes some of them (though reinforcement with payoff variability adjustment is different; see Erev, Bereby-Meyer, and Roth, 1999). Other approaches include rule learning (Stahl, 1996, 2000), and earlier AI tools like genetic algorithms or genetic programming to "breed" rules. Finally, there are no alternative models of strategic teaching that we know of, but this is an important area others should look at.
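To make the relationship between fEWA and the learning models it nests more concrete, here is a minimal sketch of a parametric EWA-style attraction update for one period. The parameter values are illustrative, and fEWA's self-tuning functions (which replace the fixed parameters) are omitted; delta = 1 pushes the rule toward belief learning such as fictitious play, while delta = 0 pushes it toward choice reinforcement:

```python
def ewa_update(attractions, n_prev, chosen, forgone_payoffs,
               phi=0.9, delta=0.5, rho=0.9):
    """One EWA-style attraction update (illustrative parameter values).

    attractions: current attraction A_j for each strategy j
    n_prev: experience weight N(t-1)
    chosen: index of the strategy actually played this period
    forgone_payoffs[j]: payoff strategy j would have earned this period
    phi decays old attractions, rho decays the experience weight, and
    delta weights forgone payoffs relative to the received payoff.
    """
    n_new = rho * n_prev + 1.0
    new_attractions = []
    for j, pay in enumerate(forgone_payoffs):
        # chosen strategy gets full weight on its payoff; others get delta
        weight = delta + (1 - delta) * (1.0 if j == chosen else 0.0)
        new_attractions.append((phi * n_prev * attractions[j] + weight * pay) / n_new)
    return new_attractions, n_new

# One period: player chose strategy 0; payoffs each strategy would have earned
attractions, n = ewa_update([0.0, 0.0], n_prev=1.0, chosen=0,
                            forgone_payoffs=[3.0, 5.0])
```

Attractions are then mapped into choice probabilities (for example by a logit rule), so higher-attraction strategies are played more often as experience accumulates.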