Abstract
This paper investigates simultaneous learning about both nature and other players' actions in repeated games, and identifies a set of sufficient conditions under which Harsanyi's doctrine holds. Players have utility functions over infinite histories that are continuous with respect to the sup-norm topology. Nature's draw after any history may depend on past actions. Provided that (1) every player maximizes her expected payoff against her own beliefs, (2) every player updates her beliefs in a Bayesian manner, (3) prior beliefs about both nature and other players' strategies have a grain of truth, and (4) beliefs about nature are independent of the actions chosen during the game, we construct a Nash equilibrium that is realization-equivalent to the actual play and in which Harsanyi's doctrine holds. These assumptions are shown to be tight.

1. Introduction

Consider a finite number of agents interacting simultaneously. Each agent may play infinitely many times, and her payoff depends on the joint choice of actions as well as on events beyond the agents' control (called choices of nature). To analyze such interactions, it is commonly assumed that the players share a common prior about the probability distribution of nature. Such an approach is known as Harsanyi's doctrine, introduced in Harsanyi [1]. We provide a learning foundation for this doctrine.

We consider a class of games where nature's choices may (or may not) depend on past actions by the players, and where payoff functions are continuous with respect to the sup-norm topology over the set of infinite histories. Provided that Bayesian players have a grain of truth, we show that the resulting outcomes converge, in the sup-norm topology, to a Nash equilibrium that we construct, in which Harsanyi's doctrine holds.

Kalai and Lehrer [2, 3] consider a similar type of learning model, without choices of nature and with a weaker notion of convergence that significantly restricts their class of games. More precisely, these references rely on a structure that is not a topology, in sharp contrast with the general sup-norm topology. The notion in Kalai and Lehrer states that, for any two probability measures $\mu$ and $\tilde{\mu}$ and for some small real $\varepsilon > 0$, the measure $\mu$ is $\varepsilon$-close to $\tilde{\mu}$ if two conditions hold. There must exist a measurable set $Q$ that is assigned a measure of at least $1-\varepsilon$ both by $\mu$ and by $\tilde{\mu}$, and for any measurable set $A \subseteq Q$ it must be true that
$$(1-\varepsilon)\,\tilde{\mu}(A) \leq \mu(A) \leq (1+\varepsilon)\,\tilde{\mu}(A).$$
A given strategy profile $f$ with associated probability measure $\mu_f$ is then said to play $\varepsilon$-like another strategy profile $g$ with associated probability measure $\mu_g$ if $\mu_f$ is $\varepsilon$-close to $\mu_g$. The main problem with this concept is the lack of symmetry; that is, if $\mu$ is