
Learning to signal: analysis of a micro-level reinforcement model

Argiento, Raffaele; Pemantle, Robin; Skyrms, Brian; Volkov, Stanislav (2009) In Stochastic Processes and their Applications 119(2). p. 373-390
Abstract
We consider the following signaling game. Nature plays first from the set {1, 2}. Player 1 (the Sender) sees this and plays from the set {A, B}. Player 2 (the Receiver) sees only Player 1’s play and plays from the set {1, 2}. Both players win if Player 2’s play equals Nature’s play and lose otherwise. Players are told whether they have won or lost, and the game is repeated. An urn scheme for learning coordination in this game is as follows. Each node of the decision tree for Players 1 and 2 contains an urn with balls of two colors for the two possible decisions. Players make decisions by drawing from the appropriate urns. After a win, each ball that was drawn is reinforced by adding another of the same color to the urn. A number of equilibria are possible for this game other than the optimal ones. However, we show that the urn scheme achieves asymptotically optimal coordination.
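The urn scheme described in the abstract can be sketched as a short simulation. This is an illustrative sketch only, not the authors' code; the function and variable names (`simulate`, `sender`, `receiver`, the round count, and the trailing-window success measure) are my own choices.

```python
import random

def simulate(rounds=20000, seed=0):
    """Urn reinforcement for the 2x2 signaling game in the abstract.

    sender[n][s]   = balls for signal s in the Sender's urn seen in state n
    receiver[s][a] = balls for act a in the Receiver's urn seen on signal s
    Each urn starts with one ball of each color; after a win, each ball
    that was drawn this round is duplicated in its urn.
    """
    rng = random.Random(seed)
    sender = {n: {"A": 1, "B": 1} for n in (1, 2)}
    receiver = {s: {1: 1, 2: 1} for s in ("A", "B")}

    def draw(urn):
        # Draw a color with probability proportional to its ball count.
        colors, weights = zip(*urn.items())
        return rng.choices(colors, weights=weights)[0]

    wins = []
    for _ in range(rounds):
        nature = rng.choice((1, 2))          # Nature plays 1 or 2
        signal = draw(sender[nature])        # Sender maps state -> signal
        act = draw(receiver[signal])         # Receiver maps signal -> act
        if act == nature:                    # both win: reinforce draws
            sender[nature][signal] += 1
            receiver[signal][act] += 1
        wins.append(act == nature)

    # Success rate over the last 1000 rounds as a rough convergence check.
    return sum(wins[-1000:]) / 1000

print(simulate())
```

In a typical run the success rate climbs well above the 1/2 achieved by pooling (uninformative) play, consistent with the paper's result that coordination becomes asymptotically optimal; a single finite run, of course, only illustrates the tendency rather than proving it.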
author
Argiento, Raffaele; Pemantle, Robin; Skyrms, Brian; Volkov, Stanislav
publishing date
2009
type
Contribution to journal
publication status
published
keywords
Urn model, Stochastic approximation, Evolution, game, Probability, Stable, Unstable, Two-player game
in
Stochastic Processes and their Applications
volume
119
issue
2
pages
373 - 390
publisher
Elsevier
external identifiers
  • scopus:58549119564
ISSN
1879-209X
DOI
10.1016/j.spa.2008.02.014
language
English
LU publication?
no
id
df749654-38a8-419b-a866-4e8cb7d6ceda (old id 4588099)
date added to LUP
2014-08-18 15:45:16
date last changed
2017-12-10 04:11:35
@article{df749654-38a8-419b-a866-4e8cb7d6ceda,
  abstract     = {We consider the following signaling game. Nature plays first from the set {1, 2}. Player 1 (the Sender) sees this and plays from the set {A, B}. Player 2 (the Receiver) sees only Player 1’s play and plays from the set {1, 2}. Both players win if Player 2’s play equals Nature’s play and lose otherwise. Players are told whether they have won or lost, and the game is repeated. An urn scheme for learning coordination in this game is as follows. Each node of the decision tree for Players 1 and 2 contains an urn with balls of two colors for the two possible decisions. Players make decisions by drawing from the appropriate urns. After a win, each ball that was drawn is reinforced by adding another of the same color to the urn. A number of equilibria are possible for this game other than the optimal ones. However, we show that the urn scheme achieves asymptotically optimal coordination.},
  author       = {Argiento, Raffaele and Pemantle, Robin and Skyrms, Brian and Volkov, Stanislav},
  issn         = {1879-209X},
  keywords     = {Urn model, Stochastic approximation, Evolution, game, Probability, Stable, Unstable, Two-player game},
  language     = {eng},
  number       = {2},
  pages        = {373--390},
  publisher    = {Elsevier},
  series       = {Stochastic Processes and their Applications},
  title        = {Learning to signal: analysis of a micro-level reinforcement model},
  url          = {http://dx.doi.org/10.1016/j.spa.2008.02.014},
  volume       = {119},
  year         = {2009},
}