Perturbed Learning Automata in Potential Games
(2011) 50th IEEE Conference on Decision and Control and European Control Conference, 2011
- abstract
- This paper presents a reinforcement learning algorithm and provides conditions for global convergence to Nash equilibria. For several reinforcement learning schemes, including the ones proposed here, excluding convergence to action profiles which are not Nash equilibria may not be trivial, unless the step-size sequence is appropriately tailored to the specifics of the game. In this paper, we sidestep these issues by introducing a new class of reinforcement learning schemes where the strategy of each agent is perturbed by a state-dependent perturbation function. Contrary to prior work on equilibrium selection in games, where perturbation functions are globally state dependent, the perturbation function here is assumed to be local, i.e., it only depends on the strategy of each agent. We provide conditions under which the strategies of the agents will converge to an arbitrarily small neighborhood of the set of Nash equilibria almost surely. We further specialize the results to a class of potential games.
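The scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the state-dependent local perturbation function is simplified here to a constant perturbation rate, the strategy update is a standard linear reward-inaction rule, and the game is a hypothetical 2x2 coordination game (a simple potential game in which both agents receive payoff 1 on a match and 0 otherwise).

```python
import random

# Hypothetical 2x2 coordination game (a potential game): both agents
# receive payoff 1 when they choose the same action, 0 otherwise.
def payoff(a, b):
    return 1.0 if a == b else 0.0

def perturbed_choice(x, lam=0.05):
    """Sample an action from strategy x, perturbed toward uniform play.

    The perturbation is "local" in the sense of the paper: it depends
    only on this agent's own strategy. Here we simplify the paper's
    state-dependent function to a constant mixing rate lam.
    """
    if random.random() < lam:
        return random.randrange(len(x))
    r, acc = random.random(), 0.0
    for a, p in enumerate(x):
        acc += p
        if r < acc:
            return a
    return len(x) - 1

def update(x, a, u, step=0.05):
    """Linear reward-inaction update: move x toward the unit vector
    e_a in proportion to the received utility u (no move when u = 0)."""
    return [p + step * u * ((1.0 if i == a else 0.0) - p)
            for i, p in enumerate(x)]

random.seed(0)
x1, x2 = [0.5, 0.5], [0.5, 0.5]   # initial mixed strategies
for _ in range(5000):
    a1, a2 = perturbed_choice(x1), perturbed_choice(x2)
    u = payoff(a1, a2)             # common payoff in this potential game
    x1, x2 = update(x1, a1, u), update(x2, a2, u)

# Both strategies should concentrate near the same pure Nash equilibrium.
print(max(x1), max(x2))
```

Because updates only occur on matched plays, the two strategies reinforce the same action and concentrate near one of the game's pure equilibria, while the perturbation keeps exploration alive; this mirrors, in a toy setting, the convergence-to-a-neighborhood behavior the paper establishes.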
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/2204390
- author
- Chasparis, Georgios (LU); Shamma, Jeff S.; and Rantzer, Anders (LU)
- organization
- publishing date
- 2011
- type
- Contribution to conference
- publication status
- in press
- subject
- conference name
- 50th IEEE Conference on Decision and Control and European Control Conference, 2011
- conference location
- Orlando, Florida, United States
- conference dates
- 2011-12-12 - 2011-12-15
- language
- English
- LU publication?
- yes
- id
- 73ab2c8d-a6f0-4f76-bbb1-561148018ee4 (old id 2204390)
- date added to LUP
- 2016-04-04 13:51:26
- date last changed
- 2019-04-30 20:59:09
@misc{73ab2c8d-a6f0-4f76-bbb1-561148018ee4,
  abstract = {{This paper presents a reinforcement learning algorithm and provides conditions for global convergence to Nash equilibria. For several reinforcement learning schemes, including the ones proposed here, excluding convergence to action profiles which are not Nash equilibria may not be trivial, unless the step-size sequence is appropriately tailored to the specifics of the game. In this paper, we sidestep these issues by introducing a new class of reinforcement learning schemes where the strategy of each agent is perturbed by a state-dependent perturbation function. Contrary to prior work on equilibrium selection in games, where perturbation functions are globally state dependent, the perturbation function here is assumed to be local, i.e., it only depends on the strategy of each agent. We provide conditions under which the strategies of the agents will converge to an arbitrarily small neighborhood of the set of Nash equilibria almost surely. We further specialize the results to a class of potential games.}},
  author   = {{Chasparis, Georgios and Shamma, Jeff S. and Rantzer, Anders}},
  language = {{eng}},
  title    = {{Perturbed Learning Automata in Potential Games}},
  url      = {{https://lup.lub.lu.se/search/files/6221639/8084016.pdf}},
  year     = {{2011}},
}