|Title:||Adaptive multiagent reinforcement learning with non-positive regret|
|Citation:||Proceedings of the 29th Australasian Joint Conference on Artificial Intelligence, 2016 / Kang, B., Bai, Q. (ed./s), vol.9992 LNAI, pp.29-41|
|Series/Report no.:||Lecture notes in computer science|
|Conference Name:||29th Australasian Joint Conference on Artificial Intelligence (AI) (05 Dec 2016 - 08 Dec 2016 : Hobart, Tas)|
|Author:||Duong D. Nguyen, Langford B. White, and Hung X. Nguyen|
|Abstract:||We propose a novel adaptive reinforcement learning (RL) procedure for multiagent non-cooperative repeated games. Most existing regret-based algorithms use only positive regrets when updating their learning rules. In this paper, we adopt both positive and negative regrets in reinforcement learning to improve its convergence behaviour. We prove theoretically that the empirical distribution of the joint play converges to the set of correlated equilibria. Simulation results demonstrate that our proposed procedure outperforms the standard regret-based RL approach and a well-known state-of-the-art RL scheme in the literature in terms of both computational requirements and system fairness. Further experiments demonstrate that the performance of our solution is robust to variations in the total number of agents in the system, and that it achieves markedly better fairness than other relevant methods, especially in large-scale multiagent systems.|
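For context, the baseline the abstract refers to is classic regret matching (Hart and Mas-Colell), in which only the positive part of each cumulative regret drives the next mixed strategy; the paper's contribution is a modified rule that also exploits negative regrets. The paper's exact update is not reproduced here, but a minimal sketch of the standard positive-regret rule it builds on might look like this (`regret_matching_probs` is an illustrative name, not from the paper):

```python
# Sketch of classic regret matching: cumulative regrets -> mixed strategy.
# The paper modifies this rule to also use negative regrets; this sketch
# shows only the standard positive-part baseline it improves upon.

def regret_matching_probs(regrets):
    """Map a list of cumulative regrets (one per action) to action probabilities."""
    # Keep only the positive part of each regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total > 0.0:
        # Play each action proportionally to its positive regret.
        return [p / total for p in pos]
    # All regrets non-positive: fall back to the uniform strategy.
    n = len(regrets)
    return [1.0 / n] * n
```

For example, with cumulative regrets `[2.0, -1.0, 2.0]` the rule ignores the negative entry entirely and plays actions 0 and 2 with probability 0.5 each; discarding that negative information is precisely what the proposed procedure avoids.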
|Keywords:||Multiagent systems; Reinforcement Learning; Game theory; Correlated equilibrium; No regret|
|Rights:||© Springer International Publishing AG 2016|
|Appears in Collections:||Electrical and Electronic Engineering publications|