Learning Zero-Sum Simultaneous-Move Markov Games Using Function
Approximation and Correlated Equilibrium
- URL: http://arxiv.org/abs/2002.07066v3
- Date: Tue, 23 Jun 2020 21:09:42 GMT
- Title: Learning Zero-Sum Simultaneous-Move Markov Games Using Function
Approximation and Correlated Equilibrium
- Authors: Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang
- Abstract summary: We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
- Score: 116.56359444619441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop provably efficient reinforcement learning algorithms for
two-player zero-sum finite-horizon Markov games with simultaneous moves. To
incorporate function approximation, we consider a family of Markov games where
the reward function and transition kernel possess a linear structure. Both the
offline and online settings of the problems are considered. In the offline
setting, we control both players and aim to find the Nash Equilibrium by
minimizing the duality gap. In the online setting, we control a single player
playing against an arbitrary opponent and aim to minimize the regret. For both
settings, we propose an optimistic variant of the least-squares minimax value
iteration algorithm. We show that our algorithm is computationally efficient
and provably achieves an $\tilde O(\sqrt{d^3 H^3 T} )$ upper bound on the
duality gap and regret, where $d$ is the linear dimension, $H$ the horizon and
$T$ the total number of timesteps. Our results do not require additional
assumptions on the sampling model.
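(For concreteness, a minimal sketch of what such a linear structure is typically taken to mean, under the standard linear-parameterization reading rather than the paper's exact statement: there is a known feature map $\phi : \mathcal{S} \times \mathcal{A} \times \mathcal{B} \to \mathbb{R}^d$ over states and joint actions such that, at every step $h$, $r_h(s,a,b) = \phi(s,a,b)^\top \theta_h$ and $\mathbb{P}_h(s' \mid s,a,b) = \phi(s,a,b)^\top \mu_h(s')$ for unknown parameters $\theta_h \in \mathbb{R}^d$ and an unknown $d$-dimensional vector of measures $\mu_h$ over states.)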
Our setting requires overcoming several new challenges that are absent in
Markov decision processes or turn-based Markov games. In particular, to achieve
optimism with simultaneous moves, we construct both upper and lower confidence
bounds of the value function, and then compute the optimistic policy by solving
a general-sum matrix game with these bounds as the payoff matrices. As finding
the Nash Equilibrium of a general-sum game is computationally hard, our
algorithm instead solves for a Coarse Correlated Equilibrium (CCE), which can
be obtained efficiently. To our best knowledge, such a CCE-based scheme for
optimism has not appeared in the literature and might be of interest in its own
right.
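To make the CCE step concrete, below is a minimal sketch (illustrative, not the paper's implementation) of how a coarse correlated equilibrium of the induced general-sum matrix game can be found with a single linear program, taking as payoffs an upper confidence bound Q_up for the max-player and a lower confidence bound Q_low for the min-player; the function and variable names, and the use of SciPy's LP solver, are our assumptions.

```python
# Illustrative sketch (not the paper's code) of the CCE subroutine described
# above: given an upper confidence bound Q_up on the stage payoffs for the
# max-player and a lower confidence bound Q_low for the min-player, find a
# joint distribution over action pairs from which neither player benefits by
# deviating to a fixed action.  All names and shapes here are assumptions.
import numpy as np
from scipy.optimize import linprog


def coarse_correlated_eq(Q_up: np.ndarray, Q_low: np.ndarray) -> np.ndarray:
    """Return a CCE (an m x n joint distribution sigma) of the general-sum
    matrix game with payoff Q_up for the max-player and Q_low for the
    min-player."""
    m, n = Q_up.shape
    rows = []

    # Max-player incentive constraints: for every fixed deviation a',
    #   E_sigma[Q_up(a', b)] - E_sigma[Q_up(a, b)] <= 0.
    for a_dev in range(m):
        rows.append((Q_up[a_dev, :][None, :] - Q_up).ravel())

    # Min-player incentive constraints (it minimizes Q_low): for every b',
    #   E_sigma[Q_low(a, b)] - E_sigma[Q_low(a, b')] <= 0.
    for b_dev in range(n):
        rows.append((Q_low - Q_low[:, b_dev][:, None]).ravel())

    A_ub = np.vstack(rows)
    b_ub = np.zeros(A_ub.shape[0])
    A_eq = np.ones((1, m * n))      # entries of sigma sum to one
    b_eq = np.array([1.0])

    # Pure feasibility LP: every feasible point is a CCE, so the objective is zero.
    res = linprog(np.zeros(m * n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x.reshape(m, n)


if __name__ == "__main__":
    # Hypothetical payoff bounds: a point estimate plus/minus an exploration
    # bonus, standing in for the least-squares estimates in the abstract.
    rng = np.random.default_rng(0)
    Q_hat = rng.uniform(size=(3, 4))
    bonus = 0.1 * np.ones((3, 4))
    sigma = coarse_correlated_eq(Q_hat + bonus, Q_hat - bonus)
    print(sigma.round(3), sigma.sum())
```

Because the incentive constraints are linear in the joint distribution, this LP has size polynomial in the number of action pairs, which is what makes the CCE-based relaxation tractable where computing a Nash equilibrium of the same general-sum game would not be; one natural choice is then to have the max-player play the row marginal of sigma, the min-player the column marginal, and to use $E_\sigma[Q_{\mathrm{up}}]$ as the optimistic value.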
Related papers
- Provably Efficient Generalized Lagrangian Policy Optimization for Safe
Multi-Agent Reinforcement Learning [105.7510838453122]
We examine online safe multi-agent reinforcement learning using constrained Markov games.
We develop an upper confidence reinforcement learning algorithm to solve the resulting Lagrangian problem.
Our algorithm updates the minimax decision primal variables via online mirror descent and the dual variable via a projected gradient step.
arXiv Detail & Related papers (2023-05-31T22:09:24Z) - Breaking the Curse of Multiagents in a Large State Space: RL in Markov
Games with Independent Linear Function Approximation [56.715186432566576]
We propose a new model, independent linear Markov game, for reinforcement learning with a large state space and a large number of agents.
We design new algorithms for learning Markov coarse correlated equilibria (CCE) and Markov correlated equilibria (CE) with sample complexity bounds that scale only with each agent's own function class complexity.
Our algorithms rely on two key technical innovations: (1) utilizing policy replay to tackle non-stationarity incurred by multiple agents and the use of function approximation; and (2) separating learning Markov equilibria and exploration in the Markov games.
arXiv Detail & Related papers (2023-02-07T18:47:48Z) - Learning Two-Player Mixture Markov Games: Kernel Function Approximation
and Correlated Equilibrium [157.0902680672422]
We consider learning Nash equilibria in two-player zero-sum Markov Games with nonlinear function approximation.
We propose a novel online learning algorithm to find a Nash equilibrium by minimizing the duality gap.
arXiv Detail & Related papers (2022-08-10T14:21:54Z) - Towards General Function Approximation in Zero-Sum Markov Games [126.58493169301012]
This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves.
Provably efficient algorithms for both decoupled and coordinated settings are developed.
arXiv Detail & Related papers (2021-07-30T15:25:13Z) - Almost Optimal Algorithms for Two-player Markov Games with Linear
Function Approximation [92.99933928528797]
We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves.
We propose an algorithm, Nash-UCRL-VTR, based on the principle of "Optimism-in-Face-of-Uncertainty".
We show that Nash-UCRL-VTR can provably achieve an $\tilde O(dH\sqrt{T})$ regret, where $d$ is the linear function dimension.
arXiv Detail & Related papers (2021-02-15T09:09:16Z)