Almost Optimal Algorithms for Two-player Markov Games with Linear
Function Approximation
- URL: http://arxiv.org/abs/2102.07404v1
- Date: Mon, 15 Feb 2021 09:09:16 GMT
- Title: Almost Optimal Algorithms for Two-player Markov Games with Linear
Function Approximation
- Authors: Zixiang Chen and Dongruo Zhou and Quanquan Gu
- Abstract summary: We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves.
We propose an algorithm, Nash-UCRL-VTR, based on the principle of "Optimism-in-Face-of-Uncertainty".
We show that Nash-UCRL-VTR can provably achieve an $\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the linear function dimension.
- Score: 92.99933928528797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study reinforcement learning for two-player zero-sum Markov games with
simultaneous moves in the finite-horizon setting, where the transition kernel
of the underlying Markov games can be parameterized by a linear function over
the current state, both players' actions and the next state. In particular, we
assume that we can control both players and aim to find the Nash Equilibrium by
minimizing the duality gap. We propose an algorithm, Nash-UCRL-VTR, based on the
principle of "Optimism-in-Face-of-Uncertainty". Our algorithm only needs to find a
Coarse Correlated Equilibrium (CCE), which can be computed efficiently.
Specifically, we show that Nash-UCRL-VTR can provably achieve an
$\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the linear function dimension, $H$
is the length of the game and $T$ is the total number of steps in the game. To
assess the optimality of our algorithm, we also prove an
$\tilde{\Omega}(dH\sqrt{T})$ lower bound on the regret. Our upper bound matches the lower bound
up to logarithmic factors, which suggests the optimality of our algorithm.
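As an illustration of the setup described above, the following is a hedged sketch, in our own notation, of the linear parameterization of the transition kernel and of the duality-gap regret that the $\tilde{O}(dH\sqrt{T})$ bound refers to; the paper's exact definitions may differ in minor details.

```latex
% Sketch under the linear-mixture assumption suggested by the abstract:
% \phi is a known d-dimensional feature map over (current state, both
% players' actions, next state) and \theta_h^* is an unknown parameter.
\[
  \mathbb{P}_h(s' \mid s, a, b) \;=\; \big\langle \phi(s' \mid s, a, b),\, \theta_h^* \big\rangle,
  \qquad \theta_h^* \in \mathbb{R}^d .
\]
% Duality-gap regret over K episodes of horizon H (so T = KH), where
% \mu^k, \nu^k are the two players' policies in episode k and br(.)
% denotes a best response:
\[
  \mathrm{Regret}(K) \;=\; \sum_{k=1}^{K}
  \Big( V_1^{\mathrm{br}(\nu^k),\,\nu^k}(s_1^k) - V_1^{\mu^k,\,\mathrm{br}(\mu^k)}(s_1^k) \Big)
  \;\le\; \tilde{O}\big(dH\sqrt{T}\big).
\]
```

Since the abstract stresses that the algorithm only needs a Coarse Correlated Equilibrium rather than a Nash equilibrium at each step, the sketch below shows how a CCE of a single zero-sum matrix game can be computed with a feasibility linear program. This is only a minimal illustration of what a CCE subroutine involves, not the authors' implementation; the function name `solve_cce` and the use of SciPy's LP solver are our own assumptions.

```python
# Minimal sketch: compute a Coarse Correlated Equilibrium (CCE) of a
# zero-sum matrix game G (row player maximizes, column player minimizes)
# as a feasibility linear program over joint action distributions.
import numpy as np
from scipy.optimize import linprog


def solve_cce(G: np.ndarray) -> np.ndarray:
    """Return a joint distribution pi over (row, col) actions that is a CCE of G."""
    A, B = G.shape
    n = A * B  # one variable per joint action, flattened row-major

    rows = []
    # Row player (maximizer): no fixed deviation a' beats following pi:
    #   sum_{a,b} pi(a,b) * (G[a',b] - G[a,b]) <= 0   for every a'.
    for a_dev in range(A):
        coef = np.array([[G[a_dev, b] - G[a, b] for b in range(B)] for a in range(A)])
        rows.append(coef.ravel())
    # Column player (minimizer): no fixed deviation b' beats following pi:
    #   sum_{a,b} pi(a,b) * (G[a,b] - G[a,b']) <= 0   for every b'.
    for b_dev in range(B):
        coef = np.array([[G[a, b] - G[a, b_dev] for b in range(B)] for a in range(A)])
        rows.append(coef.ravel())

    A_ub = np.vstack(rows)
    b_ub = np.zeros(A_ub.shape[0])
    A_eq = np.ones((1, n))           # probabilities sum to one
    b_eq = np.array([1.0])
    c = np.zeros(n)                  # pure feasibility problem

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    if not res.success:
        raise RuntimeError("LP solver failed to find a CCE")
    return res.x.reshape(A, B)


if __name__ == "__main__":
    # Matching pennies: any CCE of this game has uniform marginals.
    G = np.array([[1.0, -1.0], [-1.0, 1.0]])
    pi = solve_cce(G)
    print("joint distribution:\n", np.round(pi, 3))
    print("row marginal:", np.round(pi.sum(axis=1), 3))
    print("col marginal:", np.round(pi.sum(axis=0), 3))
```

For two-player zero-sum games the marginals of any CCE form a Nash equilibrium, which is why a linear program of this kind, rather than a harder equilibrium computation, can serve as the per-state subroutine.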
Related papers
- Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a
Generative Model [50.38446482252857]
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy using $\widetilde{O}(\cdot)$ samples.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z) - Learning Two-Player Mixture Markov Games: Kernel Function Approximation
and Correlated Equilibrium [157.0902680672422]
We consider learning Nash equilibria in two-player zero-sum Markov Games with nonlinear function approximation.
We propose a novel online learning algorithm to find a Nash equilibrium by minimizing the duality gap.
arXiv Detail & Related papers (2022-08-10T14:21:54Z) - Towards General Function Approximation in Zero-Sum Markov Games [126.58493169301012]
This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves.
Provably efficient algorithms for both decoupled and coordinated settings are developed.
arXiv Detail & Related papers (2021-07-30T15:25:13Z) - Near-Optimal Reinforcement Learning with Self-Play [50.29853537456737]
We focus on self-play algorithms, which learn the optimal policy by playing against themselves without any direct supervision.
We propose an optimistic variant of the Nash Q-learning algorithm with sample complexity $\tilde{\mathcal{O}}(SAB)$, and a new Nash V-learning algorithm with sample complexity $\tilde{\mathcal{O}}(S(A+B))$.
arXiv Detail & Related papers (2020-06-22T05:00:13Z) - Learning Zero-Sum Simultaneous-Move Markov Games Using Function
Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)