Provably Efficient Reinforcement Learning in Decentralized General-Sum
Markov Games
- URL: http://arxiv.org/abs/2110.05682v1
- Date: Tue, 12 Oct 2021 02:01:22 GMT
- Title: Provably Efficient Reinforcement Learning in Decentralized General-Sum
Markov Games
- Authors: Weichao Mao, Tamer Başar
- Abstract summary: This paper addresses the problem of learning an equilibrium efficiently in general-sum Markov games.
We propose an algorithm in which each agent independently runs optimistic V-learning to efficiently explore the unknown environment.
We show that the agents can find an $\epsilon$-approximate CCE in at most $\widetilde{O}(H^6 S A/\epsilon^2)$ episodes.
- Score: 5.205867750232226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of learning an equilibrium efficiently in
general-sum Markov games through decentralized multi-agent reinforcement
learning. Given the fundamental difficulty of calculating a Nash equilibrium
(NE), we instead aim at finding a coarse correlated equilibrium (CCE), a
solution concept that generalizes NE by allowing possible correlations among
the agents' strategies. We propose an algorithm in which each agent
independently runs optimistic V-learning (a variant of Q-learning) to
efficiently explore the unknown environment, while using a stabilized online
mirror descent (OMD) subroutine for policy updates. We show that the agents can
find an $\epsilon$-approximate CCE in at most $\widetilde{O}( H^6S A
/\epsilon^2)$ episodes, where $S$ is the number of states, $A$ is the size of
the largest individual action space, and $H$ is the length of an episode. This
appears to be the first sample complexity result for learning in generic
general-sum Markov games. Our results rely on a novel investigation of an
anytime high-probability regret bound for OMD with a dynamic learning rate and
weighted regret, which would be of independent interest. One key feature of our
algorithm is that it is fully \emph{decentralized}, in the sense that each
agent has access to only its local information, and is completely oblivious to
the presence of others. This way, our algorithm can readily scale up to an
arbitrary number of agents, without suffering from the exponential dependence
on the number of agents.
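As a concrete illustration of the approach described above, here is a minimal sketch of a single agent's loop: optimistic V-learning for exploration combined with an entropy-regularized online mirror descent (exponentiated-gradient) policy update. The `env.reset()`/`env.step()` interface, the bonus constant `c`, the learning-rate schedule, and the importance-weighted loss estimate are illustrative assumptions for a tabular episodic setting, not the paper's exact stabilized-OMD construction or constants.

```python
import numpy as np

def run_agent(env, H, S, A, K, c=1.0, delta=0.01):
    """One agent's loop: optimistic V-learning with an OMD policy-update subroutine."""
    V = np.zeros((H + 1, S))
    V[:H] = H                               # optimistic initialization; V[H, :] = 0
    N = np.zeros((H, S), dtype=int)         # visit counts per (step, state)
    pi = np.ones((H, S, A)) / A             # this agent's local policy
    iota = np.log(S * A * H * K / delta)    # log factor inside the exploration bonus

    for _ in range(K):                      # K episodes
        s = env.reset()                     # integer state index (assumed interface)
        for h in range(H):
            a = np.random.choice(A, p=pi[h, s])
            r, s_next = env.step(a)         # only the local reward and state are seen
            N[h, s] += 1
            t = N[h, s]
            alpha = (H + 1) / (H + t)       # V-learning step size
            bonus = c * np.sqrt(H**3 * iota / t)
            V[h, s] = min(H, (1 - alpha) * V[h, s]
                             + alpha * (r + V[h + 1, s_next] + bonus))

            # Entropy-regularized OMD (exponentiated gradient) with a dynamic
            # learning rate, fed an importance-weighted loss for the played action.
            eta = np.sqrt(np.log(A) / (A * t))
            loss = np.clip((H - r - V[h + 1, s_next]) / H, 0.0, 1.0)
            loss_hat = np.zeros(A)
            loss_hat[a] = loss / (pi[h, s, a] + 1e-12)
            w = pi[h, s] * np.exp(-eta * loss_hat)
            pi[h, s] = w / w.sum()
            s = s_next
    return V, pi
```

Each agent runs this loop independently on its own observed rewards and states, oblivious to the other agents; this is what lets the method scale to an arbitrary number of agents without an exponential dependence on their joint action space.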
Related papers
- A Black-box Approach for Non-stationary Multi-agent Reinforcement Learning [53.83345471268163]
We investigate learning the equilibria in non-stationary multi-agent systems.
We show how to test for various types of equilibria by a black-box reduction to single-agent learning.
arXiv Detail & Related papers (2023-06-12T23:48:24Z) - Stochastic Principal-Agent Problems: Efficient Computation and Learning [25.637633553882985]
A principal and an agent interact in a stochastic environment, each privy to observations about the state not available to the other.
The model encompasses as special cases extensive-form games (EFGs) and games played on partially observable Markov decision processes (POMDPs).
We show an efficient algorithm for an episodic reinforcement learning setting where transition probabilities are unknown.
arXiv Detail & Related papers (2023-06-06T16:20:44Z) - Breaking the Curse of Multiagents in a Large State Space: RL in Markov
Games with Independent Linear Function Approximation [56.715186432566576]
We propose a new model, independent linear Markov game, for reinforcement learning with a large state space and a large number of agents.
We design new algorithms for learning Markov coarse correlated equilibria (CCE) and Markov correlated equilibria (CE) with sample complexity bounds that scale only with each agent's own function class complexity.
Our algorithms rely on two key technical innovations: (1) utilizing policy replay to tackle the non-stationarity incurred by multiple agents and the use of function approximation; and (2) separating the learning of Markov equilibria from exploration in the Markov games.
arXiv Detail & Related papers (2023-02-07T18:47:48Z) - Decentralized model-free reinforcement learning in stochastic games with
average-reward objective [1.9852463786440127]
We show that our algorithm achieves both a sublinear high-probability regret of order $T^{3/4}$ and a sublinear expected regret of order $T^{2/3}$.
Our algorithm enjoys a low computational complexity and low memory space requirement compared to the previous works.
arXiv Detail & Related papers (2023-01-13T15:59:53Z) - Representation Learning for General-sum Low-rank Markov Games [63.119870889883224]
We study multi-agent general-sum Markov games with nonlinear function approximation.
We focus on low-rank Markov games whose transition matrix admits a hidden low-rank structure on top of an unknown non-linear representation.
arXiv Detail & Related papers (2022-10-30T22:58:22Z) - Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a
Generative Model [50.38446482252857]
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy using a number of samples that is minimax-optimal up to logarithmic factors.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z) - Decentralized Optimistic Hyperpolicy Mirror Descent: Provably No-Regret
Learning in Markov Games [95.10091348976779]
We study decentralized policy learning in Markov games where we control a single agent to play with nonstationary and possibly adversarial opponents.
We propose a new algorithm, Decentralized Optimistic hypeRpolicy mIrror deScent (DORIS).
DORIS achieves $\sqrt{K}$-regret in the context of general function approximation, where $K$ is the number of episodes.
arXiv Detail & Related papers (2022-06-03T14:18:05Z) - V-Learning -- A Simple, Efficient, Decentralized Algorithm for
Multiagent RL [35.304241088947116]
V-learning is a new class of single-agent RL algorithms that convert any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm; a minimal sketch of this template appears after this list.
Unlike Q-learning, it maintains only estimates of V-values instead of Q-values.
arXiv Detail & Related papers (2021-10-27T16:25:55Z) - Decentralized Cooperative Multi-Agent Reinforcement Learning with
Exploration [35.75029940279768]
We study multi-agent reinforcement learning in the most basic cooperative setting -- Markov teams.
We propose an algorithm in which each agent independently runs a stage-based V-learning style algorithm.
We show that the agents can learn an $\epsilon$-approximate Nash equilibrium policy in at most $\widetilde{O}(1/\epsilon^4)$ episodes.
arXiv Detail & Related papers (2021-10-12T02:45:12Z) - Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal
Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||A||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
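The V-Learning entry above describes a template that converts any no-regret adversarial bandit algorithm into an RL algorithm while storing only V-values. Below is a minimal sketch of that plug-in structure, assuming a hypothetical bandit interface with `sample()` and `update(action, loss)`; the Exp3-style subroutine and the bonus term are illustrative placeholders rather than the exact choices analyzed in that paper.

```python
import numpy as np

class Exp3:
    """Illustrative adversarial bandit over A arms (placeholder subroutine)."""
    def __init__(self, A, eta=0.1):
        self.p = np.ones(A) / A
        self.eta = eta

    def sample(self):
        return np.random.choice(len(self.p), p=self.p)

    def update(self, a, loss):
        loss_hat = np.zeros(len(self.p))
        loss_hat[a] = loss / (self.p[a] + 1e-12)    # importance-weighted loss
        w = self.p * np.exp(-self.eta * loss_hat)
        self.p = w / w.sum()

def v_learning(env, H, S, A, K, make_bandit=Exp3, c=1.0):
    """Generic V-learning: only V-values are stored; the bandit is pluggable."""
    V = np.zeros((H + 1, S))
    V[:H] = H                                       # optimistic initialization
    N = np.zeros((H, S), dtype=int)
    bandits = [[make_bandit(A) for _ in range(S)] for _ in range(H)]
    for _ in range(K):
        s = env.reset()
        for h in range(H):
            a = bandits[h][s].sample()
            r, s_next = env.step(a)
            N[h, s] += 1
            t = N[h, s]
            alpha = (H + 1) / (H + t)
            bonus = c * np.sqrt(H**3 / t)           # optimism bonus (log factors omitted)
            V[h, s] = min(H, (1 - alpha) * V[h, s]
                             + alpha * (r + V[h + 1, s_next] + bonus))
            # Any adversarial bandit with suitable regret guarantees plugs in here.
            bandits[h][s].update(a, np.clip((H - r - V[h + 1, s_next]) / H, 0.0, 1.0))
            s = s_next
    return V
```

The design choice worth noting is that the per-(step, state) bandit instances are the only place action probabilities live, so the memory footprint is $O(HSA)$ rather than the $O(HS\prod_i A_i)$ of a joint Q-table over joint actions.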