Can Reinforcement Learning Find Stackelberg-Nash Equilibria in
General-Sum Markov Games with Myopic Followers?
- URL: http://arxiv.org/abs/2112.13521v1
- Date: Mon, 27 Dec 2021 05:41:14 GMT
- Title: Can Reinforcement Learning Find Stackelberg-Nash Equilibria in
General-Sum Markov Games with Myopic Followers?
- Authors: Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan
- Abstract summary: We study multi-player general-sum Markov games with one of the players designated as the leader and the other players regarded as followers.
For such a game, our goal is to find a Stackelberg-Nash equilibrium (SNE), which is a policy pair $(\pi^*, \nu^*)$.
We develop sample-efficient reinforcement learning (RL) algorithms for solving for an SNE in both online and offline settings.
- Score: 156.5760265539888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study multi-player general-sum Markov games with one of the players
designated as the leader and the other players regarded as followers. In
particular, we focus on the class of games where the followers are myopic,
i.e., they aim to maximize their instantaneous rewards. For such a game, our
goal is to find a Stackelberg-Nash equilibrium (SNE), which is a policy pair
$(\pi^*, \nu^*)$ such that (i) $\pi^*$ is the optimal policy for the leader
when the followers always play their best response, and (ii) $\nu^*$ is the
best response policy of the followers, which is a Nash equilibrium of the
followers' game induced by $\pi^*$. We develop sample-efficient reinforcement
learning (RL) algorithms for solving for an SNE in both online and offline
settings. Our algorithms are optimistic and pessimistic variants of
least-squares value iteration, and they are readily able to incorporate
function approximation tools in the setting of large state spaces. Furthermore,
for the case with linear function approximation, we prove that our algorithms
achieve sublinear regret and suboptimality under online and offline setups
respectively. To the best of our knowledge, we establish the first provably
efficient RL algorithms for solving for SNEs in general-sum Markov games with
myopic followers.
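To make the definition concrete, here is a hedged reformulation (the notation $\nu(\pi)$ and $V^{\pi,\nu}$ below is ours, not the paper's): writing $\nu(\pi)$ for a Nash equilibrium of the followers' stage game induced by a leader policy $\pi$ (well defined because the followers are myopic and best-respond at each state separately) and $V^{\pi,\nu}$ for the leader's expected cumulative reward under the joint policy $(\pi, \nu)$, the pair $(\pi^*, \nu^*)$ is an SNE when $\pi^* \in \arg\max_{\pi} V^{\pi,\nu(\pi)}$ and $\nu^* = \nu(\pi^*)$.
The sketch below illustrates only the flavor of the optimistic (online) variant, not the paper's actual algorithm: least-squares value iteration for the leader with linear function approximation and an elliptical exploration bonus. It assumes, purely for illustration, that the followers' myopic best response is folded into the environment's step function (so the leader observes its own reward and the next state after the followers respond) and that the leader's action-value is approximately linear in known features $\phi(s, a)$; the toy environment, the constants, and the names FeatureEnv, lsvi_ucb, and optimistic_q are all hypothetical.

import numpy as np


class FeatureEnv:
    # Toy episodic environment with linear features phi(s, a) in R^d for the leader.
    # step() is assumed to already fold in the myopic followers' best response to the
    # leader's action, so it returns only the leader's reward and the next state.
    # This is an illustrative stand-in, not the paper's model.
    def __init__(self, n_states=5, n_actions=3, d=4, H=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.S, self.A, self.d, self.H = n_states, n_actions, d, H
        self.phi = self.rng.random((n_states, n_actions, d))   # features phi(s, a)
        self.theta = self.rng.random(d)                        # leader reward parameter
        self.P = self.rng.dirichlet(np.ones(n_states), (n_states, n_actions))
        self.s = 0

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        r = float(self.phi[self.s, a] @ self.theta) / self.d   # reward in [0, 1)
        self.s = int(self.rng.choice(self.S, p=self.P[self.s, a]))
        return self.s, r


def optimistic_q(env, w_h, Lambda_inv_h, beta, H, s):
    # Linear Q-estimate plus an elliptical (UCB-style) bonus, truncated at H.
    phis = env.phi[s]                                           # shape (A, d)
    bonus = beta * np.sqrt(np.einsum("ad,de,ae->a", phis, Lambda_inv_h, phis))
    return np.minimum(phis @ w_h + bonus, float(H))


def lsvi_ucb(env, K=50, beta=1.0, lam=1.0):
    # Optimistic least-squares value iteration for the leader (LSVI-UCB style sketch).
    d, H = env.d, env.H
    data = [[] for _ in range(H)]               # per step h: (s, a, r, s') tuples
    w = np.zeros((H, d))
    Lambda_inv = [np.eye(d) / lam for _ in range(H)]
    for _ in range(K):
        # Backward pass: ridge-regress r + V_{h+1}(s') onto phi(s, a) at each step h.
        for h in range(H - 1, -1, -1):
            Lam, b = lam * np.eye(d), np.zeros(d)
            for (s, a, r, s_next) in data[h]:
                v_next = 0.0 if h == H - 1 else float(
                    optimistic_q(env, w[h + 1], Lambda_inv[h + 1], beta, H, s_next).max())
                Lam += np.outer(env.phi[s, a], env.phi[s, a])
                b += env.phi[s, a] * (r + v_next)
            Lambda_inv[h] = np.linalg.inv(Lam)
            w[h] = Lambda_inv[h] @ b
        # Forward pass: act greedily with respect to the optimistic Q-estimates.
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(optimistic_q(env, w[h], Lambda_inv[h], beta, H, s)))
            s_next, r = env.step(a)
            data[h].append((s, a, r, s_next))
            s = s_next
    return w


if __name__ == "__main__":
    weights = lsvi_ucb(FeatureEnv())
    print(weights.shape)                        # (H, d) fitted leader weights

A pessimistic offline variant would typically fit the same ridge regression on a fixed dataset and subtract, rather than add, the bonus term; that is the usual way the abstract's "optimistic and pessimistic variants of least-squares value iteration" are instantiated.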
Related papers
- Actions Speak What You Want: Provably Sample-Efficient Reinforcement
Learning of the Quantal Stackelberg Equilibrium from Strategic Feedbacks [94.07688076435818]
We study reinforcement learning for learning a Quantal Stackelberg Equilibrium (QSE) in an episodic Markov game with a leader-follower structure.
Our algorithms are based on (i) learning the quantal response model via maximum likelihood estimation and (ii) model-free or model-based RL for solving the leader's decision making problem.
arXiv Detail & Related papers (2023-07-26T10:24:17Z)
- Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model [50.38446482252857]
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy with a sample complexity that is minimax-optimal up to logarithmic factors.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z)
- Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE).
We show that computing NE for this class of games is $\textit{hard}$ for the complexity class $\mathrm{CLS}$.
arXiv Detail & Related papers (2021-11-07T21:15:35Z)
- Almost Optimal Algorithms for Two-player Markov Games with Linear Function Approximation [92.99933928528797]
We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves.
We propose an algorithm, Nash-UCRL-VTR, based on the principle of "Optimism-in-Face-of-Uncertainty".
We show that Nash-UCRL-VTR can provably achieve an $\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the linear function dimension.
arXiv Detail & Related papers (2021-02-15T09:09:16Z)
- Near-Optimal Reinforcement Learning with Self-Play [50.29853537456737]
We focus on self-play algorithms, which learn the optimal policy by playing against themselves without any direct supervision.
We propose an optimistic variant of the Nash Q-learning algorithm with sample complexity $\tilde{\mathcal{O}}(SAB)$, and a new Nash V-learning algorithm with sample complexity $\tilde{\mathcal{O}}(S(A+B))$.
arXiv Detail & Related papers (2020-06-22T05:00:13Z)
- Model-free Reinforcement Learning for Stochastic Stackelberg Security Games [7.470839530834359]
We consider a sequential Stackelberg game with two players, a leader and a follower.
The follower has access to the state of the system while the leader does not.
We propose an RL algorithm based on Expected Sarsa that learns the Stackelberg equilibrium policy by simulating a model of the MDP.
arXiv Detail & Related papers (2020-05-24T22:34:20Z)
- Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.