Taming the Exponential Action Set: Sublinear Regret and Fast Convergence
to Nash Equilibrium in Online Congestion Games
- URL: http://arxiv.org/abs/2306.13673v1
- Date: Mon, 19 Jun 2023 03:03:44 GMT
- Title: Taming the Exponential Action Set: Sublinear Regret and Fast Convergence
to Nash Equilibrium in Online Congestion Games
- Authors: Jing Dong, Jingyu Wu, Siwei Wang, Baoxiang Wang, Wei Chen
- Abstract summary: We study the online formulation of congestion games, where agents participate in the game repeatedly and observe feedback with randomness.
We propose CongestEXP, a decentralized algorithm that applies the classic exponential weights method.
We show that CongestEXP attains a regret upper bound of $O(kFsqrtT)$ for every individual player, where $T$ is the time horizon.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The congestion game is a powerful model that encompasses a range of
engineering systems such as traffic networks and resource allocation. It
describes the behavior of a group of agents who share a common set of $F$
facilities and take actions as subsets with $k$ facilities. In this work, we
study the online formulation of congestion games, where agents participate in
the game repeatedly and observe feedback with randomness. We propose
CongestEXP, a decentralized algorithm that applies the classic exponential
weights method. By maintaining weights on the facility level, the regret bound
of CongestEXP avoids the exponential dependence on the size of possible
facility sets, i.e., $\binom{F}{k} \approx F^k$, and scales only linearly with
$F$. Specifically, we show that CongestEXP attains a regret upper bound of
$O(kF\sqrt{T})$ for every individual player, where $T$ is the time horizon. On
the other hand, exploiting the exponential growth of weights enables CongestEXP
to achieve a fast convergence rate. If a strict Nash equilibrium exists, we
show that CongestEXP can converge to the strict Nash policy almost
exponentially fast in $O(F\exp(-t^{1-\alpha}))$, where $t$ is the number of
iterations and $\alpha \in (1/2, 1)$.
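The mechanism described in the abstract (exponential weights maintained per facility rather than per action) is concrete enough to sketch. The Python below is a minimal illustration, not the authors' implementation: the class name CongestExpPlayer, the semi-bandit feedback model (each player observes the cost of every facility it used), and the brute-force enumeration of all $\binom{F}{k}$ actions when sampling are assumptions made for readability. The point it illustrates is that each player stores only $F$ numbers even though play randomizes over $\binom{F}{k}$ actions, because an action's loss decomposes as the sum of its facilities' costs.

```python
import itertools
import math
import random

class CongestExpPlayer:
    """Illustrative facility-level exponential weights (CongestEXP-style sketch).

    The player keeps one cumulative cost per facility (F numbers), not one
    weight per action (C(F, k) numbers). The exponential-weights probability
    of an action factorizes over its facilities, since an action's loss is
    the sum of its facilities' costs.
    """

    def __init__(self, num_facilities, subset_size, learning_rate):
        self.F = num_facilities
        self.k = subset_size
        self.eta = learning_rate
        self.cum_cost = [0.0] * self.F  # cumulative observed cost per facility
        # Brute-force action list; fine for small F, illustration only.
        self.actions = list(itertools.combinations(range(self.F), self.k))

    def sample_action(self):
        # P(a) is proportional to exp(-eta * sum_{f in a} cum_cost[f]).
        logits = [-self.eta * sum(self.cum_cost[f] for f in a)
                  for a in self.actions]
        shift = max(logits)  # stabilize the softmax
        weights = [math.exp(l - shift) for l in logits]
        return random.choices(self.actions, weights=weights)[0]

    def update(self, action, observed_costs):
        # Assumed feedback model: the player sees the (possibly noisy) cost
        # of each facility it used this round.
        for f in action:
            self.cum_cost[f] += observed_costs[f]
```

A toy run with two players, $F = 4$ facilities, $k = 2$, and congestion cost equal to the fraction of players on each facility (again, purely an illustration):

```python
players = [CongestExpPlayer(num_facilities=4, subset_size=2, learning_rate=0.1)
           for _ in range(2)]
for t in range(1000):
    actions = [p.sample_action() for p in players]
    load = [sum(f in a for a in actions) / len(players) for f in range(4)]
    for p, a in zip(players, actions):
        p.update(a, load)
```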
Related papers
- Games played by Exponential Weights Algorithms
We consider a repeated interaction in discrete time, where each player uses an exponential weights algorithm characterized by an initial mixed action and a fixed learning rate.
We show that whenever a strict Nash equilibrium exists, the probability of playing a strict Nash equilibrium at the next stage converges almost surely to 0 or 1.
arXiv Detail & Related papers (2024-07-09T08:49:51Z)
- MF-OML: Online Mean-Field Reinforcement Learning with Occupation Measures for Large Population Games
This paper proposes an online mean-field reinforcement learning algorithm for computing Nash equilibria of sequential games.
MF-OML is the first multi-agent reinforcement learning algorithm that provably solves for (approximate) Nash equilibria in this setting.
As a byproduct, we also obtain the first tractable, globally convergent computational method for approximately solving monotone mean-field games.
arXiv Detail & Related papers (2024-05-01T02:19:31Z)
- Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with General Utilities
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints.
Our algorithm converges to a first-order stationary point (FOSP) at the rate of $\mathcal{O}\left(T^{-2/3}\right)$.
In the sample-based setting, we demonstrate that, with high probability, our algorithm requires $\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)$ samples to achieve an $\epsilon$-FOSP.
arXiv Detail & Related papers (2023-05-27T20:08:35Z)
- Global Nash Equilibrium in Non-convex Multi-player Game: Theory and Algorithms
We show that a Nash equilibrium (NE) is acceptable to all players in a multi-player game, since no player can benefit from deviating unilaterally.
arXiv Detail & Related papers (2023-01-19T11:36:50Z)
- The Sample Complexity of Online Contract Design
We study the hidden-action principal-agent problem in an online setting.
In each round, the principal posts a contract that specifies the payment to the agent based on each outcome.
The agent then makes a strategic choice of action that maximizes her own utility, but the action is not directly observable by the principal.
arXiv Detail & Related papers (2022-11-10T17:59:42Z)
- Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy using a minimax-optimal number of samples.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z)
- Near-Optimal No-Regret Learning for General Convex Games
We show that near-optimal regret can be obtained for general convex and compact strategy sets.
Our dynamics are an instantiation of optimistic follow-the-regularized-leader over an appropriately lifted space.
Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret bounds.
arXiv Detail & Related papers (2022-06-17T12:58:58Z)
- Nearly Minimax Algorithms for Linear Bandits with Shared Representation
We consider the setting where we play $M$ linear bandits with dimension $d$, each for $T$ rounds, and these $M$ bandit tasks share a common $k$ $(\ll d)$ dimensional linear representation.
We come up with novel algorithms that achieve $\widetilde{O}\left(d\sqrt{kMT} + kM\sqrt{T}\right)$ regret bounds, which match the known minimax regret lower bound up to logarithmic factors.
arXiv Detail & Related papers (2022-03-29T15:27:13Z)
- Learning Stationary Nash Equilibrium Policies in $n$-Player Stochastic Games with Independent Chains
We consider a class of $n$-player games in which players have their own internal state/action spaces while they are coupled through their payoff functions.
For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable.
We develop algorithms based on dual averaging and dual mirror descent, which converge to the set of $\epsilon$-NE policies.
arXiv Detail & Related papers (2022-01-28T16:27:21Z)
- When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently?
This paper investigates what learning goals admit better sample complexities in the setting of $m$-player general-sum Markov games.
First, we design algorithms for learning an $\epsilon$-Coarse Correlated Equilibrium (CCE) in $\widetilde{\mathcal{O}}(H^5 S \max_{i \le m} A_i / \epsilon^2)$ episodes.
Second, we consider the important special case of Markov Potential Games, and design an algorithm that learns an $\epsilon$-approximate Nash equilibrium within $\widetilde{\mathcal{O}}(\cdot)$ episodes.
arXiv Detail & Related papers (2021-10-08T15:06:22Z)
- Stochastic Bandits with Linear Constraints
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z)