Multi-Leader Congestion Games with an Adversary
- URL: http://arxiv.org/abs/2112.07435v1
- Date: Tue, 14 Dec 2021 14:47:43 GMT
- Title: Multi-Leader Congestion Games with an Adversary
- Authors: Tobias Harks, Mona Henle, Max Klimm, Jannik Matuschke, Anja Schedel
- Abstract summary: We study a multi-leader single-follower congestion game where multiple users (leaders) choose one resource out of a set of resources.
We show that pure Nash equilibria may fail to exist and therefore, we consider approximate equilibria instead.
We show how to compute efficiently a best approximate equilibrium, that is, with smallest possible $\alpha$ among all $\alpha$-approximate equilibria of the given instance.
- Score: 0.5914780964919123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a multi-leader single-follower congestion game where multiple users
(leaders) choose one resource out of a set of resources and, after observing
the realized loads, an adversary (single-follower) attacks the resources with
maximum loads, causing additional costs for the leaders. For the resulting
strategic game among the leaders, we show that pure Nash equilibria may fail to
exist and therefore, we consider approximate equilibria instead. As our first
main result, we show that the existence of a $K$-approximate equilibrium can
always be guaranteed, where $K \approx 1.1974$ is the unique solution of a
cubic polynomial equation. To this end, we give a polynomial time combinatorial
algorithm which computes a $K$-approximate equilibrium. The factor $K$ is
tight, meaning that there is an instance that does not admit an
$\alpha$-approximate equilibrium for any $\alpha<K$. Thus $\alpha=K$ is the
smallest possible value of $\alpha$ such that the existence of an
$\alpha$-approximate equilibrium can be guaranteed for any instance of the
considered game. Secondly, we focus on approximate equilibria of a given fixed
instance. We show how to compute efficiently a best approximate equilibrium,
that is, with smallest possible $\alpha$ among all $\alpha$-approximate
equilibria of the given instance.
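An $\alpha$-approximate equilibrium here is understood multiplicatively: a strategy profile in which no leader can decrease its cost by more than a factor of $\alpha$ through a unilateral deviation. The following minimal Python sketch illustrates that check for the model described in the abstract, under an assumed cost structure: each leader pays the load on its chosen resource plus a fixed attack penalty if that resource carries a maximum load and is therefore attacked by the adversary. The paper's exact cost functions and its combinatorial algorithm for computing a $K$-approximate equilibrium are not reproduced here; the helper names and the penalty value are illustrative.

```python
from collections import Counter

def leader_costs(profile, attack_penalty=1.0):
    """Cost of each leader under an ASSUMED model: a leader pays the load on
    its chosen resource, plus `attack_penalty` if the adversary attacks that
    resource (here: the adversary attacks every resource with maximum load)."""
    loads = Counter(profile)                      # resource -> number of leaders
    max_load = max(loads.values())
    attacked = {r for r, l in loads.items() if l == max_load}
    return [loads[r] + (attack_penalty if r in attacked else 0.0) for r in profile]

def approximation_factor(profile, resources, attack_penalty=1.0):
    """Smallest alpha such that `profile` is an alpha-approximate equilibrium,
    i.e. cost_i(profile) <= alpha * cost_i(best unilateral deviation) for all i."""
    alpha = 1.0
    current = leader_costs(profile, attack_penalty)
    for i, _ in enumerate(profile):
        best_dev = min(
            leader_costs(profile[:i] + [r] + profile[i + 1:], attack_penalty)[i]
            for r in resources
        )
        alpha = max(alpha, current[i] / best_dev)
    return alpha

# Toy instance: three leaders, two resources.
profile = ["a", "a", "b"]
print(approximation_factor(profile, resources=["a", "b"]))
```

In this toy instance every leader is already best-responding, so the computed factor is 1; profiles of instances without exact pure Nash equilibria yield a factor strictly above 1, and the abstract states that a factor of $K \approx 1.1974$ can always be achieved.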
Related papers
- Barriers to Welfare Maximization with No-Regret Learning [68.66209476382213]
We prove lower bounds for computing a near-optimal $T$-sparse CCE.
In particular, we show that the inapproximability of maximum clique precludes attaining any non-trivial sparsity in polynomial time.
arXiv Detail & Related papers (2024-11-04T00:34:56Z) - Games played by Exponential Weights Algorithms [0.0]
We consider a repeated interaction in discrete time, where each player uses an exponential weights algorithm characterized by an initial mixed action and a fixed learning rate (a generic sketch of this update rule appears after this list).
We show that whenever a strict Nash equilibrium exists, the probability of playing a strict Nash equilibrium at the next stage converges almost surely to 0 or 1.
arXiv Detail & Related papers (2024-07-09T08:49:51Z) - The complexity of approximate (coarse) correlated equilibrium for incomplete information games [16.96984593866157]
We study the complexity of decentralized learning of approximate correlated equilibria in incomplete information games.
Our lower bound holds even for the easier solution concept of $\epsilon$-approximate coarse correlated equilibrium.
arXiv Detail & Related papers (2024-06-04T14:35:27Z) - On Tractable $\Phi$-Equilibria in Non-Concave Games [53.212133025684224]
Non-concave games introduce significant game-theoretic and optimization challenges.
We show that when $\Phi$ is finite, there exists an efficient uncoupled learning algorithm that converges to the corresponding $\Phi$-equilibria.
We also show that Online Gradient Descent can efficiently approximate $Phi$-equilibria in non-trivial regimes.
arXiv Detail & Related papers (2024-03-13T01:51:30Z) - Optimistic Policy Gradient in Multi-Player Markov Games with a Single
Controller: Convergence Beyond the Minty Property [89.96815099996132]
We develop a new framework to characterize optimistic policy gradient methods in multi-player games with a single controller.
Our approach relies on a natural generalization of the classical Minty property that we introduce, which we anticipate to have further applications beyond Markov games.
arXiv Detail & Related papers (2023-12-19T11:34:10Z) - A Black-box Approach for Non-stationary Multi-agent Reinforcement Learning [53.83345471268163]
We investigate learning the equilibria in non-stationary multi-agent systems.
We show how to test for various types of equilibria by a black-box reduction to single-agent learning.
arXiv Detail & Related papers (2023-06-12T23:48:24Z) - Equilibrium Bandits: Learning Optimal Equilibria of Unknown Dynamics [23.722837647516357]
Consider a decision-maker that can pick one out of $K$ actions to control an unknown system, for $T$ turns.
The dynamics of the system are unknown to the decision-maker, which can only observe a noisy reward at the end of every turn.
Existing bandit algorithms, whether stochastic or adversarial, achieve linear (in $\tau$) regret for this problem.
We present a novel algorithm that knows to switch an action quickly if it is not worth it to wait until the equilibrium is reached.
arXiv Detail & Related papers (2023-02-27T10:47:15Z) - Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a
Generative Model [50.38446482252857]
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy using $\widetilde{O}(\cdots)$ samples.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z) - Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z) - Safe Equilibrium [1.7132914341329848]
The standard game-theoretic solution concept, Nash equilibrium, assumes that all players behave rationally.
We propose a new solution concept called safe equilibrium that models opponents as behaving rationally with a specified probability.
We prove that a safe equilibrium exists in all strategic-form games, and prove that its computation is PPAD-hard.
arXiv Detail & Related papers (2022-01-12T01:45:51Z) - When Can We Learn General-Sum Markov Games with a Large Number of
Players Sample-Efficiently? [10.397170312149067]
This paper investigates what learning goals admit better sample complexities in the setting of $m$-player general-sum Markov games.
First, we design algorithms for learning an $\epsilon$-Coarse Correlated Equilibrium (CCE) in $\widetilde{\mathcal{O}}(H^5 S \max_{i \le m} A_i / \epsilon^2)$ episodes.
Second, we consider the important special case of Markov Potential Games, and design an algorithm that learns an $\epsilon$-approximate Nash equilibrium within $\widetilde{\mathcal{O}}(\cdots)$ episodes.
arXiv Detail & Related papers (2021-10-08T15:06:22Z)
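The "Games played by Exponential Weights Algorithms" entry above refers to players who each run an exponential weights update with an initial mixed action and a fixed learning rate. The sketch below is a generic, full-information illustration of that update on a two-player matrix game; the payoff matrices, learning rate, and initial mixed actions are placeholders, and that paper's analysis concerns realized (random) payoffs rather than the expected payoffs used here.

```python
import numpy as np

def exp_weights_step(mixed_action, payoff_vector, eta):
    """One exponential weights update: reweight each pure action by
    exp(eta * payoff) and renormalize to a probability distribution."""
    weights = mixed_action * np.exp(eta * payoff_vector)
    return weights / weights.sum()

# Illustrative two-player coordination game (payoff matrices are placeholders).
A = np.array([[1.0, 0.0], [0.0, 1.0]])   # row player's payoffs
B = np.array([[1.0, 0.0], [0.0, 1.0]])   # column player's payoffs
x = np.array([0.5, 0.5])                  # initial mixed action, row player
y = np.array([0.4, 0.6])                  # initial mixed action, column player
eta = 0.1                                 # fixed learning rate

for _ in range(1000):
    # Each player updates against the opponent's current mixed action.
    x, y = (exp_weights_step(x, A @ y, eta),
            exp_weights_step(y, B.T @ x, eta))

print(x, y)  # both mixed actions concentrate on the second pure action
```

In this deterministic illustration the mixed actions concentrate on a strict Nash equilibrium; the paper studies the stochastic version of these dynamics, where play at the next stage is a strict equilibrium with probability converging to 0 or 1.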