Multiplayer Bandit Learning, from Competition to Cooperation
- URL: http://arxiv.org/abs/1908.01135v4
- Date: Fri, 12 Jan 2024 14:32:25 GMT
- Title: Multiplayer Bandit Learning, from Competition to Cooperation
- Authors: Simina Brânzei and Yuval Peres
- Abstract summary: We study the effects of competition and cooperation on the tradeoff between exploration and exploitation.
The model is related to the economics literature on strategic experimentation, where usually players observe each other's rewards.
- Score: 3.7801191959442053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The stochastic multi-armed bandit model captures the tradeoff between
exploration and exploitation. We study the effects of competition and
cooperation on this tradeoff. Suppose there are $k$ arms and two players, Alice
and Bob. In every round, each player pulls an arm, receives the resulting
reward, and observes the choice of the other player but not their reward.
Alice's utility is $\Gamma_A + \lambda \Gamma_B$ (and similarly for Bob), where
$\Gamma_A$ is Alice's total reward and $\lambda \in [-1, 1]$ is a cooperation
parameter. At $\lambda = -1$ the players are competing in a zero-sum game, at
$\lambda = 1$, they are fully cooperating, and at $\lambda = 0$, they are
neutral: each player's utility is their own reward. The model is related to the
economics literature on strategic experimentation, where usually players
observe each other's rewards.
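As a concrete illustration of this setup, here is a minimal simulation sketch. The Bernoulli arms, the finite horizon in place of discounting, and the Thompson-sampling-style rule for each player are assumptions added here for illustration, not choices made in the paper; as in the model, players observe each other's arm choices but never each other's rewards, and this simple rule does not exploit those observations.

```python
# Minimal sketch of the two-player model above (Bernoulli arms, finite horizon,
# Thompson-style rule are illustrative assumptions, not from the paper).
import numpy as np

def simulate(k=3, horizon=200, lam=0.0, seed=0):
    rng = np.random.default_rng(seed)
    true_means = rng.uniform(size=k)      # hidden Bernoulli means of the k arms
    # Private Beta posteriors: post[p, arm] = (alpha, beta) counts for player p.
    # Posteriors stay private because rewards are never shared.
    post = np.ones((2, k, 2))
    totals = np.zeros(2)                  # Gamma_A, Gamma_B
    for _ in range(horizon):
        # Both players choose simultaneously, then rewards are realized.
        choices = []
        for p in range(2):
            samples = rng.beta(post[p, :, 0], post[p, :, 1])
            choices.append(int(np.argmax(samples)))
        for p, arm in enumerate(choices):
            reward = rng.binomial(1, true_means[arm])
            post[p, arm, 0] += reward
            post[p, arm, 1] += 1 - reward
            totals[p] += reward
    # Utilities as defined above: own total plus lambda times the other's total.
    return totals[0] + lam * totals[1], totals[1] + lam * totals[0]

print(simulate(lam=1.0))    # fully cooperating
print(simulate(lam=-1.0))   # zero-sum competition
print(simulate(lam=0.0))    # neutral
```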
With discount factor $\beta$, the Gittins index reduces the one-player
problem to the comparison between a risky arm, with a prior $\mu$, and a
predictable arm, with success probability $p$. The value of $p$ where the
player is indifferent between the arms is the Gittins index $g = g(\mu,\beta) >
m$, where $m$ is the mean of the risky arm.
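For orientation, the sketch below shows one rough way to approximate $g(\mu, \beta)$ for a Bernoulli risky arm: bisect on the safe probability $p$ and, at each candidate $p$, compare the value of retiring to the predictable arm forever, $p/(1-\beta)$, with the value of optimally experimenting on the risky arm. The Beta prior and the truncation depth are simplifications added here, not taken from the paper.

```python
# Rough approximation of the Gittins index g(mu, beta) for a Bernoulli risky
# arm with a Beta(a0, b0) prior (prior family and truncation are assumptions).
from functools import lru_cache

def gittins_index(a0=1, b0=1, beta=0.9, depth=60, tol=1e-6):
    def risky_value(p):
        safe_forever = p / (1.0 - beta)   # value of playing the safe arm forever

        @lru_cache(maxsize=None)
        def v(a, b, d):
            # Value at posterior Beta(a, b) with d pulls left before the
            # recursion is truncated to "retire to the safe arm".
            if d == 0:
                return safe_forever
            mean = a / (a + b)
            pull = (mean * (1.0 + beta * v(a + 1, b, d - 1))
                    + (1.0 - mean) * beta * v(a, b + 1, d - 1))
            return max(safe_forever, pull)

        return v(a0, b0, depth)

    lo, hi = a0 / (a0 + b0), 1.0          # the index lies in (m, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if risky_value(mid) > mid / (1.0 - beta) + 1e-12:
            lo = mid                       # experimenting still beats retiring
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(gittins_index())  # approximate g for a uniform prior and beta = 0.9
```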
We show that competing players explore less than a single player: there is
$p^* \in (m, g)$ so that for all $p > p^*$, the players stay at the predictable
arm. However, the players are not myopic: they still explore for some $p > m$.
On the other hand, cooperating players explore more than a single player. We
also show that neutral players learn from each other, receiving strictly higher
total rewards than they would playing alone, for all $p \in (p^*, g)$, where
$p^*$ is the threshold from the competing case.
Finally, we show that competing and neutral players eventually settle on the
same arm in every Nash equilibrium, while this can fail for cooperating
players.
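Restating these thresholds compactly (the display is an orientation aid added here; every claim in it is taken from the abstract above):

```latex
% Orientation only: m = prior mean of the risky arm, g = g(\mu,\beta) the
% single-player Gittins index, p^* = the competition threshold.
\[
  m \;<\; p^{*} \;<\; g(\mu,\beta), \qquad
  \begin{cases}
    p > p^{*}: & \text{competing players stay at the predictable arm;}\\[2pt]
    \text{some } p \in (m, p^{*}]: & \text{competing players still explore (not myopic);}\\[2pt]
    p \in (p^{*}, g): & \text{neutral players earn strictly more than they would alone.}
  \end{cases}
\]
```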
Related papers
- Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms [71.73971094342349]
Self-play via online learning is one of the premier ways to solve large-scale two-player zero-sum games.
We show that OMWU offers several advantages including logarithmic dependence on the size of the payoff matrix.
We prove that a broad class of algorithms that do not forget the past quickly all suffer the same issue.
arXiv Detail & Related papers (2024-06-15T13:26:17Z) - Adversarial Combinatorial Bandits with Switching Costs [55.2480439325792]
We study the adversarial bandit problem with a switching cost $\lambda$ charged for each change of a selected arm between rounds.
We derive lower bounds for the minimax regret and design algorithms to approach them.
arXiv Detail & Related papers (2024-04-02T12:15:37Z) - Optimal Cooperative Multiplayer Learning Bandits with Noisy Rewards and
No Communication [0.0]
We consider a cooperative multiplayer bandit learning problem where the players are only allowed to agree on a strategy beforehand.
In this problem, each player simultaneously selects an action.
We show that this algorithm achieves logarithmic $O\!\left(\frac{\log T}{\Delta_{\mathrm{bma}}}\right)$ (gap-dependent) regret as well as $O(\sqrt{T \log T})$ (gap-independent) regret.
arXiv Detail & Related papers (2023-11-10T17:55:44Z) - Competing for Shareable Arms in Multi-Player Multi-Armed Bandits [29.08799537067425]
We study a novel multi-player multi-armed bandit (MPMAB) setting where players are selfish and aim to maximize their own rewards.
We propose a novel Selfish MPMAB with Averaging Allocation (SMAA) approach based on the equilibrium.
We establish that no single selfish player can significantly increase their rewards through deviation, nor can they detrimentally affect other players' rewards without incurring substantial losses for themselves.
arXiv Detail & Related papers (2023-05-30T15:59:56Z) - The Pareto Frontier of Instance-Dependent Guarantees in Multi-Player
Multi-Armed Bandits with no Communication [10.446001329147112]
We study the multi-player multi-armed bandit problem.
In this problem, $m$ players cooperate to maximize their total reward from $K > m$ arms.
We ask whether it is possible to obtain optimal instance-dependent regret $\tilde{O}(1/\Delta)$, where $\Delta$ is the gap between the $m$-th and $(m+1)$-st best arms.
arXiv Detail & Related papers (2022-02-19T18:19:36Z) - Near-Optimal Learning of Extensive-Form Games with Imperfect Information [54.55092907312749]
We present the first line of algorithms that require only $\widetilde{\mathcal{O}}((XA+YB)/\varepsilon^2)$ episodes of play to find an $\varepsilon$-approximate Nash equilibrium in two-player zero-sum games.
This improves upon the best known sample complexity of $\widetilde{\mathcal{O}}((X^2A+Y^2B)/\varepsilon^2)$ by a factor of $\widetilde{\mathcal{O}}(\max\{X, Y\})$.
arXiv Detail & Related papers (2022-02-03T18:18:28Z) - Experimental test of Tsirelson's bound with a single photonic qubit [8.8709589922781]
In the Clauser-Horne-Shimony-Holt (CHSH) game, two space-like separated players, Alice and Bob, are each assigned a classical bit, $a$ and $b$ respectively.
If the players use classical strategies, the optimal success probability is $w(\mathrm{CHSH}) = 0.75$.
Popescu and Rohrlich noted that the perfect success probability $1$ can also be achieved in a more general theory without violating the no-signaling assumption.
arXiv Detail & Related papers (2022-01-25T09:06:53Z) - When Can We Learn General-Sum Markov Games with a Large Number of
Players Sample-Efficiently? [10.397170312149067]
This paper investigates what learning goals admit better sample complexities in the setting of $m$-player general-sum Markov games.
First, we design algorithms for learning an $\epsilon$-Coarse Correlated Equilibrium (CCE) in $\widetilde{\mathcal{O}}(H^5 S \max_{i \le m} A_i / \epsilon^2)$ episodes.
Second, we consider the important special case of Markov Potential Games, and design an algorithm that learns an $\epsilon$-approximate Nash equilibrium within $\widetilde{\mathcal{O}}(\cdots)$ episodes.
arXiv Detail & Related papers (2021-10-08T15:06:22Z) - Combinatorial Bandits without Total Order for Arms [52.93972547896022]
We present a reward model that captures set-dependent reward distribution and assumes no total order for arms.
We develop a novel regret analysis and show an $O\!\left(\frac{k^2 n \log T}{\epsilon}\right)$ gap-dependent regret bound as well as an $O\!\left(k^2 \sqrt{n T \log T}\right)$ gap-independent regret bound.
arXiv Detail & Related papers (2021-03-03T23:08:59Z) - Bandit Learning in Decentralized Matching Markets [82.39061186055775]
We study two-sided matching markets in which one side of the market (the players) does not have a priori knowledge about its preferences for the other side (the arms) and is required to learn its preferences from experience.
This model extends the standard multi-armed bandit framework to a decentralized multiple player setting with competition.
We show that the algorithm is incentive compatible whenever the arms' preferences are shared, but not necessarily so when preferences are fully general.
arXiv Detail & Related papers (2020-12-14T08:58:07Z) - Near-Optimal Reinforcement Learning with Self-Play [50.29853537456737]
We focus on self-play algorithms which learn the optimal policy by playing against itself without any direct supervision.
We propose an optimistic variant of the Nash Q-learning algorithm with sample complexity $\tilde{\mathcal{O}}(SAB)$, and a new Nash V-learning algorithm with sample complexity $\tilde{\mathcal{O}}(S(A+B))$.
arXiv Detail & Related papers (2020-06-22T05:00:13Z)