Fair Algorithms for Multi-Agent Multi-Armed Bandits
- URL: http://arxiv.org/abs/2007.06699v2
- Date: Wed, 24 Feb 2021 02:43:57 GMT
- Title: Fair Algorithms for Multi-Agent Multi-Armed Bandits
- Authors: Safwan Hossain, Evi Micha, Nisarg Shah
- Abstract summary: We propose a multi-agent variant of the classical multi-armed bandit problem.
The goal is not to learn the "best arm"; indeed, each agent may perceive a different arm to be the best for her personally.
We show that multi-agent variants of three classic multi-armed bandit algorithms achieve sublinear regret.
- Score: 29.68201160277817
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a multi-agent variant of the classical multi-armed bandit problem,
in which there are $N$ agents and $K$ arms, and pulling an arm generates a
(possibly different) stochastic reward for each agent. Unlike the classical
multi-armed bandit problem, the goal is not to learn the "best arm"; indeed,
each agent may perceive a different arm to be the best for her personally.
Instead, we seek to learn a fair distribution over the arms. Drawing on a long
line of research in economics and computer science, we use the Nash social
welfare as our notion of fairness. We design multi-agent variants of three
classic multi-armed bandit algorithms and show that they achieve sublinear
regret, which is now measured in terms of the lost Nash social welfare.
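To make the objective concrete, here is a minimal explore-then-commit sketch (an illustrative baseline, not one of the paper's algorithms): estimate each agent's mean reward per arm, then commit to the distribution over arms that maximizes the Nash social welfare, i.e., the product of agents' expected rewards. Since the log of the NSW is concave in the distribution, a simple exponentiated-gradient loop finds the maximizer; the Bernoulli rewards, learning rate, and round counts below are assumptions.

```python
import numpy as np

def nash_social_welfare(p, mu):
    """NSW of arm distribution p under mean-reward matrix mu (N agents x K arms):
    the product over agents of their expected rewards, prod_n (mu_n . p)."""
    return float(np.prod(mu @ p))

def max_nsw_distribution(mu, iters=5000, lr=0.05):
    """Maximize sum_n log(mu_n . p), which is concave in p, over the simplex
    via exponentiated gradient ascent."""
    _, K = mu.shape
    p = np.full(K, 1.0 / K)
    for _ in range(iters):
        u = mu @ p                            # each agent's expected reward
        grad = (mu / u[:, None]).sum(axis=0)  # gradient of sum_n log(mu_n . p)
        p = p * np.exp(lr * grad)
        p /= p.sum()                          # renormalize onto the simplex
    return p

def explore_then_commit(true_mu, explore_per_arm=50, rng=None):
    """Illustrative baseline: pull every arm uniformly, then commit to the
    NSW-maximizing distribution over the empirical means."""
    rng = rng or np.random.default_rng(0)
    N, K = true_mu.shape
    sums, counts = np.zeros((N, K)), np.zeros(K)
    for k in range(K):
        for _ in range(explore_per_arm):
            sums[:, k] += rng.binomial(1, true_mu[:, k])  # Bernoulli rewards
            counts[k] += 1
    return max_nsw_distribution(sums / counts)  # commit to this distribution
```

The paper's algorithms instead learn this distribution online (multi-agent variants of three classic bandit algorithms), with regret measured as lost NSW rather than distance to a single best arm.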
Related papers
- Stochastic Bandits for Egalitarian Assignment [58.33714486693828]
We study EgalMAB, an egalitarian assignment problem in the context of multi-armed bandits.
We design and analyze a UCB-based policy EgalUCB and establish upper bounds on the cumulative regret.
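The exact EgalUCB policy is in the paper; as a rough sketch of how a UCB index can drive an egalitarian (max-min) choice, the snippet below computes UCB1-style indices per (agent, arm) and picks the arm whose minimum index across agents is largest. The bonus form and constant c are generic assumptions, not the paper's.

```python
import numpy as np

def ucb_indices(means, pulls, t, c=2.0):
    """UCB1-style optimistic index per (agent, arm); EgalUCB's bonus may differ."""
    return means + np.sqrt(c * np.log(t) / np.maximum(pulls, 1))

def egalitarian_choice(means, pulls, t):
    """Pick the arm whose worst-off agent's optimistic utility is largest,
    i.e., a max-min (egalitarian) criterion over UCB indices."""
    idx = ucb_indices(means, pulls, t)      # shape (N agents, K arms)
    return int(np.argmax(idx.min(axis=0)))
```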
arXiv Detail & Related papers (2024-10-08T09:49:47Z)
- Bridging Rested and Restless Bandits with Graph-Triggering: Rising and Rotting [67.1631453378926]
Graph-Triggered Bandits is a framework that generalizes rested and restless bandits.
In this work, we focus on two specific types of monotonic bandits: rising, where the expected reward of an arm grows as the number of triggers increases, and rotting, where the opposite behavior occurs.
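A toy illustration of the two monotone regimes (the exponential forms and rates below are assumptions, not from the paper):

```python
import numpy as np

def rising_mean(n_triggers, mu_max=1.0, rate=0.1):
    # Rising arm: expected reward grows toward mu_max with each trigger.
    return mu_max * (1.0 - np.exp(-rate * n_triggers))

def rotting_mean(n_triggers, mu0=1.0, rate=0.1):
    # Rotting arm: expected reward decays from mu0 with each trigger.
    return mu0 * np.exp(-rate * n_triggers)
```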
arXiv Detail & Related papers (2024-09-09T18:23:07Z)
- A General Framework for Clustering and Distribution Matching with Bandit Feedback [81.50716021326194]
We develop a general framework for clustering and distribution matching problems with bandit feedback.
We derive a non-asymptotic lower bound on the average number of arm pulls for any online algorithm with an error probability not exceeding $\delta$.
arXiv Detail & Related papers (2024-09-08T12:19:12Z)
- Multi-agent Multi-armed Bandits with Stochastic Sharable Arm Capacities [69.34646544774161]
We formulate a new variant of the multi-player multi-armed bandit (MAB) model, which captures the arrival of requests to each arm and the policy for allocating requests to players.
The challenge is to design a distributed learning algorithm such that players select arms according to the optimal arm-pulling profile.
We design an iterative distributed algorithm which guarantees that players reach a consensus on the optimal arm-pulling profile in only M rounds.
arXiv Detail & Related papers (2024-08-20T13:57:00Z)
- Best Arm Identification in Batched Multi-armed Bandit Problems [3.908867017036043]
The multi-armed bandit problem arises in many real-life scenarios where arms must be sampled in batches.
We introduce a general linear programming framework that can incorporate objectives of different theoretical settings in best arm identification.
We demonstrate by numerical studies that the algorithm also has good performance compared to certain UCB-type or Thompson sampling methods.
arXiv Detail & Related papers (2023-12-21T14:16:38Z) - Bandit Social Learning: Exploration under Myopic Behavior [58.75758600464338]
We study social learning dynamics motivated by reviews on online platforms.
Agents collectively follow a simple multi-armed bandit protocol, but each agent acts myopically, without regard to exploration.
We derive stark learning failures for any such behavior, and provide matching positive results.
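A minimal simulation of the myopic dynamic (two Bernoulli arms; all constants are illustrative): each arriving agent greedily pulls the arm with the higher empirical mean, so a few unlucky early draws can lock the population onto the worse arm.

```python
import numpy as np

def myopic_social_learning(p=(0.6, 0.4), n_agents=1000, seed=0):
    """Each arriving agent greedily pulls the arm with the best empirical
    mean so far; with no exploration, early unlucky draws on the better arm
    can starve it forever (a learning failure)."""
    rng = np.random.default_rng(seed)
    sums, counts = np.ones(2) * 0.5, np.ones(2)  # one fictitious pull per arm
    for _ in range(n_agents):
        arm = int(np.argmax(sums / counts))      # myopic: exploit only
        sums[arm] += rng.binomial(1, p[arm])
        counts[arm] += 1
    return counts - 1                            # pulls per arm
```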
arXiv Detail & Related papers (2023-02-15T01:57:57Z) - Doubly Adversarial Federated Bandits [7.23389716633927]
We study a new non-stochastic federated multi-armed bandit problem with multiple agents collaborating via a communication network.
Our algorithm gives a positive answer to an open question posed by Cesa-Bianchi et al.
arXiv Detail & Related papers (2023-01-22T22:36:43Z) - Best Arm Identification under Additive Transfer Bandits [49.69203462561861]
We study the setting in which, despite the means being unknown, there is a known additive relationship between the source and target MAB instances.
We propose and theoretically analyze an LUCB-style algorithm to identify an $\epsilon$-optimal target arm with high probability.
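A rough sketch of how a known additive shift lets source samples inform target estimates (the pooling rule, confidence radius, and stopping test below are generic LUCB-style assumptions, not the paper's exact algorithm): shift each source sample by the known offset and pool it with target samples before forming confidence bounds.

```python
import numpy as np

def pooled_mean(src_sum, src_n, tgt_sum, tgt_n, shift):
    """Each source sample, shifted by the known offset, acts as a target sample."""
    return (src_sum + src_n * shift + tgt_sum) / (src_n + tgt_n)

def confidence_radius(n, t, c=2.0):
    return np.sqrt(c * np.log(t) / np.maximum(n, 1))

def lucb_eps_stop(means, n, t, eps):
    """LUCB-style stopping: halt once the empirical-best arm's lower bound
    is within eps of every rival's upper bound."""
    r = confidence_radius(n, t)
    best = int(np.argmax(means))
    return means[best] - r[best] >= np.delete(means + r, best).max() - eps
```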
arXiv Detail & Related papers (2021-12-08T02:20:18Z) - Multi-Armed Bandits with Dependent Arms [18.81667618369821]
We study a variant of the classical multi-armed bandit problem (MABP), which we call Multi-Armed Bandits with Dependent Arms.
Multiple arms are grouped together to form a cluster, and the reward distributions of arms belonging to the same cluster are known functions of an unknown parameter that is a characteristic of the cluster.
We develop learning algorithms based on the UCB principle that exploit these additional side observations while managing the exploration-exploitation trade-off.
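As a simplified instance of the idea (the linear link functions below are an assumption for illustration, not the paper's model): if each arm's mean in a cluster is a known function of a shared parameter, every pull anywhere in the cluster refines one shared estimate, so confidence bounds shrink at the cluster level rather than per arm.

```python
import numpy as np

class ClusterUCB:
    """Arms in a cluster share an unknown parameter theta; arm k's mean is
    a[k] * theta + b[k] with known a, b (an assumed linear link). Every pull
    anywhere in the cluster refines the single shared estimate of theta."""
    def __init__(self, a, b):
        self.a, self.b = np.asarray(a, float), np.asarray(b, float)
        self.theta_sum, self.n = 0.0, 0

    def update(self, arm, reward):
        # Invert the known link to get an unbiased sample of theta.
        self.theta_sum += (reward - self.b[arm]) / self.a[arm]
        self.n += 1

    def ucb(self, t, c=2.0):
        theta_hat = self.theta_sum / max(self.n, 1)
        radius = np.sqrt(c * np.log(t) / max(self.n, 1))
        # Bounds shrink with the cluster-level count, not per-arm counts.
        return self.a * theta_hat + self.b + np.abs(self.a) * radius
```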
arXiv Detail & Related papers (2020-10-13T14:00:19Z)
- Generic Outlier Detection in Multi-Armed Bandit [44.11480686973274]
We propose a novel pulling algorithm named GOLD to identify such generic outlier arms.
In the experiments conducted on both synthetic and real-world data sets, the proposed algorithm achieves 98% accuracy.
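GOLD's pulling strategy is described in the paper; the snippet below only illustrates a generic outlier criterion one might apply to estimated arm means (the k-sigma rule is an assumption): flag arms whose empirical mean sits far above that of the remaining arms.

```python
import numpy as np

def outlier_arms(est_means, k=2.0):
    """Flag arm i as an outlier if its estimated mean exceeds the mean of
    the remaining arms by more than k of their standard deviations."""
    est_means = np.asarray(est_means, float)
    flagged = []
    for i, m in enumerate(est_means):
        rest = np.delete(est_means, i)
        if m > rest.mean() + k * rest.std():
            flagged.append(i)
    return flagged
```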
arXiv Detail & Related papers (2020-07-14T18:42:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences arising from its use.