Simultaneously Achieving Group Exposure Fairness and Within-Group
Meritocracy in Stochastic Bandits
- URL: http://arxiv.org/abs/2402.05575v1
- Date: Thu, 8 Feb 2024 11:19:58 GMT
- Title: Simultaneously Achieving Group Exposure Fairness and Within-Group
Meritocracy in Stochastic Bandits
- Authors: Subham Pokhriyal, Shweta Jain, Ganesh Ghalme, Swapnil Dhamal and Sujit
Gujar
- Abstract summary: We propose Bi-Level Fairness, which considers two levels of fairness.
At the first level, Bi-Level Fairness guarantees a certain minimum exposure to each group.
At the second level, meritocratic fairness ensures that each arm is pulled according to its merit within its group.
- Score: 9.61116193836159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing approaches to fairness in stochastic multi-armed bandits (MAB)
primarily focus on exposure guarantee to individual arms. When arms are
naturally grouped by certain attribute(s), we propose Bi-Level Fairness, which
considers two levels of fairness. At the first level, Bi-Level Fairness
guarantees a certain minimum exposure to each group. To address the unbalanced
allocation of pulls to individual arms within a group, we consider meritocratic
fairness at the second level, which ensures that each arm is pulled according
to its merit within the group. Our work shows that a UCB-based
algorithm can be adapted to achieve Bi-Level Fairness by (i) providing anytime
Group Exposure Fairness guarantees and (ii) ensuring individual-level Meritocratic
Fairness within each group. We first show that one can decompose regret bounds
Fairness within each group. We first show that one can decompose regret bounds
into two components: (a) regret due to anytime group exposure fairness and (b)
regret due to meritocratic fairness within each group. Our proposed algorithm
BF-UCB balances these two regrets optimally to achieve the upper bound of
$O(\sqrt{T})$ on regret; $T$ being the stopping time. With the help of
simulated experiments, we further show that BF-UCB achieves sub-linear regret;
provides better group and individual exposure guarantees compared to existing
algorithms; and does not result in a significant drop in reward with respect to
the UCB algorithm, which does not impose any fairness constraint.
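The two-level mechanism described above can be illustrated with a short simulation. The sketch below is not the authors' BF-UCB implementation; it is a minimal, hypothetical version of the idea, assuming Bernoulli rewards, a fixed arm-to-group assignment, and a single minimum exposure fraction `min_frac` shared by all groups. Level one serves any group that has fallen below its exposure quota; level two picks the arm by a standard UCB index within the chosen candidate set.

```python
import math
import random

def bi_level_fair_ucb(mu, groups, min_frac, T, seed=0):
    """Hypothetical sketch of a UCB rule with per-group minimum exposure.

    mu       : true Bernoulli means of each arm (used only to simulate rewards)
    groups   : list of lists of arm indices, one inner list per group
    min_frac : minimum fraction of pulls each group must receive (anytime)
    T        : horizon
    """
    rng = random.Random(seed)
    K = len(mu)
    pulls = [0] * K
    rewards = [0.0] * K
    group_pulls = [0] * len(groups)
    group_of = {a: g for g, arms in enumerate(groups) for a in arms}

    def ucb(a, t):
        # Unpulled arms get an infinite index so they are tried first.
        if pulls[a] == 0:
            return float("inf")
        return rewards[a] / pulls[a] + math.sqrt(2 * math.log(t + 1) / pulls[a])

    for t in range(1, T + 1):
        # Level 1: if any group is below its exposure quota, serve the
        # most under-served one; otherwise all arms are candidates.
        starved = [g for g in range(len(groups))
                   if group_pulls[g] < min_frac * t]
        if starved:
            g = min(starved, key=lambda g: group_pulls[g])
            candidates = groups[g]
        else:
            candidates = range(K)
        # Level 2: meritocratic (UCB) choice within the candidate set.
        a = max(candidates, key=lambda a: ucb(a, t))
        r = 1.0 if rng.random() < mu[a] else 0.0
        pulls[a] += 1
        rewards[a] += r
        group_pulls[group_of[a]] += 1
    return pulls, group_pulls
```

In a run with one strong group and one weak group, the weak group still receives roughly its `min_frac` share of pulls, while the remaining pulls flow to the higher-mean group, mirroring the exposure-versus-merit trade-off the abstract describes.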
Related papers
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z) - Combinatorial Stochastic-Greedy Bandit [79.1700188160944]
We propose a novel Stochastic-Greedy Bandit (SGB) algorithm for multi-armed bandit problems when no extra information other than the joint reward of the selected set of $n$ arms at each time $t \in [T]$ is observed.
SGB adopts an optimized-explore-then-commit approach and is specifically designed for scenarios with a large set of base arms.
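The explore-then-commit pattern mentioned above is easy to illustrate in its plain (non-combinatorial) form. The sketch below is not SGB itself; it is a hypothetical, single-arm-per-round version, assuming Bernoulli rewards and a fixed number of uniform exploration rounds per arm before committing to the empirically best arm.

```python
import random

def explore_then_commit(mu, explore_rounds, T, seed=0):
    """Hypothetical sketch of a plain explore-then-commit bandit.

    mu             : true Bernoulli means of each arm (simulation only)
    explore_rounds : number of uniform exploration pulls per arm
    T              : horizon
    """
    rng = random.Random(seed)
    K = len(mu)
    pulls = [0] * K
    rewards = [0.0] * K
    for t in range(T):
        if t < explore_rounds * K:
            a = t % K  # exploration phase: pull each arm in round-robin
        else:
            # commit phase: pull the empirically best arm
            a = max(range(K), key=lambda i: rewards[i] / pulls[i])
        r = 1.0 if rng.random() < mu[a] else 0.0
        pulls[a] += 1
        rewards[a] += r
    return pulls
```

With a clear gap between the arm means, nearly all post-exploration pulls go to the best arm; SGB's contribution lies in optimizing this trade-off for large sets of base arms under combinatorial (joint-reward) feedback.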
arXiv Detail & Related papers (2023-12-13T11:08:25Z) - Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z) - Contextual bandits with concave rewards, and an application to fair
ranking [108.48223948875685]
We present the first algorithm with provably vanishing regret for Contextual Bandits with Concave Rewards (CBCR)
We derive a novel reduction from the CBCR regret to the regret of a scalar-reward problem.
Motivated by fairness in recommendation, we describe a special case of CBCR with rankings and fairness-aware objectives.
arXiv Detail & Related papers (2022-10-18T16:11:55Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA)
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Achieving the Pareto Frontier of Regret Minimization and Best Arm
Identification in Multi-Armed Bandits [91.8283876874947]
We design and analyze the BoBW-lil'UCB$(\gamma)$ algorithm.
We show that no algorithm can simultaneously perform optimally for both the regret minimization (RM) and best arm identification (BAI) objectives.
We also show that BoBW-lil'UCB$(\gamma)$ outperforms a competitor in terms of the time complexity and the regret.
arXiv Detail & Related papers (2021-10-16T17:52:32Z) - A Regret bound for Non-stationary Multi-Armed Bandits with Fairness
Constraints [7.716156977428555]
We present a new algorithm called Fair Upper Confidence Bound with Exploration (Fair-UCBe) for solving a slowly varying $k$-armed bandit problem.
We show that the performance of our algorithm in the non-stationary case approaches that of its stationary counterpart as the degree of non-stationarity tends to zero.
arXiv Detail & Related papers (2020-12-24T18:12:01Z) - Minimax Group Fairness: Algorithms and Experiments [18.561824632836405]
We provide provably convergent oracle-efficient learning algorithms for minimax group fairness.
Our algorithms apply to both regression and classification settings.
We show empirical cases in which minimax fairness is strictly and strongly preferable to equal outcome notions.
arXiv Detail & Related papers (2020-11-05T21:42:56Z) - Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with $p$-norm objective and individual fairness constraints with provable approximation guarantee.
arXiv Detail & Related papers (2020-06-22T20:02:09Z) - Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.