Simultaneously Achieving Group Exposure Fairness and Within-Group
Meritocracy in Stochastic Bandits
- URL: http://arxiv.org/abs/2402.05575v1
- Date: Thu, 8 Feb 2024 11:19:58 GMT
- Title: Simultaneously Achieving Group Exposure Fairness and Within-Group
Meritocracy in Stochastic Bandits
- Authors: Subham Pokhriyal, Shweta Jain, Ganesh Ghalme, Swapnil Dhamal and Sujit
Gujar
- Abstract summary: We propose Bi-Level Fairness, which considers two levels of fairness.
At the first level, Bi-Level Fairness guarantees a certain minimum exposure to each group.
At the second level, it ensures meritocratic fairness, i.e., that each arm is pulled according to its merit within its group.
- Score: 9.61116193836159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing approaches to fairness in stochastic multi-armed bandits (MAB)
primarily focus on exposure guarantees for individual arms. When arms are
naturally grouped by certain attribute(s), we propose Bi-Level Fairness, which
considers two levels of fairness. At the first level, Bi-Level Fairness
guarantees a certain minimum exposure to each group. To address the unbalanced
allocation of pulls to individual arms within a group, we consider meritocratic
fairness at the second level, which ensures that each arm is pulled according
to its merit within the group. Our work shows that a UCB-based algorithm can be
adapted to achieve Bi-Level Fairness by providing (i) anytime Group Exposure
Fairness guarantees and (ii) individual-level Meritocratic Fairness within each
group. We first show that the regret bound decomposes into two components: (a)
regret due to anytime group exposure fairness and (b) regret due to meritocratic
fairness within each group. Our proposed algorithm BF-UCB balances these two
regrets optimally to achieve an $O(\sqrt{T})$ upper bound on regret, where $T$
is the stopping time. Through simulated experiments, we further show that
BF-UCB achieves sub-linear regret, provides better group and individual
exposure guarantees than existing algorithms, and does not incur a significant
drop in reward relative to the UCB algorithm, which imposes no fairness
constraint.
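The regret decomposition mentioned in the abstract can be written schematically as follows; the exact definitions of the two components are given in the paper, so treat this form as an assumption rather than a quotation:

```latex
% Schematic only: the paper defines the two regret components precisely.
\[
  R(T) \;=\; \underbrace{R_{\text{exp}}(T)}_{\text{anytime group exposure}}
       \;+\; \underbrace{R_{\text{mer}}(T)}_{\text{within-group meritocracy}}
       \;=\; O\!\left(\sqrt{T}\right),
\]
% where $T$ is the stopping time and BF-UCB balances the two terms optimally.
```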
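The abstract describes the mechanism but not the algorithm itself, so the following is a minimal Python sketch of one way to combine the two levels: a group-exposure deficit check followed by a within-group UCB choice. All names (`bi_level_ucb_sketch`, `min_share`), the Bernoulli reward model, and the deficit rule are assumptions for illustration, not the authors' BF-UCB.

```python
import math
import random

def ucb_index(t, pulls, total):
    """Standard UCB1 index; unpulled arms get +inf so they are tried first."""
    if pulls == 0:
        return float("inf")
    return total / pulls + math.sqrt(2.0 * math.log(t) / pulls)

def bi_level_ucb_sketch(group_means, min_share, horizon, seed=0):
    """Hypothetical two-level allocation in the spirit of the abstract
    (NOT the authors' BF-UCB). Level 1: any group below its minimum
    exposure share is served first. Level 2: inside the chosen group,
    arms compete via UCB indices, so pulls roughly track empirical merit.
    Rewards are assumed Bernoulli with means given by group_means."""
    rng = random.Random(seed)
    n_groups = len(group_means)
    pulls = [[0] * len(g) for g in group_means]
    totals = [[0.0] * len(g) for g in group_means]
    group_pulls = [0] * n_groups

    for t in range(1, horizon + 1):
        # Level 1: exposure deficit of each group at time t.
        deficit = [min_share[g] * t - group_pulls[g] for g in range(n_groups)]
        if max(deficit) > 0:
            g = max(range(n_groups), key=lambda j: deficit[j])
        else:
            # No group is starved: pick the group holding the best UCB arm.
            g = max(range(n_groups), key=lambda j: max(
                ucb_index(t, pulls[j][a], totals[j][a])
                for a in range(len(group_means[j]))))
        # Level 2: meritocratic (UCB) choice within group g.
        a = max(range(len(group_means[g])),
                key=lambda i: ucb_index(t, pulls[g][i], totals[g][i]))
        r = 1.0 if rng.random() < group_means[g][a] else 0.0
        pulls[g][a] += 1
        totals[g][a] += r
        group_pulls[g] += 1
    return pulls

# Example: two groups, 30% minimum exposure share each, 5000 rounds.
allocation = bi_level_ucb_sketch([[0.9, 0.6], [0.45, 0.3]],
                                 [0.3, 0.3], horizon=5000)
```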
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Stochastic Bandits for Egalitarian Assignment [58.33714486693828]
We study EgalMAB, an egalitarian assignment problem in the context of multi-armed bandits.
We design and analyze a UCB-based policy EgalUCB and establish upper bounds on the cumulative regret.
arXiv Detail & Related papers (2024-10-08T09:49:47Z)
- Best Arm Identification with Minimal Regret [55.831935724659175]
The problem elegantly amalgamates regret minimization and best arm identification (BAI).
The agent's goal is to identify the best arm with a prescribed confidence level.
The proposed Double KL-UCB algorithm achieves optimality as the confidence level tends to zero.
arXiv Detail & Related papers (2024-09-27T16:46:02Z)
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC), which aims at two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level (see the coverage sketch after this list).
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Achieving the Pareto Frontier of Regret Minimization and Best Arm Identification in Multi-Armed Bandits [91.8283876874947]
We design and analyze the BoBW-lil'UCB$(\gamma)$ algorithm.
We show that no algorithm can simultaneously perform optimally for both the regret minimization (RM) and BAI objectives.
We also show that BoBW-lil'UCB$(\gamma)$ outperforms a competitor in terms of time complexity and regret.
arXiv Detail & Related papers (2021-10-16T17:52:32Z)
- A Regret bound for Non-stationary Multi-Armed Bandits with Fairness Constraints [7.716156977428555]
We present a new algorithm, Fair Upper Confidence Bound with Exploration (Fair-UCBe), for solving a slowly varying $k$-armed bandit problem.
We show that the performance of our algorithm in the non-stationary case approaches that of its stationary counterpart as the degree of non-stationarity tends to zero.
arXiv Detail & Related papers (2020-12-24T18:12:01Z)
- Minimax Group Fairness: Algorithms and Experiments [18.561824632836405]
We provide provably convergent oracle-efficient learning algorithms for minimax group fairness.
Our algorithms apply to both regression and classification settings.
We show empirical cases in which minimax fairness is strictly and strongly preferable to equal-outcome notions of fairness.
arXiv Detail & Related papers (2020-11-05T21:42:56Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
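For the Equal Opportunity of Coverage entry above, property (1) can be illustrated in a few lines of code. The sketch below computes unconditional per-group coverage rates of prediction intervals; the actual EOC notion additionally conditions on groups having similar outcomes, which is omitted here for brevity, and all names are hypothetical rather than taken from the paper:

```python
from collections import defaultdict

def group_coverage(intervals, y_true, groups):
    """Per-group fraction of outcomes falling inside their prediction
    interval. Illustrative only; not the EOC paper's implementation."""
    hits, counts = defaultdict(int), defaultdict(int)
    for (lo, hi), y, g in zip(intervals, y_true, groups):
        counts[g] += 1
        hits[g] += int(lo <= y <= hi)
    return {g: hits[g] / counts[g] for g in counts}

# EOC, informally: these per-group rates should be close to one another,
# while the pooled rate stays at the nominal level (e.g., 0.9).
rates = group_coverage([(0.1, 0.9), (0.2, 0.7), (0.0, 0.5)],
                       [0.5, 0.8, 0.4], ["a", "a", "b"])
```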