Robust Optimization for Fairness with Noisy Protected Groups
- URL: http://arxiv.org/abs/2002.09343v3
- Date: Tue, 10 Nov 2020 05:37:29 GMT
- Title: Robust Optimization for Fairness with Noisy Protected Groups
- Authors: Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya
Gupta, Michael I. Jordan
- Abstract summary: We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
- Score: 85.13255550021495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many existing fairness criteria for machine learning involve equalizing some
metric across protected groups such as race or gender. However, practitioners
trying to audit or enforce such group-based criteria can easily face the
problem of noisy or biased protected group information. First, we study the
consequences of naively relying on noisy protected group labels: we provide an
upper bound on the fairness violations on the true groups $G$ when the fairness
criteria are satisfied on noisy groups $\hat{G}$. Second, we introduce two new
approaches using robust optimization that, unlike the naive approach of only
relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true
protected groups $G$ while minimizing a training objective. We provide
theoretical guarantees that one such approach converges to an optimal feasible
solution. Using two case studies, we show empirically that the robust
approaches achieve better true group fairness guarantees than the naive
approach.
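To make the gap between the two notions concrete, here is a minimal numerical sketch (my own construction, not the paper's algorithm) of why a constraint enforced on noisy groups $\hat{G}$ must be tightened before it says anything about the true groups $G$. The `noise_margin` below is a hypothetical placeholder for the slack that the paper's upper bound would supply from an assumed group-label noise rate.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_group = rng.integers(0, 2, size=n)                   # true protected group G
flip = rng.random(n) < 0.2                                 # assumed 20% group-label noise
noisy_group = np.where(flip, 1 - true_group, true_group)   # observed noisy groups G_hat
pred = rng.random(n) < 0.5 + 0.1 * true_group              # some classifier's positive decisions

def dp_violation(pred, group):
    """Demographic-parity gap: |P(pred=1 | group=0) - P(pred=1 | group=1)|."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

gamma = 0.05          # fairness slack we actually want on the true groups G
noise_margin = 0.04   # hypothetical correction; the paper derives a bound from the noise model

print("violation measured on noisy groups:", round(dp_violation(pred, noisy_group), 3))
print("violation on the true groups:      ", round(dp_violation(pred, true_group), 3))
print("tightened tolerance to enforce on noisy groups:", gamma - noise_margin)
```

In the paper's robust approaches this correction is handled inside the constrained optimization itself rather than as a fixed post-hoc margin, which is what allows the true-group guarantee while still minimizing the training objective.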
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z) - Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z) - Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
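As a quick illustration of what the two EOC properties measure, the snippet below (an assumed setup with synthetic data and fixed-width prediction intervals, not the paper's method) computes the overall coverage rate and the per-group coverage rates whose gap EOC seeks to close.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)
y = rng.normal(size=n)
# hypothetical regressor: noisier predictions for group 1, fixed-width intervals
pred = y + rng.normal(scale=0.5 + 0.5 * group, size=n)
half_width = 1.0
covered = np.abs(y - pred) <= half_width

overall = covered.mean()
per_group = [covered[group == g].mean() for g in (0, 1)]
print(f"overall coverage: {overall:.3f}, "
      f"per-group coverage: {per_group[0]:.3f} vs {per_group[1]:.3f}")
```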
arXiv Detail & Related papers (2023-11-03T21:19:59Z) - Auditing Predictive Models for Intersectional Biases [1.9346186297861747]
Conditional Bias Scan (CBS) is a flexible auditing framework for detecting intersectional biases in classification models.
CBS identifies the subgroup for which there is the most significant bias against the protected class, as compared to the equivalent subgroup in the non-protected class.
We show that this methodology can detect previously unidentified intersectional and contextual biases in the COMPAS pre-trial risk assessment tool.
arXiv Detail & Related papers (2023-06-22T17:32:12Z) - FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods that enforce different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z) - Distributionally Robust Optimization with Probabilistic Group [24.22720998340643]
We propose a novel framework PG-DRO for distributionally robust optimization.
Key to our framework is soft group membership instead of hard group annotations.
Our framework accommodates samples with group membership ambiguity, offering stronger flexibility and generality than the prior art.
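The following is a minimal sketch of the soft-membership idea under simple assumptions (random losses and Dirichlet-distributed memberships); it shows how per-group losses, and hence a worst-group objective, can be formed from membership probabilities instead of hard group labels, but it is not the PG-DRO algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1_000, 3
losses = rng.exponential(size=n)                  # per-example losses from some model
membership = rng.dirichlet(np.ones(k), size=n)    # soft membership over k groups (rows sum to 1)

# Membership-weighted average loss for each group.
group_loss = membership.T @ losses / membership.sum(axis=0)

# A DRO-style objective would focus training on the worst-off group.
worst = int(np.argmax(group_loss))
print("per-group losses:", np.round(group_loss, 3), "worst group:", worst)
```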
arXiv Detail & Related papers (2023-03-10T09:31:44Z) - Re-weighting Based Group Fairness Regularization via Classwise Robust
Optimization [30.089819400033985]
We propose a principled method, dubbed FairDRO, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective.
We develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group.
Our experiments show that FairDRO is scalable and easily adaptable to diverse applications.
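The summary above describes an iterative re-weighting loop; the sketch below illustrates that generic pattern with a standard exponentiated-gradient update on group weights. The update rule and the stand-in group losses are my assumptions, not FairDRO's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
k, eta = 4, 0.5
weights = np.ones(k) / k                          # one weight per group

for step in range(10):
    # stand-in for per-group losses measured after each model update;
    # group 2 is made persistently harder, so its weight should grow
    group_losses = rng.uniform(0.2, 1.0, size=k) + np.array([0.0, 0.0, 0.5, 0.0])
    weights *= np.exp(eta * group_losses)         # upweight poorly served groups
    weights /= weights.sum()
    # the re-weighted objective float(weights @ group_losses) would drive the next model update

print("final group weights:", np.round(weights, 3))
```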
arXiv Detail & Related papers (2023-03-01T12:00:37Z) - Just Train Twice: Improving Group Robustness without Training Group
Information [101.84574184298006]
Standard training via empirical risk minimization can produce models that achieve high accuracy on average but low accuracy on certain groups.
Prior approaches that achieve high worst-group accuracy, like group distributionally robust optimization (group DRO) require expensive group annotations for each training point.
We propose a simple two-stage approach, JTT, that first trains a standard ERM model for several epochs, and then trains a second model that upweights the training examples that the first model misclassified.
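A compact sketch of that two-stage recipe, using synthetic data and scikit-learn's LogisticRegression as a stand-in for the models in the paper; the upweighting factor `lam` is a hypothetical hyperparameter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2_000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Stage 1: standard ERM model.
erm = LogisticRegression().fit(X, y)
errors = erm.predict(X) != y                      # the error set identified by stage 1

# Stage 2: retrain with the error set upweighted.
lam = 20.0                                        # hypothetical upweighting factor
sample_weight = np.where(errors, lam, 1.0)
jtt = LogisticRegression().fit(X, y, sample_weight=sample_weight)

print("stage-1 training error rate:", round(float(errors.mean()), 3))
```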
arXiv Detail & Related papers (2021-07-19T17:52:32Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)