Fair Classification with Noisy Protected Attributes: A Framework with
Provable Guarantees
- URL: http://arxiv.org/abs/2006.04778v3
- Date: Tue, 16 Feb 2021 17:21:58 GMT
- Title: Fair Classification with Noisy Protected Attributes: A Framework with
Provable Guarantees
- Authors: L. Elisa Celis and Lingxiao Huang and Vijay Keswani and Nisheeth K.
Vishnoi
- Abstract summary: We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes.
Our framework can be employed with a very general class of linear and linear-fractional fairness constraints.
We show that our framework can be used to attain either statistical rate or false positive rate fairness guarantees with a minimal loss in accuracy, even when the noise is large.
- Score: 43.326827444321935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an optimization framework for learning a fair classifier in the
presence of noisy perturbations in the protected attributes. Compared to prior
work, our framework can be employed with a very general class of linear and
linear-fractional fairness constraints, can handle multiple, non-binary
protected attributes, and outputs a classifier that comes with provable
guarantees on both accuracy and fairness. Empirically, we show that our
framework can be used to attain either statistical rate or false positive rate
fairness guarantees with a minimal loss in accuracy, even when the noise is
large, in two real-world datasets.
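Below is a minimal, illustrative sketch (not the paper's actual algorithm or code) of the core idea: enforce a group-fairness requirement computed from noisy protected attributes after correcting for a known noise rate. It assumes a single binary protected attribute observed through independent symmetric flips with rate flip_prob, synthetic data, and a statistical-rate requirement enforced as a soft penalty; the paper's framework is more general (multiple non-binary attributes, linear and linear-fractional constraints, and provable accuracy/fairness guarantees).

```python
# Sketch only: noise-corrected statistical-rate penalty for a logistic classifier.
# All data, parameters, and the penalty formulation are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, true protected attribute z,
# and a noisy version z_noisy produced by independent symmetric flips.
n, d, flip_prob = 2000, 5, 0.2
z = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d)) + 0.8 * z[:, None]
y = (X @ rng.normal(size=d) + 0.5 * z + rng.normal(size=n) > 0).astype(float)
z_noisy = np.where(rng.random(n) < flip_prob, 1 - z, z)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def denoised_group_rates(p, z_obs, rho):
    # Under independent symmetric flips with rate rho, the expected per-observed-group
    # sums and counts are a known 2x2 mixture of the true-group sums and counts;
    # invert that mixture to estimate the true group-wise mean predictions.
    M = np.array([[1 - rho, rho], [rho, 1 - rho]])
    sums = np.array([p[z_obs == g].sum() for g in (0, 1)])
    counts = np.array([(z_obs == 0).sum(), (z_obs == 1).sum()], dtype=float)
    return np.linalg.solve(M, sums) / np.linalg.solve(M, counts)

def objective(w, lam=5.0):
    p = sigmoid(X @ w)
    log_loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Soft statistical-rate requirement on the noise-corrected group rates.
    r = denoised_group_rates(p, z_noisy, flip_prob)
    return log_loss + lam * (r[0] - r[1]) ** 2

w_hat = minimize(objective, np.zeros(d), method="L-BFGS-B").x
p_hat = sigmoid(X @ w_hat)
print("accuracy:", np.mean((p_hat > 0.5) == y))
print("rate gap on true groups:", abs(p_hat[z == 0].mean() - p_hat[z == 1].mean()))
```

The denoising step above is the simplest plausible correction; the paper instead builds the noise model directly into linear and linear-fractional constraints and proves guarantees on both the accuracy and the fairness of the resulting classifier.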
Related papers
- Towards Fairness and Privacy: A Novel Data Pre-processing Optimization Framework for Non-binary Protected Attributes [0.0]
This work presents a pre-processing framework for improving fairness by debiasing datasets containing a (non-binary) protected attribute.
It does so by finding a data subset that minimizes a chosen discrimination measure.
In contrast to prior work, the framework offers a high degree of flexibility, as it is metric- and task-agnostic.
arXiv Detail & Related papers (2024-10-01T16:17:43Z)
- When Fair Classification Meets Noisy Protected Attributes [8.362098382773265]
This is the first head-to-head study of fair classification algorithms comparing attribute-reliant, noise-tolerant, and attribute-blind approaches.
Our study reveals that attribute-blind and noise-tolerant fair classifiers can potentially achieve a similar level of performance to attribute-reliant algorithms.
arXiv Detail & Related papers (2023-07-06T21:38:18Z)
- Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
arXiv Detail & Related papers (2023-02-21T15:17:13Z)
- Fair Ranking with Noisy Protected Attributes [25.081136190260015]
We study the fair-ranking problem under a model where socially-salient attributes of items are randomly and independently perturbed.
We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about perturbations in socially-salient attributes.
arXiv Detail & Related papers (2022-11-30T15:22:28Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Multi-Class Classification from Single-Class Data with Confidences [90.48669386745361]
We propose an empirical risk minimization framework that is loss-/model-/optimizer-independent.
We show that our method can be Bayes-consistent with a simple modification even if the provided confidences are highly noisy.
arXiv Detail & Related papers (2021-06-16T15:38:13Z)
- Fair Classification with Adversarial Perturbations [35.030329189029246]
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes.
Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.
We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy and any algorithm with better fairness must have lower accuracy.
arXiv Detail & Related papers (2021-06-10T17:56:59Z)
- Mitigating Bias in Set Selection with Noisy Protected Attributes [16.882719401742175]
We show that, in the presence of noisy protected attributes, attempting to increase fairness without accounting for the noise can in fact decrease the fairness of the result!
We formulate a "denoised" selection problem that works for a large class of fairness metrics.
Our empirical results show that this approach can produce subsets which significantly improve the fairness metrics despite the presence of noisy protected attributes.
arXiv Detail & Related papers (2020-11-09T06:45:15Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.