Fair Ranking with Noisy Protected Attributes
- URL: http://arxiv.org/abs/2211.17067v1
- Date: Wed, 30 Nov 2022 15:22:28 GMT
- Title: Fair Ranking with Noisy Protected Attributes
- Authors: Anay Mehrotra, Nisheeth K. Vishnoi
- Abstract summary: We study the fair-ranking problem under a model where socially-salient attributes of items are randomly and independently perturbed.
We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about perturbations in socially-salient attributes.
- Score: 25.081136190260015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fair-ranking problem, which asks to rank a given set of items to maximize
utility subject to group fairness constraints, has received attention in the
fairness, information retrieval, and machine learning literature. Recent works,
however, observe that errors in socially-salient (including protected)
attributes of items can significantly undermine fairness guarantees of existing
fair-ranking algorithms and raise the problem of mitigating the effect of such
errors. We study the fair-ranking problem under a model where socially-salient
attributes of items are randomly and independently perturbed. We present a
fair-ranking framework that incorporates group fairness requirements along with
probabilistic information about perturbations in socially-salient attributes.
We provide provable guarantees on the fairness and utility attainable by our
framework and show that it is information-theoretically impossible to
significantly beat these guarantees. Our framework works for multiple
non-disjoint attributes and a general class of fairness constraints that
includes proportional and equal representation. Empirically, we observe that,
compared to baselines, our algorithm outputs rankings with higher fairness and
achieves a similar or better fairness-utility trade-off.
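The abstract's core idea, enforcing group-representation constraints in expectation given probabilistic information about the attribute perturbations, can be illustrated with a minimal greedy sketch. This is not the paper's algorithm (which is optimization-based and handles multiple non-disjoint attributes); the symmetric-flip noise model, the function names, and the greedy swap rule here are all illustrative assumptions.

```python
def posterior_minority(observed_minority: bool, flip_p: float, prior: float) -> float:
    """P(true group = minority | observed label), via Bayes' rule under an
    assumed symmetric label-flip probability `flip_p` and minority prior `prior`."""
    if observed_minority:
        num = (1 - flip_p) * prior
        den = num + flip_p * (1 - prior)
    else:
        num = flip_p * prior
        den = num + (1 - flip_p) * (1 - prior)
    return num / den

def fair_top_k(items, k, alpha, flip_p, prior):
    """Greedy top-k selection: maximize utility while the *expected* number
    of minority items in the top-k is at least alpha * k.
    `items` is a list of (utility, observed_minority_label) pairs."""
    scored = [(u, posterior_minority(obs, flip_p, prior)) for u, obs in items]
    scored.sort(key=lambda t: -t[0])       # start from the utility-optimal top-k
    chosen, rest = scored[:k], scored[k:]
    floor = alpha * k
    # Candidates to swap in: highest posterior first, then highest utility.
    rest.sort(key=lambda t: (-t[1], -t[0]))
    while sum(p for _, p in chosen) < floor and rest:
        cand = rest.pop(0)
        # Swap out the chosen item with the lowest posterior (ties: lowest utility).
        out = min(range(len(chosen)), key=lambda i: (chosen[i][1], chosen[i][0]))
        if cand[1] <= chosen[out][1]:
            break  # no remaining swap can raise the expected minority count
        chosen[out] = cand
    return chosen
```

Because the constraint is imposed on the expected group count under the posterior, noisier observed labels (larger `flip_p`) push posteriors toward the prior, which is one way a framework can trade off fairness guarantees against attribute noise.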
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Intrinsic Fairness-Accuracy Tradeoffs under Equalized Odds [8.471466670802817]
We study the tradeoff between fairness and accuracy under the statistical notion of equalized odds.
We present a new upper bound on the accuracy as a function of the fairness budget.
Our results show that achieving high accuracy subject to a low-bias could be fundamentally limited based on the statistical disparity across the groups.
arXiv Detail & Related papers (2024-05-12T23:15:21Z)
- Understanding Fairness Surrogate Functions in Algorithmic Fairness [21.555040357521907]
We show that there is a surrogate-fairness gap between the fairness definition and the fairness surrogate function.
We elaborate a novel and general algorithm called Balanced Surrogate, which iteratively reduces the gap to mitigate unfairness.
arXiv Detail & Related papers (2023-10-17T12:40:53Z)
- When Fair Classification Meets Noisy Protected Attributes [8.362098382773265]
This is the first head-to-head study of fair classification algorithms comparing attribute-reliant, noise-tolerant, and attribute-blind approaches.
Our study reveals that attribute-blind and noise-tolerant fair classifiers can potentially achieve a similar level of performance to attribute-reliant algorithms.
arXiv Detail & Related papers (2023-07-06T21:38:18Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes such as gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Fair Classification with Adversarial Perturbations [35.030329189029246]
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes.
Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.
We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy and any algorithm with better fairness must have lower accuracy.
arXiv Detail & Related papers (2021-06-10T17:56:59Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences of its use.