Metric-Free Individual Fairness in Online Learning
- URL: http://arxiv.org/abs/2002.05474v6
- Date: Sat, 23 Apr 2022 18:33:45 GMT
- Title: Metric-Free Individual Fairness in Online Learning
- Authors: Yahav Bechavod, Christopher Jung, Zhiwei Steven Wu
- Abstract summary: We study an online learning problem subject to the constraint of individual fairness.
We do not assume the similarity measure among individuals is known, nor do we assume that such measure takes a certain parametric form.
We leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure.
- Score: 32.56688029679103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study an online learning problem subject to the constraint of individual
fairness, which requires that similar individuals are treated similarly. Unlike
prior work on individual fairness, we do not assume the similarity measure
among individuals is known, nor do we assume that such measure takes a certain
parametric form. Instead, we leverage the existence of an auditor who detects
fairness violations without enunciating the quantitative measure. In each
round, the auditor examines the learner's decisions and attempts to identify a
pair of individuals that are treated unfairly by the learner. We provide a
general reduction framework that reduces online classification in our model to
standard online classification, which allows us to leverage existing online
learning algorithms to achieve sub-linear regret and number of fairness
violations. Surprisingly, in the stochastic setting where the data are drawn
independently from a distribution, we are also able to establish PAC-style
fairness and accuracy generalization guarantees (Rothblum and Yona [2018]),
despite only having access to a very restricted form of fairness feedback. Our
fairness generalization bound qualitatively matches the uniform convergence
bound of Rothblum and Yona [2018], while also providing a meaningful accuracy
generalization guarantee. Our results resolve an open question by Gillen et al.
[2018] by showing that online learning under an unknown individual fairness
constraint is possible even without assuming a strong parametric form of the
underlying similarity measure.
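Below is a minimal, purely illustrative sketch of one round of the interaction described in the abstract: the learner deploys a randomized policy, the auditor may flag a pair of individuals it deems treated unfairly (without revealing any similarity metric), and the learner passes that feedback to an underlying online classification oracle. The `online_learner` and `auditor` objects and their methods are hypothetical placeholders, not the authors' actual algorithm.

```python
import numpy as np

def run_round(online_learner, auditor, X_t, y_t):
    """One round of the auditor-based protocol (illustrative sketch).

    X_t : features of the individuals arriving this round
    y_t : their true labels, revealed after prediction
    """
    # 1. Learner deploys a (possibly randomized) policy over this round's individuals.
    probs = online_learner.predict_proba(X_t)            # hypothetical oracle API
    preds = (np.random.rand(len(probs)) < probs).astype(int)

    # 2. Auditor inspects the decisions and may report one allegedly mistreated
    #    pair (i, j); it never states the quantitative similarity measure.
    violation = auditor(X_t, probs)                       # returns (i, j) or None

    # 3. Learner suffers classification loss and, schematically, folds the reported
    #    pair back into a standard online classification oracle.
    online_learner.update(X_t, y_t, fairness_pair=violation)

    misclassification = np.mean(preds != y_t)
    return misclassification, violation
```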
Related papers
- Parametric Fairness with Statistical Guarantees [0.46040036610482665]
We extend the concept of Demographic Parity to incorporate distributional properties in predictions, allowing expert knowledge to be used in the fair solution.
We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges.
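As context for the Demographic Parity extension mentioned above, here is a minimal sketch of the standard demographic parity gap, assuming binary predictions and a single sensitive attribute; the paper's distributional, parametric extension goes beyond this simple statistic.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy example
y_pred = np.array([1, 0, 1, 1, 0, 0])
group  = np.array(["a", "a", "a", "b", "b", "b"])
print(demographic_parity_gap(y_pred, group))  # ~0.333
```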
arXiv Detail & Related papers (2023-10-31T14:52:39Z)
- A Universal Unbiased Method for Classification from Aggregate Observations [115.20235020903992]
This paper presents a novel universal method for classification from aggregate observations (CFAO), which holds an unbiased estimator of the classification risk for arbitrary losses.
Our proposed method not only guarantees the risk consistency due to the unbiased risk estimator but also can be compatible with arbitrary losses.
arXiv Detail & Related papers (2023-06-20T07:22:01Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
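One way to picture the "small amount of unbiased data" idea: compare group frequencies in a small unbiased sample with those in the biased training set to estimate per-group retention (drop-out) rates, then reweight. This is a rough illustration only, not the paper's estimator or guarantees.

```python
import numpy as np
from collections import Counter

def estimate_retention_rates(groups_biased, groups_unbiased):
    """Per-group probability of surviving the underrepresentation filter,
    up to a common normalizing constant (illustrative sketch)."""
    biased, unbiased = Counter(groups_biased), Counter(groups_unbiased)
    n_b, n_u = len(groups_biased), len(groups_unbiased)
    # retention(g) is proportional to (share of g in biased data) / (share of g in unbiased data)
    return {g: (biased[g] / n_b) / (unbiased[g] / n_u) for g in unbiased}

def importance_weights(groups_biased, retention):
    """Inverse-retention weights that undo the estimated drop-out."""
    return np.array([1.0 / max(retention[g], 1e-12) for g in groups_biased])
```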
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
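A toy version of the linear-programming idea: choose selection probabilities that maximize expected utility subject to constraints tying together candidates judged similar under uncertainty. The constraint form and parameters below are assumptions for illustration, not the paper's axioms or formulation.

```python
import numpy as np
from scipy.optimize import linprog

utility  = np.array([0.9, 0.8, 0.5])   # expected merit of each candidate
similar  = [(0, 1)]                     # pairs judged similar under uncertainty
capacity = 1.5                          # expected number of available slots
eps      = 0.05                         # allowed probability gap for similar pairs

n = len(utility)
A_ub, b_ub = [], []
A_ub.append(np.ones(n)); b_ub.append(capacity)       # sum_i x_i <= capacity
for i, j in similar:                                  # |x_i - x_j| <= eps
    row = np.zeros(n); row[i], row[j] = 1, -1
    A_ub.append(row);  b_ub.append(eps)
    A_ub.append(-row); b_ub.append(eps)

# Maximize utility @ x  <=>  minimize -utility @ x
res = linprog(-utility, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * n)
print(res.x)   # fair selection probabilities over candidates
```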
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
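FairCOCCO itself is built on normalized cross-covariance operators; as a rough, related illustration, the sketch below computes a plain (unnormalized) HSIC-style kernel dependence score between predictions and a sensitive attribute. It shares the RKHS cross-covariance intuition but is not the FairCOCCO statistic.

```python
import numpy as np

def rbf_kernel(x, gamma=1.0):
    """Gram matrix of an RBF kernel for a 1-D array of values."""
    d = x[:, None] - x[None, :]
    return np.exp(-gamma * d ** 2)

def hsic(pred, sensitive, gamma=1.0):
    """Biased HSIC estimate: trace(K H L H) / (n - 1)^2.
    Near zero when predictions and the sensitive attribute are independent."""
    n = len(pred)
    K = rbf_kernel(np.asarray(pred, float), gamma)
    L = rbf_kernel(np.asarray(sensitive, float), gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```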
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Individually Fair Learning with One-Sided Feedback [15.713330010191092]
We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances.
On each round, $k$ instances arrive and receive classification outcomes according to a randomized policy deployed by the learner.
We then construct an efficient reduction from our problem of online learning with one-sided feedback and a panel reporting fairness violations to the contextual semi-bandit problem.
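The following is a schematic of the one-sided feedback interaction described above, assuming a hypothetical `policy` object with `predict_proba` and `update` methods; the paper's actual reduction to contextual semi-bandits is more involved.

```python
import numpy as np

def one_sided_round(policy, X_round, true_labels):
    """One round with k arriving instances; labels are revealed only for
    positively classified instances (illustrative sketch)."""
    probs = policy.predict_proba(X_round)                  # randomized policy
    decisions = (np.random.rand(len(probs)) < probs).astype(int)

    # One-sided feedback: the learner sees y_i only where decision_i == 1.
    observed = {i: true_labels[i] for i in np.flatnonzero(decisions)}

    policy.update(X_round, decisions, observed)            # hypothetical update
    return decisions, observed
```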
arXiv Detail & Related papers (2022-06-09T12:59:03Z)
- CertiFair: A Framework for Certified Global Fairness of Neural Networks [1.4620086904601473]
Individual Fairness suggests that similar individuals with respect to a certain task are to be treated similarly by a Neural Network (NN) model.
We construct a verifier which checks whether the fairness property holds for a given NN in a classification task.
We then provide provable bounds on the fairness of the resulting NN.
arXiv Detail & Related papers (2022-05-20T02:08:47Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
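Quantification here means estimating group prevalences rather than classifying individuals. Below is a minimal sketch of the classical adjusted classify-and-count correction, one standard quantification method (not necessarily the estimator this paper advocates).

```python
import numpy as np

def adjusted_classify_and_count(scores_unlabeled, scores_val, y_val, threshold=0.5):
    """Estimate the prevalence of the positive class in unlabeled data.

    Corrects the raw positive-prediction rate using the classifier's
    true/false positive rates measured on a labelled validation set.
    """
    pred_val = scores_val >= threshold
    tpr = np.mean(pred_val[y_val == 1])
    fpr = np.mean(pred_val[y_val == 0])
    raw = np.mean(scores_unlabeled >= threshold)
    # Invert raw = tpr * p + fpr * (1 - p) to recover the true prevalence p.
    p = (raw - fpr) / max(tpr - fpr, 1e-12)
    return float(np.clip(p, 0.0, 1.0))
```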
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Fair Classification with Adversarial Perturbations [35.030329189029246]
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes.
Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.
We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy and any algorithm with better fairness must have lower accuracy.
arXiv Detail & Related papers (2021-06-10T17:56:59Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
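A compact sketch of one common way to realize such an adversarial multi-task setup in PyTorch: a shared encoder, a task head, and a bias-discrimination head trained through a gradient-reversal layer. The paper's exact architecture and losses may differ; module and parameter names here are illustrative.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the way back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class FairnessAdversarialNet(nn.Module):
    def __init__(self, in_dim, n_classes, n_groups, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.task_head = nn.Linear(64, n_classes)   # main classification head
        self.bias_head = nn.Linear(64, n_groups)    # discrimination (bias) head

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.task_head(z)
        # The bias head tries to predict the sensitive group; reversed gradients
        # push the encoder to remove group information from its representation.
        g_logits = self.bias_head(GradReverse.apply(z, self.lam))
        return y_logits, g_logits

# Training step (sketch): cross-entropy on both heads shares one backward pass, e.g.
# loss = ce(y_logits, y) + ce(g_logits, group); loss.backward(); optimizer.step()
```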
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with $p$-norm objective and individual fairness constraints with provable approximation guarantee.
arXiv Detail & Related papers (2020-06-22T20:02:09Z)