Individually Fair Learning with One-Sided Feedback
- URL: http://arxiv.org/abs/2206.04475v1
- Date: Thu, 9 Jun 2022 12:59:03 GMT
- Title: Individually Fair Learning with One-Sided Feedback
- Authors: Yahav Bechavod, Aaron Roth
- Abstract summary: We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances.
On each round, $k$ instances arrive and receive classification outcomes according to a randomized policy deployed by the learner.
We then construct an efficient reduction from our problem of online learning with one-sided feedback and a panel reporting fairness violations to the contextual combinatorial semi-bandit problem.
- Score: 15.713330010191092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider an online learning problem with one-sided feedback, in which the
learner is able to observe the true label only for positively predicted
instances. On each round, $k$ instances arrive and receive classification
outcomes according to a randomized policy deployed by the learner, whose goal
is to maximize accuracy while deploying individually fair policies. We first
extend the framework of Bechavod et al. (2020), which relies on the existence
of a human fairness auditor for detecting fairness violations, to instead
incorporate feedback from dynamically-selected panels of multiple, possibly
inconsistent, auditors. We then construct an efficient reduction from our
problem of online learning with one-sided feedback and a panel reporting
fairness violations to the contextual combinatorial semi-bandit problem
(Cesa-Bianchi & Lugosi, 2009; György et al., 2007). Finally, we show how to
leverage the guarantees of two algorithms in the contextual combinatorial
semi-bandit setting: Exp2 (Bubeck et al., 2012) and the oracle-efficient
Context-Semi-Bandit-FTPL (Syrgkanis et al., 2016), to provide multi-criteria no
regret guarantees simultaneously for accuracy and fairness. Our results
eliminate two potential sources of bias from prior work: the "hidden outcomes"
that are not available to an algorithm operating in the full information
setting, and human biases that might be present in any single human auditor,
but can be mitigated by selecting a well-chosen panel.
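To make the round structure concrete, here is a minimal sketch of the protocol the abstract describes. The policy-as-probability interface, the `label_oracle`, and the majority-vote aggregation over the panel are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_round(policy, instances, label_oracle, panel):
    """One round of the protocol from the abstract: k instances arrive,
    a randomized policy classifies them, true labels are revealed only
    for positive predictions, and a dynamically selected panel of
    auditors reports perceived fairness violations."""
    # Randomized classification: policy(x) = P(predict positive | x).
    probs = np.array([policy(x) for x in instances])
    preds = rng.random(len(instances)) < probs

    # One-sided feedback: the true label is observed only for
    # positively classified instances (e.g., granted loans).
    observed = {i: label_oracle(instances[i])
                for i in range(len(instances)) if preds[i]}

    # Each panel member returns a set of pairs (i, j) it considers
    # unfairly treated; members may be mutually inconsistent.
    reports = [auditor(instances, probs) for auditor in panel]

    # Majority vote over the panel (an illustrative aggregation rule,
    # not necessarily the paper's): keep pairs flagged by > half.
    candidates = set().union(*reports) if reports else set()
    flagged = [p for p in candidates
               if sum(p in r for r in reports) > len(panel) / 2]

    return preds, observed, flagged
```

From here, the learner would hand the observed losses and flagged pairs to a contextual combinatorial semi-bandit algorithm such as Exp2 or Context-Semi-Bandit-FTPL, which need loss feedback only on the coordinates actually played, matching the one-sided feedback structure.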
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
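A loose sketch of the active-sampling loop described in Fairness Without Harm above; `influence_score` is a hypothetical placeholder standing in for the paper's influence estimate, which is not reproduced here.

```python
import numpy as np

def select_batch(candidates, influence_score, k=32):
    """Rank unlabeled candidates by an estimate of how much labeling
    each one would reduce fairness disparity without hurting accuracy,
    then acquire the top k. No group annotations are consulted."""
    scores = np.array([influence_score(x) for x in candidates])
    return np.argsort(scores)[::-1][:k]  # indices of top-k candidates
```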
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
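The two EOC properties above can be checked empirically on held-out prediction intervals. The helper below is an illustrative assumption (it conditions only on group membership, omitting the paper's conditioning on similar outcomes):

```python
import numpy as np

def eoc_check(y, lo, hi, groups, target=0.9):
    """Property (2): overall coverage sits at the predetermined level.
    Property (1): per-group coverage rates are close to one another."""
    y, lo, hi = map(np.asarray, (y, lo, hi))
    groups = np.asarray(groups)
    covered = (lo <= y) & (y <= hi)            # interval covers outcome
    overall = covered.mean()
    per_group = {g: covered[groups == g].mean()
                 for g in np.unique(groups)}
    gap = max(per_group.values()) - min(per_group.values())
    return overall - target, gap               # both should be near 0
```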
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Towards Distribution-Agnostic Generalized Category Discovery [51.52673017664908]
Data imbalance and open-ended distribution are intrinsic characteristics of the real visual world.
We propose a Self-Balanced Co-Advice contrastive framework (BaCon).
BaCon consists of a contrastive-learning branch and a pseudo-labeling branch, working collaboratively to provide interactive supervision for the distribution-agnostic generalized category discovery (DA-GCD) task.
arXiv Detail & Related papers (2023-10-02T17:39:58Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
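For the underrepresentation entry above, a simple estimator consistent with the summary, under the assumption that bias acts as independent group-wise subsampling; this is a sketch, not necessarily the paper's procedure:

```python
import numpy as np

def estimate_retention(biased_groups, unbiased_groups):
    """Compare group frequencies in the large biased sample against a
    small unbiased sample; under group-wise subsampling, the frequency
    ratio recovers each group's retention rate up to a common constant."""
    biased = np.asarray(biased_groups)
    unbiased = np.asarray(unbiased_groups)
    rates = {}
    for g in np.unique(unbiased):
        p_true = np.mean(unbiased == g)   # frequency in unbiased data
        p_obs = np.mean(biased == g)      # frequency in biased data
        rates[g] = p_obs / p_true
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}  # normalized to [0, 1]
```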
- Bias-Robust Bayesian Optimization via Dueling Bandit [57.82422045437126]
We consider Bayesian optimization in settings where observations can be adversarially biased.
We propose a novel approach for dueling bandits based on information-directed sampling (IDS).
Thereby, we obtain the first efficient kernelized algorithm for dueling bandits that comes with cumulative regret guarantees.
arXiv Detail & Related papers (2021-05-25T10:08:41Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Fair Classification via Unconstrained Optimization [0.0]
We show that the Bayes optimal fair learning rule remains a group-wise thresholding rule over the Bayes regressor.
The proposed algorithm can be applied to any black-box machine learning model.
arXiv Detail & Related papers (2020-05-21T11:29:05Z)
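The group-wise thresholding structure stated above admits a one-function post-processing sketch; the threshold values below are placeholders, whereas the paper derives them from the fairness constraint:

```python
import numpy as np

def groupwise_threshold(scores, groups, thresholds):
    """Predict positive whenever the black-box model's estimate of
    P(Y = 1 | X) clears the threshold assigned to the individual's
    group, the form the Bayes optimal fair rule is shown to take."""
    t = np.array([thresholds[g] for g in groups])
    return (np.asarray(scores) >= t).astype(int)

# Usage with illustrative thresholds (assumptions, not derived values):
preds = groupwise_threshold([0.3, 0.7, 0.55, 0.9],
                            ["a", "b", "a", "b"],
                            {"a": 0.5, "b": 0.6})
```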
- Metric-Free Individual Fairness in Online Learning [32.56688029679103]
We study an online learning problem subject to the constraint of individual fairness.
We do not assume the similarity measure among individuals is known, nor do we assume that such a measure takes a certain parametric form.
We leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure.
arXiv Detail & Related papers (2020-02-13T12:25:27Z)
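Since the headline paper extends exactly this single-auditor model to panels, a minimal interface sketch may help fix ideas; the type signature and the pairwise treatment-gap penalty are assumptions for illustration:

```python
from typing import Callable, Optional, Sequence, Tuple

# A metric-free auditor: it sees the individuals and the policy's
# treatment probabilities and may flag one pair (i, j) it judges
# unfairly treated, without ever writing down a similarity measure.
Auditor = Callable[[Sequence, Sequence[float]], Optional[Tuple[int, int]]]

def fairness_signal(auditor: Auditor, instances, probs) -> float:
    """Convert an auditor's flagged pair into a per-round fairness
    loss the online learner can regret-minimize against."""
    pair = auditor(instances, probs)
    if pair is None:
        return 0.0
    i, j = pair
    # Penalize the treatment gap on the flagged pair (illustrative).
    return abs(probs[i] - probs[j])
```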
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.