Statistical inference for individual fairness
- URL: http://arxiv.org/abs/2103.16714v1
- Date: Tue, 30 Mar 2021 22:49:25 GMT
- Title: Statistical inference for individual fairness
- Authors: Subha Maity, Songkai Xue, Mikhail Yurochkin, Yuekai Sun
- Abstract summary: We focus on the problem of detecting violations of individual fairness in machine learning models.
We develop a suite of inference tools for the adversarial cost function.
We demonstrate the utility of our tools in a real-world case study.
- Score: 24.622418924551315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As we rely on machine learning (ML) models to make more consequential
decisions, the issue of ML models perpetuating or even exacerbating undesirable
historical biases (e.g., gender and racial biases) has come to the fore of the
public's attention. In this paper, we focus on the problem of detecting
violations of individual fairness in ML models. We formalize the problem as
measuring the susceptibility of ML models to a form of adversarial attack
and develop a suite of inference tools for the adversarial cost function. The
tools allow auditors to assess the individual fairness of ML models in a
statistically principled way: form confidence intervals for the worst-case
performance differential between similar individuals and test hypotheses of
model fairness with (asymptotic) non-coverage/Type I error rate control. We
demonstrate the utility of our tools in a real-world case study.
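The audit workflow the abstract outlines (perturb each test point within a set of "similar" individuals, aggregate the worst-case loss differential, then form a confidence interval and test a fairness hypothesis) can be sketched in a few lines. The sketch below is a hedged illustration under stated assumptions, not the authors' implementation: `model_loss`, `perturb_candidates`, the tolerance `delta`, and the plain normal approximation are placeholders for the paper's adversarial cost function, fair metric, and asymptotic theory.

```python
# Minimal sketch of an individual-fairness audit (illustrative assumptions only).
import numpy as np
from scipy.stats import norm


def audit_individual_fairness(model_loss, X, y, perturb_candidates,
                              delta=0.05, alpha=0.05):
    """Estimate the mean worst-case loss differential between similar individuals.

    model_loss(X, y)      -> per-example losses, shape (n,)   [assumed interface]
    perturb_candidates(x) -> points deemed similar to x under an assumed fair
                             metric, shape (k, d)              [assumed interface]
    """
    base = model_loss(X, y)                      # loss at the observed points
    worst = np.empty(len(X))
    for i, x in enumerate(X):
        cands = perturb_candidates(x)            # "similar" individuals
        losses = model_loss(cands, np.repeat(y[i], len(cands)))
        worst[i] = losses.max()                  # worst case over the similar set

    diff = worst - base                          # worst-case performance differential
    mean = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(len(diff))

    z = norm.ppf(1.0 - alpha)                    # one-sided normal quantile
    upper = mean + z * se                        # (1 - alpha) upper confidence bound
    # One-sided test of H0: mean differential <= delta ("the model is fair enough")
    reject = (mean - delta) / se > z             # asymptotic level-alpha rejection rule
    return {"mean_differential": mean, "upper_conf_bound": upper, "reject_H0": reject}
```

In the paper, the inner maximization is an adversarial attack with respect to a fair metric rather than a search over a finite candidate set, and the non-coverage/Type I error control comes from the asymptotic distribution of the adversarial cost; the candidate search and normal approximation here only keep the sketch self-contained.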
Related papers
- Fairness And Performance In Harmony: Data Debiasing Is All You Need [5.969005147375361]
This study investigates fairness using a real-world university admission dataset with 870 profiles.
For individual fairness, we assess decision consistency among experts with varied backgrounds and ML models.
Results show ML models outperform humans in fairness by 14.08% to 18.79%.
arXiv Detail & Related papers (2024-11-26T12:31:10Z)
- Uncertainty-based Fairness Measures [14.61416119202288]
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings.
We show that an ML model may appear to be fair with existing point-based fairness measures but still be biased against a demographic group in terms of its prediction uncertainties.
arXiv Detail & Related papers (2023-12-18T15:49:03Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Bias-inducing geometries: an exactly solvable data model with fairness implications [13.690313475721094]
We introduce an exactly solvable high-dimensional model of data imbalance.
We analytically unpack the typical properties of learning models trained in this synthetic framework.
We obtain exact predictions for the observables that are commonly employed for fairness assessment.
arXiv Detail & Related papers (2022-05-31T16:27:57Z)
- Reducing Unintended Bias of ML Models on Tabular and Textual Data [5.503546193689538]
We revisit the framework FixOut, which is inspired by the "fairness through unawareness" approach to building fairer models.
We introduce several improvements such as automating the choice of FixOut's parameters.
We present several experimental results showing that FixOut improves process fairness in different classification settings.
arXiv Detail & Related papers (2021-08-05T14:55:56Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations [98.3066727301239]
We identify two key properties of the training data that drive this behavior.
We show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt.
arXiv Detail & Related papers (2020-05-09T01:59:13Z)
- Auditing ML Models for Individual Bias and Unfairness [46.94549066382216]
We formalize the task of auditing ML models for individual bias/unfairness and develop a suite of inferential tools for the optimal value.
To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe's COMPAS recidivism prediction instrument.
arXiv Detail & Related papers (2020-03-11T00:35:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.