Post-processing for Individual Fairness
- URL: http://arxiv.org/abs/2110.13796v1
- Date: Tue, 26 Oct 2021 15:51:48 GMT
- Title: Post-processing for Individual Fairness
- Authors: Felix Petersen, Debarghya Mukherjee, Yuekai Sun, Mikhail Yurochkin
- Abstract summary: Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production.
We consider a setting where the learner only has access to the predictions of the original model and a similarity graph between individuals, guiding the desired fairness constraints.
Our algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy.
- Score: 23.570995756189266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Post-processing in algorithmic fairness is a versatile approach for
correcting bias in ML systems that are already used in production. The main
appeal of post-processing is that it avoids expensive retraining. In this work,
we propose general post-processing algorithms for individual fairness (IF). We
consider a setting where the learner only has access to the predictions of the
original model and a similarity graph between individuals, guiding the desired
fairness constraints. We cast the IF post-processing problem as a graph
smoothing problem corresponding to graph Laplacian regularization that
preserves the desired "treat similar individuals similarly" interpretation. Our
theoretical results demonstrate the connection of the new objective function to
a local relaxation of the original individual fairness. Empirically, our
post-processing algorithms correct individual biases in large-scale NLP models
such as BERT, while preserving accuracy.
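As a rough illustration of the graph-smoothing idea described above, the sketch below post-processes base-model predictions y_hat by solving the Laplacian-regularized least-squares problem min_f ||f - y_hat||^2 + lambda * f^T L f with L = D - W, whose closed form is f = (I + lambda * L)^{-1} y_hat. The function name, the single regularization weight, and the dense solver are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def laplacian_smooth(y_hat, W, lam=1.0):
    """Post-process predictions y_hat (n,) or (n, k) by solving
    min_f ||f - y_hat||^2 + lam * f^T L f,  L = D - W,
    whose closed-form solution is f = (I + lam * L)^{-1} y_hat."""
    W = np.asarray(W, dtype=float)
    D = np.diag(W.sum(axis=1))
    L = D - W                      # combinatorial graph Laplacian
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, y_hat)

# toy example: individuals 0 and 1 are similar but scored very differently
y_hat = np.array([0.9, 0.1, 0.5])
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(laplacian_smooth(y_hat, W, lam=2.0))  # scores 0 and 1 are pulled together
```

The quadratic penalty directly encodes "treat similar individuals similarly": each edge weight W[i, j] charges (f_i - f_j)^2, so strongly connected individuals receive close post-processed scores.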
Related papers
- Mitigating Matching Biases Through Score Calibration [1.5530839016602822]
Biased outcomes in record matching can result in unequal error rates across demographic groups, raising ethical and legal concerns.
In this paper, we adapt fairness metrics traditionally applied in regression models to evaluate cumulative bias across all thresholds in record matching.
We propose a novel post-processing calibration method, leveraging optimal transport theory and Wasserstein barycenters, to balance matching scores across demographic groups.
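The paper's exact procedure is not reproduced here; as a hedged sketch of the one-dimensional optimal-transport recipe it builds on, the Wasserstein barycenter of per-group score distributions in 1-D is obtained by averaging quantile functions, and each group's scores can then be mapped onto it. Function names and the quantile grid are illustrative assumptions.

```python
import numpy as np

def barycenter_calibrate(scores, groups, n_q=101):
    """Map each group's matching scores onto the 1-D Wasserstein
    barycenter of the per-group score distributions; in 1-D the
    barycenter's quantile function is the average of the groups'
    quantile functions."""
    qs = np.linspace(0, 1, n_q)
    group_ids = np.unique(groups)
    gq = {g: np.quantile(scores[groups == g], qs) for g in group_ids}
    bary_q = np.mean([gq[g] for g in group_ids], axis=0)  # barycenter quantiles
    out = np.empty_like(scores, dtype=float)
    for g in group_ids:
        s = scores[groups == g]
        # each score's rank within its group, in (0, 1]
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        out[groups == g] = np.interp(ranks, qs, bary_q)
    return out

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.4, 0.1, 500)])
groups = np.array([0] * 500 + [1] * 500)
cal = barycenter_calibrate(scores, groups)
print(cal[groups == 0].mean(), cal[groups == 1].mean())  # now nearly equal
```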
arXiv Detail & Related papers (2024-11-03T21:01:40Z)
- Differentially Private Post-Processing for Fair Regression [13.855474876965557]
Our algorithm can be applied to post-process any given regressor to improve fairness by remapping its outputs.
We analyze the sample complexity of our algorithm and provide a fairness guarantee, revealing a trade-off between the statistical bias and variance induced by the choice of the number of bins in the histogram.
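A minimal sketch of the histogram step only, assuming the Laplace mechanism and unit per-individual sensitivity; the paper's full post-processing, which remaps the regressor's outputs with formal fairness guarantees, is more involved. The bin count n_bins exposes the bias-variance trade-off mentioned above: more bins mean finer resolution (less statistical bias) but more noise per bin (more variance).

```python
import numpy as np

def dp_histogram(preds, n_bins, eps, lo=0.0, hi=1.0, rng=None):
    """eps-DP histogram of regression outputs via the Laplace mechanism:
    each individual changes one bin count by 1, so adding Laplace(1/eps)
    noise to every count satisfies eps-DP (add/remove neighbors)."""
    rng = rng or np.random.default_rng()
    counts, edges = np.histogram(preds, bins=n_bins, range=(lo, hi))
    noisy = counts + rng.laplace(scale=1.0 / eps, size=n_bins)
    noisy = np.clip(noisy, 0, None)  # clipping is post-processing, keeps DP
    total = noisy.sum()
    probs = noisy / total if total > 0 else np.full(n_bins, 1.0 / n_bins)
    return probs, edges
```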
arXiv Detail & Related papers (2024-05-07T06:09:37Z)
- Sparse is Enough in Fine-tuning Pre-trained Large Language Models [98.46493578509039]
We propose a gradient-based sparse fine-tuning algorithm, named Sparse Increment Fine-Tuning (SIFT).
We validate its effectiveness on a range of tasks including the GLUE Benchmark and Instruction-tuning.
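SIFT's precise selection rule and schedule are not reproduced here; the sketch below shows only the generic idea of a gradient-magnitude-masked sparse update in PyTorch, with the density parameter and the per-tensor top-k rule as illustrative assumptions.

```python
import torch

def sparse_step(model, loss, lr=1e-4, density=0.01):
    """One illustrative sparse update: backprop, then apply the gradient
    only to the top `density` fraction of entries (by magnitude) in each
    parameter tensor, leaving all other coordinates untouched."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            g = p.grad
            k = max(1, int(density * g.numel()))
            # threshold = k-th largest gradient magnitude in this tensor
            thresh = g.abs().flatten().kthvalue(g.numel() - k + 1).values
            mask = (g.abs() >= thresh).float()
            p -= lr * g * mask  # update only the selected coordinates
```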
arXiv Detail & Related papers (2023-12-19T06:06:30Z)
- Improving Fair Training under Correlation Shifts [33.385118640843416]
In particular, when the bias between labels and sensitive groups changes, the fairness of the trained model is directly influenced and can worsen.
We analytically show that existing in-processing fair algorithms have fundamental limits in accuracy and group fairness.
We propose a novel pre-processing step that samples the input data to reduce correlation shifts.
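As a hedged illustration of such a pre-processing step (not the paper's algorithm), one can subsample each (group, label) cell toward the independence target P(group) * P(label), so that labels and sensitive groups become uncorrelated in the training sample. Inputs are assumed to be NumPy arrays, and the cell-wise down-sampling rule is an illustrative choice.

```python
import numpy as np

def decorrelate_sample(X, y, g, rng=None):
    """Resample so the label distribution is the same in every sensitive
    group: each (group, label) cell is down-sampled toward the size it
    would have if label and group were independent."""
    rng = rng or np.random.default_rng()
    idx = []
    p_y = {v: np.mean(y == v) for v in np.unique(y)}
    for gv in np.unique(g):
        grp = np.where(g == gv)[0]
        for yv in np.unique(y):
            cell = grp[y[grp] == yv]
            target = int(round(len(grp) * p_y[yv]))  # independence target
            take = min(target, len(cell))
            idx.extend(rng.choice(cell, size=take, replace=False))
    idx = np.array(idx)
    return X[idx], y[idx], g[idx]
```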
arXiv Detail & Related papers (2023-02-05T07:23:35Z)
- Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding [51.48582649050054]
We propose a representation normalization method which aims at disentangling the correlations between features of encoded sentences.
We also propose Kernel-Whitening, a Nyström kernel approximation method to achieve more thorough debiasing of nonlinear spurious correlations.
Experiments show that Kernel-Whitening significantly improves the performance of BERT on out-of-distribution datasets while maintaining in-distribution accuracy.
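A rough sketch of the two ingredients under simplifying assumptions, not the authors' exact pipeline: ZCA whitening of sentence embeddings removes linear feature correlations, and a Nyström feature map (shown for an RBF kernel, an illustrative choice) lifts embeddings so that the same whitening also attacks nonlinear correlations.

```python
import numpy as np

def whiten(E, eps=1e-5):
    """ZCA-whiten sentence embeddings E (n, d): center, then rotate and
    rescale so features are uncorrelated with unit variance."""
    Ec = E - E.mean(axis=0, keepdims=True)
    cov = Ec.T @ Ec / len(Ec)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # ZCA transform
    return Ec @ W

def nystrom_features(E, landmarks, gamma=1.0):
    """Nystrom approximation of RBF kernel features: kernel similarities
    to a landmark set, projected through the landmark kernel's inverse
    square root, so inner products approximate the full kernel."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Kmm = rbf(landmarks, landmarks) + 1e-8 * np.eye(len(landmarks))
    vals, vecs = np.linalg.eigh(Kmm)
    Kmm_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return rbf(E, landmarks) @ Kmm_inv_sqrt
```

Whitening the Nyström features (rather than the raw embeddings) is what lets the linear decorrelation step act on nonlinear feature interactions.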
arXiv Detail & Related papers (2022-10-14T05:56:38Z)
- iFlipper: Label Flipping for Individual Fairness [16.50058737985628]
We show that label flipping is an effective pre-processing technique for improving individual fairness.
We propose an approximate linear programming algorithm and provide theoretical guarantees on how close its result is to the optimal solution.
Experiments on real datasets show that iFlipper significantly outperforms other pre-processing baselines in terms of individual fairness.
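As an illustrative sketch of the underlying optimization (not iFlipper's actual approximation algorithm), the label-flipping problem can be relaxed to a linear program: minimize the total label change subject to a budget on pairwise violations over a similarity edge set. The variable layout and the scipy solver are assumptions for this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def flip_labels(y, edges, budget):
    """LP relaxation of fairness-aware label flipping: minimize
    sum_i |z_i - y_i| subject to sum_{(i,j) in edges} |z_i - z_j| <= budget
    and 0 <= z <= 1.  Variables are stacked as [z (n), a (n), b (m)],
    where a and b are the usual absolute-value slack variables."""
    n, m = len(y), len(edges)
    dim = 2 * n + m
    c = np.concatenate([np.zeros(n), np.ones(n), np.zeros(m)])
    A, b_ub = [], []

    def constraint(coeffs, rhs):          # add one row: sum coeffs <= rhs
        row = np.zeros(dim)
        for idx, v in coeffs.items():
            row[idx] = v
        A.append(row)
        b_ub.append(rhs)

    for i in range(n):                    # a_i >= |z_i - y_i|
        constraint({i: 1, n + i: -1}, y[i])
        constraint({i: -1, n + i: -1}, -y[i])
    for k, (i, j) in enumerate(edges):    # b_k >= |z_i - z_j|
        constraint({i: 1, j: -1, 2 * n + k: -1}, 0)
        constraint({i: -1, j: 1, 2 * n + k: -1}, 0)
    constraint({2 * n + k: 1 for k in range(m)}, budget)  # violation budget
    bounds = [(0, 1)] * n + [(0, None)] * (n + m)
    res = linprog(c, A_ub=np.array(A), b_ub=b_ub, bounds=bounds)
    return res.x[:n]                      # soft labels; round to get flips

y = np.array([1.0, 0.0, 1.0])
print(flip_labels(y, edges=[(0, 1)], budget=0.0))  # forces z_0 == z_1
```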
arXiv Detail & Related papers (2022-09-15T05:02:01Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Individually Fair Gradient Boosting [86.1984206610373]
We consider the task of enforcing individual fairness in gradient boosting.
We show that our algorithm converges globally and generalizes.
We also demonstrate the efficacy of our algorithm on three ML problems susceptible to algorithmic bias.
arXiv Detail & Related papers (2021-03-31T03:06:57Z)
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
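SenSeI's actual regularizer is transport-based and optimized adversarially; the simplified sketch below conveys only the invariance idea, penalizing output changes under shifts of the input along an assumed sensitive direction. The function name, the fixed shift scales, and the squared penalty are illustrative choices.

```python
import torch

def sensitive_invariance_penalty(model, x, sensitive_dir, scales=(-1.0, 1.0)):
    """Simplified invariance regularizer: penalize how much the model's
    output moves when inputs are translated within a 'sensitive set'
    (here, shifts along one sensitive direction)."""
    base = model(x)
    penalty = 0.0
    for s in scales:
        shifted = model(x + s * sensitive_dir)
        penalty = penalty + ((shifted - base) ** 2).mean()
    return penalty

# usage: total_loss = task_loss + lam * sensitive_invariance_penalty(model, x, v)
```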
arXiv Detail & Related papers (2020-06-25T04:31:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.