fAux: Testing Individual Fairness via Gradient Alignment
- URL: http://arxiv.org/abs/2210.06288v1
- Date: Mon, 10 Oct 2022 21:27:20 GMT
- Title: fAux: Testing Individual Fairness via Gradient Alignment
- Authors: Giuseppe Castiglione, Ga Wu, Christopher Srinivasa, Simon Prince
- Abstract summary: We describe a new approach for testing individual fairness that does not have either requirement.
We show that the proposed method effectively identifies discrimination on both synthetic and real-world datasets.
- Score: 2.5329739965085785
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine learning models are vulnerable to biases that result in unfair
treatment of individuals from different populations. Recent work that aims to
test a model's fairness at the individual level either relies on domain
knowledge to choose metrics, or on input transformations that risk generating
out-of-domain samples. We describe a new approach for testing individual
fairness that does not have either requirement. We propose a novel criterion
for evaluating individual fairness and develop a practical testing method based
on this criterion which we call fAux (pronounced fox). This is based on
comparing the derivatives of the predictions of the model to be tested with
those of an auxiliary model, which predicts the protected variable from the
observed data. We show that the proposed method effectively identifies
discrimination on both synthetic and real-world datasets, and has quantitative
and qualitative advantages over contemporary methods.
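The core idea of the abstract — comparing the input gradients of the model under test with those of an auxiliary model that predicts the protected variable — can be sketched as follows. This is a minimal illustration using analytic logistic models with hypothetical names; the paper's actual fAux criterion, thresholds, and normalization may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(w, b, x):
    """Gradient of a logistic model's prediction sigmoid(w.x + b) w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    return p * (1.0 - p) * w

def faux_alignment(w_task, b_task, w_aux, b_aux, x):
    """Cosine alignment between the task model's and the auxiliary model's
    input gradients at x. Alignment near +/-1 suggests the task model's
    local behaviour tracks the protected attribute (potential discrimination)."""
    g_task = input_gradient(w_task, b_task, x)
    g_aux = input_gradient(w_aux, b_aux, x)
    denom = np.linalg.norm(g_task) * np.linalg.norm(g_aux)
    return float(g_task @ g_aux / denom)

# Toy check: a task model sharing the auxiliary model's weights is fully
# aligned; one with orthogonal weights is not.
w_aux = np.array([1.0, 0.0])
x = np.array([0.3, -0.2])
print(faux_alignment(w_aux, 0.0, w_aux, 0.0, x))                  # 1.0
print(faux_alignment(np.array([0.0, 1.0]), 0.0, w_aux, 0.0, x))   # 0.0
```

For differentiable models in practice, the input gradients would come from an autodiff framework rather than a closed form; the alignment test itself is unchanged.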
Related papers
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Personalized Interpretable Classification [8.806213269230057]
We make a first step towards formally introducing personalized interpretable classification as a new data mining problem.
We conduct a series of empirical studies on real data sets.
Our algorithm achieves the same level of predictive accuracy as state-of-the-art (SOTA) interpretable classifiers.
arXiv Detail & Related papers (2023-02-06T01:59:16Z)
- Provable Fairness for Neural Network Models using Formal Verification [10.90121002896312]
We propose techniques to prove fairness using recently developed formal methods that verify properties of neural network models.
We show that through proper training, we can reduce unfairness by an average of 65.4% at a cost of less than 1% in AUC score.
arXiv Detail & Related papers (2022-12-16T16:54:37Z)
- Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well performing models.
Our findings suggest that such unfairness can be readily found in real life and it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted training set, where the sample weights are computed via influence functions using a validation set with sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Fairness in the Eyes of the Data: Certifying Machine-Learning Models [38.09830406613629]
We present a framework that makes it possible to certify the fairness degree of a model based on an interactive and privacy-preserving test.
We tackle two scenarios, where either the test data is privately available only to the tester or is publicly known in advance, even to the model creator.
We provide a cryptographic technique to automate fairness testing and certified inference with only black-box access to the model at hand while hiding the participants' sensitive data.
arXiv Detail & Related papers (2020-09-03T09:22:39Z)
- A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Achieving Equalized Odds by Resampling Sensitive Attributes [13.114114427206678]
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.
This differentiable functional is used as a penalty driving the model parameters towards equalized odds.
We develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature.
arXiv Detail & Related papers (2020-06-08T00:18:34Z)
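The equalized-odds notion behind the last entry can be made concrete with a small sketch: a surrogate gap that compares mean predicted scores between protected groups, conditioned on the true label. The helper below is hypothetical and illustrates the notion only; it is not the differentiable functional or hypothesis test used in that paper.

```python
import numpy as np

def equalized_odds_gap(p, y, a):
    """Surrogate for the equalized-odds gap: within each true label, take the
    absolute difference in mean predicted score between the two protected
    groups, then average over labels. p: predicted scores, y: binary labels,
    a: binary protected attribute (all numpy arrays of equal length)."""
    gaps = []
    for label in (0, 1):
        m = y == label
        if m.any():
            g0 = p[m & (a == 0)].mean()
            g1 = p[m & (a == 1)].mean()
            gaps.append(abs(g0 - g1))
    return float(np.mean(gaps))

# Scores that depend only on the true label have zero gap.
y = np.array([0, 0, 1, 1])
a = np.array([0, 1, 0, 1])
p = np.array([0.2, 0.2, 0.8, 0.8])
print(equalized_odds_gap(p, y, a))  # 0.0
```

Used as a training penalty, a quantity like this drives the model's group-conditional score distributions together; a vanishing gap is the property such a hypothesis test would probe.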
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.