Adaptive Fairness Improvement Based on Causality Analysis
- URL: http://arxiv.org/abs/2209.07190v1
- Date: Thu, 15 Sep 2022 10:05:31 GMT
- Title: Adaptive Fairness Improvement Based on Causality Analysis
- Authors: Mengdi Zhang and Jun Sun
- Abstract summary: Given a discriminating neural network, the problem of fairness improvement is to systematically reduce discrimination without significantly sacrificing its performance.
We propose an approach which adaptively chooses the fairness improving method based on causality analysis.
Our approach is effective (i.e., it always identifies the best fairness improving method) and efficient (i.e., it incurs an average time overhead of 5 minutes).
- Score: 5.827653543633839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a discriminating neural network, the problem of fairness
improvement is to systematically reduce discrimination without significantly
sacrificing its performance (i.e., accuracy). Multiple categories of fairness improving methods
have been proposed for neural networks, including pre-processing, in-processing
and post-processing. Our empirical study, however, shows that these methods
are not always effective (e.g., they may improve fairness at the price of a
huge accuracy drop) or even helpful (e.g., they may worsen both fairness and
accuracy). In this work, we propose an approach which adaptively
chooses the fairness improving method based on causality analysis. That is, we
choose the method based on how the neurons and attributes responsible for
unfairness are distributed among the input attributes and the hidden neurons.
Our experimental evaluation shows that our approach is effective (i.e., it
always identifies the best fairness improving method) and efficient (i.e., it
incurs an average time overhead of 5 minutes).
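As a rough illustration of this selection logic, the sketch below routes to pre-, in-, or post-processing depending on whether the causal responsibility for unfairness concentrates in the input attributes or in the hidden neurons. The scoring rule, threshold, and fallback are illustrative assumptions, not the paper's actual criteria.

```python
def choose_fairness_method(attr_scores, neuron_scores, threshold=0.6):
    """Pick a fairness-improving category from causality-analysis scores.

    attr_scores:   causal responsibility of each input attribute for unfairness
    neuron_scores: causal responsibility of each hidden neuron for unfairness
    The rule and threshold are illustrative, not the paper's exact criteria.
    """
    attr_mass = sum(attr_scores)
    neuron_mass = sum(neuron_scores)
    total = attr_mass + neuron_mass
    if total == 0:
        return "post-processing"  # nothing localized; adjust outputs only

    if attr_mass / total >= threshold:
        return "pre-processing"   # unfairness rooted in the data attributes
    if neuron_mass / total >= threshold:
        return "in-processing"    # unfairness rooted in the trained network
    return "post-processing"      # responsibility spread across both


print(choose_fairness_method([0.7, 0.2], [0.1]))  # -> pre-processing
```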
Related papers
- Biasing & Debiasing based Approach Towards Fair Knowledge Transfer for Equitable Skin Analysis [16.638722872021095]
We propose an approach based on two biased teachers to transfer fair knowledge into the student network.
Our approach mitigates biases present in the student network without harming its predictive accuracy.
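A minimal PyTorch sketch of one way two group-biased teachers could transfer knowledge into a student: each sample is distilled from the teacher biased toward the other group, so group-specific biases offset rather than compound. The pairing rule and loss weights are assumptions; the paper's actual transfer scheme may differ.

```python
import torch
import torch.nn.functional as F

def two_teacher_distill_loss(student_logits, teacher_a_logits, teacher_b_logits,
                             labels, group, alpha=0.5, T=2.0):
    """Distillation from two teachers, each biased toward one protected group.

    Assumption: group-0 samples learn from teacher B (biased toward group 1)
    and vice versa, so each teacher supplies its less-biased view of a sample.
    """
    ce = F.cross_entropy(student_logits, labels)
    # Select the opposite-group teacher's logits per sample.
    teacher_logits = torch.where(group.unsqueeze(1) == 0,
                                 teacher_b_logits, teacher_a_logits)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return (1 - alpha) * ce + alpha * kd
```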
arXiv Detail & Related papers (2024-05-16T17:02:23Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
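A generic sketch of adaptive per-epoch reweighing, raising the weight of samples in the currently worse-off group; the paper's priority rule, which specifically targets generalizability under train/test distribution shifts, is more refined.

```python
import numpy as np

def update_sample_weights(errors, groups, weights, lr=0.5):
    """Adaptively upweight samples from groups with above-average error.

    errors:  per-sample 0/1 error indicators from the current model
    groups:  per-sample protected-group ids
    weights: current per-sample weights, updated each epoch
    """
    for g in np.unique(groups):
        mask = groups == g
        gap = errors[mask].mean() - errors.mean()
        weights[mask] *= np.exp(lr * gap)  # >1 for worse-off groups
    return weights / weights.mean()  # renormalize so the mean weight is 1
```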
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
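A minimal PyTorch sketch of one reading of "batch normalization adaptive to the sensitive attribute": separate normalization statistics per protected group. The FATE helper reads "normalized fairness improvement over accuracy drop" literally as a ratio; both details are assumptions about what the summary leaves open.

```python
import torch
import torch.nn as nn

class GroupAdaptiveBN(nn.Module):
    """BatchNorm with separate statistics and affine params per group.
    Assumes each group appears at least twice in every training batch."""
    def __init__(self, num_features, num_groups=2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_groups))

    def forward(self, x, group):
        out = torch.empty_like(x)
        for g, bn in enumerate(self.bns):
            mask = group == g
            if mask.any():
                out[mask] = bn(x[mask])
        return out

def fate(fair_base, fair_new, acc_base, acc_new, eps=1e-8):
    """One plausible reading of FATE: normalized fairness gain per
    normalized accuracy drop (lower fairness value = fairer model)."""
    fairness_gain = (fair_base - fair_new) / (fair_base + eps)
    accuracy_drop = (acc_base - acc_new) / (acc_base + eps)
    return fairness_gain / (accuracy_drop + eps)
```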
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting [41.96570350954332]
We propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points.
We find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric.
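A hedged sketch of the idea for a logistic-regression model, using classic first-order influence functions as a stand-in for the paper's infinitesimal-jackknife estimator: estimate, without refitting, how dropping each training point would move a demographic-parity gap, then drop the most harmful ones. The model class and fairness metric are assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def points_to_drop(X, y, group, w, k=10, reg=1e-3):
    """Estimate, without refitting, which training points to drop so the
    demographic-parity gap of a logistic-regression model shrinks.

    First-order influence functions stand in for the paper's
    infinitesimal-jackknife estimator; model class and metric are assumed.
    """
    n = len(y)
    p = sigmoid(X @ w)
    s = p * (1 - p)

    # Hessian of the mean log-loss, with a ridge term for invertibility.
    H = (X.T * s) @ X / n + reg * np.eye(X.shape[1])

    # Demographic-parity gap and its gradient w.r.t. the weights.
    gap = p[group == 1].mean() - p[group == 0].mean()
    grad_f = (s[group == 1, None] * X[group == 1]).mean(axis=0) \
           - (s[group == 0, None] * X[group == 0]).mean(axis=0)

    # Removing point i shifts the weights by roughly H^{-1} g_i / n.
    G = (p - y)[:, None] * X          # per-sample loss gradients
    delta_f = (G @ np.linalg.solve(H, grad_f)) / n

    # Keep the k drops that most reduce |gap| (most negative gap * delta_f).
    return np.argsort(gap * delta_f)[:k]
```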
arXiv Detail & Related papers (2022-12-13T18:36:19Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
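A PyTorch sketch of an adversarial setup where the discriminator sees the target class alongside the hidden representation, letting it model class-conditional bias; the gradient-reversal layer and layer sizes are standard choices assumed here, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class AugmentedDiscriminator(nn.Module):
    """Predicts the protected attribute from hidden features *and* the
    target class (the 'richer features'). Layer sizes are illustrative."""
    def __init__(self, hidden_dim, num_classes, num_protected):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + num_classes, 128), nn.ReLU(),
            nn.Linear(128, num_protected))

    def forward(self, h, y):
        h = GradReverse.apply(h)  # push the encoder to hide the attribute
        y_onehot = F.one_hot(y, self.num_classes).float()
        return self.net(torch.cat([h, y_onehot], dim=1))
```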
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Probabilistic Verification of Neural Networks Against Group Fairness [21.158245095699456]
We propose an approach to formally verify neural networks against fairness properties.
Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness.
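The paper gives formal guarantees by model-checking a Markov chain learned from the network; the sketch below only shows the group-fairness property itself, checked by plain Monte Carlo sampling (the sampler and decision encoding are hypothetical stand-ins).

```python
import numpy as np

def check_group_fairness(model, sample_inputs, sensitive_idx, eps=0.05,
                         n=100_000):
    """Sampling-based check of |P(favorable|g=1) - P(favorable|g=0)| <= eps.

    model(X) -> 0/1 decisions; sample_inputs(n) -> (n, d) inputs drawn from
    the input distribution. Both are user-supplied stand-ins.
    """
    X = sample_inputs(n)
    g = X[:, sensitive_idx]
    pred = model(X)
    p0, p1 = pred[g == 0].mean(), pred[g == 1].mean()
    return abs(p1 - p0) <= eps, (p0, p1)
```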
arXiv Detail & Related papers (2021-07-18T04:34:31Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
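For reference, BALD scores the mutual information between predictions and model parameters, typically estimated with MC dropout; a standard implementation (not specific to this paper's experiments):

```python
import torch

def bald_scores(mc_probs):
    """BALD acquisition scores from stochastic forward passes.

    mc_probs: (T, N, C) class probabilities from T MC-dropout passes.
    Score = H[E_t p] - E_t H[p]; higher = more epistemic uncertainty.
    """
    mean_p = mc_probs.mean(dim=0)                                   # (N, C)
    h_of_mean = -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=1)    # (N,)
    mean_of_h = -(mc_probs * torch.log(mc_probs + 1e-12)).sum(dim=2).mean(dim=0)
    return h_of_mean - mean_of_h
```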
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Promoting Fairness through Hyperparameter Optimization [4.479834103607383]
This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development.
We propose and evaluate fairness-aware variants of three popular HO algorithms: Fair Random Search, Fair TPE, and Fairband.
We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature.
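A minimal sketch of what a fairness-aware variant of random search could look like: each trial is scored on a weighted accuracy/fairness trade-off instead of accuracy alone. The objective and the `train_eval` interface are assumptions; Fair TPE and Fairband use more sophisticated strategies.

```python
import random

def fair_random_search(train_eval, space, n_trials=50, alpha=0.5):
    """Fairness-aware random search over a hyperparameter space.

    train_eval(config) -> (accuracy, fairness), both in [0, 1] with higher
    being better; a hypothetical user-supplied evaluation function.
    """
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in space.items()}
        accuracy, fairness = train_eval(config)
        score = alpha * accuracy + (1 - alpha) * fairness
        if score > best_score:
            best_config, best_score = config, score
    return best_config
```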
arXiv Detail & Related papers (2021-03-23T17:36:22Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
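The core operation can be sketched as a convex blend toward the label prior wherever confidence looks unjustified; how "unjustified" regions are found (here a hypothetical out-of-distribution score) is where the paper's actual method does the real work.

```python
import numpy as np

def temper_overconfidence(probs, prior, ood_score, threshold=0.8):
    """Blend predictions toward the label prior where the model is
    unjustifiably confident (e.g., far from the training data).

    probs: (N, C) predicted class probabilities; prior: (C,) label prior;
    ood_score: (N,) in [0, 1], from a hypothetical distance-to-data detector.
    """
    lam = np.where(ood_score > threshold, ood_score, 0.0)[:, None]
    # A convex mix raises entropy toward that of the prior distribution.
    return (1 - lam) * probs + lam * prior[None, :]
```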
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Fair Densities via Boosting the Sufficient Statistics of Exponential Families [72.34223801798422]
We introduce a boosting algorithm to pre-process data for fairness.
Our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee.
Empirical results are presented to demonstrate the quality of the results on real-world data.
arXiv Detail & Related papers (2020-12-01T00:49:17Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
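As one concrete post-hoc adjustment in this spirit, per-group quantile mapping aligns score distributions across groups before ranking; this is a generic technique used for illustration, not the paper's framework, which trades ranking fairness off against utility explicitly.

```python
import numpy as np

def quantile_align_scores(scores, groups, ref_group=0):
    """Post-process scores so each group's score distribution matches a
    reference group's, reducing systematic ranking disparity.
    """
    out = scores.astype(float).copy()
    ref = np.sort(scores[groups == ref_group])
    for g in np.unique(groups):
        if g == ref_group:
            continue
        mask = groups == g
        # Map each score to the reference score at the same quantile.
        ranks = np.argsort(np.argsort(scores[mask]))
        q = (ranks + 0.5) / mask.sum()
        out[mask] = np.quantile(ref, q)
    return out
```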
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.