FairNeuron: Improving Deep Neural Network Fairness with Adversary Games
on Selective Neurons
- URL: http://arxiv.org/abs/2204.02567v1
- Date: Wed, 6 Apr 2022 03:51:32 GMT
- Title: FairNeuron: Improving Deep Neural Network Fairness with Adversary Games
on Selective Neurons
- Authors: Xuanqi Gao, Juan Zhai, Shiqing Ma, Chao Shen, Yufei Chen, Qian Wang
- Abstract summary: We propose FairNeuron, a model automatic repairing tool, to mitigate fairness concerns and balance the accuracy-fairness trade-off.
Our approach is lightweight, making it scalable and more efficient.
Our evaluation on 3 datasets shows that FairNeuron can effectively improve all models' fairness while maintaining a stable utility.
- Score: 22.132637957776833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With Deep Neural Networks (DNNs) being integrated into a growing
number of critical systems with far-reaching impacts on society, there are
increasing concerns about their ethical performance, such as fairness.
Unfortunately, model fairness and accuracy are in many cases contradictory
goals to optimize. To address this issue, a number of works have tried to
improve model fairness by using an adversarial game at the model level. This
approach introduces an adversary that evaluates the fairness of a model
alongside its prediction accuracy on the main task, and performs joint
optimization to achieve a balanced result. In this paper, we observe that in
backward-propagation-based training, the same contradiction appears at the
level of individual neurons. Based on this observation, we propose FairNeuron,
an automatic DNN model repair tool that mitigates fairness concerns and
balances the accuracy-fairness trade-off without introducing another model. It
works by detecting neurons whose optimization directions under the accuracy
and fairness training goals contradict each other, and achieving a trade-off
via selective dropout. Compared with state-of-the-art methods, our approach is
lightweight, making it scalable and more efficient. Our evaluation on 3
datasets shows that FairNeuron can effectively improve all models' fairness
while maintaining stable utility.
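To make the neuron-level idea concrete, here is a minimal PyTorch-style sketch of the mechanism the abstract describes: per hidden neuron, compare the gradient contributed by the task (accuracy) loss with the gradient contributed by a fairness loss, flag neurons where the two point in opposite directions, and drop those neurons for the current update. The toy network, the demographic-parity surrogate, and the sign-based conflict test are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Toy binary classifier whose hidden units can be selectively dropped."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.hidden = nn.Linear(d_in, d_hidden)
        self.out = nn.Linear(d_hidden, 1)

    def forward(self, x, mask=None):
        h = torch.relu(self.hidden(x))
        if mask is not None:            # selective dropout: zero the flagged neurons
            h = h * mask
        return self.out(h).squeeze(-1)

def fairness_loss(logits, group):
    """Assumed surrogate: squared gap in mean predicted probability between groups."""
    p = torch.sigmoid(logits)
    return (p[group == 0].mean() - p[group == 1].mean()) ** 2

def conflicting_neuron_mask(model, x, y, group):
    """Flag hidden neurons whose accuracy and fairness gradients disagree in sign."""
    g_acc = torch.autograd.grad(
        F.binary_cross_entropy_with_logits(model(x), y), model.hidden.weight)[0]
    g_fair = torch.autograd.grad(
        fairness_loss(model(x), group), model.hidden.weight)[0]
    # A neuron "conflicts" when, summed over its incoming weights, the two
    # losses push in opposite directions (negative inner product).
    conflict = (g_acc * g_fair).sum(dim=1) < 0
    return (~conflict).float()          # 1 = keep the neuron, 0 = drop it this step
```

In a training loop one would periodically recompute the mask on a batch (with y and group as float/integer tensors) and then take the usual combined-loss step with model(x, mask=mask); the single-layer scope and the hard 0/1 mask are simplifications of the paper's selective dropout.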
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- NeuFair: Neural Network Fairness Repair with Dropout [19.49034966552718]
This paper investigates neuron dropout as a post-processing bias mitigation technique for deep neural networks (DNNs).
We show that our design of randomized algorithms is effective and efficient in improving fairness (up to 69%) with minimal or no model performance degradation (a minimal sketch of this style of post-hoc dropout repair appears after this list).
arXiv Detail & Related papers (2024-07-05T05:45:34Z)
- Enhancing Fairness in Neural Networks Using FairVIC [0.0]
Mitigating bias in automated decision-making systems, specifically deep learning models, is a critical challenge in achieving fairness.
We introduce FairVIC, an innovative approach designed to enhance fairness in neural networks by addressing inherent biases at the training stage.
We observe a significant improvement in fairness across all metrics tested, without compromising the model's accuracy to a detrimental extent.
arXiv Detail & Related papers (2024-04-28T10:10:21Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Adaptive Fairness Improvement Based on Causality Analysis [5.827653543633839]
Given a discriminating neural network, the problem of fairness improvement is to systematically reduce discrimination without significantly sacrificing its performance.
We propose an approach which adaptively chooses the fairness improving method based on causality analysis.
Our approach is effective (i.e., it always identifies the best fairness improving method) and efficient (i.e., with an average time overhead of 5 minutes).
arXiv Detail & Related papers (2022-09-15T10:05:31Z)
- ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret [97.73233271730616]
Recent techniques for approximating Nash equilibria in very large games leverage neural networks to learn approximately optimal policies (strategies).
DREAM, the only current CFR-based neural method that is model-free and therefore scalable to very large games, trains a neural network on an estimated regret target that can have extremely high variance due to an importance sampling term inherited from Monte Carlo CFR (MCCFR).
We show that a deep learning version of ESCHER outperforms the prior state of the art -- DREAM and neural fictitious self play (NFSP) -- and the difference becomes dramatic as game size increases.
arXiv Detail & Related papers (2022-06-08T18:43:45Z)
- Probabilistic Verification of Neural Networks Against Group Fairness [21.158245095699456]
We propose an approach to formally verify neural networks against fairness.
Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness.
arXiv Detail & Related papers (2021-07-18T04:34:31Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- To be Robust or to be Fair: Towards Fairness in Adversarial Training [83.42241071662897]
We find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data.
We propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem when doing adversarial defenses.
arXiv Detail & Related papers (2020-10-13T02:21:54Z)
- Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance [70.31427277842239]
We introduce a novel debiasing method called confidence regularization.
It discourages models from exploiting biases while enabling them to receive enough incentive to learn from all the training examples.
We evaluate our method on three NLU tasks and show that, in contrast to its predecessors, it improves the performance on out-of-distribution datasets.
arXiv Detail & Related papers (2020-05-01T11:22:55Z)
- Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference [119.19779637025444]
Deep networks have recently been suggested to face a trade-off between accuracy (on clean natural images) and robustness (on adversarially perturbed images).
This paper studies multi-exit networks associated with input-adaptive inference, showing their strong promise in achieving a "sweet point" in co-optimizing model accuracy, robustness, and efficiency.
arXiv Detail & Related papers (2020-02-24T00:40:22Z)
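Two of the entries above take a closely related dropout angle: FairNeuron applies selective dropout during training, while NeuFair searches for a dropout pattern as a post-processing repair. For comparison, below is a minimal sketch of post-hoc repair by randomized mask search over a trained model's hidden units, reusing the masked MLP from the earlier sketch; the scoring rule, trial count, and drop rate are illustrative assumptions, not NeuFair's actual algorithm.

```python
import torch

@torch.no_grad()
def search_dropout_mask(model, x_val, y_val, group, n_hidden,
                        trials=200, drop_rate=0.05):
    """Randomized post-hoc search: sample dropout masks over the trained model's
    hidden neurons and keep the one with the best fairness/accuracy score on
    held-out data."""
    best_mask, best_score = torch.ones(n_hidden), -float("inf")
    for _ in range(trials):
        mask = (torch.rand(n_hidden) > drop_rate).float()
        pred = (torch.sigmoid(model(x_val, mask=mask)) > 0.5).float()
        acc = (pred == y_val).float().mean()
        dp_gap = (pred[group == 0].mean() - pred[group == 1].mean()).abs()
        score = acc - dp_gap            # assumed accuracy-minus-gap objective
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask
```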
This list is automatically generated from the titles and abstracts of the papers on this site.