Estimating and Improving Fairness with Adversarial Learning
- URL: http://arxiv.org/abs/2103.04243v1
- Date: Sun, 7 Mar 2021 03:10:32 GMT
- Title: Estimating and Improving Fairness with Adversarial Learning
- Authors: Xiaoxiao Li, Ziteng Cui, Yifan Wu, Li Gu, Tatsuya Harada
- Abstract summary: We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale publicly available skin lesion dataset.
- Score: 65.99330614802388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness and accountability are two essential pillars for trustworthy
Artificial Intelligence (AI) in healthcare. However, existing AI models may
be biased in their decision making. To tackle this issue, we propose an
adversarial multi-task training strategy to simultaneously mitigate and detect
bias in the deep learning-based medical image analysis system. Specifically, we
propose to add a discrimination module against bias and a critical module that
predicts unfairness within the base classification model. We further impose an
orthogonality regularization to force the two modules to be independent during
training. Hence, we can keep these deep learning tasks distinct from one
another, and avoid collapsing them into a singular point on the manifold.
Through this adversarial training method, the data from the underprivileged
group, which is vulnerable to bias because of attributes such as sex and skin
tone, are transferred into a domain that is neutral relative to these
attributes. Furthermore, the critical module can predict fairness scores for
the data with unknown sensitive attributes. We evaluate our framework on a
large-scale publicly available skin lesion dataset under various fairness
evaluation metrics. The experiments demonstrate the effectiveness of our
proposed method for estimating and improving fairness in the deep
learning-based medical image analysis system.
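As a rough illustration of the training strategy described in the abstract (not the authors' released code), the sketch below assumes a shared encoder with three heads: the base classifier, a bias-discrimination head trained adversarially (here via a gradient-reversal layer, one common way to realize adversarial debiasing), and a critic head that predicts a per-sample unfairness score, plus an orthogonality penalty that keeps the two auxiliary heads independent. All module names, the gradient-reversal choice, the form of the orthogonality term, and the loss weights are assumptions made for illustration.

```python
# Hypothetical PyTorch sketch of adversarial multi-task debiasing with an
# orthogonality penalty; names and weights are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class FairClassifier(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=512, num_classes=7, num_groups=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.cls_head = nn.Linear(feat_dim, num_classes)   # base lesion classifier
        self.disc_head = nn.Linear(feat_dim, num_groups)   # bias discriminator (e.g. sex, skin tone)
        self.critic_head = nn.Linear(feat_dim, 1)          # predicts an unfairness score

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        logits = self.cls_head(z)
        # Adversarial branch: gradient reversal pushes the encoder to hide the attribute.
        group_logits = self.disc_head(GradReverse.apply(z, lambd))
        unfairness = self.critic_head(z)
        return logits, group_logits, unfairness

    def orthogonality_penalty(self):
        # Penalise alignment between the discriminator and critic weight rows,
        # encouraging the two auxiliary tasks to stay independent.
        w_d = F.normalize(self.disc_head.weight, dim=1)
        w_c = F.normalize(self.critic_head.weight, dim=1)
        return (w_d @ w_c.t()).pow(2).mean()


def training_step(model, x, y, group, unfair_target, alpha=1.0, beta=0.1):
    """One hypothetical training step combining the three tasks."""
    logits, group_logits, unfairness = model(x)
    loss = (F.cross_entropy(logits, y)                              # base task
            + alpha * F.cross_entropy(group_logits, group)          # adversarial bias term
            + F.binary_cross_entropy_with_logits(
                  unfairness.squeeze(1), unfair_target.float())     # critic term
            + beta * model.orthogonality_penalty())                 # independence penalty
    return loss
```

In this toy setup, unfair_target stands in for however per-sample unfairness is supervised in the actual method, which the abstract does not specify.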
Related papers
- DCAST: Diverse Class-Aware Self-Training Mitigates Selection Bias for Fairer Learning [0.0]
Bias unascribed to sensitive features is challenging to identify and typically goes undiagnosed.
Strategies to mitigate unidentified bias and evaluate mitigation methods are crucially needed, yet remain underexplored.
We introduce Diverse Class-Aware Self-Training (DCAST), model-agnostic mitigation aware of class-specific bias.
arXiv Detail & Related papers (2024-09-30T09:26:19Z)
- Refining Tuberculosis Detection in CXR Imaging: Addressing Bias in Deep Neural Networks via Interpretability [1.9936075659851882]
We argue that the reliability of deep learning models is limited, even if they can be shown to obtain perfect classification accuracy on the test data.
We show that pre-training a deep neural network on a large-scale proxy task, as well as using mixed objective optimization network (MOON), can improve the alignment of decision foundations between models and experts.
arXiv Detail & Related papers (2024-07-19T06:41:31Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Distraction is All You Need for Fairness [0.0]
We propose a strategy for training deep learning models called the Distraction module.
The method is theoretically shown to prevent bias from affecting the classification results.
We demonstrate the potency of the proposed method by testing it on UCI Adult and Heritage Health datasets.
arXiv Detail & Related papers (2022-03-15T01:46:55Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training.
We empirically demonstrate that our proposed approach efficiently mitigates biases on model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
- Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions [48.91284724066349]
Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education.
Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding.
We develop a method that could serve as a hybrid human-AI system, enabling human experts to analyze the validity of policy evaluation estimates.
arXiv Detail & Related papers (2020-02-10T00:26:43Z)
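For the Contrastive Learning for Fair Representations entry above, here is a minimal, hypothetical sketch of the stated mechanism: instances that share a class label are pulled together in representation space by a supervised contrastive term used alongside the usual classification loss. The loss form, temperature, and weighting are generic assumptions rather than that paper's exact formulation.

```python
# Hypothetical sketch: supervised contrastive regulariser that pulls together
# representations of instances sharing the same class label.
import torch
import torch.nn.functional as F


def same_class_contrastive_loss(features, labels, temperature=0.1):
    """features: (N, D) encoder outputs; labels: (N,) integer class labels."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                                   # pairwise similarities
    not_self = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self

    # Log-probability of each candidate j given anchor i, over non-self candidates.
    log_prob = sim - torch.logsumexp(sim.masked_fill(~not_self, float('-inf')),
                                     dim=1, keepdim=True)
    # Average over each anchor's same-class positives (anchors without positives are skipped).
    pos_count = pos_mask.sum(dim=1)
    valid = pos_count > 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1)[valid] / pos_count[valid]
    return -mean_log_prob_pos.mean()


# Usage sketch (hypothetical weighting):
# total_loss = F.cross_entropy(logits, y) + 0.5 * same_class_contrastive_loss(z, y)
```

Adding such a term to a standard cross-entropy objective is one simple way a contrastive regulariser could be combined with classifier training; the cited paper's actual objective may differ.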
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.