Toward Fairness Through Fair Multi-Exit Framework for Dermatological
Disease Diagnosis
- URL: http://arxiv.org/abs/2306.14518v2
- Date: Sat, 1 Jul 2023 10:05:15 GMT
- Authors: Ching-Hao Chiu, Hao-Wei Chung, Yu-Jen Chen, Yiyu Shi, Tsung-Yi Ho
- Abstract summary: We develop a fairness-oriented framework for medical image recognition.
Our framework improves fairness over the state-of-the-art on two dermatological disease datasets.
- Score: 16.493514215214983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness has become increasingly pivotal in medical image recognition.
However, without mitigating bias, deploying unfair medical AI systems could
harm the interests of underprivileged populations. In this paper, we observe
that while features extracted from the deeper layers of neural networks
generally offer higher accuracy, fairness conditions deteriorate as we extract
features from deeper layers. This phenomenon motivates us to extend the concept
of multi-exit frameworks. Unlike existing works mainly focusing on accuracy,
our multi-exit framework is fairness-oriented: its internal classifiers are
trained to be both more accurate and fairer, and it extends readily to
most existing fairness-aware frameworks. During inference, any instance with
high confidence from an internal classifier is allowed to exit early.
Experimental results show that the proposed framework can improve the fairness
condition over the state-of-the-art on two dermatological disease datasets.
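The early-exit inference rule described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the names (`predict_with_early_exit`, `heads`, `threshold`) and the toy classifier heads are assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_early_exit(heads, x, threshold=0.9):
    """Run internal classifier heads from shallowest to deepest and
    return (predicted_class, exit_depth) for the first head whose
    confidence (max softmax probability) reaches `threshold`."""
    probs = None
    for depth, head in enumerate(heads):
        probs = softmax(head(x))
        confidence = max(probs)
        if confidence >= threshold:
            return probs.index(confidence), depth  # early exit here
    # No head was confident enough: fall back to the deepest classifier.
    return probs.index(max(probs)), len(heads) - 1

# Toy heads: the shallow head is uncertain, the deep head is confident.
shallow = lambda x: [0.1, 0.2]
deep = lambda x: [5.0, 0.0]
print(predict_with_early_exit([shallow, deep], None))  # prints (0, 1)
```

In the fairness-oriented variant, each internal head would additionally be trained with a fairness-aware objective, so that confident early exits come from classifiers that are themselves fairer.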
Related papers
- Achieving Fairness Through Channel Pruning for Dermatological Disease Diagnosis [18.587384389499768]
We propose an innovative and adaptable Soft Nearest Neighbor Loss-based channel pruning framework.
Our work demonstrates that pruning can also be a potent tool for achieving fairness.
Experiments conducted on two skin lesion diagnosis datasets validate the effectiveness of our method.
arXiv Detail & Related papers (2024-05-14T15:04:46Z)
- Generative models improve fairness of medical classifiers under distribution shifts [49.10233060774818]
We show that learning realistic augmentations automatically from data is possible in a label-efficient manner using generative models.
We demonstrate that these learned augmentations can surpass heuristic ones, making models more robust and statistically fair both in- and out-of-distribution.
arXiv Detail & Related papers (2023-04-18T18:15:38Z)
- Fair Multi-Exit Framework for Facial Attribute Classification [16.493514215214983]
In this paper, we extend the concept of multi-exit framework.
Unlike existing works mainly focusing on accuracy, our multi-exit framework is fairness-oriented.
Experimental results show that the proposed framework substantially improves the fairness condition over the state-of-the-art on the CelebA and UTKFace datasets.
arXiv Detail & Related papers (2023-01-08T06:18:51Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking this perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods [9.152759278163954]
This work presents two novel intra-processing techniques based on fine-tuning and pruning an already-trained neural network.
To the best of our knowledge, this is one of the first efforts studying debiasing methods on chest radiographs.
arXiv Detail & Related papers (2022-07-26T10:18:59Z)
- Fairness-aware Model-agnostic Positive and Unlabeled Learning [38.50536380390474]
We propose a fairness-aware Positive and Unlabeled Learning (PUL) method named FairPUL.
For binary classification over individuals from two populations, we aim to achieve similar true positive rates and false positive rates.
Our framework is proven to be statistically consistent in terms of both the classification error and the fairness metric.
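The fairness criterion in the summary above (similar true positive and false positive rates across two populations, i.e. equalized odds) can be made concrete with a minimal sketch. The function names are hypothetical and not taken from the FairPUL paper.

```python
def group_rate(labels, preds, groups, g, label_value):
    """Rate of positive predictions among members of population `g`
    whose true label equals `label_value` (TPR if 1, FPR if 0)."""
    hits = [preds[i] for i in range(len(labels))
            if groups[i] == g and labels[i] == label_value]
    return sum(hits) / len(hits) if hits else 0.0

def equalized_odds_gaps(labels, preds, groups):
    """Absolute TPR and FPR differences between populations 0 and 1;
    both gaps are zero iff the classifier satisfies equalized odds."""
    tpr_gap = abs(group_rate(labels, preds, groups, 0, 1)
                  - group_rate(labels, preds, groups, 1, 1))
    fpr_gap = abs(group_rate(labels, preds, groups, 0, 0)
                  - group_rate(labels, preds, groups, 1, 0))
    return tpr_gap, fpr_gap

# A maximally unfair toy classifier: correct on group 0, wrong on group 1.
labels = [1, 0, 1, 0]
preds  = [1, 0, 0, 1]
groups = [0, 0, 1, 1]
print(equalized_odds_gaps(labels, preds, groups))  # prints (1.0, 1.0)
```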
arXiv Detail & Related papers (2022-06-19T08:04:23Z)
- Diagnosing failures of fairness transfer across distribution shift in real-world medical settings [60.44405686433434]
Diagnosing and mitigating changes in model fairness under distribution shift is an important component of the safe deployment of machine learning in healthcare settings.
We show that this knowledge can help diagnose failures of fairness transfer, including cases where real-world shifts are more complex than is often assumed in the literature.
arXiv Detail & Related papers (2022-02-02T13:59:23Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Technical Challenges for Training Fair Neural Networks [62.466658247995404]
We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives, and produce a range of unintended and undesirable consequences.
arXiv Detail & Related papers (2021-02-12T20:36:45Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.