Incorporating Rather Than Eliminating: Achieving Fairness for Skin Disease Diagnosis Through Group-Specific Expert
- URL: http://arxiv.org/abs/2506.17787v1
- Date: Sat, 21 Jun 2025 18:42:00 GMT
- Title: Incorporating Rather Than Eliminating: Achieving Fairness for Skin Disease Diagnosis Through Group-Specific Expert
- Authors: Gelei Xu, Yuying Duan, Zheyuan Liu, Xueyang Li, Meng Jiang, Michael Lemmon, Wei Jin, Yiyu Shi
- Abstract summary: We introduce FairMoE, a framework that employs layer-wise mixture-of-experts modules to serve as group-specific learners. Unlike traditional methods that rigidly assign data based on group labels, FairMoE dynamically routes data to the most suitable expert.
- Score: 18.169924728540487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-based systems have achieved high accuracy in skin disease diagnostics but often exhibit biases across demographic groups, leading to inequitable healthcare outcomes and diminished patient trust. Most existing bias mitigation methods attempt to eliminate the correlation between sensitive attributes and diagnostic prediction, but those methods often degrade performance due to the loss of clinically relevant diagnostic cues. In this work, we propose an alternative approach that incorporates sensitive attributes to achieve fairness. We introduce FairMoE, a framework that employs layer-wise mixture-of-experts modules to serve as group-specific learners. Unlike traditional methods that rigidly assign data based on group labels, FairMoE dynamically routes data to the most suitable expert, making it particularly effective for handling cases near group boundaries. Experimental results show that, unlike previous fairness approaches that reduce performance, FairMoE achieves substantial accuracy improvements while preserving comparable fairness metrics.
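The abstract describes layer-wise mixture-of-experts modules that act as group-specific learners with data-driven routing. The sketch below illustrates what such a layer could look like; the module names, the linear gating network, and the soft weighting over experts are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupExpertLayer(nn.Module):
    """Sketch of a layer-wise mixture-of-experts block with one expert per
    demographic group. A learned router softly assigns each sample to experts,
    so samples near group boundaries can draw on several experts at once.
    (Illustrative only; not the FairMoE reference implementation.)"""

    def __init__(self, dim: int, num_groups: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(num_groups)]
        )
        self.router = nn.Linear(dim, num_groups)  # routing is data-driven, not label-driven

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.router(x), dim=-1)                      # (batch, num_groups)
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, num_groups, dim)
        return (weights.unsqueeze(-1) * expert_outs).sum(dim=1)          # weighted expert mixture

# Usage: stack such blocks layer-wise on top of a shared feature extractor.
layer = GroupExpertLayer(dim=256, num_groups=2)
fused = layer(torch.randn(8, 256))  # (8, 256)
```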
Related papers
- Comparative assessment of fairness definitions and bias mitigation strategies in machine learning-based diagnosis of Alzheimer's disease from MR images [4.569587135821805]
The present study performs a fairness analysis of machine learning (ML) models for the diagnosis of Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD) from MRI-derived neuroimaging features. Biases associated with age, race, and gender in a multi-cohort dataset are investigated. Results reveal the existence of biases related to age and race, while no significant gender bias is observed.
arXiv Detail & Related papers (2025-05-29T15:07:19Z)
- FairREAD: Re-fusing Demographic Attributes after Disentanglement for Fair Medical Image Classification [3.615240611746158]
We propose Fair Re-fusion After Disentanglement (FairREAD), a framework that mitigates unfairness by re-integrating sensitive demographic attributes into fair image representations. FairREAD employs adversarial training to disentangle demographic information while using a controlled re-fusion mechanism to preserve clinically relevant details. Comprehensive evaluations on a large-scale clinical X-ray dataset demonstrate that FairREAD significantly reduces unfairness metrics while maintaining diagnostic accuracy.
arXiv Detail & Related papers (2024-12-20T22:17:57Z)
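FairREAD (summarized above) disentangles demographic information adversarially and then re-fuses it in a controlled way. The sketch below illustrates one way a controlled re-fusion step could look; the gating design, dimensions, and names are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class ReFusionHead(nn.Module):
    """Sketch: fuse a demographics-disentangled image representation with a
    separately encoded sensitive-attribute embedding through a learned gate,
    so clinically relevant group information re-enters the classifier in a
    controlled way. (Illustrative only; not the FairREAD implementation.)"""

    def __init__(self, feat_dim: int, attr_dim: int, num_classes: int):
        super().__init__()
        self.attr_encoder = nn.Linear(attr_dim, feat_dim)
        self.gate = nn.Sequential(nn.Linear(feat_dim * 2, feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, fair_feat: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        attr_emb = self.attr_encoder(attrs)
        g = self.gate(torch.cat([fair_feat, attr_emb], dim=-1))  # how much attribute info to re-admit
        fused = fair_feat + g * attr_emb
        return self.classifier(fused)

head = ReFusionHead(feat_dim=128, attr_dim=4, num_classes=7)
logits = head(torch.randn(8, 128), torch.randn(8, 4))  # (8, 7)
```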
- Fair Diagnosis: Leveraging Causal Modeling to Mitigate Medical Bias [14.848344916632024]
In medical image analysis, model predictions can be affected by sensitive attributes, such as race and gender. We present a causal modeling framework, which aims to reduce the impact of sensitive attributes on diagnostic predictions.
arXiv Detail & Related papers (2024-12-06T02:59:36Z)
- Looking Beyond What You See: An Empirical Analysis on Subgroup Intersectional Fairness for Multi-label Chest X-ray Classification Using Social Determinants of Racial Health Inequities [4.351859373879489]
Inherited biases in deep learning models can lead to disparities in prediction accuracy across protected groups.
We propose a framework to achieve accurate diagnostic outcomes and ensure fairness across intersectional groups.
arXiv Detail & Related papers (2024-03-27T02:13:20Z)
- Achieving Reliable and Fair Skin Lesion Diagnosis via Unsupervised Domain Adaptation [43.1078084014722]
Unsupervised domain adaptation (UDA) can integrate large external datasets for developing reliable classifiers.
UDA can effectively mitigate bias against minority groups and enhance fairness in diagnostic systems.
arXiv Detail & Related papers (2023-07-06T17:32:38Z)
- Generative models improve fairness of medical classifiers under distribution shifts [49.10233060774818]
We show that learning realistic augmentations automatically from data is possible in a label-efficient manner using generative models.
We demonstrate that these learned augmentations can surpass heuristic ones by making models more robust and statistically fair in- and out-of-distribution.
arXiv Detail & Related papers (2023-04-18T18:15:38Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
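FairAdaBN (summarized above) makes batch normalization adaptive to the sensitive attribute. The sketch below shows group-conditional batch normalization, one plausible reading of that idea; keeping one BatchNorm per group and selecting it by the attribute label is an assumption for illustration, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class GroupAdaptiveBN(nn.Module):
    """Sketch: keep a separate BatchNorm per sensitive group so normalization
    statistics and affine parameters can adapt to each group.
    (Illustrative only; not the FairAdaBN reference implementation.)"""

    def __init__(self, num_features: int, num_groups: int):
        super().__init__()
        self.bns = nn.ModuleList([nn.BatchNorm2d(num_features) for _ in range(num_groups)])

    def forward(self, x: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        for g, bn in enumerate(self.bns):
            mask = group == g            # samples belonging to group g
            if mask.any():
                out[mask] = bn(x[mask])  # normalize with that group's statistics
        return out

gbn = GroupAdaptiveBN(num_features=16, num_groups=2)
x = torch.randn(8, 16, 32, 32)
group = torch.randint(0, 2, (8,))
y = gbn(x, group)  # (8, 16, 32, 32)
```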
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
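DEviS (summarized above) uses subjective logic to model probability and uncertainty explicitly. The sketch below shows the standard subjective-logic parameterization of segmentation logits as Dirichlet evidence with a per-pixel uncertainty mass; it illustrates the general technique rather than the paper's DEviS module.

```python
import torch
import torch.nn.functional as F

def subjective_logic_outputs(logits: torch.Tensor):
    """Sketch of subjective-logic outputs for segmentation logits of shape
    (batch, num_classes, H, W): non-negative evidence defines a Dirichlet,
    giving class probabilities plus an explicit uncertainty mass per pixel.
    (Illustrative only; not the DEviS implementation.)"""
    evidence = F.softplus(logits)                  # non-negative evidence per class
    alpha = evidence + 1.0                         # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)      # Dirichlet strength S
    prob = alpha / strength                        # expected class probabilities
    num_classes = logits.shape[1]
    uncertainty = num_classes / strength           # vacuity: high where evidence is low
    return prob, uncertainty

logits = torch.randn(2, 4, 64, 64)
prob, unc = subjective_logic_outputs(logits)       # prob: (2, 4, 64, 64), unc: (2, 1, 64, 64)
```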
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale public-available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
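The adversarial-learning entry above trains a bias-detection module alongside the base classifier. The sketch below shows generic adversarial debiasing with a gradient-reversal layer; this is a common stand-in for such setups and is not claimed to match the paper's discrimination and critic modules.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialDebiasNet(nn.Module):
    """Sketch: a shared encoder feeds a disease classifier and, through a
    gradient-reversal layer, a bias discriminator that tries to predict the
    sensitive attribute. The encoder is pushed to keep diagnostic signal while
    removing attribute-predictive signal. (Illustrative only.)"""

    def __init__(self, in_dim: int, num_classes: int, num_groups: int, lam: float = 1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.classifier = nn.Linear(128, num_classes)
        self.discriminator = nn.Linear(128, num_groups)
        self.lam = lam

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        class_logits = self.classifier(z)
        group_logits = self.discriminator(GradReverse.apply(z, self.lam))
        return class_logits, group_logits

net = AdversarialDebiasNet(in_dim=512, num_classes=7, num_groups=2)
cls_logits, grp_logits = net(torch.randn(8, 512))
# Train with cross-entropy on both heads; gradient reversal makes the encoder
# work against the discriminator while still serving the classifier.
```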
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
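The semi-supervised entry above exploits unlabeled data through prediction consistency under perturbations. The sketch below shows a minimal consistency loss of that kind; the Gaussian-noise perturbation and MSE objective are illustrative assumptions and omit the paper's relation-driven component.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_loss(model: nn.Module, x_unlabeled: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Sketch: penalize disagreement between predictions on two perturbed views
    of the same unlabeled input. (Illustrative only; the paper's method
    additionally models relations between samples.)"""
    view1 = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    view2 = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    p1 = F.softmax(model(view1), dim=-1)
    with torch.no_grad():                 # treat one view as a fixed target
        p2 = F.softmax(model(view2), dim=-1)
    return F.mse_loss(p1, p2)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))
loss = consistency_loss(model, torch.randn(16, 32))
```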