Achieving Fairness Without Harm via Selective Demographic Experts
- URL: http://arxiv.org/abs/2511.06293v1
- Date: Sun, 09 Nov 2025 09:11:02 GMT
- Title: Achieving Fairness Without Harm via Selective Demographic Experts
- Authors: Xuwei Tan, Yuanlong Wang, Thai-Hoang Pham, Ping Zhang, Xueru Zhang
- Abstract summary: Bias mitigation techniques often impose a trade-off between fairness and accuracy. In high-stakes domains like clinical diagnosis, such trade-offs are ethically and practically unacceptable. We propose a fairness-without-harm approach by learning distinct representations for different demographic groups.
- Score: 16.212815178841087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning systems become increasingly integrated into human-centered domains such as healthcare, ensuring fairness while maintaining high predictive performance is critical. Existing bias mitigation techniques often impose a trade-off between fairness and accuracy, inadvertently degrading performance for certain demographic groups. In high-stakes domains like clinical diagnosis, such trade-offs are ethically and practically unacceptable. In this study, we propose a fairness-without-harm approach by learning distinct representations for different demographic groups and selectively applying demographic experts consisting of group-specific representations and personalized classifiers through a no-harm constrained selection. We evaluate our approach on three real-world medical datasets -- covering eye disease, skin cancer, and X-ray diagnosis -- as well as two face datasets. Extensive empirical results demonstrate the effectiveness of our approach in achieving fairness without harm.
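The selection step described in the abstract lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of the idea: group-specific experts (a group representation plus a personalized classifier) sit on top of a shared encoder, and an expert is enabled for a group only if it matches or beats the shared head on that group's validation data (the no-harm constraint). All names, shapes, and the exact selection rule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DemographicExperts(nn.Module):
    """Shared encoder + shared head, plus one expert per demographic group."""
    def __init__(self, encoder, feat_dim, n_classes, n_groups):
        super().__init__()
        self.encoder = encoder
        self.shared_head = nn.Linear(feat_dim, n_classes)
        # group-specific representation + personalized classifier, per group
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, n_classes))
            for _ in range(n_groups))

    def forward(self, x, group=None, use_expert=None):
        z = self.encoder(x)
        if group is None or not (use_expert or {}).get(group, False):
            return self.shared_head(z)      # fall back to the shared model
        return self.experts[group](z)

@torch.no_grad()
def no_harm_selection(model, val_loaders):
    """Enable a group's expert only if it does not underperform the shared
    head on that group's validation data -- the 'no-harm' constraint."""
    use_expert = {}
    for g, loader in val_loaders.items():   # one loader per demographic group
        shared_hits = expert_hits = 0
        for x, y in loader:
            z = model.encoder(x)
            shared_hits += (model.shared_head(z).argmax(1) == y).sum().item()
            expert_hits += (model.experts[g](z).argmax(1) == y).sum().item()
        use_expert[g] = expert_hits >= shared_hits
    return use_expert
```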
Related papers
- Medical Imaging AI Competitions Lack Fairness [50.895929923643905]
We assess fairness along two complementary dimensions: whether challenge datasets are representative of real-world clinical diversity, and whether they are accessible and legally reusable in line with the FAIR principles. Our findings show substantial biases in dataset composition, including geographic location, modality, and problem type-related biases, indicating that current benchmarks do not adequately reflect real-world clinical diversity. These shortcomings expose foundational limitations in our benchmarking ecosystem and highlight a disconnect between leaderboard success and clinical relevance.
arXiv Detail & Related papers (2025-12-19T13:48:10Z) - Incorporating Rather Than Eliminating: Achieving Fairness for Skin Disease Diagnosis Through Group-Specific Expert [18.169924728540487]
We introduce FairMoE, a framework that employs layer-wise mixture-of-experts modules to serve as group-specific learners. Unlike traditional methods that rigidly assign data based on group labels, FairMoE dynamically routes data to the most suitable expert.
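A minimal sketch of the routing idea (not FairMoE's actual code): a learned router produces per-sample mixture weights over group-oriented experts, so assignment is soft rather than fixed by group label. Dimensions and module names are assumptions.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """One layer-wise MoE block: a router softly mixes group-oriented experts."""
    def __init__(self, dim, n_experts):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)   # produces routing logits
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU())
            for _ in range(n_experts))

    def forward(self, x):
        gate = torch.softmax(self.router(x), dim=-1)               # (B, E)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)   # (B, D, E)
        return torch.einsum("bde,be->bd", outs, gate)              # soft mixture

x = torch.randn(8, 64)
print(MoELayer(64, n_experts=2)(x).shape)   # torch.Size([8, 64])
```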
arXiv Detail & Related papers (2025-06-21T18:42:00Z) - FairREAD: Re-fusing Demographic Attributes after Disentanglement for Fair Medical Image Classification [3.615240611746158]
We propose Fair Re-fusion After Disentanglement (FairREAD), a framework that mitigates unfairness by re-integrating sensitive demographic attributes into fair image representations. FairREAD employs adversarial training to disentangle demographic information while using a controlled re-fusion mechanism to preserve clinically relevant details. Comprehensive evaluations on a large-scale clinical X-ray dataset demonstrate that FairREAD significantly reduces unfairness metrics while maintaining diagnostic accuracy.
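The described pipeline can be sketched as follows; the gradient-reversal adversary and gated re-fusion below are plausible stand-ins for the summary's two mechanisms, not FairREAD's released implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

class DisentangleRefuse(nn.Module):
    def __init__(self, feat_dim, n_attrs, n_classes):
        super().__init__()
        self.adv = nn.Linear(feat_dim, n_attrs)   # tries to recover demographics
        self.fuse = nn.Linear(feat_dim + n_attrs, feat_dim)
        self.cls = nn.Linear(feat_dim, n_classes)

    def forward(self, z, attrs):
        # adversarial disentanglement: reversed gradients push z to drop attrs
        adv_logits = self.adv(GradReverse.apply(z))
        # controlled re-fusion: re-inject the raw attributes through a gate
        z_fused = torch.tanh(self.fuse(torch.cat([z, attrs], dim=1)))
        return self.cls(z_fused), adv_logits
```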
arXiv Detail & Related papers (2024-12-20T22:17:57Z) - Fair Distillation: Teaching Fairness from Biased Teachers in Medical Imaging [16.599189934420885]
We propose the Fair Distillation (FairDi) method to address fairness concerns in deep learning.
We show that FairDi achieves significant gains in both overall and group-specific accuracy, along with improved fairness, compared to existing methods.
FairDi is adaptable to various medical tasks, such as classification and segmentation, and provides an effective solution for equitable model performance.
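The distillation objective suggested by the summary might look like the sketch below, where a single student matches the softened predictions of the teacher trained for each sample's own demographic group; the loss weighting, temperature, and teacher selection are assumptions.

```python
import torch.nn.functional as F

def fair_distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """teacher_logits: predictions of the teacher assigned to each sample's
    own demographic group, selected outside this function."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)   # soften, then rescale
    ce = F.cross_entropy(student_logits, labels)     # ground-truth supervision
    return alpha * kd + (1 - alpha) * ce
```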
arXiv Detail & Related papers (2024-11-18T16:50:34Z) - Evaluating Fair Feature Selection in Machine Learning for Healthcare [0.9222623206734782]
We explore algorithmic fairness from the perspective of feature selection.
We evaluate a fair feature selection method that considers equal importance to all demographic groups.
We tested our approach on three publicly available healthcare datasets.
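One way to operationalize "equal importance to all demographic groups" is to rank features by their worst-case per-group relevance, as in the sketch below; the mutual-information scoring rule is an assumption, not necessarily the paper's exact criterion.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fair_select(X, y, groups, k):
    """Keep the k features with the highest worst-case per-group relevance."""
    per_group = [mutual_info_classif(X[groups == g], y[groups == g],
                                     random_state=0)
                 for g in np.unique(groups)]
    worst_case = np.min(per_group, axis=0)   # score under the worst-off group
    return np.argsort(worst_case)[::-1][:k]  # indices of the selected features
```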
arXiv Detail & Related papers (2024-03-28T06:24:04Z) - Generative models improve fairness of medical classifiers under distribution shifts [49.10233060774818]
We show that learning realistic augmentations automatically from data is possible in a label-efficient manner using generative models.
We demonstrate that these learned augmentations can surpass heuristic ones by making models more robust and statistically fair in- and out-of-distribution.
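A toy sketch of the training-time idea, assuming a pretrained label-conditional generator with a hypothetical `sample` interface:

```python
import torch

def augmented_batch(x_real, y_real, generator, frac_synth=0.5):
    """Extend a real batch with label-conditioned synthetic samples."""
    n_synth = int(frac_synth * len(x_real))
    y_synth = y_real[torch.randperm(len(y_real))[:n_synth]]  # reuse label mix
    x_synth = generator.sample(y_synth)   # hypothetical conditional sampler
    return torch.cat([x_real, x_synth]), torch.cat([y_real, y_synth])
```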
arXiv Detail & Related papers (2023-04-18T18:15:38Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
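A benchmark of this kind needs per-group metrics for every method; a small illustrative helper (not the paper's evaluation code):

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy per demographic group plus the largest pairwise gap."""
    accs = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}
    return accs, max(accs.values()) - min(accs.values())
```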
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Fairness-aware Model-agnostic Positive and Unlabeled Learning [38.50536380390474]
We propose a fairness-aware Positive and Unlabeled Learning (PUL) method named FairPUL.
For binary classification over individuals from two populations, we aim to achieve similar true positive rates and false positive rates.
Our framework is proven to be statistically consistent in terms of both the classification error and the fairness metric.
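The fairness target named here (similar true and false positive rates across two populations) is equalized odds; below is a sketch of the gap computation only, with FairPUL's positive-unlabeled estimation machinery omitted.

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, groups):
    """TPR/FPR differences between two populations (binary labels/predictions)."""
    out = {}
    for y_val, name in ((1, "tpr_gap"), (0, "fpr_gap")):
        rates = [y_pred[(groups == g) & (y_true == y_val)].mean()
                 for g in np.unique(groups)]
        out[name] = float(abs(rates[0] - rates[1]))   # assumes two groups
    return out
```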
arXiv Detail & Related papers (2022-06-19T08:04:23Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
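Quantification here means estimating group prevalences from an imperfect attribute classifier rather than classifying individuals. A standard estimator in this family is adjusted classify-and-count, sketched below as a known baseline, not necessarily the paper's exact method:

```python
import numpy as np

def adjusted_classify_and_count(attr_preds, tpr, fpr):
    """Correct a naive prevalence estimate using the attribute classifier's
    TPR/FPR measured on a small labeled holdout."""
    raw = float(np.mean(attr_preds))          # fraction predicted as the group
    return float(np.clip((raw - fpr) / (tpr - fpr), 0.0, 1.0))
```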
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
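The two auxiliary modules named in the summary can be sketched as heads on the base classifier's features; the wiring below is an assumption about the described design, not the authors' code.

```python
import torch.nn as nn

class FairnessHeads(nn.Module):
    """Two auxiliary heads on the base model's features: a discriminator that
    tries to recover the sensitive attribute (trained adversarially against
    the encoder) and a critic that predicts a per-sample unfairness score."""
    def __init__(self, feat_dim, n_groups):
        super().__init__()
        self.discriminator = nn.Linear(feat_dim, n_groups)
        self.critic = nn.Linear(feat_dim, 1)

    def forward(self, z):
        return self.discriminator(z), self.critic(z).squeeze(1)
```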
arXiv Detail & Related papers (2021-03-07T03:10:32Z)