Stride-Net: Fairness-Aware Disentangled Representation Learning for Chest X-Ray Diagnosis
- URL: http://arxiv.org/abs/2602.10875v1
- Date: Wed, 11 Feb 2026 14:04:52 GMT
- Title: Stride-Net: Fairness-Aware Disentangled Representation Learning for Chest X-Ray Diagnosis
- Authors: Darakshan Rashid, Raza Imam, Dwarikanath Mahapatra, Brejesh Lall
- Abstract summary: Deep neural networks for chest X-ray classification achieve strong average performance, yet often underperform for specific demographic subgroups. We propose Stride-Net, a fairness-aware framework that learns disease-discriminative yet demographically invariant representations for chest X-ray analysis.
- Score: 13.827727377759361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks for chest X-ray classification achieve strong average performance, yet often underperform for specific demographic subgroups, raising critical concerns about clinical safety and equity. Existing debiasing methods frequently yield inconsistent improvements across datasets or attain fairness by degrading overall diagnostic utility, treating fairness as a post hoc constraint rather than a property of the learned representation. In this work, we propose Stride-Net (Sensitive Attribute Resilient Learning via Disentanglement and Learnable Masking with Embedding Alignment), a fairness-aware framework that learns disease-discriminative yet demographically invariant representations for chest X-ray analysis. Stride-Net operates at the patch level, using a learnable stride-based mask to select label-aligned image regions while suppressing sensitive attribute information through adversarial confusion loss. To anchor representations in clinical semantics and discourage shortcut learning, we further enforce semantic alignment between image features and BioBERT-based disease label embeddings via Group Optimal Transport. We evaluate Stride-Net on the MIMIC-CXR and CheXpert benchmarks across race and intersectional race-gender subgroups. Across architectures including ResNet and Vision Transformers, Stride-Net consistently improves fairness metrics while matching or exceeding baseline accuracy, achieving a more favorable accuracy-fairness trade-off than prior debiasing approaches. Our code is available at https://github.com/Daraksh/Fairness_StrideNet.
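The abstract describes two core mechanisms: a learnable mask that keeps only label-aligned patches, and an adversarial confusion loss that pushes the sensitive-attribute classifier toward uniform (uninformative) predictions. A minimal NumPy sketch of both ideas follows; this is an illustration under assumed simplifications (top-k patch selection as a stand-in for the learnable stride-based mask, hypothetical function names), not the authors' implementation:

```python
import numpy as np

def select_patches(features, scores, k):
    """Keep the k highest-scoring patches; a simplified stand-in for
    the learnable stride-based mask described in the paper."""
    idx = np.argsort(scores)[-k:]          # indices of the top-k patches
    mask = np.zeros_like(scores)
    mask[idx] = 1.0
    return features * mask[:, None], mask  # zero out suppressed patches

def confusion_loss(attr_probs):
    """Adversarial confusion objective: cross-entropy between the
    sensitive-attribute classifier's predictions and the uniform
    distribution. It is minimized exactly when the classifier cannot
    distinguish demographic subgroups from the representation."""
    n_classes = attr_probs.shape[-1]
    uniform = 1.0 / n_classes
    return float(-np.sum(uniform * np.log(attr_probs + 1e-12), axis=-1).mean())

# Toy example: 8 patches with 4-dimensional features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
scores = rng.normal(size=8)
masked, mask = select_patches(feats, scores, k=3)
print(int(mask.sum()))                                   # 3 patches kept
print(round(confusion_loss(np.full((1, 4), 0.25)), 4))   # log(4) ~ 1.3863
```

At the optimum of the confusion term, the attribute head's output matches the uniform distribution, so its cross-entropy against uniform equals log(number of subgroups), as the toy call shows.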
Related papers
- MIRNet: Integrating Constrained Graph-Based Reasoning with Pre-training for Diagnostic Medical Imaging [67.74482877175797]
MIRNet is a novel framework that integrates self-supervised pre-training with constrained graph-based reasoning. We introduce TongueAtlas-4K, a benchmark comprising 4,000 images annotated with 22 diagnostic labels.
arXiv Detail & Related papers (2025-11-13T06:30:41Z) - FairREAD: Re-fusing Demographic Attributes after Disentanglement for Fair Medical Image Classification [3.615240611746158]
We propose Fair Re-fusion After Disentanglement (FairREAD), a framework that mitigates unfairness by re-integrating sensitive demographic attributes into fair image representations. FairREAD employs adversarial training to disentangle demographic information while using a controlled re-fusion mechanism to preserve clinically relevant details. Comprehensive evaluations on a large-scale clinical X-ray dataset demonstrate that FairREAD significantly reduces unfairness metrics while maintaining diagnostic accuracy.
arXiv Detail & Related papers (2024-12-20T22:17:57Z) - Looking Beyond What You See: An Empirical Analysis on Subgroup Intersectional Fairness for Multi-label Chest X-ray Classification Using Social Determinants of Racial Health Inequities [4.351859373879489]
Inherited biases in deep learning models can lead to disparities in prediction accuracy across protected groups.
We propose a framework to achieve accurate diagnostic outcomes and ensure fairness across intersectional groups.
arXiv Detail & Related papers (2024-03-27T02:13:20Z) - Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z) - Cross-supervised Dual Classifiers for Semi-supervised Medical Image Segmentation [10.18427897663732]
Semi-supervised medical image segmentation offers a promising solution for large-scale medical image analysis.
This paper proposes a cross-supervised learning framework based on dual classifiers (DC-Net).
Experiments on LA and Pancreas-CT dataset illustrate that DC-Net outperforms other state-of-the-art methods for semi-supervised segmentation.
arXiv Detail & Related papers (2023-05-25T16:23:39Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Cascaded Robust Learning at Imperfect Labels for Chest X-ray Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from their peers.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z) - Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z) - Cross-denoising Network against Corrupted Labels in Medical Image Segmentation with Domain Shift [28.940670115918728]
We propose a novel cross-denoising framework using two peer networks to address domain shift and corrupted label problems.
Specifically, each network acts as a mentor, mutually supervised to learn from reliable samples selected by its peer network to combat corrupted labels.
In addition, a noise-tolerant loss is proposed to encourage the network to capture key locations and filter discrepancies under various noise-contaminated labels.
arXiv Detail & Related papers (2020-06-19T07:35:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.