AI analysis of medical images at scale as a health disparities probe: a feasibility demonstration using chest radiographs
- URL: http://arxiv.org/abs/2504.05990v1
- Date: Tue, 08 Apr 2025 12:53:14 GMT
- Title: AI analysis of medical images at scale as a health disparities probe: a feasibility demonstration using chest radiographs
- Authors: Heather M. Whitney, Hui Li, Karen Drukker, Elbert Huang, Maryellen L. Giger
- Abstract summary: Social determinants of health (SDOH) are domains frequently studied for potential association with health disparities. We developed a pipeline for using quantitative measures automatically extracted from medical images as inputs into health disparities index calculations. Large-scale AI analysis of medical images can serve as a probe for a novel data source for health disparities research.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Health disparities (differences in non-genetic conditions that influence health) can be associated with differences in burden of disease by groups within a population. Social determinants of health (SDOH) are domains such as health care access, dietary access, and economics frequently studied for potential association with health disparities. Evaluating SDOH-related phenotypes using routine medical images as data sources may enhance health disparities research. We developed a pipeline for using quantitative measures automatically extracted from medical images as inputs into health disparities index calculations. Our study focused on the use case of two SDOH demographic correlates (sex and race) and data extracted from chest radiographs of 1,571 unique patients. The likelihood of severe disease within the lung parenchyma from each image type, measured using an established deep learning model, was merged into a single numerical image-based phenotype for each patient. Patients were then separated into phenogroups by unsupervised clustering of the image-based phenotypes. The health rate for each phenogroup was defined as the median image-based phenotype for each SDOH used as inputs to four imaging-derived health disparities indices (iHDIs): one absolute measure (between-group variance) and three relative measures (index of disparity, Theil index, and mean log deviation). The iHDI measures demonstrated feasible values for each SDOH demographic correlate, showing potential for medical images to serve as a novel probe for health disparities. Large-scale AI analysis of medical images can serve as a probe for a novel data source for health disparities research.
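The abstract names four imaging-derived health disparities indices (iHDIs) computed from per-phenogroup health rates: between-group variance, index of disparity, Theil index, and mean log deviation. A minimal sketch of these measures, assuming the standard HD*Calc-style definitions with the weighted mean rate as the reference point; the paper's exact weighting and reference-group conventions may differ:

```python
import math

def ihdi_measures(rates, weights=None):
    """Compute four disparities indices from per-group health rates.

    rates:   positive health rate for each group (e.g. phenogroup medians)
    weights: population share of each group (defaults to equal shares)

    Uses standard definitions; the mean rate is taken as the reference
    for the index of disparity, which is one common convention.
    """
    n = len(rates)
    if weights is None:
        weights = [1.0 / n] * n
    mu = sum(w * r for w, r in zip(weights, rates))  # weighted mean rate

    # Absolute measure: between-group variance.
    bgv = sum(w * (r - mu) ** 2 for w, r in zip(weights, rates))

    # Relative measures (all zero when every group has the same rate).
    idisp = 100.0 * sum(abs(r - mu) for r in rates) / (n * mu)
    theil = sum(w * (r / mu) * math.log(r / mu) for w, r in zip(weights, rates))
    mld = sum(w * math.log(mu / r) for w, r in zip(weights, rates))

    return {"BGV": bgv, "IDisp": idisp, "Theil": theil, "MLD": mld}
```

With two equal-weight groups at rates 0.1 and 0.3, for example, all four measures are positive; when all groups share one rate, all four are zero.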
Related papers
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis
  Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models. Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation. Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
  arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Integrating Social Determinants of Health into Knowledge Graphs: Evaluating Prediction Bias and Fairness in Healthcare
  Social determinants of health (SDoH) play a crucial role in patient health outcomes, yet their integration into biomedical knowledge graphs remains underexplored. This study addresses this gap by constructing an SDoH-enriched knowledge graph using the MIMIC-III dataset and PrimeKG.
  arXiv Detail & Related papers (2024-11-29T20:35:01Z)
- FairSkin: Fair Diffusion for Skin Disease Image Generation
  The Diffusion Model (DM) has become a leading method for generating synthetic medical images, but it suffers from a critical twofold bias. We propose FairSkin, a novel DM framework that mitigates these biases through a three-level resampling mechanism. Our approach significantly improves the diversity and quality of generated images, contributing to more equitable skin disease detection in clinical settings.
  arXiv Detail & Related papers (2024-10-29T21:37:03Z)
- FedMedICL: Towards Holistic Evaluation of Distribution Shifts in Federated Medical Imaging
  FedMedICL is a unified framework and benchmark for holistically evaluating federated medical imaging challenges. We comprehensively evaluate several popular methods on six diverse medical imaging datasets. We find that a simple batch balancing technique surpasses advanced methods in average performance across FedMedICL experiments.
  arXiv Detail & Related papers (2024-07-11T19:12:23Z)
- On the notion of Hallucinations from the lens of Bias and Validity in Synthetic CXR Images
  Generative models, such as diffusion models, aim to mitigate data quality and clinical information disparities. At Stanford, researchers explored the utility of a fine-tuned Stable Diffusion model (RoentGen) for medical imaging data augmentation. We leveraged RoentGen to produce synthetic chest X-ray (CXR) images and assessed them for bias, validity, and hallucinations.
  arXiv Detail & Related papers (2023-12-12T04:41:20Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
  We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets. We have collected approximately 1.3 million medical images from 55 publicly available datasets. LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
  arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Mitigating Health Disparities in EHR via Deconfounder
  We propose a novel framework, Parity Medical Deconfounder (PriMeD), to deal with the disparity issue in healthcare datasets. PriMeD adopts a Conditional Variational Autoencoder (CVAE) to learn latent factors (substitute confounders) for observational data.
  arXiv Detail & Related papers (2022-10-28T05:16:50Z)
- HealthyGAN: Learning from Unannotated Medical Images to Detect Anomalies Associated with Human Disease
  Typical techniques in the current medical imaging literature have focused on deriving diagnostic models from healthy subjects only. HealthyGAN learns to translate images from a mixed dataset to only healthy images. Being one-directional, HealthyGAN relaxes the cycle-consistency requirement of existing unpaired image-to-image translation methods.
  arXiv Detail & Related papers (2022-09-05T08:10:52Z)
- RadFusion: Benchmarking Performance and Fairness for Multimodal Pulmonary Embolism Detection from CT and EHR
  We present RadFusion, a benchmark dataset of 1,794 patients with corresponding EHR data and CT scans labeled for pulmonary embolism. Our results suggest that integrating imaging and EHR data can improve classification performance without introducing large disparities in the true positive rate between population groups.
  arXiv Detail & Related papers (2021-11-23T06:10:07Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays
  We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays. We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
  arXiv Detail & Related papers (2021-03-19T14:13:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.