Towards reliable use of artificial intelligence to classify otitis media using otoscopic images: Addressing bias and improving data quality
- URL: http://arxiv.org/abs/2507.18842v1
- Date: Thu, 24 Jul 2025 22:44:01 GMT
- Title: Towards reliable use of artificial intelligence to classify otitis media using otoscopic images: Addressing bias and improving data quality
- Authors: Yixi Xu, Al-Rahim Habib, Graeme Crossland, Hemi Patel, Chris Perry, Kris Bock, Tony Lian, William B. Weeks, Rahul Dodhia, Juan Lavista Ferres, Narinder Pal Singh
- Abstract summary: This study systematically evaluated three public otoscopic image datasets (Chile; Ohio, USA; Türkiye) using quantitative and qualitative methods. Quantitative analysis revealed significant biases in the Chile and Ohio, USA datasets. Addressing these biases through standardized imaging protocols, diverse dataset inclusion, and improved labeling methods is crucial.
- Score: 1.5600956077751196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ear disease contributes significantly to global hearing loss, with recurrent otitis media being a primary preventable cause in children, impacting development. Artificial intelligence (AI) offers promise for early diagnosis via otoscopic image analysis, but dataset biases and inconsistencies limit model generalizability and reliability. This retrospective study systematically evaluated three public otoscopic image datasets (Chile; Ohio, USA; Türkiye) using quantitative and qualitative methods. Two counterfactual experiments were performed: (1) obscuring clinically relevant features to assess model reliance on non-clinical artifacts, and (2) evaluating the impact of hue, saturation, and value on diagnostic outcomes. Quantitative analysis revealed significant biases in the Chile and Ohio, USA datasets. Counterfactual Experiment I found high internal performance (AUC > 0.90) but poor external generalization because of dataset-specific artifacts. The Türkiye dataset had fewer biases, with AUC decreasing from 0.86 to 0.65 as masking increased, suggesting higher reliance on clinically meaningful features. Counterfactual Experiment II identified common artifacts in the Chile and Ohio, USA datasets. A logistic regression model trained on clinically irrelevant features from the Chile dataset achieved high internal (AUC = 0.89) and external (Ohio, USA: AUC = 0.87) performance. Qualitative analysis identified redundancy in all the datasets and stylistic biases in the Ohio, USA dataset that correlated with clinical outcomes. In summary, dataset biases significantly compromise the reliability and generalizability of AI-based otoscopic diagnostic models. Addressing these biases through standardized imaging protocols, diverse dataset inclusion, and improved labeling methods is crucial for developing robust AI solutions, improving access to high-quality healthcare, and enhancing diagnostic accuracy.
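The paper's code is not reproduced here, but the Counterfactual Experiment II idea, checking whether clinically irrelevant color statistics alone predict the diagnosis, is easy to sketch. In the minimal Python sketch below, the images.csv file, its filepath/label columns, and the choice of mean hue, saturation, and value as features are illustrative assumptions, not the authors' exact pipeline:

```python
# Minimal sketch (not the authors' code): train a logistic regression on nothing but
# mean hue, saturation, and value per image and see whether that alone predicts the label.
# images.csv with "filepath" and "label" columns is a hypothetical input layout.
import numpy as np
import pandas as pd
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def mean_hsv(path: str) -> np.ndarray:
    """Mean hue, saturation, and value of one otoscopic image."""
    hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV"), dtype=float)
    return hsv.reshape(-1, 3).mean(axis=0)

df = pd.read_csv("images.csv")                      # columns: filepath, label (1 = otitis media)
X = np.stack([mean_hsv(p) for p in df["filepath"]])
y = df["label"].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"AUC from color statistics alone: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")
```

An AUC far above 0.5 from three color summaries, comparable to the internal AUC of 0.89 and external AUC of 0.87 the authors report, would indicate that a deep model can score well on such a dataset without attending to the tympanic membrane at all.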
Related papers
- Predictive Representativity: Uncovering Racial Bias in AI-based Skin Cancer Detection [0.0]
This paper introduces the concept of Predictive Representativity (PR). PR shifts the focus from the composition of the dataset to outcomes-level equity. Our analysis reveals substantial performance disparities by skin phototype.
arXiv Detail & Related papers (2025-07-10T22:21:06Z)
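A hedged sketch of the outcomes-level audit that entry argues for: compute the same performance metric separately per skin-phototype subgroup and report the spread. The predictions.csv layout, its column names, and the use of AUC as the metric are assumptions for illustration, not the paper's protocol.

```python
# Sketch: per-subgroup performance audit (illustrative, not the paper's code).
# predictions.csv is assumed to hold one row per case: y_true, y_score, phototype.
# Each phototype subgroup must contain both classes for AUC to be defined.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("predictions.csv")
per_group = df.groupby("phototype").apply(
    lambda g: roc_auc_score(g["y_true"], g["y_score"])
)
print(per_group)
print(f"AUC gap between best- and worst-served subgroup: {per_group.max() - per_group.min():.3f}")
```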
- Metrics that matter: Evaluating image quality metrics for medical image generation [48.85783422900129]
This study comprehensively assesses commonly used no-reference image quality metrics using brain MRI data. We evaluate metric sensitivity to a range of challenges, including noise, distribution shifts, and, critically, morphological alterations designed to mimic clinically relevant inaccuracies.
arXiv Detail & Related papers (2025-05-12T01:57:25Z)
- Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework [8.017827642932746]
Generalized Attribute Utility and Detectability-Induced bias Testing (G-AUDIT) for datasets is a modality-agnostic dataset auditing framework. Our method examines the relationship between task-level annotations and data properties, including patient attributes. G-AUDIT successfully identifies subtle biases commonly overlooked by traditional qualitative methods.
arXiv Detail & Related papers (2025-03-13T02:16:48Z)
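The published G-AUDIT framework is more involved, but its central question, how detectable the task label is from data properties that should be clinically irrelevant, can be approximated with a metadata-only classifier. In this sketch the metadata.csv file and its attribute columns (site, sex, age, device_model) are hypothetical placeholders:

```python
# Sketch: can the label be predicted from metadata alone? (not the G-AUDIT code)
# metadata.csv is assumed to have a binary "label" column plus attribute columns.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("metadata.csv")
attributes = ["site", "sex", "age", "device_model"]   # hypothetical columns
X = pd.get_dummies(df[attributes], drop_first=True)   # one-hot encode categoricals
y = df["label"]

# AUC well above 0.5 means the attributes "leak" the label: a shortcut a model could exploit.
scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5, scoring="roc_auc")
print(f"metadata-only AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```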
- Machine Learning for ALSFRS-R Score Prediction: Making Sense of the Sensor Data [44.99833362998488]
Amyotrophic Lateral Sclerosis (ALS) is a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options.
The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app.
arXiv Detail & Related papers (2024-07-10T19:17:23Z)
- The Limits of Fair Medical Imaging AI In The Wild [43.97266228706059]
We investigate the extent to which medical AI utilizes demographic encodings.
We confirm that medical imaging AI leverages demographic shortcuts in disease classification.
We find that models with less encoding of demographic attributes are often most "globally optimal".
arXiv Detail & Related papers (2023-12-11T18:59:50Z)
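A common way to measure the "demographic encodings" that entry refers to is a linear probe: freeze the diagnostic model, extract its penultimate-layer features, and check how well a linear classifier can recover a protected attribute from them. The sketch below assumes such features have already been exported to .npy files; it is not the paper's pipeline.

```python
# Sketch: linear probe for demographic encoding (illustrative).
# features.npy: (n_samples, n_features) penultimate-layer activations from a frozen model.
# attrs.npy:    (n_samples,) integer-coded demographic attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

features = np.load("features.npy")
attrs = np.load("attrs.npy")

# Balanced accuracy near chance => little demographic information in the features;
# much higher => the representation encodes the attribute and may act as a shortcut.
probe = LogisticRegression(max_iter=2000)
scores = cross_val_score(probe, features, attrs, cv=5, scoring="balanced_accuracy")
print(f"probe balanced accuracy: {scores.mean():.2f}")
```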
- The Utility of the Virtual Imaging Trials Methodology for Objective Characterization of AI Systems and Training Data [1.6040478776985583]
The study was conducted for the case example of COVID-19 diagnosis using clinical and virtual computed tomography (CT) and chest radiography (CXR) processed with convolutional neural networks. Multiple AI models were developed and tested using 3D ResNet-like and 2D EfficientNetV2 architectures across diverse datasets. The VIT approach can be used to enhance model transparency and reliability, offering nuanced insights into the factors driving AI performance and bridging the gap between experimental and clinical settings.
arXiv Detail & Related papers (2023-08-17T19:12:32Z)
- Generative models improve fairness of medical classifiers under distribution shifts [49.10233060774818]
We show that learning realistic augmentations automatically from data is possible in a label-efficient manner using generative models.
We demonstrate that these learned augmentations can surpass heuristic ones by making models more robust and statistically fair in- and out-of-distribution.
arXiv Detail & Related papers (2023-04-18T18:15:38Z)
- Key-Exchange Convolutional Auto-Encoder for Data Augmentation in Early Knee Osteoarthritis Detection [8.193689534916988]
Key-Exchange Convolutional Auto-Encoder (KECAE) is an AI-based data augmentation strategy for early KOA classification. Our model employs a convolutional autoencoder with a novel key-exchange mechanism that generates synthetic images. Experimental results demonstrate that the KECAE-generated data significantly improve the performance of KOA classification models.
arXiv Detail & Related papers (2023-02-26T15:45:19Z)
- Learning brain MRI quality control: a multi-factorial generalization problem [0.0]
This work aimed to evaluate the performance of the MRIQC pipeline on various large-scale datasets.
We focused our analysis on the MRIQC preprocessing steps and tested the pipeline with and without them.
We concluded that a model trained with data from a heterogeneous population, such as the CATI dataset, provides the best scores on unseen data.
arXiv Detail & Related papers (2022-05-31T15:46:44Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
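The core of that contrastive setup, pairing each encounter embedding with a "positive" chosen by an EHR-specific sampling rule and contrasting it against the rest of the batch, can be sketched with a standard InfoNCE loss in PyTorch. The same-patient sampling rule and the `encoder` referenced in the usage comment are illustrative assumptions, not necessarily either of the paper's two strategies.

```python
# Sketch: InfoNCE loss over EHR embeddings with a same-patient positive-sampling rule.
import torch
import torch.nn.functional as F

def info_nce(anchor_emb, positive_emb, temperature=0.1):
    """anchor_emb, positive_emb: (batch, dim); positives are row-aligned pairs."""
    a = F.normalize(anchor_emb, dim=1)
    p = F.normalize(positive_emb, dim=1)
    logits = a @ p.t() / temperature                      # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # the matching row is the positive
    return F.cross_entropy(logits, targets)

def sample_positive_indices(patient_ids):
    """For each anchor encounter, pick another encounter of the same patient (itself if alone)."""
    ids = patient_ids.tolist()
    pos = []
    for i, pid in enumerate(ids):
        candidates = [j for j, q in enumerate(ids) if q == pid and j != i]
        pos.append(candidates[0] if candidates else i)
    return torch.tensor(pos)

# usage (hypothetical encoder and batch tensors):
#   emb = encoder(batch_features)
#   loss = info_nce(emb, emb[sample_positive_indices(batch_patient_ids)])
```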
- Deep learning-based COVID-19 pneumonia classification using chest CT images: model generalizability [54.86482395312936]
Deep learning (DL) classification models were trained to identify COVID-19-positive patients on 3D computed tomography (CT) datasets from different countries.
We trained nine identical DL-based classification models using combinations of the datasets with a 72% train, 8% validation, and 20% test data split.
Models trained on multiple datasets and evaluated on a test set from one of the training datasets performed better.
arXiv Detail & Related papers (2021-02-18T21:14:52Z)
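The evaluation protocol in that entry, per-dataset 72/8/20 splits followed by training on every combination of datasets and testing both internally and externally, is mostly bookkeeping. The runnable sketch below uses synthetic arrays and a logistic regression as stand-ins for the CT volumes and deep networks:

```python
# Sketch: cross-dataset generalizability protocol (72% train / 8% validation / 20% test).
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
datasets = {name: (rng.normal(size=(500, 20)), rng.integers(0, 2, size=500))
            for name in ["site_A", "site_B", "site_C"]}      # placeholder dataset names

splits = {}
for name, (X, y) in datasets.items():
    X_tmp, X_te, y_tmp, y_te = train_test_split(X, y, test_size=0.20, stratify=y, random_state=0)
    # 10% of the remaining 80% = 8% overall; the validation split is kept for parity with the
    # protocol but unused by this linear stand-in.
    X_tr, X_val, y_tr, y_val = train_test_split(X_tmp, y_tmp, test_size=0.10, stratify=y_tmp, random_state=0)
    splits[name] = dict(train=(X_tr, y_tr), val=(X_val, y_val), test=(X_te, y_te))

# Train on every combination of datasets, then test on each dataset's held-out 20%.
for k in range(1, len(datasets) + 1):
    for combo in combinations(datasets, k):
        X_tr = np.vstack([splits[n]["train"][0] for n in combo])
        y_tr = np.concatenate([splits[n]["train"][1] for n in combo])
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        for name in datasets:
            X_te, y_te = splits[name]["test"]
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(combo, name, "internal" if name in combo else "external", f"AUC={auc:.2f}")
```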
- UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data [81.00385374948125]
We present the UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic fatty liver disease (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection, up to 0.609 in PR-AUC for NASH detection, and outperforms various state-of-the-art baselines by up to 19% over the best baseline.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)
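UNITE itself is not reproduced here, but the basic pattern its entry describes, a risk score plus an uncertainty estimate evaluated with PR-AUC, can be illustrated with a small bootstrap ensemble on synthetic data; ensemble disagreement is only one of many possible uncertainty estimates.

```python
# Sketch: ensemble risk prediction with an uncertainty estimate, scored by PR-AUC.
# Synthetic data; UNITE's actual architecture and multi-source fusion are not modeled here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0.8).astype(int)   # imbalanced synthetic outcome
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

probs = []
for seed in range(10):                                  # bootstrap ensemble of 10 models
    idx = rng.integers(0, len(X_tr), len(X_tr))
    clf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr[idx], y_tr[idx])
    probs.append(clf.predict_proba(X_te)[:, 1])
probs = np.stack(probs)

risk = probs.mean(axis=0)          # predicted risk score
uncertainty = probs.std(axis=0)    # disagreement across the ensemble
print(f"PR-AUC: {average_precision_score(y_te, risk):.3f}")
print(f"mean predictive uncertainty: {uncertainty.mean():.3f}")
```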
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.