Reconstruction of Patient-Specific Confounders in AI-based Radiologic Image Interpretation using Generative Pretraining
- URL: http://arxiv.org/abs/2309.17123v1
- Date: Fri, 29 Sep 2023 10:38:08 GMT
- Title: Reconstruction of Patient-Specific Confounders in AI-based Radiologic Image Interpretation using Generative Pretraining
- Authors: Tianyu Han, Laura Žigutytė, Luisa Huck, Marc Huppertz, Robert Siepmann, Yossi Gandelsman, Christian Blüthgen, Firas Khader, Christiane Kuhl, Sven Nebelung, Jakob Kather, Daniel Truhn
- Abstract summary: We propose a self-conditioned diffusion model termed DiffChest and train it on a dataset of chest radiographs.
DiffChest explains classifications on a patient-specific level and visualizes the confounding factors that may mislead the model.
Our findings highlight the potential of pretraining based on diffusion models in medical image classification.
- Score: 12.656718786788758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting misleading patterns in automated diagnostic assistance systems,
such as those powered by Artificial Intelligence, is critical to ensuring their
reliability, particularly in healthcare. Current techniques for evaluating deep
learning models cannot visualize confounding factors at a diagnostic level.
Here, we propose a self-conditioned diffusion model termed DiffChest and train
it on a dataset of 515,704 chest radiographs from 194,956 patients from
multiple healthcare centers in the United States and Europe. DiffChest explains
classifications on a patient-specific level and visualizes the confounding
factors that may mislead the model. We found high inter-reader agreement when
evaluating DiffChest's capability to identify treatment-related confounders,
with Fleiss' Kappa values of 0.8 or higher across most imaging findings.
Confounders were accurately captured at prevalence rates ranging from 11.1% to 100%.
Furthermore, our pretraining process optimized the model to capture the most
relevant information from the input radiographs. DiffChest achieved excellent
diagnostic accuracy when diagnosing 11 chest conditions, such as pleural
effusion and cardiac insufficiency, and at least sufficient diagnostic accuracy
for the remaining conditions. Our findings highlight the potential of
pretraining based on diffusion models in medical image classification,
specifically in providing insights into confounding factors and model
robustness.
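The inter-reader agreement reported above is quantified with Fleiss' Kappa, computed over the readers' categorical ratings of each case. A minimal sketch of that computation is shown below; the NumPy implementation and the toy rating table are illustrative and not taken from the paper.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for a (subjects x categories) table of rating counts.

    counts[i, j] = number of readers who assigned category j to subject i.
    Every subject must be rated by the same number of readers.
    """
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]            # readers per subject (constant)

    # Per-subject agreement: fraction of concordant reader pairs.
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Toy example: 3 readers rate 5 cases as "confounder present" vs "absent".
ratings = np.array([[3, 0], [2, 1], [3, 0], [0, 3], [1, 2]])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```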
Related papers
- Privacy-Preserving Federated Foundation Model for Generalist Ultrasound Artificial Intelligence [83.02106623401885]
We present UltraFedFM, an innovative privacy-preserving ultrasound foundation model.
UltraFedFM is collaboratively pre-trained using federated learning across 16 distributed medical institutions in 9 countries.
It achieves an average area under the receiver operating characteristic curve of 0.927 for disease diagnosis and a dice similarity coefficient of 0.878 for lesion segmentation.
arXiv Detail & Related papers (2024-11-25T13:40:11Z)
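Several of the related entries (the ultrasound foundation model above, the kidney-stone study that follows, and UCADI further down) rely on federated learning, in which institutions train locally and share only model weights with a central server. A minimal federated-averaging (FedAvg) sketch on a toy model with synthetic per-site data is given below; the model, data, and hyperparameters are placeholders, not details from any of the cited papers.

```python
import copy
import torch
from torch import nn

# Hypothetical stand-in for an imaging model; each "client" is one institution.
def make_model() -> nn.Module:
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))

def local_update(model: nn.Module, data, epochs: int = 1) -> dict:
    """Train a copy of the global model on one institution's private data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(states: list[dict]) -> dict:
    """FedAvg: element-wise mean of the clients' weight tensors."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

# Synthetic private datasets for three institutions (images never leave a site).
clients = [[(torch.randn(8, 1, 32, 32), torch.randint(0, 2, (8,)))] for _ in range(3)]

global_model = make_model()
for communication_round in range(5):
    client_states = [local_update(global_model, data) for data in clients]
    global_model.load_state_dict(federated_average(client_states))
```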
- Leveraging Pre-trained Models for Robust Federated Learning for Kidney Stone Type Recognition
Using pre-trained models, this research proposes a robust federated learning (FL) framework to improve kidney stone diagnosis.
We achieved a peak accuracy of 84.1% with seven epochs and 10 rounds during the LPO stage, and 77.2% during the FRV stage, demonstrating improved diagnostic accuracy and robustness against image corruption.
arXiv Detail & Related papers (2024-09-30T04:23:47Z)
- Multi-Label Classification of Thoracic Diseases using Dense Convolutional Network on Chest Radiographs [0.0]
We propose a multi-label disease prediction model that allows the detection of more than one pathology in a given image at test time.
Our proposed model achieved the highest AUC score of 0.896 for the condition Cardiomegaly.
arXiv Detail & Related papers (2022-02-08T00:43:57Z)
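The multi-label model above assigns an independent probability to every pathology, so several findings can be flagged on one radiograph, and performance is reported per label (e.g. the 0.896 AUC for Cardiomegaly). Below is a minimal sketch of that setup with sigmoid outputs, a per-label binary loss, and per-label ROC-AUC on synthetic data; the label subset and model are placeholders, not the paper's architecture.

```python
import numpy as np
import torch
from torch import nn
from sklearn.metrics import roc_auc_score

LABELS = ["Cardiomegaly", "Effusion", "Pneumonia"]    # illustrative subset

# A toy multi-label head: one independent sigmoid output per pathology.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, len(LABELS)))
loss_fn = nn.BCEWithLogitsLoss()                      # one binary loss per label

x = torch.randn(16, 1, 64, 64)                        # synthetic "radiographs"
y = torch.randint(0, 2, (16, len(LABELS))).float()    # multi-hot targets

logits = model(x)
loss = loss_fn(logits, y)
loss.backward()

# Per-label evaluation: each pathology gets its own ROC-AUC.
probs = torch.sigmoid(logits).detach().numpy()
for j, name in enumerate(LABELS):
    if len(np.unique(y[:, j].numpy())) == 2:          # AUC needs both classes present
        print(f"{name}: AUC = {roc_auc_score(y[:, j].numpy(), probs[:, j]):.3f}")
```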
- Advancing COVID-19 Diagnosis with Privacy-Preserving Collaboration in Artificial Intelligence [79.038671794961]
We launch the Unified CT-COVID AI Diagnostic Initiative (UCADI), where the AI model can be trained in a distributed manner and executed independently at each host institution.
Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK.
arXiv Detail & Related papers (2021-11-18T00:43:41Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Potential Features of ICU Admission in X-ray Images of COVID-19 Patients [8.83608410540057]
This paper presents an original methodology for extracting semantic features that correlate with severity from a dataset with patient ICU admission labels.
The methodology employs a neural network trained to recognise lung pathologies to extract the semantic features.
The method was shown to be capable of selecting representative images for the learned features, conveying some information about their common locations in the lung.
arXiv Detail & Related papers (2020-09-26T13:48:39Z)
- Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment [13.330243305948278]
We propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure.
We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams.
In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks.
arXiv Detail & Related papers (2020-07-08T16:47:55Z)
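The uncertainty-quantification entry above improves ROC-AUC by rejecting the cases the model is least sure about before scoring the rest. A minimal sketch of that rejection step follows, using binary predictive entropy as a generic uncertainty proxy on synthetic scores; it is not the paper's specific uncertainty measure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def reject_and_score(y_true, probs, reject_fraction=0.2):
    """Drop the most uncertain fraction of cases, then score the remainder."""
    probs = np.clip(probs, 1e-6, 1 - 1e-6)
    # Binary predictive entropy as a simple uncertainty proxy.
    entropy = -(probs * np.log(probs) + (1 - probs) * np.log(1 - probs))
    keep = entropy.argsort()[: int(len(probs) * (1 - reject_fraction))]
    return roc_auc_score(y_true[keep], probs[keep])

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
# Placeholder scores: informative but noisy, so some cases sit near 0.5.
probs = np.clip(0.5 + 0.3 * (y_true - 0.5) + rng.normal(0, 0.25, 500), 0, 1)

print("AUC, all cases:   ", round(roc_auc_score(y_true, probs), 3))
print("AUC, 20% rejected:", round(reject_and_score(y_true, probs), 3))
```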
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
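The relation-driven semi-supervised entry above hinges on a consistency objective: an unlabeled image and a perturbed copy of it should receive similar predictions. The sketch below shows a generic version of that objective (supervised loss plus a KL-based consistency term on a toy model); it is not the paper's exact relation-driven formulation.

```python
import torch
from torch import nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 3))  # toy classifier

def perturb(x: torch.Tensor) -> torch.Tensor:
    """A simple input perturbation (additive noise) standing in for augmentation."""
    return x + 0.1 * torch.randn_like(x)

labeled_x = torch.randn(8, 1, 32, 32)
labeled_y = torch.randint(0, 3, (8,))
unlabeled_x = torch.randn(32, 1, 32, 32)

# Supervised term on the few labeled images.
sup_loss = F.cross_entropy(model(labeled_x), labeled_y)

# Consistency term: predictions on an unlabeled image and its perturbed copy
# should match (measured here with KL divergence between the two distributions).
p_clean = F.log_softmax(model(unlabeled_x), dim=1)
p_noisy = F.softmax(model(perturb(unlabeled_x)), dim=1)
cons_loss = F.kl_div(p_clean, p_noisy, reduction="batchmean")

loss = sup_loss + 1.0 * cons_loss   # weighting coefficient is a free hyperparameter
loss.backward()
```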
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
- CNN-CASS: CNN for Classification of Coronary Artery Stenosis Score in MPR Images [0.0]
We develop an automated model to identify stenosis severity in MPR images.
The model predicts one of three classes: 'no stenosis' for a normal artery, 'non-significant' for 1-50% stenosis detected, and 'significant' for more than 50% stenosis.
For stenosis score classification, the method shows improved performance compared to previous works, achieving 80% accuracy at the patient level.
arXiv Detail & Related papers (2020-01-23T15:20:22Z)
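The three stenosis classes above map directly onto percentage thresholds. A small helper showing that mapping closes the list; the thresholds come from the summary, while the function name and values used are illustrative.

```python
def stenosis_class(percent_stenosis: float) -> str:
    """Map a detected stenosis percentage onto the three CNN-CASS classes."""
    if percent_stenosis <= 0:
        return "no stenosis"        # normal artery
    if percent_stenosis <= 50:
        return "non-significant"    # 1-50% stenosis detected
    return "significant"            # more than 50% stenosis

for value in (0, 30, 70):
    print(value, "->", stenosis_class(value))
```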