Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs
- URL: http://arxiv.org/abs/2107.06618v1
- Date: Wed, 14 Jul 2021 11:37:28 GMT
- Title: Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs
- Authors: Shruthi Bannur, Ozan Oktay, Melanie Bernhardt, Anton Schwaighofer,
Rajesh Jena, Besmira Nushi, Sharan Wadhwani, Aditya Nori, Kal Natarajan,
Shazad Ashraf, Javier Alvarez-Valle, Daniel C. Castro
- Abstract summary: We model radiological features with a human-interpretable class hierarchy that aligns with the radiological decision process.
Experiments show that model failures highly correlate with ICU imaging conditions and with the inherent difficulty in distinguishing certain types of radiological features.
- Score: 5.832030105874915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chest radiography has been a recommended procedure for patient triaging and
resource management in intensive care units (ICUs) throughout the COVID-19
pandemic. Machine learning efforts to augment this workflow have long been
challenged by deficiencies in reporting, model evaluation, and failure mode
analysis. To address some of those shortcomings, we model radiological features
with a human-interpretable class hierarchy that aligns with the radiological
decision process. Also, we propose the use of a data-driven error analysis
methodology to uncover the blind spots of our model, providing further
transparency on its clinical utility. For example, our experiments show that
model failures highly correlate with ICU imaging conditions and with the
inherent difficulty in distinguishing certain types of radiological features.
Moreover, our hierarchical interpretation and analysis facilitate comparison with
radiologists' findings and their inter-rater variability, which in turn helps us
better assess the clinical applicability of such models.
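As a concrete illustration of the hierarchical label modelling described above, the sketch below scores findings along a parent-child hierarchy using conditional probabilities. The class names, hierarchy, and probability values are illustrative assumptions, not the paper's exact taxonomy; in practice the conditionals would come from a trained classifier head.

```python
# Minimal sketch of hierarchical label scoring for chest-radiograph findings.
# The hierarchy below is illustrative only; it is NOT the paper's exact taxonomy,
# and the conditional probabilities stand in for trained classifier outputs
# (one sigmoid per node, conditioned on its parent being present).

HIERARCHY = {
    "abnormal": None,                       # root-level finding
    "parenchymal_opacity": "abnormal",
    "consolidation": "parenchymal_opacity",
    "ground_glass": "parenchymal_opacity",
    "pleural_abnormality": "abnormal",
    "effusion": "pleural_abnormality",
}

def marginal_probabilities(conditional_prob: dict) -> dict:
    """Convert per-node conditional probabilities P(node | parent present)
    into marginal probabilities P(node) by walking up the hierarchy."""
    marginals = {}

    def marginal(node):
        if node in marginals:
            return marginals[node]
        parent = HIERARCHY[node]
        p = conditional_prob[node]
        if parent is not None:
            p *= marginal(parent)           # chain rule along the tree
        marginals[node] = p
        return p

    for node in HIERARCHY:
        marginal(node)
    return marginals

# Example: hypothetical classifier outputs for a single radiograph.
outputs = {
    "abnormal": 0.9,
    "parenchymal_opacity": 0.8,
    "consolidation": 0.3,
    "ground_glass": 0.7,
    "pleural_abnormality": 0.4,
    "effusion": 0.6,
}
print(marginal_probabilities(outputs))
```

Because each marginal is the product of conditionals along the path to the root, a child finding can never score higher than its parent, which keeps predictions consistent with the stepwise radiological decision process described in the abstract.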
Related papers
- Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images [5.395912799904941]
Variations in CT scanner models and acquisition protocols introduce significant variability in the extracted radiomic features.
LTDiff++ is a multiscale latent diffusion model designed to enhance feature extraction in medical imaging.
arXiv Detail & Related papers (2024-10-05T02:13:57Z)
- Uncovering Knowledge Gaps in Radiology Report Generation Models through Knowledge Graphs [18.025481751074214]
We introduce a system, named ReXKG, which extracts structured information from processed reports to construct a radiology knowledge graph.
We conduct an in-depth comparative analysis of AI-generated and human-written radiology reports, assessing the performance of both specialist and generalist models.
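A minimal sketch of the knowledge-graph construction idea, assuming hypothetical extracted triples; ReXKG's actual extraction pipeline and schema are not reproduced here.

```python
# Illustrative sketch only: building a small radiology knowledge graph from
# (entity, relation, entity) triples. The triples are hypothetical stand-ins
# for the output of an information-extraction step over report sentences.
import networkx as nx

triples = [
    ("consolidation", "located_in", "right lower lobe"),
    ("pleural effusion", "located_in", "left hemithorax"),
    ("consolidation", "suggestive_of", "pneumonia"),
]

graph = nx.MultiDiGraph()
for head, relation, tail in triples:
    graph.add_edge(head, tail, relation=relation)

# Simple coverage check: which entities and how many relations are represented?
print(sorted(graph.nodes()))
print(graph.number_of_edges(), "edges")
```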
arXiv Detail & Related papers (2024-08-26T16:28:56Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
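A hedged sketch of one way such a multi-modal generator could be wired up, assuming a ResNet-18 backbone, a single demographic token, and a standard transformer decoder; these choices are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch: fuse CXR image features with a demographic embedding for report
# generation. Layer sizes and the fusion strategy are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class CxrReportGenerator(nn.Module):
    def __init__(self, vocab_size=10000, d_model=512, num_demo_features=4):
        super().__init__()
        # CNN backbone -> spatial feature map -> sequence of visual tokens
        resnet = models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # (B, 512, H', W')
        self.visual_proj = nn.Linear(512, d_model)
        # Demographic information (e.g. age, sex) embedded as one extra token
        self.demo_proj = nn.Linear(num_demo_features, d_model)
        # Transformer decoder attends over [visual tokens + demographic token]
        decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, demographics, report_tokens):
        feats = self.backbone(image)                                  # (B, 512, H', W')
        visual = self.visual_proj(feats.flatten(2).transpose(1, 2))   # (B, H'*W', d)
        demo = self.demo_proj(demographics).unsqueeze(1)              # (B, 1, d)
        memory = torch.cat([visual, demo], dim=1)
        tgt = self.token_embed(report_tokens)                         # (B, T, d)
        out = self.decoder(tgt=tgt, memory=memory)                    # causal mask omitted for brevity
        return self.lm_head(out)                                      # (B, T, vocab)

# Smoke test with random inputs.
model = CxrReportGenerator()
logits = model(torch.randn(2, 3, 224, 224),
               torch.randn(2, 4),
               torch.randint(0, 10000, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 10000])
```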
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-rays reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- An Empirical Analysis for Zero-Shot Multi-Label Classification on COVID-19 CT Scans and Uncurated Reports [0.5527944417831603]
The COVID-19 pandemic resulted in vast repositories of unstructured data, including radiology reports, due to increased medical examinations.
Previous research on automated diagnosis of COVID-19 primarily focuses on X-ray images, despite their lower precision compared to computed tomography (CT) scans.
In this work, we leverage unstructured data from a hospital and harness the fine-grained details offered by CT scans to perform zero-shot multi-label classification based on contrastive visual language learning.
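A minimal sketch of CLIP-style zero-shot multi-label scoring, with placeholder encoders standing in for a pretrained contrastive vision-language model; the label prompts, temperature, and threshold are illustrative assumptions.

```python
# Zero-shot multi-label scoring sketch. The encoders below are random
# placeholders for a real pretrained contrastive vision-language model.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
EMBED_DIM = 256

def encode_image(ct_features):
    # Placeholder for a real image encoder applied to CT slices/volumes.
    return torch.randn(ct_features.shape[0], EMBED_DIM)

def encode_text(prompts):
    # Placeholder for a real text encoder applied to label prompts.
    return torch.randn(len(prompts), EMBED_DIM)

label_prompts = [
    "CT scan showing ground-glass opacities",
    "CT scan showing consolidation",
    "CT scan showing pleural effusion",
]

image_emb = F.normalize(encode_image(torch.randn(4, 512)), dim=-1)   # 4 scans
text_emb = F.normalize(encode_text(label_prompts), dim=-1)

# Cosine similarity per (scan, label); sigmoid + threshold gives multi-label
# predictions instead of a softmax over mutually exclusive classes.
similarity = image_emb @ text_emb.T
predictions = torch.sigmoid(similarity / 0.07) > 0.5
print(predictions)
```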
arXiv Detail & Related papers (2023-09-04T17:58:01Z)
- Instrumental Variable Learning for Chest X-ray Classification [52.68170685918908]
We propose an interpretable instrumental variable (IV) learning framework to eliminate the spurious association and obtain accurate causal representation.
Our approach's performance is demonstrated using the MIMIC-CXR, NIH ChestX-ray 14, and CheXpert datasets.
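For intuition about the instrumental-variable idea, a toy two-stage least squares example is sketched below; the paper's deep-learning IV framework for CXR classification is more involved and is not reproduced here.

```python
# Toy 2SLS example: an instrument z shifts x but affects y only through x,
# so regressing y on the z-predicted part of x removes confounding by u.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 1))            # instrument: affects x, not y directly
u = rng.normal(size=(n, 1))            # unobserved confounder
x = 0.8 * z + 0.6 * u + 0.1 * rng.normal(size=(n, 1))   # treatment/feature
y = 1.5 * x + 0.9 * u + 0.1 * rng.normal(size=(n, 1))   # outcome (true effect 1.5)

naive = LinearRegression().fit(x, y)                     # biased by confounder u
stage1 = LinearRegression().fit(z, x)                    # stage 1: x predicted from z
x_hat = stage1.predict(z)
stage2 = LinearRegression().fit(x_hat, y)                # stage 2: causal estimate

print("naive estimate:", naive.coef_.ravel()[0])         # noticeably above 1.5
print("2SLS estimate:", stage2.coef_.ravel()[0])         # close to 1.5
```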
arXiv Detail & Related papers (2023-05-20T03:12:23Z)
- Vision Transformer-based Model for Severity Quantification of Lung Pneumonia Using Chest X-ray Images [11.12596879975844]
We present a Vision Transformer-based neural network model that relies on a small number of trainable parameters to quantify the severity of COVID-19 and other lung diseases.
Our model can provide peak performance in quantifying severity with high generalizability at a relatively low computational cost.
arXiv Detail & Related papers (2023-03-18T12:38:23Z)
- Interpretability Analysis of Deep Models for COVID-19 Detection [1.5742621967219992]
We present an interpretability analysis of a convolutional neural network based model for COVID-19 detection in audio recordings.
Our best model achieves 94.44% accuracy in detection, with results indicating that the model favors spectrograms in its decision process.
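A small gradient-saliency sketch on a spectrogram input, using a stand-in CNN; the paper's actual model and interpretability method are not reproduced here.

```python
# Gradient saliency on a spectrogram: large |d score / d input| values mark
# time-frequency regions that most influence the prediction. The tiny CNN
# below is only a stand-in for a real trained detector.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                              # COVID-19 vs. control logits
)

spectrogram = torch.randn(1, 1, 128, 64, requires_grad=True)  # (freq, time)
score = model(spectrogram)[0, 1]                  # score for the positive class
score.backward()

saliency = spectrogram.grad.abs().squeeze()       # (128, 64) saliency map
print(saliency.shape, saliency.max().item())
```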
arXiv Detail & Related papers (2022-11-25T20:56:23Z)
- CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19 Patients Using Deep Learning [133.87426554801252]
We adopted an approach based on an ensemble of deep convolutional neural networks for segmentation of lung CT scans.
Using our models, we are able to segment the lesions, evaluate patient dynamics, estimate the relative volume of lungs affected by lesions, and evaluate the lung damage stage.
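A brief sketch of how a relative lesion volume could be derived from lung and lesion masks; the masks and voxel spacing below are assumed stand-ins for real ensemble predictions.

```python
# Compute lesion volume and the affected fraction of the lungs from binary
# segmentation masks. Masks and voxel spacing are random/assumed stand-ins.
import numpy as np

rng = np.random.default_rng(0)
lung_mask = rng.random((64, 256, 256)) > 0.4                   # binary lung mask
lesion_mask = (rng.random((64, 256, 256)) > 0.9) & lung_mask   # lesions inside lungs

voxel_volume_ml = 0.7 * 0.7 * 5.0 / 1000.0        # assumed spacing (mm) -> ml
lesion_volume = lesion_mask.sum() * voxel_volume_ml
lung_volume = lung_mask.sum() * voxel_volume_ml
affected_fraction = lesion_volume / lung_volume

print(f"lesion volume: {lesion_volume:.1f} ml "
      f"({100 * affected_fraction:.1f}% of lung volume)")
```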
arXiv Detail & Related papers (2021-05-25T12:06:55Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
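An illustrative sketch of one generic ingredient in such a scheme: a KL term pulling an image-conditioned latent posterior towards an EHR-conditioned one, so EHR knowledge is available at test time from images alone. Encoders, dimensions, and the loss weighting are assumptions, not the paper's exact VKD objective.

```python
# KL-based distillation between two diagonal Gaussian posteriors: a "teacher"
# conditioned on EHR features and a "student" conditioned on X-ray features.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class GaussianHead(nn.Module):
    """Maps features to the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, h):
        return Normal(self.mu(h), torch.exp(0.5 * self.logvar(h)))

image_head = GaussianHead(in_dim=512)   # student: conditioned on X-ray features
ehr_head = GaussianHead(in_dim=128)     # teacher: conditioned on EHR features

img_feat, ehr_feat = torch.randn(8, 512), torch.randn(8, 128)
q_image, q_ehr = image_head(img_feat), ehr_head(ehr_feat)

# Distillation term: encourage the image posterior to match the EHR posterior.
kd_loss = kl_divergence(q_image, q_ehr).sum(dim=-1).mean()
print(kd_loss.item())
```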
arXiv Detail & Related papers (2021-03-19T14:13:56Z)