Detecting Underdiagnosed Medical Conditions with Deep Learning-Based Opportunistic CT Imaging
- URL: http://arxiv.org/abs/2409.11686v1
- Date: Wed, 18 Sep 2024 03:56:56 GMT
- Title: Detecting Underdiagnosed Medical Conditions with Deep Learning-Based Opportunistic CT Imaging
- Authors: Asad Aali, Andrew Johnston, Louis Blankemeier, Dave Van Veen, Laura T Derry, David Svec, Jason Hom, Robert D. Boutin, Akshay S. Chaudhari
- Abstract summary: Opportunistic CT involves repurposing routine CT images to extract diagnostic information.
We analyze 2,674 inpatient CT scans to identify discrepancies between imaging phenotypes and their corresponding documentation.
We find that only 0.5%, 3.2%, and 30.7% of scans diagnosed with sarcopenia, hepatic steatosis, and ascites through either opportunistic imaging or radiology reports were ICD-coded.
- Score: 2.0635695607210227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Abdominal computed tomography (CT) scans are frequently performed in clinical settings. Opportunistic CT involves repurposing routine CT images to extract diagnostic information and is an emerging tool for detecting underdiagnosed conditions such as sarcopenia, hepatic steatosis, and ascites. This study utilizes deep learning methods to promote accurate diagnosis and clinical documentation. We analyze 2,674 inpatient CT scans to identify discrepancies between imaging phenotypes (characteristics derived from opportunistic CT scans) and their corresponding documentation in radiology reports and ICD coding. Through our analysis, we find that only 0.5%, 3.2%, and 30.7% of scans diagnosed with sarcopenia, hepatic steatosis, and ascites (respectively) through either opportunistic imaging or radiology reports were ICD-coded. Our findings demonstrate opportunistic CT's potential to enhance diagnostic precision and accuracy of risk adjustment models, offering advancements in precision medicine.
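As a rough illustration of the documentation-gap analysis described above, the sketch below computes, for each condition, the fraction of scans flagged as positive by opportunistic imaging or by the radiology report that also carry an ICD code (the quantity behind the reported 0.5%, 3.2%, and 30.7% figures). The tabular layout and column names (`condition`, `imaging_positive`, `report_positive`, `icd_coded`) are illustrative assumptions, not the authors' actual pipeline.
```python
import pandas as pd

def icd_coding_rates(df: pd.DataFrame) -> pd.Series:
    # Keep only scans where the condition was detected by opportunistic
    # imaging or documented in the radiology report, then compute the
    # fraction of those scans that also received an ICD code.
    detected = df[df["imaging_positive"] | df["report_positive"]]
    return detected.groupby("condition")["icd_coded"].mean()

# Toy rows only (assumed schema); the actual study analyzed 2,674 inpatient CT scans.
scans = pd.DataFrame({
    "condition":        ["sarcopenia", "sarcopenia", "hepatic steatosis", "ascites", "ascites"],
    "imaging_positive": [True,         True,         True,                True,      False],
    "report_positive":  [False,        True,         False,               True,      True],
    "icd_coded":        [False,        False,        False,               True,      False],
})
print(icd_coding_rates(scans))
```
A low rate for a given condition indicates many scans with imaging or report evidence that never made it into the ICD coding, which is the documentation gap the study quantifies.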
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- Exploiting Liver CT scans in Colorectal Carcinoma genomics mutation classification [0.0]
We propose the first deep learning-based exploration, to our knowledge, of such a classification approach from patient medical imaging.
Our method is able to identify the CRC RAS mutation family from CT images with a 0.73 F1 score.
arXiv Detail & Related papers (2024-01-25T14:40:58Z)
- Expert Uncertainty and Severity Aware Chest X-Ray Classification by Multi-Relationship Graph Learning [48.29204631769816]
We re-extract disease labels from CXR reports to make them more realistic by considering disease severity and uncertainty in classification.
Our experimental results show that models considering disease severity and uncertainty outperform previous state-of-the-art methods.
arXiv Detail & Related papers (2023-09-06T19:19:41Z)
- An Empirical Analysis for Zero-Shot Multi-Label Classification on COVID-19 CT Scans and Uncurated Reports [0.5527944417831603]
The COVID-19 pandemic resulted in vast repositories of unstructured data, including radiology reports, due to increased medical examinations.
Previous research on automated diagnosis of COVID-19 primarily focuses on X-ray images, despite their lower precision compared to computed tomography (CT) scans.
In this work, we leverage unstructured data from a hospital and harness the fine-grained details offered by CT scans to perform zero-shot multi-label classification based on contrastive visual language learning.
arXiv Detail & Related papers (2023-09-04T17:58:01Z)
- PHE-SICH-CT-IDS: A Benchmark CT Image Dataset for Evaluation Semantic Segmentation, Object Detection and Radiomic Feature Extraction of Perihematomal Edema in Spontaneous Intracerebral Hemorrhage [2.602118060856794]
Intracerebral hemorrhage is one of the diseases with the highest mortality and poorest prognosis worldwide.
This study establishes a publicly available CT dataset named PHE-SICH-CT-IDS for perihematomal edema in spontaneous intracerebral hemorrhage.
arXiv Detail & Related papers (2023-08-21T07:18:51Z)
- Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via Volumetric Pseudo-Labeling [66.75096111651062]
We created a large-scale dataset of 10,021 thoracic CTs with 157 labels.
We applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels.
Our resulting segmentation models demonstrated remarkable performance on CXR.
arXiv Detail & Related papers (2023-06-06T18:01:08Z)
- HGT: A Hierarchical GCN-Based Transformer for Multimodal Periprosthetic Joint Infection Diagnosis Using CT Images and Text [0.0]
Prosthetic Joint Infection (PJI) is a prevalent and severe complication of artificial joint replacement.
Currently, a unified diagnostic standard incorporating both computed tomography (CT) images and numerical text data for PJI remains unestablished.
This study introduces a diagnostic method, HGT, based on deep learning and multimodal techniques.
arXiv Detail & Related papers (2023-05-29T11:25:57Z)
- Context-Aware Transformers For Spinal Cancer Detection and Radiological Grading [70.04389979779195]
This paper proposes a novel transformer-based model architecture for medical imaging problems involving analysis of vertebrae.
It considers applications of such models in MR images, including detection of spinal metastases and the related conditions of vertebral fractures and metastatic cord compression.
We show that by considering the context of vertebral bodies in the image, SCT improves the accuracy of several gradings compared to previously published models.
arXiv Detail & Related papers (2022-06-27T10:31:03Z)
- Robust Weakly Supervised Learning for COVID-19 Recognition Using Multi-Center CT Images [8.207602203708799]
Coronavirus disease 2019 (COVID-19) is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).
Due to various technical specifications of CT scanners located in different hospitals, the appearance of CT images can differ significantly, leading to the failure of many automated image recognition approaches.
We propose a COVID-19 CT scan recognition model, namely the coronavirus information fusion and diagnosis network (CIFD-Net).
Our model can resolve the problem of different appearance in CT scan images reliably and efficiently while attaining higher accuracy compared to other state-of-the-art methods.
arXiv Detail & Related papers (2021-12-09T15:22:03Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.