Cascading Neural Network Methodology for Artificial
Intelligence-Assisted Radiographic Detection and Classification of Lead-Less
Implanted Electronic Devices within the Chest
- URL: http://arxiv.org/abs/2108.11954v1
- Date: Wed, 25 Aug 2021 19:29:48 GMT
- Title: Cascading Neural Network Methodology for Artificial
Intelligence-Assisted Radiographic Detection and Classification of Lead-Less
Implanted Electronic Devices within the Chest
- Authors: Mutlu Demirer, Richard D. White, Vikash Gupta, Ronnie A. Sebro,
Barbaros S. Erdal
- Abstract summary: This work focused on developing CXR interpretation-assisting Artificial Intelligence (AI) methodology with: 1. 100% detection for LLIED presence/location; and 2. High-performance classification of LLIED type.
Development of the cascading neural network (detection via Faster R-CNN and classification via Inception V3), "ground-truth" CXR annotation (ROI labeling per LLIED), and inference display (as Generated Bounding Boxes (GBBs)) relied on a GPU-based graphical user interface.
- Score: 0.7874708385247353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background & Purpose: Chest X-Ray (CXR) use in pre-MRI safety screening for
Lead-Less Implanted Electronic Devices (LLIEDs), easily overlooked or
misidentified on a frontal view (often only acquired), is common. Although most
LLIED types are "MRI conditional": 1. Some are stringently conditional; 2.
Different conditional types have specific patient- or device-management
requirements; and 3. Particular types are "MRI unsafe". This work focused on
developing CXR interpretation-assisting Artificial Intelligence (AI)
methodology with: 1. 100% detection of LLIED presence/location; and 2.
High-performance classification of LLIED type. Materials & Methods: Data-mining
(03/1993-02/2021) produced an AI Model Development Population (1,100
patients/4,871 images) creating 4,924 LLIED Region-Of-Interests (ROIs) (with
image-quality grading) used in Training, Validation, and Testing. For
developing the cascading neural network (detection via Faster R-CNN and
classification via Inception V3), "ground-truth" CXR annotation (ROI labeling
per LLIED), as well as inference display (as Generated Bounding Boxes (GBBs)),
relied on a GPU-based graphical user interface. Results: To achieve 100% LLIED
detection, Model 1 required reduction of the probability threshold to 0.00002,
at the cost of an increased number of GBBs per LLIED-related ROI. Targeting
LLIED-type classification after detection of all LLIEDs, Model 2 performed
multi-classification, reaching high performance while reducing false-positive
GBBs. Despite suboptimal image quality in 24% of ROIs, classification was
correct in 98.9% of cases, and AUCs for the 9 LLIED types were 1.00 for 8 and
0.92 for 1. Among the misclassification cases: 1. None involved stringently
conditional or unsafe LLIEDs; and 2. Most were attributable to suboptimal
images. Conclusion: This project successfully developed an LLIED-related AI
methodology supporting: 1. 100% detection; and 2. Typically 100% correct type
classification.
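The two-stage cascade described in the abstract (Model 1 run at a very low probability threshold so no LLIED is missed, followed by Model 2 typing each candidate box and pruning false positives) can be sketched as below. This is an illustrative outline only: the toy detector and classifier stand in for the paper's actual Faster R-CNN and Inception V3 models, and all function and label names are hypothetical.

```python
# Sketch of the cascading methodology: permissive detection, then
# classification that both types each ROI and rejects false-positive GBBs.

DETECTION_THRESHOLD = 0.00002  # reduced until detection sensitivity reached 100%


def detect_candidates(image, detector):
    """Stage 1: keep every generated box whose score clears the low threshold."""
    boxes = detector(image)  # -> list of (x1, y1, x2, y2, score)
    return [b for b in boxes if b[4] >= DETECTION_THRESHOLD]


def classify_candidates(image, boxes, classifier, reject_label="not-LLIED"):
    """Stage 2: type each candidate ROI; drop boxes the classifier rejects."""
    results = []
    for x1, y1, x2, y2, score in boxes:
        roi = [row[x1:x2] for row in image[y1:y2]]  # crop the GBB from the image
        label, confidence = classifier(roi)
        if label != reject_label:  # classification prunes false-positive GBBs
            results.append({"box": (x1, y1, x2, y2),
                            "detection_score": score,
                            "type": label,
                            "type_confidence": confidence})
    return results


# Toy stand-ins to exercise the cascade logic:
def toy_detector(image):
    # Two plausible boxes plus one whose score falls below even the low threshold.
    return [(0, 0, 2, 2, 0.9), (2, 2, 4, 4, 0.0001), (0, 2, 2, 4, 0.000001)]


def toy_classifier(roi):
    # A real Model 2 would be a multi-class CNN over the 9 LLIED types.
    return ("loop-recorder", 0.99) if roi and roi[0] else ("not-LLIED", 0.0)


image = [[0] * 4 for _ in range(4)]
candidates = detect_candidates(image, toy_detector)  # the sub-threshold box is dropped
final = classify_candidates(image, candidates, toy_classifier)
```

The design point the sketch captures is the asymmetry between the stages: Stage 1 is tuned for sensitivity (never miss a device), and Stage 2 absorbs the resulting excess of candidate boxes.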
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802]
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Uncertainty-inspired Open Set Learning for Retinal Anomaly Identification [71.06194656633447]
We establish an uncertainty-inspired open-set (UIOS) model, which was trained with fundus images of 9 retinal conditions.
Our UIOS model with thresholding strategy achieved F1 scores of 99.55%, 97.01% and 91.91% across the testing sets.
UIOS correctly predicted high uncertainty scores, which would prompt the need for a manual check in the datasets of non-target categories retinal diseases, low-quality fundus images, and non-fundus images.
arXiv Detail & Related papers (2023-04-08T10:47:41Z)
- Deep Learning for Segmentation-based Hepatic Steatosis Detection on Open Data: A Multicenter International Validation Study [5.117364766785943]
This three-step AI workflow consists of 3D liver segmentation, liver attenuation measurements, and hepatic steatosis detection.
The deep-learning segmentation achieved a mean Dice coefficient of 0.957.
If adopted for universal detection, this deep learning system could potentially allow early non-invasive, non-pharmacological preventative interventions.
arXiv Detail & Related papers (2022-10-27T03:24:52Z)
- StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out of distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre and post-processing steps, creating a UAD pipeline (StRegA)
The proposed pipeline achieved a Dice score of 0.642±0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859±0.112 while detecting artificially induced anomalies.
arXiv Detail & Related papers (2022-01-31T14:27:35Z)
- Optimising Knee Injury Detection with Spatial Attention and Validating Localisation Ability [0.5772546394254112]
This work employs a pre-trained, multi-view Convolutional Neural Network (CNN) with a spatial attention block to optimise knee injury detection.
An open-source Magnetic Resonance Imaging (MRI) data set with image-level labels was leveraged for this analysis.
arXiv Detail & Related papers (2021-08-18T13:24:17Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease-identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Performance of Dual-Augmented Lagrangian Method and Common Spatial Patterns applied in classification of Motor-Imagery BCI [68.8204255655161]
Motor-imagery based brain-computer interfaces (MI-BCI) have the potential to become ground-breaking technologies for neurorehabilitation.
Due to the noisy nature of the used EEG signal, reliable BCI systems require specialized procedures for features optimization and extraction.
arXiv Detail & Related papers (2020-10-13T20:50:13Z)
- COVID-19 Classification Using Staked Ensembles: A Comprehensive Analysis [0.0]
COVID-19, spreading with a high mortality rate, led the WHO to declare it a pandemic.
It is crucial to perform efficient and fast diagnosis.
The reverse transcription polymerase chain reaction (RT-PCR) test is conducted to detect the presence of SARS-CoV-2.
Instead, chest CT (or chest X-ray) can be used for a fast and accurate diagnosis.
arXiv Detail & Related papers (2020-10-07T07:43:57Z)
- Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images [10.01138352319106]
Five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their Ensemble have been used in this paper to classify COVID-19, pneumoniae and healthy subjects using Chest X-Ray images.
The mean Micro-F1 score of the models for COVID-19 classification ranges from 0.66 to 0.875, and is 0.89 for the ensemble of the network models.
arXiv Detail & Related papers (2020-06-03T22:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.