An interpretable object detection based model for the diagnosis of
neonatal lung diseases using Ultrasound images
- URL: http://arxiv.org/abs/2105.10081v1
- Date: Fri, 21 May 2021 01:12:35 GMT
- Title: An interpretable object detection based model for the diagnosis of
neonatal lung diseases using Ultrasound images
- Authors: Rodina Bassiouny (1), Adel Mohamed (2), Karthi Umapathy (1) and Naimul
Khan (1) ((1) Ryerson University, Toronto, Canada, (2) Mount Sinai Hospital,
University of Toronto, Toronto, Canada)
- Abstract summary: Lung Ultrasound (LUS) has been increasingly used to diagnose and monitor different lung diseases in neonates.
Mixed artifact patterns found in different respiratory diseases may limit LUS readability by the operator.
We present a unique approach for extracting seven meaningful LUS features that can be easily associated with a specific lung condition.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last few decades, Lung Ultrasound (LUS) has been increasingly used
to diagnose and monitor different lung diseases in neonates. It is a
non-invasive tool that allows a fast bedside examination while minimally
handling the neonate. Acquiring a LUS scan is easy, but understanding the
artifacts associated with each respiratory disease is challenging. Mixed
artifact patterns found in different respiratory diseases may limit LUS
readability by the operator. While machine learning (ML), and especially deep
learning, can assist in automated analysis, simply feeding the ultrasound
images to an ML model for diagnosis is not enough to earn the trust of medical
professionals. Instead, the algorithm should output LUS features that are
familiar to the operator. Therefore, in this paper we present a unique
approach for extracting seven meaningful LUS features that can each be easily
associated with a specific pathological lung condition: normal pleura,
irregular pleura, thick pleura, A-lines, coalescent B-lines, separate B-lines,
and consolidations. Detecting these artifacts can enable early prediction of
infants who will later develop respiratory distress symptoms. A single
multi-class region proposal-based object detection model, Faster-RCNN (fRCNN),
was trained on lower posterior lung ultrasound videos to detect these LUS
features, which are further linked to four common neonatal diseases. Our
results show that fRCNN surpasses single-stage models such as RetinaNet and
can successfully detect the aforementioned LUS features with a mean average
precision of 86.4%. Instead of a fully automatic diagnosis from images without
any interpretability, detection of such LUS features leaves the ultimate
control of the diagnosis to the clinician, which can result in a more
trustworthy intelligent system.
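The paper reports detection quality as a mean average precision (mAP) of 86.4% over the seven LUS feature classes. As a rough illustration only (not the authors' evaluation code), the two ingredients of this metric can be sketched in plain Python: box overlap via Intersection over Union (IoU), and per-class average precision from score-ranked detections greedily matched to ground truth.

```python
# Illustrative sketch, not the authors' code: how object detections
# (e.g. the seven LUS features) are scored. Boxes are (x1, y1, x2, y2).

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, ground_truths, iou_thresh=0.5):
    """AP for one class.  detections: list of (score, box); each
    score-ranked detection is greedily matched to the best unmatched
    ground-truth box, counting a true positive if IoU >= threshold."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = [False] * len(ground_truths)
    tps, fps = [], []
    for _score, box in detections:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(ground_truths):
            if not matched[j]:
                overlap = iou(box, gt)
                if overlap > best_iou:
                    best_iou, best_j = overlap, j
        if best_iou >= iou_thresh:
            matched[best_j] = True
            tps.append(1); fps.append(0)
        else:
            tps.append(0); fps.append(1)
    # Integrate the precision-recall curve with a running sum.
    ap, tp_cum, fp_cum, prev_recall = 0.0, 0, 0, 0.0
    for tp, fp in zip(tps, fps):
        tp_cum += tp; fp_cum += fp
        recall = tp_cum / len(ground_truths)
        precision = tp_cum / (tp_cum + fp_cum)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

Averaging this per-class AP across all seven feature classes yields the mAP figure quoted above; production evaluations (e.g. COCO-style) additionally interpolate the precision-recall curve and average over several IoU thresholds.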
Related papers
- Using Explainable AI for EEG-based Reduced Montage Neonatal Seizure Detection [2.206534289238751]
The gold standard for neonatal seizure detection currently relies on continuous video-EEG monitoring.
A novel explainable deep learning model to automate the neonatal seizure detection process with a reduced EEG montage is proposed.
The presented model achieves an absolute improvement of 8.31% and 42.86% in area under the curve (AUC) and recall, respectively.
arXiv Detail & Related papers (2024-06-04T10:53:56Z)
- Breast Ultrasound Report Generation using LangChain [58.07183284468881]
We propose the integration of multiple image analysis tools through a LangChain using Large Language Models (LLM) into the breast reporting process.
Our method can accurately extract relevant features from ultrasound images, interpret them in a clinical context, and produce comprehensive and standardized reports.
arXiv Detail & Related papers (2023-12-05T00:28:26Z)
- Using Spatio-Temporal Dual-Stream Network with Self-Supervised Learning for Lung Tumor Classification on Radial Probe Endobronchial Ultrasound Video [0.0]
During the biopsy process of lung cancer, physicians use real-time ultrasound images to find suitable lesion locations for sampling.
Previous studies have employed 2D convolutional neural networks to effectively differentiate between benign and malignant lung lesions.
This study designs an automatic diagnosis system based on a 3D neural network.
arXiv Detail & Related papers (2023-05-04T10:39:37Z)
- covEcho Resource constrained lung ultrasound image analysis tool for faster triaging and active learning [2.4432369908176543]
A real-time, lightweight, active learning-based approach is presented for faster triaging of COVID-19 subjects.
The proposed tool has a mean average precision (mAP) of 66% at an Intersection over Union (IoU) threshold of 0.5 for the prediction of LUS landmarks.
The 14 MB lightweight YOLOv5s network achieves 123 FPS on a Quadro P4000 GPU.
arXiv Detail & Related papers (2022-06-21T08:38:45Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out-of-distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
The proposed pipeline achieved a Dice score of 0.642±0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859±0.112 while detecting artificially induced anomalies.
arXiv Detail & Related papers (2022-01-31T14:27:35Z)
- Learned super resolution ultrasound for improved breast lesion characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z)
- Detecting Hypo-plastic Left Heart Syndrome in Fetal Ultrasound via Disease-specific Atlas Maps [18.37280146564769]
We present an interpretable, atlas-learning segmentation method for automatic diagnosis of Hypo-plastic Left Heart Syndrome.
We propose to extend the recently introduced Image-and-Spatial Transformer Networks (Atlas-ISTN) into a framework that enables sensitising atlas generation to disease.
arXiv Detail & Related papers (2021-07-06T14:31:19Z)
- Learning the Imaging Landmarks: Unsupervised Key point Detection in Lung Ultrasound Videos [0.0]
Lung ultrasound (LUS) is an increasingly popular diagnostic imaging modality for continuous and periodic monitoring of lung infection.
Key landmarks assessed by clinicians for triaging using LUS are pleura, A and B lines.
This work is a first-of-its-kind attempt at unsupervised detection of the key LUS landmarks in LUS videos of COVID-19 subjects during various stages of infection.
arXiv Detail & Related papers (2021-06-13T13:27:12Z)
- In-Line Image Transformations for Imbalanced, Multiclass Computer Vision Classification of Lung Chest X-Rays [91.3755431537592]
This study aims to leverage a body of literature in order to apply image transformations that would serve to balance the lack of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above (including all listed details) and is not responsible for any consequences arising from its use.