The interpretation of endobronchial ultrasound image using 3D
convolutional neural network for differentiating malignant and benign
mediastinal lesions
- URL: http://arxiv.org/abs/2107.13820v2
- Date: Mon, 2 Aug 2021 04:48:55 GMT
- Title: The interpretation of endobronchial ultrasound image using 3D
convolutional neural network for differentiating malignant and benign
mediastinal lesions
- Authors: Ching-Kai Lin, Shao-Hua Wu, Jerry Chang, Yun-Chien Cheng
- Abstract summary: The purpose of this study is to differentiate malignant and benign lesions by using endobronchial ultrasound (EBUS) images.
Our model is robust to noise and able to fuse various imaging features and spatiotemporal features of EBUS videos.
- Score: 3.0969191504482247
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The purpose of this study is to differentiate malignant and benign
mediastinal lesions by using a three-dimensional convolutional neural network
on endobronchial ultrasound (EBUS) images. Compared with previous
studies, our proposed model is robust to noise and able to fuse various imaging
features and spatiotemporal features of EBUS videos. Endobronchial
ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a diagnostic
tool for intrathoracic lymph nodes. Physicians can observe the characteristics
of the lesion using grayscale mode, Doppler mode, and elastography during the
procedure. To process the EBUS data in the form of a video and appropriately
integrate the features of multiple imaging modes, we used a time-series
three-dimensional convolutional neural network (3D CNN) to learn the
spatiotemporal features and designed a variety of architectures to fuse each
imaging mode. Our model (Res3D_UDE) took grayscale mode, Doppler mode, and
elastography as training data and achieved an accuracy of 82.00% and an area
under the curve (AUC) of 0.83 on the validation set. Compared with previous
studies, we directly used videos recorded during the procedure as training and
validation data, without additional manual selection, which may make the method
easier to apply in clinical practice. In addition, a model designed with a 3D CNN
can also effectively learn spatiotemporal features and improve accuracy. In the
future, our model may be used to guide physicians to quickly and correctly find
the target lesions for slice sampling during the inspection process, reduce the
number of slices of benign lesions, and shorten the inspection time.
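The abstract describes learning spatiotemporal features from multi-mode EBUS video with a 3D CNN. As an illustration only, the sketch below shows the core operation such a model builds on: a single 3D convolution applied to a video tensor formed by stacking the three imaging modes (grayscale, Doppler, elastography) as channels, an early-fusion strategy. All shapes, names, and the fusion choice here are assumptions for illustration, not the paper's actual Res3D_UDE architecture.

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Valid-mode 3D convolution of one multi-channel video volume.

    volume: (C, T, H, W) array - channels x time x height x width
    kernel: (C, kT, kH, kW) array - one spatiotemporal filter
    returns: (T-kT+1, H-kH+1, W-kW+1) feature map
    """
    C, T, H, W = volume.shape
    _, kT, kH, kW = kernel.shape
    out = np.zeros((T - kT + 1, H - kH + 1, W - kW + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Sum over all channels and the local spatiotemporal patch
                patch = volume[:, t:t + kT, i:i + kH, j:j + kW]
                out[t, i, j] = np.sum(patch * kernel)
    return out

# Hypothetical early fusion: stack the three imaging modes as channels
# (16 frames of 64x64 pixels each; shapes chosen for illustration).
rng = np.random.default_rng(0)
gray = rng.random((16, 64, 64))
doppler = rng.random((16, 64, 64))
elasto = rng.random((16, 64, 64))
video = np.stack([gray, doppler, elasto])   # (3, 16, 64, 64)

kernel = rng.random((3, 3, 3, 3))           # one 3x3x3 filter over 3 channels
features = conv3d_single(video, kernel)     # (14, 62, 62)
print(features.shape)
```

Because the kernel spans the time axis as well as height and width, each output value mixes information across adjacent frames, which is what distinguishes a 3D CNN from frame-by-frame 2D processing; a full model would learn many such filters and stack them with nonlinearities and pooling.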
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Towards Enhanced Analysis of Lung Cancer Lesions in EBUS-TBNA -- A Semi-Supervised Video Object Detection Method [0.0]
This study aims to establish a computer-aided diagnostic system for lung lesions using endobronchial ultrasound (EBUS).
Previous research has lacked the application of object detection models to EBUS-TBNA.
arXiv Detail & Related papers (2024-04-02T13:23:21Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - On the Localization of Ultrasound Image Slices within Point Distribution
Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z) - Extremely weakly-supervised blood vessel segmentation with
physiologically based synthesis and domain adaptation [7.107236806113722]
Accurate analysis and modeling of renal functions require a precise segmentation of the renal blood vessels.
Deep-learning-based methods have shown state-of-the-art performance in automatic blood vessel segmentations.
We train a generative model on unlabeled scans and simulate synthetic renal vascular trees physiologically.
We demonstrate that the model can directly segment blood vessels on real scans and validate our method on both 3D micro-CT scans of rat kidneys and a proof-of-concept experiment on 2D retinal images.
arXiv Detail & Related papers (2023-05-26T16:01:49Z) - Using Spatio-Temporal Dual-Stream Network with Self-Supervised Learning
for Lung Tumor Classification on Radial Probe Endobronchial Ultrasound Video [0.0]
During the biopsy process of lung cancer, physicians use real-time ultrasound images to find suitable lesion locations for sampling.
Previous studies have employed 2D convolutional neural networks to effectively differentiate between benign and malignant lung lesions.
This study designs an automatic diagnosis system based on a 3D neural network.
arXiv Detail & Related papers (2023-05-04T10:39:37Z) - Improving Classification of Retinal Fundus Image Using Flow Dynamics
Optimized Deep Learning Methods [0.0]
Diabetic Retinopathy (DR) refers to a barrier that takes place in diabetes mellitus damaging the blood vessel network present in the retina.
Performing a DR diagnosis from color fundus pictures can take some time because experienced clinicians are required to identify the tumors in the imagery that indicate the illness.
arXiv Detail & Related papers (2023-04-29T16:11:34Z) - MIA-3DCNN: COVID-19 Detection Based on a 3D CNN [0.0]
Convolutional neural networks have been widely used to detect pneumonia caused by COVID-19 in lung images.
This work describes an architecture based on 3D convolutional neural networks for detecting COVID-19 in computed tomography images.
arXiv Detail & Related papers (2023-03-19T18:55:22Z) - Explainable multiple abnormality classification of chest CT volumes with
AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z) - Automated Model Design and Benchmarking of 3D Deep Learning Models for
COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.