Using Spatio-Temporal Dual-Stream Network with Self-Supervised Learning
for Lung Tumor Classification on Radial Probe Endobronchial Ultrasound Video
- URL: http://arxiv.org/abs/2305.02719v2
- Date: Sun, 7 May 2023 02:43:11 GMT
- Title: Using Spatio-Temporal Dual-Stream Network with Self-Supervised Learning
for Lung Tumor Classification on Radial Probe Endobronchial Ultrasound Video
- Authors: Ching-Kai Lin, Chin-Wen Chen, Yun-Chien Cheng
- Abstract summary: During the biopsy process of lung cancer, physicians use real-time ultrasound images to find suitable lesion locations for sampling.
Previous studies have employed 2D convolutional neural networks to effectively differentiate between benign and malignant lung lesions.
This study designs an automatic diagnosis system based on a 3D neural network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The purpose of this study is to develop a computer-aided diagnosis system for
classifying benign and malignant lung lesions, and to assist physicians in
real-time analysis of radial probe endobronchial ultrasound (EBUS) videos.
During the biopsy process of lung cancer, physicians use real-time ultrasound
images to find suitable lesion locations for sampling. However, most of these
images are difficult to classify and contain substantial noise. Previous studies
have employed 2D convolutional neural networks to effectively differentiate
between benign and malignant lung lesions, but doctors still need to manually
select good-quality images, which can result in additional labor costs. In
addition, a 2D neural network cannot capture the temporal information in
ultrasound video, making it difficult to model the relationships between
features of consecutive frames. This study designs
an automatic diagnosis system based on a 3D neural network, uses the SlowFast
architecture as the backbone to fuse temporal and spatial features, and uses
the SwAV method of contrastive learning to enhance the noise robustness of the
model. The proposed method offers the following advantages: (1) it takes
clinical ultrasound videos as model input, reducing the need for physicians
to select high-quality images; (2) its high-accuracy classification of benign
and malignant lung lesions can assist doctors in clinical diagnosis and reduce
the time and risk of surgery; and (3) it classifies well even in the presence
of significant image noise. The AUC, accuracy, precision,
recall and specificity of our proposed method on the validation set reached
0.87, 83.87%, 86.96%, 90.91% and 66.67%, respectively. The results have
verified the importance of incorporating temporal information and the
effectiveness of using the method of contrastive learning on feature
extraction.
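The abstract describes a dual-stream (SlowFast-style) 3D backbone that fuses spatial and temporal features from EBUS clips, with SwAV-style contrastive pre-training for noise robustness. Below is a minimal illustrative sketch in PyTorch of such a dual-stream video classifier; it is not the authors' implementation, and the layer sizes, temporal subsampling factor, number of prototypes, and class head are assumptions made for illustration only.
```python
# Illustrative sketch (not the authors' code): a minimal dual-stream
# ("slow"/"fast") 3D-CNN video classifier in the spirit of the SlowFast
# backbone described in the abstract. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv3d_block(in_ch, out_ch, stride):
    """3D conv -> batch norm -> ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualStreamEBUSClassifier(nn.Module):
    """Two pathways over the same EBUS clip:
    - slow pathway: few frames, more channels (spatial detail)
    - fast pathway: all frames, fewer channels (temporal detail)
    Features from both pathways are fused and fed to a benign/malignant head.
    """

    def __init__(self, num_classes=2, alpha=4):
        super().__init__()
        self.alpha = alpha  # temporal subsampling factor for the slow path
        self.slow = nn.Sequential(
            conv3d_block(1, 64, stride=(1, 2, 2)),
            conv3d_block(64, 128, stride=(1, 2, 2)),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fast = nn.Sequential(
            conv3d_block(1, 8, stride=(1, 2, 2)),
            conv3d_block(8, 16, stride=(1, 2, 2)),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(128 + 16, num_classes)
        # SwAV-style prototype head, assumed to be used only during
        # self-supervised pre-training on unlabeled clips.
        self.prototypes = nn.Linear(128 + 16, 256, bias=False)

    def forward(self, clip):
        # clip: (batch, 1, frames, height, width) grayscale ultrasound video
        slow_in = clip[:, :, ::self.alpha]          # temporally subsampled
        slow = self.slow(slow_in).flatten(1)        # (batch, 128)
        fast = self.fast(clip).flatten(1)           # (batch, 16)
        fused = torch.cat([slow, fast], dim=1)      # spatio-temporal fusion
        return self.classifier(fused)

    def prototype_scores(self, clip):
        # For SwAV-style pre-training: L2-normalised embedding vs. prototypes.
        slow = self.slow(clip[:, :, ::self.alpha]).flatten(1)
        fast = self.fast(clip).flatten(1)
        z = F.normalize(torch.cat([slow, fast], dim=1), dim=1)
        return self.prototypes(z)


if __name__ == "__main__":
    model = DualStreamEBUSClassifier()
    dummy_clip = torch.randn(2, 1, 32, 112, 112)    # 2 clips, 32 frames each
    logits = model(dummy_clip)
    print(logits.shape)                             # torch.Size([2, 2])
```
In this sketch the slow pathway sees a temporally subsampled clip with more channels (spatial detail) while the fast pathway sees every frame with fewer channels (temporal detail), mirroring the SlowFast design; the prototype head would only be used during SwAV-style pre-training, after which the fused features feed the benign/malignant classifier.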
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Two-Stage Deep Learning Framework for Quality Assessment of Left Atrial Late Gadolinium Enhanced MRI Images [0.22585387137796725]
We propose a two-stage deep-learning approach for automated LGE-MRI image diagnostic quality assessment.
The method includes a left atrium detector to focus on relevant regions and a deep network to evaluate diagnostic quality.
arXiv Detail & Related papers (2023-10-13T01:27:36Z)
- Swin-Tempo: Temporal-Aware Lung Nodule Detection in CT Scans as Video Sequences Using Swin Transformer-Enhanced UNet [2.7547288571938795]
We present an innovative model that harnesses the strengths of both convolutional neural networks and vision transformers.
Inspired by object detection in videos, we treat each 3D CT image as a video, individual slices as frames, and lung nodules as objects, enabling a time-series application.
arXiv Detail & Related papers (2023-10-05T07:48:55Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Intra-operative Brain Tumor Detection with Deep Learning-Optimized Hyperspectral Imaging [37.21885467891782]
Surgery for gliomas (intrinsic brain tumors) is challenging due to the infiltrative nature of the lesion.
No real-time, intra-operative, label-free and wide-field tool is available to assist and guide the surgeon to find the relevant demarcations for these tumors.
We build a deep-learning-based diagnostic tool for cancer resection with potential for intra-operative guidance.
arXiv Detail & Related papers (2023-02-06T15:52:03Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- A Deep Learning Approach to Predicting Collateral Flow in Stroke Patients Using Radiomic Features from Perfusion Images [58.17507437526425]
Collateral circulation results from specialized anastomotic channels which provide oxygenated blood to regions with compromised blood flow.
The actual grading is mostly done through manual inspection of the acquired images.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data.
arXiv Detail & Related papers (2021-10-24T18:58:40Z)
- The interpretation of endobronchial ultrasound image using 3D convolutional neural network for differentiating malignant and benign mediastinal lesions [3.0969191504482247]
The purpose of this study is to differentiate malignant and benign lesions by using endobronchial ultrasound (EBUS) images.
Our model is robust to noise and able to fuse various imaging features and aspiration of EBUS videos.
arXiv Detail & Related papers (2021-07-29T08:38:17Z)
- Computer-aided Tumor Diagnosis in Automated Breast Ultrasound using 3D Detection Network [18.31577982955252]
The efficacy of our network is verified from a collected dataset of 418 patients with 145 benign tumors and 273 malignant tumors.
Experiments show our network attains a sensitivity of 97.66% with 1.23 false positives (FPs), and has an area under the curve (AUC) value of 0.8720.
arXiv Detail & Related papers (2020-07-31T15:25:07Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)