Automatic nodule identification and differentiation in ultrasound videos
to facilitate per-nodule examination
- URL: http://arxiv.org/abs/2310.06339v1
- Date: Tue, 10 Oct 2023 06:20:14 GMT
- Title: Automatic nodule identification and differentiation in ultrasound videos
to facilitate per-nodule examination
- Authors: Siyuan Jiang, Yan Ding, Yuling Wang, Lei Xu, Wenli Dai, Wanru Chang,
Jianfeng Zhang, Jie Yu, Jianqiao Zhou, Chunquan Zhang, Ping Liang, Dexing
Kong
- Abstract summary: Sonographers usually discriminate different nodules by examining the nodule features and the surrounding structures.
We built a re-identification system that consists of two parts: a deep-learning-based extractor that produces feature vectors from input video clips, and a real-time clustering algorithm that automatically groups the feature vectors by nodule.
- Score: 12.75726717324889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound is a vital diagnostic technique in health screening. Because it is
non-invasive, cost-effective, and radiation-free, it is widely applied in the
diagnosis of nodules. However, it relies heavily on the expertise and clinical
experience of the sonographer. In ultrasound images, a single nodule may
present heterogeneous appearances in different cross-sectional views, which
makes per-nodule examination difficult. Sonographers usually discriminate
between nodules by examining nodule features and the surrounding structures
such as glands and ducts, which is cumbersome and time-consuming. To address
this problem, we collected hundreds of breast ultrasound videos and built a
nodule re-identification system that consists of two parts: a deep-learning-based
extractor that produces a feature vector from each input video clip, and a
real-time clustering algorithm that automatically groups the feature vectors by
nodule. The system obtains satisfactory results and exhibits the capability to
differentiate ultrasound videos. To the best of our knowledge, this is the
first attempt to apply the re-identification technique to the ultrasound field.
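A minimal sketch of how the described two-part pipeline could be wired up, assuming a ResNet-style clip encoder and a cosine-similarity threshold for the online clustering (neither choice is specified in the abstract):

```python
# Sketch only: hypothetical clip encoder + online clustering by cosine similarity.
import torch
import torch.nn.functional as F
import torchvision.models as models

class ClipEncoder(torch.nn.Module):
    """Extracts one feature vector per video clip by average-pooling frame embeddings."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = torch.nn.Identity()                   # keep the 512-d pooled feature
        self.backbone = backbone

    def forward(self, clip):                                # clip: (T, 3, H, W)
        frame_feats = self.backbone(clip)                   # (T, 512)
        return F.normalize(frame_feats.mean(dim=0), dim=0)  # one unit-norm vector per clip

def assign_nodule_id(feature, centroids, threshold=0.7):
    """Online clustering: reuse the closest centroid above the threshold, else open a new cluster."""
    if centroids:
        sims = torch.stack([feature @ c for c in centroids])
        best = int(sims.argmax())
        if sims[best] > threshold:
            centroids[best] = F.normalize((centroids[best] + feature) / 2, dim=0)
            return best
    centroids.append(feature)
    return len(centroids) - 1

encoder, centroids = ClipEncoder().eval(), []
with torch.no_grad():
    for clip in [torch.rand(8, 3, 224, 224) for _ in range(3)]:  # stand-in clips
        print("clip assigned to nodule", assign_nodule_id(encoder(clip), centroids))
```

In a real deployment the extractor would be trained with a re-identification objective (e.g. a metric-learning loss) so that clips of the same nodule map to nearby feature vectors; the threshold then controls how aggressively clips are merged.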
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
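One common way optical flow can provide self-supervision for segmentation is to pseudo-label moving pixels; the Farneback flow and magnitude threshold below are illustrative assumptions, not the CathFlow recipe:

```python
# Hypothetical pseudo-labelling step: mark pixels with large flow magnitude
# (e.g. a catheter being advanced) as foreground for training a segmenter.
import cv2
import numpy as np

def flow_pseudo_mask(prev_frame, next_frame, mag_threshold=1.0):
    flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=-1)        # per-pixel motion magnitude
    return (magnitude > mag_threshold).astype(np.float32)

prev_frame = np.random.randint(0, 255, (256, 256), dtype=np.uint8)  # stand-in frames
next_frame = np.random.randint(0, 255, (256, 256), dtype=np.uint8)
print("pseudo-mask foreground fraction:", flow_pseudo_mask(prev_frame, next_frame).mean())
```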
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Breast Ultrasound Report Generation using LangChain [58.07183284468881]
We propose the integration of multiple image analysis tools, orchestrated through LangChain with Large Language Models (LLMs), into the breast reporting process.
Our method can accurately extract relevant features from ultrasound images, interpret them in a clinical context, and produce comprehensive and standardized reports.
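A hedged sketch of chaining image-analysis tools into an LLM prompt for report drafting; the tool functions and `llm_complete` client below are hypothetical placeholders, not the toolchain used in the paper:

```python
# Illustrative pipeline: tool outputs -> structured findings -> LLM-drafted report.
def detect_lesions(image_path):            # hypothetical detector tool
    return [{"location": "left breast, 2 o'clock", "size_mm": 8}]

def classify_birads(lesion):               # hypothetical BI-RADS classifier tool
    return "BI-RADS 3"

def llm_complete(prompt):                  # hypothetical LLM client
    return "DRAFT REPORT\n" + prompt

def generate_report(image_path):
    findings = [f"{l['location']}, {l['size_mm']} mm, {classify_birads(l)}"
                for l in detect_lesions(image_path)]
    prompt = ("Write a standardized breast ultrasound report for these findings:\n"
              + "\n".join(findings))
    return llm_complete(prompt)

print(generate_report("exam_001.png"))
```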
arXiv Detail & Related papers (2023-12-05T00:28:26Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which
uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution that adapts a state-of-the-art machine learning
transformer architecture to detect and segment catheters in axial interventional
ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Breast Lesion Diagnosis Using Static Images and Dynamic Video [12.71602984461284]
We propose a multi-modality breast tumor diagnosis model that imitates radiologists' diagnostic process.
Our work is validated on a breast ultrasound dataset composed of 897 sets of ultrasound images and videos.
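A minimal sketch of one way to fuse a static-image branch and a video branch for diagnosis; the ResNet-18 backbones and fusion by concatenation are illustrative choices, not the paper's architecture:

```python
# Two-branch toy model: image feature + temporally pooled video feature -> classifier.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageVideoClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.image_branch = models.resnet18(weights=None)
        self.image_branch.fc = nn.Identity()           # 512-d image feature
        self.video_branch = models.resnet18(weights=None)
        self.video_branch.fc = nn.Identity()           # 512-d per-frame feature
        self.head = nn.Linear(512 + 512, num_classes)

    def forward(self, image, video):                   # image: (B,3,H,W), video: (B,T,3,H,W)
        img_feat = self.image_branch(image)
        b, t = video.shape[:2]
        vid_feat = self.video_branch(video.flatten(0, 1)).view(b, t, -1).mean(dim=1)
        return self.head(torch.cat([img_feat, vid_feat], dim=1))

model = ImageVideoClassifier()
logits = model(torch.rand(2, 3, 224, 224), torch.rand(2, 4, 3, 224, 224))
print(logits.shape)                                    # (2, 2) benign/malignant scores
```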
arXiv Detail & Related papers (2023-08-19T11:09:58Z)
- Key-frame Guided Network for Thyroid Nodule Recognition using Ultrasound
Videos [13.765306481109988]
This paper proposes a novel method for the automated recognition of thyroid nodules through an exploration of ultrasound videos and key-frames.
We first propose a detection-localization framework to automatically identify the clinical key-frame with a typical nodule in each ultrasound video.
Based on the localized key-frame, we develop a key-frame guided video classification model for thyroid recognition.
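A simplified sketch of such a key-frame guided pipeline: score every frame for nodule visibility, pick the highest-scoring frame, and classify it. The two ResNet-based networks are placeholders, not the paper's models:

```python
# Toy two-stage pipeline: frame scoring -> key-frame selection -> classification.
import torch
import torch.nn as nn
import torchvision.models as models

frame_scorer = models.resnet18(weights=None)
frame_scorer.fc = nn.Linear(512, 1)           # "is this a clinical key-frame?" score
classifier = models.resnet18(weights=None)
classifier.fc = nn.Linear(512, 2)             # benign vs. malignant

video = torch.rand(16, 3, 224, 224)           # stand-in ultrasound video with T=16 frames
with torch.no_grad():
    scores = frame_scorer(video).squeeze(1)   # (T,)
    key_frame = video[scores.argmax()].unsqueeze(0)
    logits = classifier(key_frame)
print("key-frame index:", int(scores.argmax()), "logits:", logits)
```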
arXiv Detail & Related papers (2022-06-27T14:03:26Z)
- Factored Attention and Embedding for Unstructured-view Topic-related
Ultrasound Report Generation [70.7778938191405]
We propose a novel factored attention and embedding model (termed FAE-Gen) for the unstructured-view topic-related ultrasound report generation.
The proposed FAE-Gen mainly consists of two modules, i.e., view-guided factored attention and topic-oriented factored embedding, which capture the homogeneous and heterogeneous morphological characteristics across different views.
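As a rough illustration of view-level attention guided by a topic query (a simplification for intuition, not the FAE-Gen modules themselves):

```python
# A topic embedding attends over per-view features to produce a
# topic-conditioned visual summary; dimensions are arbitrary.
import torch
import torch.nn as nn

view_feats = torch.rand(1, 4, 256)        # 4 ultrasound views, 256-d feature each
topic_query = torch.rand(1, 1, 256)       # embedding of the report topic
attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
summary, weights = attn(topic_query, view_feats, view_feats)
print(summary.shape, weights.shape)       # (1, 1, 256), (1, 1, 4)
```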
arXiv Detail & Related papers (2022-03-12T15:24:03Z)
- Learning Ultrasound Scanning Skills from Human Demonstrations [6.971573270058377]
We propose a learning-based framework to acquire ultrasound scanning skills from human demonstrations.
The parameters of the model are learned using the data collected from skilled sonographers' demonstrations.
The robustness of the proposed framework is validated with the experiments on real data from sonographers.
arXiv Detail & Related papers (2021-11-09T12:29:25Z)
- Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and
Guided Explorations [12.894853456160924]
We propose a learning-based approach to learn the robotic ultrasound scanning skills from human demonstrations.
First, the robotic ultrasound scanning skill is encapsulated into a high-dimensional multi-modal model, which takes the ultrasound images, the pose/position of the probe and the contact force into account.
Second, we leverage the power of imitation learning to train the multi-modal model with the training data collected from the demonstrations of experienced ultrasound physicians.
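A minimal behavioural-cloning sketch of such a multi-modal model, mapping (image, probe pose, contact force) to a probe-motion command; the layer sizes and 6-DoF action are illustrative assumptions:

```python
# Toy policy trained by imitation: regress the expert's probe motion from
# the current image, probe pose, and contact force.
import torch
import torch.nn as nn
import torchvision.models as models

class ScanPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_encoder = models.resnet18(weights=None)
        self.image_encoder.fc = nn.Identity()            # 512-d image feature
        self.state_encoder = nn.Linear(6 + 3, 64)        # probe pose (6) + contact force (3)
        self.head = nn.Linear(512 + 64, 6)               # next probe motion (6-DoF)

    def forward(self, image, pose, force):
        img = self.image_encoder(image)
        state = torch.relu(self.state_encoder(torch.cat([pose, force], dim=1)))
        return self.head(torch.cat([img, state], dim=1))

policy = ScanPolicy()
image, pose, force = torch.rand(4, 3, 224, 224), torch.rand(4, 6), torch.rand(4, 3)
expert_action = torch.rand(4, 6)                         # stand-in demonstration labels
loss = nn.functional.mse_loss(policy(image, pose, force), expert_action)
loss.backward()
print("behavioural-cloning loss:", float(loss))
```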
arXiv Detail & Related papers (2021-11-02T14:38:09Z)
- Pediatric Otoscopy Video Screening with Shift Contrastive Anomaly
Detection [4.922640055654283]
We present a two-stage method that first identifies valid frames by detecting and
extracting ear-drum patches from the video sequence, and then performs the proposed
shift contrastive anomaly detection to flag each otoscopy video sequence as normal
or abnormal.
Our method achieves an AUROC of 88.0% on the patient-level and also outperforms the average of a group of 25 clinicians in a comparative study.
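A simplified stand-in for the second stage: embed ear-drum patches and score a video by its distance to a bank of embeddings from normal training patches (the shift-contrastive objective itself is not reproduced here):

```python
# Embedding-distance anomaly scoring with an untrained placeholder encoder.
import torch
import torch.nn.functional as F
import torchvision.models as models

encoder = models.resnet18(weights=None)
encoder.fc = torch.nn.Identity()
encoder.eval()

with torch.no_grad():
    normal_bank = F.normalize(encoder(torch.rand(32, 3, 224, 224)), dim=1)  # normal training patches
    test_patches = F.normalize(encoder(torch.rand(8, 3, 224, 224)), dim=1)  # patches from one test video
    # Anomaly score per patch: 1 - cosine similarity to the nearest normal embedding.
    scores = 1.0 - (test_patches @ normal_bank.T).max(dim=1).values
    video_score = scores.mean()
print("abnormal" if video_score > 0.5 else "normal", float(video_score))
```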
arXiv Detail & Related papers (2021-10-25T20:39:28Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification
using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
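A toy sketch of fusing an image with the transcribed comment for label prediction; the hashed bag-of-words text encoder and the 5-way head (matching the 5-label dataset above) are simplifications of the cited pipeline:

```python
# Fuse a CNN image feature with a hashed bag-of-words encoding of the comment.
import torch
import torch.nn as nn
import torchvision.models as models

VOCAB_BUCKETS = 64

def encode_comment(comment, buckets=VOCAB_BUCKETS):
    """Hashed bag-of-words encoding of the clinician's verbal comment."""
    vec = torch.zeros(buckets)
    for word in comment.lower().split():
        vec[hash(word) % buckets] += 1.0
    return vec

image_encoder = models.resnet18(weights=None)
image_encoder.fc = nn.Identity()                         # 512-d image feature
head = nn.Linear(512 + VOCAB_BUCKETS, 5)                 # 5 labels, as in the dataset above

image = torch.rand(1, 3, 224, 224)                       # stand-in EUS frame
text = encode_comment("liver left lobe").unsqueeze(0)    # hypothetical comment
with torch.no_grad():
    logits = head(torch.cat([image_encoder(image), text], dim=1))
print(logits.shape)                                      # (1, 5)
```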
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
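For context, a minimal delay-and-sum (DAS) receive beamformer, the conventional baseline that such deep-learning methods augment or replace; the array geometry, sampling rate, and crude plane-wave transmit assumption are illustrative:

```python
# Conventional delay-and-sum beamforming of one pixel from raw channel data.
import numpy as np

def delay_and_sum(rf, element_x, fs, c, x, z):
    """Beamform the pixel at lateral position x and depth z (metres).
    rf: (n_elements, n_samples) received channel data."""
    t_receive = np.sqrt(z**2 + (x - element_x) ** 2) / c   # per-element receive delay [s]
    t_total = z / c + t_receive                            # simple plane-wave transmit delay
    idx = np.clip(np.round(t_total * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()           # coherent sum across the aperture

fs, c = 40e6, 1540.0                                       # sampling rate [Hz], speed of sound [m/s]
element_x = np.linspace(-0.019, 0.019, 128)                # 128-element linear array [m]
rf = np.random.randn(128, 4096)                            # stand-in channel data
print("beamformed sample at (0 mm, 30 mm):", delay_and_sum(rf, element_x, fs, c, x=0.0, z=0.03))
```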
arXiv Detail & Related papers (2021-09-23T15:15:21Z)