Training-free image style alignment for self-adapting domain shift on
handheld ultrasound devices
- URL: http://arxiv.org/abs/2402.11211v1
- Date: Sat, 17 Feb 2024 07:15:23 GMT
- Title: Training-free image style alignment for self-adapting domain shift on
handheld ultrasound devices
- Authors: Hongye Zeng, Ke Zou, Zhihao Chen, Yuchong Gao, Hongbo Chen, Haibin
Zhang, Kang Zhou, Meng Wang, Rick Siow Mong Goh, Yong Liu, Chang Jiang, Rui
Zheng, Huazhu Fu
- Abstract summary: We propose the Training-free Image Style Alignment (TISA) framework to align the style of handheld device data to that of standard devices.
TISA can directly infer handheld device images without extra training and is suited for clinical applications.
- Score: 54.476120039032594
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Handheld ultrasound devices face usage limitations due to user inexperience
and cannot benefit from supervised deep learning without extensive expert
annotations. Moreover, the models trained on standard ultrasound device data
are constrained by training data distribution and perform poorly when directly
applied to handheld device data. In this study, we propose the Training-free
Image Style Alignment (TISA) framework to align the style of handheld device
data to that of standard devices. The proposed TISA can directly infer
handheld device images without extra training and is suited for clinical
applications. We show that TISA performs better and more stably in medical
detection and segmentation tasks for handheld device data. We further validate
TISA as the clinical model for automatic measurements of spinal curvature and
carotid intima-media thickness. The automatic measurements agree well with
manual measurements made by human experts and the measurement errors remain
within clinically acceptable ranges. We demonstrate the potential for TISA to
facilitate automatic diagnosis on handheld ultrasound devices and expedite
their eventual widespread use.
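The abstract does not detail TISA's alignment mechanism. As a minimal, hypothetical sketch of the general idea of training-free style alignment, one could map the intensity distribution of a handheld-device image onto that of a standard-device reference image via histogram matching; the function name and the use of histogram matching here are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source pixel intensities so their distribution matches the
    reference image's, without any model training (illustrative only)."""
    # Unique source intensities, their positions, and their frequencies.
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference intensity at the
    # same quantile, then scatter the mapped values back into the image.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)
```

A standard-device model could then be applied to the remapped handheld image directly, with no extra training, which is the spirit of the training-free setting described above.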
Related papers
- Automated Patient Positioning with Learned 3D Hand Gestures [29.90000893655248]
We propose an automated patient positioning system that utilizes a camera to detect specific hand gestures from technicians.
Our approach relies on a novel multi-stage pipeline to recognize and interpret the technicians' gestures.
Results show that our system achieves accurate and precise patient positioning with minimal technician intervention.
arXiv Detail & Related papers (2024-07-20T15:32:24Z)
- Unlocking Telemetry Potential: Self-Supervised Learning for Continuous Clinical Electrocardiogram Monitoring [0.0]
This paper applies deep learning to a large volume of unlabeled electrocardiogram (ECG) telemetry signals.
We applied self-supervised learning to pretrain a spectrum of deep networks on approximately 147,000 hours of ECG telemetry data.
arXiv Detail & Related papers (2024-06-07T18:00:00Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- AI-Assisted Cervical Cancer Screening [0.7124971549479362]
Visual Inspection with Acetic Acid (VIA) remains the most feasible cervical cancer screening test in resource-constrained settings of low- and middle-income countries (LMICs).
Various handheld devices integrating cameras or smartphones have been recently explored to capture cervical images during VIA and aid decision-making via telemedicine or AI models.
We present a novel approach and describe the end-to-end design process to build a robust smartphone-based AI-assisted system that does not require buying a separate integrated device.
arXiv Detail & Related papers (2024-03-18T16:34:38Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- A Deep Learning Localization Method for Measuring Abdominal Muscle Dimensions in Ultrasound Images [2.309018557701645]
Two-Dimensional (2D) Ultrasound (US) images can be used to measure abdominal muscle dimensions for the diagnosis and creation of customized treatment plans for patients with Low Back Pain (LBP).
Due to high variability, skilled professionals with specialized training are required to take measurements to avoid low intra-observer reliability.
In this paper, we use a Deep Learning (DL) approach to automate the measurement of the abdominal muscle thickness in 2D US images.
arXiv Detail & Related papers (2021-09-30T08:36:50Z)
- Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
arXiv Detail & Related papers (2020-12-16T12:10:04Z)
- Automatic Probe Movement Guidance for Freehand Obstetric Ultrasound [4.893896929103368]
The system employs an artificial neural network that receives the ultrasound video signal and the motion signal of an inertial measurement unit (IMU) that is attached to the probe.
The network termed US-GuideNet predicts either the movement towards the standard plane position (goal prediction), or the next movement that an expert sonographer would perform.
arXiv Detail & Related papers (2020-07-08T23:58:41Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.