Transducer Adaptive Ultrasound Volume Reconstruction
- URL: http://arxiv.org/abs/2011.08419v1
- Date: Tue, 17 Nov 2020 04:46:57 GMT
- Title: Transducer Adaptive Ultrasound Volume Reconstruction
- Authors: Hengtao Guo, Sheng Xu, Bradford J. Wood, Pingkun Yan
- Abstract summary: 3D volume reconstruction from freehand 2D scans is a very challenging problem, especially without the use of external tracking devices.
Recent deep learning based methods demonstrate the potential of directly estimating inter-frame motion between consecutive ultrasound frames.
We propose a novel domain adaptation strategy to adapt deep learning algorithms to data acquired with different transducers.
- Score: 17.19369561039399
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reconstructed 3D ultrasound volume provides more context information compared
to a sequence of 2D scanning frames, which is desirable for various clinical
applications such as ultrasound-guided prostate biopsy. Nevertheless, 3D volume
reconstruction from freehand 2D scans is a very challenging problem, especially
without the use of external tracking devices. Recent deep learning based
methods demonstrate the potential of directly estimating inter-frame motion
between consecutive ultrasound frames. However, such algorithms are specific to
particular transducers and scanning trajectories associated with the training
data, which may not be generalized to other image acquisition settings. In this
paper, we tackle the data acquisition difference as a domain shift problem and
propose a novel domain adaptation strategy to adapt deep learning algorithms to
data acquired with different transducers. Specifically, feature extractors that
generate transducer-invariant features from different datasets are trained by
minimizing the discrepancy between deep features of paired samples in a latent
space. Our results show that the proposed domain adaptation method can
successfully align different feature distributions while preserving the
transducer-specific information for universal freehand ultrasound volume
reconstruction.
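The core idea above, training feature extractors so that deep features of paired frames from two transducers align in a shared latent space while a common head still solves the motion-estimation task, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the network shapes, the MSE feature-discrepancy term, and the 6-DoF motion head are assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch (assumptions): one feature extractor per transducer, trained so that
# paired frame stacks produce nearby latent features, plus a shared head that regresses
# inter-frame motion. Loss weighting and tensor shapes are illustrative.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),   # two stacked consecutive frames
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, frame_pair):                    # frame_pair: (B, 2, H, W)
        return self.fc(self.conv(frame_pair).flatten(1))

class MotionHead(nn.Module):
    def __init__(self, latent_dim=256, dof=6):
        super().__init__()
        self.fc = nn.Linear(latent_dim, dof)          # 6-DoF inter-frame motion

    def forward(self, z):
        return self.fc(z)

extractor_a = FeatureExtractor()                      # source transducer
extractor_b = FeatureExtractor()                      # target transducer
head = MotionHead()
opt = torch.optim.Adam(
    list(extractor_a.parameters()) + list(extractor_b.parameters()) + list(head.parameters()),
    lr=1e-4,
)
mse = nn.MSELoss()

def train_step(pair_a, pair_b, motion_gt, lam=1.0):
    """pair_a/pair_b: paired frame stacks from the two transducers; motion_gt: tracked labels."""
    z_a, z_b = extractor_a(pair_a), extractor_b(pair_b)
    # task loss on the source domain + latent feature-discrepancy loss on paired samples
    loss = mse(head(z_a), motion_gt) + lam * mse(z_a, z_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In this reading, the discrepancy term pulls the two feature distributions together while the motion loss keeps the shared latent space informative for reconstruction, which mirrors the trade-off described in the abstract.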
Related papers
- Standardisation of Convex Ultrasound Data Through Geometric Analysis and Augmentation [5.87276808100259]
Ultrasound research and development has historically lagged, particularly in the case of applications with data-driven algorithms.
A significant issue with ultrasound is the extreme variability of the images, due to the number of different machines available.
The method proposed in this article is an approach to alleviating this variability and lack of standardisation.
arXiv Detail & Related papers (2025-02-13T16:45:39Z)
- Enhancing Free-hand 3D Photoacoustic and Ultrasound Reconstruction using Deep Learning [3.8426872518410997]
This study introduces a motion-based learning network with a global-local self-attention module (MoGLo-Net) to enhance 3D reconstruction in handheld photoacoustic and ultrasound (PAUS) imaging.
MoGLo-Net exploits critical regions within successive ultrasound images, such as fully developed speckle areas or highly echogenic tissue areas, to accurately estimate motion parameters.
arXiv Detail & Related papers (2025-02-05T11:59:23Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Privileged Anatomical and Protocol Discrimination in Trackerless 3D Ultrasound Reconstruction [18.351571641356195]
Three-dimensional (3D) freehand ultrasound (US) reconstruction without using any additional external tracking device has seen recent advances with deep neural networks (DNNs).
We first investigate two contributing factors of the learned inter-frame correlation that enable DNN-based reconstruction: anatomy and protocol.
We propose to incorporate the ability to represent these two factors as privileged information to improve existing DNN-based methods.
arXiv Detail & Related papers (2023-08-20T15:30:20Z)
- DA-VSR: Domain Adaptable Volumetric Super-Resolution For Medical Images [69.63915773870758]
We present a novel algorithm called domain adaptable super-resolution (DA-VSR) to better bridge the domain inconsistency gap.
DA-VSR uses a unified feature extraction backbone and a series of network heads to improve image quality over different planes.
We demonstrate that DA-VSR significantly improves super-resolution quality across numerous datasets of different domains.
arXiv Detail & Related papers (2022-10-11T03:16:35Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Comparison of Representation Learning Techniques for Tracking in time resolved 3D Ultrasound [0.7734726150561088]
3D ultrasound (3DUS) is becoming increasingly attractive for target tracking in radiation therapy due to its capability to provide volumetric images in real time without using ionizing radiation.
For this, a method for learning meaningful representations would be useful for recognizing anatomical structures across time frames in representation space (r-space).
In this study, 3DUS patches are reduced to a 128-dimensional r-space using a conventional autoencoder, a variational autoencoder, and a sliced-Wasserstein autoencoder.
arXiv Detail & Related papers (2022-01-10T12:38:22Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline; for orientation, a minimal delay-and-sum sketch follows this list.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- End-to-end Ultrasound Frame to Volume Registration [9.738024231762465]
We propose an end-to-end frame-to-volume registration network (FVR-Net) that aligns a 2D ultrasound frame with a 3D ultrasound volume.
Our model shows superior efficiency for real-time interventional guidance with highly competitive registration accuracy.
arXiv Detail & Related papers (2021-07-14T01:59:42Z)
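As a point of reference for the beamforming entry above: conventional delay-and-sum beamforming, the classical baseline that learned beamformers aim to improve on, coherently sums channel data after applying geometric delays. The NumPy sketch below assumes a single plane-wave transmit at normal incidence; the array geometry, sampling rate, and speed of sound are illustrative values, not taken from any of the listed papers.

```python
# Minimal delay-and-sum sketch (illustrative geometry and constants).
import numpy as np

def delay_and_sum(rf, elem_x, grid_x, grid_z, fs, c=1540.0):
    """rf: (n_elements, n_samples) receive channel data for one plane-wave shot.
    elem_x: element lateral positions (m); grid_x/grid_z: image grid coordinates (m);
    fs: sampling rate (Hz); c: assumed speed of sound (m/s)."""
    n_elem, n_samp = rf.shape
    image = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # transmit delay (plane wave, normal incidence) plus per-element receive delay
            t_rx = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
            samples = (z / c + t_rx) * fs
            idx = np.clip(samples.astype(int), 0, n_samp - 1)
            image[iz, ix] = rf[np.arange(n_elem), idx].sum()   # coherent sum across channels
    return image

# Usage with synthetic shapes: 128-element, ~38 mm aperture, 40 MHz sampling.
rf = np.random.randn(128, 2048)
elem_x = np.linspace(-0.019, 0.019, 128)
img = delay_and_sum(rf, elem_x,
                    np.linspace(-0.015, 0.015, 64),
                    np.linspace(0.005, 0.045, 64),
                    fs=40e6)
```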