Epicardium Prompt-guided Real-time Cardiac Ultrasound Frame-to-volume Registration
- URL: http://arxiv.org/abs/2406.14534v2
- Date: Fri, 28 Jun 2024 02:12:20 GMT
- Title: Epicardium Prompt-guided Real-time Cardiac Ultrasound Frame-to-volume Registration
- Authors: Long Lei, Jun Zhou, Jialun Pei, Baoliang Zhao, Yueming Jin, Yuen-Chun Jeremy Teoh, Jing Qin, Pheng-Ann Heng
- Abstract summary: This paper introduces a lightweight end-to-end Cardiac Ultrasound frame-to-volume Registration network, termed CU-Reg.
We use epicardium prompt-guided anatomical clues to reinforce the interaction of 2D sparse and 3D dense features, followed by a voxel-wise local-global aggregation of enhanced features.
- Score: 50.602074919305636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time fusion of intraoperative 2D images with a preoperative 3D volume, based on ultrasound frame-to-volume registration, can provide a comprehensive guidance view for cardiac interventional surgery. However, cardiac ultrasound images have a low signal-to-noise ratio and only small differences between adjacent frames, and there is a large dimensional gap between the 2D frames and 3D volumes to be registered, which makes real-time, accurate cardiac ultrasound frame-to-volume registration very challenging. This paper introduces a lightweight end-to-end Cardiac Ultrasound frame-to-volume Registration network, termed CU-Reg. Specifically, the proposed model leverages epicardium prompt-guided anatomical clues to reinforce the interaction of 2D sparse and 3D dense features, followed by a voxel-wise local-global aggregation of the enhanced features, thereby boosting cross-dimensional matching for low-quality ultrasound modalities. We further embed an inter-frame discriminative regularization term within the hybrid supervised learning to increase the distinction between adjacent slices of the same ultrasound volume, which stabilizes the registration. Experimental results on the reprocessed CAMUS dataset demonstrate that CU-Reg surpasses existing methods in both registration accuracy and efficiency, meeting the guidance requirements of clinical cardiac interventional surgery.
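To make the setup concrete, the following is a minimal, illustrative PyTorch sketch of a frame-to-volume pose regressor together with an inter-frame discriminative regularization term of the kind described above. All module names, feature sizes, the hinge margin, and the loss weighting are assumptions for illustration only; the epicardium prompt-guided feature interaction and the voxel-wise local-global aggregation of CU-Reg are not reproduced here.

```python
# Illustrative sketch only -- module names, shapes, margin and loss weight
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameToVolumeRegNet(nn.Module):
    """Minimal 2D-frame / 3D-volume registration backbone.

    A 2D encoder embeds the intraoperative frame, a 3D encoder embeds the
    preoperative volume, and an MLP regresses a 6-DoF rigid pose
    (3 rotations + 3 translations) from the concatenated features.
    """

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.enc2d = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.enc3d = nn.Sequential(
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.pose_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, 6),
        )

    def forward(self, frame, volume):
        f2d = self.enc2d(frame)                               # (B, feat_dim)
        f3d = self.enc3d(volume)                              # (B, feat_dim)
        pose = self.pose_head(torch.cat([f2d, f3d], dim=1))   # (B, 6)
        return pose, f2d


def inter_frame_discriminative_reg(curr_feat, adj_feat, margin: float = 1.0):
    """Hinge-style penalty that pushes embeddings of adjacent slices apart.

    curr_feat / adj_feat: (B, D) embeddings of a frame and its neighbouring
    slice from the same ultrasound volume. Distances below `margin` are
    penalised, so adjacent, visually similar slices stay distinguishable
    in feature space.
    """
    dist = F.pairwise_distance(curr_feat, adj_feat)
    return F.relu(margin - dist).mean()


# Usage sketch: total loss = supervised pose loss + weighted regularizer.
model = FrameToVolumeRegNet()
frame = torch.randn(2, 1, 128, 128)
adj_frame = torch.randn(2, 1, 128, 128)
volume = torch.randn(2, 1, 64, 64, 64)
gt_pose = torch.zeros(2, 6)

pose, feat = model(frame, volume)
_, adj_feat = model(adj_frame, volume)
loss = F.smooth_l1_loss(pose, gt_pose) \
    + 0.1 * inter_frame_discriminative_reg(feat, adj_feat)
loss.backward()
```

The hinge form is one simple way to keep embeddings of visually similar adjacent slices separated by at least a margin, which is the stabilizing effect the regularization term aims for.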
Related papers
- Real-time guidewire tracking and segmentation in intraoperative x-ray [52.51797358201872]
We propose a two-stage deep learning framework for real-time guidewire segmentation and tracking.
In the first stage, a Yolov5 detector is trained, using the original X-ray images as well as synthetic ones, to output the bounding boxes of possible target guidewires.
In the second stage, a novel and efficient network is proposed to segment the guidewire in each detected bounding box.
arXiv Detail & Related papers (2024-04-12T20:39:19Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- A Comprehensive 3-D Framework for Automatic Quantification of Late Gadolinium Enhanced Cardiac Magnetic Resonance Images [5.947543669357994]
Late gadolinium enhanced (LGE) cardiac magnetic resonance (CMR) can directly visualize nonviable myocardium with hyperenhanced intensities.
For heart attack patients, analyzing and quantifying their LGE CMR images is crucial for deciding on appropriate therapy.
To achieve accurate quantification, LGE CMR images need to be processed in two steps: segmentation of the myocardium followed by classification of infarcts.
arXiv Detail & Related papers (2022-05-21T11:54:39Z)
- Three-Dimensional Segmentation of the Left Ventricle in Late Gadolinium Enhanced MR Images of Chronic Infarction Combining Long- and Short-Axis Information [5.947543669357994]
We present a comprehensive framework for automatic 3D segmentation of the LV in LGE CMR images.
We propose a novel parametric model of the LV for consistent detection of myocardial edge points.
We have evaluated the proposed framework with 21 sets of real patient and 4 sets of simulated phantom data.
arXiv Detail & Related papers (2022-05-21T09:47:50Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
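For context on the beamforming entry above, here is a minimal conventional delay-and-sum sketch of mapping received ultrasound echoes to the spatial image domain. It is illustrative only and not taken from that paper: the plane-wave transmit model, array geometry, sampling rate, and speed of sound are assumptions, and apodization and sub-sample interpolation are omitted (these are among the stages that learned beamformers replace or refine).

```python
# Minimal conventional delay-and-sum (DAS) beamformer, illustrative only.
import numpy as np


def das_beamform(rf, elem_x, grid_x, grid_z, fs, c=1540.0):
    """Map received echoes to the spatial image domain.

    rf      : (n_elements, n_samples) received RF channel data
    elem_x  : (n_elements,) lateral element positions [m]
    grid_x  : (nx,) lateral pixel coordinates [m]
    grid_z  : (nz,) depth pixel coordinates [m]
    fs      : sampling frequency [Hz]
    c       : speed of sound [m/s]
    Returns an (nz, nx) beamformed image (no envelope detection).
    """
    n_elem, n_samp = rf.shape
    image = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # Two-way travel time: plane-wave transmit reaching depth z,
            # plus the return path from (x, z) to each element.
            t_tx = z / c
            t_rx = np.sqrt((x - elem_x) ** 2 + z ** 2) / c
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return image


# Tiny synthetic example: 32-element linear array, random channel data.
rng = np.random.default_rng(0)
elem_x = np.linspace(-8e-3, 8e-3, 32)
rf = rng.standard_normal((32, 2048))
img = das_beamform(rf, elem_x,
                   grid_x=np.linspace(-8e-3, 8e-3, 64),
                   grid_z=np.linspace(5e-3, 35e-3, 128),
                   fs=40e6)
print(img.shape)  # (128, 64)
```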
- Segmentation of Cardiac Structures via Successive Subspace Learning with Saab Transform from Cine MRI [29.894633364282555]
We propose a machine learning model, successive subspace learning with the subspace approximation with adjusted bias (Saab) transform, for accurate and efficient segmentation from cine MRI.
With 200$\times$ fewer parameters, our framework performed better than state-of-the-art U-Net models on left ventricle, right ventricle, and myocardium segmentation.
arXiv Detail & Related papers (2021-07-22T14:50:48Z)
- End-to-end Ultrasound Frame to Volume Registration [9.738024231762465]
We propose an end-to-end frame-to-volume registration network (FVR-Net) for aligning 2D ultrasound frames with 3D ultrasound volumes.
Our model shows superior efficiency for real-time interventional guidance with highly competitive registration accuracy.
arXiv Detail & Related papers (2021-07-14T01:59:42Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel frustum-ultrasound-based catheter segmentation method.
The proposed method achieved state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
- Enhanced 3D Myocardial Strain Estimation from Multi-View 2D CMR Imaging [0.0]
We propose an enhanced 3D myocardial strain estimation procedure, which combines complementary displacement information from multiple orientations of a single imaging modality (untagged CMR SSFP images).
We register the sets of short-axis, four-chamber and two-chamber views via a 2D non-rigid registration algorithm implemented in commercial software (Segment, Medviso).
We then create a series of interpolating functions for the three directions of motion and use them to deform a tetrahedral mesh representation of a patient-specific left ventricle (see the sketch after this list).
arXiv Detail & Related papers (2020-09-25T22:47:50Z)
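As an illustration of the interpolation-and-deformation step in the strain-estimation entry above, the sketch below builds one interpolating function per direction of motion from scattered displacement samples and applies them to the vertices of a tetrahedral mesh. The sample data, mesh, and RBF settings are assumptions, not taken from that paper, which uses displacements from registered multi-view CMR and a patient-specific left-ventricle mesh.

```python
# Illustrative sketch: per-axis displacement interpolation + mesh deformation.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Scattered 3-D points where displacement was sampled (stand-in for values
# obtained from registered short-axis / long-axis views), one component
# of displacement per axis.
sample_pts = rng.uniform(-1.0, 1.0, size=(200, 3))
sample_disp = 0.05 * rng.standard_normal((200, 3))   # (ux, uy, uz) per point

# One interpolating function per direction of motion.
interp_per_axis = [
    RBFInterpolator(sample_pts, sample_disp[:, k], kernel="thin_plate_spline")
    for k in range(3)
]

# Tetrahedral mesh of the left ventricle: random vertices and connectivity
# as a stand-in for a patient-specific mesh.
vertices = rng.uniform(-1.0, 1.0, size=(500, 3))
tets = rng.integers(0, 500, size=(1000, 4))          # (n_tets, 4) vertex ids

# Deform the mesh by evaluating the interpolants at every vertex.
displacement = np.stack([f(vertices) for f in interp_per_axis], axis=1)
deformed_vertices = vertices + displacement

print(deformed_vertices.shape)  # (500, 3)
```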