Deep Regression 2D-3D Ultrasound Registration for Liver Motion Correction in Focal Tumor Thermal Ablation
- URL: http://arxiv.org/abs/2410.02579v1
- Date: Thu, 3 Oct 2024 15:24:45 GMT
- Title: Deep Regression 2D-3D Ultrasound Registration for Liver Motion Correction in Focal Tumor Thermal Ablation
- Authors: Shuwei Xing, Derek W. Cool, David Tessier, Elvis C. S. Chen, Terry M. Peters, Aaron Fenster
- Abstract summary: Liver tumor ablation procedures require accurate placement of the needle applicator at the tumor centroid.
Image registration techniques can aid in interpreting anatomical details and identifying tumors, but their clinical application has been hindered by the tradeoff between alignment accuracy and runtime performance.
We propose a 2D-3D US registration approach to enable intra-procedural alignment that mitigates errors caused by liver motion.
- Score: 5.585625844344932
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Liver tumor ablation procedures require accurate placement of the needle applicator at the tumor centroid. The lower-cost and real-time nature of ultrasound (US) has advantages over computed tomography (CT) for applicator guidance; however, in some patients, liver tumors may be occult on US and tumor mimics can make lesion identification challenging. Image registration techniques can aid in interpreting anatomical details and identifying tumors, but their clinical application has been hindered by the tradeoff between alignment accuracy and runtime performance, particularly when compensating for liver motion due to patient breathing or movement. Therefore, we propose a 2D-3D US registration approach to enable intra-procedural alignment that mitigates errors caused by liver motion. Specifically, our approach can correlate imbalanced 2D and 3D US image features and use continuous 6D rotation representations to enhance the model's training stability. The dataset was divided into 2388, 196 and 193 image pairs for training, validation and testing, respectively. Our approach achieved a mean Euclidean distance error of 2.28 mm $\pm$ 1.81 mm and a mean geodesic angular error of 2.99$^{\circ}$ $\pm$ 1.95$^{\circ}$, with a runtime of 0.22 seconds per 2D-3D US image pair. These results demonstrate that our approach can achieve accurate alignment and clinically acceptable runtime, indicating potential for clinical translation.
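The "continuous 6D rotation representation" mentioned in the abstract refers to the parameterization of Zhou et al. (CVPR 2019), in which a network regresses two 3D vectors that are orthogonalized into a rotation matrix; the "geodesic angular error" is the rotation angle of the relative rotation between prediction and ground truth. The paper's own implementation is not shown here; a minimal sketch of both, under the assumption that the authors use the standard Gram-Schmidt variant, might look like:

```python
import math


def rotation_from_6d(d6):
    """Map a 6D representation (two 3D vectors) to a 3x3 rotation matrix
    via Gram-Schmidt orthogonalization (Zhou et al., CVPR 2019).
    Returns the matrix as a list of three orthonormal rows."""
    a1, a2 = d6[:3], d6[3:]

    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    # b1: normalize the first vector.
    n1 = norm(a1)
    b1 = [x / n1 for x in a1]
    # b2: remove the b1 component from a2, then normalize.
    proj = sum(x * y for x, y in zip(b1, a2))
    b2 = [x - proj * e for x, e in zip(a2, b1)]
    n2 = norm(b2)
    b2 = [x / n2 for x in b2]
    # b3: cross product completes a right-handed orthonormal basis.
    b3 = [b1[1] * b2[2] - b1[2] * b2[1],
          b1[2] * b2[0] - b1[0] * b2[2],
          b1[0] * b2[1] - b1[1] * b2[0]]
    return [b1, b2, b3]


def geodesic_angle_deg(R1, R2):
    """Geodesic angular error: rotation angle of R1^T R2, in degrees.
    trace(R1^T R2) equals the Frobenius inner product of R1 and R2."""
    trace = sum(R1[i][j] * R2[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    return math.degrees(math.acos(c))
```

The representation is continuous over SO(3), unlike Euler angles or quaternions, which is what makes regression training more stable; any (non-degenerate) 6D input yields a valid rotation matrix.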
Related papers
- Epicardium Prompt-guided Real-time Cardiac Ultrasound Frame-to-volume Registration [50.602074919305636]
This paper introduces a lightweight end-to-end Cardiac Ultrasound frame-to-volume Registration network, termed CU-Reg.
We use epicardium prompt-guided anatomical clues to reinforce the interaction of 2D sparse and 3D dense features, followed by a voxel-wise local-global aggregation of enhanced features.
arXiv Detail & Related papers (2024-06-20T17:47:30Z) - Accurate Patient Alignment without Unnecessary Imaging Dose via Synthesizing Patient-specific 3D CT Images from 2D kV Images [10.538839084727975]
Tumor visibility is constrained by the projection of the patient's anatomy onto a 2D plane.
In treatment rooms with 3D onboard imaging (3D-OBI) such as cone-beam CT (CBCT), the CBCT field of view (FOV) is limited and the imaging dose is unnecessarily high.
We propose a dual-models framework built with hierarchical ViT blocks to reconstruct 3D CT from kV images obtained at the treatment position.
arXiv Detail & Related papers (2024-04-01T19:55:03Z) - An objective comparison of methods for augmented reality in laparoscopic liver resection by preoperative-to-intraoperative image fusion [33.12510773034339]
Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them on top of a laparoscopic image.
Most of the algorithms make use of anatomical landmarks to guide registration.
These landmarks include the liver's inferior ridge, the falciform ligament, and the occluding contours.
We present the Preoperative-to-Intraoperative Laparoscopic Fusion Challenge (P2ILF), which investigates the possibilities of detecting these landmarks automatically and using them in registration.
arXiv Detail & Related papers (2024-01-28T20:30:14Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT reconstruction, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I with sparse views degrades, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z) - The Impact of Loss Functions and Scene Representations for 3D/2D Registration on Single-view Fluoroscopic X-ray Pose Estimation [1.758213853394712]
We first develop a differentiable projection rendering framework for the efficient computation of Digitally Reconstructed Radiographs (DRRs)
We then perform pose estimation by iterative descent using various candidate loss functions, that quantify the image discrepancy of the synthesized DRR with respect to the ground-truth fluoroscopic X-ray image.
Using the Mutual Information loss, a comprehensive evaluation of pose estimation performed on a tomographic X-ray dataset of 50 patients' skulls shows that utilizing either discretized (CBCT) or neural (NeTT/mNeRF) scene representations in DiffProj leads to
arXiv Detail & Related papers (2023-08-01T01:12:29Z) - Using Spatio-Temporal Dual-Stream Network with Self-Supervised Learning for Lung Tumor Classification on Radial Probe Endobronchial Ultrasound Video [0.0]
During the biopsy process of lung cancer, physicians use real-time ultrasound images to find suitable lesion locations for sampling.
Previous studies have employed 2D convolutional neural networks to effectively differentiate between benign and malignant lung lesions.
This study designs an automatic diagnosis system based on a 3D neural network.
arXiv Detail & Related papers (2023-05-04T10:39:37Z) - Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z) - A Deep Learning Localization Method for Measuring Abdominal Muscle Dimensions in Ultrasound Images [2.309018557701645]
Two-Dimensional (2D) Ultrasound (US) images can be used to measure abdominal muscle dimensions for the diagnosis and creation of customized treatment plans for patients with Low Back Pain (LBP).
Due to high variability, skilled professionals with specialized training are required to take measurements to avoid low intra-observer reliability.
In this paper, we use a Deep Learning (DL) approach to automate the measurement of the abdominal muscle thickness in 2D US images.
arXiv Detail & Related papers (2021-09-30T08:36:50Z) - A Benchmark for Studying Diabetic Retinopathy: Segmentation, Grading, and Transferability [76.64661091980531]
People with diabetes are at risk of developing diabetic retinopathy (DR).
Computer-aided DR diagnosis is a promising tool for early detection of DR and severity grading.
This dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists.
arXiv Detail & Related papers (2020-08-22T07:48:04Z) - Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.