Learning the Update Operator for 2D/3D Image Registration
- URL: http://arxiv.org/abs/2102.02861v1
- Date: Thu, 4 Feb 2021 19:52:59 GMT
- Title: Learning the Update Operator for 2D/3D Image Registration
- Authors: Srikrishna Jaganathan, Jian Wang, Anja Borsdorf, Andreas Maier
- Abstract summary: The preoperative volume can be overlaid on the live 2D images using 2D/3D image registration.
Deep learning-based 2D/3D registration methods have shown promising results by improving computational efficiency and robustness.
We show an improvement of 1.8 times in terms of registration accuracy for the update step prediction compared to learning without the known operator.
- Score: 10.720342813316531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image guidance in minimally invasive interventions is usually provided using
live 2D X-ray imaging. To enhance the information available during the
intervention, the preoperative volume can be overlaid over the 2D images using
2D/3D image registration. Recently, deep learning-based 2D/3D registration
methods have shown promising results by improving computational efficiency and
robustness. However, there is still a gap in terms of registration accuracy
compared to traditional optimization-based methods. We aim to address this gap
by incorporating traditional methods in deep neural networks using known
operator learning. As an initial step in this direction, we propose to learn
the update step of an iterative 2D/3D registration framework based on the
Point-to-Plane Correspondence model. We embed the Point-to-Plane Correspondence
model as a known operator in our deep neural network and learn the update step
for the iterative registration. We show an improvement of 1.8 times in terms of
registration accuracy for the update step prediction compared to learning
without the known operator.
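The point-to-plane correspondence (PPC) model can be viewed as a hand-derived least-squares solve embedded inside the network as a known operator. The following is a rough, illustrative sketch of such a weighted PPC step, not the paper's implementation: the function name, the simplified residual, and the use of an unweighted rigid-motion parameterization are all assumptions.

```python
import numpy as np

def ppc_update(points, normals, offsets, weights):
    """Known-operator step (illustrative): solve a weighted
    point-to-plane least-squares system for a small rigid motion
    delta = (omega, t), where omega is a rotation vector and t a
    translation. In a known-operator network, a learned module would
    supply the per-correspondence weights while this solve stays fixed."""
    # Jacobian of n . (p + omega x p + t) w.r.t. (omega, t):
    # each row is [ (p x n)^T , n^T ]
    J = np.hstack([np.cross(points, normals), normals])   # (N, 6)
    # residual: target plane offset minus current point-to-plane value
    r = offsets - np.einsum('ij,ij->i', points, normals)  # (N,)
    W = weights[:, None]
    delta, *_ = np.linalg.lstsq(W * J, weights * r, rcond=None)
    return delta  # 6-vector motion update (omega, t)
```

In an iterative registration loop, this update would be applied repeatedly, with the network refining the correspondences or weights between iterations.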
Related papers
- Interpretable 2D Vision Models for 3D Medical Images [47.75089895500738]
This study proposes a simple approach of adapting 2D networks with an intermediate feature representation for processing 3D images.
On all 3D MedMNIST benchmark datasets and on two real-world datasets consisting of several hundred high-resolution CT or MRI scans, we show that our approach performs on par with existing methods.
arXiv Detail & Related papers (2023-07-13T08:27:09Z) - 3D Point Cloud Pre-training with Knowledge Distillation from 2D Images [128.40422211090078]
We propose a knowledge distillation method for 3D point cloud pre-trained models to acquire knowledge directly from the 2D representation learning model.
Specifically, we introduce a cross-attention mechanism to extract concept features from the 3D point cloud and compare them with the semantic information from 2D images.
In this scheme, the point cloud pre-trained models learn directly from rich information contained in 2D teacher models.
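The cross-attention distillation idea can be sketched in a few lines: 3D point features attend over the 2D teacher's features, and a loss pulls the student toward the attended teacher representation. This is a hypothetical sketch, with all names, shapes, and the MSE loss chosen for illustration rather than taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_distill(point_feats, image_feats):
    """Illustrative distillation step: point features (N, D) act as
    queries over 2D teacher features (M, D); the loss pulls the
    student's features toward the attended teacher features."""
    d = point_feats.shape[1]
    attn = softmax(point_feats @ image_feats.T / np.sqrt(d))  # (N, M)
    attended = attn @ image_feats                             # (N, D)
    loss = np.mean((point_feats - attended) ** 2)
    return attended, loss
```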
arXiv Detail & Related papers (2022-12-17T23:21:04Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a super-resolution image by stitching the slices of the 3D image side by side.
Our model attains results equal, if not superior, to those of 3D networks while utilizing only 2D counterparts, reducing model complexity by around threefold.
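The stitching idea can be illustrated directly: lay the slices of a volume out on a 2D grid so an ordinary 2D network can consume the result. This is a minimal sketch of the general idea; the grid layout and zero-padding here are assumptions, not the paper's exact scheme.

```python
import numpy as np

def to_super_image(volume):
    """Stitch the D slices of a (D, H, W) volume side by side into a
    single 2D grid image. Slices are placed row-major on a roughly
    square grid; unused cells are zero-padded."""
    d, h, w = volume.shape
    cols = int(np.ceil(np.sqrt(d)))
    rows = int(np.ceil(d / cols))
    grid = np.zeros((rows * h, cols * w), dtype=volume.dtype)
    for i in range(d):
        r, c = divmod(i, cols)
        grid[r * h:(r + 1) * h, c * w:(c + 1) * w] = volume[i]
    return grid
```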
arXiv Detail & Related papers (2022-05-05T09:59:03Z) - Data Efficient 3D Learner via Knowledge Transferred from 2D Model [30.077342050473515]
We deal with the data scarcity challenge of 3D tasks by transferring knowledge from strong 2D models via RGB-D images.
We utilize a strong and well-trained semantic segmentation model for 2D images to augment RGB-D images with pseudo-label.
Our method already outperforms existing state-of-the-art methods tailored for 3D label efficiency.
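A plausible sketch of such pseudo-label augmentation (the back-projection, intrinsics, and names are assumptions, not the paper's code): run the 2D segmentation model on each frame, then attach its per-pixel predictions to the back-projected RGB-D points.

```python
import numpy as np

def pseudo_label_pointcloud(depth, seg_logits, fx, fy, cx, cy):
    """Illustrative: back-project an (H, W) depth map to 3D points
    using pinhole intrinsics and attach per-pixel pseudo-labels from
    a 2D segmentation model's (H, W, C) logits."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    labels = seg_logits.argmax(axis=-1).reshape(-1)  # pseudo-labels
    valid = points[:, 2] > 0                         # drop invalid depth
    return points[valid], labels[valid]
```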
arXiv Detail & Related papers (2022-03-16T09:14:44Z) - Deep Iterative 2D/3D Registration [9.813316061451392]
We propose a novel Deep Learning driven 2D/3D registration framework that can be used end-to-end for iterative registration tasks.
We accomplish this by learning the update step of the 2D/3D registration framework using Point-to-Plane Correspondences.
Our proposed method achieves an average runtime of around 8 s, a mean re-projection distance error of 0.60 $\pm$ 0.40 mm with a success ratio of 97 percent and a capture range of 60 mm.
arXiv Detail & Related papers (2021-07-21T10:51:29Z) - 3D Registration for Self-Occluded Objects in Context [66.41922513553367]
We introduce the first deep learning framework capable of effectively handling this scenario.
Our method consists of an instance segmentation module followed by a pose estimation one.
It allows us to perform 3D registration in a one-shot manner, without requiring an expensive iterative procedure.
arXiv Detail & Related papers (2020-11-23T08:05:28Z) - Bridging the Reality Gap for Pose Estimation Networks using Sensor-Based
Domain Randomization [1.4290119665435117]
Methods trained on synthetic data use 2D images, as domain randomization in 2D is more developed.
Our method integrates the 3D data into the network to increase the accuracy of the pose estimation.
Experiments on three large pose estimation benchmarks show that the presented method outperforms previous methods trained on synthetic data.
arXiv Detail & Related papers (2020-11-17T09:12:11Z) - Human Body Model Fitting by Learned Gradient Descent [48.79414884222403]
We propose a novel algorithm for the fitting of 3D human shape to images.
We show that this algorithm is fast (avg. 120 ms convergence), robust across datasets, and achieves state-of-the-art results on public evaluation datasets.
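Learned gradient descent replaces a hand-tuned update rule with a network that maps the current gradient (and parameters) to the next update. A minimal sketch of the loop; `update_net` here is a hypothetical stand-in for the trained network, not the paper's model.

```python
import numpy as np

def learned_gradient_descent(theta, grad_fn, update_net, steps=10):
    """Illustrative fitting loop: at each step, a learned module maps
    (parameters, gradient) to a parameter update, instead of using a
    fixed step size."""
    for _ in range(steps):
        g = grad_fn(theta)
        theta = theta + update_net(theta, g)
    return theta
```

With a well-trained `update_net`, such a loop can converge in far fewer iterations than plain gradient descent, which is what makes the reported fast convergence plausible.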
arXiv Detail & Related papers (2020-08-19T14:26:47Z) - 2.75D: Boosting learning by representing 3D Medical imaging to 2D
features for small data [54.223614679807994]
3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks.
Applying transfer learning on 3D CNN is challenging due to a lack of publicly available pre-trained 3D models.
In this work, we proposed a novel 2D strategical representation of volumetric data, namely 2.75D.
As a result, 2D CNN networks can also be used to learn volumetric information.
arXiv Detail & Related papers (2020-02-11T08:24:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.