F3RNet: Full-Resolution Residual Registration Network for Deformable
Image Registration
- URL: http://arxiv.org/abs/2009.07151v3
- Date: Mon, 7 Dec 2020 03:08:38 GMT
- Title: F3RNet: Full-Resolution Residual Registration Network for Deformable
Image Registration
- Authors: Zhe Xu, Jie Luo, Jiangpeng Yan, Xiu Li, Jagadeesan Jayender
- Abstract summary: Deformable image registration (DIR) is essential for many image-guided therapies.
We propose a novel unsupervised registration network, namely the Full-Resolution Residual Registration Network (F3RNet)
One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration.
The other stream learns the deep multi-scale residual representations to obtain robust recognition.
- Score: 21.99118499516863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deformable image registration (DIR) is essential for many image-guided
therapies. Recently, deep learning approaches have gained substantial
popularity and success in DIR. Most deep learning approaches use the so-called
mono-stream "high-to-low, low-to-high" network structure, and can achieve
satisfactory overall registration results. However, accurate alignments for
some severely deformed local regions, which are crucial for pinpointing
surgical targets, are often overlooked. Consequently, these approaches are not
sensitive to some hard-to-align regions, e.g., intra-patient registration of
deformed liver lobes. In this paper, we propose a novel unsupervised
registration network, namely the Full-Resolution Residual Registration Network
(F3RNet), for deformable registration of severely deformed organs. The proposed
method combines two parallel processing streams in a residual learning fashion.
One stream takes advantage of the full-resolution information that facilitates
accurate voxel-level registration. The other stream learns the deep multi-scale
residual representations to obtain robust recognition. We also factorize the 3D
convolution to reduce the training parameters and enhance network efficiency.
We validate the proposed method on a clinically acquired intra-patient
abdominal CT-MRI dataset and a public inspiratory and expiratory thorax CT
dataset. Experiments on both multimodal and unimodal registration demonstrate
promising results compared to state-of-the-art approaches.
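The parameter savings from factorizing the 3D convolution can be illustrated with a quick count (a hypothetical sketch; the kernel size, channel counts, and factorization into three orthogonal 1D passes are assumptions for illustration, not the authors' exact configuration):

```python
# Hypothetical parameter count (bias terms ignored; k=3 and 32 channels
# are assumed values, not taken from the paper).

def conv3d_params(in_ch, out_ch, k):
    """Weights of one dense k x k x k 3D convolution."""
    return in_ch * out_ch * k ** 3

def factorized_params(in_ch, out_ch, k):
    """Weights of chained (k,1,1) -> (1,k,1) -> (1,1,k) convolutions."""
    return in_ch * out_ch * k + 2 * out_ch * out_ch * k

full = conv3d_params(32, 32, 3)          # 32 * 32 * 27 = 27648 weights
factored = factorized_params(32, 32, 3)  # 3 * (32 * 32 * 3) = 9216 weights
print(full, factored)                    # factorized form uses 3x fewer weights here
```

With equal input and output channels, the factorized form scales linearly in the kernel size k rather than cubically, which is where the training-parameter reduction comes from.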
Related papers
- Recurrent Inference Machine for Medical Image Registration [11.351457718409788]
We propose a novel image registration method, termed Recurrent Inference Image Registration (RIIR) network.
RIIR is formulated as a meta-learning solver to the registration problem in an iterative manner.
Our experiments showed that RIIR outperformed a range of deep learning-based methods, even with only 5% of the training data.
arXiv Detail & Related papers (2024-06-19T10:06:35Z)
- GSMorph: Gradient Surgery for cine-MRI Cardiac Deformable Registration [62.41725951450803]
Learning-based deformable registration relies on weighted objective functions trading off registration accuracy and smoothness of the field.
We construct a registration model based on a gradient surgery mechanism, named GSMorph, to achieve a hyperparameter-free balance between multiple losses.
Our method is model-agnostic and can be merged into any deep registration network without introducing extra parameters or slowing down inference.
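A minimal sketch of one gradient-surgery step, in the PCGrad style (an assumption made for illustration; GSMorph's exact projection rule may differ): when the smoothness gradient conflicts with the accuracy gradient, the conflicting component is projected away so the two losses need no manual weighting:

```python
# Illustrative gradient surgery: if two loss gradients conflict (negative
# dot product), remove from one its component along the other.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_conflict(g_smooth, g_acc):
    """Project g_smooth onto the orthogonal complement of g_acc on conflict."""
    d = dot(g_smooth, g_acc)
    if d >= 0:                        # no conflict: leave gradient untouched
        return list(g_smooth)
    scale = d / dot(g_acc, g_acc)     # projection coefficient (negative here)
    return [gs - scale * ga for gs, ga in zip(g_smooth, g_acc)]

# Conflicting gradients: the component opposing g_acc is removed.
print(project_conflict([-1.0, 1.0], [1.0, 0.0]))  # -> [0.0, 1.0]
```

Because the surgery operates on gradients rather than on the loss weights, it adds no trainable parameters, which is consistent with the model-agnostic claim above.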
arXiv Detail & Related papers (2023-06-26T13:32:09Z)
- Recurrence With Correlation Network for Medical Image Registration [66.63200823918429]
We present Recurrence with Correlation Network (RWCNet), a medical image registration network with multi-scale features and a cost volume layer.
We demonstrate that these architectural features improve medical image registration accuracy in two image registration datasets.
arXiv Detail & Related papers (2023-02-05T02:41:46Z)
- Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-MR images [74.99415008543276]
Most deep learning-based registration methods assume that the deformation fields are smooth and continuous everywhere in the image domain.
We propose a novel discontinuity-preserving image registration method to tackle this challenge, which ensures globally discontinuous and locally smooth deformation fields.
A co-attention block is proposed in the segmentation component of the network to learn the structural correlations in the input images.
We evaluate our method on the task of intra-subject temporal image registration using large-scale cine cardiac magnetic resonance image sequences.
arXiv Detail & Related papers (2022-11-24T23:45:01Z)
- Affine Medical Image Registration with Coarse-to-Fine Vision Transformer [11.4219428942199]
We present a learning-based algorithm, Coarse-to-Fine Vision Transformer (C2FViT), for 3D affine medical image registration.
Our method is superior to existing CNN-based affine registration methods in terms of registration accuracy, robustness and generalizability.
arXiv Detail & Related papers (2022-03-29T03:18:43Z)
- Mutual information neural estimation for unsupervised multi-modal registration of brain images [0.0]
We propose guiding the training of a deep learning-based registration method with MI estimation between an image pair in an end-to-end trainable network.
Our results show that a small, 2-layer network produces competitive results in both mono- and multimodal registration, with sub-second run-times.
Real-time clinical application will benefit from better visual matching of anatomical structures and fewer registration failures and outliers.
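For background, the quantity such a network estimates is mutual information between the intensity distributions of the image pair; for a tiny discrete joint distribution it can be computed exactly (a hypothetical illustration of the objective, not the paper's neural estimator):

```python
# Exact mutual information I(X;Y) for a small discrete joint p(x, y),
# the quantity a neural MI estimator approximates from samples.
import math

def mutual_information(p_xy):
    """I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), natural log."""
    px = [sum(row) for row in p_xy]
    py = [sum(col) for col in zip(*p_xy)]
    mi = 0.0
    for i, row in enumerate(p_xy):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log(p / (px[i] * py[j]))
    return mi

# Perfectly aligned intensities -> maximal dependence between the images:
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # log 2, about 0.693
```

Maximizing this dependence between the warped moving image and the fixed image is what drives the registration, which is why MI works for both mono- and multimodal pairs.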
arXiv Detail & Related papers (2022-01-25T13:22:34Z)
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
We demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations in registration experiments on cardiac magnetic resonance (MR) images.
arXiv Detail & Related papers (2021-07-09T13:35:59Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Unsupervised Multimodal Image Registration with Adaptative Gradient Guidance [23.461130560414805]
Unsupervised learning-based methods have demonstrated promising performance over accuracy and efficiency in deformable image registration.
The deformation fields estimated by existing methods depend entirely on the image pair to be registered.
We propose a novel multimodal registration framework, which leverages the deformation fields estimated from both.
arXiv Detail & Related papers (2020-11-12T05:47:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences arising from its use.