Dual-Flow Transformation Network for Deformable Image Registration with
Region Consistency Constraint
- URL: http://arxiv.org/abs/2112.02249v1
- Date: Sat, 4 Dec 2021 05:30:44 GMT
- Title: Dual-Flow Transformation Network for Deformable Image Registration with
Region Consistency Constraint
- Authors: Xinke Ma, Yibo Yang, Yong Xia, Dacheng Tao
- Abstract summary: Current deep learning (DL)-based image registration approaches learn the spatial transformation from one image to another by leveraging a convolutional neural network.
We present a novel dual-flow transformation network with region consistency constraint which maximizes the similarity of ROIs within a pair of images.
Experiments on four public 3D MRI datasets show that the proposed method achieves the best registration performance in accuracy and generalization.
- Score: 95.30864269428808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deformable image registration is able to achieve fast and accurate alignment
between a pair of images and thus plays an important role in many medical image
studies. The current deep learning (DL)-based image registration approaches
directly learn the spatial transformation from one image to another by
leveraging a convolutional neural network, requiring ground truth or a
similarity metric. Nevertheless, these methods only use a global similarity
energy function to evaluate the similarity of a pair of images, which ignores
the similarity of regions of interest (ROIs) within the images. Moreover,
DL-based methods often estimate the global spatial transformation of an image
directly, paying no attention to the region-level spatial transformations of
ROIs within the images. In this paper, we present a novel dual-flow
transformation network with a region consistency constraint, which maximizes
the similarity of ROIs within a pair of images and estimates both global and
region spatial transformations
simultaneously. Experiments on four public 3D MRI datasets show that the
proposed method achieves the best registration performance in accuracy and
generalization compared with other state-of-the-art methods.
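The region consistency idea can be made concrete with a short sketch: besides the usual global similarity term between the fixed image and the warped moving image, an ROI-masked similarity term is added for each region of interest, and a smoothness penalty regularizes the displacement field. The PyTorch code below is a minimal illustration under assumed names (warp, ncc, region_consistency_loss, roi_masks, and the weights lam and mu); it sketches the general idea only and is not the authors' implementation, nor does it reproduce the dual-flow architecture itself.
```python
# Hypothetical sketch: region-consistency loss for deformable registration.
# All names (warp, ncc, region_consistency_loss, roi_masks, lam, mu) are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def warp(moving, flow):
    """Warp a volume (N, C, D, H, W) with a dense displacement field
    flow (N, 3, D, H, W), channels (dx, dy, dz) in voxel units."""
    _, _, d, h, w = moving.shape
    dtype, device = moving.dtype, moving.device
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, dtype=dtype, device=device),
        torch.arange(h, dtype=dtype, device=device),
        torch.arange(w, dtype=dtype, device=device),
        indexing="ij",
    )
    base = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0)   # (1, D, H, W, 3)
    coords = base + flow.permute(0, 2, 3, 4, 1)             # sampling positions
    sizes = torch.tensor([w, h, d], dtype=dtype, device=device)
    grid = 2.0 * coords / (sizes - 1) - 1.0                 # normalize to [-1, 1]
    return F.grid_sample(moving, grid, mode="bilinear", align_corners=True)


def ncc(a, b, eps=1e-5):
    """Global normalized cross-correlation between two volumes."""
    dims = (1, 2, 3, 4)
    a = a - a.mean(dim=dims, keepdim=True)
    b = b - b.mean(dim=dims, keepdim=True)
    num = (a * b).sum(dim=dims)
    den = torch.sqrt((a * a).sum(dim=dims) * (b * b).sum(dim=dims) + eps)
    return (num / den).mean()


def region_consistency_loss(fixed, moving, flow, roi_masks, lam=1.0, mu=1.0):
    """Global similarity + ROI-masked similarity + flow smoothness."""
    warped = warp(moving, flow)
    loss = -ncc(fixed, warped)                               # global term
    for mask in roi_masks:                                   # one term per ROI
        loss = loss - lam * ncc(fixed * mask, warped * mask)
    # First-order (diffusion-like) smoothness of the displacement field.
    smooth = (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean() \
           + (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean() \
           + (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).abs().mean()
    return loss + mu * smooth
```
In a training loop, flow would be predicted by the registration network for each fixed/moving pair, and roi_masks could come from segmentation labels available for the fixed image; the negated NCC terms are minimized, so similarity is maximized both globally and per ROI.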
Related papers
- Progressive Retinal Image Registration via Global and Local Deformable Transformations [49.032894312826244]
We propose a hybrid registration framework called HybridRetina.
We use a keypoint detector and a deformation network called GAMorph to estimate the global transformation and local deformable transformation.
Experiments on two widely-used datasets, FIRE and FLoRI21, show that our proposed HybridRetina significantly outperforms some state-of-the-art methods.
arXiv Detail & Related papers (2024-09-02T08:43:50Z) - Cross-domain and Cross-dimension Learning for Image-to-Graph
Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z) - Learning Non-Local Spatial-Angular Correlation for Light Field Image
Super-Resolution [36.69391399634076]
Exploiting spatial-angular correlation is crucial to light field (LF) image super-resolution (SR).
We propose a simple yet effective method to learn the non-local spatial-angular correlation for LF image SR.
Our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line.
arXiv Detail & Related papers (2023-02-16T03:40:40Z) - Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z) - Affine Medical Image Registration with Coarse-to-Fine Vision Transformer [11.4219428942199]
We present a learning-based algorithm, Coarse-to-Fine Vision Transformer (C2FViT), for 3D affine medical image registration.
Our method is superior to the existing CNN-based affine registration methods in terms of registration accuracy, robustness and generalizability.
arXiv Detail & Related papers (2022-03-29T03:18:43Z) - Smoothing the Disentangled Latent Style Space for Unsupervised
Image-to-Image Translation [56.55178339375146]
Image-to-Image (I2I) multi-domain translation models are usually also evaluated using the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
arXiv Detail & Related papers (2021-06-16T17:58:21Z) - LocalTrans: A Multiscale Local Transformer Network for Cross-Resolution
Homography Estimation [52.63874513999119]
Cross-resolution image alignment is a key problem in multiscale gigapixel photography.
Existing deep homography methods neglect the explicit formulation of correspondences between the inputs, which leads to degraded accuracy in cross-resolution cases.
We propose a local transformer network embedded within a multiscale structure to explicitly learn correspondences between the multimodal inputs.
arXiv Detail & Related papers (2021-06-08T02:51:45Z) - MDReg-Net: Multi-resolution diffeomorphic image registration using fully
convolutional networks with deep self-supervision [2.0178765779788486]
We present a diffeomorphic image registration algorithm to learn spatial transformations between pairs of images to be registered using fully convolutional networks (FCNs).
The network is trained to estimate diffeomorphic spatial transformations between pairs of images by maximizing an image-wise similarity metric between fixed and warped moving images (a minimal sketch of the velocity-field integration commonly used to obtain such diffeomorphic transformations appears after this list).
Experimental results for registering high resolution 3D structural brain magnetic resonance (MR) images have demonstrated that image registration networks trained by our method obtain robust, diffeomorphic image registration results within seconds.
arXiv Detail & Related papers (2020-10-04T02:00:37Z) - Fast Symmetric Diffeomorphic Image Registration with Convolutional
Neural Networks [11.4219428942199]
We present a novel, efficient unsupervised symmetric image registration method.
We evaluate our method on 3D image registration with a large scale brain image dataset.
arXiv Detail & Related papers (2020-03-20T22:07:24Z)
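Several entries above (MDReg-Net and the fast symmetric method) rely on diffeomorphic spatial transformations. A common way to obtain such transformations in learning-based registration is to predict a stationary velocity field and integrate it by scaling and squaring. The sketch below illustrates that standard construction under assumed names (integrate_velocity and a warp helper mirroring the one in the earlier sketch); it is not necessarily the exact procedure used in those papers.
```python
# Hypothetical sketch: scaling-and-squaring integration of a stationary
# velocity field, a standard way to obtain (approximately) diffeomorphic
# displacement fields in learning-based registration. Names are illustrative.
import torch
import torch.nn.functional as F


def warp(vol, flow):
    """Trilinearly resample vol (N, C, D, H, W) at positions displaced by
    flow (N, 3, D, H, W), channels (dx, dy, dz) in voxel units."""
    _, _, d, h, w = vol.shape
    dtype, device = vol.dtype, vol.device
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, dtype=dtype, device=device),
        torch.arange(h, dtype=dtype, device=device),
        torch.arange(w, dtype=dtype, device=device),
        indexing="ij",
    )
    base = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0)       # (1, D, H, W, 3)
    coords = base + flow.permute(0, 2, 3, 4, 1)
    sizes = torch.tensor([w, h, d], dtype=dtype, device=device)
    grid = 2.0 * coords / (sizes - 1) - 1.0
    return F.grid_sample(vol, grid, mode="bilinear", align_corners=True)


def integrate_velocity(velocity, steps=7):
    """Scaling and squaring: exponentiate a stationary velocity field
    (N, 3, D, H, W) into a dense displacement field by repeated
    self-composition, disp <- disp + disp o (id + disp)."""
    disp = velocity / (2 ** steps)          # scale down so each step is small
    for _ in range(steps):
        disp = disp + warp(disp, disp)      # compose the field with itself
    return disp
```
The integrated displacement is then used to resample the moving image, and an image-wise similarity metric between the fixed and warped moving images, as in the MDReg-Net summary, drives training.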