Unsupervised Multimodal Image Registration with Adaptative Gradient
Guidance
- URL: http://arxiv.org/abs/2011.06216v1
- Date: Thu, 12 Nov 2020 05:47:20 GMT
- Title: Unsupervised Multimodal Image Registration with Adaptative Gradient
Guidance
- Authors: Zhe Xu, Jiangpeng Yan, Jie Luo, Xiu Li, Jayender Jagadeesan
- Abstract summary: Unsupervised learning-based methods have demonstrated promising performance in terms of accuracy and efficiency in deformable image registration.
The deformation fields estimated by existing methods rely solely on the to-be-registered image pair, making it hard for the networks to be aware of mismatched organ boundaries.
We propose a novel multimodal registration framework that leverages the deformation fields estimated from both the original image pair and their gradient intensity maps, and adaptively fuses them with a gated fusion module.
- Score: 23.461130560414805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal image registration (MIR) is a fundamental procedure in many
image-guided therapies. Recently, unsupervised learning-based methods have
demonstrated promising performance in terms of accuracy and efficiency in deformable
image registration. However, the estimated deformation fields of the existing
methods fully rely on the to-be-registered image pair. It is difficult for the
networks to be aware of the mismatched boundaries, resulting in unsatisfactory
organ boundary alignment. In this paper, we propose a novel multimodal
registration framework, which leverages the deformation fields estimated from
both: (i) the original to-be-registered image pair, and (ii) their corresponding
gradient intensity maps, and adaptively fuses them with the proposed gated
fusion module. With the help of auxiliary gradient-space guidance, the network
can concentrate more on the spatial relationship of the organ boundary.
Experimental results on two clinically acquired CT-MRI datasets demonstrate the
effectiveness of our proposed approach.
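To make the gradient-guidance idea concrete, below is a minimal sketch (not the authors' released code; the tiny networks, layer sizes, and names such as `TinyRegNet` and `GatedFusion` are assumptions) of the data flow described in the abstract: one displacement field is estimated from the raw image pair, another from their gradient-magnitude maps, and a learned per-voxel gate blends the two.

```python
# Illustrative sketch of adaptive gated fusion of two deformation fields:
# one from the raw image pair, one from their gradient-magnitude maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
    """Finite-difference gradient magnitude of a 2-D image batch (B, 1, H, W)."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    dx = F.pad(dx, (0, 1, 0, 0))  # pad back to the original width
    dy = F.pad(dy, (0, 0, 0, 1))  # pad back to the original height
    return torch.sqrt(dx ** 2 + dy ** 2 + 1e-8)


class TinyRegNet(nn.Module):
    """Stand-in for a registration network: maps a moving/fixed pair
    to a dense 2-D displacement field (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))


class GatedFusion(nn.Module):
    """Predicts a per-voxel gate g in [0, 1] and blends two displacement fields."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, flow_img, flow_grad):
        g = self.gate(torch.cat([flow_img, flow_grad], dim=1))
        return g * flow_img + (1.0 - g) * flow_grad


# One forward pass on dummy CT/MR-like tensors.
moving = torch.rand(1, 1, 64, 64)
fixed = torch.rand(1, 1, 64, 64)

flow_img = TinyRegNet()(moving, fixed)                  # image branch
flow_grad = TinyRegNet()(gradient_magnitude(moving),    # gradient branch
                         gradient_magnitude(fixed))
flow = GatedFusion()(flow_img, flow_grad)
print(flow.shape)  # torch.Size([1, 2, 64, 64])
```

In the full framework the two branches would be proper registration networks (e.g., U-Net-style) trained end-to-end with a multimodal similarity loss and a deformation smoothness penalty; the sketch only illustrates how the gated fusion combines the image-driven and gradient-driven fields.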
Related papers
- Weakly supervised alignment and registration of MR-CT for cervical cancer radiotherapy [9.060365057476133]
Cervical cancer is one of the leading causes of death in women.
We propose a preliminary spatial alignment algorithm and a weakly supervised multimodal registration network.
arXiv Detail & Related papers (2024-05-21T15:05:51Z) - Improving Misaligned Multi-modality Image Fusion with One-stage
Progressive Dense Registration [67.23451452670282]
Misalignments between multi-modality images pose challenges in image fusion.
We propose a Cross-modality Multi-scale Progressive Dense Registration scheme.
This scheme accomplishes the coarse-to-fine registration exclusively using a one-stage optimization.
arXiv Detail & Related papers (2023-08-22T03:46:24Z) - Joint segmentation and discontinuity-preserving deformable registration:
Application to cardiac cine-MR images [74.99415008543276]
Most deep learning-based registration methods assume that the deformation fields are smooth and continuous everywhere in the image domain.
We propose a novel discontinuity-preserving image registration method to tackle this challenge, which ensures globally discontinuous and locally smooth deformation fields.
A co-attention block is proposed in the segmentation component of the network to learn the structural correlations in the input images.
We evaluate our method on the task of intra-subject-temporal image registration using large-scale cinematic cardiac magnetic resonance image sequences.
arXiv Detail & Related papers (2022-11-24T23:45:01Z) - Multi-modal unsupervised brain image registration using edge maps [7.49320945341034]
We propose a simple yet effective unsupervised deep learning-based multi-modal image registration approach.
The intuition behind this is that image locations with a strong gradient are assumed to denote a transition of tissues (see the edge-map sketch after this list).
We evaluate our approach in the context of registering multi-modal (T1w to T2w) magnetic resonance (MR) brain images of different subjects using three different loss functions.
arXiv Detail & Related papers (2022-02-09T15:50:14Z) - A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
We demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations in registration experiments on cardiac magnetic resonance (MR) images.
arXiv Detail & Related papers (2021-07-09T13:35:59Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Patch-based field-of-view matching in multi-modal images for
electroporation-based ablations [0.6285581681015912]
Multi-modal imaging sensors are currently involved at different steps of an interventional therapeutic work-flow.
Merging this information relies on a correct spatial alignment of the observed anatomy between the acquired images.
We show that a regional registration approach using voxel patches provides a good structural compromise between the voxel-wise and "global shifts" approaches.
arXiv Detail & Related papers (2020-11-09T11:27:45Z) - Adversarial Uni- and Multi-modal Stream Networks for Multimodal Image
Registration [20.637787406888478]
Deformable image registration between Computed Tomography (CT) images and Magnetic Resonance (MR) imaging is essential for many image-guided therapies.
In this paper, we propose a novel translation-based unsupervised deformable image registration method.
Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.
arXiv Detail & Related papers (2020-07-06T14:44:06Z) - CoMIR: Contrastive Multimodal Image Representation for Registration [4.543268895439618]
We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations).
CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures.
arXiv Detail & Related papers (2020-06-11T10:51:33Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
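As a companion to the gated-fusion sketch above, the following minimal example (an illustration on synthetic data, not code from any paper listed here) shows why gradient or edge maps are attractive for multimodal comparison: two "modalities" of the same structure with inverted contrast disagree in raw intensities but agree once reduced to Sobel edge maps.

```python
# Edge maps as a modality-insensitive representation: raw intensities of the two
# synthetic "modalities" are anti-correlated, but their edge maps still agree.
import numpy as np
from scipy import ndimage


def edge_map(img: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude, normalized to [0, 1]."""
    gx = ndimage.sobel(img, axis=0, mode="reflect")
    gy = ndimage.sobel(img, axis=1, mode="reflect")
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)


def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Global normalized cross-correlation between two maps."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())


# Same geometry (a bright square), inverted intensity mapping.
t1 = np.zeros((64, 64))
t1[20:44, 20:44] = 1.0
t2 = 1.0 - 0.8 * t1  # contrast-inverted version of the same structure

print("intensity NCC:", round(ncc(t1, t2), 3))                       # ~ -1.0
print("edge-map NCC:", round(ncc(edge_map(t1), edge_map(t2)), 3))    # ~  1.0
```

In practice such maps can either be fed to the registration network as an auxiliary input, as in the gradient branch of the sketch above, or used inside the similarity loss that compares the warped and fixed images.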
This list is automatically generated from the titles and abstracts of the papers on this site.