Unimodal Cyclic Regularization for Training Multimodal Image
Registration Networks
- URL: http://arxiv.org/abs/2011.06214v1
- Date: Thu, 12 Nov 2020 05:37:30 GMT
- Title: Unimodal Cyclic Regularization for Training Multimodal Image
Registration Networks
- Authors: Zhe Xu, Jiangpeng Yan, Jie Luo, William Wells, Xiu Li, Jayender
Jagadeesan
- Abstract summary: We propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration.
In experiments on abdominal CT-MR registration, the proposed method yields better results than conventional regularization methods.
- Score: 22.94932232413841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The loss function of an unsupervised multimodal image registration framework
has two terms, i.e., a similarity metric and a regularization term. In the
deep learning era, researchers have proposed many approaches to automatically learn
the similarity metric, which has been shown to be effective in improving registration
performance. However, for the regularization term, most existing multimodal
registration approaches still use a hand-crafted formula to impose artificial
properties on the estimated deformation field. In this work, we propose a
unimodal cyclic regularization training pipeline, which learns task-specific
prior knowledge from simpler unimodal registration, to constrain the
deformation field of multimodal registration. In experiments on abdominal
CT-MR registration, the proposed method yields better results than conventional
regularization methods, especially for severely deformed local regions.
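A minimal sketch of how such a learned unimodal prior could enter the training loss, assuming a PyTorch-style setup; the similarity stand-in, the frozen unimodal network supplying `prior_flow`, the diffusion smoothness term, and the weights `lam_prior`/`lam_smooth` are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def smoothness(flow):
    """Diffusion-style regularizer: mean squared spatial gradients of the flow."""
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    return dy.pow(2).mean() + dx.pow(2).mean()

def registration_loss(similarity, multi_flow, prior_flow, lam_prior=1.0, lam_smooth=0.1):
    """
    similarity : scalar tensor, e.g. negative NCC/MI between the warped moving and the fixed image
    multi_flow : (B, 2, H, W) deformation field from the multimodal registration network
    prior_flow : (B, 2, H, W) field from a frozen network trained on simpler unimodal registration,
                 acting here as a learned, task-specific prior on the deformation
    """
    prior_term = F.mse_loss(multi_flow, prior_flow.detach())  # penalize deviation from the unimodal prior
    return similarity + lam_prior * prior_term + lam_smooth * smoothness(multi_flow)

# Toy usage with random tensors, just to show the shapes involved.
multi_flow = torch.randn(1, 2, 64, 64, requires_grad=True)
prior_flow = torch.randn(1, 2, 64, 64)
loss = registration_loss(torch.tensor(0.0), multi_flow, prior_flow)
loss.backward()
```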
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- DISA: DIfferentiable Similarity Approximation for Universal Multimodal Registration [39.44133108254786]
We propose a generic framework for creating expressive cross-modal descriptors.
We achieve this by approximating existing metrics with a dot-product in the feature space of a small convolutional neural network.
Our method is several orders of magnitude faster than local patch-based metrics and can be directly applied in clinical settings.
arXiv Detail & Related papers (2023-07-19T12:12:17Z)
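A rough sketch of the dot-product idea in the DISA summary above: a small convolutional encoder maps each image to a feature map, and the similarity is the mean per-location dot product of the two feature maps. The encoder size, the use of one shared encoder for both modalities, and the idea of fitting its output to a precomputed classical metric are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    """Tiny CNN mapping an image to a feature map (sizes are illustrative, not the paper's)."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def approx_similarity(encoder, fixed, moving):
    """Approximate a multimodal similarity metric as a per-location dot product in feature space."""
    return (encoder(fixed) * encoder(moving)).sum(dim=1).mean()

# The encoder could then be trained, e.g. with an MSE loss, so that approx_similarity
# matches the values of an existing hand-crafted metric on sampled image pairs.
encoder = SmallEncoder()
score = approx_similarity(encoder, torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```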
- Unsupervised 3D registration through optimization-guided cyclical self-training [71.75057371518093]
State-of-the-art deep learning-based registration methods employ three different learning strategies.
We propose a novel self-supervised learning paradigm for unsupervised registration, relying on self-training.
We evaluate the method for abdomen and lung registration, consistently surpassing metric-based supervision and outperforming diverse state-of-the-art competitors.
arXiv Detail & Related papers (2023-06-29T14:54:10Z)
- Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-MR images [74.99415008543276]
Most deep learning-based registration methods assume that the deformation fields are smooth and continuous everywhere in the image domain.
We propose a novel discontinuity-preserving image registration method to tackle this challenge, which ensures globally discontinuous and locally smooth deformation fields.
A co-attention block is proposed in the segmentation component of the network to learn the structural correlations in the input images.
We evaluate our method on the task of intra-subject temporal image registration using large-scale cine cardiac magnetic resonance image sequences.
arXiv Detail & Related papers (2022-11-24T23:45:01Z)
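The co-attention block is only named, not specified, in the joint segmentation and registration entry above; the following is a generic single-head cross-attention sketch of how such a block could relate the features of the two input images. The 1x1 projections, single head, and residual connection are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Generic co-attention: locations in feature map A attend to locations in feature map B."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.scale = dim ** -0.5

    def forward(self, feat_a, feat_b):
        b, c, h, w = feat_a.shape
        q = self.q(feat_a).flatten(2).transpose(1, 2)         # (B, HW, C)
        k = self.k(feat_b).flatten(2)                         # (B, C, HW)
        v = self.v(feat_b).flatten(2).transpose(1, 2)         # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)      # (B, HW, HW) structural correlations
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat_a + out                                   # residual connection (assumed)
```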
- ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration [8.602552627077056]
This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration.
By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment.
Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task.
arXiv Detail & Related papers (2022-06-27T16:27:53Z)
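A schematic of the contrastive objective suggested by the ContraReg summary above: local features from the two modalities are projected into a shared embedding space and an InfoNCE-style loss treats spatially corresponding locations as positives. The projection heads, temperature, number of sampled locations, and the assumption that locations already correspond are simplifications for illustration.

```python
import torch
import torch.nn.functional as F

def patch_infonce(feat_a, feat_b, proj_a, proj_b, num_patches=256, tau=0.07):
    """
    feat_a, feat_b : (B, C, H, W) feature maps from the two modalities
    proj_a, proj_b : small projection heads (e.g. nn.Linear(C, D)) into a shared embedding space
    Corresponding sampled locations are positives; the other sampled locations act as negatives.
    """
    b, c, h, w = feat_a.shape
    idx = torch.randint(0, h * w, (num_patches,), device=feat_a.device)
    za = F.normalize(proj_a(feat_a.flatten(2)[:, :, idx].transpose(1, 2)), dim=-1)  # (B, N, D)
    zb = F.normalize(proj_b(feat_b.flatten(2)[:, :, idx].transpose(1, 2)), dim=-1)  # (B, N, D)
    logits = za @ zb.transpose(1, 2) / tau                                          # (B, N, N)
    targets = torch.arange(num_patches, device=logits.device).expand(b, -1)
    return F.cross_entropy(logits.flatten(0, 1), targets.flatten())

# Toy usage with random feature maps and linear projection heads.
proj_a, proj_b = torch.nn.Linear(32, 16), torch.nn.Linear(32, 16)
loss = patch_infonce(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64), proj_a, proj_b)
```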
- Bayesian intrinsic groupwise registration via explicit hierarchical disentanglement [18.374535632681884]
We propose a general framework which formulates groupwise registration as a procedure of hierarchical Bayesian inference.
We further introduce a novel variational posterior and network architecture that facilitate joint learning of the common structural representation.
Results have demonstrated the efficacy of our framework in realizing multimodal groupwise registration in an end-to-end fashion.
arXiv Detail & Related papers (2022-06-06T06:13:24Z)
- Mutual information neural estimation for unsupervised multi-modal registration of brain images [0.0]
We propose guiding the training of a deep learning-based registration method with MI estimation between an image-pair in an end-to-end trainable network.
Our results show that a small, 2-layer network produces competitive results in both mono- and multimodal registration, with sub-second run-times.
Real-time clinical application will benefit from better visual matching of anatomical structures and fewer registration failures/outliers.
arXiv Detail & Related papers (2022-01-25T13:22:34Z)
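A compact MINE-style estimator of the kind the mutual-information entry above alludes to: a small 2-layer statistics network scores intensity pairs, and the Donsker-Varadhan bound gives a differentiable MI surrogate that can drive registration training. The hidden width and the way samples are drawn from the image pair are assumptions.

```python
import torch
import torch.nn as nn

class StatisticsNet(nn.Module):
    """Small 2-layer MLP over intensity pairs, as in MINE-style MI estimation (width assumed)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, y):
        return self.net(torch.stack([x, y], dim=-1))

def mi_lower_bound(stat_net, x, y):
    """
    Donsker-Varadhan bound: MI(X;Y) >= E[T(x, y)] - log E[exp(T(x, y_shuffled))].
    x, y : (N,) flattened intensity samples from the warped moving and the fixed image.
    Maximizing this bound (e.g. using its negative as the similarity loss) aligns the images.
    """
    joint = stat_net(x, y).mean()
    marginal = stat_net(x, y[torch.randperm(y.numel())]).exp().mean().log()
    return joint - marginal
```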
- A Novel Mix-normalization Method for Generalizable Multi-source Person Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer the supervised model to arbitrary unseen domains due to the model overfitting to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z)
- Cross-Domain Similarity Learning for Face Recognition in Unseen Domains [90.35908506994365]
We introduce a novel cross-domain metric learning loss, which we dub Cross-Domain Triplet (CDT) loss, to improve face recognition in unseen domains.
The CDT loss encourages learning semantically meaningful features by enforcing compact feature clusters of identities from one domain.
Our method does not require a careful hard-pair sample mining and filtering strategy during training.
arXiv Detail & Related papers (2021-03-12T19:48:01Z)
- Constraining Volume Change in Learned Image Registration for Lung CTs [4.37795447716986]
In this paper, we identify important strategies of conventional registration methods for lung registration and successfully develop their deep-learning counterparts.
We employ a Gaussian-pyramid-based multilevel framework that can solve the image registration optimization in a coarse-to-fine fashion.
We show that it achieves state-of-the-art results on the COPDGene dataset compared to the challenge-winning conventional registration method, with a much shorter execution time.
arXiv Detail & Related papers (2020-11-29T14:09:31Z)
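A bare-bones coarse-to-fine loop in the spirit of the Gaussian-pyramid multilevel framework described in the last entry; the number of levels, the average-pooling stand-in for Gaussian smoothing, and the placeholder `register_at_level` routine (a per-level network or optimizer step) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pyramid(img, levels=3):
    """Coarse-to-fine list of images via repeated downsampling (avg-pool as a cheap blur stand-in)."""
    imgs = [img]
    for _ in range(levels - 1):
        imgs.append(F.avg_pool2d(imgs[-1], kernel_size=2))
    return imgs[::-1]  # coarsest level first

def coarse_to_fine(fixed, moving, register_at_level, levels=3):
    """register_at_level(fixed, moving, init_flow) -> flow refines an upsampled coarse estimate."""
    flow = None
    for f_lvl, m_lvl in zip(pyramid(fixed, levels), pyramid(moving, levels)):
        if flow is not None:
            # Upsample the coarse displacement field and rescale it to the new resolution.
            flow = 2.0 * F.interpolate(flow, size=f_lvl.shape[-2:], mode="bilinear", align_corners=False)
        flow = register_at_level(f_lvl, m_lvl, flow)
    return flow
```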