Deep Implicit Optimization for Robust and Flexible Image Registration
- URL: http://arxiv.org/abs/2406.07361v2
- Date: Fri, 18 Oct 2024 14:38:03 GMT
- Title: Deep Implicit Optimization for Robust and Flexible Image Registration
- Authors: Rohit Jena, Pratik Chaudhari, James C. Gee
- Abstract summary: We bridge the gap between classical and learning methods by incorporating optimization as a layer in a deep network.
By implicitly differentiating end-to-end through an iterative optimization solver, our learned features are registration and label-aware.
Our framework shows excellent performance on in-domain datasets, and is agnostic to domain shift.
- Abstract: Deep Learning in Image Registration (DLIR) methods have been tremendously successful in image registration due to their speed and ability to incorporate weak label supervision at training time. However, DLIR methods forego many of the benefits of classical optimization-based methods. The functional nature of deep networks does not guarantee that the predicted transformation is a local minimum of the registration objective, the representation of the transformation (displacement/velocity field/affine) is fixed, and the networks are not robust to domain shift. Our method aims to bridge this gap between classical and learning methods by incorporating optimization as a layer in a deep network. A deep network is trained to predict multi-scale dense feature images that are registered using a black-box iterative optimization solver. This optimal warp is then used to minimize image and label alignment errors. By implicitly differentiating end-to-end through an iterative optimization solver, our learned features are registration- and label-aware, and the warp functions are guaranteed to be local minima of the registration objective in the feature space. Our framework shows excellent performance on in-domain datasets and is agnostic to domain shift such as anisotropy and varying intensity profiles. For the first time, our method allows switching between arbitrary transformation representations (free-form to diffeomorphic) at test time with zero retraining. End-to-end feature learning also facilitates interpretability of features and out-of-the-box promptability using additional label-fidelity terms at inference.
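The key mechanism, differentiating through the solver via the implicit function theorem rather than unrolling its iterations, can be illustrated on a hypothetical scalar inner problem. The objective `f`, its coefficient `a`, and the gradient-descent solver below are illustrative stand-ins, not the paper's actual registration objective or solver:

```python
# Toy sketch of implicit differentiation through an iterative solver.
# Inner objective: f(w, theta) = 0.5 * a * w**2 - theta * w,
# minimized over w; its closed-form solution is w*(theta) = theta / a.
a = 4.0

def grad_w(w, theta):
    """df/dw for the inner objective."""
    return a * w - theta

def solve_inner(theta, lr=0.2, steps=100):
    """Black-box iterative solver (plain gradient descent on w)."""
    w = 0.0
    for _ in range(steps):
        w -= lr * grad_w(w, theta)
    return w

theta = 3.0
w_star = solve_inner(theta)

# Implicit function theorem at the optimum (where df/dw = 0):
#   dw*/dtheta = -(d2f/dw2)^-1 * d2f/(dw dtheta)
# No backpropagation through the solver loop is needed.
dwstar_dtheta = -(-1.0) / a  # = 0.25
```

Because the gradient comes from the optimality condition rather than the unrolled iterates, the solver can be treated as a black box, which is what lets the paper swap transformation representations at test time.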
Related papers
- Improving Instance Optimization in Deformable Image Registration with Gradient Projection [7.6061804149819885]
Deformable image registration is inherently a multi-objective optimization problem.
These conflicting objectives often lead to poor optimization outcomes.
Deep learning methods have recently gained popularity in this domain due to their efficiency in processing large datasets.
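The gradient-projection idea for conflicting objectives can be sketched as follows. This is a PCGrad-style projection on toy vectors, an assumption about the general technique rather than the paper's exact scheme; `g_sim` and `g_reg` are made-up similarity and regularizer gradients:

```python
import numpy as np

def project_conflict(g1, g2):
    """If g1 conflicts with g2 (negative dot product), remove from g1
    its component along g2, so the combined step hurts neither objective."""
    dot = g1 @ g2
    if dot < 0:
        g1 = g1 - (dot / (g2 @ g2)) * g2
    return g1

g_sim = np.array([1.0, 0.0])    # similarity-term gradient (toy)
g_reg = np.array([-1.0, 1.0])   # regularizer gradient, conflicting (toy)
g_proj = project_conflict(g_sim, g_reg)
# g_proj is now orthogonal to g_reg: the conflicting component is gone.
```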
arXiv Detail & Related papers (2024-10-21T08:27:13Z) - LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method of Learning Resampling (termed LeRF) which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
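A minimal 1-D sketch of spatially varying resampling, assuming Gaussian-shaped kernels with a per-sample width. LeRF's actual steerable kernel shapes, and the network that predicts them, are not reproduced here; the constant-width `sigma` stands in for the network's output:

```python
import numpy as np

def gaussian_resample(img, coords, sigma):
    """Resample a 1-D signal at continuous coords, using a Gaussian
    resampling kernel whose width sigma varies per output sample."""
    x = np.arange(img.size)
    out = np.empty(coords.size)
    for i, (c, s) in enumerate(zip(coords, sigma)):
        w = np.exp(-0.5 * ((x - c) / s) ** 2)
        out[i] = (w * img).sum() / w.sum()   # normalized weighted average
    return out

img = np.sin(np.linspace(0, np.pi, 16))
coords = np.linspace(0, 15, 31)       # 2x upsampling grid
sigma = np.full(31, 0.6)              # in LeRF, a network predicts these shapes
up = gaussian_resample(img, coords, sigma)
```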
arXiv Detail & Related papers (2024-07-13T16:09:45Z) - Anatomy-aware and acquisition-agnostic joint registration with SynthMorph [6.017634371712142]
Affine image registration is a cornerstone of medical image analysis.
Deep-learning (DL) methods learn a function that maps an image pair to an output transform.
Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image.
We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image.
arXiv Detail & Related papers (2023-01-26T18:59:33Z) - Non-iterative Coarse-to-fine Registration based on Single-pass Deep Cumulative Learning [11.795108660250843]
We propose a Non-Iterative Coarse-to-finE registration network (NICE-Net) for deformable image registration.
NICE-Net can outperform state-of-the-art iterative deep registration methods while only requiring similar runtime to non-iterative methods.
arXiv Detail & Related papers (2022-06-25T08:34:59Z) - Implicit Optimizer for Diffeomorphic Image Registration [3.1970342304563037]
We propose a rapid and accurate Implicit Optimizer for Diffeomorphic Image Registration (IDIR) which utilizes a Deep Implicit Function as the neural velocity field.
We evaluate our proposed method on two 3D large-scale MR brain scan datasets; the results show that our proposed method provides faster and better registration results than conventional image registration approaches.
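A minimal, untrained sketch of an implicit (coordinate-based) velocity field integrated into a warp. The two-layer MLP with random weights and the Euler integration below are illustrative stand-ins for IDIR's actual architecture and integrator:

```python
import numpy as np

# Implicit velocity field: an MLP maps a 3-D coordinate to a velocity
# vector. Weights here are random (untrained), purely for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 3)), rng.standard_normal(32)
W2, b2 = rng.standard_normal((3, 32)) * 0.1, np.zeros(3)

def velocity(coord):
    h = np.tanh(W1 @ coord + b1)
    return W2 @ h + b2

def flow(coord, steps=8):
    """Integrate the velocity field with Euler steps to obtain a
    diffeomorphic-style warp of a single point."""
    x = coord.copy()
    for _ in range(steps):
        x = x + velocity(x) / steps
    return x

p = np.array([0.1, -0.2, 0.3])
q = flow(p)   # warped position of p under the (random) velocity field
```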
arXiv Detail & Related papers (2022-02-25T05:04:29Z) - GraDIRN: Learning Iterative Gradient Descent-based Energy Minimization for Deformable Image Registration [9.684786294246749]
We present a Gradient Descent-based Image Registration Network (GraDIRN) for learning deformable image registration.
GraDIRN is based on multi-resolution gradient descent energy minimization.
We demonstrate that this approach achieves state-of-the-art registration performance while using fewer learnable parameters.
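The underlying iteration, gradient descent on an SSD-plus-smoothness registration energy, can be sketched in 1-D at a single resolution. GraDIRN itself is multi-resolution and learns parts of this update; the Gaussian images, step size, and regularizer weight below are toy choices:

```python
import numpy as np

def warp(img, disp):
    """Resample a 1-D moving image at displaced coordinates (linear interp)."""
    x = np.arange(img.size) + disp
    return np.interp(x, np.arange(img.size), img)

def energy_grad(disp, moving, fixed, lam):
    """Analytic gradient of SSD similarity plus a smoothness penalty
    (Laplacian of the displacement) w.r.t. the displacement field."""
    warped = warp(moving, disp)
    sim = 2.0 * (warped - fixed) * np.gradient(warped)
    smooth = -2.0 * lam * np.gradient(np.gradient(disp))
    return sim + smooth

xs = np.arange(64)
fixed = np.exp(-0.5 * ((xs - 30) / 4.0) ** 2)   # target Gaussian blob
moving = np.exp(-0.5 * ((xs - 34) / 4.0) ** 2)  # shifted copy to register

disp = np.zeros(64)
for _ in range(100):                 # one resolution level of descent
    disp -= 0.2 * energy_grad(disp, moving, fixed, lam=0.1)

ssd = ((warp(moving, disp) - fixed) ** 2).sum()  # lower than at disp = 0
```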
arXiv Detail & Related papers (2021-12-07T14:48:31Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
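The fixed-point computation at the heart of a deep equilibrium model can be sketched with a single tanh layer and random, untrained weights, scaled small enough that the iteration is a contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 8))   # small norm -> contractive layer
x = rng.standard_normal(8)              # the network input

def layer(z, x):
    """The single nonlinear layer whose fixed point defines the output."""
    return np.tanh(W @ z + x)

# The "infinitely deep" network's output: iterate the layer to its
# fixed point z* = layer(z*, x) instead of stacking many layers.
z = np.zeros(8)
for _ in range(100):
    z = layer(z, x)
```

Joint inference and input optimization then means solving for `z` and optimizing over `x` (e.g. a latent code) within the same fixed-point machinery.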
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z) - Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
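A hypothetical sketch of the weight-generation idea: a tiny meta learner maps an input scale factor to the weights of a 1-D convolution. The polynomial scale embedding and the random, untrained meta-weights are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4)) * 0.1   # meta-learner weights (untrained)

def conv_weights(scale):
    """Meta learner: embed the scale factor, emit a 3-tap conv kernel."""
    feat = np.array([1.0, scale, scale ** 2, scale ** 3])
    return M @ feat

def conv1d(x, k):
    return np.convolve(x, k, mode="valid")

x = np.sin(np.linspace(0, 6, 20))
y_small = conv1d(x, conv_weights(0.5))  # kernel specialized to scale 0.5
y_large = conv1d(x, conv_weights(2.0))  # a different kernel for scale 2.0
```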
arXiv Detail & Related papers (2020-07-13T04:27:25Z) - FDA: Fourier Domain Adaptation for Semantic Segmentation [82.4963423086097]
We describe a simple method for unsupervised domain adaptation, whereby the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other.
We illustrate the method in semantic segmentation, where densely annotated images are plentiful in one domain but difficult to obtain in another.
Our results indicate that even simple procedures can discount nuisance variability in the data that more sophisticated methods struggle to learn away.
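The low-frequency swap can be sketched directly with NumPy FFTs: replace the centered low-frequency amplitude of a source image with the target's, keep the source phase, and invert. The window-size fraction `beta` is analogous to the paper's β parameter; the random images are placeholders for source- and target-domain data:

```python
import numpy as np

def fda_swap(src, trg, beta=0.1):
    """Replace the low-frequency amplitude spectrum of src with that
    of trg, keeping src's phase (FDA-style spectral transfer)."""
    fs = np.fft.fftshift(np.fft.fft2(src))
    ft = np.fft.fftshift(np.fft.fft2(trg))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = src.shape
    b = int(min(h, w) * beta)             # half-width of swapped window
    cy, cx = h // 2, w // 2               # DC sits at the center after fftshift
    amp_s[cy - b:cy + b + 1, cx - b:cx + b + 1] = \
        amp_t[cy - b:cy + b + 1, cx - b:cx + b + 1]
    out = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * pha_s)))
    return np.real(out)

rng = np.random.default_rng(0)
src = rng.random((32, 32))
trg = rng.random((32, 32)) + 2.0   # brighter "target domain"
adapted = fda_swap(src, trg)
# The DC term is swapped too, so the adapted image takes on the
# target's mean intensity while keeping the source's structure (phase).
```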
arXiv Detail & Related papers (2020-04-11T22:20:48Z) - Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.