HyperMorph: Amortized Hyperparameter Learning for Image Registration
- URL: http://arxiv.org/abs/2101.01035v1
- Date: Mon, 4 Jan 2021 15:39:16 GMT
- Title: HyperMorph: Amortized Hyperparameter Learning for Image Registration
- Authors: Andrew Hoopes, Malte Hoffmann, Bruce Fischl, John Guttag, Adrian V.
Dalca
- Abstract summary: HyperMorph is a learning-based strategy for deformable image registration.
We show that it can be used to optimize multiple hyperparameters considerably faster than existing search strategies.
- Score: 8.13669868327082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present HyperMorph, a learning-based strategy for deformable image
registration that removes the need to tune important registration
hyperparameters during training. Classical registration methods solve an
optimization problem to find a set of spatial correspondences between two
images, while learning-based methods leverage a training dataset to learn a
function that generates these correspondences. The quality of the results for
both types of techniques depends greatly on the choice of hyperparameters.
Unfortunately, hyperparameter tuning is time-consuming and typically involves
training many separate models with various hyperparameter values, potentially
leading to suboptimal results. To address this inefficiency, we introduce
amortized hyperparameter learning for image registration, a novel strategy to
learn the effects of hyperparameters on deformation fields. The proposed
framework learns a hypernetwork that takes in an input hyperparameter and
modulates a registration network to produce the optimal deformation field for
that hyperparameter value. In effect, this strategy trains a single, rich model
that enables rapid, fine-grained discovery of hyperparameter values from a
continuous interval at test-time. We demonstrate that this approach can be used
to optimize multiple hyperparameters considerably faster than existing search
strategies, leading to a reduced computational and human burden and increased
flexibility. We also show that this has several important benefits, including
increased robustness to initialization and the ability to rapidly identify
optimal hyperparameter values specific to a registration task, dataset, or even
a single anatomical region - all without retraining the HyperMorph model. Our
code is publicly available at http://voxelmorph.mit.edu.
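To make the mechanism described above concrete, here is a minimal sketch of the idea in PyTorch. It is not the authors' released implementation (their code is at http://voxelmorph.mit.edu): the hypernetwork here modulates features with a simple per-channel scaling instead of predicting the full set of registration-network weights, and all names, layer sizes, and the toy training loop are illustrative assumptions. The two points it shows are that the hyperparameter lambda is an input to the model, and that the same lambda weights the similarity and smoothness terms of the loss, so a single training run covers a continuous range of lambda values.

# Minimal sketch of the HyperMorph idea, not the released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperRegNet(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        # Hypernetwork: scalar hyperparameter -> per-channel modulation weights.
        self.hyper = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, feat))
        # Toy registration backbone: predicts a 2-channel (dx, dy) deformation field.
        self.enc = nn.Conv2d(2, feat, 3, padding=1)   # moving + fixed images stacked
        self.dec = nn.Conv2d(feat, 2, 3, padding=1)

    def forward(self, moving, fixed, lam):
        h = F.relu(self.enc(torch.cat([moving, fixed], dim=1)))
        # Simplified modulation: scale backbone features by the hypernetwork output.
        scale = self.hyper(lam).view(lam.shape[0], -1, 1, 1)
        return self.dec(h * scale)                     # deformation field

def smoothness(field):
    # Gradient-magnitude regularizer on the deformation field.
    dx = field[..., 1:, :] - field[..., :-1, :]
    dy = field[..., :, 1:] - field[..., :, :-1]
    return (dx ** 2).mean() + (dy ** 2).mean()

def warp(moving, field):
    # Warp the moving image with the predicted field via grid_sample.
    b, _, hgt, wdt = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, hgt),
                            torch.linspace(-1, 1, wdt), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).repeat(b, 1, 1, 1)
    return F.grid_sample(moving, grid + field.permute(0, 2, 3, 1), align_corners=True)

model = HyperRegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):                                # toy training loop on random images
    moving, fixed = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
    lam = torch.full((4, 1), torch.rand(()).item())    # sample lambda in [0, 1) per batch
    field = model(moving, fixed, lam)
    sim = F.mse_loss(warp(moving, field), fixed)
    w = lam[0, 0]
    loss = (1 - w) * sim + w * smoothness(field)       # lambda-weighted objective
    opt.zero_grad(); loss.backward(); opt.step()

At test time, one would feed different lambda values to the same trained model and keep whichever maximizes a validation metric such as anatomical overlap, which is the rapid, fine-grained hyperparameter discovery the abstract refers to.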
Related papers
- Efficient Hyperparameter Importance Assessment for CNNs [1.7778609937758323]
This paper aims to quantify the importance weights of some hyperparameters in Convolutional Neural Networks (CNNs) with an algorithm called N-RReliefF.
We conduct an extensive study by training over ten thousand CNN models across ten popular image classification datasets.
arXiv Detail & Related papers (2024-10-11T15:47:46Z)
- Optimization Hyper-parameter Laws for Large Language Models [56.322914260197734]
We present Opt-Laws, a framework that captures the relationship between hyper-parameters and training outcomes.
Our validation across diverse model sizes and data scales demonstrates Opt-Laws' ability to accurately predict training loss.
This approach significantly reduces computational costs while enhancing overall model performance.
arXiv Detail & Related papers (2024-09-07T09:37:19Z)
- HyperPredict: Estimating Hyperparameter Effects for Instance-Specific Regularization in Deformable Image Registration [2.2252684361733293]
Methods for medical image registration infer geometric transformations that align pairs/groups of images by maximising an image similarity metric.
Regularization terms are essential to obtain meaningful registration results (a generic form of this objective is sketched after this entry).
We propose a method for evaluating the influence of hyperparameters and subsequently selecting an optimal value for given image pairs.
arXiv Detail & Related papers (2024-03-04T14:17:30Z)
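For context on the HyperPredict entry above, the registration objective it refers to typically takes the generic hyperparameter-weighted form below, where I_f and I_m denote the fixed and moving images, \phi the spatial transformation, and \lambda the regularization weight; this notation is a common convention assumed here, not taken from that paper:

\hat{\phi} = \arg\min_{\phi} \; \mathcal{L}_{\mathrm{sim}}(I_f, I_m \circ \phi) + \lambda \, \mathcal{L}_{\mathrm{reg}}(\phi)

The choice of \lambda shifts the minimiser \hat{\phi}, which is the hyperparameter effect HyperPredict estimates per image pair and HyperMorph amortizes over during training.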
- Learning the Effect of Registration Hyperparameters with HyperMorph [7.313453912494172]
We introduce HyperMorph, a framework that facilitates efficient hyperparameter tuning in learning-based deformable image registration.
We show that it enables fast, high-resolution hyperparameter search at test-time, reducing the inefficiency of traditional approaches.
arXiv Detail & Related papers (2022-03-30T21:30:06Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
- Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation [0.0]
We develop an approximate hypergradient-based hyperparameter optimiser.
It requires only one training episode, with no restarts.
We also provide a motivating argument for convergence to the true hypergradient.
arXiv Detail & Related papers (2021-10-20T09:57:57Z)
- HyperNP: Interactive Visual Exploration of Multidimensional Projection Hyperparameters [61.354362652006834]
HyperNP is a scalable method that allows for real-time interactive exploration of projection methods by training neural network approximations.
We evaluate HyperNP across three datasets in terms of performance and speed.
arXiv Detail & Related papers (2021-06-25T17:28:14Z)
- Conditional Deformable Image Registration with Convolutional Neural Network [15.83842747998493]
We propose a conditional image registration method and a new self-supervised learning paradigm for deep deformable image registration.
Our proposed method enables the precise control of the smoothness of the deformation field without sacrificing the runtime advantage or registration accuracy.
arXiv Detail & Related papers (2021-06-23T22:25:28Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters (a generic zeroth-order estimate is sketched after this entry).
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
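As a rough illustration of the zeroth-order hyper-gradient idea mentioned in the entry above, the snippet below gives a generic random finite-difference gradient estimator; it is not the HOZOG algorithm itself, and the objective f here is a stand-in for the expensive "train, then measure validation loss" evaluation.

import numpy as np

def zeroth_order_grad(f, lam, mu=1e-2, num_samples=8, rng=np.random.default_rng(0)):
    # Estimate df/dlam without derivatives: perturb lam along random directions,
    # measure the resulting change in f, and average the directional estimates.
    grad = np.zeros_like(lam)
    for _ in range(num_samples):
        u = rng.standard_normal(lam.shape)
        grad += (f(lam + mu * u) - f(lam)) / mu * u
    return grad / num_samples

# Toy usage: tune a scalar hyperparameter against a mock validation loss.
val_loss = lambda lam: (lam[0] - 0.3) ** 2
lam = np.array([1.0])
for _ in range(50):
    lam = lam - 0.1 * zeroth_order_grad(val_loss, lam)   # gradient-free hyperparameter update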
- Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z)
- Rethinking the Hyperparameters for Fine-tuning [78.15505286781293]
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks.
Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters.
This paper re-examines several common practices of setting hyperparameters for fine-tuning.
arXiv Detail & Related papers (2020-02-19T18:59:52Z)