Computing Multiple Image Reconstructions with a Single Hypernetwork
- URL: http://arxiv.org/abs/2202.11009v1
- Date: Tue, 22 Feb 2022 16:27:23 GMT
- Title: Computing Multiple Image Reconstructions with a Single Hypernetwork
- Authors: Alan Q. Wang, Adrian V. Dalca, Mert R. Sabuncu
- Abstract summary: In this work, we present a hypernetwork-based approach, called HyperRecon, to train reconstruction models.
We demonstrate our method in compressed sensing, super-resolution and denoising tasks, using two large-scale and publicly-available MRI datasets.
- Score: 19.573768098158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based techniques achieve state-of-the-art results in a wide
range of image reconstruction tasks like compressed sensing. These methods
almost always have hyperparameters, such as the weight coefficients that
balance the different terms in the optimized loss function. The typical
approach is to train the model for a hyperparameter setting determined with
some empirical or theoretical justification. Thus, at inference time, the model
can only compute reconstructions corresponding to the pre-determined
hyperparameter values. In this work, we present a hypernetwork-based approach,
called HyperRecon, to train reconstruction models that are agnostic to
hyperparameter settings. At inference time, HyperRecon can efficiently produce
diverse reconstructions, which would each correspond to different
hyperparameter values. In this framework, the user is empowered to select the
most useful output(s) based on their own judgement. We demonstrate our method
in compressed sensing, super-resolution and denoising tasks, using two
large-scale and publicly-available MRI datasets. Our code is available at
https://github.com/alanqrwang/hyperrecon.
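To make the hypernetwork idea concrete, here is a minimal PyTorch sketch in the spirit of the abstract: a small MLP maps a sampled loss-weight hyperparameter to the weights of a tiny reconstruction network, so a single trained model covers the whole hyperparameter range. The architecture, the toy denoising task, and the total-variation regularizer are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
# Minimal sketch of a hypernetwork-conditioned reconstruction model (PyTorch).
# Illustrative assumptions throughout: layer sizes, the denoising setup and the
# TV regularizer are not the authors' HyperRecon implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconNet(nn.Module):
    """Tiny conv net whose weights are produced externally by the hypernetwork."""
    shapes = [(16, 1, 3, 3), (16,), (1, 16, 3, 3), (1,)]  # conv1 w/b, conv2 w/b

    def forward(self, x, flat_params):
        params, i = [], 0
        for s in self.shapes:                        # unpack the flat weight vector
            n = torch.Size(s).numel()
            params.append(flat_params[i:i + n].view(s))
            i += n
        h = F.relu(F.conv2d(x, params[0], params[1], padding=1))
        return F.conv2d(h, params[2], params[3], padding=1)

class HyperNet(nn.Module):
    """Maps a loss-weight hyperparameter lam in [0, 1] to ReconNet weights."""
    def __init__(self, n_out):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, n_out))

    def forward(self, lam):
        return self.mlp(lam)

def total_variation(x):
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() \
         + (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

n_params = sum(torch.Size(s).numel() for s in ReconNet.shapes)
hyper, recon = HyperNet(n_params), ReconNet()
opt = torch.optim.Adam(hyper.parameters(), lr=1e-4)

for _ in range(100):                                 # toy training loop
    clean = torch.rand(8, 1, 32, 32)
    noisy = clean + 0.1 * torch.randn_like(clean)    # toy denoising task
    lam = torch.rand(1)                              # sample a hyperparameter per step
    out = recon(noisy, hyper(lam))
    # lam trades data fidelity against a total-variation regularizer
    loss = lam * F.mse_loss(out, noisy) + (1.0 - lam) * total_variation(out)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, sweeping lam over [0, 1] yields a family of reconstructions
# from this single model; the user picks the most useful output.
```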
Related papers
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- Online Continuous Hyperparameter Optimization for Generalized Linear Contextual Bandits [55.03293214439741]
In contextual bandits, an agent sequentially selects actions from a time-dependent action set based on past experience.
We propose the first online continuous hyperparameter tuning framework for contextual bandits.
We show that it achieves sublinear regret in theory and performs consistently better than existing methods on both synthetic and real datasets.
arXiv Detail & Related papers (2023-02-18T23:31:20Z)
- Learning the Effect of Registration Hyperparameters with HyperMorph [7.313453912494172]
We introduce HyperMorph, a framework that facilitates efficient hyperparameter tuning in learning-based deformable image registration.
We show that it enables fast, high-resolution hyperparameter search at test-time, reducing the inefficiency of traditional approaches.
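Such test-time search can be pictured with any hypernetwork-amortized model: each candidate costs one forward pass, and the hyperparameter input can even be refined by gradient ascent on a user-chosen criterion. A hedged sketch follows; the `model` and `score` callables are hypothetical placeholders, not HyperMorph's API.

```python
# Hedged sketch of test-time hyperparameter search through an amortized model.
import torch

@torch.no_grad()
def grid_search(model, inputs, score, n=50):
    """One forward pass per candidate; no retraining, since a single
    amortized model covers the entire hyperparameter range."""
    lams = torch.linspace(0.0, 1.0, n)
    outputs = [model(inputs, lam.view(1)) for lam in lams]
    best = max(range(n), key=lambda i: float(score(outputs[i])))
    return lams[best].item(), outputs[best]

def gradient_search(model, inputs, score, steps=100, lr=1e-2):
    """Refine lam by gradient ascent on the criterion, model weights frozen."""
    lam = torch.tensor([0.5], requires_grad=True)
    opt = torch.optim.Adam([lam], lr=lr)
    for _ in range(steps):
        loss = -score(model(inputs, lam.clamp(0.0, 1.0)))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return float(lam.detach().clamp(0.0, 1.0))

# Stand-ins for a trained model and a user-chosen quality criterion:
toy_model = lambda x, lam: x * lam
toy_score = lambda out: -((out - 0.3) ** 2).mean()
x = torch.rand(4, 1, 8, 8)
best_lam, _ = grid_search(toy_model, x, toy_score)
```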
arXiv Detail & Related papers (2022-03-30T21:30:06Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyperparameter tuning.
We show that using gradient-based data subsets for hyperparameter tuning achieves significantly faster turnaround times, with speedups of 3×-30×.
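The general flavour of gradient-based subset selection can be illustrated with a toy greedy procedure that grows a subset whose mean gradient tracks the full-data mean gradient. This is a simplified stand-in, not AUTOMATA's actual algorithm, and the per-sample gradient features are assumed to be given.

```python
# Toy sketch of gradient-based data subset selection, not AUTOMATA's algorithm.
import torch

def select_subset(per_sample_grads, k):
    """Greedily grow a subset whose mean gradient stays close (in L2)
    to the full-data mean gradient. per_sample_grads: (n, d) tensor."""
    n = per_sample_grads.shape[0]
    target = per_sample_grads.mean(dim=0)         # full-data mean gradient
    chosen, running_sum = [], torch.zeros_like(target)
    for _ in range(k):
        best_i, best_err = -1, float("inf")
        for i in range(n):
            if i in chosen:
                continue
            cand = (running_sum + per_sample_grads[i]) / (len(chosen) + 1)
            err = float((cand - target).norm())
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
        running_sum += per_sample_grads[best_i]
    return chosen

grads = torch.randn(200, 32)                      # stand-in per-sample gradients
subset = select_subset(grads, k=20)               # tune hyperparameters on these points
```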
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
- Hyperparameter Tuning is All You Need for LISTA [92.7008234085887]
Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unrolling an iterative algorithm and training it like a neural network.
We show that adding momentum to intermediate variables in the LISTA network achieves a better convergence rate.
We call this new ultra-lightweight network HyperLISTA.
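A compact sketch of the momentum idea in an unrolled ISTA-style network: each layer takes a gradient step on the data-fidelity term, adds a momentum term on the iterate, and soft-thresholds. Layer count, step sizes, thresholds, and where the momentum enters are illustrative assumptions, not HyperLISTA's exact parameterization.

```python
# Unrolled ISTA with learned momentum on the iterate (illustrative sketch).
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)

class MomentumLISTA(nn.Module):
    def __init__(self, A, n_layers=16):
        super().__init__()
        self.A = A                                        # (m, d) measurement matrix
        step0 = 1.0 / float(A.norm()) ** 2                # conservative initial step
        self.step = nn.Parameter(torch.full((n_layers,), step0))
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))
        self.beta = nn.Parameter(torch.full((n_layers,), 0.5))  # momentum weights
        self.n_layers = n_layers

    def forward(self, y):
        x_prev = x = torch.zeros(y.shape[0], self.A.shape[1])
        for k in range(self.n_layers):
            z = x + self.beta[k] * (x - x_prev)           # momentum on the iterate
            grad = (z @ self.A.T - y) @ self.A            # grad of 0.5*||A z - y||^2
            x_prev, x = x, soft_threshold(z - self.step[k] * grad, self.theta[k])
        return x

A = torch.randn(20, 50) / 20 ** 0.5                       # toy compressed-sensing setup
net = MomentumLISTA(A)
y = torch.randn(4, 20)                                    # toy measurements
x_hat = net(y)                                            # sparse estimates, shape (4, 50)
```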
arXiv Detail & Related papers (2021-10-29T16:35:38Z)
- Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation [0.0]
We develop an approximate hypergradient-based hyperparameter optimiser.
It requires only one training episode, with no restarts.
We also provide a motivating argument for convergence to the true hypergradient.
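As a much-simplified illustration of online hypergradient optimisation, the classic hypergradient-descent update adapts a single learning rate during one training run using the gradient of the current loss with respect to the previous update. This scalar version is a stand-in, not the paper's high-dimensional weight-update optimiser.

```python
# Hypergradient descent on a scalar learning rate (simplified illustration).
import torch

w = torch.randn(50, requires_grad=True)   # model parameters (toy quadratic objective)
lr = 0.01                                 # the hyperparameter being adapted online
hyper_lr = 1e-4                           # step size for the learning rate itself
prev_grad = torch.zeros_like(w)

for step in range(1000):
    loss = ((w - 1.0) ** 2).sum()         # toy objective
    grad = torch.autograd.grad(loss, w)[0]
    # Since w_t = w_{t-1} - lr * g_{t-1}, the hypergradient is
    # dL/d(lr) = -g_t . g_{t-1}; descending on it adapts lr within one run.
    lr += hyper_lr * float((grad * prev_grad).sum())
    with torch.no_grad():
        w -= lr * grad
    prev_grad = grad.detach()
```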
arXiv Detail & Related papers (2021-10-20T09:57:57Z)
- HyperNP: Interactive Visual Exploration of Multidimensional Projection Hyperparameters [61.354362652006834]
HyperNP is a scalable method that allows for real-time interactive exploration of projection methods by training neural network approximations.
We evaluate HyperNP across three datasets in terms of performance and speed.
arXiv Detail & Related papers (2021-06-25T17:28:14Z)
- Surrogate Model Based Hyperparameter Tuning for Deep Learning with SPOT [0.40611352512781856]
This article demonstrates how the architecture-level parameters of deep learning models implemented in Keras/TensorFlow can be optimized.
The tuning procedure is implemented entirely in R, the software environment for statistical computing.
arXiv Detail & Related papers (2021-05-30T21:16:51Z)
- HyperMorph: Amortized Hyperparameter Learning for Image Registration [8.13669868327082]
HyperMorph is a learning-based strategy for deformable image registration.
We show that it can be used to optimize multiple hyperparameters considerably faster than existing search strategies.
arXiv Detail & Related papers (2021-01-04T15:39:16Z)
- Weighted Random Search for CNN Hyperparameter Optimization [0.0]
We introduce the Weighted Random Search (WRS) method, a combination of Random Search (RS) and a probabilistic greedy heuristic.
The comparison criterion is the classification accuracy achieved within the same number of tested hyperparameter combinations.
According to our experiments, the WRS algorithm outperforms the other methods.
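A toy sketch of the core idea: after each evaluation, every hyperparameter either keeps the incumbent best value or is re-sampled. The single keep-probability is a simplification (the paper weights each hyperparameter by estimated importance), and the search space and objective below are hypothetical placeholders.

```python
# Toy Weighted-Random-Search-style loop; a simplified stand-in, not the paper's
# exact procedure (which weights each hyperparameter dimension individually).
import random

SPACE = {                                      # hypothetical search space
    "lr": lambda: 10 ** random.uniform(-5, -1),
    "batch_size": lambda: random.choice([16, 32, 64, 128]),
    "dropout": lambda: random.uniform(0.0, 0.5),
}

def wrs(objective, n_trials=50, p_keep=0.6):
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {}
        for name, sampler in SPACE.items():
            # keep the incumbent value with probability p_keep, else re-sample
            if best_cfg is not None and random.random() < p_keep:
                cfg[name] = best_cfg[name]
            else:
                cfg[name] = sampler()
        score = objective(cfg)                 # e.g. validation accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in objective; in practice this trains a CNN and returns its accuracy.
toy = lambda c: -abs(c["lr"] - 1e-3) - c["dropout"]
print(wrs(toy))
```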
arXiv Detail & Related papers (2020-03-30T09:40:14Z)
- Rethinking the Hyperparameters for Fine-tuning [78.15505286781293]
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks.
Current practices for fine-tuning typically involve an ad-hoc choice of hyperparameters.
This paper re-examines several common practices of setting hyperparameters for fine-tuning.
arXiv Detail & Related papers (2020-02-19T18:59:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.