Total Deep Variation for Linear Inverse Problems
- URL: http://arxiv.org/abs/2001.05005v2
- Date: Mon, 17 Feb 2020 19:39:23 GMT
- Title: Total Deep Variation for Linear Inverse Problems
- Authors: Erich Kobler and Alexander Effland and Karl Kunisch and Thomas Pock
- Abstract summary: We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
- Score: 71.90933869570914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diverse inverse problems in imaging can be cast as variational problems
composed of a task-specific data fidelity term and a regularization term. In
this paper, we propose a novel learnable general-purpose regularizer exploiting
recent architectural design patterns from deep learning. We cast the learning
problem as a discrete sampled optimal control problem, for which we derive the
adjoint state equations and an optimality condition. By exploiting the
variational structure of our approach, we perform a sensitivity analysis with
respect to the learned parameters obtained from different training datasets.
Moreover, we carry out a nonlinear eigenfunction analysis, which reveals
interesting properties of the learned regularizer. We show state-of-the-art
performance for classical image restoration and medical image reconstruction
problems.
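To make the structure described in the abstract concrete, the generic setup can be written schematically as follows. This is only an illustrative sketch: the quadratic data fidelity, step size h, unrolling depth S, and training loss are assumed placeholders rather than the paper's exact choices (A is the task-specific forward operator, y the observation, and R_theta the learned regularizer).

```latex
% Variational reconstruction: task-specific data fidelity plus learned regularizer
\min_{x}\; \frac{1}{2}\,\lVert A x - y \rVert_2^2 + R_\theta(x)

% Learning cast as a discrete sampled optimal control problem: the gradient
% steps act as state equations and the parameters \theta as the control.
\min_{\theta}\; \frac{1}{N} \sum_{i=1}^{N} \ell\bigl(x_i^{S}, x_i^{\mathrm{gt}}\bigr)
\quad \text{s.t.} \quad
x_i^{s+1} = x_i^{s} - h\, \nabla_x \Bigl( \tfrac{1}{2}\,\lVert A x_i^{s} - y_i \rVert_2^2
          + R_\theta\bigl(x_i^{s}\bigr) \Bigr), \qquad s = 0, \dots, S-1
```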
Related papers
- Iteratively Refined Image Reconstruction with Learned Attentive Regularizers [14.93489065234423]
We propose a regularization scheme for image reconstruction that leverages the power of deep learning.
Our scheme is interpretable because it corresponds to the minimization of a series of convex problems.
We offer a promising balance between interpretability, theoretical guarantees, reliability, and performance.
arXiv Detail & Related papers (2024-07-09T07:22:48Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Shared Prior Learning of Energy-Based Models for Image Reconstruction [69.72364451042922]
We propose a novel learning-based framework for image reconstruction particularly designed for training without ground truth data.
In the absence of ground truth data, we change the loss functional to a patch-based Wasserstein functional.
In shared prior learning, both aforementioned optimal control problems are optimized simultaneously with shared learned parameters of the regularizer.
arXiv Detail & Related papers (2020-11-12T17:56:05Z) - Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
arXiv Detail & Related papers (2020-08-06T18:58:35Z) - Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z) - A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z) - Model-Aware Regularization For Learning Approaches To Inverse Problems [11.314492463814817]
We provide an analysis of the generalization error of deep learning methods applicable to inverse problems.
We propose a 'plug-and-play' regularizer that leverages the knowledge of the forward map to improve the generalization of the network.
We demonstrate the efficacy of our model-aware regularized deep learning algorithms against other state-of-the-art approaches.
arXiv Detail & Related papers (2020-06-18T21:59:03Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z) - Joint learning of variational representations and solvers for inverse problems with partially-observed data [13.984814587222811]
In this paper, we design an end-to-end framework that makes it possible to learn actual variational formulations for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both stated as neural networks, with the solver relying on automatic differentiation.
This leads to a data-driven discovery of variational models; a minimal sketch of this unrolled pattern is given below.
arXiv Detail & Related papers (2020-06-05T19:53:34Z)
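Several of the entries above (the total deep variation papers and the joint learning of variational representations and solvers) share one computational pattern: unroll a few gradient steps on a variational energy with a learned regularizer and train end to end through the solver. The following is a minimal sketch of that pattern, assuming PyTorch; `SmallRegularizer`, `unrolled_solver`, the identity forward operator, and all hyperparameters are illustrative placeholders, not any paper's actual model.

```python
# Minimal sketch (assuming PyTorch) of the shared pattern: unroll gradient
# descent on 0.5*||A x - y||^2 + lam * R_theta(x) and train end to end.
import torch
import torch.nn as nn

class SmallRegularizer(nn.Module):
    """Toy learned regularizer R_theta(x); a stand-in for a multiscale CNN."""
    def __init__(self, channels=1, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.Softplus(),
            nn.Conv2d(width, width, 3, padding=1), nn.Softplus(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x).sum()  # scalar energy value

def unrolled_solver(y, A, At, reg, steps=8, step_size=0.1, lam=0.05):
    """Unrolled explicit gradient steps; each step is one 'state equation'."""
    x = At(y).clone().requires_grad_(True)  # initialize from the adjoint
    for _ in range(steps):
        energy = 0.5 * ((A(x) - y) ** 2).sum() + lam * reg(x)
        # create_graph=True keeps each step differentiable w.r.t. theta,
        # so the training loss can backpropagate through the whole solver.
        (grad,) = torch.autograd.grad(energy, x, create_graph=True)
        x = x - step_size * grad
    return x

# Training-step sketch: supervise the terminal state against ground truth.
reg = SmallRegularizer()
opt = torch.optim.Adam(reg.parameters(), lr=1e-4)
A = lambda x: x   # placeholder forward operator (identity, i.e. denoising)
At = lambda y: y  # its adjoint
y = torch.randn(1, 1, 32, 32)     # dummy observation
x_gt = torch.randn(1, 1, 32, 32)  # dummy ground truth
loss = ((unrolled_solver(y, A, At, reg) - x_gt) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

A real implementation would replace the identity operator with the task's forward model (e.g., a blur or an undersampled Fourier transform) and the toy CNN with the actual regularizer architecture of the chosen paper.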
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.