Model-Aware Regularization For Learning Approaches To Inverse Problems
- URL: http://arxiv.org/abs/2006.10869v1
- Date: Thu, 18 Jun 2020 21:59:03 GMT
- Title: Model-Aware Regularization For Learning Approaches To Inverse Problems
- Authors: Jaweria Amjad, Zhaoyan Lyu, Miguel R. D. Rodrigues
- Abstract summary: We provide an analysis of the generalisation error of deep learning methods applicable to inverse problems.
We propose a 'plug-and-play' regulariser that leverages the knowledge of the forward map to improve the generalization of the network.
We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches.
- Score: 11.314492463814817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are various inverse problems -- including reconstruction problems
arising in medical imaging -- where one is often aware of the forward operator
that maps variables of interest to the observations. It is therefore natural to
ask whether such knowledge of the forward operator can be exploited in deep
learning approaches increasingly used to solve inverse problems.
In this paper, we provide one such way via an analysis of the generalisation
error of deep learning methods applicable to inverse problems. In particular,
by building on the algorithmic robustness framework, we offer a generalisation
error bound that encapsulates key ingredients associated with the learning
problem such as the complexity of the data space, the size of the training set,
the Jacobian of the deep neural network and the Jacobian of the composition of
the forward operator with the neural network. We then propose a 'plug-and-play'
regulariser that leverages the knowledge of the forward map to improve the
generalisation of the network. We also propose a new method for tightly upper
bounding the Lipschitz constants of the relevant functions that is much more
computationally efficient than existing ones. We demonstrate
the efficacy of our model-aware regularised deep learning algorithms against
other state-of-the-art approaches on inverse problems involving various
sub-sampling operators such as those used in the classical compressed sensing setup
and accelerated Magnetic Resonance Imaging (MRI).
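The sketch below illustrates the kind of model-aware regulariser the abstract describes: a training penalty on the Jacobian of the network and on the Jacobian of the forward operator composed with the network. It is not the authors' implementation; the names `net` (a reconstruction network mapping measurements to estimates) and `A` (a known linear forward operator as a matrix) are hypothetical, and the Frobenius norms are estimated with random projections purely for illustration.

```python
# Minimal sketch (assumptions noted above), not the paper's exact regulariser.
import torch


def jacobian_penalty(net, A, y, num_projections=1):
    """Stochastic estimate of ||J_f(y)||_F^2 and ||J_{A o f}(y)||_F^2.

    Uses random Gaussian projections so that only vector-Jacobian
    products are needed, never the full Jacobians.
    """
    y = y.detach().requires_grad_(True)
    x_hat = net(y)            # f(y): reconstruction from measurements
    Ax_hat = x_hat @ A.T      # (A o f)(y): forward operator re-applied

    pen_f, pen_af = 0.0, 0.0
    for _ in range(num_projections):
        v1 = torch.randn_like(x_hat)
        v2 = torch.randn_like(Ax_hat)
        # v^T J via reverse-mode autograd (kept differentiable so the
        # penalty can be backpropagated into the network weights)
        g1 = torch.autograd.grad(x_hat, y, grad_outputs=v1,
                                 retain_graph=True, create_graph=True)[0]
        g2 = torch.autograd.grad(Ax_hat, y, grad_outputs=v2,
                                 retain_graph=True, create_graph=True)[0]
        pen_f += g1.pow(2).sum() / num_projections
        pen_af += g2.pow(2).sum() / num_projections
    return pen_f, pen_af


def model_aware_loss(net, A, y, x_true, lam_f=1e-3, lam_af=1e-3):
    """Reconstruction loss plus the two Jacobian penalty terms."""
    mse = torch.mean((net(y) - x_true) ** 2)
    pen_f, pen_af = jacobian_penalty(net, A, y)
    return mse + lam_f * pen_f + lam_af * pen_af
```

Because the penalty only requires applying the forward operator and taking vector-Jacobian products, it can be attached to an existing training loop in a plug-and-play fashion, in the spirit of the regulariser described in the abstract.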
Related papers
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning
Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z) - Transformer Meets Boundary Value Inverse Problems [4.165221477234755]
A Transformer-based deep direct sampling method is proposed for solving a class of boundary value inverse problems.
A real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and reconstructed images.
arXiv Detail & Related papers (2022-09-29T17:45:25Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Provably Convergent Algorithms for Solving Inverse Problems Using
Generative Models [47.208080968675574]
We study the use of generative models in inverse problems, aiming at a more complete understanding of their behaviour.
We support our claims with experimental results for solving various inverse problems.
We propose an extension of our approach that can handle model mismatch (i.e., situations where the generative prior is not exactly applicable).
arXiv Detail & Related papers (2021-05-13T15:58:27Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z) - Learning the Travelling Salesperson Problem Requires Rethinking
Generalization [9.176056742068813]
End-to-end training of neural network solvers for graph optimization problems such as the Travelling Salesperson Problem (TSP) has seen a surge of interest recently.
While state-of-the-art learning-driven approaches perform closely to classical solvers when trained on trivially small sizes, they are unable to generalize the learnt policy to larger instances at practical scales.
This work presents an end-to-end neural optimization pipeline that unifies several recent papers in order to identify the principled biases, model architectures and learning algorithms that promote generalization to instances larger than those seen in training.
arXiv Detail & Related papers (2020-06-12T10:14:15Z) - Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)