End-to-end reconstruction meets data-driven regularization for inverse
problems
- URL: http://arxiv.org/abs/2106.03538v1
- Date: Mon, 7 Jun 2021 12:05:06 GMT
- Title: End-to-end reconstruction meets data-driven regularization for inverse
problems
- Authors: Subhadip Mukherjee, Marcello Carioni, Ozan Öktem, Carola-Bibiane Schönlieb
- Abstract summary: We propose an unsupervised approach for learning end-to-end reconstruction operators for ill-posed inverse problems.
The proposed method combines the classical variational framework with iterative unrolling.
We demonstrate with the example of X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods.
- Score: 2.800608984818919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an unsupervised approach for learning end-to-end reconstruction
operators for ill-posed inverse problems. The proposed method combines the
classical variational framework with iterative unrolling, which essentially
seeks to minimize a weighted combination of the expected distortion in the
measurement space and the Wasserstein-1 distance between the distributions of
the reconstruction and ground-truth. More specifically, the regularizer in the
variational setting is parametrized by a deep neural network and learned
simultaneously with the unrolled reconstruction operator. The variational
problem is then initialized with the reconstruction of the unrolled operator
and solved iteratively until convergence. Notably, it takes significantly fewer
iterations to converge, thanks to the excellent initialization obtained via the
unrolled operator. The resulting approach combines the computational efficiency
of end-to-end unrolled reconstruction with the well-posedness and
noise-stability guarantees of the variational setting. Moreover, we demonstrate
with the example of X-ray computed tomography (CT) that our approach
outperforms state-of-the-art unsupervised methods, and that it outperforms or
is on par with state-of-the-art supervised learned reconstruction approaches.
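To make the recipe concrete, here is a minimal PyTorch-style sketch of the two training losses and the refinement stage the abstract describes; the unrolled operator G, the critic/regularizer R, the forward operator, and all weights are illustrative assumptions rather than the authors' implementation.

```python
import torch

def critic_loss(R, x_true, x_recon, gp_weight=10.0):
    # Train the neural regularizer R as a Wasserstein-1 critic: low values
    # on ground-truth images, high values on current reconstructions.
    loss = R(x_true).mean() - R(x_recon).mean()
    # WGAN-style gradient penalty as a 1-Lipschitz surrogate (4D image batches assumed).
    eps = torch.rand(x_true.size(0), 1, 1, 1, device=x_true.device)
    x_hat = (eps * x_true + (1 - eps) * x_recon).requires_grad_(True)
    grad = torch.autograd.grad(R(x_hat).sum(), x_hat, create_graph=True)[0]
    return loss + gp_weight * ((grad.flatten(1).norm(dim=1) - 1) ** 2).mean()

def reconstruction_loss(G, R, forward_op, y, lam=1.0):
    # Weighted combination of expected measurement-space distortion and the
    # critic's Wasserstein-1 estimate; unsupervised, since only measurements
    # y and unpaired clean images are required.
    x_recon = G(y)
    return ((forward_op(x_recon) - y) ** 2).mean() + lam * R(x_recon).mean()

def refine(G, R, forward_op, y, lam=1.0, steps=50, lr=1e-3):
    # Variational stage: start from the unrolled operator's output and run
    # gradient descent on the variational objective; the warm start is what
    # keeps the number of iterations small.
    x = G(y).detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (((forward_op(x) - y) ** 2).sum() + lam * R(x).sum()).backward()
        opt.step()
    return x.detach()
```

At inference, the variational objective is minimized starting from G(y), which is why only a few refinement iterations are needed.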
Related papers
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
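For orientation, such solvers for a noisy linear problem y = A x_0 + n need the intractable likelihood p(y | x_t); the covariance at stake above enters through a Gaussian surrogate of roughly the following shape (a sketch with illustrative symbols, not the paper's exact equations):

$$p(y \mid x_t) \;\approx\; \mathcal{N}\bigl(y;\; A\,\hat{x}_0(x_t),\; \sigma_y^2 I + A \Sigma_{0\mid t} A^\top\bigr),$$

where $\hat{x}_0(x_t)$ is the denoiser's posterior-mean estimate of $x_0$ and $\Sigma_{0\mid t}$ is the posterior covariance to be chosen or estimated.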
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
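As a rough illustration of such an interpolation step (a sketch under assumed names, not the paper's algorithm): after k steps of any inner optimizer, the slow weights move a fraction alpha toward the resulting fast weights, an averaged, nonexpansive-style update.

```python
import copy
import torch

def interpolated_step(model, inner_opt, loss_fn, data_iter, k=5, alpha=0.5):
    # Snapshot the current (slow) weights; floating-point parameters assumed.
    slow = copy.deepcopy(model.state_dict())
    # Run k ordinary optimizer steps to obtain the fast weights.
    for _ in range(k):
        inputs, targets = next(data_iter)
        inner_opt.zero_grad()
        loss_fn(model(inputs), targets).backward()
        inner_opt.step()
    fast = model.state_dict()
    # Linear interpolation: w <- w + alpha * (w_fast - w), damping oscillations.
    model.load_state_dict(
        {name: slow[name] + alpha * (fast[name] - slow[name]) for name in slow}
    )
```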
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
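Schematically, such a fixed-point reconstruction might look like the following (hypothetical learned update f, with plain fixed-point iteration standing in for the accelerated solvers typically used):

```python
import torch

def deq_reconstruct(f, y, x0, max_iter=100, tol=1e-4):
    # Find x* with x* = f(x*, y), where f is a learned update, e.g. a
    # gradient or proximal step involving the neural regularizer.
    x = x0
    with torch.no_grad():
        for _ in range(max_iter):
            x_next = f(x, y)
            if (x_next - x).norm() / (x.norm() + 1e-8) < tol:
                return x_next
            x = x_next
    return x
```

Training then differentiates through the equilibrium via the implicit function theorem rather than backpropagating through the iterations, which keeps memory constant in the number of iterations.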
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - A Lifted Bregman Formulation for the Inversion of Deep Neural Networks [28.03724379169264]
We propose a novel framework for the regularised inversion of deep neural networks.
The framework lifts the parameter space into a higher dimensional space by introducing auxiliary variables.
We present theoretical results and support their practical application with numerical examples.
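Schematically, lifting the inversion of an L-layer network replaces the hard compositional constraint with per-layer auxiliary variables and penalties, along the lines of (an illustrative generic form, not the paper's exact functional):

$$\min_{x,\, z_1, \dots, z_L}\; \mathcal{R}(x) + \sum_{l=1}^{L} B_\Phi\bigl(z_l,\; W_l z_{l-1}\bigr), \qquad z_0 = x, \quad z_L \approx y,$$

where $y$ is the observed network output, $\mathcal{R}$ a regularizer on the sought input, and $B_\Phi$ a Bregman-type penalty adapted to the activation function.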
arXiv Detail & Related papers (2023-03-01T20:30:22Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
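For background, the Laplace approximation the name refers to builds a full-covariance Gaussian posterior around a mode of the joint density (standard textbook form, not the paper's specific derivation):

$$\mu^{*} = \arg\max_{z} \log p(x, z), \qquad q(z \mid x) = \mathcal{N}\bigl(\mu^{*}, \Lambda^{-1}\bigr), \quad \Lambda = -\nabla_z^2 \log p(x, z)\big|_{z=\mu^{*}},$$

which yields correlated posterior covariances that a fully-factorized Gaussian cannot express.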
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Transformer Meets Boundary Value Inverse Problems [4.165221477234755]
A Transformer-based deep direct sampling method is proposed for solving a class of boundary value inverse problems.
A real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and reconstructed images.
arXiv Detail & Related papers (2022-09-29T17:45:25Z) - Bayesian Recurrent Units and the Forward-Backward Algorithm [91.39701446828144]
Using Bayes's theorem, we derive a unit-wise recurrence as well as a backward recursion similar to the forward-backward algorithm.
The resulting Bayesian recurrent units can be integrated as recurrent neural networks within deep learning frameworks.
Experiments on speech recognition indicate that adding the derived units at the end of state-of-the-art recurrent architectures can improve the performance at a very low cost in terms of trainable parameters.
arXiv Detail & Related papers (2022-07-21T14:00:52Z) - Deep Equilibrium Architectures for Inverse Problems in Imaging [14.945209750917483]
Recent efforts on solving inverse problems in imaging via deep neural networks use architectures inspired by a fixed number of iterations of an optimization method.
This paper describes an alternative approach corresponding to an infinite number of iterations, yielding up to a 4dB PSNR improvement in reconstruction accuracy.
arXiv Detail & Related papers (2021-02-16T03:49:58Z) - Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
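For reference, the input-convex construction can be sketched as follows (in the spirit of Amos et al.'s ICNNs; layer sizes and activations are illustrative, not the paper's regularizer architecture):

```python
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    # Output is convex in the input x: softplus is convex and non-decreasing,
    # and every weight multiplying an intermediate z is kept non-negative,
    # so the composition preserves convexity. Skip weights on x stay free.
    def __init__(self, dim, hidden=128, layers=3):
        super().__init__()
        self.skip = nn.ModuleList(nn.Linear(dim, hidden) for _ in range(layers))
        self.wz = nn.ModuleList(nn.Linear(hidden, hidden, bias=False)
                                for _ in range(layers - 1))
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        z = F.softplus(self.skip[0](x))
        for skip, wz in zip(self.skip[1:], self.wz):
            # Clamping the z-weights to be non-negative preserves convexity.
            z = F.softplus(skip(x) + F.linear(z, wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0), self.out.bias)
```

Convexity of the learned regularizer is what makes the overall variational objective amenable to the sub-gradient analysis claimed above.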
arXiv Detail & Related papers (2020-08-06T18:58:35Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z) - Joint learning of variational representations and solvers for inverse
problems with partially-observed data [13.984814587222811]
In this paper, we design an end-to-end framework for learning actual variational formulations for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both stated as neural networks using automatic differentiation for the latter.
This leads to a data-driven discovery of variational models.
arXiv Detail & Related papers (2020-06-05T19:53:34Z)