Learned convex regularizers for inverse problems
- URL: http://arxiv.org/abs/2008.02839v2
- Date: Mon, 1 Mar 2021 18:56:30 GMT
- Title: Learned convex regularizers for inverse problems
- Authors: Subhadip Mukherjee, Sören Dittmer, Zakhar Shumaylov, Sebastian Lunz, Ozan Öktem, and Carola-Bibiane Schönlieb
- Abstract summary: We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
- Score: 3.294199808987679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the variational reconstruction framework for inverse problems and
propose to learn a data-adaptive input-convex neural network (ICNN) as the
regularization functional. The ICNN-based convex regularizer is trained
adversarially to discern ground-truth images from unregularized
reconstructions. Convexity of the regularizer is desirable since (i) one can
establish analytical convergence guarantees for the corresponding variational
reconstruction problem and (ii) devise efficient and provable algorithms for
reconstruction. In particular, we show that the optimal solution to the
variational problem converges to the ground-truth if the penalty parameter
decays sub-linearly with respect to the norm of the noise. Further, we prove
the existence of a sub-gradient-based algorithm that leads to a monotonically
decreasing error in the parameter space with iterations. To demonstrate the
performance of our approach for solving inverse problems, we consider the tasks
of deblurring natural images and reconstructing images in computed tomography
(CT), and show that the proposed convex regularizer is at least competitive
with and sometimes superior to state-of-the-art data-driven techniques for
inverse problems.
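The two ingredients of the method lend themselves to a compact sketch: an input-convex network (convexity enforced by keeping the hidden-path weights non-negative) used as a regularizer, and subgradient descent with a diminishing step size on the variational objective. The layer sizes, toy forward operator, and step-size rule below are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """R(x) convex in x: the z-path weights are kept non-negative and the
    activation (ReLU) is convex and non-decreasing, so every layer preserves
    convexity of the composition."""
    def __init__(self, d, width=64, depth=3):
        super().__init__()
        self.x_layers = nn.ModuleList([nn.Linear(d, width) for _ in range(depth)])
        self.z_layers = nn.ModuleList([nn.Linear(width, width, bias=False)
                                       for _ in range(depth - 1)])
        self.out = nn.Linear(width, 1, bias=False)

    def nonneg(self):
        # Project the convexity-critical weights onto the non-negative cone;
        # during training this runs after every optimizer step.
        with torch.no_grad():
            for l in self.z_layers:
                l.weight.clamp_(min=0.0)
            self.out.weight.clamp_(min=0.0)

    def forward(self, x):
        z = F.relu(self.x_layers[0](x))
        for xl, zl in zip(self.x_layers[1:], self.z_layers):
            z = F.relu(xl(x) + zl(z))
        return self.out(z).squeeze(-1)  # one scalar energy per sample

def reconstruct(A, y, R, lam=0.1, steps=200, step0=1.0):
    """Subgradient descent on ||Ax - y||^2 + lam * R(x) with a diminishing
    step size t_k ~ 1/sqrt(k), the classic convergent subgradient rule."""
    x = torch.zeros(A.shape[1], requires_grad=True)
    for k in range(steps):
        f = ((A @ x - y) ** 2).sum() + lam * R(x.unsqueeze(0)).sum()
        g, = torch.autograd.grad(f, x)
        with torch.no_grad():
            x -= step0 / (k + 1) ** 0.5 * g
    return x.detach()

# Toy usage with a random forward operator (placeholder for blurring / CT).
d, m = 32, 24
A = torch.randn(m, d) / m ** 0.5
y = A @ torch.rand(d) + 0.01 * torch.randn(m)
R = ICNN(d)
R.nonneg()                 # make the randomly initialized network convex
x_hat = reconstruct(A, y, R)
```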
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
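As a rough illustration of the idea in this summary, the single-worker sketch below combines an EF21-style compressed gradient estimate with a normalized step; the top-k compressor, quadratic objective, and step-size schedule are assumptions for the toy example, not the paper's algorithm.

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude entries (a standard contractive compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
d = 50
Q = rng.standard_normal((d, d))
Q = Q.T @ Q / d + np.eye(d)                  # SPD matrix for a toy quadratic
grad = lambda x: Q @ x                       # gradient of 0.5 * x^T Q x

x = rng.standard_normal(d)
g = np.zeros(d)                              # error-feedback gradient estimate
for t in range(500):
    g = g + topk(grad(x) - g, k=5)           # EF21-style compressed correction
    if np.linalg.norm(g) > 0:
        x = x - 0.5 / np.sqrt(t + 1) * g / np.linalg.norm(g)  # normalized step
print("final loss:", 0.5 * x @ Q @ x)
```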
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- A Primal-dual algorithm for image reconstruction with ICNNs [3.4797100095791706]
We address the optimization problem in a data-driven variational framework, where the regularizer is parameterized by an input-convex neural network (ICNN).
While gradient-based methods are commonly used to solve such problems, they struggle to effectively handle nonsmoothness.
We show that the proposed approach outperforms subgradient methods in terms of both speed and stability.
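The primal-dual approach can be sketched on a simple nonsmooth surrogate: the Chambolle-Pock iteration below solves a 1-D total-variation problem, standing in for the paper's learned ICNN regularizer; the operators, step sizes, and problem are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 64, 48, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.repeat(rng.standard_normal(4), n // 4)   # piecewise-constant signal
y = A @ x_true + 0.01 * rng.standard_normal(m)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]             # forward differences

tau = sigma = 0.25                                   # tau * sigma * ||D||^2 <= 1
x = np.zeros(n); x_bar = x.copy(); p = np.zeros(n - 1)
M = np.linalg.inv(np.eye(n) + tau * A.T @ A)         # prox of the data term
for _ in range(500):
    p = np.clip(p + sigma * (D @ x_bar), -lam, lam)  # prox of the l1 conjugate
    x_new = M @ (x - tau * (D.T @ p) + tau * (A.T @ y))
    x_bar = 2 * x_new - x                            # extrapolation
    x = x_new
print("rel. error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```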
arXiv Detail & Related papers (2024-10-16T10:36:29Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
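The mechanism is easy to see on a toy nonexpansive map: plain iteration of a rotation cycles forever, while the linearly interpolated (Krasnosel'skii-Mann) iteration contracts to the fixed point. The rotation is an illustrative stand-in for an unstable training step, not the paper's setting.

```python
import numpy as np

theta = np.pi / 2                                # 90-degree rotation, fixed point 0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x                              # nonexpansive, not contractive

x_plain = np.array([1.0, 0.0])
x_km = np.array([1.0, 0.0])
lam = 0.5
for _ in range(100):
    x_plain = T(x_plain)                         # cycles on the unit circle
    x_km = (1 - lam) * x_km + lam * T(x_km)      # interpolated step contracts
print("plain iterate norm:", np.linalg.norm(x_plain))  # stays at 1.0
print("KM iterate norm:   ", np.linalg.norm(x_km))     # decays toward 0
```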
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
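The fusion the summary describes builds on adversarially trained regularizers; a hedged sketch of that training objective (not CLEAR's exact loss) is below: the regularizer is pushed down on ground-truth images and up on unregularized reconstructions, with a gradient penalty keeping it roughly 1-Lipschitz. The network, batch shapes, and penalty weight are placeholders.

```python
import torch

def regularizer_loss(R, x_gt, x_recon, gp_weight=10.0):
    # Critic-style loss: push R down on clean images, up on artifact-laden ones.
    loss = R(x_gt).mean() - R(x_recon).mean()
    # Gradient penalty on random interpolates between the two distributions.
    eps = torch.rand(x_gt.size(0), *([1] * (x_gt.dim() - 1)))
    x_mix = (eps * x_gt + (1 - eps) * x_recon).requires_grad_(True)
    g, = torch.autograd.grad(R(x_mix).sum(), x_mix, create_graph=True)
    gp = ((g.flatten(1).norm(dim=1) - 1) ** 2).mean()
    return loss + gp_weight * gp

# Toy usage with a stand-in network and random batches.
R = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 1))
x_gt, x_recon = torch.rand(8, 1, 8, 8), torch.rand(8, 1, 8, 8)
regularizer_loss(R, x_gt, x_recon).backward()
```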
arXiv Detail & Related papers (2023-09-17T12:06:04Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
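A minimal sketch of the fixed-point view, assuming a toy linear operator and a tiny stand-in network: one solver iteration is a map f, and the reconstruction is a point with f(x) = x, found here by plain forward iteration. A trained model makes f contractive; this random one may not converge.

```python
import torch
import torch.nn as nn

d, m = 32, 24
A = torch.randn(m, d) / m ** 0.5          # placeholder forward operator
y = A @ torch.rand(d)
denoiser = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))

def f(x, eta=0.1):
    # One solver iteration: a gradient step on the data term, then the
    # learned regularizing map. The DEQ output is a fixed point of f.
    return denoiser(x - eta * (A.T @ (A @ x - y)))

x = torch.zeros(d)
with torch.no_grad():
    for _ in range(100):
        x_next = f(x)
        if (x_next - x).norm() < 1e-5:    # fixed point reached
            break
        x = x_next
# Training a real DEQ differentiates through the fixed point implicitly
# rather than through the unrolled loop.
```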
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Deep unfolding as iterative regularization for imaging inverse problems [6.485466095579992]
Deep unfolding methods guide the design of deep neural networks (DNNs) through iterative algorithms.
We prove that the unfolded DNN converges stably to a solution of the inverse problem.
We demonstrate with an example of MRI reconstruction that the proposed method outperforms conventional unfolding methods.
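A minimal sketch of deep unfolding under these assumptions: a fixed number of proximal-gradient iterations unrolled into a network, with the proximal operator replaced by a small learned module and the step size learned; sizes and the operator are placeholders.

```python
import torch
import torch.nn as nn

class UnrolledNet(nn.Module):
    def __init__(self, A, n_iters=8):
        super().__init__()
        self.A, self.n_iters = A, n_iters
        self.eta = nn.Parameter(torch.tensor(0.1))   # learned step size
        d = A.shape[1]
        self.prox = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, y):
        x = torch.zeros(self.A.shape[1])
        for _ in range(self.n_iters):
            x = x - self.eta * (self.A.T @ (self.A @ x - y))  # data-fidelity step
            x = self.prox(x)                                  # learned proximal step
        return x

# Toy usage; supervised training would minimize ||net(y) - x_true||^2 over pairs.
A = torch.randn(24, 32) / 24 ** 0.5
x_hat = UnrolledNet(A)(torch.randn(24))
```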
arXiv Detail & Related papers (2022-11-24T07:38:47Z)
- Equivariance Regularization for Image Reconstruction [5.025654873456756]
We propose a structure-adaptive regularization scheme for solving imaging inverse problems under incomplete measurements.
This regularization scheme utilizes the equivariant structure in the physics of the measurements to mitigate the ill-posedness of the inverse problem.
Our proposed scheme can be applied in a plug-and-play manner alongside any classical first-order optimization algorithm.
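A hedged sketch of how such an equivariance penalty can look, borrowing the equivariant-imaging recipe (the paper's exact scheme may differ): if the image class is invariant under transforms T_g (cyclic shifts below), the reconstruction map should commute with them, and the mismatch is a training signal that needs no ground truth. The operator and reconstruction map are placeholders.

```python
import torch

def equivariance_loss(F, A, y, shift):
    x_hat = F(y)                        # current reconstruction
    x_g = torch.roll(x_hat, shift)      # T_g x_hat: a transformed "virtual" image
    # Re-measure and re-reconstruct the transformed image; equivariance says
    # the result should match the transformed reconstruction itself.
    eq = ((F(A @ x_g) - x_g) ** 2).mean()
    fit = ((A @ x_hat - y) ** 2).mean() # usual measurement consistency
    return eq + fit

# Toy usage with a stand-in linear reconstructor.
A = torch.randn(24, 32) / 24 ** 0.5
F = lambda y: torch.relu(A.T @ y)
loss = equivariance_loss(F, A, A @ torch.rand(32), shift=3)
```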
arXiv Detail & Related papers (2022-02-10T14:38:08Z)
- Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z)
- End-to-end reconstruction meets data-driven regularization for inverse problems [2.800608984818919]
We propose an unsupervised approach for learning end-to-end reconstruction operators for ill-posed inverse problems.
The proposed method combines the classical variational framework with iterative unrolling.
We demonstrate with the example of X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-06-07T12:05:06Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
In its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
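A minimal sketch of a regularizer in this spirit, assuming placeholder channel counts and depth rather than the TDV architecture: convolutional blocks extract features at successively coarser scales, and each scale's energy map is pooled into a scalar regularization value.

```python
import torch
import torch.nn as nn

class TDVLikeRegularizer(nn.Module):
    def __init__(self, channels=16, scales=3):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, stride=2, padding=1),
                          nn.ELU(),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(scales)])          # successive blocks, coarser scales
        self.out = nn.Conv2d(channels, 1, 1)

    def forward(self, x):
        z = self.head(x)
        energy = x.new_zeros(())
        for blk in self.blocks:
            z = blk(z)                            # halves the resolution
            energy = energy + self.out(z).mean()  # pool each scale's energy map
        return energy                             # scalar regularization value

# The scalar output plugs into a variational objective like any smooth regularizer.
r = TDVLikeRegularizer()(torch.rand(1, 1, 64, 64))
```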
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
- Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.