Learning Sparsity-Promoting Regularizers using Bilevel Optimization
- URL: http://arxiv.org/abs/2207.08939v2
- Date: Tue, 5 Sep 2023 20:53:55 GMT
- Title: Learning Sparsity-Promoting Regularizers using Bilevel Optimization
- Authors: Avrajit Ghosh, Michael T. McCann, Madeline Mitchell, and Saiprasad
Ravishankar
- Abstract summary: We present a method for supervised learning of sparsity-promoting regularizers for denoising signals and images.
Experiments with structured 1D signals and natural images show that the proposed method can learn an operator that outperforms well-known regularizers.
- Score: 9.18465987536469
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a method for supervised learning of sparsity-promoting
regularizers for denoising signals and images. Sparsity-promoting
regularization is a key ingredient in solving modern signal reconstruction
problems; however, the operators underlying these regularizers are usually
either designed by hand or learned from data in an unsupervised way. The recent
success of supervised learning (mainly convolutional neural networks) in
solving image reconstruction problems suggests that it could be a fruitful
approach to designing regularizers. Towards this end, we propose to denoise
signals using a variational formulation with a parametric, sparsity-promoting
regularizer, where the parameters of the regularizer are learned to minimize
the mean squared error of reconstructions on a training set of ground truth
image and measurement pairs. Training involves solving a challenging bilevel
optimization problem; we derive an expression for the gradient of the training
loss using the closed-form solution of the denoising problem and provide an
accompanying gradient descent algorithm to minimize it. Our experiments with
structured 1D signals and natural images show that the proposed method can
learn an operator that outperforms well-known regularizers (total variation,
DCT-sparsity, and unsupervised dictionary learning) and collaborative filtering
for denoising. While the approach we present is specific to denoising, we
believe that it could be adapted to the larger class of inverse problems with
linear measurement models, giving it applicability in a wide range of signal
reconstruction settings.
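The paper derives the gradient of the training loss through the closed-form solution of the denoising problem. As a rough illustration of the same bilevel structure (not the paper's algorithm), the JAX sketch below instead uses a smoothed l1 penalty and differentiates through an unrolled inner gradient descent; the operator W, all hyperparameters, and the toy data are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def denoise(W, y, lam=0.1, eps=0.1, lr=0.2, steps=100):
    """Lower-level problem: x*(y) = argmin_x 0.5||x - y||^2 + lam*||Wx||_1.
    The l1 term is smoothed as sqrt(z^2 + eps^2) so plain gradient descent
    applies; unrolling the iterations makes the solve differentiable in W."""
    def obj(x):
        z = W @ x
        return 0.5 * jnp.sum((x - y) ** 2) + lam * jnp.sum(jnp.sqrt(z**2 + eps**2) - eps)
    step = lambda x, _: (x - lr * jax.grad(obj)(x), None)
    x, _ = jax.lax.scan(step, y, None, length=steps)
    return x

def training_loss(W, ys, xs_true):
    """Upper-level objective: MSE of reconstructions against ground truth."""
    xs_hat = jax.vmap(lambda y: denoise(W, y))(ys)
    return jnp.mean((xs_hat - xs_true) ** 2)

# Gradient of the training loss w.r.t. the regularizer's operator W,
# obtained here by autodiff through the unrolled inner solver.
loss_and_grad = jax.jit(jax.value_and_grad(training_loss))

# Toy training pairs of clean signals and noisy measurements (sizes arbitrary).
n, N = 32, 64
xs_true = jax.random.normal(jax.random.PRNGKey(0), (N, n))
ys = xs_true + 0.1 * jax.random.normal(jax.random.PRNGKey(1), (N, n))

# Plain gradient descent on W; step sizes are chosen conservatively for this scale.
W = jax.random.normal(jax.random.PRNGKey(2), (n, n)) / jnp.sqrt(n)
for _ in range(100):
    loss, gW = loss_and_grad(W, ys, xs_true)
    W = W - 0.05 * gW
```

The sketch trades the paper's exact closed-form gradient for the convenience of automatic differentiation through an approximate inner solve; the paper itself differentiates the closed-form solution of the l1 denoising problem directly.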
Related papers
- Image Restoration by Denoising Diffusion Models with Iteratively Preconditioned Guidance [9.975341265604577]
Training deep neural networks has become a common approach for addressing image restoration problems.
In low-noise settings, guidance that is based on back-projection (BP) has been shown to be a promising strategy.
We propose a novel guidance technique, based on preconditioning, that allows traversing from BP-based guidance to least-squares-based guidance.
arXiv Detail & Related papers (2023-12-27T10:57:03Z)
- Batch-less stochastic gradient descent for compressive learning of deep regularization for image denoising [0.0]
We consider the problem of denoising with the help of prior information taken from a database of clean signals or images.
With deep neural networks (DNNs), complex distributions can be recovered from a large training database.
We propose two variants of stochastic gradient descent (SGD) for the recovery of deep regularization parameters.
arXiv Detail & Related papers (2023-10-02T11:46:11Z)
- Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Bilevel learning of l1-regularizers with closed-form gradients (BLORC) [8.138650738423722]
We present a method for supervised learning of sparsity-promoting regularizers.
The parameters are learned to minimize the mean squared error of reconstruction on a training set of ground truth signal and measurement pairs.
arXiv Detail & Related papers (2021-11-21T17:01:29Z)
- Preconditioned Plug-and-Play ADMM with Locally Adjustable Denoiser for Image Restoration [54.23646128082018]
We extend the concept of plug-and-play optimization to use denoisers that can be parameterized for non-constant noise variance.
We show that our pixel-wise adjustable denoiser, along with a suitable preconditioning strategy, can further improve the plug-and-play ADMM approach for several applications; a generic sketch of the plug-and-play ADMM loop appears after this list.
arXiv Detail & Related papers (2021-10-01T15:46:35Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method, aiming to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over the current state of the art.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems; a sketch of the standard ICNN construction appears after this list.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
arXiv Detail & Related papers (2020-08-06T18:58:35Z)
- Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders [81.30960319178725]
We propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs).
First, we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder.
We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training.
arXiv Detail & Related papers (2020-06-10T21:28:13Z)
- Supervised Learning of Sparsity-Promoting Regularizers for Denoising [13.203765985718205]
We present a method for supervised learning of sparsity-promoting regularizers for image denoising.
Our experiments show that the proposed method can learn an operator that outperforms well-known regularizers.
arXiv Detail & Related papers (2020-06-09T21:38:05Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
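As context for the preconditioned plug-and-play entry above, here is a minimal sketch of the generic plug-and-play ADMM loop it builds on, for a linear model y = Ax + noise. The soft-threshold stand-in denoiser, rho, sigma, and all names are illustrative assumptions; the paper's contributions (preconditioning and a pixel-wise adjustable learned denoiser) are not reproduced here.

```python
import jax.numpy as jnp
from jax.scipy.linalg import cho_factor, cho_solve

def soft_threshold(v, t):
    """Toy plug-in denoiser; the papers above would use a learned model here."""
    return jnp.sign(v) * jnp.maximum(jnp.abs(v) - t, 0.0)

def pnp_admm(A, y, rho=1.0, sigma=0.05, iters=50):
    """Generic plug-and-play ADMM for y = A x + noise.
    x-update: least-squares proximal step; v-update: plug-in denoiser."""
    n = A.shape[1]
    x = jnp.zeros(n); v = jnp.zeros(n); u = jnp.zeros(n)
    Aty = A.T @ y
    # The SPD system matrix is fixed, so factor it once up front.
    chol = cho_factor(A.T @ A + rho * jnp.eye(n))
    for _ in range(iters):
        x = cho_solve(chol, Aty + rho * (v - u))   # data-fidelity step
        v = soft_threshold(x + u, sigma)           # denoising step
        u = u + x - v                              # dual update
    return v
```

A learned denoiser with per-pixel strength would replace soft_threshold, with sigma becoming a noise map rather than a scalar.

And for the learned convex regularizer entry, a sketch of the standard input-convex neural network (ICNN) construction. Convexity in x follows because softplus is convex and nondecreasing and the hidden-path weights are reparameterized to be nonnegative; parameter shapes and initialization are illustrative, not the cited paper's architecture.

```python
import jax
import jax.numpy as jnp

def init_icnn(key, n, hidden=64, depth=2):
    """Illustrative parameters for a small ICNN regularizer R(x)."""
    keys = jax.random.split(key, 2 * depth + 1)
    p = {"Wx0": 0.1 * jax.random.normal(keys[0], (n, hidden)), "layers": []}
    for i in range(depth):
        Wz = 0.1 * jax.random.normal(keys[2 * i + 1], (hidden, hidden))
        Wx = 0.1 * jax.random.normal(keys[2 * i + 2], (n, hidden))
        p["layers"].append((Wz, Wx))
    return p

def icnn(p, x):
    """R(x), convex in x: each layer is a nonnegative combination of convex
    features plus an affine skip from x, passed through a convex,
    nondecreasing activation (softplus)."""
    z = jax.nn.softplus(x @ p["Wx0"])
    for Wz, Wx in p["layers"]:
        z = jax.nn.softplus(z @ jax.nn.softplus(Wz) + x @ Wx)  # softplus(Wz) >= 0
    return jnp.sum(z)
```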
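Such an R(x) can be dropped into a variational reconstruction argmin_x ||Ax - y||^2 + R(x), which stays convex in x and can be minimized with (sub)gradient methods, as the cited paper's convergence analysis requires.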