Iterative Reweighted Least Squares Networks With Convergence Guarantees
for Solving Inverse Imaging Problems
- URL: http://arxiv.org/abs/2308.05745v1
- Date: Thu, 10 Aug 2023 17:59:46 GMT
- Title: Iterative Reweighted Least Squares Networks With Convergence Guarantees
for Solving Inverse Imaging Problems
- Authors: Iaroslav Koshelev and Stamatios Lefkimmiatis
- Abstract summary: We present a novel optimization strategy for image reconstruction tasks under analysis-based image regularization.
We parameterize such regularizers using potential functions that correspond to weighted extensions of the $\ell_p^p$-vector and $\mathcal{S}_p^p$ Schatten-matrix quasi-norms.
We show that thanks to the convergence guarantees of our proposed minimization strategy, such optimization can be successfully performed with a memory-efficient implicit back-propagation scheme.
- Score: 12.487990897680422
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work we present a novel optimization strategy for image
reconstruction tasks under analysis-based image regularization, which promotes
sparse and/or low-rank solutions in some learned transform domain. We
parameterize such regularizers using potential functions that correspond to
weighted extensions of the $\ell_p^p$-vector and $\mathcal{S}_p^p$
Schatten-matrix quasi-norms with $0 < p \le 1$. Our proposed minimization
strategy extends the Iteratively Reweighted Least Squares (IRLS) method,
typically used for synthesis-based $\ell_p$ and $\mathcal{S}_p$ norm and
analysis-based $\ell_1$ and nuclear norm regularization. We prove that under
mild conditions our minimization algorithm converges linearly to a stationary
point, and we provide an upper bound for its convergence rate. Further, to
select the parameters of the regularizers that deliver the best results for the
problem at hand, we propose to learn them from training data by formulating the
supervised learning process as a stochastic bilevel optimization problem. We
show that thanks to the convergence guarantees of our proposed minimization
strategy, such optimization can be successfully performed with a
memory-efficient implicit back-propagation scheme. We implement our learned
IRLS variants as recurrent networks and assess their performance on the
challenging image reconstruction tasks of non-blind deblurring,
super-resolution and demosaicking. The comparisons against other existing
learned reconstruction approaches demonstrate that our overall method is very
competitive and in many cases outperforms existing unrolled networks, whose
number of parameters is orders of magnitude higher than in our case.
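The abstract describes the core computational step only at a high level. As a rough illustration of the plain (non-learned) IRLS principle that the paper extends, the following Python/NumPy sketch minimizes a smoothed analysis-based $\ell_p^p$ objective, $\tfrac{1}{2}\|Ax-y\|^2 + \lambda\sum_i (|(Lx)_i|^2+\epsilon)^{p/2}$, by alternating between a reweighting step and a weighted least-squares solve. It is not the authors' algorithm: the learned weighted potentials, the Schatten-matrix terms, the convergence analysis and the implicit back-propagation are omitted, and the operators `A`, `L` and the parameters `lam`, `p`, `eps`, `iters` are illustrative assumptions.

```python
import numpy as np

def irls_analysis_lp(A, L, y, lam=1e-2, p=0.8, eps=1e-6, iters=50):
    """Minimize 0.5*||A x - y||^2 + lam * sum_i (|(L x)_i|^2 + eps)^(p/2) by plain IRLS."""
    AtA, Aty = A.T @ A, A.T @ y
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # least-squares initialization
    for _ in range(iters):
        z = L @ x                                  # analysis (transform-domain) coefficients
        w = (z**2 + eps) ** (p / 2 - 1)            # reweighting from the quadratic majorizer
        H = AtA + lam * p * (L.T * w) @ L          # normal equations of the weighted LS surrogate
        x = np.linalg.solve(H, Aty)                # small dense solve; CG would be used at scale
    return x

# Tiny illustrative usage with a TV-like (finite-difference) analysis operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30))
L = np.diff(np.eye(30), axis=0)                    # first-difference operator (29 x 30)
x_true = np.zeros(30)
x_true[10:20] = 1.0                                # piecewise-constant ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = irls_analysis_lp(A, L, y)
```

In the paper, the hand-set values above would instead be parameterized and learned from training data via the stochastic bilevel formulation, with gradients obtained through the memory-efficient implicit back-propagation scheme rather than by unrolling the loop.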
Related papers
- Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z)
- From Inverse Optimization to Feasibility to ERM [11.731853838892487]
We study the contextual inverse optimization setting, which utilizes additional contextual information to better predict the unknown parameters.
We experimentally validate our approach on synthetic and real-world problems and demonstrate improved performance compared to existing methods.
arXiv Detail & Related papers (2024-02-27T21:06:42Z)
- Diffusion Stochastic Optimization for Min-Max Problems [33.73046548872663]
The optimistic gradient method is useful in addressing minimax optimization problems.
Motivated by the observation that the conventional version suffers from the need for a large batch size, we introduce and analyze a new same-sample formulation of the stochastic optimistic gradient method.
arXiv Detail & Related papers (2024-01-26T01:16:59Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z)
- Learning Sparse and Low-Rank Priors for Image Recovery via Iterative Reweighted Least Squares Minimization [12.487990897680422]
We introduce a novel optimization algorithm for image recovery under learned sparse and low-rank constraints.
Our proposed algorithm generalizes the Iteratively Reweighted Least Squares (IRLS) method, used for signal recovery.
Our reconstruction results are shown to be very competitive and in many cases outperform those of existing unrolled networks.
arXiv Detail & Related papers (2023-04-20T17:59:45Z)
- Smooth over-parameterized solvers for non-smooth structured optimization [3.756550107432323]
Non-smoothness encodes structural constraints on the solutions, such as sparsity, group sparsity, low rank and sharp edges.
We operate a non-convex but smooth over-parametrization of the underlying non-smooth optimization problems.
Our main contribution is to apply the Variable Projection (VarPro) method, which defines a new formulation by explicitly minimizing over part of the variables.
arXiv Detail & Related papers (2022-05-03T09:23:07Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with an improved iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT); a generic sketch-and-solve illustration is given after this list.
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multi-task square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
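As flagged next to the sketching-based least-squares entry above, here is a minimal, generic sketch-and-solve illustration for L2-regularized least squares with a Gaussian embedding. It is not the effective-dimension-adaptive algorithm (nor the SRHT variant) proposed in that paper; the sketch size `m`, the regularization weight `lam` and the closed-form solve are assumptions made purely for illustration.

```python
import numpy as np

def sketched_ridge(A, y, lam=1e-1, m=200, seed=None):
    """Approximate argmin_x ||A x - y||^2 + lam*||x||^2 via Gaussian sketch-and-solve."""
    rng = np.random.default_rng(seed)
    n_rows, d = A.shape
    S = rng.standard_normal((m, n_rows)) / np.sqrt(m)   # Gaussian embedding
    As, ys = S @ A, S @ y                               # sketched data and targets
    return np.linalg.solve(As.T @ As + lam * np.eye(d), As.T @ ys)

# Compare the sketched solution with the exact ridge solution on a tall random problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 50))
x_true = rng.standard_normal(50)
y = A @ x_true + 0.1 * rng.standard_normal(5000)
x_sketch = sketched_ridge(A, y, seed=2)
x_exact = np.linalg.solve(A.T @ A + 1e-1 * np.eye(50), A.T @ y)
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

The same pattern applies with an SRHT embedding in place of the dense Gaussian matrix; the sketch size trades approximation accuracy against the cost of forming and solving the reduced system.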