WARPd: A linearly convergent first-order method for inverse problems with approximate sharpness conditions
- URL: http://arxiv.org/abs/2110.12437v1
- Date: Sun, 24 Oct 2021 13:19:41 GMT
- Title: WARPd: A linearly convergent first-order method for inverse problems with approximate sharpness conditions
- Authors: Matthew J. Colbrook
- Abstract summary: Sharpness conditions directly control the recovery performance of restart schemes for first-order methods.
We provide a first-order method: Weighted, Accelerated and Restarted Primal-dual (WARPd).
Under a generic approximate sharpness condition, WARPd achieves stable linear convergence to the desired vector.
We show how WARPd compares favorably with specialized state-of-the-art methods and is ideally suited for solving large-scale problems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstruction of signals from undersampled and noisy measurements is a topic
of considerable interest. Sharpness conditions directly control the recovery
performance of restart schemes for first-order methods without the need for
restrictive assumptions such as strong convexity. However, they are challenging
to apply in the presence of noise or approximate model classes (e.g.,
approximate sparsity). We provide a first-order method: Weighted, Accelerated
and Restarted Primal-dual (WARPd), based on primal-dual iterations and a novel
restart-reweight scheme. Under a generic approximate sharpness condition, WARPd
achieves stable linear convergence to the desired vector. Many problems of
interest fit into this framework. For example, we analyze sparse recovery in
compressed sensing, low-rank matrix recovery, matrix completion, TV
regularization, minimization of $\|Bx\|_{l^1}$ under constraints
($l^1$-analysis problems for general $B$), and mixed regularization problems.
We show how several quantities controlling recovery performance also provide
explicit approximate sharpness constants. Numerical experiments show that WARPd
compares favorably with specialized state-of-the-art methods and is ideally
suited for solving large-scale problems. We also present a noise-blind variant
based on the Square-Root LASSO decoder. Finally, we show how to unroll WARPd as
neural networks. This approximation theory result provides lower bounds for
stable and accurate neural networks for inverse problems and sheds light on
architecture choices. Code and a gallery of examples are made available online
as a MATLAB package.
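
For reference, the "approximate sharpness condition" above can be written schematically as below. This is a sketch of the standard form of such conditions; the exact norms, constants, and exponent conventions used in the paper are not reproduced here.

```latex
% Approximate sharpness (schematic; constants here are illustrative):
% alpha, beta > 0 are sharpness constants, \hat{x} is the desired vector,
% and g >= 0 collects noise and model-mismatch terms (e.g., approximate
% sparsity). Classical sharpness is recovered when g = 0.
\alpha \, \| x - \hat{x} \|^{\beta} \;\le\; f(x) - f(\hat{x}) + g
```

To make the "primal-dual iterations plus restarts" structure concrete, the following is a minimal Python/NumPy sketch of a restarted Chambolle-Pock-type scheme for a square-root-LASSO-style objective. It is an illustration only, not the authors' MATLAB implementation: the function names, the objective, the step-size choices, and the plain warm-start restart rule are assumptions, and WARPd's reweighting and precise restart schedule are omitted.

```python
import numpy as np

def primal_dual(A, b, x0, y0, tau, sigma, iters, lam=1.0):
    """Chambolle-Pock-style iterations for
    min_x lam*||x||_1 + ||A x - b||_2 (a generic sketch, not WARPd itself)."""
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(iters):
        # Dual step: the prox of the conjugate of ||. - b||_2 is a shifted
        # projection onto the unit l2 ball.
        y = y + sigma * (A @ x_bar - b)
        y = y / max(1.0, np.linalg.norm(y))
        # Primal step: the prox of lam*||.||_1 is soft-thresholding.
        x_new = x - tau * (A.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        # Extrapolation step of the primal-dual scheme.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x, y

def restarted_pd(A, b, n_restarts=10, inner_iters=200):
    """Restart wrapper: warm-start each inner solve at the previous output.
    (WARPd additionally reweights and rescales parameters at each restart;
    that is omitted in this sketch.)"""
    m, n = A.shape
    L = np.linalg.norm(A, 2)        # operator norm of A
    tau = sigma = 0.99 / L          # ensures tau*sigma*L^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_restarts):
        x, y = primal_dual(A, b, x, y, tau, sigma, inner_iters)
    return x
```

The point of the restart structure is that, under a sharpness-type condition, each warm-started inner solve contracts the error by a fixed factor, so the error decreases geometrically in the number of restarts; this is the mechanism behind the linear convergence claimed in the abstract.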
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Performative Reinforcement Learning with Linear Markov Decision Process [14.75815792682734]
We study the setting of performative reinforcement learning, where the deployed policy affects both the reward and the transition of the underlying Markov decision process.
We generalize the results to linear Markov decision processes, which are the primary theoretical model of large-scale MDPs.
arXiv Detail & Related papers (2024-11-07T23:04:48Z) - Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study the robust phase retrieval problem, where the task is to recover an unknown signal.
Our proposed method avoids the need for computationally expensive spectral initialization, using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Retire: Robust Expectile Regression in High Dimensions [3.9391041278203978]
Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data.
We propose and study (penalized) robust expectile regression (retire).
We show that the proposed procedure can be efficiently solved by a semismooth Newton coordinate descent algorithm.
arXiv Detail & Related papers (2022-12-11T18:03:12Z) - Restricted Strong Convexity of Deep Learning Models with Smooth Activations [31.003601717265006]
We study the problem of optimization of deep learning models with smooth activation functions.
We introduce a new analysis of optimization based on Restricted Strong Convexity (RSC).
Ours is the first result on establishing geometric convergence of GD based on RSC for deep learning models.
arXiv Detail & Related papers (2022-09-29T21:24:26Z) - Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained nonlinear optimization problems.
We apply a Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
arXiv Detail & Related papers (2022-05-27T00:34:03Z) - Distributional Hardness Against Preconditioned Lasso via Erasure-Robust Designs [22.41443027099101]
We show that standard sparse random designs are with high probability robust to adversarial measurement erasures.
This is the first time that partial recoverability of arbitrary sparse signals under erasures has been studied in compressed sensing.
arXiv Detail & Related papers (2022-03-05T22:16:05Z) - Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph neural networks.
We propose two restoration methods based on convex-independent deep ADMM.
Parameters in the proposed restoration methods are trainable in an end-to-end manner.
arXiv Detail & Related papers (2021-06-30T08:57:01Z)