Projected Gradient Descent Algorithms for Solving Nonlinear Inverse
Problems with Generative Priors
- URL: http://arxiv.org/abs/2209.10093v1
- Date: Wed, 21 Sep 2022 04:05:12 GMT
- Title: Projected Gradient Descent Algorithms for Solving Nonlinear Inverse
Problems with Generative Priors
- Authors: Zhaoqiang Liu, Jun Han
- Abstract summary: We assume that the unknown $p$-dimensional signal lies near the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs.
We propose a nonlinear least-squares estimator that is guaranteed to enjoy an optimal statistical rate.
- Score: 17.426500577203505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose projected gradient descent (PGD) algorithms for
signal estimation from noisy nonlinear measurements. We assume that the unknown
$p$-dimensional signal lies near the range of an $L$-Lipschitz continuous
generative model with bounded $k$-dimensional inputs. In particular, we
consider two cases when the nonlinear link function is either unknown or known.
For unknown nonlinearity, similarly to \cite{liu2020generalized}, we make the
assumption of sub-Gaussian observations and propose a linear least-squares
estimator. We show that when there is no representation error and the sensing
vectors are Gaussian, roughly $O(k \log L)$ samples suffice to ensure that a
PGD algorithm converges linearly to a point achieving the optimal statistical
rate using arbitrary initialization. For known nonlinearity, we assume
monotonicity as in \cite{yang2016sparse}, and make much weaker assumptions on
the sensing vectors and allow for representation error. We propose a nonlinear
least-squares estimator that is guaranteed to enjoy an optimal statistical
rate. A corresponding PGD algorithm is provided and is shown to also converge
linearly to the estimator using arbitrary initialization. In addition, we
present experimental results on image datasets to demonstrate the performance
of our PGD algorithms.
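As a rough illustration of the linear least-squares case described in the abstract, the sketch below alternates a gradient step on a quadratic loss with an approximate projection onto the range of a generative model. The toy generator G, the latent-space Adam-based projection, the Gaussian sensing matrix A, and all step sizes and iteration counts are illustrative assumptions, not the paper's exact algorithm or guarantees.

```python
# Minimal PGD sketch with a generative prior (illustrative assumptions only).
import torch


def project_to_range(G, x, k, steps=200, lr=1e-2):
    """Approximate projection of x onto range(G) by minimizing ||G(z) - x||^2
    over bounded k-dimensional latent inputs z (inner Adam loop is a heuristic)."""
    z = torch.zeros(k, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.sum((G(z) - x) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            z.clamp_(-1.0, 1.0)  # keep the latent input in a bounded set
    with torch.no_grad():
        return G(z)


def pgd_least_squares(G, A, y, k, outer_steps=50, eta=0.5):
    """Projected gradient descent on (1/2)||A x - y||^2, projecting every
    iterate back onto the range of G; starts from an arbitrary point."""
    p = A.shape[1]
    x = project_to_range(G, torch.zeros(p), k)   # arbitrary initialization
    for _ in range(outer_steps):
        grad = A.T @ (A @ x - y)                 # gradient of the quadratic loss
        x = project_to_range(G, x - eta * grad, k)
    return x


if __name__ == "__main__":
    # Toy usage: small random generator, Gaussian sensing, noiseless data.
    torch.manual_seed(0)
    k, p, m = 5, 50, 100
    G = torch.nn.Sequential(torch.nn.Linear(k, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, p))
    A = torch.randn(m, p) / m ** 0.5             # scaled so A^T A is roughly identity
    x_true = G(torch.rand(k) * 2 - 1).detach()   # signal in the range of G
    y = A @ x_true
    x_hat = pgd_least_squares(G, A, y, k)
    rel_err = torch.norm(x_hat - x_true) / torch.norm(x_true)
    print(f"relative reconstruction error: {rel_err.item():.3f}")
```

In this no-representation-error, noiseless toy setting the iterates should approach the true signal; the hyperparameters above are only meant to make the sketch runnable, not to reproduce the paper's experiments.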
Related papers
- Nonlinear Laplacians: Tunable principal component analysis under directional prior information [9.884862296109892]
We introduce a new family of algorithms for detecting and estimating a rank-one signal from a noisy observation.
We study the performance of such algorithms compared to direct spectral algorithms.
We find both theoretically and empirically that, if $\sigma$ is chosen appropriately, then nonlinear Laplacian spectral algorithms substantially outperform direct spectral algorithms.
arXiv Detail & Related papers (2025-05-18T19:37:47Z)
- Uncertainty quantification for iterative algorithms in linear models with application to early stopping [4.150180443030652]
This paper investigates the iterates $\hat{b}^1,\dots,\hat{b}^T$ obtained from iterative algorithms in high-dimensional linear regression problems.
The analysis and proposed estimators are applicable to Gradient Descent (GD), proximal GD, and their accelerated variants such as Fast Iterative Soft-Thresholding (FISTA).
arXiv Detail & Related papers (2024-04-27T10:20:41Z)
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z)
- Alternating minimization for generalized rank one matrix sensing: Sharp predictions from a random initialization [5.900674344455754]
We analyze a technique for estimating the factors of a rank-one matrix from i.i.d. random measurements.
We show sharp convergence guarantees for exact recovery in a single step.
Our analysis also exposes several other properties of this problem.
arXiv Detail & Related papers (2022-07-20T05:31:05Z)
- Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained nonlinear optimization problems.
We apply a Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
arXiv Detail & Related papers (2022-05-27T00:34:03Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier; a generic unrolling sketch appears after this list.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Sample Complexity Bounds for Two Timescale Value-based Reinforcement Learning Algorithms [65.09383385484007]
Two timescale stochastic approximation (SA) has been widely used in value-based reinforcement learning algorithms.
We study the non-asymptotic convergence rate of two timescale linear and nonlinear TDC and Greedy-GQ algorithms.
arXiv Detail & Related papers (2020-11-10T11:36:30Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of our algorithms over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
- Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model [0.0]
We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-squares risk under this model.
As a special case, we analyze an online algorithm for estimating a real function on the unit interval from the noiseless observation of its value at randomly sampled points.
arXiv Detail & Related papers (2020-06-15T08:25:50Z)
- Asymptotic errors for convex penalized linear regression beyond Gaussian matrices [23.15629681360836]
We consider the problem of learning a coefficient vector $x_0 \in \mathbb{R}^N$ from noisy linear observations.
We provide a rigorous derivation of an explicit formula for the mean squared error.
We show that our predictions agree remarkably well with numerics even for very moderate sizes.
arXiv Detail & Related papers (2020-02-11T13:43:32Z)
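The algorithm-unfolding entry above (Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization) describes the general recipe of implementing each iteration of a model-based algorithm as a neural layer. The sketch below applies that recipe to a deliberately simple stand-in, unrolled gradient descent for least squares, rather than the paper's GDPA-based SDR solver; the class name, the per-layer step-size parameterization, and the layer count are illustrative assumptions.

```python
# Generic algorithm-unfolding sketch: each gradient-descent iteration on
# (1/2)||A x - y||^2 becomes a layer with its own learnable step size, and the
# resulting stack can be trained end-to-end like any other network.
import torch


class UnrolledGradientDescent(torch.nn.Module):
    def __init__(self, A: torch.Tensor, num_layers: int = 10):
        super().__init__()
        self.A = A
        # one learnable step size per unrolled iteration (i.e., per layer)
        self.steps = torch.nn.Parameter(0.1 * torch.ones(num_layers))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.A.shape[1])
        for eta in self.steps:
            # one gradient-descent iteration on the least-squares loss = one layer
            x = x - eta * (self.A.T @ (self.A @ x - y))
        return x
```

Because the unrolled network has only one scalar parameter per layer, it stays parsimonious while remaining interpretable as a fixed number of iterations of the underlying model-based algorithm.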