Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy
Blind Deconvolution under Random Designs
- URL: http://arxiv.org/abs/2008.01724v2
- Date: Tue, 13 Jul 2021 01:56:10 GMT
- Title: Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy
Blind Deconvolution under Random Designs
- Authors: Yuxin Chen, Jianqing Fan, Bingyan Wang, Yuling Yan
- Abstract summary: We investigate the effectiveness of convex relaxation and nonconvex
optimization in solving bilinear systems of equations vis-à-vis random noise.
The results significantly improve upon the state-of-the-art theoretical guarantees.
- Score: 12.089409241521185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the effectiveness of convex relaxation and nonconvex
optimization in solving bilinear systems of equations under two different
designs (i.e., a sort of random Fourier design and Gaussian design). Despite
their wide applicability, the theoretical understanding of these two paradigms
remains largely inadequate in the presence of random noise. The current paper
makes two contributions by demonstrating that: (1) a two-stage nonconvex
algorithm attains minimax-optimal accuracy within a logarithmic number of
iterations, and (2) convex relaxation also achieves minimax-optimal statistical
accuracy vis-à-vis random noise. Both results significantly improve upon the
state-of-the-art theoretical guarantees.
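To make the problem setup concrete, below is a minimal numerical sketch of the bilinear measurement model, assuming (for simplicity) real-valued Gaussian design vectors on both sides, a spectral-style initialization, and vanilla gradient descent on the nonconvex least-squares loss. It only illustrates the kind of bilinear system studied here; it is not the paper's two-stage algorithm or its convex-relaxation counterpart, and all dimensions, noise levels, and step sizes are arbitrary choices.

```python
import numpy as np

# Bilinear measurements y_j = <b_j, h> * <a_j, x> + noise_j, j = 1..m,
# with (hypothetical) real Gaussian design vectors a_j, b_j for simplicity.
rng = np.random.default_rng(0)
K, N, m, sigma = 20, 20, 800, 1e-2
h_true, x_true = rng.standard_normal(K), rng.standard_normal(N)
B, A = rng.standard_normal((m, K)), rng.standard_normal((m, N))
y = (B @ h_true) * (A @ x_true) + sigma * rng.standard_normal(m)

# Spectral-style initialization: E[(1/m) * sum_j y_j b_j a_j^T] = h_true x_true^T.
M = (B * y[:, None]).T @ A / m
u, s, vt = np.linalg.svd(M, full_matrices=False)
h, x = np.sqrt(s[0]) * u[:, 0], np.sqrt(s[0]) * vt[0]

# Vanilla gradient descent on f(h, x) = (1/2m) * sum_j (<b_j,h><a_j,x> - y_j)^2.
eta = 0.02
for _ in range(2000):
    r = (B @ h) * (A @ x) - y
    grad_h = B.T @ (r * (A @ x)) / m
    grad_x = A.T @ (r * (B @ h)) / m
    h, x = h - eta * grad_h, x - eta * grad_x

# Error is measured on the outer product h x^T, which removes the inherent
# scaling ambiguity (h, x) -> (c h, x / c).
rel_err = (np.linalg.norm(np.outer(h, x) - np.outer(h_true, x_true))
           / np.linalg.norm(np.outer(h_true, x_true)))
print(f"relative error of h x^T: {rel_err:.3e}")
```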
Related papers
- Distributed Extra-gradient with Optimal Complexity and Communication
Guarantees [60.571030754252824]
We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, the de facto algorithm for monotone VI problems, was not designed with communication efficiency in mind (a minimal sketch of the classical extra-gradient update appears after this list).
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
arXiv Detail & Related papers (2023-08-17T21:15:04Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms (a minimal sketch of a two-point zeroth-order step appears after this list).
arXiv Detail & Related papers (2023-06-03T17:05:13Z) - Computationally Efficient and Statistically Optimal Robust
High-Dimensional Linear Regression [15.389011827844572]
High-dimensional linear regression under heavy-tailed noise or outlier corruption is challenging, both computationally and statistically.
In this paper, we introduce an algorithm for both the Gaussian-noise and heavy-tailed (finite $1+\epsilon$ moment) regression problems.
arXiv Detail & Related papers (2023-05-10T14:31:03Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Computationally Efficient and Statistically Optimal Robust Low-rank
Matrix and Tensor Estimation [15.389011827844572]
Low-rank matrix and tensor estimation under heavy-tailed noise is challenging, both computationally and statistically.
We introduce a novel Riemannian sub-gradient (RsGrad) algorithm which is not only computationally efficient but also statistically optimal.
arXiv Detail & Related papers (2022-03-02T09:05:15Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping (a minimal clipped-SGD sketch appears after this list).
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Sharp Asymptotics and Optimal Performance for Inference in Binary Models [41.7567932118769]
We study convex empirical risk minimization for high-dimensional inference in binary models.
For binary linear classification under the Logistic and Probit models, we prove that the performance of least-squares is no worse than 0.997 and 0.98 times the optimal one.
arXiv Detail & Related papers (2020-02-17T22:32:14Z)
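Referenced from the Distributed Extra-gradient entry above: a minimal sketch of the classical (single-machine) extra-gradient update for a monotone VI, illustrated on an unconstrained bilinear saddle-point problem. It omits the quantization, adaptive compression, and multi-worker communication that Q-GenX adds; the coupling matrix, step size, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Monotone VI: find z* such that <F(z*), z - z*> >= 0 for all z.  For the
# unconstrained bilinear saddle problem min_x max_y x^T M y, the operator
# F(z) = (M y, -M^T x) is monotone and its solution is z* = 0 (M invertible).
rng = np.random.default_rng(1)
n = 10
M, _ = np.linalg.qr(rng.standard_normal((n, n)))   # well-conditioned coupling matrix

def F(z):
    x, y = z[:n], z[n:]
    return np.concatenate([M @ y, -M.T @ x])

z = rng.standard_normal(2 * n)
print(f"initial residual ||F(z)||: {np.linalg.norm(F(z)):.2e}")
eta = 0.5 / np.linalg.norm(M, 2)   # step size below 1/L, with L = ||M||_2
for _ in range(500):
    z_look = z - eta * F(z)        # extrapolation step at the current point
    z = z - eta * F(z_look)        # update using the look-ahead operator value
print(f"final residual   ||F(z)||: {np.linalg.norm(F(z)):.2e}")
```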
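Referenced from the gradient-free optimization entry above: a minimal sketch of zero-order projected gradient descent using a standard two-point (spherical-smoothing) gradient estimate on a toy quadratic. The paper's improved estimators for higher-order smoothness are not reproduced; the objective, noise level, smoothing radius, and step size are arbitrary choices.

```python
import numpy as np

# Zeroth-order projected gradient descent: the gradient is never queried; it is
# estimated from two noisy function evaluations along a random direction
# (spherical smoothing), and each iterate is projected onto the feasible set
# (here a Euclidean ball of radius R).  Toy quadratic objective, arbitrary sizes.
rng = np.random.default_rng(2)
d, R, sigma = 20, 5.0, 1e-4
x_star = rng.uniform(-1, 1, d)                     # hypothetical minimizer
f = lambda x: 0.5 * np.sum((x - x_star) ** 2)
noisy_f = lambda x: f(x) + sigma * rng.standard_normal()

def project_ball(x, radius):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

x, eta, h = np.zeros(d), 0.05, 1e-2                # iterate, step size, smoothing radius
print(f"initial f(x) - f(x*): {f(x) - f(x_star):.3e}")
for _ in range(5000):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                         # random direction on the unit sphere
    g = d * (noisy_f(x + h * u) - noisy_f(x - h * u)) / (2 * h) * u   # two-point estimate
    x = project_ball(x - eta * g, R)
print(f"final   f(x) - f(x*): {f(x) - f(x_star):.3e}")
```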
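Referenced from the high-probability complexity bounds entry above: a minimal clipped-SGD sketch on a toy strongly convex problem with heavy-tailed gradient noise. It uses a fixed, arbitrary clipping level and step size rather than the paper's proposed stepsize rules.

```python
import numpy as np

# Clipped SGD: the stochastic gradient is rescaled to have norm at most `level`,
# which tames heavy-tailed gradient noise.  Toy objective f(x) = 0.5 ||x - x*||^2
# with Student-t noise (df = 2.5: finite variance, infinite third moment).
rng = np.random.default_rng(3)
d = 20
x_star = rng.uniform(-1, 1, d)

def stochastic_grad(x):
    return (x - x_star) + rng.standard_t(df=2.5, size=d)

def clip(g, level):
    norm = np.linalg.norm(g)
    return g if norm <= level else g * (level / norm)

x, eta, level = np.zeros(d), 0.01, 5.0
print(f"initial ||x - x*||: {np.linalg.norm(x - x_star):.3e}")
for _ in range(20000):
    x = x - eta * clip(stochastic_grad(x), level)
print(f"final   ||x - x*||: {np.linalg.norm(x - x_star):.3e}")
```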