A Validation Approach to Over-parameterized Matrix and Image Recovery
- URL: http://arxiv.org/abs/2209.10675v1
- Date: Wed, 21 Sep 2022 22:01:23 GMT
- Title: A Validation Approach to Over-parameterized Matrix and Image Recovery
- Authors: Lijun Ding, Zhen Qin, Liwei Jiang, Jinxin Zhou, Zhihui Zhu
- Abstract summary: We consider the problem of recovering a low-rank matrix from a number of noisy random linear measurements.
We show that as long as the measurement operators satisfy the restricted isometry property (RIP) with its rank parameter scaling with the rank of the ground-truth matrix, gradient descent iterations are on a particular trajectory towards the ground-truth matrix.
- Score: 26.374729978056838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of recovering a low-rank matrix from a
number of noisy random linear measurements. We consider the setting where the
rank of the ground-truth matrix is unknown a priori and use an overspecified
factored representation of the matrix variable, where the global optimal
solutions overfit and do not correspond to the underlying ground-truth. We then
solve the associated nonconvex problem using gradient descent with small random
initialization. We show that as long as the measurement operators satisfy the
restricted isometry property (RIP) with its rank parameter scaling with the
rank of ground-truth matrix rather than scaling with the overspecified matrix
variable, gradient descent iterations are on a particular trajectory towards
the ground-truth matrix and achieve nearly information-theoretically optimal
recovery when stopped appropriately. We then propose an efficient early stopping
strategy based on the common hold-out method and show that it provably detects a
nearly optimal estimator. Moreover, experiments show that the proposed
validation approach can also be efficiently used for image restoration with
deep image prior which over-parameterizes an image with a deep network.
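To make the recipe above concrete, the following is a minimal NumPy sketch of the pipeline as the abstract describes it: gradient descent from a small random initialization on an overspecified factorization F F^T, with a held-out set of measurements used only to select the stopping iteration. All dimensions, step sizes, and variable names are our own illustrative choices, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true, r_over = 30, 2, 10                 # illustrative sizes, not from the paper
m = 600                                       # number of linear measurements

# Ground-truth PSD low-rank matrix and symmetric Gaussian measurements
U_star = rng.normal(size=(n, r_true)) / np.sqrt(n)
M_star = U_star @ U_star.T
A = rng.normal(size=(m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2            # symmetrize for a clean gradient
y = np.einsum('kij,ij->k', A, M_star) + 0.01 * rng.normal(size=m)

# Hold out a fraction of the measurements for validation only
split = int(0.8 * m)
A_tr, y_tr = A[:split], y[:split]
A_val, y_val = A[split:], y[split:]

# Gradient descent on f(F) = (1/m_tr) * ||A_tr(F F^T) - y_tr||^2, small random init
F = 1e-3 * rng.normal(size=(n, r_over))
eta, best_val, best_F = 0.25, np.inf, F.copy()
for t in range(1500):
    R = np.einsum('kij,ij->k', A_tr, F @ F.T) - y_tr     # training residuals
    G = (4 / split) * np.einsum('k,kij->ij', R, A_tr) @ F
    F -= eta * G
    resid = np.einsum('kij,ij->k', A_val, F @ F.T) - y_val
    val = np.mean(resid ** 2)
    if val < best_val:                                   # hold-out early stopping
        best_val, best_F = val, F.copy()

M_hat = best_F @ best_F.T
print("relative error:", np.linalg.norm(M_hat - M_star) / np.linalg.norm(M_star))
```

The validation loss typically decreases while the iterates approach the ground truth and rises once they begin to fit the noise, which is exactly what the hold-out rule exploits.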
Related papers
- SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid Registration [76.40993825836222]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration.
The proposed method greatly improves the accuracy of non-rigid registration while maintaining relatively high solution efficiency.
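As background for the distance named in the title, here is a hedged NumPy sketch of a symmetrized point-to-plane residual between corresponding points, projecting the displacement onto the normals of both surfaces; the exact symmetrization used in SPARE may differ.

```python
import numpy as np

def symmetrized_point_to_plane(p, n_p, q, n_q):
    """Displacement projected onto both points' normals and summed.

    p, q     : (N, 3) corresponding points from the two surfaces
    n_p, n_q : (N, 3) unit normals at those points
    Returns one residual per correspondence. Illustrative form only;
    the paper's formulation may weight or square the terms differently.
    """
    d = p - q
    return np.abs(np.sum(d * n_p, axis=1)) + np.abs(np.sum(d * n_q, axis=1))
```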
arXiv Detail & Related papers (2024-05-30T15:55:04Z)
- Diff-PCR: Diffusion-Based Correspondence Searching in Doubly Stochastic Matrix Space for Point Cloud Registration [35.82753072083472]
State-of-the-art methods have employed RAFT-like iterative updates to refine the solution.
We propose a novel approach that exploits a denoising diffusion model to search for the optimal matching matrix.
Our method offers flexibility by allowing the search to start from any initial matching matrix provided by the online backbone or white noise.
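The "doubly stochastic matrix space" above is the set of nonnegative matrices whose rows and columns each sum to one. A standard way to map an arbitrary score matrix into that set is Sinkhorn normalization, sketched below as general background rather than as code from the paper.

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Push a (square) score matrix toward the doubly stochastic set
    by alternating row and column normalization."""
    P = np.exp(scores - scores.max())      # positive entries, numerically stabilized
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P
```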
arXiv Detail & Related papers (2023-12-31T09:24:28Z)
- Stochastic Mirror Descent for Large-Scale Sparse Recovery [13.500750042707407]
We discuss an application of stochastic approximation to statistical estimation of high-dimensional sparse parameters.
We show that the proposed algorithm attains the optimal convergence rate of the estimation error under weak assumptions on the regressor distribution.
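For context, a generic stochastic mirror descent step with mirror map \psi, stepsize \gamma_t, and stochastic gradient g_t reads as follows; for sparse recovery a common choice is the \ell_p mirror map with p close to 1, though the paper's exact setup may differ:

```latex
x_{t+1} = \nabla\psi^{*}\!\big(\nabla\psi(x_t) - \gamma_t\, g_t\big),
\qquad \psi(x) = \tfrac{1}{2}\|x\|_p^2, \quad p = 1 + \tfrac{1}{\log d}.
```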
arXiv Detail & Related papers (2022-10-23T23:23:23Z)
- Provable Low Rank Plus Sparse Matrix Separation Via Nonconvex Regularizers [0.0]
This paper considers a large class of problems where we seek to recover a low-rank matrix and/or sparse vector from some set of measurements.
While methods based on convex relaxations suffer from estimator bias or require the rank or sparsity to be known a priori, we use nonconvex regularizers.
We present a novel analysis of the alternating proximal gradient descent algorithm applied to such problems.
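As a rough illustration of alternating proximal gradient descent on a low-rank-plus-sparse model, here is a hedged NumPy sketch for the simplest observation model M = L + S, with hard-thresholding operators standing in for the paper's nonconvex proximal maps (the actual regularizers and measurement operator may differ):

```python
import numpy as np

def svd_hard_threshold(X, r):
    """Proximal-style step for a rank penalty: keep the top-r singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def hard_threshold(X, tau):
    """Proximal operator of an l0-type penalty: zero out small entries."""
    return np.where(np.abs(X) > tau, X, 0.0)

def low_rank_plus_sparse(M, r=2, tau=0.5, eta=0.5, n_iters=200):
    """Alternate gradient steps on (1/2)||M - L - S||_F^2 with prox steps."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iters):
        G = L + S - M                          # gradient of the quadratic fit term
        L = svd_hard_threshold(L - eta * G, r)
        G = L + S - M
        S = hard_threshold(S - eta * G, tau)
    return L, S
```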
arXiv Detail & Related papers (2021-09-26T22:09:42Z)
- Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery [37.05862765171643]
We consider robust factorization of a low-rank matrix with no prior knowledge of the rank.
We show that our particular design of diminishing step sizes effectively prevents overfitting under overspecified models.
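As background, a subgradient method with geometrically diminishing step sizes on a robust \ell_1 factored loss has the following shape; the specific loss and schedule here are our illustrative guesses, not the paper's exact statement:

```latex
f(U) = \big\|\mathcal{A}(UU^\top) - y\big\|_1, \qquad
U_{k+1} = U_k - \mu_k\, g_k, \quad g_k \in \partial f(U_k), \quad
\mu_k = \mu_0 \rho^k,\ \rho \in (0,1).
```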
arXiv Detail & Related papers (2021-09-23T05:54:46Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
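The convex formulation mentioned above is standard; in its commonly used form, with P_\Omega the projection onto the observed entries, it reads:

```latex
\min_{L,\,S}\ \|L\|_* + \lambda \|S\|_1
\quad \text{subject to} \quad P_\Omega(L + S) = P_\Omega(M).
```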
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new nonconvex low-rank regularizer, the "nuclear norm minus Frobenius norm" regularizer, which is scalable, adaptive and sound.
It bypasses the computation of singular values and allows fast optimization by standard algorithms.
It obtains state-of-the-art recovery performance while being the fastest among existing low-rank matrix learning methods.
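Reading the regularizer's name literally, it is presumably the difference-of-norms penalty below, which is zero exactly when X has rank at most one and grows with the spread of the spectrum (our reconstruction from the title, not a quote from the paper):

```latex
R(X) = \|X\|_* - \|X\|_F
     = \sum_i \sigma_i(X) - \Big(\sum_i \sigma_i(X)^2\Big)^{1/2}.
```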
arXiv Detail & Related papers (2020-08-14T18:47:58Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent [34.0533596121548]
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science.
We show that ScaledGD converges linearly at a rate independent of the condition number of the low-rank matrix.
Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices.
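For reference, the ScaledGD update for a factored objective f(L, R) preconditions each factor's gradient by the Gram matrix of the other factor:

```latex
L_{t+1} = L_t - \eta\, \nabla_L f(L_t, R_t)\, (R_t^\top R_t)^{-1}, \qquad
R_{t+1} = R_t - \eta\, \nabla_R f(L_t, R_t)\, (L_t^\top L_t)^{-1}.
```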
arXiv Detail & Related papers (2020-05-18T17:17:16Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
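The identity such methods build on is that the Lasso optimality conditions can be differentiated implicitly on the active set S of the solution \hat\beta(\lambda); with the 1/(2n) least-squares scaling this gives the Jacobian below, and the paper's contribution is computing such derivatives without the explicit matrix inversion:

```latex
\frac{\partial \hat\beta_S}{\partial \lambda}
= -\, n\, (X_S^\top X_S)^{-1} \operatorname{sign}(\hat\beta_S),
\qquad \frac{\partial \hat\beta_{S^c}}{\partial \lambda} = 0.
```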
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
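The canonical pivotal estimator in this family is the square-root Lasso, whose data-fit term is scaled by its own magnitude so that the optimal \lambda does not depend on the noise level:

```latex
\hat\beta \in \arg\min_{\beta}\ \frac{\|y - X\beta\|_2}{\sqrt{n}} + \lambda \|\beta\|_1.
```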
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.