Implicit Bias of Projected Subgradient Method Gives Provable Robust
Recovery of Subspaces of Unknown Codimension
- URL: http://arxiv.org/abs/2201.09079v1
- Date: Sat, 22 Jan 2022 15:36:03 GMT
- Title: Implicit Bias of Projected Subgradient Method Gives Provable Robust
Recovery of Subspaces of Unknown Codimension
- Authors: Paris V. Giampouras, Benjamin D. Haeffele and René Vidal
- Abstract summary: We show that Dual Principal Component Pursuit (DPCP) can provably solve robust subspace recovery problems in the unknown subspace dimension regime.
We propose a very simple algorithm based on running multiple instances of a projected sub-gradient descent method (PSGM).
In particular, we show that 1) all of the problem instances will converge to a vector in the nullspace of the subspace and 2) the ensemble of problem instance solutions will be sufficiently diverse to fully span the nullspace of the subspace.
- Score: 12.354076490479514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust subspace recovery (RSR) is a fundamental problem in robust
representation learning. Here we focus on a recently proposed RSR method termed
Dual Principal Component Pursuit (DPCP) approach, which aims to recover a basis
of the orthogonal complement of the subspace and is amenable to handling
subspaces of high relative dimension. Prior work has shown that DPCP can
provably recover the correct subspace in the presence of outliers, as long as
the true dimension of the subspace is known. We show that DPCP can provably
solve RSR problems in the unknown subspace dimension regime, as long as the
orthogonality constraints adopted in previous DPCP formulations are relaxed
and random initialization is used instead of a spectral one. Namely, we
propose a very simple algorithm based on running multiple instances of a
projected sub-gradient descent method (PSGM), with each problem instance
seeking to find one vector in the null space of the subspace. We theoretically
prove that under mild conditions this approach will succeed with high
probability. In particular, we show that 1) all of the problem instances will
converge to a vector in the nullspace of the subspace and 2) the ensemble of
problem instance solutions will be sufficiently diverse to fully span the
nullspace of the subspace thus also revealing its true unknown codimension. We
provide empirical results that corroborate our theoretical findings and showcase
the remarkable implicit rank regularization behavior of the PSGM algorithm, which
allows us to perform RSR without knowledge of the subspace dimension.
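For concreteness, each PSGM instance in this scheme can be viewed as attempting to solve the single-vector DPCP problem "minimize ||X^T b||_1 subject to ||b||_2 = 1" from a random unit-norm initialization, where the columns of X are the (possibly outlier-corrupted) data points; the codimension is then read off from the span of the recovered vectors. The NumPy sketch below illustrates this idea; the function names, step-size schedule, and rank threshold are illustrative assumptions, not the exact choices analyzed in the paper.

```python
import numpy as np

def dpcp_psgm(X, b0, n_iters=200, mu0=1e-2, beta=0.95):
    """One PSGM instance for DPCP: minimize ||X^T b||_1 over the unit sphere.

    X  : (D, N) matrix whose columns are the (possibly corrupted) data points.
    b0 : (D,) random initialization.
    The geometrically decaying step size is an illustrative choice, not the
    schedule analyzed in the paper.
    """
    b = b0 / np.linalg.norm(b0)
    mu = mu0
    for _ in range(n_iters):
        g = X @ np.sign(X.T @ b)       # subgradient of ||X^T b||_1 at b
        b = b - mu * g                 # subgradient step
        b = b / np.linalg.norm(b)      # project back onto the unit sphere
        mu *= beta
    return b

def robust_subspace_recovery(X, n_instances=20, tol=1e-2, seed=0):
    """Run several PSGM instances from random initializations and estimate
    the codimension from the numerical rank of the stacked solutions."""
    rng = np.random.default_rng(seed)
    D = X.shape[0]
    B = np.stack([dpcp_psgm(X, rng.standard_normal(D))
                  for _ in range(n_instances)], axis=1)   # (D, n_instances)
    s = np.linalg.svd(B, compute_uv=False)
    codim = int(np.sum(s > tol * s[0]))  # estimated codimension of the subspace
    return B, codim
```

Under these assumptions, and with enough inliers, the numerical rank of the stacked solutions B should match the codimension of the inlier subspace, which is the implicit rank regularization behavior described in the abstract.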
Related papers
- Subspace Representation Learning for Sparse Linear Arrays to Localize More Sources than Sensors: A Deep Learning Methodology [19.100476521802243]
We develop a novel methodology that estimates the co-array subspaces from a sample covariance for sparse linear arrays (SLAs).
To learn such representations, we propose loss functions that gauge the separation between the desired and the estimated subspace.
The computation of learning subspaces of different dimensions is accelerated by a new batch sampling strategy.
arXiv Detail & Related papers (2024-08-29T15:14:52Z) - Sparse PCA with Oracle Property [115.72363972222622]
We propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations.
We prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA.
arXiv Detail & Related papers (2023-12-28T02:52:54Z) - K-band: Self-supervised MRI Reconstruction via Stochastic Gradient Descent over K-space Subsets [16.785465381844435]
We introduce a novel mathematical framework, dubbed k-band, that enables training DL models using only partial, limited-resolution k-space data.
In each training iteration, rather than using the fully sampled k-space for computing gradients, we use only a small k-space portion.
Numerical experiments with raw MRI data indicate that k-band outperforms two other methods trained on limited-resolution data.
arXiv Detail & Related papers (2023-08-05T22:07:37Z) - A Geometric Notion of Causal Probing [91.14470073637236]
In a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
We give a set of intrinsic criteria which characterize an ideal linear concept subspace.
We find that LEACE returns a one-dimensional subspace containing roughly half of total concept information.
arXiv Detail & Related papers (2023-07-27T17:57:57Z) - Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse
Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods.
Specifically, we prove that if the tangent space at a denoised sample obtained via Tweedie's formula forms a Krylov subspace, then CG initialized with the denoised data ensures that the data consistency update remains in the tangent space.
Our proposed method achieves more than 80 times faster inference time than the previous state-of-the-art method.
arXiv Detail & Related papers (2023-03-10T07:42:49Z) - Principal Component Analysis in Space Forms [7.822210329345704]
We study Principal Component Analysis (PCA) in space forms.
Finding the optimal low-dimensional affine subspace for given points in a space form amounts to dimensionality reduction.
We propose proper cost functions that enjoy two properties: (1) their optimal affine subspace is the solution to an eigenequation, and (2) optimal affine subspaces of different dimensions form a nested set.
arXiv Detail & Related papers (2023-01-06T23:48:37Z) - Regressive Domain Adaptation for Unsupervised Keypoint Detection [67.2950306888855]
Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings large improvements of 8% to 11% in terms of PCK on different datasets.
arXiv Detail & Related papers (2021-03-10T16:45:22Z) - Multi-point dimensionality reduction to improve projection layout
reliability [77.34726150561087]
In ordinary Dimensionality Reduction (DR), each data instance in an m-dimensional space (original space) is mapped to one point in a d-dimensional space (visual space).
Our solution, named Red Gray Plus, is built upon and extends a combination of ordinary DR and graph drawing techniques.
arXiv Detail & Related papers (2021-01-15T17:17:02Z) - A Critique of Self-Expressive Deep Subspace Clustering [23.971512395191308]
Subspace clustering is an unsupervised clustering technique designed to cluster data that is supported on a union of linear subspaces.
We show that there are a number of potential flaws with this approach which have not been adequately addressed in prior work.
arXiv Detail & Related papers (2020-10-08T00:14:59Z) - Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral
Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis.
The technique is called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.