Long Random Matrices and Tensor Unfolding
- URL: http://arxiv.org/abs/2110.10210v1
- Date: Tue, 19 Oct 2021 19:13:27 GMT
- Title: Long Random Matrices and Tensor Unfolding
- Authors: Gérard Ben Arous, Daniel Zhengyu Huang, Jiaoyang Huang
- Abstract summary: We prove there exists a critical signal-to-noise ratio depending on the dimensions of the matrix.
As a main application, we investigate the tensor unfolding algorithm for the asymmetric rank-one spiked tensor model.
- Score: 4.83420384410068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we consider the singular values and singular vectors of low-rank perturbations of large rectangular random matrices, in the regime where the matrix is "long": we allow the number of rows (columns) to grow polynomially in the number of columns (rows). We prove there exists a critical signal-to-noise ratio (depending on the dimensions of the matrix) at which the extreme singular values and singular vectors exhibit a BBP-type phase transition. As a main application, we investigate the tensor unfolding algorithm for the asymmetric rank-one spiked tensor model and obtain an exact threshold, which is independent of the procedure of tensor unfolding. If the signal-to-noise ratio is above the threshold, tensor unfolding detects the signals; otherwise, it fails to capture the signals.
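The detection procedure described in the abstract can be sketched in a few lines of numpy: matricize (unfold) the tensor along one mode, take the top singular value of the unfolding, and compare it against a threshold. The signal strength, dimensions, and the noise-edge heuristic below are illustrative assumptions, not the paper's exact critical signal-to-noise ratio.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (rows indexed by that mode)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def detect_spike(T, mode=0, threshold=None):
    """Detect a rank-one spike via the top singular value of an unfolding.

    The default threshold is only a heuristic: for an n x m unfolding of
    pure noise with i.i.d. N(0, 1/m) entries, the largest singular value
    concentrates near 1 + sqrt(n/m); the paper derives the exact
    dimension-dependent critical value.
    """
    M = unfold(T, mode)
    n, m = M.shape
    top_sv = np.linalg.svd(M, compute_uv=False)[0]
    if threshold is None:
        threshold = 1.0 + np.sqrt(n / m)
    return top_sv, bool(top_sv > threshold)

# Rank-one spiked tensor: snr * (u ⊗ v ⊗ w) + noise.
rng = np.random.default_rng(0)
n1, n2, n3 = 30, 30, 30
u, v, w = (rng.standard_normal(d) for d in (n1, n2, n3))
u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
snr = 15.0
noise = rng.standard_normal((n1, n2, n3)) / np.sqrt(n2 * n3)
T = snr * np.einsum('i,j,k->ijk', u, v, w) + noise
top_sv, detected = detect_spike(T)
```

With a signal well above the noise edge, as here, the top singular value of the unfolding stands clearly apart from the bulk and the spike is detected.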
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- A Random Matrix Approach to Low-Multilinear-Rank Tensor Approximation [24.558241146742205]
We characterize the large-dimensional spectral behavior of the unfoldings of the data tensor and exhibit relevant signal-to-noise ratios governing the detectability of the principal directions of the signal.
These results allow accurate prediction of the reconstruction performance of the truncated multilinear SVD (MLSVD) in the non-trivial regime.
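The truncated MLSVD analyzed in that paper is straightforward to sketch: compute a truncated SVD of each mode-unfolding, then contract the tensor with the transposed factors to obtain the core. The shapes, ranks, and noise level below are illustrative.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_mlsvd(T, ranks):
    """Truncated multilinear SVD: one truncated SVD per mode-unfolding."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: contract T with the transposed factor in every mode.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# Low-multilinear-rank signal plus small noise.
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 2, 2))
Us = [np.linalg.qr(rng.standard_normal((20, 2)))[0] for _ in range(3)]
signal = reconstruct(G, Us)
T = signal + 0.001 * rng.standard_normal((20, 20, 20))
core, factors = truncated_mlsvd(T, (2, 2, 2))
rel_err = np.linalg.norm(reconstruct(core, factors) - signal) / np.linalg.norm(signal)
```

When the noise is weak relative to the signal's singular values, the estimated mode subspaces are accurate and the relative reconstruction error is small.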
arXiv Detail & Related papers (2024-02-05T16:38:30Z)
- Householder Projector for Unsupervised Latent Semantics Discovery [58.92485745195358]
Householder Projector helps StyleGANs to discover more disentangled and precise semantic attributes without sacrificing image fidelity.
We integrate our projector into pre-trained StyleGAN2/StyleGAN3 and evaluate the models on several benchmarks.
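The core device, an orthogonal matrix parameterized as a product of Householder reflections, is easy to sketch on its own; the StyleGAN integration is beyond the scope of this snippet, and the function name and shapes here are illustrative, not the paper's API.

```python
import numpy as np

def householder_orthogonal(vs):
    """Build an orthogonal matrix as a product of Householder reflections.

    Each row of `vs` is an (unnormalized) reflection vector. The product
    H_1 H_2 ... H_k of reflections I - 2 v v^T is exactly orthogonal by
    construction, which is the property such projectors exploit.
    """
    n = vs.shape[1]
    Q = np.eye(n)
    for v in vs:
        v = v / np.linalg.norm(v)
        Q = Q @ (np.eye(n) - 2.0 * np.outer(v, v))
    return Q

rng = np.random.default_rng(2)
Q = householder_orthogonal(rng.standard_normal((4, 8)))
orth_err = np.linalg.norm(Q.T @ Q - np.eye(8))
```

Because orthogonality holds for any reflection vectors, the parameterization can be optimized freely without an explicit orthogonality constraint.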
arXiv Detail & Related papers (2023-07-16T11:43:04Z)
- Detection problems in the spiked matrix models [15.125686694430573]
We first show that principal component analysis can be improved by entrywise pre-transforming the data matrix if the noise is non-Gaussian.
We also introduce an algorithm that estimates the rank of the signal when it is not known a priori.
arXiv Detail & Related papers (2023-01-12T23:46:41Z)
- When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z)
- Fast Low-Rank Tensor Decomposition by Ridge Leverage Score Sampling [5.740578698172382]
We study Tucker decompositions and use tools from randomized numerical linear algebra called ridge leverage scores.
We show how to use approximate ridge leverage scores to construct a sketched instance for any ridge regression problem.
We demonstrate the effectiveness of our approximate ridge regression algorithm for large, low-rank Tucker decompositions on both synthetic and real-world data.
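The building block here, ridge leverage scores, has a simple closed form: the score of row a_i of A at parameter λ is a_i^T (A^T A + λI)^{-1} a_i. A minimal sketch of score computation and leverage-based row sampling for a ridge regression problem follows; the sampling scheme and problem sizes are illustrative, not the paper's full algorithm.

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """Ridge leverage score of row i: a_i^T (A^T A + lam I)^{-1} a_i."""
    G = A.T @ A + lam * np.eye(A.shape[1])
    # Solve instead of explicitly inverting G for numerical stability.
    return np.einsum('ij,ij->i', A, np.linalg.solve(G, A.T).T)

def sample_sketch(A, b, lam, s, rng):
    """Sample s rows with probability proportional to ridge leverage scores,
    rescaled so the sketched Gram matrix is unbiased for A^T A."""
    p = ridge_leverage_scores(A, lam)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    scale = 1.0 / np.sqrt(s * p[idx])
    return A[idx] * scale[:, None], b[idx] * scale

rng = np.random.default_rng(4)
n, d, lam = 2000, 10, 1.0
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
SA, Sb = sample_sketch(A, b, lam, s=400, rng=rng)
x_full = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
```

Sampling by ridge leverage scores concentrates the budget on influential rows, so a few hundred sampled rows already approximate the full ridge solution well.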
arXiv Detail & Related papers (2021-07-22T13:32:47Z)
- Detection of Signal in the Spiked Rectangular Models [8.185918509343818]
We show that principal component analysis can be improved by pre-transforming the matrix entries if the noise is non-Gaussian.
We also propose a hypothesis test to detect the presence of signal with low computational complexity.
arXiv Detail & Related papers (2021-04-28T01:15:45Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- All-or-nothing statistical and computational phase transitions in sparse spiked matrix estimation [35.035853993422506]
We analyze the approximate message passing algorithm in a sparse regime.
We find all-or-nothing phase transitions for the minimum and mean-square errors.
In the sparse regime, the statistical-to-algorithmic gap diverges, indicating that sparse recovery is hard for approximate message passing.
arXiv Detail & Related papers (2020-06-14T18:38:34Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
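The iterative-sketching scheme for least squares can be sketched concretely: build a subsampled randomized Hadamard transform (SRHT) sketch of A and take a sketched-Newton step each iteration, refreshing the sketch. This follows the generic iterative Hessian sketch template under illustrative sizes; it is not that paper's exact algorithm or step-size analysis.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, s, rng):
    """Subsampled randomized Hadamard transform of A's rows.

    Assumes the number of rows is a power of two (pad otherwise):
    random signs, Hadamard mixing, then uniform row subsampling.
    """
    n = A.shape[0]
    signs = rng.choice([-1.0, 1.0], size=n)
    HA = hadamard(n) @ (signs[:, None] * A) / np.sqrt(n)
    idx = rng.choice(n, size=s, replace=False)
    return np.sqrt(n / s) * HA[idx]

def iterative_sketch(A, b, s, iters, rng):
    """Iterative Hessian sketching: each step solves a least-squares
    subproblem whose Hessian A^T A is replaced by the sketched (SA)^T SA."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        SA = srht(A, s, rng)          # fresh sketch every iteration
        g = A.T @ (b - A @ x)         # exact gradient of 0.5*||Ax - b||^2
        x = x + np.linalg.solve(SA.T @ SA, g)
    return x

rng = np.random.default_rng(5)
n, d = 1024, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
x = iterative_sketch(A, b, s=256, iters=8, rng=rng)
err = np.linalg.norm(x - x_star) / np.linalg.norm(x_star)
```

Because the gradient is exact and only the Hessian is sketched, the iterates contract geometrically toward the full least-squares solution.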
This list is automatically generated from the titles and abstracts of the papers on this site.