Statistical and computational rates in high rank tensor estimation
- URL: http://arxiv.org/abs/2304.04043v1
- Date: Sat, 8 Apr 2023 15:34:26 GMT
- Title: Statistical and computational rates in high rank tensor estimation
- Authors: Chanwoo Lee and Miaoyan Wang
- Abstract summary: Higher-order tensor datasets arise commonly in recommendation systems, neuroimaging, and social networks.
We consider a generative latent variable tensor model that incorporates both high rank and low rank models.
We show that the statistical-computational gap emerges only for latent variable tensors of order 3 or higher.
- Score: 11.193504036335503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Higher-order tensor datasets arise commonly in recommendation systems,
neuroimaging, and social networks. Here we develop provable methods for
estimating a possibly high rank signal tensor from noisy observations. We
consider a generative latent variable tensor model that incorporates both high
rank and low rank models, including but not limited to, simple hypergraphon
models, single index models, low-rank CP models, and low-rank Tucker models.
Comprehensive results are developed on both the statistical and computational
limits for the signal tensor estimation. We find that high-dimensional latent
variable tensors are of log-rank; this fact explains the pervasiveness of
low-rank tensors in applications. Furthermore, we propose a polynomial-time
spectral algorithm that achieves the computationally optimal rate. We show that
the statistical-computational gap emerges only for latent variable tensors of
order 3 or higher. Numerical experiments and two real data applications are
presented to demonstrate the practical merits of our methods.
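The abstract credits the computationally optimal rate to a polynomial-time spectral algorithm. As a hedged illustration only (not the authors' exact procedure), the sketch below shows the basic spectral step such methods build on: truncating the higher-order SVD (HOSVD) of a noisy tensor to a small multilinear rank, which keeps the signal subspaces and suppresses noise outside them. The function name and rank choice are illustrative assumptions.

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Denoise a 3-way array by truncated HOSVD: estimate each mode's top-r
    left singular subspace from the unfolding, then project the tensor onto
    those subspaces and reconstruct."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-`mode` unfolding: rows index that mode, columns the rest.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    X = T
    for mode, U in enumerate(factors):
        # Project-and-reconstruct along `mode`: X <- X x_mode (U U^T)
        X = np.moveaxis(np.tensordot(U @ U.T, np.moveaxis(X, mode, 0), axes=1),
                        0, mode)
    return X
```

On a low-rank signal corrupted by dense noise, the truncated reconstruction is closer to the signal than the raw observation, since only the noise energy inside the small projected subspaces survives.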
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Provable Tensor Completion with Graph Information [49.08648842312456]
We introduce a novel model, theory, and algorithm for solving the dynamic graph regularized tensor completion problem.
We develop a comprehensive model simultaneously capturing the low-rank and similarity structure of the tensor.
In terms of theory, we showcase the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm.
arXiv Detail & Related papers (2023-10-04T02:55:10Z) - Low-Rank Tensor Function Representation for Multi-Dimensional Data
Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method as compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Inference for Low-rank Tensors -- No Need to Debias [22.163281794187544]
In this paper, we consider the statistical inference for several low-rank tensor models.
For the rank-one PCA model, we establish the theory for inference on each individual singular tensor.
Finally, simulations are presented to corroborate our theoretical discoveries.
arXiv Detail & Related papers (2020-12-29T16:48:02Z) - Enhanced nonconvex low-rank approximation of tensor multi-modes for
tensor completion [1.3406858660972554]
We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization method-based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
arXiv Detail & Related papers (2020-05-28T08:53:54Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our model achieves state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z) - Tensor denoising and completion based on ordinal observations [11.193504036335503]
We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations.
We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees.
We show that the proposed estimator is minimax optimal under the class of low-rank models.
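The blurb above describes a cumulative link model for ordinal-valued tensor entries. As a minimal sketch (assuming a logistic link and illustrative cut points, not necessarily the paper's exact specification), each observed entry Y is drawn with P(Y <= l) = sigmoid(b_l - theta), where theta is the corresponding entry of the latent signal tensor:

```python
import numpy as np

def sample_ordinal(theta, cuts, rng):
    """Draw ordinal labels in {0, ..., len(cuts)} entrywise from a logistic
    cumulative link: P(Y <= l | theta) = sigmoid(cuts[l] - theta)."""
    cuts = np.sort(np.asarray(cuts, dtype=float))
    # Cumulative probabilities for each level, shape (L,) + theta.shape
    cum = 1.0 / (1.0 + np.exp(-(cuts.reshape((-1,) + (1,) * theta.ndim) - theta)))
    u = rng.uniform(size=theta.shape)
    # Y = number of cut points whose cumulative probability lies below u,
    # i.e. Y = k exactly when cum[k-1] < u <= cum[k].
    return (u[None] > cum).sum(axis=0)
```

A rank-constrained M-estimator, as in the paper, would then maximize the resulting ordinal likelihood over low-rank theta; the sampler above only illustrates the forward model.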
arXiv Detail & Related papers (2020-02-16T07:09:56Z) - Sparse and Low-Rank High-Order Tensor Regression via Parallel Proximal
Method [6.381138694845438]
We propose the Sparse and Low-rank Regression model for large-scale data with high-order structures.
Our model enforces sparsity and low-rankness of the tensor coefficient.
Our model's predictions exhibit meaningful interpretations on the video dataset.
arXiv Detail & Related papers (2019-11-29T06:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.