On $O( \max \{n_1, n_2 \}\log ( \max \{ n_1, n_2 \} n_3) )$ Sample
Entries for $n_1 \times n_2 \times n_3$ Tensor Completion via Unitary
Transformation
- URL: http://arxiv.org/abs/2012.08784v1
- Date: Wed, 16 Dec 2020 08:03:48 GMT
- Title: On $O( \max \{n_1, n_2 \}\log ( \max \{ n_1, n_2 \} n_3) )$ Sample
Entries for $n_1 \times n_2 \times n_3$ Tensor Completion via Unitary
Transformation
- Authors: Guang-Jing Song, Michael K. Ng and Xiongjun Zhang
- Abstract summary: This paper investigates the incoherence conditions of $n_3$ low-rank $n_1$-by-$n_2$ matrix slices.
We show that such low-rank tensors can be recovered exactly with high probability.
- Score: 20.854908850239035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the key problems in tensor completion is the number of uniformly
random sample entries required for recovery guarantee. The main aim of this
paper is to study $n_1 \times n_2 \times n_3$ third-order tensor completion and
investigate the incoherence conditions of $n_3$ low-rank $n_1$-by-$n_2$ matrix
slices under the transformed tensor singular value decomposition where the
unitary transformation is applied along $n_3$-dimension. We show that such
low-rank tensors can be recovered exactly with high probability when the number
of randomly observed entries is of order $O( r\max \{n_1, n_2 \} \log ( \max \{
n_1, n_2 \} n_3))$, where $r$ is the sum of the ranks of these $n_3$ matrix
slices in the transformed tensor. By utilizing synthetic data and imaging data
sets, we demonstrate that the theoretical result can be obtained under valid
incoherence conditions, and the tensor completion performance of the proposed
method is also better than that of existing methods in terms of the required
sample size.
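The transformed tensor SVD idea in the abstract can be sketched numerically: apply a unitary transform along the third ($n_3$) dimension and sum the ranks of the resulting frontal slices to get the quantity $r$ in the sample bound. The sketch below is illustrative only, assuming the normalized DFT as the unitary transform (one common choice; the paper allows a general unitary transformation), and `transformed_tensor_rank` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def transformed_tensor_rank(T):
    """Sum of the ranks of the n3 frontal slices of T after a unitary
    transform (here the normalized DFT) along the third dimension."""
    That = np.fft.fft(T, axis=2, norm="ortho")   # unitary along n3
    tol = 1e-8 * np.linalg.norm(That)            # global rank tolerance
    return sum(np.linalg.matrix_rank(That[:, :, k], tol=tol)
               for k in range(T.shape[2]))

# Example: a tensor whose slices are all the same rank-1 matrix is
# rank 1 after the transform (only the zero-frequency slice survives).
a = np.random.randn(5, 1)
b = np.random.randn(1, 6)
T = np.repeat((a @ b)[:, :, None], 4, axis=2)    # constant along n3
r = transformed_tensor_rank(T)                   # r == 1
```

For such a tensor the sample bound $O(r \max\{n_1, n_2\} \log(\max\{n_1, n_2\}\, n_3))$ grows only logarithmically in $n_3$, which is the point of exploiting low rank in the transformed domain.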
Related papers
- Overcomplete Tensor Decomposition via Koszul-Young Flattenings [63.01248796170617]
We give a new algorithm for decomposing an $n_1 \times n_2 \times n_3$ tensor as the sum of a minimal number of rank-1 terms.
We show that an even more general class of degree-$d$ flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$.
arXiv Detail & Related papers (2024-11-21T17:41:09Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Identification of Mixtures of Discrete Product Distributions in
Near-Optimal Sample and Time Complexity [6.812247730094931]
We show, for any $n \geq 2k-1$, how to achieve sample complexity and run-time complexity $(1/\zeta)^{O(k)}$.
We also extend the known lower bound of $e^{\Omega(k)}$ to match our upper bound across a broad range of $\zeta$.
arXiv Detail & Related papers (2023-09-25T09:50:15Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Optimal Query Complexities for Dynamic Trace Estimation [59.032228008383484]
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
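The matrix-vector query model referenced above can be illustrated with Hutchinson's estimator, the classical baseline these lower bounds address: $\mathrm{tr}(A)$ is estimated from products $Ag$ with random sign vectors $g$, since $\mathbb{E}[g^\top A g] = \mathrm{tr}(A)$. This is a minimal static-setting sketch, not the paper's dynamic binary tree procedure; the function name and parameters are illustrative.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_queries, seed=None):
    """Estimate tr(A) using only matrix-vector products A @ g,
    averaging g^T A g over Rademacher (+/-1) probe vectors g."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_queries):
        g = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
        total += g @ matvec(g)
    return total / num_queries

# Usage: estimate the trace of an explicit symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2
est = hutchinson_trace(lambda v: A @ v, 50, num_queries=500, seed=1)
true = np.trace(A)
```

Each query touches $A$ only through `matvec`, which is what makes query-complexity lower bounds in this model meaningful: the algorithm never sees the entries of $A$ directly.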
arXiv Detail & Related papers (2022-09-30T04:15:44Z) - High-Dimensional Gaussian Process Inference with Derivatives [90.8033626920884]
We show that in the low-data regime $N < D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$.
We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
arXiv Detail & Related papers (2021-02-15T13:24:41Z) - Exact nuclear norm, completion and decomposition for random overcomplete
tensors via degree-4 SOS [0.7233897166339269]
We show that simple semidefinite programs inspired by degree $4$ SOS can exactly solve the tensor nuclear norm, tensor decomposition, and tensor completion problems on tensors with random asymmetric components.
We show that w.h.p. these semidefinite programs can exactly find the nuclear norm and components of an $(n \times n \times n)$-tensor $\mathcal{T}$ with $m \leq n^{3/2}/\mathrm{polylog}(n)$ random asymmetric components.
arXiv Detail & Related papers (2020-11-18T17:27:36Z) - Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z) - A Sub-sampled Tensor Method for Non-convex Optimization [0.0]
We present a method that uses a fourth-order regularized model to find local minima of smooth and potentially non-convex objective functions with a finite-sum structure.
arXiv Detail & Related papers (2019-11-23T13:46:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.