Smooth tensor estimation with unknown permutations
- URL: http://arxiv.org/abs/2111.04681v1
- Date: Mon, 8 Nov 2021 17:53:48 GMT
- Title: Smooth tensor estimation with unknown permutations
- Authors: Chanwoo Lee and Miaoyan Wang
- Abstract summary: We develop a family of smooth tensor models up to arbitrary index permutations.
We show that a constrained least-squares estimator in the block-wise family achieves the minimax error bound.
We also provide an efficient polynomial-time Borda count algorithm that provably achieves the optimal rate.
- Score: 14.391648046717073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of structured tensor denoising in the presence of
unknown permutations. Such data problems arise commonly in recommendation
systems, neuroimaging, community detection, and multiway comparison
applications. Here, we develop a general family of smooth tensor models up to
arbitrary index permutations; the model incorporates the popular tensor block
models and Lipschitz hypergraphon models as special cases. We show that a
constrained least-squares estimator in the block-wise polynomial family
achieves the minimax error bound. A phase transition phenomenon is revealed
with respect to the smoothness threshold needed for optimal recovery. In
particular, we find that a polynomial of degree up to $(m-2)(m+1)/2$ is
sufficient for accurate recovery of order-$m$ tensors, whereas higher degree
exhibits no further benefits. This phenomenon reveals the intrinsic distinction
between smooth tensor estimation problems with and without unknown permutations.
Furthermore, we provide an efficient polynomial-time Borda count algorithm that
provably achieves optimal rate under monotonicity assumptions. The efficacy of
our procedure is demonstrated through both simulations and Chicago crime data
analysis.
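To make the two-step procedure concrete, the following is a minimal NumPy sketch of a Borda-count-style estimator under the setting suggested by the abstract: a smooth signal tensor observed with noise, with the indices of each mode permuted by an unknown permutation. Indices are scored by their marginal means (the Borda count) and sorted, and the sorted tensor is then smoothed by block-wise averaging, a degree-0 stand-in for the block-wise polynomial fit analyzed in the paper. The cubic-tensor assumption, the function name `borda_count_estimate`, and the plain block averaging are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

def borda_count_estimate(Y, num_blocks):
    """Two-step sketch: (1) sort indices by marginal means, (2) block-average.

    Assumes a cubic order-m tensor Y (same dimension d along every mode).
    """
    d, m = Y.shape[0], Y.ndim
    perms, Y_sorted = [], Y.astype(float)
    for mode in range(m):
        other_axes = tuple(a for a in range(m) if a != mode)
        scores = Y.mean(axis=other_axes)      # Borda count score of each index
        perm = np.argsort(scores)             # estimated ordering along this mode
        perms.append(perm)
        Y_sorted = np.take(Y_sorted, perm, axis=mode)

    # Block-wise (piecewise-constant) averaging on the sorted tensor.
    blocks = np.array_split(np.arange(d), num_blocks)
    Theta_sorted = np.empty_like(Y_sorted)
    for b in np.ndindex(*([num_blocks] * m)):
        idx = np.ix_(*[blocks[k] for k in b])
        Theta_sorted[idx] = Y_sorted[idx].mean()

    # Map the estimate back to the original (unsorted) index order.
    Theta = Theta_sorted
    for mode, perm in enumerate(perms):
        Theta = np.take(Theta, np.argsort(perm), axis=mode)
    return Theta, perms
```

For a noisy order-3 tensor `Y` of size 60 x 60 x 60, `borda_count_estimate(Y, num_blocks=6)` returns the denoised tensor in the original index order together with the estimated orderings; the number of blocks plays the role of the smoothness tuning parameter.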
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - TERM Model: Tensor Ring Mixture Model for Density Estimation [48.622060998018206]
In this paper, we adopt tensor ring decomposition for the density estimator, which significantly reduces the number of permutation candidates.
A mixture model that incorporates multiple permutation candidates with adaptive weights is further designed, resulting in increased expressive flexibility.
This approach acknowledges that suboptimal permutations can offer distinctive information besides that of optimal permutations.
arXiv Detail & Related papers (2023-12-13T11:39:56Z) - Nonnegative Low-Rank Tensor Completion via Dual Formulation with
Applications to Image and Video Completion [2.1227526213206542]
Recent approaches to the tensor completion problem have often overlooked the nonnegative structure of the data.
We consider the problem of learning a nonnegative low-rank tensor, and using duality theory, we propose a novel factorization of such tensors.
We test the proposed algorithm across various tasks such as colour image inpainting, video completion, and hyperspectral image completion.
arXiv Detail & Related papers (2023-05-13T17:51:00Z) - Estimating Higher-Order Mixed Memberships via the $\ell_{2,\infty}$
Tensor Perturbation Bound [8.521132000449766]
We propose the tensor mixed-membership blockmodel, a generalization of the tensor blockmodel.
We establish the identifiability of our model and propose a computationally efficient estimation procedure.
We apply our methodology to real and simulated data, demonstrating some effects not identifiable from the model with discrete community memberships.
arXiv Detail & Related papers (2022-12-16T18:32:20Z) - Minimizing low-rank models of high-order tensors: Hardness, span, tight
relaxation, and applications [26.602371644194143]
We show that this fundamental tensor problem is NP-hard for any tensor rank higher than one.
For low-enough ranks, the proposed continuous reformulation is amenable to low-complexity gradient-based optimization.
We demonstrate promising results on a number of hard problems, including decoding of low-density parity check codes and general parity check codes.
arXiv Detail & Related papers (2022-10-16T11:53:42Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Clipped Stochastic Methods for Variational Inequalities with
Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for stochastic methods for solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z) - Spectral clustering under degree heterogeneity: a case for the random
walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
arXiv Detail & Related papers (2021-05-03T16:36:27Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Enhanced nonconvex low-rank approximation of tensor multi-modes for
tensor completion [1.3406858660972554]
We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
arXiv Detail & Related papers (2020-05-28T08:53:54Z)