Exact Tensor Completion Powered by Arbitrary Linear Transforms
- URL: http://arxiv.org/abs/2402.03468v1
- Date: Fri, 2 Feb 2024 13:26:38 GMT
- Title: Exact Tensor Completion Powered by Arbitrary Linear Transforms
- Authors: Li Ge, Xue Jiang, Lin Chen
- Abstract summary: A new theoretical guarantee of exact tensor completion with arbitrary linear transforms is established.
An efficient algorithm based on the alternating direction method of multipliers (ADMM) is designed to solve the transformed tensor completion program.
Our model and proof greatly enhance the flexibility of tensor completion, and extensive experiments validate the superiority of the proposed method.
- Score: 10.698749077862747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, a tensor completion problem is studied, which aims to perfectly recover the tensor from partial observations. Existing theoretical guarantees require the involved transform to be orthogonal, which hinders their application. In this paper, stepping outside the constraints of isotropy and self-adjointness, we establish a theoretical guarantee of exact tensor completion with arbitrary linear transforms. To that end, we define a new tensor-tensor product, which leads us to a new definition of the tensor nuclear norm. Equipped with these tools, we design an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the transformed tensor completion program, and we obtain the theoretical bound. Our model and proof greatly enhance the flexibility of tensor completion, and extensive experiments validate the superiority of the proposed method.
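The transformed tensor nuclear norm at the heart of such models can be sketched numerically. The snippet below assumes the common convention of applying the transform along the third mode and summing the nuclear norms of the frontal slices in the transform domain; the function name, the weighting, and the choice of transform are illustrative, not the paper's exact formulation.

```python
import numpy as np

def transformed_tnn(X, L):
    """Nuclear norm of a 3-way tensor under a linear transform L applied
    along the third mode (a sketch; the paper's exact definition may
    normalize or weight the slices differently)."""
    # Move mode-3 fibers into the transform domain:
    # X_hat[:, :, k] = sum_j L[k, j] * X[:, :, j]
    X_hat = np.einsum('kj,abj->abk', L, X)
    # Sum of matrix nuclear norms over the frontal slices
    return sum(np.linalg.norm(X_hat[:, :, k], 'nuc') for k in range(X.shape[2]))

# Example with an arbitrary (non-orthogonal) invertible transform
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 3))
L = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # well-conditioned, not orthogonal
print(transformed_tnn(X, L))
```

With `L` the identity, this reduces to the plain sum of slice-wise nuclear norms; the point of the paper is that `L` need not be orthogonal for exact recovery guarantees to hold.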
Related papers
- Born a Transformer -- Always a Transformer? [57.37263095476691]
We study a family of $\textit{retrieval}$ and $\textit{copying}$ tasks inspired by Liu et al. We observe an $\textit{induction-versus-anti-induction}$ asymmetry, where pretrained models are better at retrieving tokens to the right (induction) than to the left (anti-induction) of a query token. Mechanistic analysis reveals that this asymmetry is connected to differences in the strength of induction versus anti-induction circuits within pretrained transformers.
arXiv Detail & Related papers (2025-05-27T21:36:50Z) - Beyond Progress Measures: Theoretical Insights into the Mechanism of Grokking [50.465604300990904]
Grokking refers to the abrupt improvement in test accuracy after extended overfitting.
In this work, we investigate the grokking mechanism underlying the Transformer in the task of prime number operations.
arXiv Detail & Related papers (2025-04-04T04:42:38Z) - A tensor network formulation of Lattice Gauge Theories based only on symmetric tensors [0.0]
We provide a new PEPS tensor network formulation of gauge-invariant theories based on symmetric elementary tensors.
The new formulation can be implemented in numerical simulation using available state-of-the-art tensor network libraries.
We show that such a new formulation provides a novel duality transformation between lattice gauge theories and specific sectors of globally invariant systems.
arXiv Detail & Related papers (2024-12-22T10:42:05Z) - Low-Rank Tensor Learning by Generalized Nonconvex Regularization [25.115066273660478]
We study the problem of low-rank tensor learning, where only a few samples of the underlying tensor are observed.
A family of nonconvex regularization functions is employed to characterize the low-rankness of the underlying tensor.
A majorization-minimization algorithm is proposed to solve the resulting nonconvex optimization problem.
arXiv Detail & Related papers (2024-10-24T03:33:20Z) - A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, tensor cumulants, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Handling The Non-Smooth Challenge in Tensor SVD: A Multi-Objective Tensor Recovery Framework [15.16222081389267]
We introduce a novel tensor recovery model with a learnable tensor nuclear norm to address the challenge of non-smooth changes in tensor data.
We develop a new optimization algorithm named the Alternating Proximal Multiplier Method (APMM) to iteratively solve the proposed tensor completion model.
In addition, we propose a multi-objective tensor recovery framework based on APMM to efficiently explore the correlations of tensor data across its various dimensions.
arXiv Detail & Related papers (2023-11-23T12:16:33Z) - Provable Tensor Completion with Graph Information [49.08648842312456]
We introduce a novel model, theory, and algorithm for solving the dynamic graph regularized tensor completion problem.
We develop a comprehensive model simultaneously capturing the low-rank and similarity structure of the tensor.
In terms of theory, we showcase the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm.
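Graph smoothness regularizers of the kind mentioned above typically take a quadratic form $\mathrm{tr}(X_{(1)}^\top L_g X_{(1)})$ on an unfolding of the tensor, where $L_g$ is a graph Laplacian. A minimal sketch, with illustrative names and not the paper's exact model:

```python
import numpy as np

def graph_smoothness(X, L_g):
    """Quadratic graph smoothness penalty tr(X1^T L_g X1), where X1 is
    the mode-1 unfolding of tensor X and L_g a graph Laplacian on the
    mode-1 indices (a sketch of the usual regularizer)."""
    X1 = X.reshape(X.shape[0], -1)  # mode-1 unfolding
    return float(np.trace(X1.T @ L_g @ X1))

# Path graph on 3 nodes: the Laplacian penalizes differences between neighbors
L_g = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
X = np.ones((3, 2, 2))  # constant along the graph, so the penalty vanishes
print(graph_smoothness(X, L_g))  # 0.0
```

The penalty is zero exactly when every mode-1 fiber is constant across connected graph nodes, which is the "similarity" structure such models exploit alongside low-rankness.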
arXiv Detail & Related papers (2023-10-04T02:55:10Z) - Optimizing Orthogonalized Tensor Deflation via Random Tensor Theory [5.124256074746721]
This paper tackles the problem of recovering a low-rank signal tensor with possibly correlated components from a random noisy tensor.
Non-orthogonal components may alter the tensor deflation mechanism, thereby preventing efficient recovery.
An efficient tensor deflation algorithm is proposed by optimizing the parameter introduced in the deflation mechanism.
arXiv Detail & Related papers (2023-02-11T22:23:27Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
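For context, the tensor-train (TT) format that cross approximation targets represents a $d$-way tensor by a chain of 3-way cores. The sketch below only shows how given cores define the full tensor; the cross approximation itself adaptively selects fibers to build such cores, which is not reproduced here.

```python
import numpy as np

def tt_reconstruct(cores):
    """Contract tensor-train cores G_k of shape (r_{k-1}, n_k, r_k),
    with boundary ranks r_0 = r_d = 1, into the full tensor."""
    out = cores[0]
    for G in cores[1:]:
        # Contract the trailing rank index with the next core's leading one
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))  # drop the boundary rank-1 axes

# A rank-1 train representing the outer product [1, 2] x [3, 4]
G1 = np.array([1., 2.]).reshape(1, 2, 1)
G2 = np.array([3., 4.]).reshape(1, 2, 1)
print(tt_reconstruct([G1, G2]))
```

Error bounds "in terms of the entire tensor" concern how far this reconstruction can drift from the true tensor when the cores are built from a few sampled fibers, possibly noisy.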
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Noisy Tensor Completion via Low-rank Tensor Ring [41.86521269183527]
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations.
Existing methods often make the explicit or implicit assumption that the observed entries are noise-free to provide a theoretical guarantee of exact recovery of missing entries.
This paper proposes a novel noisy tensor completion model, which complements the incompetence of existing works in handling the degeneration of high-order and noisy observations.
arXiv Detail & Related papers (2022-03-14T14:09:43Z) - Model based Multi-agent Reinforcement Learning with Tensor Decompositions [52.575433758866936]
This paper investigates generalisation in state-action space over unexplored state-action pairs by modelling the transition and reward functions as tensors of low CP-rank.
Experiments on synthetic MDPs show that using tensor decompositions in a model-based reinforcement learning algorithm can lead to much faster convergence if the true transition and reward functions are indeed of low rank.
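The low-CP-rank model used there can be sketched directly: a 3-way transition tensor is assembled from per-mode factor matrices. The function and shapes below are illustrative, not the paper's implementation.

```python
import numpy as np

def cp_tensor(A, B, C):
    """Assemble a 3-way tensor of CP rank r from factor matrices
    A (I x r), B (J x r), C (K x r):
    T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
    (A sketch of the low-CP-rank model; names are illustrative.)"""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# A rank-2 "transition-like" tensor T[s, a, s'] over 6 states and 3 actions
rng = np.random.default_rng(0)
T = cp_tensor(rng.random((6, 2)), rng.random((3, 2)), rng.random((6, 2)))
print(T.shape)  # (6, 3, 6)
```

The appeal of the model is parameter count: a rank-$r$ CP factorization stores $(I + J + K)\,r$ numbers instead of $IJK$, which is what enables generalisation over unexplored state-action pairs.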
arXiv Detail & Related papers (2021-10-27T15:36:25Z) - Understanding Deflation Process in Over-parametrized Tensor Decomposition [17.28303004783945]
We study the training dynamics for gradient flow on over-parametrized tensor decomposition problems.
Empirically, such training process often first fits larger components and then discovers smaller components.
arXiv Detail & Related papers (2021-06-11T18:51:36Z) - Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable $\ell_2$ accuracy rates.
arXiv Detail & Related papers (2020-06-15T17:47:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.