Tensor Completion via Tensor Networks with a Tucker Wrapper
- URL: http://arxiv.org/abs/2010.15819v1
- Date: Thu, 29 Oct 2020 17:54:01 GMT
- Title: Tensor Completion via Tensor Networks with a Tucker Wrapper
- Authors: Yunfeng Cai and Ping Li
- Abstract summary: We propose to solve low-rank tensor completion (LRTC) via tensor networks with a Tucker wrapper.
A two-level alternating least squares method is then employed to update the unknown factors.
Numerical simulations show that the proposed algorithm is comparable with state-of-the-art methods.
- Score: 28.83358353043287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, low-rank tensor completion (LRTC) has received considerable
attention due to its applications in image/video inpainting, hyperspectral data
recovery, etc. With different notions of tensor rank (e.g., CP, Tucker, tensor
train/ring, etc.), various optimization-based numerical methods have been proposed
for LRTC. However, tensor-network-based methods have not been proposed yet. In this
paper, we propose to solve LRTC via tensor networks with a Tucker wrapper. Here
by "Tucker wrapper" we mean that the outermost factor matrices of the tensor
network are all orthonormal. We formulate LRTC as a problem of solving a system
of nonlinear equations, rather than a constrained optimization problem. A
two-level alternating least squares method is then employed to update the
unknown factors. The computation of the method is dominated by tensor matrix
multiplications and can be efficiently performed. Also, under proper
assumptions, it is shown that with high probability, the method converges to
the exact solution at a linear rate. Numerical simulations show that the
proposed algorithm is comparable with state-of-the-art methods.
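To make the problem setup concrete, here is a minimal NumPy sketch of a generic impute-then-project Tucker completion loop. It is not the paper's algorithm: the paper interleaves two levels of alternating least squares updates over the orthonormal outer factors and the inner tensor-network core, whereas this sketch simply alternates an HOSVD projection with re-imposing the observed entries; the function names and the fixed multilinear rank are assumptions for illustration.
```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD: returns a core G and factor matrices with
    orthonormal columns (the 'Tucker wrapper' part of the model)."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    G = T
    for mode, U in enumerate(factors):   # G = T x_1 U1^T x_2 U2^T ... (mode products)
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G, factors

def tucker_reconstruct(G, factors):
    X = G
    for mode, U in enumerate(factors):   # X = G x_1 U1 x_2 U2 ...
        X = np.moveaxis(np.tensordot(U, np.moveaxis(X, mode, 0), axes=1), 0, mode)
    return X

def complete(T_obs, mask, ranks, n_iters=200):
    """Impute-then-project completion: alternate a low-Tucker-rank projection
    with re-imposing the observed entries (mask == True where observed)."""
    X = np.where(mask, T_obs, 0.0)
    for _ in range(n_iters):
        G, factors = hosvd(X, ranks)
        X = np.where(mask, T_obs, tucker_reconstruct(G, factors))
    return X
```
Per the abstract, the actual method's per-iteration work is dominated by tensor-matrix multiplications and comes with a high-probability linear-rate recovery guarantee; the sketch above has neither property and only illustrates the completion setup.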
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
These objects also let us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Faster Robust Tensor Power Method for Arbitrary Order [15.090593955414137]
Tensor power method (TPM) is one of the widely-used techniques in the decomposition of tensors.
We apply a sketching method and achieve a running time of $\widetilde{O}(n^{p-1})$ for an order-$p$, dimension-$n$ tensor.
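For context, a minimal sketch of the classical (non-sketched) tensor power method for a symmetric order-3 tensor is given below; each iteration costs $O(n^3)$, which is what the cited sketching technique improves toward $\widetilde{O}(n^{p-1})$ for order-$p$ tensors. The function name and test setup are illustrative assumptions.
```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """Baseline power iteration for a symmetric order-3 tensor T (n x n x n):
    repeat u <- T(I, u, u) / ||T(I, u, u)|| to recover the dominant rank-1
    component lambda * (u o u o u)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        v = np.einsum('ijk,j,k->i', T, u, u)      # contract two modes with u: O(n^3)
        u = v / np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)    # strength of the recovered component
    return lam, u

# Recover the planted direction of a noiseless rank-1 tensor (lam ~ 3, u ~ x up to sign).
rng = np.random.default_rng(1)
x = rng.standard_normal(20)
x /= np.linalg.norm(x)
lam, u = tensor_power_iteration(3.0 * np.einsum('i,j,k->ijk', x, x, x))
```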
arXiv Detail & Related papers (2023-06-01T07:12:00Z) - A Novel Tensor Factorization-Based Method with Robustness to Inaccurate
Rank Estimation [9.058215418134209]
We propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time.
It is proven theoretically that the resulting tensor completion model can effectively avoid performance degradation caused by inaccurate rank estimation.
Based on this, the total cost at each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3\log n + kn^3)$ from the $\mathcal{O}(n^4)$ achieved with standard methods.
arXiv Detail & Related papers (2023-05-19T06:26:18Z) - Low-Rank Tensor Function Representation for Multi-Dimensional Data
Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method as compared with state-of-the-art methods.
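As a rough illustration of the idea (not the LRTFR model itself, where the factor functions are learned from observed data rather than fixed by hand), a low-rank tensor function evaluates a sum of products of one-dimensional factor functions at arbitrary, off-grid coordinates:
```python
import numpy as np

def lowrank_tensor_function(factor_fns, x, y, z):
    """Evaluate f(x, y, z) = sum_r f1_r(x) * f2_r(y) * f3_r(z): a rank-R
    'tensor function' that can be queried at arbitrary off-grid coordinates."""
    f1, f2, f3 = factor_fns
    return float(np.sum(f1(x) * f2(y) * f3(z)))

# A hand-written rank-2 example: each factor function maps a scalar to 2 values.
f1 = lambda x: np.array([np.sin(np.pi * x), x])
f2 = lambda y: np.array([np.cos(np.pi * y), y ** 2])
f3 = lambda z: np.array([1.0, np.exp(-z)])

# Resolution-free queries: sample anywhere, not just on a fixed meshgrid.
value = lowrank_tensor_function((f1, f2, f3), 0.31, 0.72, 0.05)
```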
arXiv Detail & Related papers (2022-12-01T04:00:38Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \varepsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
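For reference, the classical TT-SVD procedure below computes a tensor-train decomposition by sequential SVDs of unfoldings; it is not the cited near-linear-time bicriteria algorithm, and the function name and fixed rank cap are assumptions, but it shows the cores a tensor-train decomposition produces.
```python
import numpy as np

def tt_svd(T, max_rank):
    """Classical TT-SVD: sequential SVDs of unfoldings produce tensor-train
    cores G_k of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    dims = T.shape
    cores, r_prev = [], 1
    C = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

# Example: decompose a random 8 x 9 x 10 x 11 tensor with TT-ranks capped at 4.
T = np.random.default_rng(0).standard_normal((8, 9, 10, 11))
cores = tt_svd(T, max_rank=4)
print([c.shape for c in cores])   # [(1, 8, 4), (4, 9, 4), (4, 10, 4), (4, 11, 1)]
```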
arXiv Detail & Related papers (2022-07-15T11:55:09Z) - Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other method is to use Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
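For orientation, the Newton-Schulz (NS) iteration cited as a baseline computes a differentiable matrix square root with matrix multiplications only; a minimal sketch (not the proposed MTP/MPA variants) might look as follows, with the helper name being an assumption.
```python
import numpy as np

def matrix_sqrt_ns(A, n_iters=15):
    """Coupled Newton-Schulz iteration for an SPD matrix A: returns Y ~ A^{1/2}
    and Z ~ A^{-1/2} using matrix multiplications only (no SVD), which makes it
    cheap on GPUs and straightforward to backpropagate through."""
    I = np.eye(A.shape[0])
    norm = np.linalg.norm(A)          # rescale so the iteration converges
    Y, Z = A / norm, I.copy()
    for _ in range(n_iters):
        P = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ P, P @ Z
    return Y * np.sqrt(norm), Z / np.sqrt(norm)

# Sanity check on a random SPD matrix: Y @ Y ~ A.
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8.0 * np.eye(8)
Y, Z = matrix_sqrt_ns(A)
print(np.allclose(Y @ Y, A, atol=1e-6))
```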
arXiv Detail & Related papers (2022-01-29T10:00:35Z) - Efficient Tensor Completion via Element-wise Weighted Low-rank Tensor
Train with Overlapping Ket Augmentation [18.438177637687357]
We propose a novel tensor completion approach via the element-wise weighted technique.
We specifically consider the recovery quality of edge elements from adjacent blocks.
Our experimental results demonstrate that the proposed algorithm TWMac-TT outperforms several other competing tensor completion methods.
arXiv Detail & Related papers (2021-09-13T06:50:37Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
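A toy sketch of this setting, under illustrative assumptions (helper name, hyperparameters, and the symmetric CP parameterization are not from the paper): run plain gradient descent on an over-parameterized symmetric CP model with a small random initialization and fit it to a low-rank symmetric tensor.
```python
import numpy as np

def fit_overparam_cp(T, m=20, lr=0.1, n_steps=3000, init_scale=0.01, seed=0):
    """Plain gradient descent on an over-parameterized symmetric CP model
    sum_r u_r ⊗ u_r ⊗ u_r with m components (more than the target rank)
    and a small random initialization, minimizing 0.5 * ||model - T||^2."""
    rng = np.random.default_rng(seed)
    U = init_scale * rng.standard_normal((T.shape[0], m))    # columns = components
    for _ in range(n_steps):
        R = np.einsum('ir,jr,kr->ijk', U, U, U) - T          # residual tensor
        U -= lr * 3.0 * np.einsum('ijk,jr,kr->ir', R, U, U)  # exact grad for symmetric T
    return U

# Example: a symmetric rank-2 target built from two unit vectors.
rng = np.random.default_rng(1)
a, b = rng.standard_normal(10), rng.standard_normal(10)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
T = np.einsum('i,j,k->ijk', a, a, a) + np.einsum('i,j,k->ijk', b, b, b)
U = fit_overparam_cp(T)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', U, U, U) - T) / np.linalg.norm(T)  # relative error
```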
arXiv Detail & Related papers (2020-10-22T00:32:12Z) - Tensor completion via nonconvex tensor ring rank minimization with
guaranteed convergence [16.11872681638052]
In recent studies, the tensor ring (TR) rank has shown high effectiveness in tensor completion.
A recently proposed TR rank captures this structure through a weighted sum that penalizes all singular values equally.
In this paper, we propose to use the logdet-based function as a nonconvex relaxation of the TR rank.
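For intuition about why a logdet-based surrogate is attractive, the sketch below compares it with the nuclear norm on a low-rank matrix; this only illustrates the relaxation's behavior and is not the cited TR-rank model or its convergence-guaranteed algorithm, and the function name and epsilon value are assumptions.
```python
import numpy as np

def logdet_rank_surrogate(M, eps=1e-2):
    """Logdet-based rank surrogate: sum_i log(sigma_i(M) + eps). Large singular
    values contribute only logarithmically, unlike in the nuclear norm."""
    sigma = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(np.log(sigma + eps)))

# Scaling a rank-3 matrix by 10 multiplies its nuclear norm by 10, but moves the
# logdet surrogate only by about 3*log(10) -- closer to the scale-invariance of rank.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
for A in (M, 10.0 * M):
    s = np.linalg.svd(A, compute_uv=False)
    print(f"nuclear norm = {s.sum():9.2f}   logdet surrogate = {logdet_rank_surrogate(A):9.2f}")
```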
arXiv Detail & Related papers (2020-05-14T03:13:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.