A high-order tensor completion algorithm based on Fully-Connected Tensor
Network weighted optimization
- URL: http://arxiv.org/abs/2204.01732v2
- Date: Wed, 6 Apr 2022 02:58:29 GMT
- Title: A high-order tensor completion algorithm based on Fully-Connected Tensor
Network weighted optimization
- Authors: Peilin Yang, Yonghui Huang, Yuning Qiu, Weijun Sun, Guoxu Zhou
- Abstract summary: We propose a new tensor completion method named the fully connected tensor network weighted optization(FCTN-WOPT)
The algorithm performs a composition of the completed tensor by initialising the factors from the FCTN decomposition.
The results show the advanced performance of our FCTN-WOPT when it is applied to higher-order tensor completion.
- Score: 8.229028597459752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor completion aims at recovering missing data, and it is one of the
popular concerns in deep learning and signal processing. Among the higher-order
tensor decomposition algorithms, the recently proposed fully-connected tensor
network (FCTN) decomposition is the most advanced. In this paper, by
leveraging the superior expressive power of the FCTN decomposition, we propose a
new tensor completion method named fully-connected tensor network weighted
optimization (FCTN-WOPT). The algorithm composes the completed tensor from
factors initialised by the FCTN decomposition. We build a loss function from
the weight tensor, the completed tensor and the incomplete tensor, and then
update the completed tensor using the limited-memory BFGS (L-BFGS) algorithm to
reduce memory occupation and speed up iterations. Finally, we test the
completion on synthetic data and real data (both image and video data), and the
results show the superior performance of FCTN-WOPT when applied to
higher-order tensor completion.
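To make the weighted-optimization step concrete, the sketch below writes the objective as f(G) = 0.5 * ||W * (FCTN(G) - T)||_F^2, where W is the binary observation mask, T the incomplete tensor, and FCTN(G) the tensor composed from the factors, and minimizes it over the factors with SciPy's L-BFGS solver. This is a minimal illustrative re-implementation for a third-order tensor only, not the authors' code; the factor shapes, ranks, initialisation and function names (fctn_compose, fctn_wopt) are assumptions.

```python
import numpy as np
from scipy.optimize import minimize


def fctn_compose(G1, G2, G3):
    """Contract the three FCTN factors into the full tensor.

    Illustrative shapes: G1 (I1, R12, R13), G2 (R12, I2, R23), G3 (R13, R23, I3);
    every pair of factors shares exactly one rank index (fully connected).
    """
    return np.einsum('aij,ibk,jkc->abc', G1, G2, G3)


def _unpack(x, shapes):
    """Split the flat parameter vector back into the factor tensors."""
    factors, offset = [], 0
    for s in shapes:
        size = int(np.prod(s))
        factors.append(x[offset:offset + size].reshape(s))
        offset += size
    return factors


def loss_and_grad(x, shapes, W, T):
    """Weighted loss 0.5 * ||W * (compose(G) - T)||_F^2 and its gradient."""
    G1, G2, G3 = _unpack(x, shapes)
    E = W * (fctn_compose(G1, G2, G3) - T)          # masked residual
    f = 0.5 * np.sum(E ** 2)
    # Differentiate the pairwise contraction with respect to each factor.
    dG1 = np.einsum('abc,ibk,jkc->aij', E, G2, G3)
    dG2 = np.einsum('abc,aij,jkc->ibk', E, G1, G3)
    dG3 = np.einsum('abc,aij,ibk->jkc', E, G1, G2)
    return f, np.concatenate([g.ravel() for g in (dG1, dG2, dG3)])


def fctn_wopt(T_obs, mask, ranks=(3, 3, 3), maxiter=300, seed=0):
    """Complete T_obs (zeros at missing entries) given the binary mask."""
    I1, I2, I3 = T_obs.shape
    R12, R13, R23 = ranks
    shapes = [(I1, R12, R13), (R12, I2, R23), (R13, R23, I3)]
    rng = np.random.default_rng(seed)
    x0 = np.concatenate([0.1 * rng.standard_normal(int(np.prod(s))) for s in shapes])
    res = minimize(loss_and_grad, x0, args=(shapes, mask, T_obs),
                   method='L-BFGS-B', jac=True, options={'maxiter': maxiter})
    return fctn_compose(*_unpack(res.x, shapes))


if __name__ == '__main__':
    # Tiny synthetic test: a low-rank tensor with roughly half its entries observed.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 2, 2))
    B = rng.standard_normal((2, 9, 2))
    C = rng.standard_normal((2, 2, 10))
    T_true = fctn_compose(A, B, C)
    mask = (rng.random(T_true.shape) > 0.5).astype(float)
    T_hat = fctn_wopt(T_true * mask, mask, ranks=(2, 2, 2))
    print('overall relative error:',
          np.linalg.norm(T_hat - T_true) / np.linalg.norm(T_true))
```

Because L-BFGS stores only a short history of gradient pairs instead of a full Hessian, its memory footprint stays small, which is consistent with the abstract's motivation of reducing memory occupation and speeding up iterations.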
Related papers
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- Tensor Completion via Leverage Sampling and Tensor QR Decomposition for Network Latency Estimation [2.982069479212266]
Large-scale network latency estimation requires substantial computing time.
We propose a new method that is much faster and maintains high accuracy.
Numerical experiments show that our method is faster than state-of-the-art algorithms while maintaining satisfactory accuracy.
arXiv Detail & Related papers (2023-06-27T07:21:26Z)
- Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z)
- Latent Matrices for Tensor Network Decomposition and to Tensor Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes the tensor into smaller ones and speeds up the computation of the algorithm.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm, with only a 1.8-point drop in accuracy.
arXiv Detail & Related papers (2022-10-07T08:19:50Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
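(A generic TT-SVD sketch illustrating the tensor-train format appears after this related-papers list.)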
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- Efficient Tensor Completion via Element-wise Weighted Low-rank Tensor Train with Overlapping Ket Augmentation [18.438177637687357]
We propose a novel tensor completion approach via the element-wise weighted technique.
We specifically consider the recovery quality of edge elements from adjacent blocks.
Our experimental results demonstrate that the proposed algorithm TWMac-TT outperforms several other competing tensor completion methods.
arXiv Detail & Related papers (2021-09-13T06:50:37Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
- Multi-version Tensor Completion for Time-delayed Spatio-temporal Data [50.762087239885936]
Real-world spatio-temporal data is often incomplete or inaccurate due to various data loading delays.
We propose a low-rank tensor model to predict the updates over time.
We obtain up to 27.2% lower root-mean-squared error compared to the best baseline method.
arXiv Detail & Related papers (2021-05-11T19:55:56Z)
- Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on an over-parameterized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z)
- Adaptive Learning of Tensor Network Structures [6.407946291544721]
We leverage the TN formalism to develop a generic and efficient adaptive algorithm to learn the structure and the parameters of a TN from data.
Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function.
arXiv Detail & Related papers (2020-08-12T16:41:56Z)
- Grassmannian Optimization for Online Tensor Completion and Tracking with the t-SVD [10.137631021498109]
We show the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors.
We present an algorithm under this model that can track changing free submodules from incomplete streaming 2-D data.
Our method is competitive in accuracy but much faster in compute time than state-of-the-art tensor completion algorithms on real applications.
arXiv Detail & Related papers (2020-01-30T15:56:14Z)
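For readers unfamiliar with the tensor-train (TT) format used by the near-linear-time entry above, here is a generic TT-SVD sketch: sequential truncated SVDs of unfoldings produce the TT cores, and contracting the cores reproduces the tensor. This shows only the classical construction under an assumed fixed per-core rank cap; it is not that paper's bicriteria algorithm, and the function names are illustrative.

```python
import numpy as np


def tt_svd(tensor, max_rank):
    """Classical TT-SVD: sequential truncated SVDs of the unfolded tensor.

    Returns 3-way cores G[k] of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1.
    """
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    C = tensor.reshape(dims[0], -1)                      # first unfolding
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, min(max_rank, int(np.sum(S > 1e-12))))  # truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = S[:r, None] * Vt[:r, :]                      # carry the remainder forward
        r_prev = r
        if k + 1 < d - 1:
            C = C.reshape(r_prev * dims[k + 1], -1)
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores


def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]                                      # shape (1, n_0, r_1)
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
        full = full.reshape(1, -1, G.shape[-1])
    return full.reshape([G.shape[1] for G in cores])


if __name__ == '__main__':
    # Build an exactly TT-rank-3 tensor from random cores, then recover it.
    rng = np.random.default_rng(0)
    cores_true = [rng.standard_normal(s)
                  for s in [(1, 6, 3), (3, 7, 3), (3, 8, 3), (3, 9, 1)]]
    X = tt_reconstruct(cores_true)
    X_hat = tt_reconstruct(tt_svd(X, max_rank=3))
    print('relative error:', np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

Truncating the per-core ranks in tt_svd gives a controlled low-rank approximation; speeding up this kind of construction to near-linear time in the number of nonzeros is the setting the entry above addresses.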