Latent Matrices for Tensor Network Decomposition and to Tensor
Completion
- URL: http://arxiv.org/abs/2210.03392v1
- Date: Fri, 7 Oct 2022 08:19:50 GMT
- Title: Latent Matrices for Tensor Network Decomposition and to Tensor
Completion
- Authors: Peilin Yang, Weijun Sun, Qibin Zhao, Guoxu Zhou
- Abstract summary: We propose a novel higher-order tensor decomposition model that decomposes a tensor into smaller factors and thereby speeds up computation.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm with only a 1.8-point drop in accuracy.
- Score: 8.301418317685906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prevalent fully-connected tensor network (FCTN) has achieved excellent
success in compressing data. However, the FCTN decomposition suffers from slow
computational speed when facing higher-order and large-scale data. Naturally,
there arises an interesting question: can a new model be proposed that
decomposes the tensor into smaller ones and speeds up the computation of the
algorithm? This work gives a positive answer by formulating a novel
higher-order tensor decomposition model that utilizes latent matrices based on
the tensor network structure, which can decompose a tensor into factors of
smaller scale than the FCTN decomposition; hence we name it Latent Matrices for
Tensor Network Decomposition (LMTN). Furthermore, three optimization algorithms,
LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the
tensor-completion task. In addition, we provide proofs of theoretical
convergence and complexity analysis for these algorithms. Experimental results
show that our algorithm is effective in both deep-learning dataset compression
and higher-order tensor completion, and that our LMTN-SVD algorithm is 3-6 times
faster than the FCTN-PAM algorithm with only a 1.8-point drop in accuracy.
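For intuition, the sketch below shows the generic "factorise, refit, re-impute" loop that tensor-completion methods of this kind follow: fit small factors to the currently imputed tensor, reconstruct, and refill the missing entries. It is a minimal illustration only, not the paper's LMTN-PAM, LMTN-SVD or LMTN-AR algorithms; it uses a plain CP factorisation fitted by masked alternating least squares, and the tensor sizes, rank and observation ratio are assumed values.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's LMTN algorithms.
# Generic masked tensor completion with a plain CP factorisation and
# alternating least squares; sizes, rank R and 30% observation are assumed.
np.random.seed(0)
I, J, K, R = 20, 20, 20, 5
A0, B0, C0 = (np.random.randn(d, R) for d in (I, J, K))
T_true = np.einsum('ir,jr,kr->ijk', A0, B0, C0)   # ground-truth low-rank tensor
mask = np.random.rand(I, J, K) < 0.3              # True where an entry is observed
T_obs = np.where(mask, T_true, 0.0)

A, B, C = (np.random.randn(d, R) for d in (I, J, K))
X = T_obs.copy()                                  # current estimate, missing entries imputed
for _ in range(50):
    # Refit each factor by least squares against the corresponding unfolding of X.
    KR = np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)
    A = np.linalg.lstsq(KR, X.reshape(I, J * K).T, rcond=None)[0].T
    KR = np.einsum('ir,kr->ikr', A, C).reshape(I * K, R)
    B = np.linalg.lstsq(KR, X.transpose(1, 0, 2).reshape(J, I * K).T, rcond=None)[0].T
    KR = np.einsum('ir,jr->ijr', A, B).reshape(I * J, R)
    C = np.linalg.lstsq(KR, X.transpose(2, 0, 1).reshape(K, I * J).T, rcond=None)[0].T
    X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    X = np.where(mask, T_obs, X_hat)              # keep observed entries, impute the rest

rel_err = np.linalg.norm((X_hat - T_true)[~mask]) / np.linalg.norm(T_true[~mask])
print(f"relative error on missing entries: {rel_err:.3e}")
```

The paper's LMTN algorithms replace the CP factors and ALS refit shown here with latent-matrix tensor-network factors and their own PAM, SVD and AR updates, but the overall completion loop has this shape.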
Related papers
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and achieves a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- Tensor Completion via Leverage Sampling and Tensor QR Decomposition for Network Latency Estimation [2.982069479212266]
Large-scale network latency estimation requires substantial computing time.
We propose a new method that is much faster and maintains high accuracy.
Numerical experiments show that our method is faster than state-of-the-art algorithms while maintaining satisfactory accuracy.
arXiv Detail & Related papers (2023-06-27T07:21:26Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM) with a data-imputation strategy.
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- A high-order tensor completion algorithm based on Fully-Connected Tensor Network weighted optimization [8.229028597459752]
We propose a new tensor completion method named fully connected tensor network weighted optimization (FCTN-WOPT).
The algorithm composes the completed tensor from factors initialised by the FCTN decomposition.
The results show the advanced performance of our FCTN-WOPT when it is applied to higher-order tensor completion.
arXiv Detail & Related papers (2022-04-04T13:46:32Z)
- Multi-Tensor Network Representation for High-Order Tensor Completion [25.759851542474447]
This work studies the problem of completing high-dimensional data (referred to as tensors) from partially observed samples.
We consider that a tensor is a superposition of multiple low-rank components.
In this paper, we propose a fundamental tensor decomposition framework: Multi-Tensor Network decomposition (MTNR)
arXiv Detail & Related papers (2021-09-09T03:50:19Z)
- Augmented Tensor Decomposition with Stochastic Optimization [46.16865811396394]
Real-world tensor data are usually high-ordered and have large dimensions with millions or billions of entries.
It is expensive to decompose the whole tensor with traditional algorithms.
This paper proposes augmented tensor decomposition, which effectively incorporates data augmentations to boost downstream classification.
arXiv Detail & Related papers (2021-06-15T06:29:05Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Adaptive Learning of Tensor Network Structures [6.407946291544721]
We leverage the TN formalism to develop a generic and efficient adaptive algorithm to learn the structure and the parameters of a TN from data.
Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function.
arXiv Detail & Related papers (2020-08-12T16:41:56Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study a distributed variant of large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Grassmannian Optimization for Online Tensor Completion and Tracking with the t-SVD [10.137631021498109]
We show the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors.
We present an algorithm under this model that can track changing free submodules from incomplete streaming 2-D data.
Our results are competitive in accuracy but much faster in compute time than state-of-the-art tensor completion algorithms on real applications.
arXiv Detail & Related papers (2020-01-30T15:56:14Z)