Tensor Completion via Leverage Sampling and Tensor QR Decomposition for
Network Latency Estimation
- URL: http://arxiv.org/abs/2307.06848v1
- Date: Tue, 27 Jun 2023 07:21:26 GMT
- Title: Tensor Completion via Leverage Sampling and Tensor QR Decomposition for
Network Latency Estimation
- Authors: Jun Lei, Ji-Qian Zhao, Jing-Qi Wang, An-Bao Xu
- Abstract summary: Large-scale network latency estimation requires substantial computing time.
We propose a new method that is much faster and maintains high accuracy.
Numerical experiments show that our method is faster than state-of-the-art algorithms with satisfactory accuracy.
- Score: 2.982069479212266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider network latency estimation, which has been an
important metric of network performance. However, large-scale network latency
estimation requires substantial computing time. We therefore propose a new
method that is much faster and maintains high accuracy. The latency
measurements between network nodes form a matrix, and introducing the time
dimension yields a tensor model. Thus, the entire problem can be formulated
as a tensor completion problem. The main idea of our method is to improve the
tensor leverage sampling strategy and to introduce tensor QR decomposition
into tensor completion. To achieve faster tensor leverage sampling, we replace
the tensor singular value decomposition (t-SVD) with tensor CSVD-QR, which
approximates the t-SVD. To achieve faster completion of the incomplete tensor,
we use the tensor $L_{2,1}$-norm rather than the traditional tensor nuclear
norm. Furthermore, we introduce tensor QR decomposition into the alternating
direction method of multipliers (ADMM) framework. Numerical experiments show
that our method is faster than state-of-the-art algorithms with satisfactory
accuracy.
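As an illustration of the sampling step, the sketch below computes slice-wise leverage scores for a third-order tensor in the Fourier domain and samples entries from them. It is a minimal sketch, assuming a plain economy QR per frontal slice as a stand-in for the paper's CSVD-QR step; the function names, the target rank, and the sampling rule (row score times column score, uniform over time) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed names, not the authors' code): QR-based
# leverage scores for a 3-way tensor, computed slice-wise in the
# Fourier domain as t-product algorithms do. An economy QR per slice
# stands in here for the cheaper-than-t-SVD factorization (CSVD-QR).
import numpy as np

def leverage_scores(T, rank):
    """Approximate row/column leverage scores of a third-order tensor T."""
    n1, n2, n3 = T.shape
    F = np.fft.fft(T, axis=2)                  # frontal slices in Fourier domain
    row, col = np.zeros(n1), np.zeros(n2)
    for k in range(n3):
        Q, _ = np.linalg.qr(F[:, :, k])        # Q: n1 x min(n1, n2)
        row += np.sum(np.abs(Q[:, :rank]) ** 2, axis=1)
        Qc, _ = np.linalg.qr(F[:, :, k].conj().T)
        col += np.sum(np.abs(Qc[:, :rank]) ** 2, axis=1)
    return row / row.sum(), col / col.sum()    # sampling probabilities

def sample_entries(shape, row_p, col_p, m, seed=0):
    """Draw m indices (i, j, t): rows/columns by leverage score, time uniformly."""
    n1, n2, n3 = shape
    rng = np.random.default_rng(seed)
    i = rng.choice(n1, size=m, p=row_p)        # source node
    j = rng.choice(n2, size=m, p=col_p)        # destination node
    t = rng.integers(0, n3, size=m)            # time slice
    return i, j, t

# Stand-in latency tensor: nodes x nodes x time snapshots.
T = np.random.default_rng(1).random((30, 30, 12))
row_p, col_p = leverage_scores(T, rank=5)
i, j, t = sample_entries(T.shape, row_p, col_p, m=500)
```

In a full completion pipeline, such scores would be recomputed from the current low-rank estimate as it is refined; here they are computed once for brevity.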
Related papers
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- Tensor Decomposition Based Attention Module for Spiking Neural Networks [18.924242014716647]
We design the projected full attention (PFA) module, which demonstrates excellent results with linearly growing parameters.
Our method achieves state-of-the-art performance on both static and dynamic benchmark datasets.
arXiv Detail & Related papers (2023-10-23T05:25:49Z)
- Latent Matrices for Tensor Network Decomposition and Its Application to Tensor Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes the tensor into smaller ones and speeds up the computation of the algorithm.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm with only a 1.8-point accuracy drop.
arXiv Detail & Related papers (2022-10-07T08:19:50Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- A high-order tensor completion algorithm based on Fully-Connected Tensor Network weighted optimization [8.229028597459752]
We propose a new tensor completion method named fully connected tensor network weighted optimization (FCTN-WOPT).
The algorithm performs a composition of the completed tensor by initialising the factors from the FCTN decomposition.
The results show the advanced performance of our FCTN-WOPT when it is applied to higher-order tensor completion.
arXiv Detail & Related papers (2022-04-04T13:46:32Z)
- Robust M-estimation-based Tensor Ring Completion: a Half-quadratic Minimization Approach [14.048989759890475]
We develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic.
We present two HQ-based algorithms based on truncated singular value decomposition and matrix factorization.
arXiv Detail & Related papers (2021-06-19T04:37:50Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
- Multi-version Tensor Completion for Time-delayed Spatio-temporal Data [50.762087239885936]
Real-world spatio-temporal data is often incomplete or inaccurate due to various data loading delays.
We propose a low-rank tensor model to predict the updates over time.
We obtain up to 27.2% lower root mean squared error compared to the best baseline method.
arXiv Detail & Related papers (2021-05-11T19:55:56Z)
- Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on an over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z)
- Tensor train decompositions on recurrent networks [60.334946204107446]
Matrix product state (MPS) tensor trains have more attractive features than matrix product operators (MPOs) in terms of storage reduction and computing time at inference.
We show that MPS tensor trains should be at the forefront of LSTM network compression through a theoretical analysis and practical experiments on NLP tasks.
arXiv Detail & Related papers (2020-06-09T18:25:39Z)
- Grassmannian Optimization for Online Tensor Completion and Tracking with the t-SVD [10.137631021498109]
We show the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors; a generic t-SVD sketch is given after this list.
We present an algorithm under this model that can track changing free submodules from incomplete streaming 2-D data.
Our results are competitive in accuracy but much faster in compute time than state-of-the-art tensor completion algorithms on real applications.
arXiv Detail & Related papers (2020-01-30T15:56:14Z)
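Because the main paper's CSVD-QR step and the Grassmannian entry above both build on the t-SVD, the following is a generic, self-contained sketch of the standard t-SVD computation (FFT along the third mode, one matrix SVD per frontal slice, inverse FFT). It is a textbook-style reference implementation, not code from any of the listed papers.

```python
# Generic t-SVD sketch: FFT along mode 3, a matrix SVD per frontal
# slice, then inverse FFT. Conjugate-symmetric slices are mirrored so
# the factors of a real tensor come back real.
import numpy as np

def t_transpose(A):
    """Tensor transpose under the t-product: transpose each frontal
    slice and reverse the order of slices 2..n3."""
    At = np.transpose(A, (1, 0, 2)).copy()
    At[:, :, 1:] = At[:, :, 1:][:, :, ::-1]
    return At

def t_prod(A, B):
    """t-product via slice-wise matrix products in the Fourier domain."""
    Fa = np.fft.fft(A, axis=2)
    Fb = np.fft.fft(B, axis=2)
    C = np.einsum('ijk,jlk->ilk', Fa, Fb)
    return np.real(np.fft.ifft(C, axis=2))

def t_svd(T):
    """Return real U, S, V with T = t_prod(t_prod(U, S), t_transpose(V))."""
    n1, n2, n3 = T.shape
    r = min(n1, n2)
    F = np.fft.fft(T, axis=2)
    U = np.zeros((n1, r, n3), dtype=complex)
    S = np.zeros((r, r, n3), dtype=complex)
    V = np.zeros((n2, r, n3), dtype=complex)
    for k in range(n3 // 2 + 1):              # SVD on half the slices...
        u, s, vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        U[:, :, k], S[:, :, k], V[:, :, k] = u, np.diag(s), vh.conj().T
    for k in range(n3 // 2 + 1, n3):          # ...mirror the conjugates
        U[:, :, k] = U[:, :, n3 - k].conj()
        S[:, :, k] = S[:, :, n3 - k].conj()
        V[:, :, k] = V[:, :, n3 - k].conj()
    back = lambda X: np.real(np.fft.ifft(X, axis=2))
    return back(U), back(S), back(V)

# Quick check on a random real tensor.
T = np.random.default_rng(0).random((8, 6, 5))
U, S, V = t_svd(T)
assert np.allclose(T, t_prod(t_prod(U, S), t_transpose(V)))
```

The per-slice SVD is the expensive part; replacing it with a QR-type factorization, as the main paper does with CSVD-QR, trades a little accuracy for speed.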
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.