A Coupled Random Projection Approach to Large-Scale Canonical Polyadic
Decomposition
- URL: http://arxiv.org/abs/2105.04084v1
- Date: Mon, 10 May 2021 03:00:50 GMT
- Authors: Lu-Ming Wang, Ya-Nan Wang, Xiao-Feng Gong, Qiu-Hua Lin, Fei Xiang
- Abstract summary: We propose a novel algorithm for the computation of canonical polyadic decomposition (CPD) of large-scale tensors.
The proposed algorithm generalizes the random projection (RAP) technique, which is often used to compute large-scale decompositions.
- Score: 11.325830583924803
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel algorithm for the computation of canonical polyadic
decomposition (CPD) of large-scale tensors. The proposed algorithm generalizes
the random projection (RAP) technique, which is often used to compute
large-scale decompositions, from a single projection to multiple, coupled
random projections (CoRAP). The proposed CoRAP technique yields a set of
tensors that together admit a coupled CPD (C-CPD), and a C-CPD algorithm is
then used to jointly decompose these tensors. The results of C-CPD are finally
fused to obtain factor matrices of the original large-scale data tensor. As
more data samples are jointly exploited via C-CPD, the proposed CoRAP based CPD
is more accurate than RAP based CPD. Experiments are provided to illustrate the
performance of the proposed approach.
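As a rough illustration of the single-projection (RAP) baseline that CoRAP generalizes, the numpy sketch below compresses a tensor along each mode with a randomized range basis, runs a plain CP-ALS on the small tensor, and lifts the factors back to the original space. The function names, the range-finding step, and the plain ALS solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T along `mode` (that mode becomes the rows, C-order on the rest)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (I x R) and Y (J x R) -> (I*J x R)."""
    R = X.shape[1]
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, R)

def cp_als(T, rank, iters=500, seed=0):
    """Plain CP-ALS for a 3-way tensor; returns factor matrices A, B, C."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

def rap_cpd(T, rank, dims, seed=0):
    """Single-projection (RAP-style) CPD sketch: compress each mode with a
    randomized range basis, decompose the small tensor, lift the factors back."""
    rng = np.random.default_rng(seed)
    Us = []
    for n, m in enumerate(dims):
        Tn = unfold(T, n)
        # Randomized range finder: sketch the mode-n unfolding, orthonormalize.
        Us.append(np.linalg.qr(Tn @ rng.standard_normal((Tn.shape[1], m)))[0])
    G = np.einsum('ijk,ia,jb,kc->abc', T, *Us)      # compressed tensor
    Ah, Bh, Ch = cp_als(G, rank, seed=seed)
    return Us[0] @ Ah, Us[1] @ Bh, Us[2] @ Ch       # lifted factors

# Exact rank-2 tensor, compressed from 30x30x30 to 8x8x8 before decomposing.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((30, 2)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = rap_cpd(T, rank=2, dims=(8, 8, 8))
That = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(That - T) / np.linalg.norm(T)  # relative reconstruction error
print(err)
```

In the noiseless rank-2 case the range bases capture the factor column spaces exactly, so the lifted factors reconstruct the full tensor despite the CPD being computed only on the small core.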
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Deep-Learning-Aided Alternating Least Squares for Tensor CP Decomposition and Its Application to Massive MIMO Channel Estimation [31.14824776920284]
To achieve accurate and low-latency channel estimation, good and fast CP decomposition algorithms are desired.
The CP alternating least squares (CPALS) is the workhorse algorithm for calculating the CPD.
This paper proposes a deep-learning-aided CPALS (DL-CPALS) method that uses a deep neural network (DNN) to aid the CPALS computation.
Experimental results demonstrate the significant improvements of the proposed method in terms of both speed and accuracy for CPD and channel estimation.
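The CPALS update that this summary calls the workhorse solves, for each mode, a linear least-squares problem through the Khatri-Rao product of the other factors; the normal equations reduce to a small system via the Hadamard product of Gramians. A minimal numpy sketch of one mode-1 update (illustrative of plain CPALS, not the DL-CPALS method):

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (I x R) and Y (J x R) -> (I*J x R)."""
    R = X.shape[1]
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, R)

# One CPALS update for the mode-1 factor A of a 3-way tensor T:
#   A <- T_(1) (B kr C) ((B^T B) hadamard (C^T C))^{-1}
rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
A0, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
T = np.einsum('ir,jr,kr->ijk', A0, B, C)       # exact rank-R tensor

T1 = T.reshape(I, J * K)                       # mode-1 unfolding
G = (B.T @ B) * (C.T @ C)                      # R x R Gramian Hadamard product
A = T1 @ khatri_rao(B, C) @ np.linalg.inv(G)   # least-squares update
print(np.allclose(A, A0))                      # exact recovery in the noiseless case
```

The Gramian identity (B kr C)^T (B kr C) = (B^T B) hadamard (C^T C) is what keeps the solve at R x R rather than (J*K) x R.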
arXiv Detail & Related papers (2023-05-23T11:17:13Z)
- Fast Learnings of Coupled Nonnegative Tensor Decomposition Using Optimal Gradient and Low-rank Approximation [7.265645216663691]
We introduce a novel coupled nonnegative CANDECOMP/PARAFAC decomposition algorithm optimized by the alternating proximal gradient method (CoNCPD-APG).
By integrating low-rank approximation with the proposed CoNCPD-APG method, the proposed algorithm can significantly decrease the computational burden without compromising decomposition quality.
arXiv Detail & Related papers (2023-02-10T08:49:36Z)
- An approach to robust ICP initialization [77.45039118761837]
We propose an approach to initialize the Iterative Closest Point (ICP) algorithm to match unlabelled point clouds related by rigid transformations.
We derive bounds on the robustness of our approach to noise and numerical experiments confirm our theoretical findings.
arXiv Detail & Related papers (2022-12-10T16:27:25Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Tensor Principal Component Analysis in High Dimensional CP Models [3.553493344868413]
We propose new algorithms for tensor CP decomposition with theoretical guarantees under mild incoherence conditions.
The composite PCA applies the principal component or singular value decompositions twice, first to a matrix unfolding of the tensor data to obtain singular vectors and then to the matrix folding of the singular vectors obtained in the first step.
Experiments on synthetic data demonstrate the significant practical superiority of our approach over existing methods.
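The two-step (composite) PCA idea described above, SVD of a matrix unfolding followed by SVD of the refolded singular vectors, can be sketched in numpy; this is an illustrative reconstruction of the unfold-then-fold scheme, not the paper's full algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 8, 6, 5, 2
A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
T = np.einsum('ir,jr,kr->ijk', A, B, C)        # exact rank-R tensor

# First application: SVD of the I x (J*K) mode-1 unfolding.
U, s, Vt = np.linalg.svd(T.reshape(I, J * K), full_matrices=False)
num_rank = int(np.sum(s > 1e-10))              # numerical rank equals the CP rank

# Second application: fold a leading right singular vector into a J x K
# matrix and take its SVD; because that vector lies in the span of the
# Khatri-Rao columns, the folded matrix's left singular vectors lie in
# the column span of B (and its right singular vectors in that of C).
M = Vt[0].reshape(J, K)
u2, s2, v2t = np.linalg.svd(M)
proj = B @ np.linalg.lstsq(B, u2[:, :1], rcond=None)[0]
print(num_rank, np.allclose(proj, u2[:, :1]))
```

The projection check at the end confirms numerically that the second-step singular vectors land in the factor subspaces, which is what makes the folded SVD informative about B and C.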
arXiv Detail & Related papers (2021-08-10T03:24:32Z)
- Determinantal consensus clustering [77.34726150561087]
We propose the use of determinantal point processes or DPP for the random restart of clustering algorithms.
DPPs favor diversity of the center points within subsets.
We show through simulations that, contrary to DPPs, standard uniformly random selection fails both to ensure diversity and to obtain good coverage of all data facets.
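The diversity property of DPPs comes from the determinant: a DPP assigns a subset S probability proportional to det(L_S) for a similarity kernel L, so near-duplicate points yield a nearly singular submatrix and vanishing probability. A small illustrative check (the RBF kernel and the specific points are assumptions for the example):

```python
import numpy as np

# Similarity kernel L over four 2-D points: two nearly coincident, two far apart.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [0.0, 5.0]])
sq_dists = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
L = np.exp(-sq_dists / 2.0)                     # RBF kernel

clustered = [0, 1]                              # two nearby points
spread = [0, 2]                                 # two distant points
det_clustered = np.linalg.det(L[np.ix_(clustered, clustered)])
det_spread = np.linalg.det(L[np.ix_(spread, spread)])
print(det_spread > det_clustered)               # True: spread subsets more probable
```

Under a DPP the spread pair is favored by several orders of magnitude, which is exactly the behavior exploited for diverse restart centers.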
arXiv Detail & Related papers (2021-02-07T23:48:24Z)
- Randomized Online CP Decomposition [13.425121208828044]
A novel CP decomposition algorithm called randomized online CP decomposition (ROCP) is proposed in this paper.
ROCP avoids forming the full Khatri-Rao product, which greatly boosts speed and reduces memory usage.
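Avoiding the explicit Khatri-Rao product is possible because the matricized-tensor-times-Khatri-Rao product (MTTKRP) at the heart of CP updates can be computed by direct contraction, never materializing the (J*K) x R intermediate. A minimal numpy illustration of that saving (not the ROCP algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 50, 60, 70, 5
T = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Naive MTTKRP: materialize the (J*K) x R Khatri-Rao product.
KR = (B[:, None, :] * C[None, :, :]).reshape(J * K, R)
M_naive = T.reshape(I, J * K) @ KR

# Khatri-Rao-free MTTKRP: contract mode by mode, never forming KR.
M_fast = np.einsum('ijk,jr,kr->ir', T, B, C)

print(np.allclose(M_naive, M_fast))             # True: identical results
```

For large J and K the naive route's (J*K) x R buffer dominates memory, while the contraction form only ever holds factor-sized arrays.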
arXiv Detail & Related papers (2020-07-21T13:41:27Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
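A Gaussian projection for least squares can be sketched as follows; this shows the generic sketch-and-solve recipe rather than the paper's iterative first-order methods, and the problem dimensions and sketch size are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Gaussian embedding: compress the n rows down to m >> d sketched rows,
# then solve the much smaller least-squares problem.
m = 10 * d
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)                                  # small: sketch preserves geometry
```

With m around 10d the Gaussian sketch is a subspace embedding with high probability, so the sketched solution stays close to the exact one at a fraction of the cost when n is large.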
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- A Unified Framework for Coupled Tensor Completion [42.19293115131073]
Coupled tensor decomposition reveals the joint data structure by incorporating prior knowledge that comes from the latent coupled factors.
The tensor ring (TR) decomposition has powerful expressive ability and has achieved success in some multi-dimensional data processing applications.
The proposed method is validated through numerical experiments on synthetic data, and results on real-world data demonstrate its superiority over state-of-the-art methods in terms of recovery accuracy.
arXiv Detail & Related papers (2020-01-09T02:15:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.