Tensor Principal Component Analysis in High Dimensional CP Models
- URL: http://arxiv.org/abs/2108.04428v1
- Date: Tue, 10 Aug 2021 03:24:32 GMT
- Title: Tensor Principal Component Analysis in High Dimensional CP Models
- Authors: Yuefeng Han and Cun-Hui Zhang
- Abstract summary: We propose new algorithms for tensor CP decomposition with theoretical guarantees under mild incoherence conditions.
The composite PCA applies the principal component or singular value decompositions twice, first to a matrix unfolding of the tensor data to obtain singular vectors and then to the matrix folding of the singular vectors obtained in the first step.
Our implementations on synthetic data demonstrate significant practical superiority of our approach over existing methods.
- Score: 3.553493344868413
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The CP decomposition for high dimensional non-orthogonal spike tensors is an
important problem with broad applications across many disciplines. However,
previous works with theoretical guarantees typically assume restrictive
incoherence conditions on the basis vectors for the CP components. In this
paper, we propose new computationally efficient composite PCA and concurrent
orthogonalization algorithms for tensor CP decomposition with theoretical
guarantees under mild incoherence conditions. The composite PCA applies the
principal component or singular value decompositions twice, first to a matrix
unfolding of the tensor data to obtain singular vectors and then to the matrix
folding of the singular vectors obtained in the first step. It can be used as
an initialization for any iterative optimization schemes for the tensor CP
decomposition. The concurrent orthogonalization algorithm iteratively estimates
the basis vector in each mode of the tensor by simultaneously applying
projections to the orthogonal complements of the spaces generated by the other
CP components in the other modes. It is designed to improve the alternating least
squares estimator and other forms of the high order orthogonal iteration for
tensors with low or moderately high CP ranks. Our theoretical investigation
provides estimation accuracy and statistical convergence rates for the two
proposed algorithms. Our implementations on synthetic data demonstrate
significant practical superiority of our approach over existing methods.
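The composite PCA described above (an SVD of a matrix unfolding of the tensor, followed by SVDs of the folded singular vectors) can be sketched for a third-order tensor as follows. This is a minimal illustrative rendering of the two-step idea, assuming the mode-1 unfolding and a per-component fold; the function name and loop structure are not the authors' code:

```python
import numpy as np

def composite_pca(T, r):
    """Sketch of composite PCA for a third-order tensor T of shape (d1, d2, d3).

    Step 1: SVD of the mode-1 unfolding (d1 x d2*d3).
    Step 2: fold each top right singular vector into a d2 x d3 matrix and
    take its leading singular pair as the mode-2 / mode-3 estimates.
    """
    d1, d2, d3 = T.shape
    # Step 1: singular vectors of the unfolded data matrix.
    U, s, Vt = np.linalg.svd(T.reshape(d1, d2 * d3), full_matrices=False)
    A_hat = U[:, :r]                      # estimated mode-1 loadings
    B_hat = np.empty((d2, r))
    C_hat = np.empty((d3, r))
    # Step 2: second SVD, applied to the folded singular vectors.
    for k in range(r):
        M = Vt[k].reshape(d2, d3)         # matrix folding of the k-th vector
        u, _, vt = np.linalg.svd(M, full_matrices=False)
        B_hat[:, k] = u[:, 0]
        C_hat[:, k] = vt[0]
    return A_hat, B_hat, C_hat
```

As the abstract notes, estimates of this form are meant as an initialization for iterative CP refinement schemes, not as a final estimator.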
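The concurrent orthogonalization step (re-estimating each mode's basis vector while projecting onto the orthogonal complements of the other CP components in the other modes) can be read as a projected variant of alternating least squares. The sketch below is one plausible rendering under that assumption; the update rule, iteration count, and function names are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def oproj(V):
    """Projection onto the orthogonal complement of the column span of V."""
    if V.shape[1] == 0:
        return np.eye(V.shape[0])         # nothing to project out
    Q, _ = np.linalg.qr(V)
    return np.eye(V.shape[0]) - Q @ Q.T

def concurrent_orth(T, A, B, C, n_iter=10):
    """Projected ALS-style refinement of CP factor estimates A, B, C.

    For each component k, the other components' columns in the other two
    modes are projected out before contracting with the tensor.
    """
    A, B, C = A.astype(float).copy(), B.astype(float).copy(), C.astype(float).copy()
    r = A.shape[1]
    for _ in range(n_iter):
        for k in range(r):
            idx = [j for j in range(r) if j != k]
            bt = oproj(B[:, idx]) @ B[:, k]
            ct = oproj(C[:, idx]) @ C[:, k]
            a = np.einsum('ijk,j,k->i', T, bt, ct)
            A[:, k] = a / np.linalg.norm(a)
            at = oproj(A[:, idx]) @ A[:, k]
            b = np.einsum('ijk,i,k->j', T, at, ct)
            B[:, k] = b / np.linalg.norm(b)
            c = np.einsum('ijk,i,j->k', T, at, bt)
            C[:, k] = c / np.linalg.norm(c)
    return A, B, C
```

With rank r = 1 the projections reduce to the identity and the update collapses to ordinary tensor power iteration, which matches the abstract's framing of the method as an improvement over plain ALS-type updates when several non-orthogonal components are present.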
Related papers
- Solve sparse PCA problem by employing Hamiltonian system and leapfrog method [0.0]
We propose a novel sparse PCA algorithm that imposes sparsity through a smooth L1 penalty.
Experimental evaluations on a face recognition dataset, using both k-nearest neighbor and kernel ridge regression, demonstrate that the proposed sparse PCA methods consistently achieve higher classification accuracy than conventional PCA.
arXiv Detail & Related papers (2025-03-30T06:39:11Z) - Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z) - Fast Learnings of Coupled Nonnegative Tensor Decomposition Using Optimal Gradient and Low-rank Approximation [7.265645216663691]
We introduce a novel coupled nonnegative CANDECOMP/PARAFAC decomposition algorithm optimized by the alternating proximal gradient method (CoNCPD-APG).
By integrating low-rank approximation with the proposed CoNCPD-APG method, the proposed algorithm can significantly decrease the computational burden without compromising decomposition quality.
arXiv Detail & Related papers (2023-02-10T08:49:36Z) - Controlling the Complexity and Lipschitz Constant improves polynomial nets [55.121200972539114]
We derive new complexity bounds for the set of Coupled CP-Decomposition (CCP) and Nested Coupled CP-decomposition (NCP) models of Polynomial Nets.
We propose a principled regularization scheme that we evaluate experimentally in six datasets and show that it improves the accuracy as well as the robustness of the models to adversarial perturbations.
arXiv Detail & Related papers (2022-02-10T14:54:29Z) - Modelling matrix time series via a tensor CP-decomposition [7.900118935012717]
We propose to model matrix time series based on a tensor CP-decomposition.
We show that all the component coefficients in the CP-decomposition are estimated consistently, with error rates depending on the relative sizes of the time series dimensions and the sample size.
arXiv Detail & Related papers (2021-12-31T13:02:06Z) - A Coupled Random Projection Approach to Large-Scale Canonical Polyadic Decomposition [11.325830583924803]
We propose a novel algorithm for the computation of canonical polyadic decomposition (CPD) of large-scale tensors.
The proposed algorithm generalizes the random projection (RAP) technique, which is often used to compute large-scale decompositions.
arXiv Detail & Related papers (2021-05-10T03:00:50Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Robust Tensor Principal Component Analysis: Exact Recovery via Deterministic Model [5.414544833902815]
This paper proposes a new method to analyze robust tensor principal component analysis (RTPCA).
It is based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD).
arXiv Detail & Related papers (2020-08-05T16:26:10Z) - Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
arXiv Detail & Related papers (2020-06-23T04:25:36Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.