Modelling matrix time series via a tensor CP-decomposition
- URL: http://arxiv.org/abs/2112.15423v1
- Date: Fri, 31 Dec 2021 13:02:06 GMT
- Title: Modelling matrix time series via a tensor CP-decomposition
- Authors: Jinyuan Chang, Jing He, Lin Yang, Qiwei Yao
- Abstract summary: We propose to model matrix time series based on a tensor CP-decomposition.
We show that all the component coefficient vectors in the CP-decomposition are estimated consistently, with error rates that depend on the relative sizes of the time-series dimensions and the sample size.
- Score: 7.900118935012717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to model matrix time series based on a tensor
CP-decomposition. Instead of using an iterative algorithm, which is the
standard practice for estimating CP-decompositions, we propose a new one-pass
estimation procedure based on a generalized eigenanalysis constructed from the
serial dependence structure of the underlying process. A key idea of the new
procedure is to project a generalized eigenequation defined in terms of
rank-reduced matrices onto a lower-dimensional one with full-rank matrices,
avoiding the intricacy of the former, whose number of eigenvalues can be zero,
finite, or infinite. The asymptotic theory is established under a general
setting without assuming stationarity. It shows, for example, that all the
component coefficient vectors in the CP-decomposition are estimated
consistently, with error rates that depend on the relative sizes of the
time-series dimensions and the sample size. The proposed model and the
estimation method are further illustrated with both simulated and real data,
showing effective dimension reduction in modelling and forecasting matrix time
series.
Related papers
- Identification and estimation for matrix time series CP-factor models [0.0]
We investigate the identification and the estimation for matrix time series CP-factor models.
Unlike the generalized eigenanalysis-based method of Chang et al. (2023), the newly proposed estimation can handle rank-deficient factor loading matrices.
In terms of the error rates of the estimation, the proposed procedure is equivalent to handling a vector time series of dimension $\max(p,q)$ instead of $p \times q$.
arXiv Detail & Related papers (2024-10-08T02:32:36Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Support Recovery with Stochastic Gates: Theory and Application for
Linear Models [9.644417971611908]
We analyze the problem of simultaneous support recovery and estimation of the coefficient vector ($\beta^*$) in a linear model with independent and identically distributed Normal errors.
We show that, under reasonable conditions on the design, the dimension, and the sparsity of $\beta^*$, the STG-based estimator converges to the true data-generating coefficient vector and also detects its support set with high probability.
arXiv Detail & Related papers (2021-10-29T17:59:43Z) - Information-Theoretic Generalization Bounds for Iterative
Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z) - Tensor Principal Component Analysis in High Dimensional CP Models [3.553493344868413]
We propose new algorithms for tensor CP decomposition with theoretical guarantees under mild incoherence conditions.
The composite PCA applies the principal component or singular value decomposition twice: first to a matrix unfolding of the tensor data to obtain singular vectors, and then to the matrix folding of the singular vectors obtained in the first step.
We show that our implementations on synthetic data demonstrate significant practical superiority of our approach over existing methods.
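The two-stage unfold-then-fold procedure described above can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: the tensor sizes, CP rank, and noise level are arbitrary choices, and the second-stage step is simplified to taking the leading singular pair of each folded matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2, p3, r = 6, 5, 4, 2  # tensor dimensions and CP rank (illustrative)

# Generate a rank-r CP tensor T = sum_k a_k (x) b_k (x) c_k plus small noise.
A = rng.standard_normal((p1, r))
B = rng.standard_normal((p2, r))
C = rng.standard_normal((p3, r))
T = np.einsum('ik,jk,lk->ijl', A, B, C)
T += 0.01 * rng.standard_normal(T.shape)

# Stage 1: SVD of the mode-1 unfolding (p1 x p2*p3 matrix).
T1 = T.reshape(p1, p2 * p3)
U1, s1, V1t = np.linalg.svd(T1, full_matrices=False)

# Stage 2: fold each leading right-singular vector back into a p2 x p3
# matrix and apply an SVD again; its top singular vectors lie in the
# column spaces spanned by the CP factors B and C.
b_hat = np.empty((p2, r))
c_hat = np.empty((p3, r))
for k in range(r):
    Mk = V1t[k].reshape(p2, p3)   # matrix folding of the k-th singular vector
    u2, s2, v2t = np.linalg.svd(Mk)
    b_hat[:, k], c_hat[:, k] = u2[:, 0], v2t[0]
```

Each folded matrix is a linear combination of the rank-one terms $b_k c_k^\top$, so its singular vectors fall (up to noise) inside the factor column spaces, which is what makes the second-stage SVD informative.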
arXiv Detail & Related papers (2021-08-10T03:24:32Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Large Non-Stationary Noisy Covariance Matrices: A Cross-Validation
Approach [1.90365714903665]
We introduce a novel covariance estimator that exploits the heteroscedastic nature of financial time series.
By attenuating the noise from both the cross-sectional and time-series dimensions, we empirically demonstrate the superiority of our estimator over competing estimators.
arXiv Detail & Related papers (2020-12-10T15:41:17Z) - Graph Gamma Process Generalized Linear Dynamical Systems [60.467040479276704]
We introduce graph gamma process (GGP) linear dynamical systems to model real multivariate time series.
For temporal pattern discovery, the latent representation under the model is used to decompose the time series into a parsimonious set of multivariate sub-sequences.
We use the generated random graph, whose number of nonzero-degree nodes is finite, to define both the sparsity pattern and dimension of the latent state transition matrix.
arXiv Detail & Related papers (2020-07-25T04:16:34Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Asymptotic Errors for Teacher-Student Convex Generalized Linear Models
(or : How to Prove Kabashima's Replica Formula) [23.15629681360836]
We prove an analytical formula for the reconstruction performance of convex generalized linear models.
We show that an analytical continuation may be carried out to extend the result to convex (non-strongly) problems.
We illustrate our claim with numerical examples on mainstream learning methods.
arXiv Detail & Related papers (2020-06-11T16:26:35Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear
Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.