Nonparametric Factor Trajectory Learning for Dynamic Tensor
Decomposition
- URL: http://arxiv.org/abs/2207.02446v1
- Date: Wed, 6 Jul 2022 05:33:00 GMT
- Title: Nonparametric Factor Trajectory Learning for Dynamic Tensor
Decomposition
- Authors: Zheng Wang, Shandian Zhe
- Abstract summary: We propose NONparametric FActor Trajectory learning for dynamic tensor decomposition (NONFAT).
We use a second-level GP to sample the entry values and to capture the temporal relationship between the entities.
We have shown the advantage of our method in several real-world applications.
- Score: 20.55025648415664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor decomposition is a fundamental framework to analyze data that can be
represented by multi-dimensional arrays. In practice, tensor data is often
accompanied by temporal information, namely the time points when the entry
values were generated. This information implies abundant, complex temporal
variation patterns. However, current methods always assume the factor
representations of the entities in each tensor mode are static, and never
consider their temporal evolution. To fill this gap, we propose NONparametric
FActor Trajectory learning for dynamic tensor decomposition (NONFAT). We place
Gaussian process (GP) priors in the frequency domain and conduct inverse
Fourier transform via Gauss-Laguerre quadrature to sample the trajectory
functions. In this way, we can overcome data sparsity and obtain robust
trajectory estimates across long time horizons. Given the trajectory values at
specific time points, we use a second-level GP to sample the entry values and
to capture the temporal relationship between the entities. For efficient and
scalable inference, we leverage the matrix Gaussian structure in the model,
introduce a matrix Gaussian posterior, and develop a nested sparse variational
learning algorithm. We have shown the advantage of our method in several
real-world applications.
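To make the frequency-domain construction concrete, here is a minimal sketch, not the authors' implementation: sample a GP at Gauss-Laguerre quadrature nodes in the frequency domain, then recover a time-domain trajectory through a quadrature approximation of the inverse Fourier transform. The RBF kernel and the real, even-spectrum simplification are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's code): GP in the frequency domain +
# Gauss-Laguerre quadrature for the inverse Fourier transform.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between frequency nodes a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Gauss-Laguerre rule: int_0^inf e^{-w} f(w) dw ~ sum_k wt[k] f(nd[k]).
K = 32
nd, wt = np.polynomial.laguerre.laggauss(K)

# One spectral function sampled from the frequency-domain GP prior.
rng = np.random.default_rng(0)
cov = rbf_kernel(nd, nd) + 1e-6 * np.eye(K)
spectrum = rng.multivariate_normal(np.zeros(K), cov)

def trajectory(t):
    """u(t) = 2 * int_0^inf u_hat(w) cos(w t) dw, via quadrature.

    Assuming a real, even spectrum, the inverse Fourier transform
    reduces to this cosine integral; exp(nd) undoes the e^{-w} weight
    the Gauss-Laguerre rule bakes in.
    """
    return 2.0 * np.sum(wt * np.exp(nd) * spectrum * np.cos(nd * t))

u = np.array([trajectory(t) for t in np.linspace(0.0, 5.0, 200)])
```

Because every time point integrates over the same sampled spectrum, sparse or irregular time stamps still yield a smooth long-horizon trajectory, which is the robustness the abstract claims.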
Related papers
- Correlating Time Series with Interpretable Convolutional Kernels [18.77493756204539]
This study addresses the problem of convolutional kernel learning in time series data.
We use tensor computations to reformulate convolutional kernel learning as a tensor problem.
This study lays an insightful foundation for automatically learning convolutional kernels from time series data.
arXiv Detail & Related papers (2024-09-02T16:29:21Z)
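A hedged sketch of the general idea only, not the paper's tensor formulation: fit a short, interpretable convolutional kernel by ordinary least squares over lagged copies of the series. The kernel length tau and the autoregressive framing are assumptions.

```python
# Illustrative stand-in: learn a length-tau kernel theta such that
# x[t] ~ sum_k theta[k] * x[t - 1 - k], by ordinary least squares.
import numpy as np

def learn_conv_kernel(x, tau=4):
    T = len(x)
    # Row i of A holds the tau values preceding x[tau + i].
    A = np.column_stack([x[tau - 1 - k : T - 1 - k] for k in range(tau)])
    return np.linalg.lstsq(A, x[tau:], rcond=None)[0]

rng = np.random.default_rng(1)
t = np.arange(500)
x = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(500)
theta = learn_conv_kernel(x)  # kernel weights expose the serial structure
```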
- Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes [24.723536390322582]
Tensor decomposition is an important tool for multiway data analysis.
We propose Dynamic EMbedIngs fOr dynamic Tensor dEcomposition (DEMOTE).
We show the advantage of our approach in both simulation study and real-world applications.
arXiv Detail & Related papers (2023-10-30T15:49:45Z)
- Streaming Factor Trajectory Learning for Temporal Tensor Decomposition [33.18423605559094]
We propose Streaming Factor Trajectory Learning (SFTL) for temporal tensor decomposition.
We use Gaussian processes (GPs) to model the trajectory of factors so as to flexibly estimate their temporal evolution.
We have shown the advantage of SFTL in both synthetic tasks and real-world applications.
arXiv Detail & Related papers (2023-10-25T21:58:52Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
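A toy sketch of the normalized-gradient idea above; the single-index model, tanh link, and spiked covariance are illustrative assumptions, not the paper's exact setting. The point is renormalizing the weight to unit norm after each step, reminiscent of batch normalization, so anisotropic inputs do not derail recovery of the planted direction.

```python
# Toy illustration: recover a planted direction under spiked inputs
# by gradient descent with per-step weight normalization.
import numpy as np

rng = np.random.default_rng(2)
d, n = 50, 5000
w_star = np.zeros(d); w_star[0] = 1.0                 # planted direction
# Spiked, anisotropic inputs: extra variance along w_star.
X = rng.standard_normal((n, d)) + 3.0 * np.outer(rng.standard_normal(n), w_star)
y = np.tanh(X @ w_star)                               # assumed link g = tanh

w = rng.standard_normal(d); w /= np.linalg.norm(w)
for _ in range(200):
    pred = np.tanh(X @ w)
    grad = ((pred - y) * (1.0 - pred ** 2)) @ X / n   # dMSE/dw
    w -= 0.1 * grad
    w /= np.linalg.norm(w)                            # normalization step

print(abs(w @ w_star))  # alignment -> 1 when the direction is recovered
```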
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
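A minimal sketch of the inverse-free gradient estimate, assuming a Gaussian marginal likelihood and with SciPy's conjugate-gradient solver standing in for the paper's unrolled solver: every matrix inversion becomes a CG solve, and the trace term is estimated with Monte Carlo probe vectors.

```python
# Inversion-free gradient of a Gaussian log-likelihood: only
# matrix-vector products with Sigma and dSigma/dtheta are needed.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def grad_loglik(Sigma_mv, dSigma_mv, y, n, n_probes=8, seed=0):
    """d/dtheta log N(y; 0, Sigma)
       = 0.5 * y^T S^-1 dS S^-1 y - 0.5 * tr(S^-1 dS)."""
    rng = np.random.default_rng(seed)
    S = LinearOperator((n, n), matvec=Sigma_mv)
    alpha, _ = cg(S, y)                      # S^-1 y via CG, no inverse
    quad = 0.5 * alpha @ dSigma_mv(alpha)    # data-fit term
    tr = 0.0
    for _ in range(n_probes):                # Hutchinson trace estimator
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        u, _ = cg(S, z)                      # S^-1 z via CG
        tr += u @ dSigma_mv(z)
    return quad - 0.5 * tr / n_probes

# Toy usage: Sigma(theta) = theta * I + K, so dSigma/dtheta = I.
n = 100
rng = np.random.default_rng(3)
A = rng.standard_normal((n, n)); K = A @ A.T / n
theta, y = 0.5, rng.standard_normal(n)
g = grad_loglik(lambda v: theta * v + K @ v, lambda v: v, y, n)
```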
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness structure in the frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
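A hedged sketch of the transform-once recipe; layer sizes, mode count, and the pointwise MLP are assumptions, not the paper's architecture. The key move: one rFFT on the way in, learning directly on the retained frequency modes, and one inverse rFFT on the way out, rather than transforming back and forth in every layer.

```python
# One forward transform, learning on modes, one inverse transform.
import torch
import torch.nn as nn

class T1Block(nn.Module):
    def __init__(self, n_modes=32, width=64):
        super().__init__()
        self.n_modes = n_modes
        # Pointwise MLP over (real, imag) pairs of each retained mode.
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.GELU(), nn.Linear(width, 2))

    def forward(self, x):                         # x: (batch, length)
        z = torch.fft.rfft(x)                     # transform once, in
        z = z[:, : self.n_modes]                  # keep low-freq modes
        f = self.net(torch.stack([z.real, z.imag], dim=-1))
        z = torch.complex(f[..., 0], f[..., 1])   # back to complex modes
        return torch.fft.irfft(z, n=x.shape[-1])  # transform once, out

y = T1Block()(torch.randn(4, 256))                # shape (4, 256)
```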
- Hankel-structured Tensor Robust PCA for Multivariate Traffic Time Series Anomaly Detection [9.067182100565695]
This study proposes a Hankel-structured tensor version of RPCA for anomaly detection in spatiotemporal data.
We decompose the corrupted matrix into a low-rank Hankel tensor and a sparse matrix.
We evaluate the method on synthetic data and passenger flow time series.
arXiv Detail & Related papers (2021-10-08T19:35:39Z)
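A simplified matrix sketch of the idea: the paper works with a Hankel tensor and a proper RPCA solver, whereas the 2-D Hankel embedding and the naive alternating-thresholding loop below are illustrative stand-ins.

```python
# Embed the series in a Hankel matrix, then split it into a low-rank
# part (regular pattern) plus a sparse part (anomalies).
import numpy as np

def hankel(x, window):
    """Hankel embedding: column j holds the segment x[j : j + window]."""
    return np.column_stack(
        [x[j : j + window] for j in range(len(x) - window + 1)])

def svt(M, tau):
    """Singular-value thresholding toward a low-rank matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(M, lam=0.1, tau=1.0, n_iter=50):
    """Split M ~ L (low-rank) + S (sparse) by alternating thresholding."""
    L = np.zeros_like(M); S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau)
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)
    return L, S

t = np.arange(400)
x = np.sin(2 * np.pi * t / 50)     # periodic "passenger flow"
x[200] += 5.0                      # injected anomaly
L, S = rpca(hankel(x, window=60))  # large |S| entries flag the anomaly
```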
- Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent deep learning-based approaches have achieved sub-second runtimes for diffeomorphic image registration (DiffIR).
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
- Efficient Variational Bayesian Structure Learning of Dynamic Graphical Models [19.591265962713837]
Estimating time-varying graphical models is of paramount importance in various social, financial, biological, and engineering systems.
Existing methods require extensive tuning of parameters that control the graph sparsity and temporal smoothness.
We propose a low-complexity tuning-free Bayesian approach, named BADGE.
arXiv Detail & Related papers (2020-09-16T14:19:23Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
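A compact sketch of a liquid time-constant cell in the spirit of the summary; the gate parameterization and plain Euler integration are assumptions. Each unit follows a linear first-order ODE whose effective time constant is modulated by a learned nonlinear gate.

```python
# Per-unit dynamics: dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
# where f is a learned sigmoid gate over the state and the input.
import numpy as np

class LTCCell:
    def __init__(self, n_units, n_inputs, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((n_units, n_units + n_inputs))
        self.b = np.zeros(n_units)
        self.A = rng.standard_normal(n_units)  # per-unit attractors
        self.tau = tau

    def step(self, x, u, dt=0.05):
        """One Euler step; f gates both the decay rate and the drive."""
        f = 1.0 / (1.0 + np.exp(-(self.W @ np.concatenate([x, u]) + self.b)))
        return x + dt * (-(1.0 / self.tau + f) * x + f * self.A)

cell = LTCCell(n_units=8, n_inputs=2)
x = np.zeros(8)
for t in range(100):
    x = cell.step(x, np.array([np.sin(0.1 * t), 1.0]))
# The state stays bounded: the effective time constant 1/(1/tau + f)
# adapts with the input, which is the "liquid" behavior of the model.
```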
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in video sequences.
This is accomplished through a novel tensor-train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
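A hedged sketch of the temporal tensor-train idea named above; shapes, ranks, and the aggregation rule are assumptions, not the paper's module. A window of T convolutional feature maps is mixed through small per-step tensor-train cores instead of one large weight tensor over all steps.

```python
# Mix T past feature maps via a chain of small tensor-train cores.
import torch
import torch.nn as nn

class TTTemporalMix(nn.Module):
    def __init__(self, T=3, channels=16, rank=4):
        super().__init__()
        ranks = [1] + [rank] * (T - 1) + [1]     # TT ranks 1-r-...-r-1
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(ranks[i], channels, ranks[i + 1]))
            for i in range(T)])
        self.out_conv = nn.Conv2d(1, channels, 3, padding=1)

    def forward(self, feats):                    # list of T (B,C,H,W) maps
        msg = None                               # running (B, rank, H, W)
        for x, core in zip(feats, self.cores):
            # Contract channels into the core: (B,C,H,W),(i,C,j)->(B,i,j,H,W)
            mixed = torch.einsum('bchw,icj->bijhw', x, core)
            if msg is None:
                msg = mixed[:, 0]                # leading TT rank is 1
            else:
                msg = torch.einsum('bihw,bijhw->bjhw', msg, mixed)
        return self.out_conv(msg)                # trailing rank collapses to 1

feats = [torch.randn(2, 16, 8, 8) for _ in range(3)]
y = TTTemporalMix()(feats)                       # (2, 16, 8, 8)
```

The chained contraction is the usual tensor-train economy: parameters grow linearly in T instead of exponentially in the order of the interaction.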
This list is automatically generated from the titles and abstracts of the papers on this site.