Streaming Factor Trajectory Learning for Temporal Tensor Decomposition
- URL: http://arxiv.org/abs/2310.17021v2
- Date: Tue, 7 Nov 2023 23:05:42 GMT
- Title: Streaming Factor Trajectory Learning for Temporal Tensor Decomposition
- Authors: Shikai Fang, Xin Yu, Shibo Li, Zheng Wang, Robert Kirby, Shandian Zhe
- Abstract summary: We propose Streaming Factor Trajectory Learning for temporal tensor decomposition.
We use Gaussian processes (GPs) to model the trajectory of factors so as to flexibly estimate their temporal evolution.
We have shown the advantage of SFTL in both synthetic tasks and real-world applications.
- Score: 33.18423605559094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Practical tensor data often comes with time information. Most existing
temporal decomposition approaches estimate a set of fixed factors for the
objects in each tensor mode, and hence cannot capture the temporal evolution of
the objects' representations. More importantly, we lack an effective approach to
capture such evolution from streaming data, which is common in real-world
applications. To address these issues, we propose Streaming Factor Trajectory
Learning for temporal tensor decomposition. We use Gaussian processes (GPs) to
model the trajectory of factors so as to flexibly estimate their temporal
evolution. To address the computational challenges in handling streaming data,
we convert the GPs into a state-space prior by constructing an equivalent
stochastic differential equation (SDE). We develop an efficient online
filtering algorithm to estimate a decoupled running posterior of the involved
factor states upon receiving new data. The decoupled estimation enables us to
conduct standard Rauch-Tung-Striebel smoothing to compute the full posterior of
all the trajectories in parallel, without the need for revisiting any previous
data. We have shown the advantage of SFTL in both synthetic tasks and
real-world applications. The code is available at
https://github.com/xuangu-fang/Streaming-Factor-Trajectory-Learning.
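As a rough illustration of the state-space construction described above, the sketch below converts a Matérn-3/2 GP prior over a single one-dimensional trajectory into its equivalent linear SDE, absorbs streamed observations by Kalman filtering, and recovers the full posterior with one backward Rauch-Tung-Striebel pass. The kernel choice, noise levels, and scalar observation model are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the GP-to-SDE idea, assuming a Matern-3/2 kernel and a
# scalar observation model; SFTL's decoupled multi-factor posterior is more
# involved. The state x = [f, f'] follows the linear SDE dx = F x dt + L dW.
import numpy as np
from scipy.linalg import expm

ell, sigma2, R = 1.0, 1.0, 0.05       # lengthscale, variance, noise (assumed)
lam = np.sqrt(3.0) / ell
F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])  # SDE drift for Matern-3/2
Pinf = np.diag([sigma2, lam**2 * sigma2])          # stationary state covariance
H = np.array([[1.0, 0.0]])                         # observe the function value

def filter_then_smooth(times, ys):
    """Online Kalman filtering over a stream, then one RTS smoothing pass."""
    m, P, t_prev = np.zeros(2), Pinf.copy(), times[0]
    mf, Pf, mp, Pp, As = [], [], [], [], []
    for t, y in zip(times, ys):
        A = expm(F * (t - t_prev))                 # transition over the gap
        Q = Pinf - A @ Pinf @ A.T                  # matched process noise
        m_p, P_p = A @ m, A @ P @ A.T + Q          # predict
        S = H @ P_p @ H.T + R                      # innovation variance
        K = (P_p @ H.T) / S                        # Kalman gain
        m = m_p + (K * (y - H @ m_p)).ravel()      # update mean
        P = P_p - K @ H @ P_p                      # update covariance
        mf.append(m); Pf.append(P); mp.append(m_p); Pp.append(P_p); As.append(A)
        t_prev = t
    m_s, P_s = mf[-1], Pf[-1]
    out = [m_s[0]]
    for k in range(len(times) - 2, -1, -1):        # backward pass over stats only
        G = Pf[k] @ As[k + 1].T @ np.linalg.inv(Pp[k + 1])
        m_s = mf[k] + G @ (m_s - mp[k + 1])
        P_s = Pf[k] + G @ (P_s - Pp[k + 1]) @ G.T
        out.append(m_s[0])
    return np.array(out[::-1])                     # smoothed trajectory mean

t = np.linspace(0.0, 5.0, 60)
y = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
trajectory = filter_then_smooth(t, y)
```

Because the smoothing pass reuses only the filtered statistics, previous raw observations never need to be revisited, which is the property the abstract emphasizes.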
Related papers
- Diffusion Transformer Captures Spatial-Temporal Dependencies: A Theory for Gaussian Process Data [39.41800375686212]
Diffusion Transformer, the backbone of Sora for video generation, successfully scales the capacity of diffusion models.
We make a first theoretical step towards understanding how diffusion transformers capture spatial-temporal dependencies.
We highlight how the spatial-temporal dependencies are captured and affect learning efficiency.
arXiv Detail & Related papers (2024-07-23T02:42:43Z) - Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes [24.723536390322582]
Tensor decomposition is an important tool for multiway data analysis.
We propose Dynamic EMbedIngs fOr dynamic Tensor dEcomposition (DEMOTE)
We show the advantage of our approach in both simulation study and real-world applications.
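For intuition, here is a toy diffusion-reaction ODE over entity embeddings in the spirit of DEMOTE; the interaction graph, the tanh reaction network, and forward-Euler integration are all assumptions made for illustration, not the paper's model.

```python
# A minimal sketch of a diffusion-reaction process over entity embeddings:
# a graph-diffusion term co-evolves the embeddings of linked entities, and a
# small "reaction" network adds entity-specific nonlinear dynamics.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                   # entities, embedding size (toy)
U = rng.standard_normal((n, d))               # initial embeddings
A = (rng.random((n, n)) < 0.4).astype(float)  # toy interaction graph
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)
Lap = np.diag(A.sum(1)) - A                   # graph Laplacian
W = 0.1 * rng.standard_normal((d, d))         # toy reaction-net weights

dt = 0.01
for _ in range(200):                          # Euler-integrate the ODE
    diffusion = -Lap @ U                      # smooths linked embeddings
    reaction = np.tanh(U @ W)                 # entity-specific nonlinearity
    U = U + dt * (diffusion + reaction)
```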
arXiv Detail & Related papers (2023-10-30T15:49:45Z) - Pre-training on Synthetic Driving Data for Trajectory Prediction [61.520225216107306]
We propose a pipeline-level solution to mitigate the issue of data scarcity in trajectory forecasting.
We adopt HD map augmentation and trajectory synthesis for generating driving data, and then we learn representations by pre-training on them.
We conduct extensive experiments to demonstrate the effectiveness of our data expansion and pre-training strategies.
arXiv Detail & Related papers (2023-09-18T19:49:22Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness structure in the frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1)
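A minimal sketch of the transform-once pattern, assuming pointwise complex spectral weights and a toy nonlinearity: the input is moved to the frequency domain exactly once, all learned mixing happens there, and a single inverse transform produces the output.

```python
# Transform once: one FFT in, learned mixing in the frequency domain, one
# inverse FFT out, rather than a forward/inverse transform in every layer.
import numpy as np

rng = np.random.default_rng(0)
n = 256                                   # sequence length (assumed)
x = rng.standard_normal(n)

X = np.fft.rfft(x)                        # transform once: time -> frequency
W1 = rng.standard_normal(X.size) + 1j * rng.standard_normal(X.size)
W2 = rng.standard_normal(X.size) + 1j * rng.standard_normal(X.size)

# "Layers" act directly on spectral coefficients; no per-layer FFT is needed.
H1 = W1 * X                               # learned pointwise spectral mixing
H2 = W2 * np.tanh(H1.real) + 1j * H1.imag # toy frequency-domain nonlinearity
y = np.fft.irfft(H2, n=n)                 # transform once more: back to time
```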
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - PARTIME: Scalable and Parallel Processing Over Time with Deep Neural
Networks [68.96484488899901]
We present PARTIME, a library designed to speed up neural networks whenever data is continuously streamed over time.
PARTIME starts processing each data sample at the time in which it becomes available from the stream.
Experiments are performed in order to empirically compare PARTIME with classic non-parallel neural computations in online learning.
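The scheduling idea (sketched here with a plain thread pool, not PARTIME's actual API) is that each sample starts its computation the moment it arrives from the stream, so compute overlaps with arrival instead of waiting for a full batch.

```python
# A conceptual sketch of sample-wise streaming parallelism; the worker
# function and timings are stand-ins, and nothing here uses PARTIME itself.
import time
from concurrent.futures import ThreadPoolExecutor

def forward(sample):
    time.sleep(0.05)              # stand-in for a network forward/backward
    return sample * 2

def stream():
    for i in range(10):
        time.sleep(0.01)          # samples arrive faster than they compute
        yield i

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(forward, s) for s in stream()]  # start on arrival
    results = [f.result() for f in futures]
```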
arXiv Detail & Related papers (2022-10-17T14:49:14Z) - Nonparametric Factor Trajectory Learning for Dynamic Tensor
Decomposition [20.55025648415664]
We propose NONparametric FActor Trajectory learning for dynamic tensor decomposition (NONFAT)
We use a second-level GP to sample the entry values and to capture the temporal relationship between the entities.
We have shown the advantage of our method in several real-world applications.
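A rough sketch of the two-level construction, assuming RBF kernels and toy dimensions: first-level GPs give each entity a factor trajectory over time, and a second-level GP maps the involved entities' factor values to the entry value.

```python
# A minimal two-level GP sampler in the spirit of NONFAT; the kernels,
# lengthscales, and sizes are illustrative, not the paper's parameterization.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
T = np.linspace(0.0, 1.0, 50)[:, None]             # time grid

# Level 1: sample a factor trajectory for each of two entities over time.
K_t = rbf(T, T, ell=0.2) + 1e-6 * np.eye(50)
u_i = rng.multivariate_normal(np.zeros(50), K_t)   # entity i's trajectory
u_j = rng.multivariate_normal(np.zeros(50), K_t)   # entity j's trajectory

# Level 2: a GP over the stacked factor values yields the entry value m_ij(t).
Z = np.stack([u_i, u_j], axis=1)                   # (time, factor) inputs
K_z = rbf(Z, Z, ell=1.0) + 1e-6 * np.eye(50)
m_ij = rng.multivariate_normal(np.zeros(50), K_z)  # sampled entry trajectory
```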
arXiv Detail & Related papers (2022-07-06T05:33:00Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions for data imputation by integrating the alternating direction method of multipliers (ADMM).
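As a hedged illustration of the shrinkage step inside such an ADMM scheme, the sketch below soft-thresholds all but the top-r singular values of a matrix unfolding; this is the p = 1 special case of the truncated Schatten p-norm proximal step, and the general p needs an inner scalar solver.

```python
# Truncated singular value thresholding: the largest r singular values (the
# "truncated" part) are kept intact, the tail is soft-thresholded (p = 1).
import numpy as np

def truncated_svt(M, tau, r):
    """Shrink all but the top-r singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = s.copy()
    s_shrunk[r:] = np.maximum(s[r:] - tau, 0.0)    # soft-threshold the tail
    return (U * s_shrunk) @ Vt

# Toy usage on a noisy low-rank traffic-like matrix (locations x time steps).
rng = np.random.default_rng(0)
L = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 100))
X = L + 0.1 * rng.standard_normal((30, 100))
X_denoised = truncated_svt(X, tau=1.0, r=4)
```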
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Emulating Spatio-Temporal Realizations of Three-Dimensional Isotropic
Turbulence via Deep Sequence Learning Models [24.025975236316842]
We use a data-driven approach to model a three-dimensional turbulent flow using cutting-edge Deep Learning techniques.
The accuracy of the model is assessed using statistical and physics-based metrics.
arXiv Detail & Related papers (2021-12-07T03:33:39Z) - Low-Rank Hankel Tensor Completion for Traffic Speed Estimation [7.346671461427793]
We propose a purely data-driven and model-free solution to the traffic state estimation problem.
By imposing a low-rank assumption on this tensor structure, we can approximately characterize both global patterns and the unknown, complex local dynamics.
We conduct numerical experiments on both synthetic simulation data and real-world high-resolution data, and our results demonstrate the effectiveness and superiority of the proposed model.
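A simplified 1-D version of the Hankelization idea (the paper works with a full tensor structure): lift the speed series into a Hankel matrix, truncate its SVD to impose low rank, and map back by averaging anti-diagonals. The window size and rank below are arbitrary.

```python
# Hankel lift -> low-rank approximation -> anti-diagonal averaging, as a toy
# model-free smoother for a noisy traffic speed series.
import numpy as np

def hankelize(x, win):
    n = x.size - win + 1
    return np.stack([x[i:i + win] for i in range(n)])   # (n, win) Hankel lift

def dehankelize(H_mat, size):
    x, cnt = np.zeros(size), np.zeros(size)
    n, win = H_mat.shape
    for i in range(n):
        x[i:i + win] += H_mat[i]
        cnt[i:i + win] += 1.0
    return x / cnt                                      # anti-diagonal average

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
speed = 60 + 10 * np.sin(t) + rng.standard_normal(200) # noisy speed profile

H = hankelize(speed, win=40)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
H_lowrank = (U[:, :3] * s[:3]) @ Vt[:3]                 # rank-3 approximation
speed_hat = dehankelize(H_lowrank, speed.size)          # smoothed estimate
```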
arXiv Detail & Related papers (2021-05-21T00:08:06Z) - Time-Series Imputation with Wasserstein Interpolation for Optimal
Look-Ahead-Bias and Variance Tradeoff [66.59869239999459]
In finance, imputation of missing returns may be applied prior to training a portfolio optimization model.
There is an inherent trade-off between the look-ahead-bias of using the full data set for imputation and the larger variance in the imputation from using only the training data.
We propose a Bayesian posterior consensus distribution which optimally controls the variance and look-ahead-bias trade-off in the imputation.
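For the Gaussian case the trade-off is easy to picture: the Wasserstein-2 geodesic between two 1-D Gaussian imputation posteriors interpolates their means and standard deviations linearly, giving a one-parameter family between the unbiased-but-noisy and biased-but-tight extremes. The numbers below are purely illustrative.

```python
# W2 interpolation between a train-only posterior (no look-ahead bias, high
# variance) and a full-data posterior (look-ahead bias, low variance).
import numpy as np

m_train, s_train = 0.02, 0.30     # train-only posterior (assumed numbers)
m_full, s_full = 0.05, 0.10      # full-data posterior (assumed numbers)

def w2_interp(t):
    """Point at fraction t along the W2 geodesic between the two Gaussians."""
    m = (1 - t) * m_train + t * m_full
    s = (1 - t) * s_train + t * s_full
    return m, s

m, s = w2_interp(0.5)             # a consensus imputation distribution
samples = np.random.default_rng(0).normal(m, s, size=100)
```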
arXiv Detail & Related papers (2021-02-25T09:05:35Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
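A toy sketch of the tensor-train contraction idea (core ranks and feature sizes are assumptions): small per-step cores are contracted with one time step's features at a time, so the joint weight tensor over all steps is never materialized.

```python
# Tensor-train contraction across time steps: cost grows linearly in the
# number of steps instead of exponentially with a full joint weight tensor.
import numpy as np

rng = np.random.default_rng(0)
steps, feat, rank = 3, 8, 4
# One TT core per time step: (rank_in, feat, rank_out); boundary ranks are 1.
cores = [rng.standard_normal((1 if k == 0 else rank, feat,
                              1 if k == steps - 1 else rank)) * 0.1
         for k in range(steps)]
history = [rng.standard_normal(feat) for _ in range(steps)]  # past features

msg = np.ones(1)                  # left boundary message
for core, h in zip(cores, history):
    msg = np.einsum('i,ifj,f->j', msg, core, h)  # absorb one time step
out = msg                         # combined response over all time steps
```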
arXiv Detail & Related papers (2020-02-21T05:00:01Z)