Latent Processes Identification From Multi-View Time Series
- URL: http://arxiv.org/abs/2305.08164v1
- Date: Sun, 14 May 2023 14:21:58 GMT
- Title: Latent Processes Identification From Multi-View Time Series
- Authors: Zenan Huang, Haobo Wang, Junbo Zhao, Nenggan Zheng
- Abstract summary: We propose a novel framework, MuLTI, that employs contrastive learning to invert the data generative process for enhanced identifiability.
MuLTI integrates a permutation mechanism that merges corresponding overlapped variables via an optimal transport formulation.
- Score: 17.33428123777779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the dynamics of time series data typically requires identifying
the unique latent factors for data generation, a.k.a. latent processes
identification. Driven by the independence assumption, existing works have
made great progress in handling single-view data. However, extending them to
multi-view time series data is non-trivial because of two main challenges:
(i) complex data structures, such as temporal dependency, can violate the
independence assumption; (ii) the factors from different views generally
overlap and are hard to aggregate into a complete set. In this work, we
propose a novel framework, MuLTI,
that employs the contrastive learning technique to invert the data generative
process for enhanced identifiability. Additionally, MuLTI integrates a
permutation mechanism that merges corresponding overlapped variables via an
optimal transport formulation. Extensive experimental results
on synthetic and real-world datasets demonstrate the superiority of our method
in recovering identifiable latent variables on multi-view time series.
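As a rough illustration of the optimal-transport merging step (a minimal sketch under our own toy setup, not the authors' code), entropy-regularized Sinkhorn iterations can softly match latent dimensions recovered from two views so the overlapped ones can be merged:

```python
# Minimal sketch: soft-match latent dimensions from two views with
# entropy-regularized optimal transport (Sinkhorn iterations).
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=200):
    """Entropy-regularized OT between uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / eps)              # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # transport plan

# Toy latents: dims 0-2 of view 2 are noisy copies of dims [2, 3, 1] of view 1
# (the "overlapped" part that needs merging).
rng = np.random.default_rng(0)
z1 = rng.normal(size=(500, 4))
z2 = z1[:, [2, 3, 1]] + 0.05 * rng.normal(size=(500, 3))

# Cost: 1 - |correlation| between latent dimensions across views.
corr = np.corrcoef(z1.T, z2.T)[:4, 4:]
plan = sinkhorn(1.0 - np.abs(corr))
print(plan.argmax(axis=0))  # recovered matching: [2 3 1]
```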
Related papers
- TimeDiT: General-purpose Diffusion Transformers for Time Series Foundation Model [11.281386703572842]
TimeDiT is a diffusion transformer model that combines temporal dependency learning with probabilistic sampling.
TimeDiT employs a unified masking mechanism to harmonize the training and inference process across diverse tasks.
Our systematic evaluation demonstrates TimeDiT's effectiveness on fundamental tasks, i.e., forecasting and imputation, in both zero-shot and fine-tuning settings.
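A hedged sketch of what a unified masking mechanism can look like (our illustrative helper, not TimeDiT's code): one mask generator covers both forecasting (hide the future) and imputation (hide random entries), so a single denoiser can train on both task families:

```python
import torch

def make_mask(batch, task, horizon=16, missing_rate=0.3):
    """Return a boolean mask: True = observed, False = to be generated."""
    B, T, D = batch.shape
    mask = torch.ones(B, T, D, dtype=torch.bool)
    if task == "forecast":
        mask[:, -horizon:, :] = False               # hide the last `horizon` steps
    elif task == "impute":
        mask &= torch.rand(B, T, D) > missing_rate  # hide random entries
    return mask

x = torch.randn(8, 96, 7)
for task in ("forecast", "impute"):
    m = make_mask(x, task)
    x_in = torch.where(m, x, torch.zeros_like(x))   # model sees only observed part
    print(task, "observed fraction:", m.float().mean().item())
```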
arXiv Detail & Related papers (2024-09-03T22:31:57Z)
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose a transformer-based model, UniTST, containing a unified attention mechanism on the flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
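A minimal sketch of attention over flattened patch tokens (dimension names are ours, not UniTST's): patches from all variates form one token sequence, so a single attention step can mix inter-series and intra-series dependencies:

```python
import torch
import torch.nn as nn

B, V, L, P, d = 4, 7, 96, 16, 64              # batch, variates, length, patch, dim
x = torch.randn(B, V, L)

tokens = x.unfold(-1, P, P)                   # (B, V, L//P, P): non-overlapping patches
tokens = nn.Linear(P, d)(tokens)              # embed each patch: (B, V, L//P, d)
tokens = tokens.reshape(B, V * (L // P), d)   # flatten variate x patch axes

attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
out, _ = attn(tokens, tokens, tokens)         # one attention over all tokens
print(out.shape)                              # torch.Size([4, 42, 64])
```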
arXiv Detail & Related papers (2024-06-07T14:39:28Z)
- ComboStoc: Combinatorial Stochasticity for Diffusion Generative Models [65.82630283336051]
We show that the space spanned by the combination of dimensions and attributes is insufficiently sampled by existing training schemes of diffusion generative models.
We present a simple fix to this problem by constructing processes that fully exploit the structures, hence the name ComboStoc.
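A minimal sketch of the idea as we read it (illustrative, not the authors' code): draw an independent diffusion timestep per dimension instead of one shared timestep per sample, so asynchronous combinations of noise levels get sampled during training:

```python
import torch

def noisy_sample(x0, async_timesteps=True, T=1000):
    B, D = x0.shape
    if async_timesteps:
        t = torch.randint(0, T, (B, D))                 # per-dimension timestep
    else:
        t = torch.randint(0, T, (B, 1)).expand(B, D)    # usual shared timestep
    alpha_bar = torch.cos(0.5 * torch.pi * t / T) ** 2  # toy cosine noise schedule
    eps = torch.randn_like(x0)
    return alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * eps, t

x0 = torch.randn(8, 32)
xt, t = noisy_sample(x0)
print(xt.shape, t.shape)
```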
arXiv Detail & Related papers (2024-05-22T15:23:10Z)
- Time-to-Pattern: Information-Theoretic Unsupervised Learning for Scalable Time Series Summarization [7.294418916091012]
We introduce an approach to time series summarization called Time-to-Pattern (T2P).
T2P aims to find a set of diverse patterns that together encode the most salient information, following the notion of minimum description length.
Our synthetic and real-world experiments reveal that T2P discovers informative patterns, even in noisy and complex settings.
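To make the minimum-description-length notion concrete, here is a toy two-part scoring function (our simplification, not T2P's objective): a pattern set is good when the cost of the dictionary plus the cost of the residual is small:

```python
import numpy as np

def description_length(series, patterns, bits_per_symbol=8):
    """Greedy cover: cost of the dictionary plus cost of the residual."""
    covered = np.zeros(len(series), dtype=bool)
    cost = sum(len(p) * bits_per_symbol for p in patterns)       # L(patterns)
    for p in patterns:
        k = len(p)
        for i in range(len(series) - k + 1):
            if not covered[i:i+k].any() and np.array_equal(series[i:i+k], p):
                covered[i:i+k] = True
                cost += np.ceil(np.log2(len(patterns) + 1))      # pattern id
    cost += (~covered).sum() * bits_per_symbol                   # raw residual
    return cost

s = np.array([1, 2, 3, 1, 2, 3, 9, 1, 2, 3, 5])
print(description_length(s, [np.array([1, 2, 3])]))  # repeating motif compresses well
print(description_length(s, []))                     # baseline: raw encoding
```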
arXiv Detail & Related papers (2023-08-26T01:15:32Z)
- Probabilistic Learning of Multivariate Time Series with Temporal Irregularity [21.361823581838355]
Real-world time series often suffer from temporal irregularities, including nonuniform intervals and misaligned variables.
We propose an end-to-end framework that models temporal irregularities while capturing the joint distribution of variables at arbitrary continuous-time points.
arXiv Detail & Related papers (2023-06-15T14:08:48Z)
- Robust Detection of Lead-Lag Relationships in Lagged Multi-Factor Models [61.10851158749843]
Key insights can be obtained by discovering lead-lag relationships inherent in the data.
We develop a clustering-driven methodology for robust detection of lead-lag relationships in lagged multi-factor models.
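For context, a common baseline for pairwise lead-lag estimation (a generic sketch, not the paper's clustering methodology) picks the lag that maximizes cross-correlation:

```python
import numpy as np

def best_lag(x, y, max_lag=20):
    """Positive result means x leads y by that many steps."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = range(-max_lag, max_lag + 1)
    scores = [np.corrcoef(x[max(0, -l):len(x) - max(0, l)],
                          y[max(0, l):len(y) - max(0, -l)])[0, 1]
              for l in lags]
    return list(lags)[int(np.argmax(scores))]

rng = np.random.default_rng(1)
leader = rng.normal(size=500).cumsum()
follower = np.roll(leader, 5) + 0.1 * rng.normal(size=500)  # lags by 5 steps
print(best_lag(leader, follower))  # expect ~5
```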
arXiv Detail & Related papers (2023-05-11T10:30:35Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
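A hedged sketch of the hierarchical multi-scale idea (our layer choices, not FormerTime's exact architecture): alternate attention blocks with strided convolutions, so deeper stages see shorter, coarser token sequences:

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    def __init__(self, d, stride):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.down = nn.Conv1d(d, d, kernel_size=stride, stride=stride)

    def forward(self, x):                  # x: (B, T, d)
        x = self.attn(x)
        return self.down(x.transpose(1, 2)).transpose(1, 2)  # (B, T//stride, d)

model = nn.Sequential(Stage(64, 4), Stage(64, 2), Stage(64, 2))
x = torch.randn(8, 128, 64)
print(model(x).shape)  # torch.Size([8, 8, 64]): 128 -> 32 -> 16 -> 8
```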
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
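For context, here is a toy affine coupling layer, the generic normalizing-flow building block (not MANF's architecture; per the abstract, MANF combines flow layers with multi-scale attention):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d // 2, 64), nn.ReLU(),
                                 nn.Linear(64, d))  # outputs scale and shift

    def forward(self, x):                  # x: (B, d)
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t         # invertible given x1
        log_det = s.sum(-1)                # change-of-variables term
        return torch.cat([x1, y2], dim=-1), log_det

flow = AffineCoupling(d=8)
y, log_det = flow(torch.randn(4, 8))
print(y.shape, log_det.shape)
```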
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- PIETS: Parallelised Irregularity Encoders for Forecasting with Heterogeneous Time-Series [5.911865723926626]
Heterogeneity and irregularity of multi-source data sets present a significant challenge to time-series analysis.
In this work, we design a novel architecture, PIETS, to model heterogeneous time-series.
We show that PIETS is able to effectively model heterogeneous temporal data and outperforms other state-of-the-art approaches in the prediction task.
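A minimal sketch of the parallelised-encoder idea (module names are ours, not PIETS's): each source gets its own encoder at its native length, and the final states are fused for prediction:

```python
import torch
import torch.nn as nn

class ParallelEncoders(nn.Module):
    def __init__(self, n_sources, d_in, d_h):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.GRU(d_in, d_h, batch_first=True) for _ in range(n_sources)])
        self.head = nn.Linear(n_sources * d_h, 1)

    def forward(self, sources):            # list of (B, T_i, d_in), T_i may differ
        states = [enc(x)[1][-1] for enc, x in zip(self.encoders, sources)]
        return self.head(torch.cat(states, dim=-1))

model = ParallelEncoders(n_sources=3, d_in=2, d_h=16)
xs = [torch.randn(4, T, 2) for T in (30, 55, 12)]  # irregular lengths per source
print(model(xs).shape)  # torch.Size([4, 1])
```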
arXiv Detail & Related papers (2021-09-30T20:01:19Z)
- On Disentanglement in Gaussian Process Variational Autoencoders [3.403279506246879]
We study a recently introduced class of models that has been successful in different tasks on time series data.
Our model exploits the temporal structure of the data by modeling each latent channel with a GP prior and employing a structured variational distribution.
We provide evidence that we can learn meaningful disentangled representations on real-world medical time series data.
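A minimal sketch of the prior structure described above (illustrative only): each latent channel gets an independent GP prior over time, here with an RBF kernel, which is what lets the model exploit temporal structure:

```python
import numpy as np

def rbf_kernel(t, lengthscale=5.0):
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

T, C = 100, 3                          # time steps, latent channels
t = np.arange(T, dtype=float)
K = rbf_kernel(t) + 1e-6 * np.eye(T)   # jitter for numerical stability
L = np.linalg.cholesky(K)

rng = np.random.default_rng(0)
z = L @ rng.normal(size=(T, C))        # (T, C): smooth latent trajectories
print(z.shape)
```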
arXiv Detail & Related papers (2021-02-10T15:49:27Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
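For context, here is a toy classical Hawkes intensity (not THP itself, which replaces the fixed exponential kernel below with self-attention over the event history):

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i))."""
    past = np.asarray([ti for ti in history if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = [1.0, 1.5, 4.0]
for t in (2.0, 5.0):
    print(t, hawkes_intensity(t, events))  # intensity spikes shortly after events
```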
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.