Interpretable Time-series Representation Learning With Multi-Level
Disentanglement
- URL: http://arxiv.org/abs/2105.08179v1
- Date: Mon, 17 May 2021 22:02:24 GMT
- Title: Interpretable Time-series Representation Learning With Multi-Level
Disentanglement
- Authors: Yuening Li, Zhengzhang Chen, Daochen Zha, Mengnan Du, Denghui Zhang,
Haifeng Chen, Xia Hu
- Abstract summary: Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
- Score: 56.38489708031278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time-series representation learning is a fundamental task for time-series
analysis. While significant progress has been made to achieve accurate
representations for downstream applications, the learned representations often
lack interpretability and do not expose semantic meanings. Different from
previous efforts on the entangled feature space, we aim to extract the
semantic-rich temporal correlations in the latent interpretable factorized
representation of the data. Motivated by the success of disentangled
representation learning in computer vision, we study the possibility of
learning semantic-rich time-series representations, which remains unexplored
due to three main challenges: 1) sequential data structure introduces complex
temporal correlations and makes the latent representations hard to interpret,
2) sequential models suffer from the KL vanishing problem, and 3) interpretable
semantic concepts for time-series often rely on multiple factors rather than
individual ones. To bridge the gap, we propose Disentangle Time Series (DTS), a
novel disentanglement enhancement framework for sequential data. Specifically,
to generate hierarchical semantic concepts as the interpretable and
disentangled representation of time-series, DTS introduces multi-level
disentanglement strategies by covering both individual latent factors and group
semantic segments. We further theoretically show how to alleviate the KL
vanishing problem: DTS introduces a mutual information maximization term, while
preserving a heavier penalty on the total correlation and the dimension-wise KL
to keep the disentanglement property. Experimental results on various
real-world benchmark datasets demonstrate that the representations learned by
DTS achieve superior performance in downstream applications, with high
interpretability of semantic concepts.
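The KL-vanishing remedy described above matches the standard decomposition of the ELBO's KL term into mutual information, total correlation, and dimension-wise KL, as popularized by beta-TCVAE. Below is a minimal sketch of that kind of objective in conventional notation; the weighting scheme (alpha, beta, gamma) is an illustrative assumption, not the paper's exact formulation.

```latex
% Sketch of a decomposed ELBO objective (beta-TCVAE-style decomposition);
% the weights \alpha, \beta, \gamma are illustrative, not values from DTS.
\mathcal{L}
  = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right]
  + \alpha \, I_q(x; z)                                   % mutual information (counters KL vanishing)
  - \beta \, \mathrm{KL}\!\left(q(z) \,\middle\|\, \prod_j q(z_j)\right)  % total correlation
  - \gamma \sum_j \mathrm{KL}\!\left(q(z_j) \,\|\, p(z_j)\right)          % dimension-wise KL
```

Choosing beta, gamma > 1 preserves the heavier penalties on total correlation and dimension-wise KL that the abstract mentions, while the alpha-weighted mutual-information term offsets the posterior collapse behind KL vanishing in sequential models.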
Related papers
- Semantic-Guided Multimodal Sentiment Decoding with Adversarial Temporal-Invariant Learning [22.54577327204281]
Multimodal sentiment analysis aims to learn representations from different modalities to identify human emotions.
Existing works often neglect the frame-level redundancy inherent in continuous time series, resulting in incomplete modality representations with noise.
We propose temporal-invariant learning for the first time, which constrains the distributional variations over time steps to effectively capture long-term temporal dynamics.
arXiv Detail & Related papers (2024-08-30T03:28:40Z)
- Learning Granularity Representation for Temporal Knowledge Graph Completion [2.689675451882683]
Temporal Knowledge Graphs (TKGs) incorporate temporal information to reflect the dynamic structural knowledge and evolutionary patterns of real-world facts.
This paper proposes Learning Granularity Representation (termed $\mathsf{LGRe}$) for TKG completion.
It comprises two main components: Granularity Learning (GRL) and Adaptive Granularity Balancing (AGB).
arXiv Detail & Related papers (2024-08-27T08:19:34Z)
- Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series (a minimal sketch of this momentum-update pattern appears after this list).
By combining the model loss from multiple tasks, we can learn effective representations for the downstream forecasting task.
arXiv Detail & Related papers (2024-01-31T12:52:10Z)
- TimesURL: Self-supervised Contrastive Learning for Universal Time Series Representation Learning [31.458689807334228]
We propose a novel self-supervised framework named TimesURL to tackle time series representation learning.
Specifically, we first introduce a frequency-temporal-based augmentation to keep the temporal property unchanged.
We also construct double Universums as a special kind of hard negative to guide better contrastive learning.
arXiv Detail & Related papers (2023-12-25T12:23:26Z)
- On the Consistency and Robustness of Saliency Explanations for Time Series Classification [4.062872727927056]
Saliency maps have been applied to interpret time series windows as images.
This paper extensively analyzes the consistency and robustness of saliency maps for time series features and temporal attribution.
arXiv Detail & Related papers (2023-09-04T09:08:22Z)
- Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks [50.2033914945157]
We present a representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
The framework consists of three stages, including estimating pairwise intensity functions and learning a projection that minimises a notion of intensity reconstruction error.
Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses.
arXiv Detail & Related papers (2023-06-09T15:38:25Z)
- A Threefold Review on Deep Semantic Segmentation: Efficiency-oriented, Temporal and Depth-aware design [77.34726150561087]
We conduct a survey on the most relevant and recent advances in Deep Semantic Segmentation in the context of vision for autonomous vehicles.
Our main objective is to provide a comprehensive discussion on the main methods, advantages, limitations, results and challenges faced from each perspective.
arXiv Detail & Related papers (2023-03-08T01:29:55Z)
- Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z)
- DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning task to effectively identify the time-invariant and time-varying representations, respectively.
arXiv Detail & Related papers (2022-10-19T14:34:12Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experimental results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
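The momentum-contrastive pattern summarized in the DE-TSMCL entry above is sketched below in PyTorch: an online encoder is trained with an InfoNCE loss against keys produced by a slowly-updated EMA (momentum) copy. All names, shapes, and hyperparameters are hypothetical illustrations of the general technique, not the paper's implementation.

```python
# Hypothetical sketch of a momentum-updated contrastive setup (MoCo-style);
# not DE-TSMCL's actual code -- encoders, shapes, and constants are assumed.
import copy
import torch
import torch.nn.functional as F

def momentum_update(online_enc, target_enc, m=0.99):
    # EMA update: target <- m * target + (1 - m) * online
    with torch.no_grad():
        for p_t, p_o in zip(target_enc.parameters(), online_enc.parameters()):
            p_t.data.mul_(m).add_(p_o.data, alpha=1 - m)

def info_nce(online_enc, target_enc, x, x_aug, temperature=0.1):
    # Two views of the same series form a positive pair; the rest of the
    # batch supplies negatives (inter-sample correlations).
    q = F.normalize(online_enc(x), dim=-1)           # queries: (B, D)
    with torch.no_grad():
        k = F.normalize(target_enc(x_aug), dim=-1)   # keys:    (B, D)
    logits = q @ k.t() / temperature                 # (B, B) similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

# Usage sketch with a toy encoder and a stand-in augmentation.
online = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(128, 64))
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad_(False)                          # keys carry no gradient

x = torch.randn(8, 1, 128)                           # (batch, channel, time)
x_aug = x + 0.01 * torch.randn_like(x)               # placeholder for learned masking
loss = info_nce(online, target, x, x_aug)
loss.backward()
momentum_update(online, target)                      # after each optimizer step
```

In this MoCo-style design the EMA copy changes slowly, which keeps the key representations consistent across training steps and stabilizes the contrastive targets.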
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.