A Multi-view Multi-task Learning Framework for Multi-variate Time Series
Forecasting
- URL: http://arxiv.org/abs/2109.01657v1
- Date: Thu, 2 Sep 2021 06:11:26 GMT
- Title: A Multi-view Multi-task Learning Framework for Multi-variate Time Series
Forecasting
- Authors: Jinliang Deng, Xiusi Chen, Renhe Jiang, Xuan Song, Ivor W. Tsang
- Abstract summary: We propose a novel multi-view multi-task (MVMT) learning framework for MTS forecasting.
MVMT information is deeply concealed in the MTS data, which severely hinders the model from capturing it naturally.
We develop two basic operations, namely task-wise affine transformation and task-wise normalization.
- Score: 42.061275727906256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-variate time series (MTS) data is a ubiquitous class of data
abstraction in the real world. Any instance of MTS is generated from a hybrid
dynamical system, and its specific dynamics are usually unknown. The hybrid
nature of such a system results from complex external attributes, such as
geographic location and time of day, each of which can be categorized as
either a spatial attribute or a temporal attribute. There are therefore two
fundamental views from which MTS data can be analyzed, namely the spatial
view and the temporal view. Moreover, from each of these two views, we can
partition the set of MTS data samples into disjoint forecasting tasks
according to their associated attribute values. Samples of the same task then
manifest similar forthcoming patterns, which are easier to predict than in
the original single-view setting. Motivated by this insight, we propose a
novel multi-view multi-task (MVMT) learning framework for MTS forecasting.
In most scenarios, MVMT information is not presented explicitly but deeply
concealed in the MTS data, which severely hinders the model from capturing it
naturally. To this end, we develop two basic operations, namely task-wise
affine transformation and task-wise normalization. Applying these two
operations with prior knowledge on the spatial and temporal views allows the
model to adaptively extract MVMT information while predicting. Extensive
experiments on three datasets show that canonical architectures can be
greatly enhanced by the MVMT learning framework in terms of both
effectiveness and efficiency. In addition, we design rich case studies to
reveal the properties of the representations produced at different phases of
the prediction procedure.
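Read as conditioning layers, the two operations admit a compact sketch. The following PyTorch code is an illustration under that reading, not the authors' implementation: a task id (a node index for the spatial view, or a time-of-day slot for the temporal view) selects a per-task scale and shift; all module names, tensor shapes, and the 48-slot granularity are assumptions.

```python
import torch
import torch.nn as nn

class TaskWiseAffine(nn.Module):
    """Per-task scale and shift: h' = gamma[task] * h + beta[task] (illustrative)."""
    def __init__(self, num_tasks: int, dim: int):
        super().__init__()
        self.gamma = nn.Embedding(num_tasks, dim)  # one scale vector per task
        self.beta = nn.Embedding(num_tasks, dim)   # one shift vector per task
        nn.init.ones_(self.gamma.weight)           # start as the identity map
        nn.init.zeros_(self.beta.weight)

    def forward(self, h: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # h: (batch, dim); task_id: (batch,) integer attribute values
        return self.gamma(task_id) * h + self.beta(task_id)

class TaskWiseNorm(nn.Module):
    """Layer normalization whose affine parameters are selected per task."""
    def __init__(self, num_tasks: int, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.affine = TaskWiseAffine(num_tasks, dim)

    def forward(self, h: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        mu = h.mean(dim=-1, keepdim=True)
        var = h.var(dim=-1, keepdim=True, unbiased=False)
        return self.affine((h - mu) / torch.sqrt(var + self.eps), task_id)

# Temporal-view example: 48 half-hour slots define 48 tasks (assumed granularity).
norm = TaskWiseNorm(num_tasks=48, dim=64)
h = torch.randn(32, 64)             # hidden states for a batch of samples
slot = torch.randint(0, 48, (32,))  # time-of-day attribute of each sample
out = norm(h, slot)                 # (32, 64), normalized per task
```

In this reading, task-wise normalization is a layer norm whose learned affine parameters are looked up by task rather than shared across all samples.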
Related papers
- Contrast Similarity-Aware Dual-Pathway Mamba for Multivariate Time Series Node Classification [9.159556125198305]
We propose a contrast similarity-aware dual-pathway Mamba for MTS node classification (CS-DPMamba).
We construct a similarity matrix between MTS representations using Fast Dynamic Time Warping (FastDTW).
By considering long-range dependencies and dynamic similarity features, we achieve precise MTS node classification.
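As an illustration of the similarity-matrix step, here is a hedged sketch built on the open-source fastdtw package; the exponential kernel that turns DTW distances into similarities is an assumption, not a detail from the paper.

```python
import numpy as np
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean

def dtw_similarity_matrix(series: list) -> np.ndarray:
    """series: list of (length_i, channels) arrays; returns an (n, n) similarity matrix."""
    n = len(series)
    sim = np.eye(n)  # self-similarity of 1.0 on the diagonal
    for i in range(n):
        for j in range(i + 1, n):
            # FastDTW approximates DTW in linear time; euclidean compares channel vectors
            dist, _ = fastdtw(series[i], series[j], dist=euclidean)
            sim[i, j] = sim[j, i] = np.exp(-dist)  # assumed distance-to-similarity kernel
    return sim

series = [np.random.randn(np.random.randint(40, 60), 3) for _ in range(5)]
S = dtw_similarity_matrix(series)  # (5, 5), symmetric
```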
arXiv Detail & Related papers (2024-11-19T04:32:41Z)
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose UniTST, a transformer-based model with a unified attention mechanism over flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
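A minimal sketch of the flattened-token idea (an illustration under stated assumptions, not the UniTST architecture): collapsing the (variate, patch) grid into a single token sequence lets one attention layer capture intra-series and inter-series dependencies at once; the tensor shapes below are made up.

```python
import torch
import torch.nn as nn

batch, n_vars, n_patches, dim = 8, 7, 12, 64
tokens = torch.randn(batch, n_vars, n_patches, dim)    # per-variate patch embeddings
flat = tokens.reshape(batch, n_vars * n_patches, dim)  # one joint token sequence

attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
mixed, _ = attn(flat, flat, flat)  # each patch attends across all variates and positions
mixed = mixed.reshape(batch, n_vars, n_patches, dim)   # restore the (variate, patch) grid
```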
arXiv Detail & Related papers (2024-06-07T14:39:28Z)
- Deciphering Movement: Unified Trajectory Generation Model for Multi-Agent [53.637837706712794]
We propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs.
Specifically, we introduce a Ghost Spatial Masking (GSM) module embedded within a Transformer encoder for spatial feature extraction.
We benchmark three practical sports game datasets, Basketball-U, Football-U, and Soccer-U, for evaluation.
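A speculative sketch of the masked-trajectory input alone (the GSM module itself is not specified here): hide random agent-timestep positions and train a model to reconstruct them; the masking rate and zero-filling are illustrative assumptions.

```python
import torch

# Assumed toy shapes: 10 agents, 50 timesteps, (x, y) coordinates.
agents, steps, xy = 10, 50, 2
traj = torch.randn(agents, steps, xy)   # raw trajectories
mask = torch.rand(agents, steps) < 0.3  # hide ~30% of positions (assumed rate)
masked = traj.clone()
masked[mask] = 0.0                      # zero-fill the hidden coordinates
# A model would consume (masked, mask) and be trained to reconstruct traj[mask],
# so forecasting, imputation, and completion all reduce to one masked objective.
```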
arXiv Detail & Related papers (2024-05-27T22:15:23Z)
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
Transferring pretrained models to downstream tasks may encounter task discrepancy, since pretraining is formulated as image classification or object discrimination.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Latent Processes Identification From Multi-View Time Series [17.33428123777779]
We propose a novel framework that employs the contrastive learning technique to invert the data generative process for enhanced identifiability.
MuLTI integrates a permutation mechanism that merges corresponding overlapped variables via an optimal transport formulation.
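A hedged sketch of the permutation-matching step, using SciPy's Hungarian solver as a stand-in for the paper's optimal transport formulation (which it is not); the correlation-based cost and all shapes are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_latents(z_a: np.ndarray, z_b: np.ndarray) -> np.ndarray:
    """z_a, z_b: (n_samples, n_latents) views; permute z_b's latents to match z_a's."""
    d = z_a.shape[1]
    # Cross-correlation between latent dimensions; absolute value as the matching score.
    corr = np.corrcoef(z_a.T, z_b.T)[:d, d:]
    row, col = linear_sum_assignment(-np.abs(corr))  # maximize total |correlation|
    return z_b[:, col]  # column i of the result is z_b's best match for z_a's latent i

z_a = np.random.randn(1000, 4)
z_b = z_a[:, [2, 0, 3, 1]] + 0.1 * np.random.randn(1000, 4)  # shuffled, noisy copy
aligned = align_latents(z_a, z_b)  # recovers the original latent order
```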
arXiv Detail & Related papers (2023-05-14T14:21:58Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving classification capacity on multivariate time series.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
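A minimal sketch of one way to produce multi-scale inputs (an illustration only, not FormerTime's hierarchy): pool the series at several temporal strides so that successive levels see coarser resolutions.

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 64, 96)  # (batch, channels, time), assumed shapes
scales = [1, 2, 4]          # assumed temporal downsampling factors
pyramid = [F.avg_pool1d(x, kernel_size=s, stride=s) for s in scales]
# Lengths 96, 48, 24: each level exposes patterns at a coarser temporal
# resolution, and each could feed its own encoder stage in a hierarchical model.
```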
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- Multi-Task Dynamical Systems [5.881614676989161]
Time series datasets are often composed of a variety of sequences from the same domain, but from different entities.
This paper describes the multi-task dynamical system (MTDS); a general methodology for extending multi-task learning (MTL) to time series models.
We apply the MTDS to motion-capture data of people walking in various styles using a multi-task recurrent neural network (RNN), and to patient drug-response data using a multi-task pharmacodynamic model.
arXiv Detail & Related papers (2022-10-08T13:37:55Z)
- Learning Behavior Representations Through Multi-Timescale Bootstrapping [8.543808476554695]
We introduce Bootstrap Across Multiple Scales (BAMS), a multi-scale representation learning model for behavior.
We first apply our method on a dataset of quadrupeds navigating in different terrain types, and show that our model captures the temporal complexity of behavior.
arXiv Detail & Related papers (2022-06-14T17:57:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.