Features Fusion Framework for Multimodal Irregular Time-series Events
- URL: http://arxiv.org/abs/2209.01728v1
- Date: Mon, 5 Sep 2022 02:27:12 GMT
- Title: Features Fusion Framework for Multimodal Irregular Time-series Events
- Authors: Peiwang Tang and Xianchao Zhang
- Abstract summary: Multimodal irregular time-series events have different sampling frequencies, data compositions, temporal relations, and characteristics.
In this paper, a features fusion framework for multimodal irregular time-series events is proposed based on Long Short-Term Memory (LSTM) networks.
Experiments on the MIMIC-III dataset demonstrate that the proposed framework significantly outperforms existing methods in terms of AUC (the area under the Receiver Operating Characteristic curve) and AP (Average Precision).
- Score: 6.497816402045097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Some data from multiple sources can be modeled as multimodal time-series
events which have different sampling frequencies, data compositions, temporal
relations and characteristics. Different types of events have complex nonlinear
relationships, and the time of each event is irregular. Neither the classical
Recurrent Neural Network (RNN) model nor the current state-of-the-art
Transformer model can deal with these features well. In this paper, a features
fusion framework for multimodal irregular time-series events is proposed based
on the Long Short-Term Memory networks (LSTM). Firstly, the complex features
are extracted according to the irregular patterns of different events.
Secondly, the nonlinear correlations and complex temporal dependencies
between these features are captured and fused into a tensor.
Finally, a feature gate is used to control the access frequency of different
tensors. Extensive experiments on the MIMIC-III dataset demonstrate that the
proposed framework significantly outperforms existing methods in terms
of AUC (the area under the Receiver Operating Characteristic curve) and AP (Average
Precision).
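The pipeline the abstract describes (per-modality feature extraction, fusion into a tensor, and a gate controlling the tensor's contribution) can be illustrated with a minimal sketch. This is not the paper's implementation: the outer-product fusion, the scalar sigmoid gate, and the `labs`/`vitals` feature vectors (stand-ins for LSTM hidden states of two modalities sampled at different rates) are all hypothetical choices made only to show the shape of the idea.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(u, v):
    # Outer-product fusion of two modality feature vectors into a
    # 2-D tensor (a hypothetical stand-in for the paper's fusion step).
    return [[a * b for b in v] for a in u]

def feature_gate(tensor, w, b):
    # A scalar sigmoid gate scales the fused tensor, loosely mimicking
    # a gate that controls how strongly a fused tensor is accessed.
    # `w` and `b` are hypothetical learned parameters.
    s = sum(sum(row) for row in tensor)
    g = sigmoid(w * s + b)
    return [[g * x for x in row] for row in tensor]

# Hypothetical per-modality features, e.g. LSTM states summarizing
# irregularly sampled lab events and vital-sign events.
labs   = [0.2, -0.5, 0.7]
vitals = [0.1, 0.9]

fused = fuse(labs, vitals)       # 3 x 2 fused tensor
gated = feature_gate(fused, w=0.5, b=0.0)
```

In a full model, the gate parameters would be trained jointly with the per-modality encoders, and several such gated tensors would feed a downstream classifier scored by AUC/AP.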
Related papers
- TiVaT: Joint-Axis Attention for Time Series Forecasting with Lead-Lag Dynamics [5.016178141636157]
TiVaT (Time-Variable Transformer) is a novel architecture that integrates temporal and variable dependencies.
TiVaT consistently delivers strong performance across diverse datasets.
This positions TiVaT as a new benchmark in MTS forecasting, particularly in handling datasets characterized by intricate and challenging dependencies.
arXiv Detail & Related papers (2024-10-02T13:24:24Z) - UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose a transformer-based model UniTST containing a unified attention mechanism on the flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
arXiv Detail & Related papers (2024-06-07T14:39:28Z) - TSLANet: Rethinking Transformers for Time Series Representation Learning [19.795353886621715]
Time series data is characterized by its intrinsic long and short-range dependencies.
We introduce a novel Time Series Lightweight Network (TSLANet) as a universal convolutional model for diverse time series tasks.
Our experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2024-04-12T13:41:29Z) - ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling [30.12824131306359]
Modeling continuous-time dynamics on irregular time series is critical to account for data evolution and correlations that occur continuously.
Traditional methods including recurrent neural networks or Transformer models leverage inductive bias via powerful neural architectures to capture complex patterns.
We propose ContiFormer that extends the relation modeling of vanilla Transformer to the continuous-time domain.
arXiv Detail & Related papers (2024-02-16T12:34:38Z) - EdgeConvFormer: Dynamic Graph CNN and Transformer based Anomaly Detection in Multivariate Time Series [7.514010315664322]
We propose a novel anomaly detection method, named EdgeConvFormer, which integrates stacked Time2vec embedding, dynamic graph CNN, and Transformer to extract global and local spatial-time information.
Experiments demonstrate that EdgeConvFormer can learn the spatial-temporal modeling from multivariate time series data and achieve better anomaly detection performance than the state-of-the-art approaches on many real-world datasets of different scales.
arXiv Detail & Related papers (2023-12-04T08:38:54Z) - Correlation-aware Spatial-Temporal Graph Learning for Multivariate Time-series Anomaly Detection [67.60791405198063]
We propose a correlation-aware spatial-temporal graph learning (termed CST-GL) for time series anomaly detection.
CST-GL explicitly captures the pairwise correlations via a multivariate time series correlation learning module.
A novel anomaly scoring component is further integrated into CST-GL to estimate the degree of an anomaly in a purely unsupervised manner.
arXiv Detail & Related papers (2023-07-17T11:04:27Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Synergetic Learning of Heterogeneous Temporal Sequences for Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine the advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte-Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z) - A Multi-Channel Neural Graphical Event Model with Negative Evidence [76.51278722190607]
Event datasets are sequences of events of various types occurring irregularly over the timeline.
We propose a non-parametric deep neural network approach in order to estimate the underlying intensity functions.
arXiv Detail & Related papers (2020-02-21T23:10:50Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z) - A Deep Structural Model for Analyzing Correlated Multivariate Time Series [11.009809732645888]
We present a deep learning structural time series model which can handle correlated multivariate time series input.
The model explicitly learns/extracts the trend, seasonality, and event components.
We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of time series data sets.
arXiv Detail & Related papers (2020-01-02T18:48:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.