Multi-Faceted Representation Learning with Hybrid Architecture for Time
Series Classification
- URL: http://arxiv.org/abs/2012.11472v1
- Date: Mon, 21 Dec 2020 16:42:07 GMT
- Title: Multi-Faceted Representation Learning with Hybrid Architecture for Time
Series Classification
- Authors: Zhenyu Liu, Jian Cheng
- Abstract summary: We propose a hybrid neural architecture, called Self-Attentive Recurrent Convolutional Networks (SARCoN)
SARCoN is the synthesis of long short-term memory networks with self-attentive mechanisms and Fully Convolutional Networks.
Our work provides a novel angle that deepens the understanding of time series classification, qualifying our proposed model as an ideal choice for real-world applications.
- Score: 16.64345034889185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series classification problems exist in many fields and have been
explored for a couple of decades. However, they remain challenging, and
their solutions need to be further improved for real-world applications in
terms of both accuracy and efficiency. In this paper, we propose a hybrid
neural architecture, called Self-Attentive Recurrent Convolutional Networks
(SARCoN), to learn multi-faceted representations for univariate time series.
SARCoN is the synthesis of long short-term memory networks with self-attentive
mechanisms and Fully Convolutional Networks, which work in parallel to learn
the representations of univariate time series from different perspectives. The
component modules of the proposed architecture are trained jointly in an
end-to-end manner and they classify the input time series in a cooperative way.
domain-agnostic by design, SARCoN is able to generalize across a diversity of
domain tasks. Our experimental results show that, compared to the
state-of-the-art approaches for time series classification, the proposed
architecture can achieve remarkable improvements for a set of univariate time
series benchmarks from the UCR repository. Moreover, the self-attention and the
global average pooling in the proposed architecture enable visual
interpretability by facilitating the identification of the contributing regions
of the original time series. An overall analysis confirms that multi-faceted
representations of time series aid in capturing deep temporal correlations
within complex time series, which is essential for the improvement of time
series classification performance. Our work provides a novel angle that deepens
the understanding of time series classification, qualifying our proposed model
as an ideal choice for real-world applications.
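For concreteness, below is a minimal PyTorch sketch of the two-branch design the abstract describes: an LSTM with self-attention and an FCN run in parallel over a univariate series, their pooled features are concatenated, and a linear head classifies. The layer widths, head count, and fusion rule are illustrative assumptions, a reading of the abstract rather than the authors' released configuration.

```python
# A reading of the abstract, not the authors' code: an LSTM + self-attention
# branch and an FCN branch in parallel, fused by concatenation (assumed sizes).
import torch
import torch.nn as nn


class SARCoNSketch(nn.Module):
    def __init__(self, n_classes: int, hidden: int = 128):
        super().__init__()
        # Recurrent branch: LSTM over the series, then self-attention.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Convolutional branch: a standard FCN stack for time series,
        # ending in global average pooling.
        self.fcn = nn.Sequential(
            nn.Conv1d(1, 128, 8, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 5, padding="same"), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(hidden + 128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length) univariate series -> (batch, n_classes) logits.
        h, _ = self.lstm(x.unsqueeze(-1))       # (batch, length, hidden)
        a, _ = self.attn(h, h, h)               # self-attention over time steps
        rec_feat = a.mean(dim=1)                # pool the attended states
        conv_feat = self.fcn(x.unsqueeze(1)).mean(dim=-1)  # global avg pooling
        return self.head(torch.cat([rec_feat, conv_feat], dim=-1))
```

Both branches see the raw series and would be trained jointly, end to end, with a single cross-entropy loss, matching the cooperative classification the abstract describes; the attention weights and the pre-pooling FCN activations are the natural quantities to visualize for interpretability.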
Related papers
- Graph Neural Alchemist: An innovative fully modular architecture for time series-to-graph classification [0.0]
This paper introduces a novel Graph Neural Network (GNN) architecture for time series classification.
By representing time series as visibility graphs, it is possible to encode the temporal dependencies inherent to time series data.
Our architecture is fully modular, enabling flexible experimentation with different models.
arXiv Detail & Related papers (2024-10-12T00:03:40Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series, based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - RED CoMETS: An ensemble classifier for symbolically represented
multivariate time series [1.0878040851638]
This paper introduces a novel ensemble classifier called RED CoMETS.
RED CoMETS builds upon the success of Co-eye, an ensemble classifier specifically designed for symbolically represented univariate time series.
It achieves the highest reported accuracy in the literature for the 'HandMovementDirection' dataset.
arXiv Detail & Related papers (2023-07-25T17:36:34Z) - TimeMAE: Self-Supervised Representations of Time Series with Decoupled
Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferrable time series representations based on transformer networks.
TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z) - FormerTime: Hierarchical Multi-Scale Representations for Multivariate
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving classification capacity on the multivariate time series classification task.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z) - Gated Recurrent Neural Networks with Weighted Time-Delay Feedback [59.125047512495456]
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism.
We show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures.
arXiv Detail & Related papers (2022-12-01T02:26:34Z) - DCSF: Deep Convolutional Set Functions for Classification of
Asynchronous Time Series [5.339109578928972]
An asynchronous time series is a time series whose channels are observed asynchronously and independently of one another.
This paper proposes a novel framework that is highly scalable and memory efficient for the asynchronous time series classification task.
We explore convolutional neural networks, which are well researched for the closely related problem of classifying regularly sampled and fully observed time series.
arXiv Detail & Related papers (2022-08-24T08:47:36Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
- Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
Time series data are ubiquitous in domains such as climate, economics, and health care.
A recent conceptual approach relies on mapping time series to complex networks.
Network analysis can be used to characterize different types of time series.
arXiv Detail & Related papers (2021-10-11T13:46:28Z) - Learnable Dynamic Temporal Pooling for Time Series Classification [22.931314501371805]
We present a dynamic temporal pooling (DTP) technique that reduces the temporal size of hidden representations by aggregating the features at the segment-level.
For the partition of a whole series into multiple segments, we utilize dynamic time warping (DTW) to align each time point in a temporal order with the prototypical features of the segments.
The DTP layer combined with a fully-connected layer helps to extract further discriminative features considering their temporal position within an input time series.
arXiv Detail & Related papers (2021-04-02T08:58:44Z) - Interpretable Time Series Classification using Linear Models and
- Interpretable Time Series Classification using Linear Models and Multi-resolution Multi-domain Symbolic Representations [6.6147550436077776]
We propose new time series classification algorithms to address gaps in current approaches.
Our approach is based on symbolic representations of time series, efficient sequence mining algorithms and linear classification models.
Our models are as accurate as deep learning models but are more efficient in running time and memory, can work with variable-length time series, and can be interpreted by highlighting the discriminative symbolic features on the original time series.
arXiv Detail & Related papers (2020-05-31T15:32:08Z)