Learnable Dynamic Temporal Pooling for Time Series Classification
- URL: http://arxiv.org/abs/2104.02577v1
- Date: Fri, 2 Apr 2021 08:58:44 GMT
- Title: Learnable Dynamic Temporal Pooling for Time Series Classification
- Authors: Dongha Lee, Seonghyeon Lee, Hwanjo Yu
- Abstract summary: We present a dynamic temporal pooling (DTP) technique that reduces the temporal size of hidden representations by aggregating the features at the segment-level.
For the partition of a whole series into multiple segments, we utilize dynamic time warping (DTW) to align each time point in a temporal order with the prototypical features of the segments.
The DTP layer combined with a fully-connected layer helps to extract further discriminative features considering their temporal position within an input time series.
- Score: 22.931314501371805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increase of available time series data, predicting their class
labels has been one of the most important challenges in a wide range of
disciplines. Recent studies on time series classification show that
convolutional neural networks (CNNs) achieve state-of-the-art performance
as a single classifier. In this work, pointing out that the global pooling
layer that is usually adopted by existing CNN classifiers discards the temporal
information of high-level features, we present a dynamic temporal pooling (DTP)
technique that reduces the temporal size of hidden representations by
aggregating the features at the segment-level. For the partition of a whole
series into multiple segments, we utilize dynamic time warping (DTW) to align
each time point in a temporal order with the prototypical features of the
segments, which can be optimized simultaneously with the network parameters of
CNN classifiers. The DTP layer combined with a fully-connected layer helps to
extract further discriminative features considering their temporal position
within an input time series. Extensive experiments on both univariate and
multivariate time series datasets show that our proposed pooling significantly
improves the classification performance.
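The segment-level pooling described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (the paper learns the segment prototypes jointly with the CNN parameters via backpropagation); it only shows the core idea under simplifying assumptions: given hidden features and fixed prototypes, a DTW-style dynamic program assigns each time step to a segment so that assignments are monotone in time, then features are mean-pooled per segment. The function name `dtp_pool` and the squared-Euclidean cost are illustrative choices.

```python
import numpy as np

def dtp_pool(H, P):
    """Segment-level pooling via a DTW-style monotone alignment (sketch).

    H: (T, d) hidden features along time; P: (K, d) segment prototypes,
    with T >= K. Each time step is assigned to exactly one segment,
    assignments are non-decreasing in time, and every segment receives
    at least one step. Returns (K, d) mean-pooled segment features.
    """
    T, _ = H.shape
    K, _ = P.shape
    # local cost: squared Euclidean distance to each prototype, shape (T, K)
    C = ((H[:, None, :] - P[None, :, :]) ** 2).sum(-1)

    # DP: D[t, k] = cost of the best monotone alignment of H[:t+1]
    # that places step t in segment k (enter segment k from k or k-1)
    D = np.full((T, K), np.inf)
    D[0, 0] = C[0, 0]
    for t in range(1, T):
        for k in range(K):
            stay = D[t - 1, k]
            move = D[t - 1, k - 1] if k > 0 else np.inf
            D[t, k] = C[t, k] + min(stay, move)

    # backtrack the optimal segment assignment path
    assign = np.empty(T, dtype=int)
    k = K - 1
    assign[T - 1] = k
    for t in range(T - 2, -1, -1):
        if k > 0 and D[t, k - 1] < D[t, k]:
            k -= 1
        assign[t] = k

    # mean-pool features within each segment
    return np.stack([H[assign == k].mean(axis=0) for k in range(K)])
```

Because the assignment is monotone, each pooled vector summarizes one contiguous temporal segment; feeding the (K, d) output to a fully-connected layer then lets the classifier weight features by their temporal position, which is the property the abstract highlights.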
Related papers
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification [4.5939667818289385]
HiTime is a hierarchical multi-modal model that seamlessly integrates temporal information into large language models.
Our findings highlight the potential of integrating temporal features into LLMs, paving the way for advanced time series analysis.
arXiv Detail & Related papers (2024-10-24T12:32:19Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - Temporal-aware Hierarchical Mask Classification for Video Semantic
Segmentation [62.275143240798236]
Video semantic segmentation datasets have limited categories per video.
Less than 10% of queries could be matched to receive meaningful gradient updates during VSS training.
Our method achieves state-of-the-art performance on the latest challenging VSS benchmark VSPW without bells and whistles.
arXiv Detail & Related papers (2023-09-14T20:31:06Z) - MTS2Graph: Interpretable Multivariate Time Series Classification with
Temporal Evolving Graphs [1.1756822700775666]
We introduce a new framework for interpreting time series data by extracting and clustering the input representative patterns.
We run experiments on eight datasets of the UCR/UEA archive, along with HAR and PAM datasets.
arXiv Detail & Related papers (2023-06-06T16:24:27Z) - TodyNet: Temporal Dynamic Graph Neural Network for Multivariate Time
Series Classification [6.76723360505692]
We propose a novel temporal dynamic graph neural network (TodyNet) that can extract hidden temporal dependencies without a predefined graph structure.
The experiments on 26 UEA benchmark datasets illustrate that the proposed TodyNet outperforms existing deep learning-based methods in the MTSC tasks.
arXiv Detail & Related papers (2023-04-11T09:21:28Z) - FormerTime: Hierarchical Multi-Scale Representations for Multivariate
Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for multivariate time series classification.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z) - DCSF: Deep Convolutional Set Functions for Classification of
Asynchronous Time Series [5.339109578928972]
An asynchronous time series is a time series whose channels are observed asynchronously and independently of one another.
This paper proposes a novel framework, that is highly scalable and memory efficient, for the asynchronous time series classification task.
We explore convolutional neural networks, which are well researched for the closely related problem of classifying regularly sampled and fully observed time series.
arXiv Detail & Related papers (2022-08-24T08:47:36Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - Convolutional Neural Networks for Time-dependent Classification of
Variable-length Time Series [4.068599332377799]
Time series data are often obtained only within a limited time range due to interruptions during the observation process.
To classify such partial time series, we need to account for 1) the variable-length data drawn from 2) different timestamps.
Existing convolutional neural networks use global pooling after convolutional layers to cancel the length differences.
This architecture suffers from the trade-off between incorporating entire temporal correlations in long data and avoiding feature collapse for short data.
arXiv Detail & Related papers (2022-07-08T07:15:13Z) - Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
Time series data are ubiquitous in domains such as climate, economics and health care.
A recent conceptual approach relies on mapping time series to complex networks.
Network analysis can then be used to characterize different types of time series.
arXiv Detail & Related papers (2021-10-11T13:46:28Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.