Attention to Warp: Deep Metric Learning for Multivariate Time Series
- URL: http://arxiv.org/abs/2103.15074v1
- Date: Sun, 28 Mar 2021 07:54:01 GMT
- Title: Attention to Warp: Deep Metric Learning for Multivariate Time Series
- Authors: Shinnosuke Matsuo, Xiaomeng Wu, Gantugs Atarsaikhan, Akisato Kimura,
Kunio Kashino, Brian Kenji Iwana, Seiichi Uchida
- Abstract summary: This paper proposes a novel neural network-based approach for robust yet discriminative time series classification and verification.
We experimentally demonstrate the superiority of the proposed approach over previous non-parametric and deep models.
- Score: 28.540348999309547
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep time series metric learning is challenging due to the difficult
trade-off between temporal invariance to nonlinear distortion and
discriminative power in identifying non-matching sequences. This paper proposes
a novel neural network-based approach for robust yet discriminative time series
classification and verification. This approach adapts a parameterized attention
model to time warping for greater and more adaptive temporal invariance. It is
robust against not only local but also large global distortions, so that even
matching pairs that do not satisfy the monotonicity, continuity, and boundary
conditions can still be successfully identified. Learning of this model is
further guided by dynamic time warping to impose temporal constraints for
stabilized training and higher discriminative power. It can learn to augment
the inter-class variation through warping, so that similar but different
classes can be effectively distinguished. We experimentally demonstrate the
superiority of the proposed approach over previous non-parametric and deep
models by combining it with a deep online signature verification framework,
after confirming its promising behavior in single-letter handwriting
classification on the Unipen dataset.
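To make the mechanism concrete, the following is a minimal sketch (not the authors' code) of the core idea: a learned bipartite attention matrix soft-aligns one series to the other, and a binary DTW alignment matrix guides that attention during metric learning. All names, shapes, and the loss weighting are illustrative assumptions.

```python
# Minimal sketch of attention-based warping with DTW guidance.
# Shapes, names, and the loss weighting are illustrative assumptions,
# not the paper's actual implementation.
import torch
import torch.nn.functional as F

def soft_warp(x, y, W_q, W_k):
    # x: (Tx, D), y: (Ty, D); W_q, W_k: (D, H) learned projections.
    q, k = x @ W_q, y @ W_k                                # (Tx, H), (Ty, H)
    attn = F.softmax(q @ k.T / k.shape[1] ** 0.5, dim=1)   # (Tx, Ty) soft correspondences
    return attn, attn @ y                                  # y soft-aligned to x's time axis

def guided_metric_loss(x, y, W_q, W_k, dtw_path, same_class, margin=1.0, lam=0.1):
    attn, y_warped = soft_warp(x, y, W_q, W_k)
    dist = (x - y_warped).pow(2).sum(-1).mean()            # distance after warping
    metric = dist if same_class else F.relu(margin - dist) # contrastive-style term
    # DTW guidance: nudge the attention toward the binary DTW alignment matrix,
    # imposing soft temporal constraints for stable training.
    guide = F.binary_cross_entropy(attn.clamp(1e-6, 1 - 1e-6), dtw_path)
    return metric + lam * guide
```

Because the attention matrix can place mass anywhere on the (Tx, Ty) grid, matching pairs that violate DTW's monotonicity, continuity, and boundary conditions can still be aligned, which is the robustness the abstract describes.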
Related papers
- Contrastive Learning Is Not Optimal for Quasiperiodic Time Series [4.2807943283312095]
This paper introduces Distilled Embedding for Almost-Periodic Time Series (DEAPS).
DEAPS is a non-contrastive method tailored for quasiperiodic time series, such as electrocardiogram (ECG) data.
We demonstrate a notable improvement of +10% over existing SOTA methods when only a few annotated records are available to fit a machine learning (ML) model.
arXiv Detail & Related papers (2024-07-24T08:02:41Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Diffeomorphic Transformations for Time Series Analysis: An Efficient Approach to Nonlinear Warping [0.0]
The proliferation and ubiquity of temporal data across many disciplines has sparked interest in similarity, classification, and clustering methods.
Traditional distance measures such as the Euclidean distance are not well-suited due to the time-dependent nature of the data.
This thesis proposes novel elastic alignment methods that use parametric, diffeomorphic warping transformations (a generic sketch follows this entry).
arXiv Detail & Related papers (2023-09-25T10:51:47Z)
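As a generic illustration of the parametric-warping idea in the entry above (not the thesis's specific transformation family), a strictly monotone, invertible warp can be built from positive increments:

```python
# Illustrative sketch: a parametric, order-preserving warp on [0, 1].
# Generic construction; the thesis's diffeomorphic family may differ.
import numpy as np

def parametric_warp(theta):
    # theta: unconstrained parameters; softplus keeps each increment > 0,
    # so the warp is strictly increasing and hence invertible.
    inc = np.log1p(np.exp(theta))
    grid = np.cumsum(inc)
    return (grid - grid[0]) / (grid[-1] - grid[0])  # monotone map onto [0, 1]

def apply_warp(series, grid):
    # Resample a 1-D series at the warped positions via linear interpolation.
    src = np.linspace(0.0, 1.0, len(series))
    return np.interp(grid, src, series)

series = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
warped = apply_warp(series, parametric_warp(np.random.randn(64)))
```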
- Deep Attentive Time Warping [22.411355064531143]
We propose a neural network model for task-adaptive time warping.
We use an attention model, called the bipartite attention model, to develop an explicit time warping mechanism.
Unlike other learnable models that use DTW for warping, our model predicts all local correspondences between two time series (contrast with the single DTW path sketched after this entry).
arXiv Detail & Related papers (2023-09-13T04:49:49Z)
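For contrast with the dense correspondences above: plain DTW commits to a single monotone path. A minimal pure-NumPy version that returns that path as a binary matrix (illustrative, not the paper's code):

```python
# Plain DTW returning one optimal alignment path as a binary matrix.
# Illustrative contrast only; the bipartite attention model instead
# predicts soft weights over the whole (Tx, Ty) grid.
import numpy as np

def dtw_path_matrix(x, y):
    # x: (Tx, D), y: (Ty, D). Returns the binary (Tx, Ty) matrix of one
    # optimal DTW path (monotone, continuous, boundary-constrained).
    Tx, Ty = len(x), len(y)
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    acc = np.full((Tx + 1, Ty + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    path = np.zeros((Tx, Ty))
    i, j = Tx, Ty
    while i > 0 and j > 0:  # backtrack from the end to the start
        path[i - 1, j - 1] = 1.0
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return path

path = dtw_path_matrix(np.random.randn(50, 3), np.random.randn(60, 3))
```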
- CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised contrastive representation learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z)
- Persistence-Based Discretization for Learning Discrete Event Systems from Time Series [50.591267188664666]
Persist is a discretization method that aims to create persisting symbols using a score called the persistence score.
We replace the metric used in the persistence score, the Kullback-Leibler divergence, with the Wasserstein distance (a toy comparison of the two metrics is sketched after this entry).
Experiments show that the improved persistence score enhances Persist's ability to capture the information of the original time series.
arXiv Detail & Related papers (2023-01-12T14:10:30Z)
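A hedged sketch of the metric swap described above: compare a symbol's transition distribution against a memoryless baseline using KL divergence (Persist's original metric) versus the Wasserstein distance. The score composition is illustrative, not Persist's exact formula.

```python
# Illustrative comparison of the two metrics on a toy transition distribution;
# the persistence score's exact composition is not reproduced here.
import numpy as np
from scipy.special import rel_entr
from scipy.stats import wasserstein_distance

def kl(p, q):
    return float(np.sum(rel_entr(p, q)))  # KL(p || q), the original choice

p = np.array([0.7, 0.2, 0.1])   # symbol tends to persist in state 0
q = np.array([1/3, 1/3, 1/3])   # memoryless baseline
support = np.arange(3)

print(kl(p, q))
print(wasserstein_distance(support, support, u_weights=p, v_weights=q))
```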
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- TimeREISE: Time-series Randomized Evolving Input Sample Explanation [5.557646286040063]
TimeREISE is a model attribution method designed specifically for time series classification.
The method shows superior performance compared to existing approaches across several well-established metrics.
arXiv Detail & Related papers (2022-02-16T09:40:13Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods in which a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case (a toy construction of such a task is sketched after this entry).
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
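A rough sketch of how such a classification task can be built from unlabeled trajectories (the construction and naming are illustrative assumptions, not the paper's estimator):

```python
# Toy contrastive task: classify observed pairs (x_t, x_{t+dt}) against
# time-shuffled pairs. An optimal classifier's logit estimates the density
# ratio p(x' | x) / p(x'), i.e. the transition kernel up to the marginal.
import numpy as np

def make_contrastive_pairs(traj, dt, rng):
    # traj: (T, D) observations of one trajectory at a fixed sampling rate.
    x, x_next = traj[:-dt], traj[dt:]
    neg = x_next[rng.permutation(len(x_next))]  # break temporal dependence
    pairs = np.concatenate([np.hstack([x, x_next]), np.hstack([x, neg])])
    labels = np.concatenate([np.ones(len(x)), np.zeros(len(x))])
    return pairs, labels

rng = np.random.default_rng(0)
pairs, labels = make_contrastive_pairs(rng.standard_normal((500, 2)), dt=1, rng=rng)
```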
- Learning from Irregularly-Sampled Time Series: A Missing Data Perspective [18.493394650508044]
Irregularly-sampled time series occur in many domains, including healthcare.
We model irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function (a minimal sketch of this representation follows this entry).
We propose learning methods for this framework based on variational autoencoders and generative adversarial networks.
arXiv Detail & Related papers (2020-08-17T20:01:55Z)
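A minimal sketch of that index-value-pair view (preprocessing only; the variational and adversarial models themselves are omitted, and the helper names are illustrative):

```python
# Represent an irregularly sampled series as padded (time, value) pairs plus a
# validity mask, a common input format for VAE- or GAN-style sequence models.
import numpy as np

def to_padded_pairs(times, values, max_len):
    pairs = np.zeros((max_len, 2))
    mask = np.zeros(max_len, dtype=bool)
    n = len(times)
    pairs[:n, 0], pairs[:n, 1] = times, values
    mask[:n] = True
    return pairs, mask

times = np.array([0.0, 0.7, 1.9, 4.2])   # irregular observation times
values = np.array([1.3, 0.9, 1.1, 0.4])  # samples of the unobserved function
pairs, mask = to_padded_pairs(times, values, max_len=8)
```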
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.