Learning Robust and Consistent Time Series Representations: A Dilated
Inception-Based Approach
- URL: http://arxiv.org/abs/2306.06579v1
- Date: Sun, 11 Jun 2023 04:00:11 GMT
- Title: Learning Robust and Consistent Time Series Representations: A Dilated
Inception-Based Approach
- Authors: Anh Duy Nguyen, Trang H. Tran, Hieu H. Pham, Phi Le Nguyen, Lam M.
Nguyen
- Abstract summary: We introduce a novel sampling strategy that promotes consistent representation learning in the presence of noise in natural time series.
We also propose an encoder architecture that utilizes dilated convolution within the Inception block to create a scalable and robust network architecture.
Our method consistently outperforms state-of-the-art methods in forecasting, classification, and abnormality detection tasks.
- Score: 14.344468798269622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning for time series has been an important research area
for decades. Since the emergence of foundation models, this topic has
attracted considerable attention in contrastive self-supervised learning as a
means of solving a wide range of downstream tasks. However, contrastive time
series processing still faces several challenges. First, no prior work
considers noise, one of the critical factors affecting the efficacy of time
series tasks. Second, there is a lack of efficient yet lightweight encoder
architectures that can learn representations that remain informative and
robust across various downstream tasks. To fill these gaps, we introduce a
novel sampling strategy that promotes consistent representation learning in
the presence of noise in natural time series. In addition, we propose an
encoder architecture that utilizes dilated convolution within the Inception
block to create a scalable and robust network with a wide receptive field.
Experiments demonstrate that our method consistently outperforms
state-of-the-art methods in forecasting, classification, and abnormality
detection tasks; for example, it ranks first on over two-thirds of the UCR
classification datasets while using only $40\%$ of the parameters of the
second-best approach. Our source code for the CoInception framework is
available at https://github.com/anhduy0911/CoInception.
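
To make the second contribution concrete, below is a minimal sketch of a dilated-convolution Inception block, assuming a PyTorch-style implementation. The kernel sizes, dilation rates, channel widths, residual projection, and activation here are illustrative assumptions, not the authors' exact configuration; the official code is at the repository linked above.

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    """Inception-style block for 1D series: parallel dilated convolutions
    with different kernel sizes are concatenated along the channel axis,
    plus a residual connection. Wider kernels and larger dilations enlarge
    the receptive field without requiring a deep stack of layers."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 1,
                 kernel_sizes=(3, 5, 7, 9)):
        super().__init__()
        branch_ch = out_ch // len(kernel_sizes)  # out_ch divisible by #branches
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, branch_ch, k, dilation=dilation,
                      padding=(k - 1) * dilation // 2)  # length-preserving
            for k in kernel_sizes
        ])
        # 1x1 projection so the residual matches the concatenated width.
        self.residual = (nn.Identity() if in_ch == out_ch
                         else nn.Conv1d(in_ch, out_ch, kernel_size=1))
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(out + self.residual(x))

# Stacking blocks with exponentially increasing dilation yields a wide
# receptive field from few layers and few parameters.
encoder = nn.Sequential(
    DilatedInceptionBlock(1, 64, dilation=1),
    DilatedInceptionBlock(64, 64, dilation=2),
    DilatedInceptionBlock(64, 64, dilation=4),
)
z = encoder(torch.randn(8, 1, 256))  # -> (8, 64, 256)
```

With kernels up to size 9 and dilations 1, 2, and 4, the three blocks above already cover a receptive field of 1 + 8·(1 + 2 + 4) = 57 time steps, illustrating how dilation trades depth for coverage.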
Related papers
- FreRA: A Frequency-Refined Augmentation for Contrastive Learning on Time Series Classification [56.925103708982164]
We present a novel perspective from the frequency domain and identify three advantages for downstream classification: global, independent, and compact.
We propose the lightweight yet effective Frequency Refined Augmentation (FreRA) tailored for time series contrastive learning on classification tasks.
FreRA consistently outperforms ten leading baselines on time series classification, anomaly detection, and transfer learning tasks.
arXiv Detail & Related papers (2025-05-29T07:18:28Z)
- AVATAR: Adversarial Autoencoders with Autoregressive Refinement for Time Series Generation [0.9374652839580181]
We introduce AVATAR, a framework that combines Adversarial Autoencoders (AAE) with Autoregressive Learning to generate time series data.
Specifically, our technique integrates the autoencoder with a supervisor and introduces a novel supervised loss to assist the decoder in learning the temporal dynamics of time series data.
arXiv Detail & Related papers (2025-01-03T05:44:13Z)
- Enhancing Hyperspectral Image Prediction with Contrastive Learning in Low-Label Regime [0.810304644344495]
Self-supervised contrastive learning is an effective approach for addressing the challenge of limited labelled data.
We evaluate the method's performance for both the single-label and multi-label classification tasks.
arXiv Detail & Related papers (2024-10-10T10:20:16Z)
- TimeDART: A Diffusion Autoregressive Transformer for Self-Supervised Time Series Representation [47.58016750718323]
We propose TimeDART, a novel self-supervised time series pre-training framework.
TimeDART unifies two powerful generative paradigms to learn more transferable representations.
We conduct extensive experiments on public datasets for time series forecasting and classification.
arXiv Detail & Related papers (2024-10-08T06:08:33Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Unsupervised Multi-modal Feature Alignment for Time Series
Representation Learning [20.655943795843037]
We introduce an innovative approach that focuses on aligning and binding time series representations encoded from different modalities.
In contrast to conventional methods that fuse features from multiple modalities, our proposed approach simplifies the neural architecture by retaining a single time series encoder.
Our approach outperforms existing state-of-the-art URL methods across diverse downstream tasks.
arXiv Detail & Related papers (2023-12-09T22:31:20Z)
- Video Anomaly Detection using GAN [0.0]
This thesis aims to provide a solution for this use case so that human operators are not required to monitor surveillance recordings for unusual activity.
We have developed a novel generative adversarial network (GAN) based anomaly detection model.
arXiv Detail & Related papers (2023-11-23T16:41:30Z)
- A Co-training Approach for Noisy Time Series Learning [35.61140756248812]
We iteratively perform co-training-based contrastive learning to learn the encoders.
Our experiments demonstrate that this co-training approach leads to a significant improvement in performance.
Empirical evaluations on four time series benchmarks in unsupervised and semi-supervised settings reveal that TS-CoT outperforms existing methods.
arXiv Detail & Related papers (2023-08-24T04:33:30Z)
- AntPivot: Livestream Highlight Detection via Hierarchical Attention
Mechanism [64.70568612993416]
We formulate a new task, Livestream Highlight Detection, discuss and analyze its difficulties, and propose a novel architecture, AntPivot, to solve this problem.
We construct a fully-annotated dataset AntHighlight to instantiate this task and evaluate the performance of our model.
arXiv Detail & Related papers (2022-06-10T05:58:11Z)
- Representation Learning for Sequence Data with Deep Autoencoding
Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of the predictive information of latent feature sequences, i.e., the mutual information between past and future windows at each time step (see the sketch after this list).
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
- A Self-Supervised Gait Encoding Approach with Locality-Awareness for 3D
Skeleton Based Person Re-Identification [65.18004601366066]
Person re-identification (Re-ID) via gait features within 3D skeleton sequences is a newly-emerging topic with several advantages.
This paper proposes a self-supervised gait encoding approach that can leverage unlabeled skeleton data to learn gait representations for person Re-ID.
arXiv Detail & Related papers (2020-09-05T16:06:04Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
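
The predictive-information objective from the Deep Autoencoding Predictive Components entry above has a well-known closed form when the latent sequence is modeled as a stationary Gaussian process: I(past; future) = log det Σ_T − ½ log det Σ_2T, where Σ_T is the covariance of T stacked consecutive steps. The sketch below is a minimal illustration under that assumption, in PyTorch; the function name, window construction, and regularizer are illustrative choices, not the DAPC authors' implementation.

```python
import torch

def gaussian_predictive_information(z: torch.Tensor, T: int) -> torch.Tensor:
    """Estimate I(past; future) of a latent sequence under a Gaussian,
    stationarity assumption:

        PI = log det Sigma_T - 0.5 * log det Sigma_2T,

    where Sigma_T is the covariance of T consecutive steps stacked into
    one flat vector. z has shape (time, dim)."""
    def window_cov(seq: torch.Tensor, w: int) -> torch.Tensor:
        n, d = seq.shape
        # Stack w consecutive steps into one flat vector per position.
        windows = torch.stack(
            [seq[i:i + w].reshape(-1) for i in range(n - w + 1)])
        windows = windows - windows.mean(dim=0, keepdim=True)
        cov = windows.T @ windows / (windows.shape[0] - 1)
        return cov + 1e-4 * torch.eye(w * d)  # regularize for stability

    return (torch.logdet(window_cov(z, T))
            - 0.5 * torch.logdet(window_cov(z, 2 * T)))

# Maximizing this estimate over an encoder's outputs encourages latent
# sequences whose past windows are predictive of their future windows.
pi = gaussian_predictive_information(torch.randn(500, 8), T=4)
```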