Learning Robust and Consistent Time Series Representations: A Dilated
Inception-Based Approach
- URL: http://arxiv.org/abs/2306.06579v1
- Date: Sun, 11 Jun 2023 04:00:11 GMT
- Authors: Anh Duy Nguyen, Trang H. Tran, Hieu H. Pham, Phi Le Nguyen, Lam M.
Nguyen
- Abstract summary: We introduce a novel sampling strategy that promotes consistent representation learning in the presence of noise in natural time series.
We also propose an encoder architecture that utilizes dilated convolution within the Inception block to create a scalable and robust network architecture.
Our method consistently outperforms state-of-the-art methods in forecasting, classification, and abnormality detection tasks.
- Score: 14.344468798269622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning for time series has been an important research area
for decades. Since the emergence of the foundation models, this topic has
attracted a lot of attention in contrastive self-supervised learning, to solve
a wide range of downstream tasks. However, there have been several challenges
for contrastive time series processing. First, there is no work considering
noise, which is one of the critical factors affecting the efficacy of time
series tasks. Second, there is a lack of efficient yet lightweight encoder
architectures that can learn informative representations robust to various
downstream tasks. To fill these gaps, we introduce a novel sampling strategy
that promotes consistent representation learning in the presence of noise in
natural time series. In addition, we propose an encoder architecture that
utilizes dilated convolution within the Inception block to create a scalable
and robust network architecture with a wide receptive field. Experiments
demonstrate that our method consistently outperforms state-of-the-art methods
in forecasting, classification, and abnormality detection tasks, e.g., ranking
first on over two-thirds of the UCR classification datasets with only $40\%$ of
the parameters compared to the second-best approach. Our source code for the
CoInception framework is available at
https://github.com/anhduy0911/CoInception.
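The core architectural idea above is to place dilated convolutions inside Inception-style parallel branches, so a wide receptive field is obtained without deep stacks or large kernels. The following framework-free sketch illustrates the mechanism only; it is not the authors' implementation, and the function names and single-channel simplification are illustrative:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Causal 1-D dilated convolution over a single channel.

    Tap k of the kernel looks back k * dilation steps, so the
    receptive field grows to (K - 1) * dilation + 1 samples
    without adding any weights.
    """
    T, K = len(x), len(kernel)
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad to keep length T
    return np.array([
        sum(kernel[k] * xp[pad + t - k * dilation] for k in range(K))
        for t in range(T)
    ])

def inception_dilated_block(x, kernels, dilations):
    """Inception-style block: run several dilated branches in parallel
    over the same input and stack their outputs as channels."""
    return np.stack([
        dilated_conv1d(x, k, d) for k, d in zip(kernels, dilations)
    ])

# Toy usage: three branches with growing dilation share one input series.
x = np.sin(np.linspace(0, 4 * np.pi, 64))
kernels = [np.ones(3) / 3] * 3  # same smoothing filter in every branch
out = inception_dilated_block(x, kernels, dilations=[1, 2, 4])
print(out.shape)  # one channel per dilation branch
```

Stacking branches with exponentially growing dilation rates (1, 2, 4, ...) is the standard way such encoders cover long contexts cheaply; a real implementation would use multi-channel learned kernels rather than the fixed single-channel filters here.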
Related papers
- Hierarchical Temporal Context Learning for Camera-based Semantic Scene Completion [57.232688209606515]
We present HTCL, a novel Hierarchical Temporal Context Learning paradigm for improving camera-based semantic scene completion.
Our method ranks $1st$ on the Semantic KITTI benchmark and even surpasses LiDAR-based methods in terms of mIoU.
arXiv Detail & Related papers (2024-07-02T09:11:17Z)
- Unsupervised Multi-modal Feature Alignment for Time Series Representation Learning [20.655943795843037]
We introduce an innovative approach that focuses on aligning and binding time series representations encoded from different modalities.
In contrast to conventional methods that fuse features from multiple modalities, our proposed approach simplifies the neural architecture by retaining a single time series encoder.
Our approach outperforms existing state-of-the-art URL methods across diverse downstream tasks.
arXiv Detail & Related papers (2023-12-09T22:31:20Z)
- Learning from One Continuous Video Stream [70.30084026960819]
We introduce a framework for online learning from a single continuous video stream.
This poses great challenges given the high correlation between consecutive video frames.
We employ pixel-to-pixel modelling as a practical and flexible way to switch between pre-training and single-stream evaluation.
arXiv Detail & Related papers (2023-12-01T14:03:30Z)
- Few-shot Image Classification based on Gradual Machine Learning [6.935034849731568]
Few-shot image classification aims to accurately classify unlabeled images using only a few labeled samples.
We propose a novel approach based on the non-i.i.d. paradigm of gradual machine learning (GML).
We show that the proposed approach can improve the SOTA performance by 1-5% in terms of accuracy.
arXiv Detail & Related papers (2023-07-28T12:30:41Z)
- Contrastive Shapelet Learning for Unsupervised Multivariate Time Series Representation Learning [21.437162740349045]
Unsupervised representation learning (URL) can learn generalizable representations for many downstream tasks without using inaccessible labels.
We propose a novel URL framework by learning time-series-specific shapelet-based representation through a popular contrasting learning paradigm.
A unified shapelet-based encoder and a novel learning objective with multi-grained contrasting and multi-scale alignment are particularly designed to achieve our goal.
arXiv Detail & Related papers (2023-05-30T09:31:57Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferrable time series representations based on transformer networks.
The TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- DyG2Vec: Efficient Representation Learning for Dynamic Graphs [26.792732615703372]
Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns.
We present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings.
arXiv Detail & Related papers (2022-10-30T18:13:04Z)
- Large Scale Time-Series Representation Learning via Simultaneous Low and High Frequency Feature Bootstrapping [7.0064929761691745]
We propose a non-contrastive self-supervised learning approach that efficiently captures low- and high-frequency time-varying features.
Our method takes raw time series data as input and creates two different augmented views for two branches of the model.
To demonstrate the robustness of our model we performed extensive experiments and ablation studies on five real-world time-series datasets.
arXiv Detail & Related papers (2022-04-24T14:39:47Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
- Learning multiview 3D point cloud registration [74.39499501822682]
We present a novel, end-to-end learnable, multiview 3D point cloud registration algorithm.
Our approach outperforms the state-of-the-art by a significant margin, while being end-to-end trainable and computationally less costly.
arXiv Detail & Related papers (2020-01-15T03:42:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.