SMATE: Semi-Supervised Spatio-Temporal Representation Learning on
Multivariate Time Series
- URL: http://arxiv.org/abs/2110.00578v1
- Date: Fri, 1 Oct 2021 17:59:46 GMT
- Title: SMATE: Semi-Supervised Spatio-Temporal Representation Learning on
Multivariate Time Series
- Authors: Jingwei Zuo, Karine Zeitouni and Yehia Taher
- Abstract summary: We propose SMATE, a novel semi-supervised model for learning an interpretable Spatio-Temporal representation from weakly labeled MTS.
We validate the learned representation on 22 public datasets from the UEA MTS archive.
- Score: 0.6445605125467572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning from Multivariate Time Series (MTS) has attracted widespread
attention in recent years. In particular, label shortage is a real challenge for
MTS classification, given the complex dimensional and sequential structure of
the data. Unlike self-training and positive unlabeled learning, which rely on
distance-based classifiers, in this paper we propose SMATE, a novel
semi-supervised model for learning an interpretable Spatio-Temporal
representation from weakly labeled MTS. We empirically validate the learned
representation on 22 public datasets from the UEA MTS archive, comparing it
with 13 state-of-the-art baseline methods for fully supervised tasks and four
baselines for semi-supervised tasks. The results show the reliability and
efficiency of the proposed method.
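As a rough illustration of the semi-supervised ingredient described above, the sketch below (PyTorch) builds class centroids from labeled embeddings and pseudo-labels unlabeled series by nearest centroid. The encoder, names, and the nearest-centroid rule are illustrative assumptions, not SMATE's exact procedure.

    # Illustrative only: semi-supervised centroid regularization, not the
    # authors' exact SMATE training objective.
    import torch
    import torch.nn.functional as F

    def semi_supervised_loss(encoder, x_lab, y_lab, x_unlab, num_classes):
        z_lab = encoder(x_lab)        # (B1, d) embeddings of labeled series
        z_unlab = encoder(x_unlab)    # (B2, d) embeddings of unlabeled series
        # Class centroids from labeled embeddings (assumes every class
        # appears at least once in the labeled batch).
        centroids = torch.stack(
            [z_lab[y_lab == c].mean(dim=0) for c in range(num_classes)])
        # Pseudo-label unlabeled samples by nearest centroid (Euclidean).
        y_pseudo = torch.cdist(z_unlab, centroids).argmin(dim=1)
        # Pull every embedding toward its (pseudo-)class centroid.
        return (F.mse_loss(z_lab, centroids[y_lab])
                + F.mse_loss(z_unlab, centroids[y_pseudo]))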
Related papers
- M-CELS: Counterfactual Explanation for Multivariate Time Series Data Guided by Learned Saliency Maps [0.9374652839580181]
We introduce M-CELS, a counterfactual explanation model designed to enhance interpretability in multidimensional time series classification tasks.
Results demonstrate the superior performance of M-CELS in terms of validity, proximity, and sparsity.
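A saliency-guided counterfactual search of the general kind described here can be sketched as below; the optimization setup, loss weights, and the assumption that a saliency map is given are ours, not the authors' M-CELS formulation.

    # Hypothetical sketch: perturb an input series toward a target class,
    # gating edits by a (given) saliency map and keeping them sparse.
    import torch
    import torch.nn.functional as F

    def counterfactual(model, x, target, saliency, steps=200, lr=0.05, lam=0.1):
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        target = torch.tensor([target])
        for _ in range(steps):
            x_cf = x + saliency * delta   # saliency gates where edits land
            loss = (F.cross_entropy(model(x_cf), target)
                    + lam * delta.abs().mean())  # sparsity term
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (x + saliency * delta).detach()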
arXiv Detail & Related papers (2024-11-04T22:16:24Z)
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
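The mean-teacher machinery PMT builds on can be sketched generically as follows; the EMA rate and KL consistency loss are common defaults, not necessarily the paper's exact choices.

    # Generic mean-teacher machinery (assumed defaults, not PMT's exact setup).
    import torch
    import torch.nn.functional as F

    def ema_update(teacher, student, alpha=0.99):
        # Teacher weights follow an exponential moving average of the student's.
        with torch.no_grad():
            for t_p, s_p in zip(teacher.parameters(), student.parameters()):
                t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

    def consistency_loss(student, teacher, x_unlab):
        # Teacher predictions on unlabeled inputs act as soft pseudo labels.
        with torch.no_grad():
            p_teacher = torch.softmax(teacher(x_unlab), dim=1)
        log_p_student = torch.log_softmax(student(x_unlab), dim=1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean")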
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
However, transferring these pretrained models to downstream tasks may encounter a task discrepancy, because pretraining is formulated as image classification or object discrimination.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
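In spirit, multi-task supervised pretraining amounts to one shared backbone with one head and one loss per task; the sketch below uses an unweighted loss sum and invented head names, both of which are assumptions rather than the MTP recipe.

    # Sketch of shared-backbone multi-task pretraining (head names and the
    # unweighted loss sum are assumptions, not the MTP recipe).
    import torch.nn as nn

    class MultiTaskPretrainer(nn.Module):
        def __init__(self, backbone, heads):
            super().__init__()
            self.backbone = backbone
            self.heads = nn.ModuleDict(heads)  # e.g. semseg/instseg/rotdet

        def forward(self, images, targets, criteria):
            feats = self.backbone(images)
            # One loss per task, summed; `criteria` maps task -> loss fn.
            return sum(criteria[t](head(feats), targets[t])
                       for t, head in self.heads.items())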
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
- SMC-NCA: Semantic-guided Multi-level Contrast for Semi-supervised Temporal Action Segmentation [53.010417880335424]
Semi-supervised temporal action segmentation (SS-TAS) aims to perform frame-wise classification in long untrimmed videos.
Recent studies have shown the potential of contrastive learning in unsupervised representation learning using unlabelled data.
We propose a novel Semantic-guided Multi-level Contrast scheme with a Neighbourhood-Consistency-Aware unit (SMC-NCA) to extract strong frame-wise representations.
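A bare frame-wise InfoNCE term, shown below, can stand in for the idea; the paper's semantic-guided and neighbourhood-consistency-aware terms are more elaborate than this sketch.

    # Plain frame-level InfoNCE between two augmented views of one video;
    # the paper's semantic/neighbourhood terms are more elaborate.
    import torch
    import torch.nn.functional as F

    def frame_infonce(z1, z2, temperature=0.1):
        # z1, z2: (T, d); frame t in view 1 should match frame t in view 2.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature    # (T, T) similarities
        targets = torch.arange(z1.size(0))    # positives on the diagonal
        return F.cross_entropy(logits, targets)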
arXiv Detail & Related papers (2023-12-19T17:26:44Z)
- Exploring Progress in Multivariate Time Series Forecasting: Comprehensive Benchmarking and Heterogeneity Analysis [70.78170766633039]
We address the need for a reliable and fair means of assessing MTS forecasting proposals.
BasicTS+ is a benchmark designed to enable fair, comprehensive, and reproducible comparison of MTS forecasting solutions.
We apply BasicTS+ along with rich datasets to assess the capabilities of more than 45 MTS forecasting solutions.
arXiv Detail & Related papers (2023-10-09T19:52:22Z)
- Multi-Level Contrastive Learning for Dense Prediction Task [59.591755258395594]
We present Multi-Level Contrastive Learning for Dense Prediction Task (MCL), an efficient self-supervised method for learning region-level feature representation for dense prediction tasks.
Our method is motivated by the three key factors in detection: localization, scale consistency and recognition.
Our method consistently outperforms the recent state-of-the-art methods on various datasets with significant margins.
arXiv Detail & Related papers (2023-04-04T17:59:04Z)
- Enhancing Multivariate Time Series Classifiers through Self-Attention and Relative Positioning Infusion [4.18804572788063]
Time Series Classification (TSC) is an important and challenging task for many visual computing applications.
We propose two novel attention blocks that can enhance deep learning-based TSC approaches.
We show that adding the proposed attention blocks improves base models' average accuracy by up to 3.6%.
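One plausible shape for such an attention block is sketched below; the design is an assumption on our part, and the relative-positioning component is omitted.

    # Assumed design: residual self-attention over time steps, to be dropped
    # between stages of a TSC backbone; not the paper's exact blocks.
    import torch.nn as nn

    class AttentionBlock(nn.Module):
        def __init__(self, channels, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
            self.norm = nn.LayerNorm(channels)

        def forward(self, x):                 # x: (batch, time, channels)
            attended, _ = self.attn(x, x, x)  # time steps attend to each other
            return self.norm(x + attended)    # residual + layer norm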
arXiv Detail & Related papers (2023-02-13T20:50:34Z)
- CaSS: A Channel-aware Self-supervised Representation Learning Framework for Multivariate Time Series Classification [4.415086501328683]
We propose a unified channel-aware self-supervised learning framework CaSS.
We first design a new Transformer-based encoder Channel-aware Transformer (CaT) to capture the complex relationships between different time channels of MTS.
Second, we combine two novel pretext tasks Next Trend Prediction (NTP) and Contextual Similarity (CS) for the self-supervised representation learning with our proposed encoder.
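A toy version of the Next Trend Prediction pretext task might look as follows; the split point, trend definition, and classification head are assumptions rather than the paper's setup, and the Contextual Similarity task is omitted.

    # Toy Next Trend Prediction: encode a prefix of the series and classify
    # whether the remaining window trends up or down (assumed formulation).
    import torch.nn.functional as F

    def ntp_loss(encoder, head, x, split=0.8):
        # x: (batch, channels, time)
        cut = int(x.size(-1) * split)
        prefix, future = x[..., :cut], x[..., cut:]
        z = encoder(prefix)                       # (batch, d) representation
        # Label 1 if the future window's mean exceeds the recent past's mean.
        recent = prefix[..., -future.size(-1):]
        trend_up = (future.mean(dim=(1, 2)) > recent.mean(dim=(1, 2))).long()
        return F.cross_entropy(head(z), trend_up)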
arXiv Detail & Related papers (2022-03-08T08:36:40Z)
- SAITS: Self-Attention-based Imputation for Time Series [6.321652307514677]
SAITS is a novel method based on the self-attention mechanism for missing value imputation in time series.
It learns missing values from a weighted combination of two diagonally-masked self-attention blocks.
Tests show that SAITS efficiently outperforms state-of-the-art methods on the time-series imputation task.
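The diagonal-masking idea is easy to show in isolation: each time step must be reconstructed from the other steps, so self-attention is forbidden from attending to its own position. The sizes below are arbitrary, and only one of the two blocks SAITS combines is shown.

    # Diagonally-masked self-attention in isolation (arbitrary sizes; SAITS
    # combines two such blocks with learned weights).
    import torch
    import torch.nn as nn

    def diagonal_mask(seq_len):
        # True entries are blocked: a position may not attend to itself.
        return torch.eye(seq_len, dtype=torch.bool)

    attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
    x = torch.randn(8, 100, 64)                   # (batch, time, features)
    out, _ = attn(x, x, x, attn_mask=diagonal_mask(100))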
arXiv Detail & Related papers (2022-02-17T08:40:42Z)
- Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
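The plain cross-entropy BiLSTM baseline referenced above is compact enough to sketch; the sizes are assumptions, and the paper's mixed objective adds further terms on top of this.

    # Minimal BiLSTM text classifier trained with cross-entropy (assumed
    # sizes; the paper's mixed objective goes beyond this baseline).
    import torch.nn as nn

    class BiLSTMClassifier(nn.Module):
        def __init__(self, vocab_size, emb_dim=300, hidden=512, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                bidirectional=True)
            self.fc = nn.Linear(2 * hidden, num_classes)

        def forward(self, tokens):                # tokens: (batch, seq_len)
            out, _ = self.lstm(self.embed(tokens))
            return self.fc(out.mean(dim=1))       # mean-pool over time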
arXiv Detail & Related papers (2020-09-08T21:55:22Z)