Deep Semi-Supervised Learning for Time Series Classification
- URL: http://arxiv.org/abs/2102.03622v1
- Date: Sat, 6 Feb 2021 17:40:56 GMT
- Title: Deep Semi-Supervised Learning for Time Series Classification
- Authors: Jann Goschenhofer, Rasmus Hvingelby, David Rügamer, Janek Thomas,
Moritz Wagner, Bernd Bischl
- Abstract summary: We investigate the transferability of state-of-the-art deep semi-supervised models from image to time series classification.
We show that these transferred semi-supervised models show significant performance gains over strong supervised, semi-supervised and self-supervised alternatives.
- Score: 1.096924880299061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While semi-supervised learning has gained much attention in computer vision
on image data, limited research exists on its applicability to the time
series domain. In this work, we investigate the transferability of
state-of-the-art deep semi-supervised models from image to time series
classification. We discuss the necessary model adaptations, in particular an
appropriate model backbone architecture and the use of tailored data
augmentation strategies. Based on these adaptations, we explore the potential
of deep semi-supervised learning in the context of time series classification
by evaluating our methods on large public time series classification problems
with varying amounts of labelled samples. We perform extensive comparisons
under a decidedly realistic and appropriate evaluation scheme with a unified
reimplementation of all algorithms considered, which has so far been lacking in the
field. We find that these transferred semi-supervised models show significant
performance gains over strong supervised, semi-supervised and self-supervised
alternatives, especially for scenarios with very few labelled samples.
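The tailored data augmentation strategies the abstract refers to are not spelled out in this summary. As a minimal sketch, two augmentations commonly used for time series (jittering and magnitude scaling; the specific choices and parameters here are assumptions, not the authors' configuration) could be composed and applied to unlabelled samples in a consistency-based semi-supervised pipeline:

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Add Gaussian noise to every time step (a weak augmentation)."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Rescale each channel by a random factor drawn around 1.0."""
    rng = rng or np.random.default_rng()
    factors = rng.normal(1.0, sigma, size=(x.shape[0], 1))
    return x * factors

def augment(x, rng=None):
    """Compose both transforms; consistency-regularisation methods apply
    such stochastic augmentations to unlabelled samples."""
    return scale(jitter(x, rng=rng), rng=rng)

# A 2-channel series of length 100 keeps its shape after augmentation.
series = np.sin(np.linspace(0, 4 * np.pi, 200)).reshape(2, 100)
print(augment(series).shape)  # (2, 100)
```

Unlike image crops or flips, these transforms preserve the temporal ordering of the signal, which is why they transfer more safely to time series.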
Related papers
- GM-DF: Generalized Multi-Scenario Deepfake Detection [49.072106087564144]
Existing face forgery detection usually follows the paradigm of training models in a single domain.
In this paper, we elaborately investigate the generalization capacity of deepfake detection models when jointly trained on multiple face forgery detection datasets.
arXiv Detail & Related papers (2024-06-28T17:42:08Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM)
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Semi-Supervised Learning for hyperspectral images by non parametrically
predicting view assignment [25.198550162904713]
Hyperspectral image (HSI) classification is gaining momentum because of the rich spectral information inherent in the images.
Recently, to train deep learning models effectively with minimal labelled samples, unlabelled samples have also been leveraged in self-supervised and semi-supervised settings.
In this work, we leverage the idea of semi-supervised learning to assist the discriminative self-supervised pretraining of the models.
arXiv Detail & Related papers (2023-06-19T14:13:56Z) - Quantifying Quality of Class-Conditional Generative Models in
Time-Series Domain [4.219228636765818]
We introduce the InceptionTime Score (ITS) and the Fréchet InceptionTime Distance (FITD) to gauge the qualitative performance of class-conditional generative models in the time-series domain.
We conduct extensive experiments on 80 different datasets to study the discriminative capabilities of proposed metrics.
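FITD is described as a time-series analogue of the Fréchet Inception Distance, computed over embeddings from an InceptionTime network. The underlying distance between the two Gaussian feature statistics follows the standard formula; this is a sketch of that formula, not the authors' exact implementation, and assumes positive semi-definite covariance matrices:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2) - 2 * Tr((sigma1 sigma2)^(1/2)).
    For PSD covariances, the eigenvalues of sigma1 @ sigma2 are real and
    nonnegative, so the trace of the matrix square root equals the sum
    of the square roots of those eigenvalues."""
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1 + sigma2) - 2.0 * tr_sqrt)

# Identical distributions give distance 0.
mu, sigma = np.zeros(3), np.eye(3)
print(frechet_distance(mu, sigma, mu, sigma))  # 0.0
```

In practice, `mu` and `sigma` would be the mean and covariance of embeddings extracted from real and generated time series, respectively.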
arXiv Detail & Related papers (2022-10-14T08:13:20Z) - Current Trends in Deep Learning for Earth Observation: An Open-source
Benchmark Arena for Image Classification [7.511257876007757]
'AiTLAS: Benchmark Arena' is an open-source benchmark framework for evaluating state-of-the-art deep learning approaches for image classification.
We present a comprehensive comparative analysis of more than 400 models derived from nine different state-of-the-art architectures.
arXiv Detail & Related papers (2022-07-14T20:18:58Z) - CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z) - Semi-supervised Deep Learning for Image Classification with Distribution
Mismatch: A Survey [1.5469452301122175]
Deep learning models rely on an abundance of labelled observations to train a prospective model.
Gathering labelled observations is expensive, which often makes deep learning models impractical.
In many situations different unlabelled data sources might be available.
This raises the risk of a significant distribution mismatch between the labelled and unlabelled datasets.
arXiv Detail & Related papers (2022-03-01T02:46:00Z) - Neural Contextual Anomaly Detection for Time Series [7.523820334642732]
We introduce Neural Contextual Anomaly Detection (NCAD), a framework for anomaly detection on time series.
NCAD scales seamlessly from the unsupervised to the supervised setting.
We demonstrate empirically on standard benchmark datasets that our approach obtains a state-of-the-art performance.
arXiv Detail & Related papers (2021-07-16T04:33:53Z) - UniT: Unified Knowledge Transfer for Any-shot Object Detection and
Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.