Self-Distilled Representation Learning for Time Series
- URL: http://arxiv.org/abs/2311.11335v1
- Date: Sun, 19 Nov 2023 14:34:01 GMT
- Title: Self-Distilled Representation Learning for Time Series
- Authors: Felix Pieper, Konstantin Ditschuneit, Martin Genzel, Alexandra Lindt and Johannes Otterbach
- Abstract summary: Self-supervised learning for time-series data holds potential similar to that recently unleashed in Natural Language Processing and Computer Vision.
We propose a conceptually simple yet powerful non-contrastive approach, based on the data2vec self-distillation framework.
We demonstrate the competitiveness of our approach for classification and forecasting as downstream tasks, comparing with state-of-the-art self-supervised learning methods on the UCR and UEA archives as well as the ETT and Electricity datasets.
- Score: 45.51976109748732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning for time-series data holds potential similar to that
recently unleashed in Natural Language Processing and Computer Vision. While
most existing works in this area focus on contrastive learning, we propose a
conceptually simple yet powerful non-contrastive approach, based on the
data2vec self-distillation framework. The core of our method is a
student-teacher scheme that predicts the latent representation of an input time
series from masked views of the same time series. This strategy avoids strong
modality-specific assumptions and biases typically introduced by the design of
contrastive sample pairs. We demonstrate the competitiveness of our approach
for classification and forecasting as downstream tasks, comparing with
state-of-the-art self-supervised learning methods on the UCR and UEA archives
as well as the ETT and Electricity datasets.
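To make the student-teacher scheme above concrete, here is a minimal PyTorch sketch of data2vec-style self-distillation for time series. The encoder backbone, masking scheme, prediction head, and all hyperparameters (`dim`, `mask_ratio`, `ema_decay`) are illustrative assumptions for exposition, not the paper's actual configuration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy time-series encoder: a 1D-conv stem followed by Transformer blocks
    (a stand-in for the paper's backbone, which may differ)."""

    def __init__(self, in_channels: int = 1, dim: int = 64, depth: int = 2):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, dim, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, T)
        h = self.stem(x).transpose(1, 2)                   # (B, T, dim)
        return self.blocks(h)                              # (B, T, dim)


class SelfDistillation(nn.Module):
    """data2vec-style self-distillation: the student predicts the teacher's
    latent representation of the full series at masked time steps."""

    def __init__(self, dim: int = 64, mask_ratio: float = 0.5, ema_decay: float = 0.999):
        super().__init__()
        self.student = Encoder(dim=dim)
        self.teacher = copy.deepcopy(self.student)         # teacher = EMA copy, no gradients
        for p in self.teacher.parameters():
            p.requires_grad = False
        self.mask_token = nn.Parameter(torch.zeros(dim))   # learnable placeholder for masked steps
        self.predictor = nn.Linear(dim, dim)
        self.mask_ratio = mask_ratio
        self.ema_decay = ema_decay

    @torch.no_grad()
    def update_teacher(self) -> None:
        """Exponential moving average of the student weights (call after each optimizer step)."""
        for ps, pt in zip(self.student.parameters(), self.teacher.parameters()):
            pt.mul_(self.ema_decay).add_(ps, alpha=1.0 - self.ema_decay)

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, C, T)
        B, _, T = x.shape
        mask = torch.rand(B, T, device=x.device) < self.mask_ratio  # True = masked time step

        with torch.no_grad():                               # teacher encodes the unmasked series
            target = self.teacher(x)

        h = self.student.stem(x).transpose(1, 2)            # (B, T, dim)
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        pred = self.predictor(self.student.blocks(h))

        # regression loss only on the masked positions
        return F.smooth_l1_loss(pred[mask], target[mask])
```

In a full pipeline one would pre-train on unlabeled series, call `update_teacher()` after every optimizer step, and then evaluate the frozen student representations with a lightweight classifier or forecasting head on the downstream datasets; the training-loop details here are assumptions rather than the authors' protocol.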
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - XAI for time-series classification leveraging image highlight methods [0.0699049312989311]
We present a Deep Neural Network (DNN) in a teacher-student architecture (distillation model) that offers interpretability in time-series classification tasks.
arXiv Detail & Related papers (2023-11-28T10:59:18Z) - Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z) - Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Learning by Distillation: A Self-Supervised Learning Framework for
Optical Flow Estimation [71.76008290101214]
DistillFlow is a knowledge distillation approach to learning optical flow.
It achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets.
Our models ranked 1st among all monocular methods on the KITTI 2015 benchmark, and outperform all published methods on the Sintel Final benchmark.
arXiv Detail & Related papers (2021-06-08T09:13:34Z) - Semi-supervised Facial Action Unit Intensity Estimation with Contrastive
Learning [54.90704746573636]
Our method does not require manually selecting key frames and produces state-of-the-art results with as little as 2% of annotated frames.
We experimentally validate that our method outperforms existing methods when working with as little as 2% of randomly chosen data.
arXiv Detail & Related papers (2020-11-03T17:35:57Z)