Utilizing Expert Features for Contrastive Learning of Time-Series Representations
- URL: http://arxiv.org/abs/2206.11517v1
- Date: Thu, 23 Jun 2022 07:56:27 GMT
- Title: Utilizing Expert Features for Contrastive Learning of Time-Series Representations
- Authors: Manuel Nonnenmacher, Lukas Oldenburg, Ingo Steinwart, David Reeb
- Abstract summary: We present an approach that incorporates expert knowledge for time-series representation learning.
Our method employs expert features to replace the commonly used data transformations in previous contrastive learning approaches.
We demonstrate on three real-world time-series datasets that ExpCLR surpasses several state-of-the-art methods for both unsupervised and semi-supervised representation learning.
- Score: 4.960805676180953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach that incorporates expert knowledge for time-series
representation learning. Our method employs expert features to replace the
commonly used data transformations in previous contrastive learning approaches.
We do this since time-series data frequently stems from the industrial or
medical field where expert features are often available from domain experts,
while transformations are generally elusive for time-series data. We start by
proposing two properties that useful time-series representations should fulfill
and show that current representation learning approaches do not ensure these
properties. We therefore devise ExpCLR, a novel contrastive learning approach
built on an objective that utilizes expert features to encourage both
properties for the learned representation. Finally, we demonstrate on three
real-world time-series datasets that ExpCLR surpasses several state-of-the-art
methods for both unsupervised and semi-supervised representation learning.
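The abstract states the idea but not the objective itself. As a rough, hypothetical illustration of contrastive learning driven by expert features (soft targets derived from expert-feature similarity in place of augmentation-based positive pairs), one could write a PyTorch sketch like the following; the function name, soft-target construction, and temperature are assumptions, not the paper's actual ExpCLR loss:

```python
import torch
import torch.nn.functional as F

def expert_feature_contrastive_loss(z, e, temperature=0.1):
    """z: (B, d) learned representations of a batch of time-series windows.
    e: (B, k) expert features for the same windows (e.g. domain statistics)."""
    z = F.normalize(z, dim=1)
    e = F.normalize(e, dim=1)
    b = z.size(0)
    sim_z = (z @ z.T) / temperature  # pairwise similarity of representations
    sim_e = e @ e.T                  # pairwise similarity of expert features
    off_diag = ~torch.eye(b, dtype=torch.bool, device=z.device)  # drop self-pairs
    # Soft targets: windows with similar expert features should also be
    # close in representation space (a continuous analogue of positive pairs).
    target = F.softmax(sim_e[off_diag].view(b, b - 1), dim=1)
    log_pred = F.log_softmax(sim_z[off_diag].view(b, b - 1), dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```

The point this sketch illustrates is that no data transformations are required: the expert features alone decide which pairs of series are pulled together.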
Related papers
- From Pixels to Predictions: Spectrogram and Vision Transformer for Better Time Series Forecasting [15.234725654622135]
Time series forecasting plays a crucial role in decision-making across various domains.
Recent studies have explored image-driven approaches using computer vision models to address these challenges.
We propose a novel approach that uses time-frequency spectrograms as the visual representation of time series data.
arXiv Detail & Related papers (2024-03-17T00:14:29Z)
- Universal Time-Series Representation Learning: A Survey [14.340399848964662]
Time-series data exists in every corner of real-world systems and services.
Deep learning has demonstrated remarkable performance in extracting hidden patterns and features from time-series data.
arXiv Detail & Related papers (2024-01-08T08:00:04Z)
- T-Rep: Representation Learning for Time Series using Time-Embeddings [5.885238773559017]
We propose T-Rep, a self-supervised method to learn time series representations at a timestep granularity.
T-Rep learns vector embeddings of time alongside its feature extractor, to extract temporal features.
We evaluate T-Rep on downstream classification, forecasting, and anomaly detection tasks.
arXiv Detail & Related papers (2023-10-06T15:45:28Z)
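The T-Rep entry above says time embeddings are learned jointly with the feature extractor. A toy sketch of that general pattern (learned timestep embeddings concatenated with the input before encoding); the module name, the GRU choice, and all dimensions are hypothetical, not T-Rep's architecture:

```python
import torch
import torch.nn as nn

class TimeEmbeddingEncoder(nn.Module):
    """Toy encoder that learns an embedding of the timestep index jointly
    with the feature extractor (illustrative only, not T-Rep itself)."""
    def __init__(self, n_features, max_len=512, d_time=16, d_out=64):
        super().__init__()
        self.time_emb = nn.Embedding(max_len, d_time)  # learned time embedding
        self.encoder = nn.GRU(n_features + d_time, d_out, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        t = torch.arange(x.size(1), device=x.device)      # timestep indices
        te = self.time_emb(t).expand(x.size(0), -1, -1)   # (batch, seq_len, d_time)
        h, _ = self.encoder(torch.cat([x, te], dim=-1))   # per-timestep representations
        return h                                          # (batch, seq_len, d_out)
```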
- TimeTuner: Diagnosing Time Representations for Time-Series Forecasting with Counterfactual Explanations [3.8357850372472915]
This paper contributes a novel visual analytics framework, namely TimeTuner, to help analysts understand how model behaviors are associated with the locality, stationarity, and correlations of time-series representations.
We show that TimeTuner can help characterize time-series representations and guide the feature engineering processes.
arXiv Detail & Related papers (2023-07-19T11:40:15Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is selecting appropriate augmentations that impose suitable priors for constructing feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
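The InfoTS entry above does not spell out the information-aware criterion, so the sketch below only illustrates the surrounding pattern: score each candidate augmentation on a batch and pick the best one adaptively. The jitter/scale candidates and the cosine-similarity proxy score are placeholders, not the InfoTS objective:

```python
import torch
import torch.nn.functional as F

def jitter(x, sigma=0.03):
    """Additive Gaussian noise, a common time-series augmentation."""
    return x + sigma * torch.randn_like(x)

def scale(x, sigma=0.1):
    """Random per-channel magnitude scaling; x is (batch, seq_len, channels)."""
    return x * (1 + sigma * torch.randn(x.size(0), 1, x.size(2), device=x.device))

def select_augmentation(x, encoder, candidates=(jitter, scale)):
    """Return the candidate scoring best under a toy proxy criterion
    (mean cosine similarity of augmented views to the anchor representation)."""
    with torch.no_grad():
        z = encoder(x)  # (batch, d) anchor representations
        scores = [F.cosine_similarity(z, encoder(aug(x))).mean()
                  for aug in candidates]
    return candidates[int(torch.stack(scores).argmax())]
```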
- Measuring disentangled generative spatio-temporal representation [9.264758623908813]
We adopt two state-of-the-art disentangled representation learning methods and apply them to three large-scale public spatio-temporal datasets.
We find that these methods can be used to discover real-world semantics describing the variables in the learned representation.
arXiv Detail & Related papers (2022-02-10T03:57:06Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that transfer better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across (i) object- versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
- Reinforcement Learning with Prototypical Representations [114.35801511501639]
Proto-RL is a self-supervised framework that ties representation learning with exploration through prototypical representations.
These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations.
This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
arXiv Detail & Related papers (2021-02-22T18:56:34Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
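The LFME entry above describes aggregating several 'Expert' models into one student. As context, plain multi-teacher knowledge distillation (without LFME's self-paced weighting, which the summary does not detail) can be written roughly as:

```python
import torch
import torch.nn.functional as F

def multi_expert_distillation_loss(student_logits, expert_logits_list, T=2.0):
    """KL divergence between the student and the averaged softened
    predictions of several expert teachers (standard multi-teacher KD;
    LFME's self-paced weighting scheme is omitted)."""
    teacher_probs = torch.stack(
        [F.softmax(el / T, dim=1) for el in expert_logits_list]
    ).mean(dim=0)                                   # average the experts
    log_student = F.log_softmax(student_logits / T, dim=1)
    # T*T rescaling keeps gradient magnitudes comparable across temperatures
    return F.kl_div(log_student, teacher_probs, reduction="batchmean") * T * T
```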
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.