Learning Without Augmenting: Unsupervised Time Series Representation Learning via Frame Projections
- URL: http://arxiv.org/abs/2510.22655v1
- Date: Sun, 26 Oct 2025 12:36:29 GMT
- Title: Learning Without Augmenting: Unsupervised Time Series Representation Learning via Frame Projections
- Authors: Berken Utku Demirel, Christian Holz
- Abstract summary: Self-supervised learning has emerged as a powerful paradigm for learning representations without labeled data. Most SSL approaches rely on strong, well-established, handcrafted data augmentations to generate diverse views for representation learning. We propose an unsupervised representation learning method that replaces augmentations by generating views using orthonormal bases and overcomplete frames.
- Score: 35.715609556178165
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Self-supervised learning (SSL) has emerged as a powerful paradigm for learning representations without labeled data. Most SSL approaches rely on strong, well-established, handcrafted data augmentations to generate diverse views for representation learning. However, designing such augmentations requires domain-specific knowledge and implicitly imposes representational invariances on the model, which can limit generalization. In this work, we propose an unsupervised representation learning method that replaces augmentations by generating views using orthonormal bases and overcomplete frames. We show that embeddings learned from orthonormal and overcomplete spaces reside on distinct manifolds, shaped by the geometric biases introduced by representing samples in different spaces. By jointly leveraging the complementary geometry of these distinct manifolds, our approach achieves superior performance without artificially increasing data diversity through strong augmentations. We demonstrate the effectiveness of our method on nine datasets across five temporal sequence tasks, where signal-specific characteristics make data augmentations particularly challenging. Without relying on augmentation-induced diversity, our method achieves performance gains of up to 15--20\% over existing self-supervised approaches. Source code: https://github.com/eth-siplab/Learning-with-FrameProjections
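The core idea of generating complementary views by representing the same signal in an orthonormal basis versus a redundant frame can be sketched as follows. This is a minimal NumPy illustration, assuming a DCT-II orthonormal basis and, as the overcomplete frame, the union of that basis with the identity; these are illustrative choices, not necessarily the paper's exact constructions (see the linked source code for those).

```python
import numpy as np

def dct_basis(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix; rows are basis vectors."""
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # time index
    B = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    B[0, :] *= np.sqrt(1.0 / n)
    B[1:, :] *= np.sqrt(2.0 / n)
    return B

def frame_views(x: np.ndarray):
    """Return two views of a 1-D signal: its coefficients in an
    orthonormal basis and in an overcomplete (tight) frame."""
    n = x.shape[0]
    B = dct_basis(n)                # orthonormal basis, shape (n, n)
    F = np.vstack([B, np.eye(n)])   # overcomplete frame, shape (2n, n); F.T @ F = 2 I
    return B @ x, F @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
v_ortho, v_frame = frame_views(x)

# The orthonormal view preserves the signal's norm exactly; the tight
# frame doubles its energy and admits exact reconstruction via
# x = F.T @ (F @ x) / 2.
```

Because `F` here is a tight frame with frame bound 2, both views retain the full signal, but the redundant view spreads the same information across correlated coefficients; that difference in geometry between the two coefficient spaces is the kind of complementary structure the method exploits in place of handcrafted augmentations.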
Related papers
- Decoupling Augmentation Bias in Prompt Learning for Vision-Language Models [8.634414503821697]
Methods such as CoCoOp have shown that replacing handcrafted prompts with learnable vectors, known as prompt learning, can result in improved performance. While traditional zero-shot learning techniques benefit from various data augmentation strategies, prompt learning has primarily focused on text-based modifications. We explore how image-level augmentations, particularly those that introduce attribute-specific variations, can support and enhance prompt learning.
arXiv Detail & Related papers (2025-11-05T11:15:16Z)
- Can Generative Models Improve Self-Supervised Representation Learning? [0.7999703756441756]
We introduce a framework that enriches the self-supervised learning (SSL) paradigm by utilizing generative models to produce semantically consistent image augmentations. Our results show that our framework significantly enhances the quality of learned visual representations by up to 10% Top-1 accuracy in downstream tasks.
arXiv Detail & Related papers (2024-03-09T17:17:07Z)
- The Common Stability Mechanism behind most Self-Supervised Learning Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z)
- Parametric Augmentation for Time Series Contrastive Learning [33.47157775532995]
In contrastive learning, positive examples are created to help the model learn robust and discriminative representations; typically, preset human intuition directs the selection of the relevant data augmentations. We propose a contrastive learning framework with parametric augmentation, AutoTCL, which can be adaptively employed to support time series representation learning.
arXiv Detail & Related papers (2024-02-16T03:51:14Z)
- Combating Representation Learning Disparity with Geometric Harmonization [50.29859682439571]
We propose a novel Geometric Harmonization (GH) method to encourage category-level uniformity in representation learning.
Our proposal does not alter the setting of SSL and can be easily integrated into existing methods in a low-cost manner.
arXiv Detail & Related papers (2023-10-26T17:41:11Z)
- Self-supervised Representation Learning From Random Data Projectors [13.764897214965766]
This paper presents an SSRL approach that can be applied to any data modality and network architecture.
We show that high-quality data representations can be learned by reconstructing random data projections.
arXiv Detail & Related papers (2023-10-11T18:00:01Z)
- Joint Data and Feature Augmentation for Self-Supervised Representation Learning on Point Clouds [4.723757543677507]
We propose a fusion contrastive learning framework to combine data augmentations in Euclidean space and feature augmentations in feature space.
We conduct extensive object classification experiments and object part segmentation experiments to validate the transferability of the proposed framework.
Experimental results demonstrate that the proposed framework is effective for learning point cloud representations in a self-supervised manner.
arXiv Detail & Related papers (2022-11-02T14:58:03Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Improving Transferability of Representations via Augmentation-Aware Self-Supervision [117.15012005163322]
AugSelf is an auxiliary self-supervised loss that learns the difference of augmentation parameters between two randomly augmented samples.
Our intuition is that AugSelf encourages the model to preserve augmentation-aware information in its learned representations, which could benefit their transferability.
AugSelf can easily be incorporated into recent state-of-the-art representation learning methods with a negligible additional training cost.
arXiv Detail & Related papers (2021-11-18T10:43:50Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into subspaces, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.