Self-supervised contrastive learning performs non-linear system identification
- URL: http://arxiv.org/abs/2410.14673v1
- Date: Fri, 18 Oct 2024 17:59:25 GMT
- Title: Self-supervised contrastive learning performs non-linear system identification
- Authors: Rodrigo González Laiz, Tobias Schmidt, Steffen Schneider
- Abstract summary: We show that self-supervised learning can perform system identification in latent space.
We propose DynCL, a framework to uncover linear, switching linear and non-linear dynamics under a non-linear observation model.
- Score: 2.393499494583001
- License:
- Abstract: Self-supervised learning (SSL) approaches have brought tremendous success across many tasks and domains. It has been argued that these successes can be attributed to a link between SSL and identifiable representation learning: Temporal structure and auxiliary variables ensure that latent representations are related to the true underlying generative factors of the data. Here, we deepen this connection and show that SSL can perform system identification in latent space. We propose DynCL, a framework to uncover linear, switching linear and non-linear dynamics under a non-linear observation model, give theoretical guarantees and validate them empirically.
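The core idea — a contrastive loss over consecutive time steps, with a latent dynamics model predicting the next latent state under a non-linear observation model — can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's implementation: `encode`, `info_nce`, and the data generator are hypothetical names chosen for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # hypothetical non-linear encoder: a single tanh layer
    return np.tanh(x @ W)

def info_nce(z_t, z_next, A, temperature=0.1):
    """InfoNCE-style contrastive loss: the dynamics prediction A @ z_t
    should be closer to the true next latent than to other time steps."""
    pred = z_t @ A.T                        # latent dynamics prediction
    logits = pred @ z_next.T / temperature  # similarity to all candidates
    labels = np.arange(len(z_t))            # positive pair = same time index
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

# toy data: linear latent dynamics seen through a non-linear observation model
d_lat, d_obs, T = 2, 5, 64
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
z = np.zeros((T, d_lat))
z[0] = rng.normal(size=d_lat)
for t in range(1, T):
    z[t] = z[t - 1] @ A_true.T + 0.05 * rng.normal(size=d_lat)
mix = rng.normal(size=(d_lat, d_obs))
x = np.tanh(z @ mix)                        # non-linear observations

W = rng.normal(size=(d_obs, d_lat)) * 0.1   # untrained encoder weights
A = np.eye(d_lat)                           # latent dynamics to be learned
loss = info_nce(encode(x[:-1], W), encode(x[1:], W), A)
```

In the actual framework, both the encoder and the dynamics model would be trained jointly by minimizing this loss; the sketch only shows how the objective couples temporal structure to the latent dynamics.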
Related papers
- A Survey of the Self Supervised Learning Mechanisms for Vision Transformers [5.152455218955949]
The application of self-supervised learning (SSL) in vision tasks has gained significant attention.
We develop a comprehensive taxonomy that systematically classifies SSL techniques.
We discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field.
arXiv Detail & Related papers (2024-08-30T07:38:28Z)
- Continual Learning of Nonlinear Independent Representations [17.65617189829692]
We show that model identifiability progresses from a subspace level to a component-wise level as the number of distributions increases.
Our method achieves performance comparable to nonlinear ICA methods trained jointly on multiple offline distributions.
arXiv Detail & Related papers (2024-08-11T14:33:37Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities [50.231837687221685]
Self-supervised learning (SSL) has transformed machine learning and its many real world applications.
Unsupervised anomaly detection (AD) has also capitalized on SSL, by self-generating pseudo-anomalies.
arXiv Detail & Related papers (2023-08-28T07:55:01Z)
- Reverse Engineering Self-Supervised Learning [17.720366509919167]
Self-supervised learning (SSL) is a powerful tool in machine learning.
This paper presents an in-depth empirical analysis of SSL-trained representations.
arXiv Detail & Related papers (2023-05-24T23:15:28Z)
- Spatiotemporal Self-supervised Learning for Point Clouds in the Wild [65.56679416475943]
We introduce an SSL strategy that leverages positive pairs in both the spatial and temporal domain.
We demonstrate the benefits of our approach via extensive experiments performed by self-supervised training on two large-scale LiDAR datasets.
arXiv Detail & Related papers (2023-03-28T18:06:22Z)
- Self-Supervised Visual Representation Learning via Residual Momentum [15.515169550346517]
Self-supervised learning (SSL) approaches have shown promising capabilities in learning the representation from unlabeled data.
Momentum-based SSL frameworks suffer from a large gap in representation between the online encoder (student) and the momentum encoder (teacher).
This paper is the first to investigate and identify this invisible gap as a bottleneck that has been overlooked in the existing SSL frameworks.
We propose "residual momentum" to directly reduce this gap to encourage the student to learn the representation as close to that of the teacher as possible.
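For context, the momentum-encoder setup this line of work builds on can be sketched as follows. This is a sketch under assumptions: `ema_update` and `representation_gap` are hypothetical helper names, and the paper's actual "residual momentum" mechanism may differ in detail.

```python
import numpy as np

def ema_update(teacher_w, student_w, m=0.99):
    # standard momentum (EMA) teacher update used in BYOL/MoCo-style SSL:
    # the teacher weights trail the student weights with momentum m
    return m * teacher_w + (1 - m) * student_w

def representation_gap(z_student, z_teacher):
    # the student-teacher representation gap the paper identifies as a
    # bottleneck; here measured as the mean L2 distance between embeddings
    return np.linalg.norm(z_student - z_teacher, axis=1).mean()

# toy run: the EMA teacher slowly tracks fixed student weights
w_student = np.ones(4)
w_teacher = np.zeros(4)
for _ in range(200):
    w_teacher = ema_update(w_teacher, w_student, m=0.9)
```

"Residual momentum" then adds an explicit term pushing the student's representation toward the teacher's, rather than relying on the slow EMA tracking alone.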
arXiv Detail & Related papers (2022-11-17T19:54:02Z)
- The Geometry of Self-supervised Learning Models and its Impact on Transfer Learning [62.601681746034956]
Self-supervised learning (SSL) has emerged as a desirable paradigm in computer vision.
We propose a data-driven geometric strategy to analyze different SSL models using local neighborhoods in the feature space induced by each.
arXiv Detail & Related papers (2022-09-18T18:15:38Z)
- Understanding the Role of Nonlinearity in Training Dynamics of Contrastive Learning [37.27098255569438]
We study the role of nonlinearity in the training dynamics of contrastive learning (CL) on one and two-layer nonlinear networks.
We show that the presence of nonlinearity leads to many local optima even in the 1-layer setting.
For the 2-layer setting, we also discover "global modulation": local patterns that are discriminative from the perspective of global-level patterns are prioritized during learning.
arXiv Detail & Related papers (2022-06-02T23:52:35Z)
- Self-Supervised Models are Continual Learners [79.70541692930108]
We show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for Continual Learning.
We devise a framework for Continual self-supervised visual representation Learning that significantly improves the quality of the learned representations.
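One way this conversion can be sketched: the same similarity-based SSL loss is applied once between two augmented views (the usual self-supervised term) and once against features from a frozen copy of the encoder trained on past tasks (the distillation term). This is an illustration under assumptions — a cosine-similarity loss is assumed, and `continual_ssl_objective` is a hypothetical name, not the paper's exact method.

```python
import numpy as np

def cosine_ssl_loss(z1, z2):
    # generic SSL alignment loss: negative mean cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -(z1 * z2).sum(axis=1).mean()

def continual_ssl_objective(z_view1, z_view2, z_frozen, lam=1.0):
    """Reuse the SSL loss twice: between two augmented views of the current
    encoder, and between the current encoder and a frozen past encoder."""
    ssl_term = cosine_ssl_loss(z_view1, z_view2)        # self-supervision
    distill_term = cosine_ssl_loss(z_view1, z_frozen)   # distillation
    return ssl_term + lam * distill_term
```

Because the distillation term is literally the SSL loss, no separate distillation machinery is needed; the frozen encoder simply plays the role of a second "view".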
arXiv Detail & Related papers (2021-12-08T10:39:13Z)
- On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
arXiv Detail & Related papers (2021-01-18T10:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.