Explaining, Analyzing, and Probing Representations of Self-Supervised
Learning Models for Sensor-based Human Activity Recognition
- URL: http://arxiv.org/abs/2304.07304v2
- Date: Mon, 31 Jul 2023 15:23:04 GMT
- Title: Explaining, Analyzing, and Probing Representations of Self-Supervised
Learning Models for Sensor-based Human Activity Recognition
- Authors: Bulat Khaertdinov and Stylianos Asteriadis
- Abstract summary: Self-supervised learning (SSL) frameworks have been extensively applied to sensor-based Human Activity Recognition (HAR).
In this paper, we aim to analyze deep representations of two recent SSL frameworks, namely SimCLR and VICReg.
- Score: 2.2082422928825136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, self-supervised learning (SSL) frameworks have been
extensively applied to sensor-based Human Activity Recognition (HAR) in order
to learn deep representations without data annotations. While SSL frameworks
reach performance almost comparable to supervised models, studies on
interpreting representations learnt by SSL models are limited. Nevertheless,
modern explainability methods could help to unravel the differences between SSL
and supervised representations: how they are being learnt, what properties of
input data they preserve, and when SSL can be chosen over supervised training.
In this paper, we aim to analyze deep representations of two recent SSL
frameworks, namely SimCLR and VICReg. Specifically, the emphasis is placed on (i)
comparing the robustness of supervised and SSL models to corruptions in input
data; (ii) explaining predictions of deep learning models using saliency maps
and highlighting what input channels are mostly used for predicting various
activities; (iii) exploring properties encoded in SSL and supervised
representations using probing. Extensive experiments on two single-device
datasets (MobiAct and UCI-HAR) have shown that self-supervised learning
representations are significantly more robust to noise in unseen data compared
to supervised models. In contrast, features learnt by the supervised approaches
are more homogeneous across subjects and better encode the nature of
activities.
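
To make analysis (i) concrete, the robustness comparison can be sketched as follows: corrupt held-out sensor windows with zero-mean Gaussian noise of increasing magnitude and track how classification accuracy degrades. This is a minimal illustrative sketch, not the authors' code; model, windows, and labels are assumed PyTorch objects (a frozen SSL encoder with a linear head, or an end-to-end supervised network).

```python
import torch

def accuracy_under_noise(model, windows, labels, sigmas=(0.0, 0.05, 0.1, 0.2)):
    """Assumed shapes: windows (N, T, C) float tensor, labels (N,) int tensor."""
    model.eval()
    results = {}
    with torch.no_grad():
        for sigma in sigmas:
            # Zero-mean Gaussian corruption applied to every sensor channel.
            noisy = windows + sigma * torch.randn_like(windows)
            preds = model(noisy).argmax(dim=1)
            results[sigma] = (preds == labels).float().mean().item()
    return results

# Hypothetical usage: compare how quickly accuracy drops for SSL vs. supervised models.
# acc_ssl = accuracy_under_noise(ssl_encoder_with_linear_head, test_x, test_y)
# acc_sup = accuracy_under_noise(supervised_model, test_x, test_y)
```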
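
For analysis (ii), channel-level saliency can be approximated by taking the gradient of the top-class score with respect to the input window and averaging its magnitude over time for each channel. The sketch below is a generic gradient-saliency illustration under assumed tensor shapes, not necessarily the exact attribution method used in the paper.

```python
import torch

def channel_saliency(model, window):
    """Assumed shape: window (T, C); returns one saliency score per input channel, shape (C,)."""
    model.eval()
    x = window.unsqueeze(0).clone().requires_grad_(True)  # (1, T, C)
    logits = model(x)
    logits[0, logits[0].argmax()].backward()              # gradient of the predicted-class score w.r.t. the input
    return x.grad.abs().mean(dim=1).squeeze(0)            # average gradient magnitude over time, per channel

# Hypothetical usage: large scores indicate channels (e.g. accelerometer axes)
# the model relies on most when predicting an activity.
# scores = channel_saliency(ssl_encoder_with_linear_head, test_x[0])
```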
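
For analysis (iii), probing amounts to freezing the encoder, extracting representations, and training a small classifier to test which properties (activity class, subject identity, etc.) are recoverable from them. A minimal scikit-learn sketch with hypothetical array names:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def linear_probe(train_repr, train_prop, test_repr, test_prop):
    """Representations are (N, D) arrays from a frozen encoder; the prop arrays hold the probed property."""
    clf = LogisticRegression(max_iter=1000).fit(train_repr, train_prop)
    return accuracy_score(test_prop, clf.predict(test_repr))

# Higher probe accuracy for a property means the frozen representation encodes it more explicitly,
# e.g. probing subject identity vs. activity class on SSL and supervised features.
```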
Related papers
- A Survey of the Self Supervised Learning Mechanisms for Vision Transformers [5.152455218955949]
The application of self-supervised learning (SSL) in vision tasks has gained significant attention.
We develop a comprehensive taxonomy that systematically classifies SSL techniques.
We discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field.
arXiv Detail & Related papers (2024-08-30T07:38:28Z) - A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Reverse Engineering Self-Supervised Learning [17.720366509919167]
Self-supervised learning (SSL) is a powerful tool in machine learning.
This paper presents an in-depth empirical analysis of SSL-trained representations.
arXiv Detail & Related papers (2023-05-24T23:15:28Z) - Self-Supervised PPG Representation Learning Shows High Inter-Subject
Variability [3.8036939971290007]
We propose a Self-Supervised Learning (SSL) method with a pretext task of signal reconstruction to learn an informative generalized PPG representation.
Results show that in a very limited labeled-data setting (10 samples per class or fewer), using SSL is beneficial.
SSL may pave the way for the broader use of machine learning models on PPG data in label-scarce regimes.
arXiv Detail & Related papers (2022-12-07T19:02:45Z) - On Higher Adversarial Susceptibility of Contrastive Self-Supervised
Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown whether the representations induced by the two learning paradigms are similar in nature.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as the key contributor to CSL's higher adversarial susceptibility.
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z) - Using Representation Expressiveness and Learnability to Evaluate
Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means (a minimal sketch of this procedure appears after this list).
We find that CL correlates better with in-distribution model performance than other recent evaluation schemes.
arXiv Detail & Related papers (2022-06-02T19:05:13Z) - High Fidelity Visualization of What Your Self-Supervised Representation
Knows About [22.982471878833362]
In this work, we showcase the use of a conditional diffusion-based generative model (RCDM) to visualize representations learned with self-supervised models.
We demonstrate how this model's generation quality is on par with state-of-the-art generative models while being faithful to the representation used as conditioning.
arXiv Detail & Related papers (2021-12-16T19:23:33Z) - Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z) - A Study of the Generalizability of Self-Supervised Representations [0.0]
Recent advancements in self-supervised learning (SSL) made it possible to learn generalizable visual representations from unlabeled data.
We study the generalizability of SSL- and SL-based models via their prediction accuracy as well as their prediction confidence.
We show that the SSL representations are more generalizable as compared to the SL representations.
arXiv Detail & Related papers (2021-09-19T15:57:37Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
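
As referenced above, the Cluster Learnability (CL) measure can be read as the following procedure: cluster the representations with K-means, then train a KNN to predict the resulting cluster assignments and evaluate it on held-out points. The sketch below is an illustrative reconstruction with assumed hyperparameters, not the authors' implementation.

```python
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def cluster_learnability(representations, n_clusters=10, n_neighbors=5, seed=0):
    """representations: (N, D) array of learned features."""
    # Pseudo-labels obtained by clustering the representations with K-means.
    pseudo = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(representations)
    # Learnability: how well a KNN recovers those pseudo-labels on held-out points.
    z_tr, z_te, y_tr, y_te = train_test_split(representations, pseudo, test_size=0.5, random_state=seed)
    return KNeighborsClassifier(n_neighbors=n_neighbors).fit(z_tr, y_tr).score(z_te, y_te)

# cl = cluster_learnability(z)  # higher means the representation's cluster structure is easier to learn
```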
This list is automatically generated from the titles and abstracts of the papers on this site.