Don't freeze: Finetune encoders for better Self-Supervised HAR
- URL: http://arxiv.org/abs/2307.01168v1
- Date: Mon, 3 Jul 2023 17:23:34 GMT
- Title: Don't freeze: Finetune encoders for better Self-Supervised HAR
- Authors: Vitor Fortes Rey, Dominique Nshimyimana, Paul Lukowicz
- Abstract summary: We show how a simple change - not freezing the representation - leads to substantial performance gains across pretext tasks.
The improvement was found in all four investigated datasets and across all four pretext tasks, and is inversely proportional to the amount of labelled data.
- Score: 5.008235182488304
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, self-supervised learning has been proposed in the field of
human activity recognition as a solution to the problem of labelled data
availability. The idea is that by using pretext tasks such as reconstruction or
contrastive predictive coding, useful representations can be learned that can
then be used for classification. Those approaches follow the pretrain, freeze
and fine-tune procedure. In this paper we show how a simple change - not
freezing the representation - leads to substantial performance gains across
pretext tasks. The improvement was found in all four investigated datasets and
across all four pretext tasks, and is inversely proportional to the amount of
labelled data. Moreover, the effect is present whether the pretext task is
carried out on the Capture24 dataset or directly on unlabelled data of the
target dataset.
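To make the contrast concrete, below is a minimal PyTorch sketch of the two training regimes the abstract compares: freezing the pretrained encoder (the usual linear-evaluation recipe) versus fine-tuning it jointly with the classifier. The 1D-convolutional encoder, the linear classifier, the window size, and the learning rates are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Minimal sketch: "pretrain, freeze, fine-tune" vs. fine-tuning the encoder too.
# Encoder/classifier shapes and learning rates are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # stands in for a pretrained SSL encoder
    nn.Conv1d(3, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)
classifier = nn.Linear(32, 6)       # e.g. six activity classes

FREEZE_ENCODER = False              # the paper's finding: leave this False

if FREEZE_ENCODER:
    for p in encoder.parameters():  # conventional frozen-representation setup
        p.requires_grad = False
    params = classifier.parameters()
else:
    # fine-tune encoder and classifier jointly (smaller encoder LR is an
    # assumption, not something stated in the abstract)
    params = [
        {"params": encoder.parameters(), "lr": 1e-4},
        {"params": classifier.parameters(), "lr": 1e-3},
    ]

optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 100)          # dummy batch: 8 windows, 3 IMU channels, 100 samples
y = torch.randint(0, 6, (8,))
loss = criterion(classifier(encoder(x)), y)
loss.backward()
optimizer.step()
```

The only difference between the two regimes is whether gradients reach the encoder; everything downstream of that choice (optimizer, loss, data) stays the same.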
Related papers
- Data Imputation by Pursuing Better Classification: A Supervised Kernel-Based Method [23.16359277296206]
We propose a new framework that effectively leverages supervision information to complete missing data in a manner conducive to classification.
Our algorithm significantly outperforms other methods when more than 60% of the features are missing.
arXiv Detail & Related papers (2024-05-13T14:44:02Z)
- An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just a few lines of code using only off-the-shelf operations.
arXiv Detail & Related papers (2022-09-28T02:11:34Z)
- Using Self-Supervised Pretext Tasks for Active Learning [7.214674613451605]
We propose a novel active learning approach that utilizes self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative.
The pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted and grouped into batches by their pretext task losses.
In each iteration, the main task model is used to sample the most uncertain data in a batch to be annotated (a rough sketch of this sampling scheme is given after this list).
arXiv Detail & Related papers (2022-01-19T07:58:06Z)
- Investigating a Baseline Of Self Supervised Learning Towards Reducing Labeling Costs For Image Classification [0.0]
The study uses the kaggle.com cats-vs-dogs dataset, MNIST, and Fashion-MNIST to investigate the self-supervised learning task.
Results show that the pretext process in self-supervised learning improves accuracy by around 15% on the downstream classification task.
arXiv Detail & Related papers (2021-08-17T06:43:05Z)
- Out-distribution aware Self-training in an Open World Setting [62.19882458285749]
We leverage unlabeled data in an open world setting to further improve prediction performance.
We introduce out-distribution aware self-training, which includes a careful sample selection strategy.
Our classifiers are by design out-distribution aware and can thus distinguish task-related inputs from unrelated ones.
arXiv Detail & Related papers (2020-12-21T12:25:04Z)
- Predicting What You Already Know Helps: Provable Self-Supervised Learning [60.27658820909876]
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data.
We show a mechanism exploiting the statistical connections between certain reconstruction-based pretext tasks that guarantees a good representation is learned.
We prove that the linear layer yields a small approximation error even for a complex ground-truth function class.
arXiv Detail & Related papers (2020-08-03T17:56:13Z)
- Don't Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights [92.16372657233394]
Self-supervised learning techniques can boost performance by learning useful representations from unlabelled data.
We show that by learning Bayesian instance weights for the unlabelled data, we can improve the downstream classification accuracy.
Our method, BetaDataWeighter, is evaluated using the popular self-supervised rotation prediction task on STL-10 and Visual Decathlon.
arXiv Detail & Related papers (2020-06-22T15:59:32Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the resulting dataset can significantly improve the ability of the learned FER model.
We further apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- How Useful is Self-Supervised Pretraining for Visual Tasks? [133.1984299177874]
We evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks.
Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows.
arXiv Detail & Related papers (2020-03-31T16:03:22Z)
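As a companion to the "Using Self-Supervised Pretext Tasks for Active Learning" entry above, here is a minimal sketch of that batch-based sampling idea: rank unlabelled samples by their pretext-task loss, split them into batches, and within each batch pick the samples the main-task model is least certain about. The function name, the entropy-based uncertainty, and the batch/selection sizes are illustrative assumptions, not the paper's implementation.

```python
# Sketch of pretext-loss-guided active learning: sort unlabelled data by
# pretext loss, group into batches, then select the most uncertain samples
# per batch for annotation. All names and choices here are assumptions.
import torch

def select_for_annotation(pretext_losses, main_logits, n_batches=10, k_per_batch=5):
    """pretext_losses: (N,) per-sample pretext-task loss on unlabelled data.
       main_logits:    (N, C) main-task predictions for the same samples."""
    order = torch.argsort(pretext_losses, descending=True)        # hardest pretext samples first
    batches = torch.chunk(order, n_batches)                        # group by pretext loss
    probs = torch.softmax(main_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)   # uncertainty of main model
    selected = []
    for batch in batches:
        top = batch[torch.argsort(entropy[batch], descending=True)[:k_per_batch]]
        selected.append(top)
    return torch.cat(selected)  # indices of samples to send for labelling

# Example with random stand-in values: 1000 unlabelled samples, 6 classes.
idx = select_for_annotation(torch.rand(1000), torch.randn(1000, 6))
```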
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.