SelfAct: Personalized Activity Recognition based on Self-Supervised and
Active Learning
- URL: http://arxiv.org/abs/2304.09530v1
- Date: Wed, 19 Apr 2023 09:39:11 GMT
- Authors: Luca Arrotta, Gabriele Civitarese, Samuele Valente, Claudio Bettini
- Abstract summary: SelfAct is a novel framework for Human Activity Recognition (HAR) on wearable and mobile devices.
It combines self-supervised and active learning to mitigate problems such as intra- and inter-variability of activity execution.
Our experiments on two publicly available HAR datasets demonstrate that SelfAct achieves results close to or even better than the ones of fully supervised approaches.
- Score: 0.688204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Supervised Deep Learning (DL) models are currently the leading approach for
sensor-based Human Activity Recognition (HAR) on wearable and mobile devices.
However, training them requires large amounts of labeled data whose collection
is often time-consuming, expensive, and error-prone. At the same time, due to
the intra- and inter-variability of activity execution, activity models should
be personalized for each user. In this work, we propose SelfAct: a novel
framework for HAR combining self-supervised and active learning to mitigate
these problems. SelfAct leverages a large pool of unlabeled data collected from
many users to pre-train through self-supervision a DL model, with the goal of
learning a meaningful and efficient latent representation of sensor data. The
resulting pre-trained model can be used locally by new users, who fine-tune it
through a novel unsupervised active learning strategy. Our
experiments on two publicly available HAR datasets demonstrate that SelfAct
achieves results that are close to or even better than the ones of fully
supervised approaches with a small number of active learning queries.
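The two-stage recipe in the abstract (self-supervised pre-training on pooled unlabeled data, then a local personalization step that asks the user for only a handful of labels) can be sketched roughly as follows. The linear "encoder" and the diversity-based query selection are illustrative stand-ins, not the paper's actual self-supervised objective or active learning criterion:

```python
import numpy as np

def pretrain_encoder(unlabeled, dim=8, seed=0):
    # Stand-in for self-supervised pre-training: a fixed random projection
    # replaces the DL encoder that SelfAct would train on the large pool of
    # unlabeled data from many users.
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((unlabeled.shape[1], dim))
    return lambda x: x @ proj

def select_queries(embeddings, k):
    # Unsupervised active-learning query selection (illustrative): greedily
    # pick the k most mutually distant points in latent space, so the few
    # labels requested from the new user cover diverse activity executions.
    chosen = [int(np.argmax(np.linalg.norm(embeddings - embeddings.mean(0), axis=1)))]
    while len(chosen) < k:
        dists = np.min(
            np.linalg.norm(embeddings[:, None] - embeddings[chosen][None], axis=2),
            axis=1,
        )
        chosen.append(int(np.argmax(dists)))
    return chosen

rng = np.random.default_rng(1)
windows = rng.standard_normal((200, 32))   # 200 unlabeled sensor windows
encoder = pretrain_encoder(windows)
z = encoder(windows)
queries = select_queries(z, k=5)           # window indices to ask the user to label
```

The labels gathered for `queries` would then drive a supervised fine-tuning pass of the encoder plus a small classification head on the user's device.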
Related papers
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP)
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Reducing Label Effort: Self-Supervised meets Active Learning [32.4747118398236]
Recent developments in self-training have achieved very impressive results rivaling supervised learning on some datasets.
Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort.
The performance gap between active learning initialized with self-training and active learning trained from scratch diminishes as we approach the point where almost half of the dataset is labeled.
arXiv Detail & Related papers (2021-08-25T20:04:44Z)
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn the effective salient object detection model based on the manual annotation on a few training images only.
We name this task as the few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- SelfHAR: Improving Human Activity Recognition through Self-training with Unlabeled Data [9.270269467155547]
SelfHAR is a semi-supervised model that learns to leverage unlabeled datasets to complement small labeled datasets.
Our approach uses teacher-student self-training to distill knowledge from both unlabeled and labeled datasets.
SelfHAR is data-efficient, reaching similar performance using up to 10 times less labeled data compared to supervised approaches.
arXiv Detail & Related papers (2021-02-11T15:40:35Z)
- Diverse Complexity Measures for Dataset Curation in Self-driving [80.55417232642124]
We propose a new data selection method that exploits a diverse set of criteria that quantify the interestingness of traffic scenes.
Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
arXiv Detail & Related papers (2021-01-16T23:45:02Z)
- Contrastive Predictive Coding for Human Activity Recognition [5.766384728949437]
We introduce the Contrastive Predictive Coding framework to human activity recognition, which captures the long-term temporal structure of sensor data streams.
CPC-based pre-training is self-supervised, and the resulting learned representations can be integrated into standard activity recognition chains.
It leads to significantly improved recognition performance when only small amounts of labeled training data are available.
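CPC's self-supervised objective is typically an InfoNCE loss: a context representation summarizing past sensor windows must identify the true future window among negatives. A minimal numpy sketch, where the bilinear scoring and the shapes are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def info_nce(context, candidates, pos_idx, W):
    # Score each candidate future window against the context with a bilinear
    # map, then penalize the model unless the softmax over scores puts high
    # probability on the true future (pos_idx) among the negatives.
    scores = candidates @ (W @ context)            # one score per candidate
    log_probs = scores - np.log(np.sum(np.exp(scores)))
    return -log_probs[pos_idx]                     # InfoNCE loss (>= 0)

rng = np.random.default_rng(0)
c = rng.standard_normal(4)                         # context from past windows
cands = rng.standard_normal((8, 4))                # 1 true future + 7 negatives
W = rng.standard_normal((4, 4))                    # learnable bilinear weights
loss = info_nce(c, cands, pos_idx=0, W=W)
```

Minimizing this loss over many (context, future) pairs is what shapes the representation; no activity labels are needed until a small classifier is trained on top.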
arXiv Detail & Related papers (2020-12-09T21:44:36Z)
- Federated Learning with Heterogeneous Labels and Models for Mobile Activity Monitoring [0.7106986689736827]
On-device Federated Learning proves to be an effective approach for distributed and collaborative machine learning.
We propose a framework for federated label-based aggregation, which leverages overlapping information gain across activities.
Empirical evaluation with the Heterogeneity Human Activity Recognition (HHAR) dataset on Raspberry Pi 2 indicates an average deterministic accuracy increase of at least 11.01%.
arXiv Detail & Related papers (2020-12-04T11:44:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.