Contrastive Left-Right Wearable Sensors (IMUs) Consistency Matching for HAR
- URL: http://arxiv.org/abs/2311.12674v1
- Date: Tue, 21 Nov 2023 15:31:16 GMT
- Title: Contrastive Left-Right Wearable Sensors (IMUs) Consistency Matching for HAR
- Authors: Dominique Nshimyimana, Vitor Fortes Rey and Paul Lukowicz
- Abstract summary: We show how real data can be used for self-supervised learning without any transformations.
Our approach involves contrastive matching of two different sensors.
We test our approach on the Opportunity and MM-Fit datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning algorithms are improving rapidly, but annotating training
data remains a bottleneck for many applications. In this paper, we show how
real data can be used for self-supervised learning without any transformations
by taking advantage of the symmetry present in the activities. Our approach
involves contrastive matching of two different sensors (left and right wrist or
leg-worn IMUs) to make representations of co-occurring sensor data more similar
and those of non-co-occurring sensor data more different. We test our approach
on the Opportunity and MM-Fit datasets. On MM-Fit we show significant
improvement over both the supervised baseline and the self-supervised method
SimCLR, while on Opportunity there is significant improvement over the
supervised baseline and a slight improvement over SimCLR. Moreover, our method
improves supervised baselines even when using only a small amount of the data
for training. Future work should explore under which conditions our method is
beneficial for human activity recognition systems and other related
applications.
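Below is a minimal PyTorch sketch of the core idea: treat co-occurring left/right IMU windows as positives in an InfoNCE-style objective. The encoder architecture, shared weights, window shape, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of contrastive left-right matching (not the authors' exact code).
# Co-occurring left/right IMU windows are positives; all other pairs in the
# batch are negatives, as in an InfoNCE / NT-Xent objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IMUEncoder(nn.Module):
    """Illustrative 1D-CNN encoder for (batch, channels, time) IMU windows."""
    def __init__(self, in_channels: int = 6, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def left_right_infonce(z_left, z_right, temperature: float = 0.1):
    """Embeddings of co-occurring windows are pulled together; others pushed apart."""
    z_left = F.normalize(z_left, dim=1)
    z_right = F.normalize(z_right, dim=1)
    logits = z_left @ z_right.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z_left.size(0))        # positives on the diagonal
    # Symmetrized cross-entropy over rows (left->right) and columns (right->left).
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Usage: one pretraining step on a batch of synchronized windows.
encoder = IMUEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
left = torch.randn(32, 6, 100)    # 32 windows, 6 IMU channels, 100 samples
right = torch.randn(32, 6, 100)   # the co-occurring right-side windows
loss = left_right_infonce(encoder(left), encoder(right))
opt.zero_grad(); loss.backward(); opt.step()
```

A single shared encoder is used here for simplicity; whether the paper shares weights between the left and right branches is an assumption of this sketch.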
Related papers
- MuJo: Multimodal Joint Feature Space Learning for Human Activity Recognition [2.7532797256542403]
Human Activity Recognition (HAR) is a longstanding problem in AI with applications in a broad range of areas, including healthcare, sports and fitness, security, and more.
We introduce our comprehensive Fitness Multimodal Activity dataset (FiMAD) to enhance HAR performance across various modalities.
We show that classifiers pre-trained on FiMAD can increase the performance on real HAR datasets such as MM-Fit, MyoGym, MotionSense, and MHEALTH.
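As a rough illustration of the pretrain-then-finetune recipe this summary describes, the sketch below attaches a fresh classification head to a pre-trained encoder and fine-tunes it on a target HAR dataset; the encoder, embedding size, and loaders are placeholders, not the MuJo/FiMAD code.

```python
# Hedged sketch of the pretrain-then-finetune recipe the summary describes;
# dataset loaders and the encoder are placeholders, not the MuJo/FiMAD code.
import torch
import torch.nn as nn

def finetune(encoder: nn.Module, num_classes: int, loader, epochs: int = 5):
    """Attach a fresh classification head and fine-tune on a target HAR dataset."""
    head = nn.Linear(128, num_classes)          # 128 = assumed embedding size
    model = nn.Sequential(encoder, head)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:                     # x: sensor windows, y: labels
            loss = ce(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return model

# encoder = pretrain_on_fimad(...)  # hypothetical: multimodal pretraining stage
# model = finetune(encoder, num_classes=11, loader=mmfit_train_loader)
```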
arXiv Detail & Related papers (2024-06-06T08:42:36Z)
- Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Human Activity Recognition (HAR) has not fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
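The following is a hedged sketch of such a pose-to-sensor setup: a generator maps pose sequences to synthetic IMU signals while a classifier trains on both real and generated windows. Shapes, architectures, and the particular loss terms are assumptions, not the paper's model.

```python
# Rough sketch of a pose-to-sensor generator trained jointly with a HAR
# classifier, under assumed shapes; not the paper's actual architecture.
import torch
import torch.nn as nn

class PoseToSensor(nn.Module):
    """Maps a (batch, joints*3, time) pose sequence to (batch, imu_ch, time) signals."""
    def __init__(self, pose_ch: int = 51, imu_ch: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(pose_ch, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, imu_ch, kernel_size=5, padding=2),
        )
    def forward(self, pose):
        return self.net(pose)

generator = PoseToSensor()
classifier = nn.Sequential(               # toy HAR classifier on IMU windows
    nn.Conv1d(6, 32, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 10),
)
opt = torch.optim.Adam(list(generator.parameters()) + list(classifier.parameters()))

pose = torch.randn(8, 51, 100)            # 17 joints x 3 coords, 100 frames
real_imu = torch.randn(8, 6, 100)         # synchronized real sensor window
labels = torch.randint(0, 10, (8,))

fake_imu = generator(pose)
# Simultaneous training: reconstruction loss on generated signals plus
# classification loss on both real and generated windows.
loss = (nn.functional.mse_loss(fake_imu, real_imu)
        + nn.functional.cross_entropy(classifier(real_imu), labels)
        + nn.functional.cross_entropy(classifier(fake_imu), labels))
opt.zero_grad(); loss.backward(); opt.step()
```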
arXiv Detail & Related papers (2024-04-25T10:13:18Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures, such as symmetries, are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data, but choosing and tuning those transformations typically requires expert knowledge or trial and error.
This work tackles this issue by automatically adapting the data augmentation while solving the learning task.
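A minimal sketch of the constrained-learning flavor of this idea follows, assuming a toy model, a single transformation, and a dual-ascent update on an invariance penalty; it illustrates the mechanism rather than the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): enforce invariance to a
# transformation via a penalty whose multiplier is adapted during training,
# in the spirit of a constrained-learning formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
dual = torch.tensor(0.0)                 # Lagrange multiplier for the constraint
eps, dual_lr = 0.05, 0.01                # tolerated invariance gap, dual step size

def augment(x):
    # One transformation from the assumed set: a random horizontal roll.
    return torch.roll(x, shifts=int(torch.randint(-3, 4, (1,))), dims=-1)

for _ in range(100):                     # toy loop over random data
    x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
    task_loss = F.cross_entropy(model(x), y)
    # Invariance gap: how much the predictions move under augmentation.
    gap = F.kl_div(F.log_softmax(model(augment(x)), dim=1),
                   F.softmax(model(x), dim=1), reduction="batchmean")
    loss = task_loss + dual * gap
    opt.zero_grad(); loss.backward(); opt.step()
    # Dual ascent: tighten the penalty only when the constraint is violated.
    dual = torch.clamp(dual + dual_lr * (gap.detach() - eps), min=0.0)
```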
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
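A loose sketch of the contrastive-curiosity mechanism is below: the per-sample contrastive loss between two augmented views of an observation serves as a curiosity signal for weighting samples. The replay prioritization shown in the comments, and all names in it, are hypothetical, and the details differ from CCLF.

```python
# Loose sketch of the contrastive-curiosity idea (details differ from CCLF):
# the harder two augmented views of an observation are to match, the more
# "curious" the agent is about it, and the more weight the sample receives.
import torch
import torch.nn.functional as F

def contrastive_curiosity(encoder, obs, augment, temperature: float = 0.1):
    """Per-sample curiosity = contrastive loss of matching two views."""
    z1 = F.normalize(encoder(augment(obs)), dim=1)
    z2 = F.normalize(encoder(augment(obs)), dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(obs.size(0))
    # Per-sample (unreduced) InfoNCE: high loss = poorly understood observation.
    return F.cross_entropy(logits, targets, reduction="none")

# Hypothetical usage inside an RL update (buffer, encoder, augment assumed):
# curiosity = contrastive_curiosity(encoder, batch.obs, random_crop)
# weights = curiosity / curiosity.sum()        # prioritize curious samples
# rl_loss = (weights * per_sample_td_error(batch) ** 2).sum()
```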
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- Transfer-Learning Across Datasets with Different Input Dimensions: An Algorithm and Analysis for the Linear Regression Case [7.674023644408741]
We propose a transfer learning algorithm that combines new and historical data with different input dimensions.
Our approach achieves state-of-the-art performance on 9 real-life datasets.
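To make the problem setting concrete, here is a NumPy sketch where historical data has fewer input features than new data and is zero-padded before fitting a single ridge regression on the pooled data; this naive baseline is for illustration only, and the paper's algorithm is more principled.

```python
# Minimal NumPy illustration of the setting: historical data has fewer input
# features than new data. Zero-padding the old features is a naive baseline;
# the paper's actual algorithm is more principled than this sketch.
import numpy as np

rng = np.random.default_rng(0)
d_old, d_new, n_old, n_new = 3, 5, 200, 20
w_true = rng.normal(size=d_new)

X_old = rng.normal(size=(n_old, d_old))           # historical inputs (3 features)
y_old = X_old @ w_true[:d_old] + 0.1 * rng.normal(size=n_old)
X_new = rng.normal(size=(n_new, d_new))           # new inputs (5 features)
y_new = X_new @ w_true + 0.1 * rng.normal(size=n_new)

# Pad historical inputs with zeros for the features they never observed,
# then fit one ridge regression on the pooled dataset.
X_old_pad = np.hstack([X_old, np.zeros((n_old, d_new - d_old))])
X = np.vstack([X_old_pad, X_new])
y = np.concatenate([y_old, y_new])
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(d_new), X.T @ y)
print("estimated weights:", w)
```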
arXiv Detail & Related papers (2022-02-10T14:57:15Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
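For context, gradient reversal is typically implemented as an autograd function that is the identity in the forward pass and negates gradients in the backward pass, as in the hedged PyTorch sketch below; the encoder and heads in the usage comment are assumed components, not this paper's code.

```python
# Sketch of the gradient reversal trick (GRAD) mentioned above: the forward
# pass is the identity, but gradients are negated, so a downstream head
# learns to predict the protected attribute while the encoder unlearns it.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam: float = 1.0):
    return GradReverse.apply(x, lam)

# Hypothetical usage: features -> task head and adversarial attribute head.
# feats = encoder(x)
# loss = task_loss(task_head(feats), y) \
#      + adv_loss(adv_head(grad_reverse(feats)), protected_attr)
```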
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
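A minimal sketch of the adversarial-IfO ingredient follows: a discriminator scores state transitions (s, s') instead of state-action pairs, and its output defines the imitation reward. The dimensions and networks are toy assumptions, and DEALIO's model-based, data-efficient machinery is omitted.

```python
# Hedged sketch of the adversarial IfO ingredient: a discriminator scores
# state transitions (s, s') rather than state-action pairs, since the
# demonstrator's actions are unobserved. DEALIO itself additionally uses a
# learned dynamics model for data efficiency, which this sketch omits.
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim = 8
disc = nn.Sequential(                       # D(s, s') -> logit "expert-like?"
    nn.Linear(2 * state_dim, 64), nn.ReLU(), nn.Linear(64, 1),
)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

expert_s, expert_s2 = torch.randn(64, state_dim), torch.randn(64, state_dim)
agent_s, agent_s2 = torch.randn(64, state_dim), torch.randn(64, state_dim)

# Discriminator update: expert transitions -> 1, agent transitions -> 0.
logits_e = disc(torch.cat([expert_s, expert_s2], dim=1))
logits_a = disc(torch.cat([agent_s, agent_s2], dim=1))
d_loss = (F.binary_cross_entropy_with_logits(logits_e, torch.ones_like(logits_e))
          + F.binary_cross_entropy_with_logits(logits_a, torch.zeros_like(logits_a)))
opt.zero_grad(); d_loss.backward(); opt.step()

# The agent's RL reward: how expert-like its transitions look, -log(1 - D).
with torch.no_grad():
    reward = -F.logsigmoid(-disc(torch.cat([agent_s, agent_s2], dim=1)))
```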
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework that simultaneously transfers multi-modality knowledge, i.e., both kinematic and visual data, from the simulator to the real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy (ACC) and 20.16% in F1 score, without using any annotations on the real robot.
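As a generic stand-in for such feature alignment, the sketch below penalizes the RBF-kernel maximum mean discrepancy (MMD) between simulator and real feature batches; the paper's actual framework additionally exploits temporal cues and multi-modal correlations, and all names in the usage comment are assumptions.

```python
# Generic sketch of unsupervised feature alignment between simulator and real
# domains using an RBF-kernel MMD penalty; the paper's framework also exploits
# temporal cues and kinematic-visual correlations, which are omitted here.
import torch

def rbf_mmd(x, y, sigma: float = 1.0):
    """Maximum mean discrepancy between two feature batches."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Hypothetical usage during training (encoder and losses assumed):
# feats_sim = encoder(sim_batch)       # labeled simulator data
# feats_real = encoder(real_batch)     # unlabeled real-robot data
# loss = task_loss(classifier(feats_sim), sim_labels) \
#      + 0.1 * rbf_mmd(feats_sim, feats_real)
```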
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the resulting enlarged dataset can significantly improve the ability of the learned FER model.
To keep training on the enlarged dataset tractable, we apply a dataset distillation strategy that compresses it into several informative class-wise images.
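One common dataset-distillation recipe, sketched below under toy assumptions (gradient matching with a linear model and random stand-in data), optimizes synthetic class-wise images so that gradients on them mimic gradients on real batches; the paper's exact strategy may differ.

```python
# Sketch of one dataset-distillation recipe (gradient matching); the FER
# paper's exact strategy may differ. Synthetic class-wise images are
# optimized so a network's gradients on them mimic gradients on real data.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 7))   # 7 FER classes
syn_x = torch.randn(7 * 10, 1, 32, 32, requires_grad=True)   # 10 images/class
syn_y = torch.arange(7).repeat_interleave(10)
opt = torch.optim.Adam([syn_x], lr=0.1)

for _ in range(100):
    real_x = torch.randn(128, 1, 32, 32)          # stand-in for real batches
    real_y = torch.randint(0, 7, (128,))
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y),
                                 model.parameters())
    g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    # Match the two gradient sets; only the synthetic images are updated.
    loss = sum(F.mse_loss(a, b.detach()) for a, b in zip(g_syn, g_real))
    opt.zero_grad(); loss.backward(); opt.step()
```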
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.