TASKED: Transformer-based Adversarial learning for human activity
recognition using wearable sensors via Self-KnowledgE Distillation
- URL: http://arxiv.org/abs/2209.09092v1
- Date: Wed, 14 Sep 2022 11:08:48 GMT
- Title: TASKED: Transformer-based Adversarial learning for human activity
recognition using wearable sensors via Self-KnowledgE Distillation
- Authors: Sungho Suh, Vitor Fortes Rey and Paul Lukowicz
- Abstract summary: We propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED).
In the proposed method, we adopt the teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition.
- Score: 6.458496335718508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wearable sensor-based human activity recognition (HAR) has emerged as a
principal research area and is utilized in a variety of applications. Recently,
deep learning-based methods have achieved significant improvement in the HAR
field with the development of human-computer interaction applications. However,
standard convolutional neural networks are limited to operating within a local
neighborhood, and correlations between sensors at different body positions are
ignored. In addition, they still suffer significant performance degradation due
to large gaps between the training and test data distributions and behavioral
differences between
subjects. In this work, we propose a novel Transformer-based Adversarial
learning framework for human activity recognition using wearable sensors via
Self-KnowledgE Distillation (TASKED), which accounts for individual sensor
orientations and spatial and temporal features. The proposed method is capable
of learning cross-domain embedding feature representations from multiple
subjects' datasets using adversarial learning and the maximum mean discrepancy
(MMD) regularization to align the data distribution over multiple domains. In
the proposed method, we adopt the teacher-free self-knowledge distillation to
improve the stability of the training procedure and the performance of human
activity recognition. Experimental results show that TASKED not only
outperforms state-of-the-art methods on the four real-world public HAR datasets
(alone or combined) but also improves the subject generalization effectively.
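The MMD regularization mentioned in the abstract penalizes the distance between the feature distributions of different subject domains. A minimal sketch of the squared MMD with an RBF kernel is shown below; the function names and the fixed bandwidth are illustrative assumptions, not details taken from the paper:

```python
import math

def rbf_kernel(a, b, sigma=1.0):
    # Gaussian (RBF) kernel between two feature vectors.
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-d2 / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    # Biased estimate of squared maximum mean discrepancy between
    # two sets of embeddings xs and ys (lists of feature vectors).
    kxx = sum(rbf_kernel(x1, x2, sigma) for x1 in xs for x2 in xs) / (len(xs) ** 2)
    kyy = sum(rbf_kernel(y1, y2, sigma) for y1 in ys for y2 in ys) / (len(ys) ** 2)
    kxy = sum(rbf_kernel(x, y, sigma) for x in xs for y in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy
```

In a training loop, such a term would be computed on mini-batch embeddings drawn from two subject domains and added to the classification and adversarial losses so that the feature extractor is pushed toward domain-aligned representations.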
Related papers
- Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes [70.66864668709677]
We consider the problem of active learning for global sensitivity analysis of expensive black-box functions.
Since function evaluations are expensive, we use active learning to prioritize experimental resources where they yield the most value.
We propose novel active learning acquisition functions that directly target key quantities of derivative-based global sensitivity measures.
arXiv Detail & Related papers (2024-07-13T01:41:12Z)
- Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Human Activity Recognition (HAR) has not fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
arXiv Detail & Related papers (2024-04-25T10:13:18Z)
- Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition [0.0]
In light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition.
We investigate the performance of common deep learning and machine learning approaches as well as different training mechanisms.
We extract various feature representations from the sensor time-series data and measure their effectiveness for the human activity recognition task.
arXiv Detail & Related papers (2023-09-26T14:55:32Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Unsupervised Statistical Feature-Guided Diffusion Model for Sensor-based Human Activity Recognition [3.2319909486685354]
A key problem holding up progress in wearable sensor-based human activity recognition is the unavailability of diverse and labeled training data.
We propose an unsupervised statistical feature-guided diffusion model specifically optimized for wearable sensor-based human activity recognition.
By conditioning the diffusion model on statistical information such as mean, standard deviation, Z-score, and skewness, we generate diverse and representative synthetic sensor data.
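The conditioning statistics named above (mean, standard deviation, Z-score, skewness) can all be computed per sensor window. The following is a hypothetical sketch of that feature extraction step, not the paper's implementation:

```python
import math

def stat_features(window):
    # Compute the conditioning statistics for one 1-D sensor window.
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n  # population variance
    std = math.sqrt(var)
    # Z-scores standardize each sample; skewness is the mean cubed Z-score.
    z = [(x - mean) / std for x in window] if std > 0 else [0.0] * n
    skew = sum(zi ** 3 for zi in z) / n if std > 0 else 0.0
    return {"mean": mean, "std": std, "zscore": z, "skew": skew}
```

A diffusion model could then take these per-window summaries as its conditioning input, so that generated synthetic windows reproduce the statistical profile of real sensor data.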
arXiv Detail & Related papers (2023-05-30T15:12:59Z)
- Domain Adaptation for Inertial Measurement Unit-based Human Activity Recognition: A Survey [1.7205106391379026]
Machine learning-based wearable human activity recognition (WHAR) models enable the development of smart and connected community applications.
The widespread adoption of these WHAR models is impeded by their degraded performance in the presence of data distribution heterogeneities.
Traditional machine learning algorithms and transfer learning techniques have been proposed to address the underpinning challenges of handling such data heterogeneities.
Domain adaptation is one such transfer learning technique that has gained significant popularity in recent literature.
arXiv Detail & Related papers (2023-04-07T01:33:42Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations across modalities for gesture recognition.
Results show that our approach recovers the performance with great improvement gains, up to 12.91% in ACC and 20.16% in F1-score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Contrastive Predictive Coding for Human Activity Recognition [5.766384728949437]
We introduce the Contrastive Predictive Coding framework to human activity recognition, which captures the long-term temporal structure of sensor data streams.
CPC-based pre-training is self-supervised, and the resulting learned representations can be integrated into standard activity recognition pipelines.
It leads to significantly improved recognition performance when only small amounts of labeled training data are available.
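Contrastive Predictive Coding trains an encoder by asking a context vector to pick out the true future embedding among negatives via the InfoNCE loss. A minimal, numerically stable sketch of that loss (names and the dot-product scoring are illustrative assumptions):

```python
import math

def info_nce(context, candidates, pos_index=0):
    # InfoNCE: cross-entropy of identifying the positive future embedding
    # (candidates[pos_index]) among negatives, scored by dot product.
    scores = [sum(c * z for c, z in zip(context, cand)) for cand in candidates]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[pos_index] / sum(exps))
```

When the positive scores much higher than the negatives the loss approaches zero; when all candidates score equally it equals log(K) for K candidates, which is the chance-level baseline.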
arXiv Detail & Related papers (2020-12-09T21:44:36Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.