Invariant Feature Learning for Sensor-based Human Activity Recognition
- URL: http://arxiv.org/abs/2012.07963v1
- Date: Mon, 14 Dec 2020 21:56:17 GMT
- Title: Invariant Feature Learning for Sensor-based Human Activity Recognition
- Authors: Yujiao Hao, Boyu Wang, Rong Zheng
- Abstract summary: We present an invariant feature learning framework (IFLF) that extracts common information shared across subjects and devices.
Experiments demonstrated that IFLF is effective in handling both subject and device diversion across popular open datasets and an in-house dataset.
- Score: 11.334750079923428
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Wearable sensor-based human activity recognition (HAR) has been a research
focus in the field of ubiquitous and mobile computing for years. In recent
years, many deep models have been applied to HAR problems. However, deep
learning methods typically require a large amount of data for models to
generalize well. Significant variances caused by different participants or
diverse sensor devices limit the direct application of a pre-trained model to a
subject or device that has not been seen before. To address these problems, we
present an invariant feature learning framework (IFLF) that extracts common
information shared across subjects and devices. IFLF incorporates two learning
paradigms: 1) meta-learning to capture robust features across seen domains and
adapt to an unseen one with similarity-based data selection; 2) multi-task
learning to deal with data shortage and enhance overall performance via
knowledge sharing among different subjects. Experiments demonstrated that IFLF
is effective in handling both subject and device diversion across popular open
datasets and an in-house dataset. It outperforms a baseline model by up to 40%
in test accuracy.
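The two learning paradigms described in the abstract can be illustrated with a minimal sketch. All names, dimensions, and weights below are hypothetical, not the authors' code: a shared encoder stands in for the invariant feature extractor, per-subject heads stand in for multi-task knowledge sharing, and cosine similarity against per-domain mean features stands in for similarity-based data selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 60-dim flattened IMU window, 16 shared features,
# 4 activity classes, 3 seen subjects (domains).
N_FEAT, N_HID, N_CLS, N_SUBJ = 60, 16, 4, 3

W_shared = rng.normal(0, 0.1, (N_FEAT, N_HID))      # shared (invariant) encoder
heads = rng.normal(0, 0.1, (N_SUBJ, N_HID, N_CLS))  # per-subject classifier heads

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def encode(x):
    # Shared feature extractor: the part every subject reuses.
    return np.maximum(x @ W_shared, 0.0)

def predict(x, subject):
    # Multi-task head: each seen subject owns a small classifier.
    return softmax(encode(x) @ heads[subject])

def select_similar_domain(x_new, domain_means):
    # Similarity-based selection: route an unseen subject's batch to the
    # seen domain whose mean feature vector is closest in cosine similarity.
    f = encode(x_new).mean(axis=0)
    sims = [f @ m / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-12)
            for m in domain_means]
    return int(np.argmax(sims))
```

In this sketch, adapting to an unseen subject would mean encoding a few of their windows, picking the most similar seen domain, and starting from that domain's head rather than from scratch.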
Related papers
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- FedOpenHAR: Federated Multi-Task Transfer Learning for Sensor-Based Human Activity Recognition [0.0]
This paper explores Federated Transfer Learning in a Multi-Task manner for both sensor-based human activity recognition and device position identification tasks.
The OpenHAR framework is used to train the models, which contains ten smaller datasets.
By utilizing transfer learning and training a task-specific and personalized federated model, we obtained accuracy similar to training each client individually and higher accuracy than a fully centralized approach.
arXiv Detail & Related papers (2023-11-13T21:31:07Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- TASKED: Transformer-based Adversarial learning for human activity recognition using wearable sensors via Self-KnowledgE Distillation [6.458496335718508]
We propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED).
In the proposed method, we adopt the teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition.
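Teacher-free self-knowledge distillation admits several formulations; one common variant (not necessarily the exact loss used in TASKED) replaces the teacher with a hand-designed soft target that puts most of the probability mass on the true class. A minimal sketch under that assumption:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def tf_skd_loss(logits, label, alpha=0.5, temp=4.0, peak=0.9):
    """Teacher-free self-distillation sketch: cross-entropy plus a
    temperature-scaled KL term against a hand-crafted soft target that
    assigns `peak` probability to the true class. All hyperparameter
    values here are illustrative."""
    n_cls = logits.shape[0]
    ce = -np.log(softmax(logits)[label])                 # ordinary CE term
    soft = np.full(n_cls, (1.0 - peak) / (n_cls - 1))    # manual "teacher"
    soft[label] = peak
    p = softmax(logits / temp)                           # softened student
    kl = np.sum(soft * (np.log(soft) - np.log(p)))       # KL(soft || student)
    return (1.0 - alpha) * ce + (temp ** 2) * alpha * kl
```

Because the soft target needs no pretrained teacher network, this kind of loss tends to stabilize training at negligible extra cost, which matches the stability motivation stated above.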
arXiv Detail & Related papers (2022-09-14T11:08:48Z)
- Multi-Domain Joint Training for Person Re-Identification [51.73921349603597]
Deep learning-based person Re-IDentification (ReID) often requires a large amount of training data to achieve good performance.
It appears that collecting more training data from diverse environments tends to improve the ReID performance.
We propose an approach called Domain-Camera-Sample Dynamic network (DCSD) whose parameters can be adaptive to various factors.
arXiv Detail & Related papers (2022-01-06T09:20:59Z)
- Adversarial Deep Feature Extraction Network for User Independent Human Activity Recognition [4.988898367111902]
We present an adversarial subject-independent feature extraction method with the maximum mean discrepancy (MMD) regularization for human activity recognition.
We evaluate the method on well-known public data sets showing that it significantly improves user-independent performance and reduces variance in results.
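The MMD regularizer mentioned above is typically computed with the standard kernel estimate of squared MMD between two feature batches. A minimal numpy sketch of the biased RBF-kernel estimator (the kernel bandwidth `gamma` here is an illustrative choice, not a value from the paper):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of squared MMD between samples x and y:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2.0 * kxy
```

Used as a regularizer, this term penalizes the distance between feature distributions of different subjects, which pushes the extractor toward subject-independent features.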
arXiv Detail & Related papers (2021-10-23T07:50:32Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- Federated Learning with Heterogeneous Labels and Models for Mobile Activity Monitoring [0.7106986689736827]
On-device Federated Learning proves to be an effective approach for distributed and collaborative machine learning.
We propose a framework for federated label-based aggregation, which leverages overlapping information gain across activities.
Empirical evaluation with the Heterogeneity Human Activity Recognition (HHAR) dataset on Raspberry Pi 2 indicates an average deterministic accuracy increase of at least 11.01%.
arXiv Detail & Related papers (2020-12-04T11:44:17Z)
- Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results competitive with supervised approaches and, in most cases, closes the gap by fine-tuning the network while learning the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.