Deep Transfer Learning with Graph Neural Network for Sensor-Based Human
Activity Recognition
- URL: http://arxiv.org/abs/2203.07910v1
- Date: Mon, 14 Mar 2022 07:57:32 GMT
- Title: Deep Transfer Learning with Graph Neural Network for Sensor-Based Human
Activity Recognition
- Authors: Yan Yan, Tianzheng Liao, Jinjin Zhao, Jiahong Wang, Liang Ma, Wei Lv,
Jing Xiong, and Lei Wang
- Abstract summary: We devised a graph-inspired deep learning approach to sensor-based HAR tasks.
We present a multi-layer residual graph convolutional neural network (ResGCNN) for sensor-based HAR tasks.
Experimental results on the PAMAP2 and mHealth data sets demonstrate that our ResGCNN is effective at capturing the characteristics of actions.
- Score: 12.51766929898714
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Sensor-based human activity recognition (HAR) in mobile application
scenarios is often confronted with variation in sensor modalities and a
shortage of annotated data. Given this observation, we devised a graph-inspired
deep learning approach to sensor-based HAR tasks, which we further used to
build a deep transfer learning model offering a tentative solution to these
two challenging problems. Specifically, we present a multi-layer residual
graph convolutional neural network (ResGCNN) for sensor-based HAR tasks,
namely the HAR-ResGCNN approach. Experimental results on the PAMAP2 and
mHealth data sets demonstrate that our ResGCNN is effective at capturing the
characteristics of actions, with results comparable to other sensor-based HAR
models (average accuracies of 98.18% and 99.07%, respectively). More
importantly, deep transfer learning experiments using the ResGCNN model show
excellent transferability and few-shot learning performance. The graph-based
framework exhibits good meta-learning ability and appears to be a promising
solution for sensor-based HAR tasks.
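The abstract does not include implementation details, but the core building block it names, a residual graph-convolution layer, can be sketched roughly as follows. This is a minimal illustration only: the function names, the toy 4-node sensor graph, and the weight shapes are my own assumptions, not the paper's actual architecture.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalize A with self-loops: D^{-1/2}(A+I)D^{-1/2},
    the standard GCN propagation matrix."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def res_gcn_layer(H, A_norm, W):
    """One residual graph-convolution step: ReLU(A_norm @ H @ W) + H.
    W must be square so the skip connection matches H's shape."""
    return np.maximum(A_norm @ H @ W, 0.0) + H

# Toy example: 4 sensor-channel nodes with 8-dim features,
# passed through 2 stacked residual layers.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))
A_norm = normalized_adjacency(A)
for _ in range(2):
    H = res_gcn_layer(H, A_norm, rng.normal(size=(8, 8)) * 0.1)
```

Stacking such layers with identity skip connections is what makes a deeper "multi-layer residual" graph network trainable without vanishing gradients.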
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish scaling laws for LSM on tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- Feature Fusion for Human Activity Recognition using Parameter-Optimized Multi-Stage Graph Convolutional Network and Transformer Models [0.6157382820537721]
The study uses sensory data from HuGaDB, PKU-MMD, LARa, and TUG datasets.
Two models, the PO-MS-GCN and a Transformer were trained and evaluated, with PO-MS-GCN outperforming state-of-the-art models.
The models achieved high accuracies and F1-scores on HuGaDB and TUG, but lower scores on LARa and PKU-MMD.
arXiv Detail & Related papers (2024-06-24T13:44:06Z)
- Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Human Activity Recognition (HAR) has not fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
arXiv Detail & Related papers (2024-04-25T10:13:18Z)
- HGFF: A Deep Reinforcement Learning Framework for Lifetime Maximization in Wireless Sensor Networks [5.4894758104028245]
We propose a new framework combining a heterogeneous graph neural network with deep reinforcement learning to automatically construct the movement path of the sink.
We design ten types of static and dynamic maps to simulate different wireless sensor networks in the real world.
Our approach consistently outperforms the existing methods on all types of maps.
arXiv Detail & Related papers (2024-04-11T13:09:11Z)
- Know Thy Neighbors: A Graph Based Approach for Effective Sensor-Based Human Activity Recognition in Smart Homes [0.0]
We propose a novel graph-guided neural network approach for Human Activity Recognition (HAR) in smart homes.
We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home.
Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms.
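Mapping sensor measurements to a feature space via attention, as this summary describes, might look like the following sketch. All names and shapes here are illustrative assumptions, not the paper's implementation: each sensor node's embedding becomes a data-dependent weighted mix of every sensor's values, rather than relying on a fixed wiring.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sensor_attention(X, Wq, Wk, Wv):
    """Scaled dot-product attention across sensor nodes.
    X: (n_sensors, d) raw measurements; returns embeddings and weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)
    return weights @ V, weights

# Toy example: 6 sensors, each emitting a 4-dim measurement vector.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
Z, W_attn = sensor_attention(X, Wq, Wk, Wv)
```

Each row of `W_attn` is a learned soft neighborhood over the other sensors, which is one way a graph structure can emerge from attention rather than being specified by hand.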
arXiv Detail & Related papers (2023-11-16T02:43:13Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
The behavior of the signals from each sensor must be considered separately, while also accounting for their correlations and hidden relationships with one another.
Graph nodes can represent data from the different sensors, and edges can capture the influence of these data on each other.
The authors propose constructing the graph during training of the graph neural network, which allows models to be trained on data where the dependencies between sensors are not known in advance.
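The idea of learning the adjacency matrix alongside the network can be sketched as follows. This is a minimal illustration under my own assumptions (the class name, the free-score parameterization via a row-wise softmax, and plain gradient-descent updates), not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TrainableAdjacency:
    """Learn the sensor graph instead of fixing it: keep a free score
    matrix S and derive edge weights A = softmax(S) row-wise, so the
    wiring between sensors is updated by gradient descent together
    with the rest of the network's parameters."""

    def __init__(self, n_sensors, seed=0):
        self.S = np.random.default_rng(seed).normal(size=(n_sensors, n_sensors))

    def adjacency(self):
        # Each row is a normalized distribution of edge weights.
        return softmax(self.S, axis=-1)

    def step(self, grad, lr=0.1):
        # grad would come from backpropagating the GNN loss through A.
        self.S -= lr * grad

adj = TrainableAdjacency(n_sensors=5)
A = adj.adjacency()
```

Because the softmax keeps every row a valid weight distribution, the learned graph stays well-behaved throughout training even when no dependency structure is known up front.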
arXiv Detail & Related papers (2022-10-20T11:03:21Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.