Decoding Human Activities: Analyzing Wearable Accelerometer and Gyroscope Data for Activity Recognition
- URL: http://arxiv.org/abs/2310.02011v3
- Date: Mon, 8 Jul 2024 18:09:11 GMT
- Title: Decoding Human Activities: Analyzing Wearable Accelerometer and Gyroscope Data for Activity Recognition
- Authors: Utsab Saha, Sawradip Saha, Tahmid Kabir, Shaikh Anowarul Fattah, Mohammad Saquib
- Abstract summary: A person's movement or relative positioning can be effectively captured by different types of sensors, and the corresponding sensor output can be processed by various techniques to classify different human activities.
This letter proposes an effective scheme for human activity recognition, which introduces two unique approaches within a multi-structural architecture, named FusionActNet.
The results clearly demonstrate that our method outperforms existing approaches in terms of accuracy, precision, recall, and F1 score, achieving 97.35% and 95.35% accuracy on the UCI HAR and Motion-Sense datasets, respectively.
- Score: 1.262949092134022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A person's movement or relative positioning can be effectively captured by different types of sensors, and the corresponding sensor output can be processed by various techniques to classify different human activities. This letter proposes an effective scheme for human activity recognition that introduces two unique approaches within a multi-structural architecture, named FusionActNet. The first approach captures the static and dynamic behavior of a particular action by using two dedicated residual networks, and the second facilitates the final decision-making process through a guidance module. A two-stage training process is designed: in the first stage, the residual networks are pre-trained separately on static data (where the human body is immobile) and dynamic data (involving movement of the human body). In the next stage, the guidance module, together with the pre-trained static and dynamic models, is trained on the given sensor data. Here the guidance module learns to emphasize the most relevant prediction vector obtained from the static or dynamic model, which helps to effectively classify different human activities. The proposed scheme is evaluated on two benchmark datasets and compared with state-of-the-art methods. The results clearly demonstrate that our method outperforms existing approaches in terms of accuracy, precision, recall, and F1 score, achieving 97.35% and 95.35% accuracy on the UCI HAR and Motion-Sense datasets, respectively, which highlights both the effectiveness and stability of the proposed scheme.
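The two-branch design with guidance-based fusion described in the abstract can be sketched roughly as below; the layer sizes, module names, and the way the guidance module consumes the input are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of a FusionActNet-style model: two residual branches plus a
# guidance module that weights their prediction vectors (illustrative only).
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """Simple 1-D residual block over (batch, channels, time) sensor windows."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.net(x))

class BranchNet(nn.Module):
    """One residual branch, pre-trained on static-only or dynamic-only data."""
    def __init__(self, in_channels, num_classes, width=64):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResidualBlock1D(width), ResidualBlock1D(width))
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):                            # x: (batch, channels, time)
        h = self.blocks(self.stem(x)).mean(dim=-1)   # global average pooling
        return self.head(h)                          # class logits ("prediction vector")

class FusionActNetSketch(nn.Module):
    """Guidance module emphasizes the more relevant of the two prediction vectors."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.static_net = BranchNet(in_channels, num_classes)
        self.dynamic_net = BranchNet(in_channels, num_classes)
        # Guidance module: looks at the raw window and outputs two mixing weights.
        self.guidance = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(in_channels, 2), nn.Softmax(dim=-1),
        )

    def forward(self, x):
        p_static = self.static_net(x)
        p_dynamic = self.dynamic_net(x)
        w = self.guidance(x)                         # (batch, 2) weights
        return w[:, :1] * p_static + w[:, 1:] * p_dynamic

model = FusionActNetSketch(in_channels=6, num_classes=6)  # 3-axis acc + 3-axis gyro
logits = model(torch.randn(8, 6, 128))                    # 8 windows of 128 samples
```

In the two-stage procedure outlined in the abstract, the static and dynamic branches would first be pre-trained separately on static-only and dynamic-only windows, and only then would the guidance module be trained to weight their prediction vectors.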
Related papers
- Consistency Based Weakly Self-Supervised Learning for Human Activity Recognition with Wearables [1.565361244756411]
We describe a weakly self-supervised approach for recognizing human activities from sensor-based data.
We show that our approach can help the clustering algorithm achieve comparable performance in identifying and categorizing the underlying human activities.
arXiv Detail & Related papers (2024-07-29T06:29:21Z)
- Efficient Transferability Assessment for Selection of Pre-trained Detectors [63.21514888618542]
This paper studies the efficient transferability assessment of pre-trained object detectors.
We build up a detector transferability benchmark which contains a large and diverse zoo of pre-trained detectors.
Experimental results demonstrate that our method outperforms other state-of-the-art approaches in assessing transferability.
arXiv Detail & Related papers (2024-03-14T14:23:23Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Self-Supervised Human Activity Recognition with Localized Time-Frequency Contrastive Representation Learning [16.457778420360537]
We propose a self-supervised learning solution for human activity recognition with smartphone accelerometer data.
We develop a model that learns strong representations from accelerometer signals, while reducing the model's reliance on class labels.
We evaluate the performance of the proposed solution on three datasets, namely MotionSense, HAPT, and HHAR.
arXiv Detail & Related papers (2022-08-26T22:47:18Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are available only on a source dataset and unavailable on a target dataset during training.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
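As a rough illustration of the segment-permutation pretext task summarized in the entry above (the segment count, permutation set, and input layout are assumptions, not the paper's specification):

```python
# Sketch of a temporal-permutation self-supervised task: shuffle temporal
# segments and train a classifier to recover the permutation (illustrative).
import itertools
import numpy as np

SEGMENTS = 3
PERMUTATIONS = list(itertools.permutations(range(SEGMENTS)))  # 6 pseudo-classes

def permute_segments(seq: np.ndarray, rng: np.random.Generator):
    """Split a (time, features) sequence into segments, shuffle them, and
    return the shuffled sequence plus the permutation index as pseudo-label."""
    label = int(rng.integers(len(PERMUTATIONS)))
    order = PERMUTATIONS[label]
    chunks = np.array_split(seq, SEGMENTS, axis=0)
    return np.concatenate([chunks[i] for i in order], axis=0), label

rng = np.random.default_rng(0)
x = np.random.randn(128, 75)        # e.g. 25 joints x 3 coordinates per frame
x_perm, y_pseudo = permute_segments(x, rng)
# A classifier trained to predict y_pseudo from x_perm learns temporal structure
# without action labels; permuting body parts yields the second pretext task.
```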
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
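A minimal sketch of the idea summarized in the entry above, i.e. optimizing a control sequence by backpropagating through a learned, differentiable dynamics model (the network, cost, and dimensions are placeholders, not the paper's setup):

```python
# Gradient-based trajectory optimization through a learned dynamics model.
import torch
import torch.nn as nn

state_dim, action_dim, horizon = 4, 2, 20

# Stand-in for a dynamics model f(s, a) -> s_next already learned from data.
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                         nn.Linear(64, state_dim))
for p in dynamics.parameters():      # the learned model is kept fixed
    p.requires_grad_(False)

s0 = torch.zeros(state_dim)
goal = torch.ones(state_dim)
actions = torch.zeros(horizon, action_dim, requires_grad=True)
opt = torch.optim.Adam([actions], lr=0.05)

for step in range(200):
    opt.zero_grad()
    s, cost = s0, 0.0
    for t in range(horizon):                     # roll out the learned model
        s = dynamics(torch.cat([s, actions[t]]))
        cost = cost + (s - goal).pow(2).sum() + 1e-3 * actions[t].pow(2).sum()
    cost.backward()                              # gradients flow through the model
    opt.step()
```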
- Non-local Graph Convolutional Network for joint Activity Recognition and Motion Prediction [2.580765958706854]
3D skeleton-based motion prediction and activity recognition are two interwoven tasks in human behaviour analysis.
We propose a new way to combine the advantages of both graph convolutional neural networks and recurrent neural networks for joint human motion prediction and activity recognition.
arXiv Detail & Related papers (2021-08-03T14:07:10Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video [71.43912903508765]
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns the general knowledge of instruments and the fast adaptation ability through the video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z)
- Self-supervised Human Activity Recognition by Learning to Predict Cross-Dimensional Motion [16.457778420360537]
We propose the use of self-supervised learning for human activity recognition with smartphone accelerometer data.
First, the representations of unlabeled input signals are learned by training a deep convolutional neural network to predict a segment of accelerometer values.
For the downstream classification task, we add a number of fully connected layers to the end of the frozen network and train the added layers with labeled accelerometer signals to classify human activities.
arXiv Detail & Related papers (2020-10-21T02:14:31Z)
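The "freeze the pretrained network, then train added fully connected layers" step summarized in the entry above looks roughly like this (the backbone, dimensions, and class count are placeholders):

```python
# Fine-tune only newly added FC layers on top of a frozen pretrained backbone.
import torch
import torch.nn as nn

backbone = nn.Sequential(                       # stand-in for the pretrained CNN
    nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)
for p in backbone.parameters():                 # freeze pretrained weights
    p.requires_grad = False

head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 6))
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)   # only the head trains
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, 3, 128)                     # 16 labeled accelerometer windows
y = torch.randint(0, 6, (16,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```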
- Entropy Decision Fusion for Smartphone Sensor based Human Activity Recognition [0.0]
We present an approach for fusing a convolutional neural network, a recurrent convolutional network, and a support vector machine via entropy-based decision fusion of their outputs.
Experiments are conducted on two benchmark datasets, UCI-HAR and WISDM.
arXiv Detail & Related papers (2020-05-30T21:09:38Z)
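One generic way to realize the entropy-based decision fusion named in the entry above is to weight each classifier's probability vector by the inverse of its prediction entropy; this is an illustrative sketch, not necessarily the paper's exact formulation:

```python
# Entropy-weighted fusion of several classifiers' probability outputs (sketch).
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector; low entropy = confident."""
    return float(-(p * np.log(p + eps)).sum())

def fuse(prob_vectors):
    """Weight each classifier inversely to its prediction entropy and
    combine the weighted probabilities into a single decision vector."""
    weights = np.array([1.0 / (entropy(p) + 1e-6) for p in prob_vectors])
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, prob_vectors))

cnn_p = np.array([0.80, 0.10, 0.10])    # confident CNN
rnn_p = np.array([0.40, 0.35, 0.25])    # less confident recurrent model
svm_p = np.array([0.34, 0.33, 0.33])    # nearly uniform SVM scores
fused = fuse([cnn_p, rnn_p, svm_p])
print(fused.argmax())                   # class chosen after fusion
```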
This list is automatically generated from the titles and abstracts of the papers on this site.