Investigating Deep Neural Network Architecture and Feature Extraction
Designs for Sensor-based Human Activity Recognition
- URL: http://arxiv.org/abs/2310.03760v1
- Date: Tue, 26 Sep 2023 14:55:32 GMT
- Title: Investigating Deep Neural Network Architecture and Feature Extraction
Designs for Sensor-based Human Activity Recognition
- Authors: Danial Ahangarani, Mohammad Shirazi, Navid Ashraf
- Abstract summary: In light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition.
We investigate the performance of common deep learning and machine learning approaches as well as different training mechanisms.
We extract various feature representations from the sensor time-series data and measure their effectiveness for the human activity recognition task.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ubiquitous availability of sensors in smart devices and the
Internet of Things (IoT) has opened up the possibilities for implementing
sensor-based activity recognition. As opposed to traditional sensor time-series
processing and hand-engineered feature extraction, in light of deep learning's
proven effectiveness across various domains, numerous deep methods have been
explored to tackle the challenges in activity recognition, outperforming
traditional signal processing and machine learning approaches. In
this work, by performing extensive experimental studies on two human activity
recognition datasets, we investigate the performance of common deep learning
and machine learning approaches as well as different training mechanisms (such
as contrastive learning), and various feature representations extracted from
the sensor time-series data and measure their effectiveness for the human
activity recognition task.
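As a rough illustration of one such feature representation (not the paper's actual pipeline; the window length, step size, and feature set below are illustrative assumptions), statistical features can be computed over sliding windows of a raw sensor channel:

```python
import numpy as np

def window_features(signal, win_len=128, step=64):
    """Slide a fixed-length window over a 1-D sensor channel and
    compute simple statistical features per window."""
    feats = []
    for start in range(0, len(signal) - win_len + 1, step):
        w = signal[start:start + win_len]
        feats.append([
            w.mean(),                   # average signal level
            w.std(),                    # variability within the window
            w.min(),                    # window minimum
            w.max(),                    # window maximum
            np.abs(np.diff(w)).mean(),  # mean sample-to-sample change
        ])
    return np.array(feats)

# Synthetic accelerometer channel: noise around gravity (9.81 m/s^2).
rng = np.random.default_rng(0)
acc_z = 9.81 + 0.5 * rng.standard_normal(1024)

X = window_features(acc_z)
print(X.shape)  # (15, 5): one 5-dimensional feature row per window
```

Feature matrices of this kind can then be fed to either classical machine learning classifiers or deep models, which is the comparison the paper carries out.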
Related papers
- Apprenticeship-Inspired Elegance: Synergistic Knowledge Distillation Empowers Spiking Neural Networks for Efficient Single-Eye Emotion Recognition [53.359383163184425]
We introduce a novel multimodality synergistic knowledge distillation scheme tailored for efficient single-eye emotion recognition tasks.
This method allows a lightweight, unimodal student spiking neural network (SNN) to extract rich knowledge from an event-frame multimodal teacher network.
arXiv Detail & Related papers (2024-06-20T07:24:47Z)
- Know Thy Neighbors: A Graph Based Approach for Effective Sensor-Based
Human Activity Recognition in Smart Homes [0.0]
We propose a novel graph-guided neural network approach for Human Activity Recognition (HAR) in smart homes.
We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home.
Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms.
arXiv Detail & Related papers (2023-11-16T02:43:13Z)
- MultiIoT: Benchmarking Machine Learning for the Internet of Things [70.74131118309967]
The next generation of machine learning systems must be adept at perceiving and interacting with the physical world.
Sensory data from motion, thermal, geolocation, depth, wireless signals, video, and audio are increasingly used to model the states of physical environments.
Existing efforts are often specialized to a single sensory modality or prediction task.
This paper proposes MultiIoT, the most expansive and unified IoT benchmark to date, encompassing over 1.15 million samples from 12 modalities and 8 real-world tasks.
arXiv Detail & Related papers (2023-11-10T18:13:08Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement
in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- TASKED: Transformer-based Adversarial learning for human activity
recognition using wearable sensors via Self-KnowledgE Distillation [6.458496335718508]
We propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED).
In the proposed method, we adopt the teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition.
arXiv Detail & Related papers (2022-09-14T11:08:48Z)
- Classifying Human Activities with Inertial Sensors: A Machine Learning
Approach [0.0]
Human Activity Recognition (HAR) is an ongoing research topic.
It has applications in medical support, sports, fitness, social networking, human-computer interfaces, senior care, entertainment, and surveillance, among others.
We examined and analyzed different Machine Learning and Deep Learning approaches for Human Activity Recognition using inertial sensor data of smartphones.
arXiv Detail & Related papers (2021-11-09T08:17:33Z)
- Incremental Learning Techniques for Online Human Activity Recognition [0.0]
We propose a human activity recognition (HAR) approach for the online prediction of physical movements.
We develop a HAR system containing monitoring software and a mobile application that collects accelerometer and gyroscope data.
Six incremental learning algorithms are employed and evaluated in this work and compared with several batch learning algorithms commonly used for developing offline HAR systems.
arXiv Detail & Related papers (2021-09-20T11:33:09Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision
Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
- Deep Learning for Sensor-based Human Activity Recognition: Overview,
Challenges and Opportunities [52.59080024266596]
We present a survey of the state-of-the-art deep learning methods for sensor-based human activity recognition.
We first introduce the multi-modality of the sensory data and provide information for public datasets.
We then propose a new taxonomy to structure the deep methods by challenges.
arXiv Detail & Related papers (2020-01-21T09:55:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.