Classifying Human Activities with Inertial Sensors: A Machine Learning
Approach
- URL: http://arxiv.org/abs/2111.05333v1
- Date: Tue, 9 Nov 2021 08:17:33 GMT
- Title: Classifying Human Activities with Inertial Sensors: A Machine Learning
Approach
- Authors: Hamza Ali Imran, Saad Wazir, Usman Iftikhar, Usama Latif
- Abstract summary: Human Activity Recognition (HAR) is an ongoing research topic.
It has applications in medical support, sports, fitness, social networking, human-computer interfaces, senior care, entertainment, surveillance, and the list goes on.
We examined and analyzed different Machine Learning and Deep Learning approaches for Human Activity Recognition using inertial sensor data of smartphones.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human Activity Recognition (HAR) is an ongoing research topic. It has
applications in medical support, sports, fitness, social networking,
human-computer interfaces, senior care, entertainment, surveillance, and the
list goes on. Traditionally, computer vision methods were employed for HAR, but
these have numerous problems, such as privacy concerns, the influence of
environmental factors, limited mobility, higher running costs, occlusion, and so
on. A new trend toward the use of sensors, especially inertial sensors, has lately
emerged. There are several advantages of employing sensor data as an
alternative to traditional computer vision algorithms. The limitations of
computer vision algorithms are well documented in the literature, which has
motivated research on Deep Neural Network (DNN) and Machine Learning (ML)
approaches for activity categorization utilizing sensor data. We examined and
analyzed different Machine Learning and Deep Learning approaches for Human
Activity Recognition using inertial sensor data of smartphones, in order to
identify which approach is best suited for this application.
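The classical pipeline such approaches share can be sketched as follows: segment the raw inertial stream into fixed-length windows, extract simple statistical features per window, and classify each window. The signals, window length, and nearest-centroid classifier below are illustrative placeholders, not the paper's actual dataset, features, or evaluated models:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(mean, std, n=500):
    # Synthetic 3-axis accelerometer stream standing in for one activity.
    return rng.normal(mean, std, size=(n, 3))

def windows(signal, size=100):
    # Non-overlapping fixed-length windows, the usual HAR segmentation step.
    n = (len(signal) // size) * size
    return signal[:n].reshape(-1, size, signal.shape[1])

def features(win):
    # Per-axis mean and standard deviation: a common hand-crafted feature set.
    return np.concatenate([win.mean(axis=0), win.std(axis=0)])

# Two toy "activities": low-variance (sitting-like) vs high-variance (walking-like).
data = {"sitting": make_signal(0.0, 0.05), "walking": make_signal(0.0, 1.0)}
X, y = [], []
for label, sig in data.items():
    for win in windows(sig):
        X.append(features(win))
        y.append(label)
X, y = np.array(X), np.array(y)

# Nearest-centroid classifier: one mean feature vector per activity.
centroids = {lab: X[y == lab].mean(axis=0) for lab in data}

def predict(feat):
    return min(centroids, key=lambda lab: np.linalg.norm(feat - centroids[lab]))

preds = [predict(f) for f in X]
accuracy = np.mean([p == t for p, t in zip(preds, y)])
```

Any of the ML or DNN models compared in the paper would slot in where the nearest-centroid step sits; the windowing and feature-extraction stages are what make raw inertial streams usable by such classifiers.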
Related papers
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition [0.0]
In light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition.
We investigate the performance of common deep learning and machine learning approaches as well as different training mechanisms.
We extract various feature representations from the sensor time-series data and measure their effectiveness for the human activity recognition task.
arXiv Detail & Related papers (2023-09-26T14:55:32Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Video-based Human Action Recognition using Deep Learning: A Review [4.976815699476327]
Human action recognition is an important application domain in computer vision.
Deep learning has been given particular attention by the computer vision community.
This paper presents an overview of the current state-of-the-art in action recognition using video analysis with deep learning techniques.
arXiv Detail & Related papers (2022-08-07T17:12:12Z)
- HAKE: A Knowledge Engine Foundation for Human Activity Understanding [65.24064718649046]
Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.
We propose a novel paradigm to reformulate this task in two stages: first mapping pixels to an intermediate space spanned by atomic activity primitives, then programming detected primitives with interpretable logic rules to infer semantics.
Our framework, the Human Activity Knowledge Engine (HAKE), exhibits superior generalization ability and performance upon challenging benchmarks.
arXiv Detail & Related papers (2022-02-14T16:38:31Z)
- Incremental Learning Techniques for Online Human Activity Recognition [0.0]
We propose a human activity recognition (HAR) approach for the online prediction of physical movements.
We develop a HAR system containing monitoring software and a mobile application that collects accelerometer and gyroscope data.
Six incremental learning algorithms are employed and evaluated in this work and compared with several batch learning algorithms commonly used for developing offline HAR systems.
arXiv Detail & Related papers (2021-09-20T11:33:09Z)
- Continual Learning in Sensor-based Human Activity Recognition: an Empirical Benchmark Analysis [4.686889458553123]
Sensor-based human activity recognition (HAR) is a key enabler for many real-world applications in smart homes, personal healthcare, and urban planning.
How can a HAR system autonomously learn new activities over a long period of time without being re-engineered from scratch?
This problem is known as continual learning and has been particularly popular in the domain of computer vision.
arXiv Detail & Related papers (2021-04-19T15:38:22Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in vision-sensor modality (videos)
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Human Activity Recognition using Inertial, Physiological and Environmental Sensors: a Comprehensive Survey [3.1166345853612296]
This survey focuses on critical role of machine learning in developing HAR applications based on inertial sensors in conjunction with physiological and environmental sensors.
HAR is considered one of the most promising assistive technology tools to support the daily life of the elderly by monitoring their cognitive and physical function through daily activities.
arXiv Detail & Related papers (2020-04-19T11:32:35Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z) - Deep Learning for Sensor-based Human Activity Recognition: Overview,
Challenges and Opportunities [52.59080024266596]
We present a survey of the state-of-the-art deep learning methods for sensor-based human activity recognition.
We first introduce the multi-modality of the sensory data and provide information for public datasets.
We then propose a new taxonomy to structure the deep methods by challenges.
arXiv Detail & Related papers (2020-01-21T09:55:59Z)
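Several of the listed works, the incremental-learning entry in particular, describe HAR systems that update their model sample by sample as new sensor data arrives, instead of retraining in batch. A minimal sketch of that online-update idea, using a toy two-class perceptron on synthetic feature vectors rather than any of the papers' actual algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

class OnlinePerceptron:
    """Tiny online learner: one weight update per incoming sample,
    so the model adapts without batch retraining."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return 1 if x @ self.w + self.b > 0 else 0

    def partial_fit(self, x, label):
        # Standard perceptron rule: nudge weights only on mistakes.
        err = label - self.predict(x)
        self.w += self.lr * err * x
        self.b += self.lr * err

# Interleaved synthetic stream: two well-separated "activity" clusters.
model = OnlinePerceptron(n_features=2)
stream = []
for _ in range(200):
    stream.append((rng.normal([0, 0], 0.3), 0))
    stream.append((rng.normal([2, 2], 0.3), 1))
for x, label in stream:
    model.partial_fit(x, label)

# Evaluate on fresh samples after the stream has been consumed.
test_acc = np.mean(
    [model.predict(rng.normal([0, 0], 0.3)) == 0 for _ in range(50)]
    + [model.predict(rng.normal([2, 2], 0.3)) == 1 for _ in range(50)]
)
```

The same pattern, observe one sample, predict, then correct the model, underlies the incremental algorithms those papers evaluate; they differ mainly in the learner substituted for this perceptron.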
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.