HHAR-net: Hierarchical Human Activity Recognition using Neural Networks
- URL: http://arxiv.org/abs/2010.16052v2
- Date: Tue, 10 Nov 2020 22:52:46 GMT
- Title: HHAR-net: Hierarchical Human Activity Recognition using Neural Networks
- Authors: Mehrdad Fazli, Kamran Kowsari, Erfaneh Gharavi, Laura Barnes, Afsaneh Doryab
- Abstract summary: This research builds a hierarchical classification model with neural networks to recognize human activities.
We evaluate our model on the Extrasensory dataset, a dataset collected in the wild that contains data from smartphones and smartwatches.
- Score: 2.4530909757679633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Activity recognition using built-in sensors in smart and wearable devices
provides great opportunities to understand and detect human behavior in the
wild and gives a more holistic view of individuals' health and well-being.
Numerous computational methods have been applied to sensor streams to recognize
different daily activities. However, most methods are unable to capture the
different layers of activity concealed in human behavior, and model
performance tends to degrade as the number of activities increases. This
research builds a hierarchical classification model with neural networks to
recognize human activities at different levels of abstraction. We evaluate our
model on the Extrasensory dataset, a dataset collected in the wild that
contains data from smartphones and smartwatches. We use a two-level hierarchy
with a total of six mutually exclusive labels, namely "lying down", "sitting",
"standing in place", "walking", "running", and "bicycling", grouped under the
parent classes "stationary" and "non-stationary". The results show that our
model can recognize low-level activities (stationary/non-stationary) with
95.8% accuracy and an overall accuracy of 92.8% over the six labels, 3% above
our best-performing baseline.
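To make the two-level design concrete, here is a minimal sketch of a hierarchical classifier that first predicts the parent class (stationary vs. non-stationary) and then routes each sample to a per-branch child model over the corresponding fine-grained labels. It assumes generic tabular features and uses small scikit-learn MLPs as stand-ins; the paper's actual feature extraction, network sizes, and training setup are not reproduced here.

```python
# Hedged sketch of a two-level hierarchical HAR classifier.
# Layer sizes, feature dimensions, and the toy data below are
# illustrative assumptions, not the configuration from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

PARENT_CLASSES = ["stationary", "non-stationary"]
CHILD_LABELS = {
    "stationary": ["lying down", "sitting", "standing in place"],
    "non-stationary": ["walking", "running", "bicycling"],
}

class HierarchicalHAR:
    """Parent model picks the branch; a child model picks the activity."""

    def __init__(self):
        self.parent = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
        self.children = {
            p: MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
            for p in PARENT_CLASSES
        }

    def fit(self, X, y):
        # y holds the six fine-grained labels; derive each sample's parent label.
        to_parent = {c: p for p, cs in CHILD_LABELS.items() for c in cs}
        y_parent = np.array([to_parent[label] for label in y])
        self.parent.fit(X, y_parent)
        # Train each child model only on samples from its own branch.
        for p in PARENT_CLASSES:
            mask = y_parent == p
            self.children[p].fit(X[mask], y[mask])
        return self

    def predict(self, X):
        # Route each sample to the child model chosen by the parent model.
        y_parent = self.parent.predict(X)
        y_pred = np.empty(len(X), dtype=object)
        for p in PARENT_CLASSES:
            mask = y_parent == p
            if mask.any():
                y_pred[mask] = self.children[p].predict(X[mask])
        return y_pred

# Toy usage with random stand-in features (purely illustrative):
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
labels = [c for cs in CHILD_LABELS.values() for c in cs]
y = np.array([labels[i % len(labels)] for i in range(120)])
print(HierarchicalHAR().fit(X, y).predict(X[:5]))
```

One design note: with hard routing like this, a mistake at the parent level cannot be recovered at the child level, which is why the 95.8% parent accuracy effectively upper-bounds the 92.8% six-label accuracy reported above.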
Related papers
- Consistency Based Weakly Self-Supervised Learning for Human Activity Recognition with Wearables [1.565361244756411]
We describe a weakly self-supervised approach for recognizing human activities from sensor-based data.
We show that our approach can help the clustering algorithm achieve comparable performance in identifying and categorizing the underlying human activities.
arXiv Detail & Related papers (2024-07-29T06:29:21Z)
- Unsupervised Embedding Learning for Human Activity Recognition Using Wearable Sensor Data [2.398608007786179]
We present an unsupervised approach to project the human activities into an embedding space in which similar activities will be located closely together.
Results of experiments on three labeled benchmark datasets demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2023-07-21T08:52:47Z)
- CHARM: A Hierarchical Deep Learning Model for Classification of Complex Human Activities Using Motion Sensors [0.9594432031144714]
CHARM is a hierarchical deep learning model for classification of complex human activities using motion sensors.
It outperforms state-of-the-art supervised learning approaches for high-level activity recognition in terms of average accuracy and F1 scores.
The ability to learn low-level user activities when trained using only high-level activity labels may pave the way to semi-supervised learning of HAR tasks.
arXiv Detail & Related papers (2022-07-16T01:36:54Z)
- Classifying Human Activities using Machine Learning and Deep Learning Techniques [0.0]
Human Activity Recognition (HAR) describes a machine's ability to recognize human actions.
The challenge in HAR is overcoming the difficulty of separating human activities based on the given data.
Deep learning techniques such as Long Short-Term Memory (LSTM), Bi-Directional LSTM, Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU) classifiers are trained.
Experimental results show that the Linear Support Vector classifier among the machine learning methods and the Gated Recurrent Unit among the deep learning methods provide better accuracy for human activity recognition.
arXiv Detail & Related papers (2022-05-19T05:20:04Z)
- HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly Unlabeled Mobile Sensor Data [61.79595926825511]
Acquiring balanced datasets with accurate activity labels requires human annotators to label correctly and potentially interfere with the subjects' normal activities in real time.
We propose HAR-GCNN, a deep graph CNN model that leverages the correlation between chronologically adjacent sensor measurements to predict the correct labels for unclassified activities.
HAR-GCNN shows superior performance relative to previously used baseline methods, improving classification accuracy by about 25% and up to 68% on different datasets.
arXiv Detail & Related papers (2022-03-07T01:23:46Z)
- HAKE: A Knowledge Engine Foundation for Human Activity Understanding [65.24064718649046]
Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.
We propose a novel paradigm to reformulate this task in two stages: first mapping pixels to an intermediate space spanned by atomic activity primitives, then programming detected primitives with interpretable logic rules to infer semantics.
Our framework, the Human Activity Knowledge Engine (HAKE), exhibits superior generalization ability and performance upon challenging benchmarks.
arXiv Detail & Related papers (2022-02-14T16:38:31Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Human Activity Recognition using Attribute-Based Neural Networks and Context Information [61.67246055629366]
We consider human activity recognition (HAR) from wearable sensor data in manual-work processes.
We show how context information can be integrated systematically into a deep neural network-based HAR system.
We empirically show that our proposed architecture increases HAR performance, compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T06:08:25Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms MoCo, a visual-only state-of-the-art method.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
- ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection [101.56529337489417]
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> triplets in images.
We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs.
Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into visual-semantic joint embedding space and obtains detection results by measuring their similarities.
arXiv Detail & Related papers (2020-08-14T09:11:18Z)
- Sequential Weakly Labeled Multi-Activity Localization and Recognition on Wearable Sensors using Recurrent Attention Networks [13.64024154785943]
We propose a recurrent attention network (RAN) to handle sequential weakly labeled multi-activity recognition and localization tasks.
Our RAN model can simultaneously infer multi-activity types from coarse-grained sequential weak labels, greatly reducing the burden of manual labeling.
arXiv Detail & Related papers (2020-04-13T04:57:09Z)