Hierarchical Self Attention Based Autoencoder for Open-Set Human
Activity Recognition
- URL: http://arxiv.org/abs/2103.04279v1
- Date: Sun, 7 Mar 2021 06:21:18 GMT
- Title: Hierarchical Self Attention Based Autoencoder for Open-Set Human
Activity Recognition
- Authors: M Tanjid Hasan Tonmoy, Saif Mahmud, A K M Mahbubur Rahman, M Ashraful
Amin, and Amin Ahsan Ali
- Abstract summary: A self-attention based approach is proposed for wearable sensor based human activity recognition.
It incorporates self-attention based feature representations from the encoder to detect unseen activity classes in an open-set recognition setting.
We conduct extensive validation experiments that indicate significantly improved robustness to noise and subject-specific variability in body-worn sensor signals.
- Score: 2.492343817244558
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Wearable sensor based human activity recognition is a challenging
problem due to the difficulty of modeling the spatial and temporal
dependencies of sensor signals. Recognition models built under the closed-set
assumption are forced to predict a member of the known activity classes.
However, an activity recognition model can encounter an unseen activity due to
body-worn sensor malfunction or a disability of the subject performing the
activities. This problem can be addressed by modeling the task under the
open-set recognition assumption. Hence, the proposed self-attention based
approach hierarchically combines data from different sensor placements across
time to classify closed-set activities, and it obtains notable performance
improvements over state-of-the-art models on five publicly available datasets.
The decoder in this autoencoder architecture incorporates self-attention based
feature representations from the encoder to detect unseen activity classes in
the open-set recognition setting. Furthermore,
attention maps generated by the hierarchical model demonstrate explainable
selection of features in activity recognition. We conduct extensive leave one
subject out validation experiments that indicate significantly improved
robustness to noise and subject specific variability in body-worn sensor
signals. The source code is available at:
github.com/saif-mahmud/hierarchical-attention-HAR
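
The abstract describes the architecture only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea, not the authors' implementation (their code is in the linked repository): per-placement self-attention over time, cross-placement self-attention for closed-set classification, and a reconstruction branch whose error can flag unseen activities. The layer sizes, decoder design, and thresholding rule are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionHAR(nn.Module):
    """Sketch of a hierarchical self-attention autoencoder for HAR (assumed design)."""

    def __init__(self, n_placements, n_channels, n_classes, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)      # per-timestep channel projection
        self.temporal = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.spatial = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.Linear(d_model, n_channels)    # reconstructs the sensor window
        self.classifier = nn.Linear(d_model, n_classes)  # closed-set activity head

    def forward(self, x):
        # x: (batch, placements, time, channels) window of body-worn sensor signals
        b, p, t, c = x.shape
        h = self.embed(x.reshape(b * p, t, c))            # (b*p, t, d_model)
        h = self.temporal(h)                              # self-attention over time
        recon = self.decoder(h).reshape(b, p, t, c)       # reconstruction as open-set cue
        z = self.spatial(h.mean(dim=1).reshape(b, p, -1)) # self-attention across placements
        logits = self.classifier(z.mean(dim=1))           # closed-set logits
        return logits, recon

# Toy usage: classify a window and flag it as an unseen activity when the
# reconstruction error exceeds a threshold tuned on known-class validation data.
model = HierarchicalAttentionHAR(n_placements=3, n_channels=6, n_classes=8)
window = torch.randn(1, 3, 100, 6)   # one 100-step window, 3 placements, 6 channels
logits, recon = model(window)
error = torch.mean((recon - window) ** 2)
threshold = 0.5                      # placeholder value, an assumption
is_unseen = error.item() > threshold
```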
Related papers
- Advancing Location-Invariant and Device-Agnostic Motion Activity
Recognition on Wearable Devices [6.557453686071467]
We conduct a comprehensive evaluation of the generalizability of motion models across sensor locations.
Our analysis highlights this challenge and identifies key on-body locations for building location-invariant models.
We present deployable on-device motion models, with a single model reaching a 91.41% frame-level F1-score irrespective of sensor placement.
arXiv Detail & Related papers (2024-02-06T05:10:00Z)
- Open-Vocabulary Animal Keypoint Detection with Semantic-feature Matching [74.75284453828017]
The Open-Vocabulary Keypoint Detection (OVKD) task is designed to use text prompts to identify arbitrary keypoints across any species.
We have developed a novel framework named Open-Vocabulary Keypoint Detection with Semantic-feature Matching (KDSM).
This framework combines vision and language models, creating an interplay between language features and local keypoint visual features.
arXiv Detail & Related papers (2023-10-08T07:42:41Z)
- The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose by a clear margin (a minimal sketch of the weight-initialization idea appears after this list).
arXiv Detail & Related papers (2022-03-10T23:35:00Z)
- Attention-Based Sensor Fusion for Human Activity Recognition Using IMU
Signals [4.558966602878624]
We propose a novel attention-based approach to human activity recognition using multiple IMU sensors worn at different body locations.
An attention-based fusion mechanism is developed to learn the importance of sensors at different body locations and to generate an attentive feature representation.
The proposed approach is evaluated using five public datasets and it outperforms state-of-the-art methods on a wide variety of activity categories.
arXiv Detail & Related papers (2021-12-20T17:00:27Z)
- Self-supervised Pretraining with Classification Labels for Temporal
Activity Detection [54.366236719520565]
Temporal Activity Detection aims to predict activity classes per frame.
Due to the expensive frame-level annotations required for detection, the scale of detection datasets is limited.
This work proposes a novel self-supervised pretraining method for detection leveraging classification labels.
arXiv Detail & Related papers (2021-11-26T18:59:28Z)
- Unsupervised Domain Adaption of Object Detectors: A Survey [87.08473838767235]
Recent advances in deep learning have led to the development of accurate and efficient models for various computer vision applications.
Learning highly accurate models relies on the availability of datasets with a large number of annotated images.
Due to this, model performance drops drastically when evaluated on label-scarce datasets having visually distinct images.
arXiv Detail & Related papers (2021-05-27T23:34:06Z)
- Anomaly Detection Based on Selection and Weighting in Latent Space [73.01328671569759]
We propose a novel selection-and-weighting-based anomaly detection framework called SWAD.
Experiments on both benchmark and real-world datasets have shown the effectiveness and superiority of SWAD.
arXiv Detail & Related papers (2021-03-08T10:56:38Z)
- Learning Asynchronous and Sparse Human-Object Interaction in Videos [56.73059840294019]
Asynchronous-Sparse Interaction Graph Networks (ASSIGN) is able to automatically detect the structure of interaction events associated with entities in a video scene.
ASSIGN is tested on human-object interaction recognition and shows superior performance in segmenting and labeling of human sub-activities and object affordances from raw videos.
arXiv Detail & Related papers (2021-03-03T23:43:55Z)
- HHAR-net: Hierarchical Human Activity Recognition using Neural Networks [2.4530909757679633]
This research aims at building a hierarchical classification model with neural networks to recognize human activities.
We evaluate our model on the Extrasensory dataset, a dataset collected in the wild that contains data from smartphones and smartwatches.
arXiv Detail & Related papers (2020-10-28T17:06:42Z)
- Human Activity Recognition from Wearable Sensor Data Using
Self-Attention [2.9023633922848586]
We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves significant performance improvements over recent state-of-the-art models in both the benchmark test-subject and leave-one-subject-out evaluations.
arXiv Detail & Related papers (2020-03-17T14:16:57Z)
- Machine learning approaches for identifying prey handling activity in
otariid pinnipeds [12.814241588031685]
This paper focuses on the identification of prey handling activity in seals.
The data considered are streams of 3D accelerometer and depth sensor values collected by devices attached directly to the seals.
We propose an automatic model based on Machine Learning (ML) algorithms.
arXiv Detail & Related papers (2020-02-10T15:30:08Z)
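
One reusable idea from the list above, referenced in "The Overlooked Classifier in Human-Object Interaction Recognition" entry, is to encode class semantics into a classification head by initializing its weights with language embeddings of the class names. The following is a minimal, hypothetical sketch of that initialization; the embedding source, dimensions, and normalization are assumptions rather than that paper's implementation.

```python
import torch
import torch.nn as nn

n_classes, d_feat = 600, 512
# Stand-in for precomputed language embeddings of the class names
# (in practice these would come from a text encoder; assumed here).
class_text_embeddings = torch.randn(n_classes, d_feat)

# Linear classification head whose rows are initialized from the (normalized)
# text embeddings, so semantically related classes start with correlated weights.
head = nn.Linear(d_feat, n_classes, bias=False)
with torch.no_grad():
    head.weight.copy_(
        class_text_embeddings / class_text_embeddings.norm(dim=-1, keepdim=True)
    )

image_features = torch.randn(4, d_feat)   # stand-in visual features from a backbone
logits = head(image_features)             # (4, n_classes) class scores
```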
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.