Self-Supervised Transformers for Activity Classification using Ambient
Sensors
- URL: http://arxiv.org/abs/2011.12137v1
- Date: Sun, 22 Nov 2020 20:46:25 GMT
- Title: Self-Supervised Transformers for Activity Classification using Ambient
Sensors
- Authors: Luke Hicks, Ariel Ruiz-Garcia, Vasile Palade, Ibrahim Almakky
- Abstract summary: This paper proposes a methodology to classify the activities of a resident within an ambient sensor-based environment.
We also propose a methodology to pre-train Transformers in a self-supervised manner, as a hybrid autoencoder-classifier model.
- Score: 3.1829446824051195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing care for ageing populations is an onerous task, and as life
expectancy estimates continue to rise, the number of people that require senior
care is growing rapidly. This paper proposes a methodology based on Transformer
Neural Networks to classify the activities of a resident within an ambient
sensor-based environment. We also propose a methodology to pre-train
Transformers in a self-supervised manner, as a hybrid autoencoder-classifier
model instead of using contrastive loss. The social impact of the research is
considered with wider benefits of the approach and next steps for identifying
transitions in human behaviour. In recent years there has been an increasing
drive for integrating sensor based technologies within care facilities for data
collection. This allows for employing machine learning for many aspects
including activity recognition and anomaly detection. Due to the sensitivity of
healthcare environments, some methods of data collection used in current
research are considered intrusive within the senior care industry,
including cameras for image-based activity recognition and wearables for
activity tracking; moreover, recent studies have shown that using these methods
commonly results in poor data quality due to residents' lack of interest in
participating in data gathering. This has led to a focus on ambient sensors,
such as binary PIR motion, connected domestic appliances, and electricity and
water metering. By having consistency in ambient data collection, the quality
of data is considerably more reliable, presenting the opportunity to perform
classification with enhanced accuracy. Therefore, in this research we sought
an optimal way of using deep learning to classify human activity with ambient
sensor data.
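
The paper does not include an implementation, but the hybrid pre-training idea can be made concrete. Below is a minimal PyTorch sketch of a Transformer encoder trained jointly as an autoencoder (reconstructing the input sensor-event sequence) and a classifier (predicting the activity label), rather than with a contrastive loss. The token encoding of ambient sensor events, every dimension, and the 0.5 loss weight are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: hybrid autoencoder-classifier Transformer for ambient
# sensor sequences. All sizes and the loss weighting are assumptions.
import torch
import torch.nn as nn

class HybridTransformer(nn.Module):
    def __init__(self, n_sensors=64, n_activities=10, d_model=128,
                 n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.embed = nn.Embedding(n_sensors, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.reconstruct = nn.Linear(d_model, n_sensors)   # autoencoder head
        self.classify = nn.Linear(d_model, n_activities)   # classifier head

    def forward(self, tokens):
        # tokens: (batch, seq) integer IDs of fired ambient sensors
        positions = torch.arange(tokens.size(1), device=tokens.device)
        h = self.encoder(self.embed(tokens) + self.pos(positions))
        return self.reconstruct(h), self.classify(h.mean(dim=1))

model = HybridTransformer()
tokens = torch.randint(0, 64, (8, 32))   # batch of 8 sensor-event sequences
labels = torch.randint(0, 10, (8,))      # activity labels for the batch
rec_logits, cls_logits = model(tokens)

ce = nn.CrossEntropyLoss()
# Hybrid objective: reconstruct the input sequence (self-supervised signal)
# while also predicting the activity class; the 0.5 weight is arbitrary.
loss = ce(rec_logits.transpose(1, 2), tokens) + 0.5 * ce(cls_logits, labels)
loss.backward()
```

In a self-supervised pre-training phase, the reconstruction term alone could be optimized on unlabelled sequences before the classification head is trained on labelled data.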
Related papers
- HODN: Disentangling Human-Object Feature for HOI Detection [51.48164941412871]
We propose a Human and Object Disentangling Network (HODN) to model the Human-Object Interaction (HOI) relationships explicitly.
Considering that human features contribute more to interaction, we propose a Human-Guide Linking method to ensure the interaction decoder focuses on human-centric regions.
Our proposed method achieves competitive performance on both the V-COCO and HICO-Det datasets.
arXiv Detail & Related papers (2023-08-20T04:12:50Z)
- Unsupervised Embedding Learning for Human Activity Recognition Using Wearable Sensor Data [2.398608007786179]
We present an unsupervised approach to project human activities into an embedding space in which similar activities are located close together (a minimal triplet-loss sketch appears after this list).
Results of experiments on three labeled benchmark datasets demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2023-07-21T08:52:47Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances [76.34037366117234]
We introduce a new dataset called Robot Control Gestures (RoCoG-v2).
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- Automated Mobility Context Detection with Inertial Signals [7.71058263701836]
The primary goal of this paper is the investigation of context detection for remote monitoring of daily motor functions.
We aim to understand whether inertial signals sampled with wearable accelerometers provide reliable information to classify gait-related activities as either indoor or outdoor.
arXiv Detail & Related papers (2022-05-16T09:34:43Z)
- Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the COCO object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z)
- Occupancy Detection in Room Using Sensor Data [0.0]
This paper provides a solution for detecting occupancy from sensor data by using and testing several variables.
Seven well-known machine learning algorithms, namely Decision Tree, Random Forest, Gradient Boosting Machine, Logistic Regression, Naive Bayes, Kernelized SVM, and K-Nearest Neighbors, are tested and compared (a minimal scikit-learn comparison sketch appears after this list).
arXiv Detail & Related papers (2021-01-10T19:53:57Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data [16.457778420360537]
We propose an unsupervised framework to reduce the reliance on human supervision.
The proposed framework utilizes two stacked convolutional autoencoders to learn latent representations from wearable electrocardiogram (ECG) and electrodermal activity (EDA) signals.
Our method outperforms current state-of-the-art results that have performed arousal detection on the same datasets.
arXiv Detail & Related papers (2020-08-24T22:01:55Z)
- Machine learning approaches for identifying prey handling activity in otariid pinnipeds [12.814241588031685]
This paper focuses on the identification of prey handling activity in seals.
The data considered are streams of 3D accelerometer and depth sensor values collected by devices attached directly to seals.
We propose an automatic model based on Machine Learning (ML) algorithms.
arXiv Detail & Related papers (2020-02-10T15:30:08Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
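
As a concrete illustration of the classifier comparison in the occupancy-detection entry above, the following scikit-learn sketch fits and scores the seven listed algorithms. The synthetic features, labels, and default hyperparameters are placeholders rather than the paper's experimental setup.

```python
# Hedged sketch: comparing seven standard classifiers on tabular sensor
# features. The random data stands in for real occupancy measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))             # e.g. light, CO2, humidity, ...
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in occupancy label
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),
    "Logistic Regression": LogisticRegression(),
    "Naive Bayes": GaussianNB(),
    "Kernelized SVM": SVC(kernel="rbf"),
    "K-Nearest Neighbors": KNeighborsClassifier(),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(f"{name}: {clf.score(X_test, y_test):.3f}")
```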
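
The unsupervised embedding-learning entry above can likewise be illustrated with a small PyTorch sketch: an encoder trained with a triplet loss so that windows of the same activity land close together in the embedding space. The encoder architecture and the use of a jittered copy as the positive example are illustrative assumptions.

```python
# Hedged sketch: triplet-loss embedding learning for wearable sensor windows.
import torch
import torch.nn as nn

# Encoder mapping a (100-sample, 3-axis) accelerometer window to a 32-d vector.
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(100 * 3, 128), nn.ReLU(),
    nn.Linear(128, 32),
)
triplet = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(16, 100, 3)                     # raw sensor windows
positive = anchor + 0.05 * torch.randn_like(anchor)  # jittered view (assumed)
negative = torch.randn(16, 100, 3)                   # unrelated windows
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```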