Know Thy Neighbors: A Graph Based Approach for Effective Sensor-Based
Human Activity Recognition in Smart Homes
- URL: http://arxiv.org/abs/2311.09514v1
- Date: Thu, 16 Nov 2023 02:43:13 GMT
- Title: Know Thy Neighbors: A Graph Based Approach for Effective Sensor-Based
Human Activity Recognition in Smart Homes
- Authors: Srivatsa P, Thomas Plötz
- Abstract summary: We propose a novel graph-guided neural network approach for Human Activity Recognition (HAR) in smart homes.
We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home.
Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been a resurgence of applications focused on Human Activity
Recognition (HAR) in smart homes, especially in the field of ambient
intelligence and assisted living technologies. However, such applications
present numerous significant challenges to any automated analysis system
operating in the real world, such as variability, sparsity, and noise in sensor
measurements. Although state-of-the-art HAR systems have made considerable
strides in addressing some of these challenges, they especially suffer from a
practical limitation: they require successful pre-segmentation of continuous
sensor data streams before automated recognition, i.e., they assume that an
oracle is present during deployment, which is capable of identifying time
windows of interest across discrete sensor events. To overcome this limitation,
we propose a novel graph-guided neural network approach that performs activity
recognition by learning explicit co-firing relationships between sensors. We
accomplish this by learning a more expressive graph structure representing the
sensor network in a smart home, in a data-driven manner. Our approach maps
discrete input sensor measurements to a feature space through the application
of attention mechanisms and hierarchical pooling of node embeddings. We
demonstrate the effectiveness of our proposed approach by conducting several
experiments on CASAS datasets, showing that the resulting graph-guided neural
network outperforms the state-of-the-art method for HAR in smart homes across
multiple datasets and by large margins. These results are promising because
they push HAR for smart homes closer to real-world applications.
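The abstract describes the architecture only at a high level: a data-driven graph over the home's sensors, attention that maps discrete sensor events into a feature space, and hierarchical pooling of node embeddings. The PyTorch snippet below is a minimal sketch of that general idea, not the authors' model; the layer sizes, the single attention layer, the softmax-normalised learned adjacency, and the mean pooling used in place of hierarchical pooling are all assumptions.

```python
# Minimal sketch of a graph-guided classifier over discrete sensor events.
# Hyperparameters, the single attention layer, and the mean pooling (standing
# in for hierarchical pooling) are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class GraphGuidedHAR(nn.Module):
    def __init__(self, num_sensors, embed_dim=64, num_classes=10):
        super().__init__()
        self.node_embed = nn.Embedding(num_sensors, embed_dim)
        # learnable logits for a soft sensor-to-sensor "co-firing" adjacency
        self.adj_logits = nn.Parameter(torch.zeros(num_sensors, num_sensors))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, sensor_ids, values):
        # sensor_ids: (B, T) indices of firing sensors; values: (B, T) readings
        events = self.node_embed(sensor_ids) * values.unsqueeze(-1)   # (B, T, D)
        # one round of message passing with the learned, normalised adjacency
        adj = torch.softmax(self.adj_logits, dim=-1)                  # (S, S)
        nodes = adj @ self.node_embed.weight                          # (S, D)
        nodes = nodes.expand(events.size(0), -1, -1)                  # (B, S, D)
        # attention maps the raw event stream into the node feature space
        h, _ = self.attn(events, nodes, nodes)                        # (B, T, D)
        # stand-in for hierarchical pooling: mean over the event window
        return self.classifier(h.mean(dim=1))                         # (B, classes)


model = GraphGuidedHAR(num_sensors=40, num_classes=12)
ids = torch.randint(0, 40, (8, 50))    # a window of 50 discrete sensor events
vals = torch.rand(8, 50)               # their (normalised) measurements
logits = model(ids, vals)              # (8, 12) activity scores
```

In this sketch the learned adjacency plays the role of the explicit co-firing relationships mentioned in the abstract: it is trained end to end with the classifier in a data-driven manner rather than fixed from the floor plan.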
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- Layout Agnostic Human Activity Recognition in Smart Homes through Textual Descriptions Of Sensor Triggers (TDOST) [0.22354214294493352]
We develop a layout-agnostic modeling approach for human activity recognition (HAR) systems in smart homes.
We generate Textual Descriptions Of Sensor Triggers (TDOST) that encapsulate the surrounding trigger conditions.
We demonstrate the effectiveness of TDOST-based models in unseen smart homes through experiments on benchmark CASAS datasets (an illustrative trigger-to-text sketch appears after this list).
arXiv Detail & Related papers (2024-05-20T20:37:44Z)
- Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Wearable sensor-based Human Activity Recognition (HAR) has not yet fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
arXiv Detail & Related papers (2024-04-25T10:13:18Z)
- HGFF: A Deep Reinforcement Learning Framework for Lifetime Maximization in Wireless Sensor Networks [5.4894758104028245]
We propose a new framework that combines a heterogeneous graph neural network with deep reinforcement learning to automatically construct the movement path of the sink.
We design ten types of static and dynamic maps to simulate different wireless sensor networks in the real world.
Our approach consistently outperforms the existing methods on all types of maps.
arXiv Detail & Related papers (2024-04-11T13:09:11Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Deep Transfer Learning with Graph Neural Network for Sensor-Based Human Activity Recognition [12.51766929898714]
We devise a graph-inspired deep learning approach for sensor-based HAR tasks.
We present a graph convolutional neural network with a multi-layer residual structure (ResGCNN) for sensor-based HAR.
Experimental results on the PAMAP2 and mHealth data sets demonstrate that our ResGCNN is effective at capturing the characteristics of actions.
arXiv Detail & Related papers (2022-03-14T07:57:32Z)
- Feeling of Presence Maximization: mmWave-Enabled Virtual Reality Meets Deep Reinforcement Learning [76.46530937296066]
This paper investigates the problem of providing ultra-reliable and energy-efficient virtual reality (VR) experiences for wireless mobile users.
To ensure reliable ultra-high-definition (UHD) video frame delivery to mobile users, a coordinated multipoint (CoMP) transmission technique and millimeter wave (mmWave) communications are exploited.
arXiv Detail & Related papers (2021-06-03T08:35:10Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Online Guest Detection in a Smart Home using Pervasive Sensors and Probabilistic Reasoning [3.538944147459101]
This paper presents a probabilistic approach able to estimate the number of persons in the environment at each time step.
Using both simulated and real data, our method has been tested and validated on two smart homes of different sizes and configurations.
arXiv Detail & Related papers (2020-03-13T15:41:15Z)
- Temporal Pulses Driven Spiking Neural Network for Fast Object Recognition in Autonomous Driving [65.36115045035903]
We propose an approach that addresses the object recognition problem directly from raw temporal pulses using a spiking neural network (SNN).
Being evaluated on various datasets, our proposed method has shown comparable performance as the state-of-the-art methods, while achieving remarkable time efficiency.
arXiv Detail & Related papers (2020-01-24T22:58:55Z)
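The TDOST entry above (Layout Agnostic Human Activity Recognition in Smart Homes through Textual Descriptions Of Sensor Triggers) rests on one concrete mechanism: each discrete sensor trigger is rendered as a natural-language description so that the downstream model no longer depends on a specific home's sensor layout. The snippet below is only an illustration of that idea; the template wording, field names, and example values are invented for the sketch and are not taken from the TDOST paper.

```python
# Illustrative only: a made-up template for turning a discrete sensor trigger
# into a layout-agnostic textual description, in the spirit of TDOST.
from dataclasses import dataclass


@dataclass
class SensorTrigger:
    sensor_id: str      # e.g. "M012"
    sensor_type: str    # e.g. "motion", "door", "temperature"
    location: str       # e.g. "kitchen"
    state: str          # e.g. "ON", "OPEN", "21.5"
    timestamp: str      # e.g. "2010-11-04 08:15:32"


def describe(trigger: SensorTrigger) -> str:
    # The wording of this template is an assumption, not the paper's.
    return (f"The {trigger.sensor_type} sensor in the {trigger.location} "
            f"reported {trigger.state} at {trigger.timestamp}.")


print(describe(SensorTrigger("M012", "motion", "kitchen", "ON",
                             "2010-11-04 08:15:32")))
# -> The motion sensor in the kitchen reported ON at 2010-11-04 08:15:32.
```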