Advancing Location-Invariant and Device-Agnostic Motion Activity
Recognition on Wearable Devices
- URL: http://arxiv.org/abs/2402.03714v1
- Date: Tue, 6 Feb 2024 05:10:00 GMT
- Title: Advancing Location-Invariant and Device-Agnostic Motion Activity
Recognition on Wearable Devices
- Authors: Rebecca Adaimi, Abdelkareem Bedri, Jun Gong, Richard Kang, Joanna
Arreaza-Taylor, Gerri-Michelle Pascual, Michael Ralph, and Gierad Laput
- Abstract summary: We conduct a comprehensive evaluation of the generalizability of motion models across sensor locations.
Our analysis highlights this challenge and identifies key on-body locations for building location-invariant models.
We present deployable on-device motion models reaching 91.41% frame-level F1-score from a single model irrespective of sensor placements.
- Score: 6.557453686071467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wearable sensors have permeated people's lives, ushering in impactful
applications in interactive systems and activity recognition. However,
practitioners face significant obstacles when dealing with sensing
heterogeneities, requiring custom models for different platforms. In this
paper, we conduct a comprehensive evaluation of the generalizability of motion
models across sensor locations. Our analysis highlights this challenge and
identifies key on-body locations for building location-invariant models that
can be integrated on any device. For this, we introduce the largest
multi-location activity dataset (N=50, 200 cumulative hours), which we make
publicly available. We also present deployable on-device motion models reaching
91.41% frame-level F1-score from a single model irrespective of sensor
placements. Lastly, we investigate cross-location data synthesis, aiming to
alleviate the laborious data collection tasks by synthesizing data in one
location given data from another. These contributions advance our vision of
low-barrier, location-invariant activity recognition systems, catalyzing
research in HCI and ubiquitous computing.
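The 91.41% figure above refers to a frame-level F1-score, i.e., the model is scored on every individual sensor frame rather than per window or per recording. Below is a minimal sketch of how such a metric is typically computed; the averaging scheme (macro is assumed here) and the `frame_level_f1` helper name are illustrative, not taken from the paper.

```python
# Hedged sketch: frame-level F1 for activity recognition.
# Assumes one ground-truth label and one prediction per sensor frame;
# macro averaging is an assumption, not stated in the abstract.
import numpy as np
from sklearn.metrics import f1_score

def frame_level_f1(frame_labels, frame_preds, average="macro"):
    """Score per-frame activity predictions against per-frame ground truth."""
    return f1_score(frame_labels, frame_preds, average=average)

# Toy usage: a 3-class activity stream with one label per sensor frame.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 0])
y_pred = np.array([0, 0, 1, 2, 1, 2, 2, 0])
print(f"Frame-level F1: {frame_level_f1(y_true, y_pred):.4f}")
```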
Related papers
- Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Human Activity Recognition (HAR) has not fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
arXiv Detail & Related papers (2024-04-25T10:13:18Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and the data privacy regulations governing distributed sensing are considered the major challenges hindering wide application of wireless sensing in large-area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z)
- JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds [79.00975648564483]
Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios.
This dataset provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective.
The objective is to predict the future positions of agents relative to the robot using raw sensory input data.
arXiv Detail & Related papers (2023-11-05T18:59:31Z)
- Learning to Detect Slip through Tactile Estimation of the Contact Force Field and its Entropy [6.739132519488627]
We introduce a physics-informed, data-driven approach to detect slip continuously in real time.
We employ the GelSight Mini, an optical tactile sensor, attached to custom-designed grippers to gather tactile data.
Our results show that the best classification algorithm achieves a high average accuracy of 95.61%.
arXiv Detail & Related papers (2023-03-02T03:16:21Z)
- DAPPER: Label-Free Performance Estimation after Personalization for Heterogeneous Mobile Sensing [95.18236298557721]
We present DAPPER (Domain AdaPtation Performance EstimatoR) that estimates the adaptation performance in a target domain with unlabeled target data.
Our evaluation with four real-world sensing datasets compared against six baselines shows that DAPPER outperforms the state-of-the-art baseline by 39.8% in estimation accuracy.
arXiv Detail & Related papers (2021-11-22T08:49:33Z)
- SALIENCE: An Unsupervised User Adaptation Model for Multiple Wearable Sensors Based Human Activity Recognition [9.358282765566807]
We propose SALIENCE, an unsupervised user adaptation model for human activity recognition based on multiple wearable sensors.
It aligns the data of each sensor separately to achieve local alignment, while uniformly aligning the data of all sensors to ensure global alignment.
Experiments are conducted on two public WHAR datasets, and the experimental results show that our model can yield a competitive performance.
arXiv Detail & Related papers (2021-08-17T13:45:32Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- Invariant Feature Learning for Sensor-based Human Activity Recognition [11.334750079923428]
We present an invariant feature learning framework (IFLF) that extracts common information shared across subjects and devices.
Experiments demonstrated that IFLF is effective in handling both subject and device diversion across popular open datasets and an in-house dataset.
arXiv Detail & Related papers (2020-12-14T21:56:17Z)
- SensiX: A Platform for Collaborative Machine Learning on the Edge [69.1412199244903]
We present SensiX, a personal edge platform that stays between sensor data and sensing models.
We demonstrate its efficacy in developing motion and audio-based multi-device sensing systems.
Our evaluation shows that SensiX offers a 7-13% increase in overall accuracy and up to 30% increase across different environment dynamics at the expense of 3mW power overhead.
arXiv Detail & Related papers (2020-12-04T23:06:56Z)
- Human Activity Recognition from Wearable Sensor Data Using Self-Attention [2.9023633922848586]
We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves significant performance improvements over recent state-of-the-art models in both benchmark test-subject and leave-one-subject-out evaluations.
arXiv Detail & Related papers (2020-03-17T14:16:57Z)
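To make the self-attention entry above more concrete, the sketch below shows a minimal self-attention classifier over windows of body-worn sensor data. It is not the authors' implementation; the channel count, window length, and model sizes are illustrative assumptions.

```python
# Hedged sketch: a small Transformer-encoder classifier for windowed
# wearable-sensor data (not the implementation from the paper above).
import torch
import torch.nn as nn

class SelfAttentionHAR(nn.Module):
    def __init__(self, n_channels=9, n_classes=12, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)   # per-timestep projection
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)     # window-level classifier

    def forward(self, x):                             # x: (batch, time, channels)
        h = self.encoder(self.embed(x))               # self-attention over time
        return self.head(h.mean(dim=1))               # mean-pool, then classify

# Toy usage: a batch of 8 windows, 128 timesteps, 9 IMU channels.
logits = SelfAttentionHAR()(torch.randn(8, 128, 9))
print(logits.shape)  # torch.Size([8, 12])
```
A pooled Transformer encoder like this is one common way to realize self-attention over body-worn sensor data; positional encodings and per-dataset preprocessing are omitted for brevity.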