Real-time Human Activity Recognition Using Conditionally Parametrized
Convolutions on Mobile and Wearable Devices
- URL: http://arxiv.org/abs/2006.03259v2
- Date: Sat, 13 Jun 2020 07:55:34 GMT
- Title: Real-time Human Activity Recognition Using Conditionally Parametrized
Convolutions on Mobile and Wearable Devices
- Authors: Xin Cheng, Lei Zhang, Yin Tang, Yue Liu, Hao Wu and Jun He
- Abstract summary: Deep convolutional neural networks (CNNs) have achieved state-of-the-art performance on various HAR datasets.
A high number of operations in deep learning increases computational cost and is unsuitable for real-time HAR using mobile and wearable sensors.
We propose an efficient CNN using conditionally parametrized convolution for real-time HAR on mobile and wearable devices.
- Score: 14.260179062012512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning has represented an important research trend in human
activity recognition (HAR). In particular, deep convolutional neural networks
(CNNs) have achieved state-of-the-art performance on various HAR datasets. For
deep learning, performance improvements rely heavily on increasing model size or
capacity to scale to ever-larger datasets, which inevitably increases the number
of operations. A high operation count in deep learning raises computational cost
and is unsuitable for real-time HAR using mobile and wearable sensors. Although
shallow learning techniques are often lightweight, they cannot achieve good
performance. Deep learning methods that balance the trade-off between accuracy
and computational cost are therefore highly needed, yet to our knowledge this
has seldom been researched. In this paper, we propose, for the first time, a
computationally efficient CNN using conditionally parametrized convolution for
real-time HAR on mobile and wearable
devices. We evaluate the proposed method on four public benchmark HAR datasets
consisting of WISDM dataset, PAMAP2 dataset, UNIMIB-SHAR dataset, and
OPPORTUNITY dataset, achieving state-of-the-art accuracy without compromising
computation cost. Various ablation experiments show that such a network with
large capacity is clearly preferable to the baseline while requiring a similar
number of operations. The method can be used as a drop-in replacement
for the existing deep HAR architectures and easily deployed onto mobile and
wearable devices for real-time HAR applications.
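The core idea of a conditionally parametrized convolution is that the kernel is not fixed but computed per input as a routed mixture of expert kernels, so the network gains the capacity of several kernels at roughly the cost of one convolution. The following NumPy sketch illustrates this; the function name `condconv1d`, the routing scheme (global average pooling followed by a sigmoid-activated linear layer), and all shapes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def condconv1d(x, experts, routing_w):
    """Hypothetical sketch of a conditionally parametrized 1-D convolution.

    x         : (in_ch, time)  one window of sensor readings
    experts   : (num_experts, out_ch, in_ch, k)  bank of expert kernels
    routing_w : (in_ch, num_experts)  routing-layer weights (assumed learned)
    """
    # Example-dependent routing: global-average-pool the input over time,
    # then a sigmoid-activated linear layer scores each expert kernel.
    pooled = x.mean(axis=1)                         # (in_ch,)
    scores = sigmoid(pooled @ routing_w)            # (num_experts,)

    # Mix the expert kernels into ONE kernel for this input, so only a
    # single convolution is executed instead of num_experts convolutions.
    kernel = np.tensordot(scores, experts, axes=1)  # (out_ch, in_ch, k)

    # Plain valid-mode 1-D convolution with the mixed kernel.
    out_ch, in_ch, k = kernel.shape
    t_out = x.shape[1] - k + 1
    y = np.empty((out_ch, t_out))
    for o in range(out_ch):
        for t in range(t_out):
            y[o, t] = np.sum(kernel[o] * x[:, t:t + k])
    return y
```

Because the mixing happens in kernel space before the convolution, the per-input compute stays close to that of an ordinary convolution, which is what makes the approach attractive for mobile and wearable deployment.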
Related papers
- Temporal Action Localization for Inertial-based Human Activity Recognition [9.948823510429902]
Temporal Action Localization (TAL) has followed a segment-based prediction approach, localizing activity segments in a timeline of arbitrary length.
This paper is the first to systematically demonstrate the applicability of state-of-the-art TAL models for both offline and near-online Human Activity Recognition (HAR).
We show that by analyzing timelines as a whole, TAL models can produce more coherent segments and achieve higher NULL-class accuracy across all datasets.
arXiv Detail & Related papers (2023-11-27T13:55:21Z) - Efficient Adaptive Human-Object Interaction Detection with
Concept-guided Memory [64.11870454160614]
We propose an efficient Adaptive HOI Detector with Concept-guided Memory (ADA-CM).
ADA-CM has two operating modes. The first mode makes it tunable without learning new parameters in a training-free paradigm.
Our proposed method achieves competitive results with state-of-the-art on the HICO-DET and V-COCO datasets with much less training time.
arXiv Detail & Related papers (2023-09-07T13:10:06Z) - Human Activity Recognition Using Self-Supervised Representations of
Wearable Data [0.0]
Development of accurate algorithms for human activity recognition (HAR) is hindered by the lack of large real-world labeled datasets.
Here we develop a 6-class HAR model with strong performance when evaluated on real-world datasets not seen during training.
arXiv Detail & Related papers (2023-04-26T07:33:54Z) - Lightweight Transformers for Human Activity Recognition on Mobile
Devices [0.5505634045241288]
Human Activity Recognition (HAR) on mobile devices has shown to be achievable with lightweight neural models.
We present Human Activity Recognition Transformer (HART), a lightweight, sensor-wise transformer architecture.
Our experiments on HAR tasks with several publicly available datasets show that HART uses fewer FLoating-point Operations Per Second (FLOPS) and parameters while outperforming current state-of-the-art results.
arXiv Detail & Related papers (2022-09-22T09:42:08Z) - Efficient Deep Clustering of Human Activities and How to Improve
Evaluation [53.08810276824894]
We present a new deep clustering model for human activity recognition (HAR).
In this paper, we highlight several distinct problems with how deep HAR clustering models are evaluated.
We then discuss solutions to these problems, and suggest standard evaluation settings for future deep HAR clustering models.
arXiv Detail & Related papers (2022-09-17T14:12:42Z) - How Much More Data Do I Need? Estimating Requirements for Downstream
Tasks [99.44608160188905]
Given a small training data set and a learning algorithm, how much more data is necessary to reach a target validation or test performance?
Overestimating or underestimating data requirements incurs substantial costs that could be avoided with an adequate budget.
Using our guidelines, practitioners can accurately estimate data requirements of machine learning systems to gain savings in both development time and data acquisition costs.
arXiv Detail & Related papers (2022-07-04T21:16:05Z) - Transformer Networks for Data Augmentation of Human Physical Activity
Recognition [61.303828551910634]
State-of-the-art models like Recurrent Generative Adversarial Networks (RGAN) are used to generate realistic synthetic data.
In this paper, transformer based generative adversarial networks which have global attention on data, are compared on PAMAP2 and Real World Human Activity Recognition data sets with RGAN.
arXiv Detail & Related papers (2021-09-02T16:47:29Z) - Transformer-Based Behavioral Representation Learning Enables Transfer
Learning for Mobile Sensing in Small Datasets [4.276883061502341]
We provide a neural architecture framework for mobile sensing data that can learn generalizable feature representations from time series.
This architecture combines benefits from CNN and Transformer architectures to enable better prediction performance.
arXiv Detail & Related papers (2021-07-09T22:26:50Z) - A Data and Compute Efficient Design for Limited-Resources Deep Learning [68.55415606184]
equivariant neural networks have gained increased interest in the deep learning community.
They have been successfully applied in the medical domain where symmetries in the data can be effectively exploited to build more accurate and robust models.
Mobile, on-device implementations of deep learning solutions have been developed for medical applications.
However, equivariant models are commonly implemented using large and computationally expensive architectures, not suitable to run on mobile devices.
In this work, we design and test an equivariant version of MobileNetV2 and further optimize it with model quantization to enable more efficient inference.
arXiv Detail & Related papers (2020-04-21T00:49:11Z) - A Deep Learning Method for Complex Human Activity Recognition Using
Virtual Wearable Sensors [22.923108537119685]
Sensor-based human activity recognition (HAR) is now a research hotspot in multiple application areas.
We propose a novel method based on deep learning for complex HAR in the real-scene.
The proposed method can surprisingly converge in a few iterations and achieve an accuracy of 91.15% on a real IMU dataset.
arXiv Detail & Related papers (2020-03-04T03:31:23Z) - Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and
On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component for emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.