Attention-Based Deep Learning Framework for Human Activity Recognition
with User Adaptation
- URL: http://arxiv.org/abs/2006.03820v2
- Date: Sat, 27 Mar 2021 14:41:03 GMT
- Title: Attention-Based Deep Learning Framework for Human Activity Recognition
with User Adaptation
- Authors: Davide Buffelli, Fabio Vandin
- Abstract summary: Sensor-based human activity recognition (HAR) requires predicting the action of a person from sensor-generated time series data.
We propose a novel deep learning framework, algname, based on a purely attention-based mechanism.
We show that our proposed attention-based architecture is considerably more powerful than previous approaches.
- Score: 5.629161809575013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sensor-based human activity recognition (HAR) requires predicting
the action of a person from sensor-generated time series data. HAR has attracted major
interest in the past few years, thanks to the large number of applications
enabled by modern ubiquitous computing devices. While several techniques based
on hand-crafted feature engineering have been proposed, the current
state-of-the-art is represented by deep learning architectures that
automatically obtain high level representations and that use recurrent neural
networks (RNNs) to extract temporal dependencies in the input. RNNs have
several limitations, in particular in dealing with long-term dependencies. We
propose a novel deep learning framework, \algname, based on a purely
attention-based mechanism, that overcomes the limitations of the
state-of-the-art. We show that our proposed attention-based architecture is
considerably more powerful than previous approaches, with an average increment
of more than $7\%$ in F1 score over the previous best performing model.
Furthermore, we consider the problem of personalizing HAR deep learning models,
which is of great importance in several applications. We propose a simple and
effective transfer-learning based strategy to adapt a model to a specific user,
providing an average increment of $6\%$ on the F1 score on the predictions for
that user. Our extensive experimental evaluation proves the significantly
superior capabilities of our proposed framework over the current
state-of-the-art and the effectiveness of our user adaptation technique.
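The abstract describes a purely attention-based architecture for sensor time series but includes no code here. As a minimal illustrative sketch (not the paper's \algname model: all dimensions, weights, and the mean-pooling classifier below are assumptions), a single scaled dot-product self-attention layer over a sensor window, followed by temporal pooling and a linear classifier, could look like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a (T, d) sensor window."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # (T, T) pairwise timestep scores
    return softmax(scores, axis=-1) @ V      # (T, d_att) contextualized timesteps

rng = np.random.default_rng(0)
T, d, d_att, n_classes = 50, 6, 16, 5        # e.g. a 6-axis IMU window, 5 activities
X = rng.standard_normal((T, d))              # one raw sensor window
Wq, Wk, Wv = (rng.standard_normal((d, d_att)) for _ in range(3))
W_out = rng.standard_normal((d_att, n_classes))

H = self_attention(X, Wq, Wk, Wv)            # attention replaces the RNN recurrence
logits = H.mean(axis=0) @ W_out              # temporal pooling + linear classifier
probs = softmax(logits)
pred = int(np.argmax(probs))
```

Unlike an RNN, every timestep attends directly to every other timestep, so long-range dependencies do not have to survive a recurrent state; this is the general property the abstract appeals to.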
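The user-adaptation strategy is described only at a high level. One common transfer-learning reading, which is an assumption rather than the paper's exact procedure, is to freeze the pretrained feature extractor and fine-tune only the classification head on the target user's few labeled windows:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for a frozen pretrained feature extractor: maps a raw
# (T, 6) sensor window to a fixed-length embedding. Hypothetical weights.
W_frozen = rng.standard_normal((6, 16))
def extract(window):
    return np.tanh(window @ W_frozen).mean(axis=0)   # (16,) embedding

# A handful of labeled windows from the target user (synthetic here)
n_user, n_classes = 20, 5
windows = rng.standard_normal((n_user, 50, 6))
labels = rng.integers(0, n_classes, n_user)
feats = np.stack([extract(w) for w in windows])      # (20, 16), extractor frozen

# Fine-tune only the classification head (softmax regression on embeddings)
W_head = np.zeros((16, n_classes))
onehot = np.eye(n_classes)[labels]
for _ in range(200):
    probs = softmax(feats @ W_head)
    grad = feats.T @ (probs - onehot) / n_user       # cross-entropy gradient
    W_head -= 0.5 * grad                             # plain gradient step

acc = float((np.argmax(feats @ W_head, axis=1) == labels).mean())
```

Because only the small head is updated, a few user-specific examples suffice and the shared representation learned from many users is preserved, which is the usual motivation for this kind of personalization.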
Related papers
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for such data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z) - A Close Look into Human Activity Recognition Models using Deep Learning [0.0]
This paper surveys some state-of-the-art human activity recognition models based on deep learning architecture.
The analysis outlines how the models are implemented to maximize their effectiveness, along with some of the potential limitations they face.
arXiv Detail & Related papers (2022-04-26T19:43:21Z) - A New Clustering-Based Technique for the Acceleration of Deep
Convolutional Networks [2.7393821783237184]
Model Compression and Acceleration (MCA) techniques are used to transform large pre-trained networks into smaller models.
We propose a clustering-based approach that is able to increase the number of employed centroids/representatives.
This is achieved by imposing a special structure to the employed representatives, which is enabled by the particularities of the problem at hand.
arXiv Detail & Related papers (2021-07-19T18:22:07Z) - Transformer-Based Behavioral Representation Learning Enables Transfer
Learning for Mobile Sensing in Small Datasets [4.276883061502341]
We provide a neural architecture framework for mobile sensing data that can learn generalizable feature representations from time series.
This architecture combines benefits from CNN and Transformer architectures to enable better prediction performance.
arXiv Detail & Related papers (2021-07-09T22:26:50Z) - Action Transformer: A Self-Attention Model for Short-Time Human Action
Recognition [5.123810256000945]
Action Transformer (AcT) is a self-attentional architecture that consistently outperforms more elaborated networks that mix convolutional, recurrent, and attentive layers.
AcT exploits 2D pose representations over small temporal windows, providing a low latency solution for accurate and effective real-time performance.
arXiv Detail & Related papers (2021-07-01T16:53:16Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Continual Learning for Natural Language Generation in Task-oriented
Dialog Systems [72.92029584113676]
Natural language generation (NLG) is an essential component of task-oriented dialog systems.
We study NLG in a "continual learning" setting to expand its knowledge to new domains or functionalities incrementally.
The major challenge towards this goal is catastrophic forgetting, meaning that a continually trained model tends to forget the knowledge it has learned before.
arXiv Detail & Related papers (2020-10-02T10:32:29Z) - On the impact of selected modern deep-learning techniques to the
performance and celerity of classification models in an experimental
high-energy physics use case [0.0]
Deep learning techniques are tested in the context of a classification problem encountered in the domain of high-energy physics.
The advantages are evaluated in terms of both performance metrics and the time required to train and apply the resulting models.
A new wrapper library for PyTorch called LUMIN is presented, which incorporates all of the techniques studied.
arXiv Detail & Related papers (2020-02-03T12:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.