Tutorial on Deep Learning for Human Activity Recognition
- URL: http://arxiv.org/abs/2110.06663v1
- Date: Wed, 13 Oct 2021 12:01:02 GMT
- Title: Tutorial on Deep Learning for Human Activity Recognition
- Authors: Marius Bock, Alexander Hoelzemann, Michael Moeller, Kristof Van
Laerhoven
- Abstract summary: This tutorial was first held at the 2021 ACM International Symposium on Wearable Computers (ISWC'21).
It provides a hands-on and interactive walk-through of the most important steps in the data pipeline for the deep learning of human activities.
- Score: 70.94062293989832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Activity recognition systems that are capable of estimating human activities
from wearable inertial sensors have come a long way in the past decades. Not
only have state-of-the-art methods moved away from feature engineering and
fully adopted end-to-end deep learning approaches, but best practices for
setting up experiments, preparing datasets, and validating activity
recognition approaches have also evolved. This tutorial was first held at the
2021 ACM International Symposium on Wearable Computers (ISWC'21) and the
International Joint Conference on Pervasive and Ubiquitous Computing
(UbiComp'21). After a short introduction to the research field of activity
recognition, the tutorial provides a hands-on and interactive walk-through of
the most important steps in the data pipeline for the deep learning of human
activities. All presentation slides shown during the tutorial, which also
contain links to all code exercises, as well as a link to the tutorial's
GitHub page, can be found at: https://mariusbock.github.io/dl-for-har
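The tutorial's actual code exercises are linked from the page above. As a rough illustration of the kind of pipeline it walks through, the following is a minimal, self-contained sketch (not the tutorial's code) of sliding-window segmentation of inertial data followed by a small convolutional-recurrent classifier in PyTorch; the window length, stride, channel count, and number of classes are illustrative assumptions.

# Minimal sketch (not the tutorial's code): sliding-window segmentation of
# inertial data plus a small conv + LSTM classifier. Window size, channel
# count, and number of classes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def sliding_windows(data, labels, window=100, stride=50):
    """Segment a (T, C) sensor stream into overlapping windows.

    Each window is labeled with the majority label of its samples.
    """
    X, y = [], []
    for start in range(0, len(data) - window + 1, stride):
        seg = data[start:start + window]
        lab = np.bincount(labels[start:start + window]).argmax()
        X.append(seg)
        y.append(lab)
    return np.stack(X), np.array(y)

class ConvLSTMClassifier(nn.Module):
    """Conv layers extract per-timestep features; an LSTM models their order."""
    def __init__(self, n_channels=3, n_classes=6, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, window, channels)
        x = self.conv(x.transpose(1, 2))       # -> (batch, 64, window)
        out, _ = self.lstm(x.transpose(1, 2))  # -> (batch, window, hidden)
        return self.head(out[:, -1])           # classify from the last timestep

# Toy usage with random data standing in for accelerometer recordings.
data = np.random.randn(10_000, 3).astype(np.float32)
labels = np.random.randint(0, 6, size=10_000)
X, y = sliding_windows(data, labels)
model = ConvLSTMClassifier()
logits = model(torch.from_numpy(X))
print(logits.shape)  # (n_windows, n_classes)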
Related papers
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction.
The experimental results demonstrate that MPI achieves remarkable improvements of 10% to 64% over the previous state-of-the-art on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Any-point Trajectory Modeling for Policy Learning [64.23861308947852]
We introduce Any-point Trajectory Modeling (ATM) to predict future trajectories of arbitrary points within a video frame.
ATM outperforms strong video pre-training baselines by 80% on average.
We show effective transfer learning of manipulation skills from human videos and videos from a different robot morphology.
arXiv Detail & Related papers (2023-12-28T23:34:43Z)
- Early Action Recognition with Action Prototypes [62.826125870298306]
We propose a novel model that learns a prototypical representation of the full action for each class.
We decompose the video into short clips, where a visual encoder extracts features from each clip independently.
A decoder then aggregates features from all the clips in an online fashion for the final class prediction.
arXiv Detail & Related papers (2023-12-11T18:31:13Z)
- An Empirical Study and Analysis of Learning Generalizable Manipulation Skill in the SAPIEN Simulator [12.677245428522834]
This paper provides a brief overview of our submission to the no-interaction track of the SAPIEN ManiSkill Challenge 2021.
Our approach follows an end-to-end pipeline which mainly consists of two steps.
We adopt these features to predict the action score of the robot simulators through a deep and wide transformer-based network.
arXiv Detail & Related papers (2022-08-31T05:45:55Z)
- Classifying Human Activities using Machine Learning and Deep Learning Techniques [0.0]
Human Activity Recognition (HAR) describes a machine's ability to recognize human actions.
The challenge in HAR is to overcome the difficulty of separating human activities based on the given data.
Deep learning techniques such as Long Short-Term Memory (LSTM), Bi-Directional LSTM, Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU) classifiers are trained.
Experimental results showed that the Linear Support Vector classifier in machine learning and the Gated Recurrent Unit in deep learning provided better accuracy for human activity recognition (a minimal GRU sketch follows this entry).
arXiv Detail & Related papers (2022-05-19T05:20:04Z)
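As a rough illustration of the recurrent classifiers named in the entry above, the following is a minimal, hypothetical GRU classifier for windowed accelerometer data in PyTorch; it is not that paper's implementation, and every hyperparameter (window length, hidden size, number of classes) is an assumption.

# Minimal sketch (not the paper's code): a GRU classifier over windowed
# accelerometer data. Input and output sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, n_channels=3, hidden=64, n_classes=6):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, window, channels)
        out, _ = self.gru(x)          # per-timestep hidden states
        return self.head(out[:, -1])  # predict from the final state

model = GRUClassifier()
windows = torch.randn(8, 128, 3)      # 8 windows of 128 samples, 3 axes
print(model(windows).shape)           # torch.Size([8, 6])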
- An Empirical Study of End-to-End Temporal Action Detection [82.64373812690127]
Temporal action detection (TAD) is an important yet challenging task in video understanding.
Rather than end-to-end learning, most existing methods adopt a head-only learning paradigm.
We validate the advantage of end-to-end learning over head-only learning and observe up to 11% performance improvement.
arXiv Detail & Related papers (2022-04-06T16:46:30Z)
- Continual Learning from Demonstration of Robotics Skills [5.573543601558405]
Methods for teaching motion skills to robots focus on training for a single skill at a time.
We propose an approach for continual learning from demonstration using hypernetworks and neural ordinary differential equation solvers.
arXiv Detail & Related papers (2022-02-14T16:26:52Z)
- Self-supervised Human Activity Recognition by Learning to Predict Cross-Dimensional Motion [16.457778420360537]
We propose the use of self-supervised learning for human activity recognition with smartphone accelerometer data.
First, the representations of unlabeled input signals are learned by training a deep convolutional neural network to predict a segment of accelerometer values.
Then, for activity classification, we add a number of fully connected layers to the end of the frozen network and train the added layers on labeled accelerometer signals to classify human activities (see the sketch after this entry).
arXiv Detail & Related papers (2020-10-21T02:14:31Z)
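The self-supervised entry above describes pretraining a convolutional encoder on unlabeled accelerometer data and then training only newly added fully connected layers on labeled data. The following is a schematic PyTorch sketch of that freeze-then-probe recipe; the pretext target, shapes, and layer sizes are placeholders, not the paper's actual design.

# Schematic sketch (not the paper's code) of the freeze-then-probe recipe:
# a convolutional encoder is pretrained on an unlabeled pretext task, then
# frozen, and only added fully connected layers are trained on labeled
# windows. The pretext target and all shapes are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # 1D conv encoder over (B, C, T)
    nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),     # -> (B, 64)
)

# 1) Self-supervised pretraining: regress a simple transformation of the raw
#    signal (a stand-in pretext target) from the encoder's features.
pretext_head = nn.Linear(64, 3)
pre_opt = torch.optim.Adam(list(encoder.parameters()) + list(pretext_head.parameters()))
x_unlabeled = torch.randn(32, 3, 128)          # unlabeled windows
target = x_unlabeled.mean(dim=2)               # illustrative pretext target
loss = nn.functional.mse_loss(pretext_head(encoder(x_unlabeled)), target)
pre_opt.zero_grad(); loss.backward(); pre_opt.step()

# 2) Freeze the encoder and train only the added classification layers.
for p in encoder.parameters():
    p.requires_grad = False
classifier = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 6))
clf_opt = torch.optim.Adam(classifier.parameters())
x_labeled = torch.randn(16, 3, 128)            # labeled windows
y = torch.randint(0, 6, (16,))                 # activity labels
logits = classifier(encoder(x_labeled))
loss = nn.functional.cross_entropy(logits, y)
clf_opt.zero_grad(); loss.backward(); clf_opt.step()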
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.