Tutorial on Deep Learning for Human Activity Recognition
- URL: http://arxiv.org/abs/2110.06663v1
- Date: Wed, 13 Oct 2021 12:01:02 GMT
- Title: Tutorial on Deep Learning for Human Activity Recognition
- Authors: Marius Bock, Alexander Hoelzemann, Michael Moeller, Kristof Van Laerhoven
- Abstract summary: This tutorial was first held at the 2021 ACM International Symposium on Wearable Computers (ISWC'21).
It provides a hands-on and interactive walk-through of the most important steps in the data pipeline for the deep learning of human activities.
- Score: 70.94062293989832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Activity recognition systems that are capable of estimating human activities
from wearable inertial sensors have come a long way in the past decades. Not
only have state-of-the-art methods moved away from feature engineering and
fully adopted end-to-end deep learning approaches; best practices for setting
up experiments, preparing datasets, and validating activity recognition
approaches have similarly evolved. This tutorial was first held at the 2021 ACM
International Symposium on Wearable Computers (ISWC'21) and International Joint
Conference on Pervasive and Ubiquitous Computing (UbiComp'21). The tutorial,
after a short introduction to the research field of activity recognition,
provides a hands-on and interactive walk-through of the most important steps in
the data pipeline for the deep learning of human activities. All presentation
slides shown during the tutorial, which also contain links to all code
exercises, as well as a link to the tutorial's GitHub page, can be found at:
https://mariusbock.github.io/dl-for-har
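The tutorial's slides and code exercises are hosted at the URL above. As a rough illustration of the kind of pipeline such a tutorial walks through (a minimal sketch, not the tutorial's actual code), the following Python/PyTorch snippet segments a continuous inertial recording into sliding windows and classifies each window with a small 1D convolutional network; the arrays data (shape: timesteps x channels) and labels (one integer class label per timestep) are hypothetical placeholders.

# Minimal sketch of a typical HAR pipeline (illustrative only, not the
# tutorial's own code): sliding-window segmentation followed by a small 1D CNN.
import numpy as np
import torch
import torch.nn as nn

def sliding_windows(data, labels, window=100, stride=50):
    """Segment a continuous sensor stream into fixed-length windows."""
    xs, ys = [], []
    for start in range(0, len(data) - window + 1, stride):
        xs.append(data[start:start + window])
        # Assign each window the majority label of its samples.
        ys.append(np.bincount(labels[start:start + window]).argmax())
    return np.stack(xs), np.array(ys)

class SimpleConvNet(nn.Module):
    """Small 1D CNN over the time axis of each sensor window."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):          # x: (batch, window, channels)
        x = x.transpose(1, 2)      # Conv1d expects (batch, channels, window)
        return self.classifier(self.features(x).squeeze(-1))

# Hypothetical usage:
#   X, y = sliding_windows(data, labels)
#   model = SimpleConvNet(channels=X.shape[2], num_classes=int(y.max()) + 1)
#   logits = model(torch.from_numpy(X).float())

Labeling each window by the majority of its per-sample labels is one common convention; the tutorial's materials may use a different windowing or labeling scheme.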
Related papers
- Hands-On Tutorial: Labeling with LLM and Human-in-the-Loop [7.925650087629884]
This tutorial is designed for NLP practitioners from both research and industry backgrounds.
We will present the basics of each strategy, highlight their benefits and limitations, and discuss in detail real-life case studies.
The tutorial includes a hands-on workshop, where attendees will be guided in implementing a hybrid annotation setup.
arXiv Detail & Related papers (2024-11-07T11:51:14Z)
- KOI: Accelerating Online Imitation Learning via Hybrid Key-state Guidance [51.09834120088799]
We introduce the hybrid Key-state guided Online Imitation (KOI) learning method.
We use visual-language models to extract semantic key states from expert trajectories, indicating the objectives of "what to do".
Within the intervals between semantic key states, optical flow is employed to capture motion key states to understand the mechanisms of "how to do".
arXiv Detail & Related papers (2024-08-06T02:53:55Z)
- Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning [70.64617500380287]
Continual learning allows models to learn from new data while retaining previously learned knowledge.
The semantic knowledge available in the label information of the images offers important semantic information that can be related to previously acquired knowledge of semantic classes.
We propose integrating semantic guidance within and across tasks by capturing semantic similarity using text embeddings.
arXiv Detail & Related papers (2024-08-02T07:51:44Z)
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction (MPI).
The experimental results demonstrate that MPI achieves remarkable improvements of 10% to 64% over the previous state-of-the-art on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Any-point Trajectory Modeling for Policy Learning [64.23861308947852]
We introduce Any-point Trajectory Modeling (ATM) to predict future trajectories of arbitrary points within a video frame.
ATM outperforms strong video pre-training baselines by 80% on average.
We show effective transfer learning of manipulation skills from human videos and videos from a different robot morphology.
arXiv Detail & Related papers (2023-12-28T23:34:43Z)
- An Empirical Study and Analysis of Learning Generalizable Manipulation Skill in the SAPIEN Simulator [12.677245428522834]
This paper provides a brief overview of our submission to the no-interaction track of the SAPIEN ManiSkill Challenge 2021.
Our approach follows an end-to-end pipeline which mainly consists of two steps.
We adopt these features to predict the action score of the robot simulators through a deep and wide transformer-based network.
arXiv Detail & Related papers (2022-08-31T05:45:55Z)
- Continual Learning from Demonstration of Robotics Skills [5.573543601558405]
Methods for teaching motion skills to robots focus on training for a single skill at a time.
We propose an approach for continual learning from demonstration using hypernetworks and neural ordinary differential equation solvers.
arXiv Detail & Related papers (2022-02-14T16:26:52Z)
- Self-supervised Human Activity Recognition by Learning to Predict Cross-Dimensional Motion [16.457778420360537]
We propose the use of self-supervised learning for human activity recognition with smartphone accelerometer data.
First, the representations of unlabeled input signals are learned by training a deep convolutional neural network to predict a segment of accelerometer values.
For the downstream classification task, we add a number of fully connected layers to the end of the frozen network and train the added layers with labeled accelerometer signals to learn to classify human activities (a minimal sketch of this frozen-backbone step is given after this list).
arXiv Detail & Related papers (2020-10-21T02:14:31Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate our approach on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
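For the self-supervised activity recognition entry above, the following is a minimal, illustrative sketch (not the cited paper's code) of the frozen-backbone step it describes: the convolutional encoder pretrained on the self-supervised task is frozen, and newly added fully connected layers are trained on labeled accelerometer windows. The names pretrained_encoder, feat_dim, and num_classes are hypothetical placeholders.

# Illustrative sketch of frozen-backbone fine-tuning for activity recognition
# (placeholders only; not the cited paper's actual code).
import torch
import torch.nn as nn

def build_classifier(pretrained_encoder, feat_dim, num_classes):
    # Freeze the encoder learned with the self-supervised pretext task.
    for param in pretrained_encoder.parameters():
        param.requires_grad = False
    pretrained_encoder.eval()
    # Newly added fully connected layers; only these are trained on labeled data.
    head = nn.Sequential(
        nn.Linear(feat_dim, 128), nn.ReLU(),
        nn.Linear(128, num_classes),
    )
    model = nn.Sequential(pretrained_encoder, head)
    # Optimize only the parameters of the added layers.
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    return model, optimizer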
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.