Robust Activity Recognition for Adaptive Worker-Robot Interaction using
Transfer Learning
- URL: http://arxiv.org/abs/2308.14843v1
- Date: Mon, 28 Aug 2023 19:03:46 GMT
- Title: Robust Activity Recognition for Adaptive Worker-Robot Interaction using
Transfer Learning
- Authors: Farid Shahnavaz, Riley Tavassoli, and Reza Akhavian
- Abstract summary: This paper proposes a transfer learning methodology for activity recognition of construction workers.
The developed algorithm transfers features from a model pre-trained by its original authors and fine-tunes them for the downstream task of activity recognition.
Results indicate that the fine-tuned model can recognize distinct MMH tasks in a robust and adaptive manner.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human activity recognition (HAR) using machine learning has shown tremendous
promise in detecting construction workers' activities. HAR has many
applications in human-robot interaction research to enable robots'
understanding of human counterparts' activities. However, many existing HAR
approaches lack robustness, generalizability, and adaptability. This paper
proposes a transfer learning methodology for activity recognition of
construction workers that requires orders of magnitude less data and compute
time for comparable or better classification accuracy. The developed algorithm
transfers features from a model pre-trained by its original authors and
fine-tunes them for the downstream task of activity recognition in
construction. The model was pre-trained on Kinetics-400, a large-scale
video-based human activity recognition dataset with 400 distinct classes. The
model was fine-tuned and tested using videos captured from manual material
handling (MMH) activities found on YouTube. Results indicate that the
fine-tuned model can recognize distinct MMH tasks in a robust and adaptive
manner, which is crucial for the widespread deployment of collaborative robots
in construction.
Related papers
- Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation [8.940998315746684]
We propose a model-based reinforcement learning (RL) approach for robotic arm end-tasks.
We employ Bayesian neural network models to represent, in a probabilistic way, both the belief and information encoded in the dynamics model during exploration.
Our experiments show the advantages of our Bayesian model-based RL approach, with result quality similar to that of relevant alternatives.
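
A minimal sketch of the probabilistic-dynamics idea above, using Monte Carlo dropout as a simple stand-in for the paper's Bayesian neural network; all sizes and the dropout choice are illustrative assumptions:

```python
# Sketch: a dynamics model whose repeated stochastic forward passes
# yield a predictive mean and uncertainty (MC dropout), a common
# approximation to a Bayesian neural network.
import torch
import torch.nn as nn

class DropoutDynamics(nn.Module):
    def __init__(self, state_dim=7, action_dim=7, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

model = DropoutDynamics()
model.train()  # keep dropout active at prediction time (MC dropout)

state, action = torch.randn(1, 7), torch.randn(1, 7)
samples = torch.stack([model(state, action) for _ in range(50)])
mean, std = samples.mean(0), samples.std(0)  # predictive uncertainty
# High std flags state-action regions worth exploring actively.
```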
arXiv Detail & Related papers (2024-04-02T11:44:37Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our key insight is to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
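
The offline-pretrain-then-online-finetune pattern RoboFuME describes can be illustrated on a toy problem; the stub environment, linear policy, and crude reward-weighted update below are stand-ins for exposition only, not the paper's components:

```python
# Sketch: pre-train a policy on a fixed offline dataset, then keep
# collecting experience autonomously and fine-tune on the new data.
import numpy as np

class TinyEnv:
    """Stub 1-D task (illustrative): drive the state toward zero."""
    def reset(self):
        self.s = np.random.uniform(-1.0, 1.0)
        return self.s
    def step(self, a):
        self.s = float(np.clip(self.s + 0.1 * a, -1.0, 1.0))
        return self.s, 1.0 - abs(self.s)  # next state, reward

def fit_policy(transitions, w=0.0, lr=0.05, epochs=50):
    """Reward-weighted regression toward taken actions (crude offline RL)."""
    for _ in range(epochs):
        for s, a, r in transitions:
            w += lr * r * (a - w * s) * s  # descend r * (a - w*s)^2
    return w

env = TinyEnv()

# 1) Offline phase: pre-train the policy a = w*s on logged random data.
offline = []
for _ in range(200):
    s = env.reset()
    a = float(np.random.uniform(-1, 1))
    _, r = env.step(a)
    offline.append((s, a, r))
w = fit_policy(offline)

# 2) Online phase: run the pre-trained policy and fine-tune on the
# newly collected experience together with the offline data.
online = []
for _ in range(100):
    s = env.reset()
    a = float(w * s + np.random.normal(0, 0.1))
    _, r = env.step(a)
    online.append((s, a, r))
w = fit_policy(offline + online, w=w)
print("fine-tuned policy gain:", w)
```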
arXiv Detail & Related papers (2023-10-23T17:50:08Z)
- Efficient Adaptive Human-Object Interaction Detection with Concept-guided Memory [64.11870454160614]
We propose an efficient Adaptive HOI Detector with Concept-guided Memory (ADA-CM).
ADA-CM has two operating modes; the first adapts the detector without learning any new parameters, in a training-free paradigm.
Our proposed method achieves results competitive with the state of the art on the HICO-DET and V-COCO datasets with much less training time.
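
The training-free mode described above can be illustrated with a simple memory of labeled features queried by similarity; the feature size and HOI labels below are assumptions, and the real method's concept guidance is not reproduced:

```python
# Sketch: cache (feature, label) pairs from a few labeled examples and
# classify new features by cosine similarity, with no parameter updates.
import torch
import torch.nn.functional as F

class ConceptMemory:
    def __init__(self):
        self.keys, self.labels = [], []

    def add(self, feature, label):
        self.keys.append(F.normalize(feature, dim=-1))
        self.labels.append(label)

    def classify(self, feature, k=3):
        keys = torch.stack(self.keys)
        sims = keys @ F.normalize(feature, dim=-1)  # cosine similarity
        topk = sims.topk(min(k, len(self.labels))).indices
        votes = [self.labels[i] for i in topk]
        return max(set(votes), key=votes.count)  # majority vote

mem = ConceptMemory()
for label in ["ride_bicycle", "hold_cup", "ride_bicycle"]:
    mem.add(torch.randn(512), label)  # 512-dim features (assumed)
print(mem.classify(torch.randn(512)))
```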
arXiv Detail & Related papers (2023-09-07T13:10:06Z)
- MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction [34.978017200500005]
We propose Multimodal Interactive Latent Dynamics (MILD) to address the problem of two-party physical Human-Robot Interactions (HRIs).
We learn the interaction dynamics from demonstrations, using Hidden Semi-Markov Models (HSMMs) to model the joint distribution of the interacting agents in the latent space of a Variational Autoencoder (VAE).
MILD generates more accurate trajectories for the controlled agent (robot) when conditioned on the observed agent's (human) trajectory.
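
A minimal sketch of the latent-space setup above: a VAE over joint human-robot trajectories whose decoder can produce the robot's half from a human-only encoding. The HSMM over the latent space and all training code (ELBO, aligning the human-only encoder) are omitted; dimensions are assumptions:

```python
# Sketch: shared latent for joint trajectories; at test time, encode
# the observed human trajectory and decode the robot's part.
import torch
import torch.nn as nn

T, D = 20, 3  # trajectory length and per-agent dimension (assumed)

class JointVAE(nn.Module):
    def __init__(self, latent=8):
        super().__init__()
        self.enc = nn.Linear(2 * T * D, 2 * latent)   # human + robot in
        self.enc_h = nn.Linear(T * D, 2 * latent)     # human-only in
        self.dec = nn.Linear(latent, 2 * T * D)       # both agents out

    def encode(self, x, enc):
        mu, logvar = enc(x).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, human, robot):
        z, mu, logvar = self.encode(torch.cat([human, robot], -1), self.enc)
        return self.dec(z), mu, logvar

model = JointVAE()
human, robot = torch.randn(1, T * D), torch.randn(1, T * D)
recon, mu, logvar = model(human, robot)

# Test time: condition on the human alone, read off the robot's half.
z_test, _, _ = model.encode(human, model.enc_h)
robot_pred = model.dec(z_test)[:, T * D:]
```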
arXiv Detail & Related papers (2022-10-22T11:25:11Z)
- CHARM: A Hierarchical Deep Learning Model for Classification of Complex Human Activities Using Motion Sensors [0.9594432031144714]
CHARM is a hierarchical deep learning model for classification of complex human activities using motion sensors.
It outperforms state-of-the-art supervised learning approaches for high-level activity recognition in terms of average accuracy and F1 scores.
The ability to learn low-level user activities when trained using only high-level activity labels may pave the way to semi-supervised learning of HAR tasks.
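
A minimal sketch of the hierarchical idea above: a low-level encoder over motion-sensor windows feeds a high-level sequence classifier trained only with high-level labels. Layer sizes and window shapes are assumptions, not CHARM's actual architecture:

```python
# Sketch: per-segment features (low level) aggregated over time into a
# high-level activity prediction, supervised only at the high level.
import torch
import torch.nn as nn

class HierarchicalHAR(nn.Module):
    def __init__(self, channels=6, low_dim=32, n_high=5):
        super().__init__()
        # Low level: features from accelerometer/gyroscope windows.
        self.low = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, low_dim),
        )
        # High level: aggregate segment features over the sequence.
        self.high = nn.GRU(low_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_high)

    def forward(self, x):  # x: (batch, segments, channels, time)
        b, s, c, t = x.shape
        feats = self.low(x.reshape(b * s, c, t)).reshape(b, s, -1)
        _, h = self.high(feats)
        return self.head(h[-1])

model = HierarchicalHAR()
windows = torch.randn(2, 10, 6, 50)  # 10 segments of 50 samples each
print(model(windows).shape)          # (2, 5) high-level activity logits
```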
arXiv Detail & Related papers (2022-07-16T01:36:54Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Existing human-robot handover pipelines do not plan motions that take human comfort into account.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
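
A minimal sketch of a sampling-based model-predictive controller that trades off progress toward the handover point against motion smoothness; the double-integrator dynamics, cost terms, and weights are illustrative assumptions, not the paper's formulation:

```python
# Sketch: sample acceleration sequences, roll them out, keep the best,
# execute only the first action, replan at the next step.
import numpy as np

def rollout_cost(x0, v0, accels, goal, dt=0.1, w_smooth=0.5):
    x, v, cost = x0.copy(), v0.copy(), 0.0
    prev_a = np.zeros_like(v0)
    for a in accels:
        v = v + a * dt
        x = x + v * dt
        cost += np.sum((x - goal) ** 2)               # reach the hand
        cost += w_smooth * np.sum((a - prev_a) ** 2)  # penalize jerk
        prev_a = a
    return cost

def mpc_step(x0, v0, goal, horizon=10, n_samples=256, rng=None):
    rng = rng or np.random.default_rng(0)
    best_cost, best_a0 = np.inf, None
    for _ in range(n_samples):
        accels = rng.normal(0, 1.0, size=(horizon, x0.shape[0]))
        c = rollout_cost(x0, v0, accels, goal)
        if c < best_cost:
            best_cost, best_a0 = c, accels[0]
    return best_a0  # execute only the first action, then replan

x, v, goal = np.zeros(3), np.zeros(3), np.array([0.4, 0.2, 0.3])
print("first commanded acceleration:", mpc_step(x, v, goal))
```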
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations across modalities toward recognizing gestures.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Human Activity Recognition Using Multichannel Convolutional Neural Network [0.0]
Human Activity Recognition (HAR) simply refers to the capacity of a machine to perceive human actions.
This paper describes a supervised learning method that can distinguish human actions based on data collected from practical human movements.
The model was tested on the UCI HAR dataset, which resulted in a 95.25% classification accuracy.
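
A minimal multichannel 1-D CNN sketch for the UCI HAR dataset's 9 inertial channels, 128-sample windows, and 6 activity classes; the paper's exact architecture is not given in this listing, so the layer sizes below are assumptions:

```python
# Sketch: each sensor channel is one input channel of a 1-D CNN over
# the 128-timestep window; the head outputs 6 activity logits.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(9, 64, kernel_size=3), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3), nn.ReLU(),
    nn.Dropout(0.5),
    nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(64 * 62, 100), nn.ReLU(),
    nn.Linear(100, 6),
)

windows = torch.randn(4, 9, 128)  # (batch, channels, timesteps)
print(model(windows).shape)       # (4, 6) activity logits
```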
arXiv Detail & Related papers (2021-01-17T16:48:17Z)
- Federated Learning with Heterogeneous Labels and Models for Mobile Activity Monitoring [0.7106986689736827]
On-device Federated Learning proves to be an effective approach for distributed and collaborative machine learning.
We propose a framework for federated label-based aggregation, which leverages overlapping information gain across activities.
Empirical evaluation with the Heterogeneity Human Activity Recognition (HHAR) dataset on Raspberry Pi 2 indicates an average deterministic accuracy increase of at least 11.01%.
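
The label-based aggregation idea can be sketched as per-label averaging over only the clients that hold that label; the toy per-label weight vectors below are stand-ins for real model parameters, and the paper's information-gain weighting is not reproduced:

```python
# Sketch: clients train only on the activity labels they observe; the
# server averages parameters per label across clients sharing it.
import numpy as np

LABELS = ["walk", "sit", "stand", "bike"]

# Each client holds a per-label weight vector (e.g., one linear-head
# row) for only the labels present in its local data.
clients = [
    {"walk": np.ones(4), "sit": np.full(4, 2.0)},
    {"walk": np.full(4, 3.0), "bike": np.full(4, 4.0)},
    {"sit": np.full(4, 6.0), "stand": np.full(4, 5.0)},
]

def aggregate(clients):
    global_model = {}
    for label in LABELS:
        rows = [c[label] for c in clients if label in c]
        if rows:  # average only over clients that know this label
            global_model[label] = np.mean(rows, axis=0)
    return global_model

print(aggregate(clients))
# "walk" averages clients 0 and 1; "stand" comes from client 2 alone.
```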
arXiv Detail & Related papers (2020-12-04T11:44:17Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
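
A minimal sketch of mixing interaction data (actions observed) with observational data of other agents (actions unobserved) when fitting a one-step predictive model: the unobserved actions are treated as latents and optimized jointly with the model. The linear model and dimensions are assumptions, not the paper's formulation:

```python
# Sketch: supervised loss on robot data with known actions, plus a
# reconstruction loss on human data whose actions are inferred latents.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
model = nn.Linear(state_dim + action_dim, state_dim)  # (s, a) -> s'
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Interaction data: (s, a, s') from the robot itself.
s, a, s_next = torch.randn(16, 8), torch.randn(16, 2), torch.randn(16, 8)
# Observation data: (s, s') from watching humans, actions unobserved.
so, so_next = torch.randn(16, 8), torch.randn(16, 8)
latent_a = torch.zeros(16, 2, requires_grad=True)  # inferred actions
opt_a = torch.optim.Adam([latent_a], lr=1e-2)

for _ in range(100):
    opt.zero_grad(); opt_a.zero_grad()
    loss = mse(model(torch.cat([s, a], -1)), s_next)            # supervised
    loss = loss + mse(model(torch.cat([so, latent_a], -1)), so_next)
    loss.backward()
    opt.step(); opt_a.step()  # jointly fit the model and latent actions
```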
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.