Computational ergonomics for task delegation in Human-Robot
Collaboration: spatiotemporal adaptation of the robot to the human through
contactless gesture recognition
- URL: http://arxiv.org/abs/2203.11007v2
- Date: Tue, 22 Mar 2022 08:56:44 GMT
- Title: Computational ergonomics for task delegation in Human-Robot
Collaboration: spatiotemporal adaptation of the robot to the human through
contactless gesture recognition
- Authors: Brenda Elizabeth Olivas-Padilla, Dimitris Papanagiotou, Gavriela
Senteri, Sotiris Manitsaris, and Alina Glushkova
- Abstract summary: This paper proposes two hypotheses for ergonomically effective task delegation and Human-Robot Collaboration (HRC).
The first hypothesis states that it is possible to ergonomically quantify professional tasks using motion data from a reduced set of sensors.
The second hypothesis is that by including gesture recognition and spatial adaptation, the ergonomics of an HRC scenario can be improved by avoiding needless motions.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The high prevalence of work-related musculoskeletal disorders (WMSDs) could
be addressed by optimizing Human-Robot Collaboration (HRC) frameworks for
manufacturing applications. In this context, this paper proposes two hypotheses
for ergonomically effective task delegation and HRC. The first hypothesis
states that it is possible to ergonomically quantify professional tasks using
motion data from a reduced set of sensors. Then, the most dangerous tasks can
be delegated to a collaborative robot. The second hypothesis is that by
including gesture recognition and spatial adaptation, the ergonomics of an HRC
scenario can be improved by avoiding needless motions that could expose
operators to ergonomic risks and by lowering the physical effort required of
operators. An HRC scenario for a television manufacturing process is optimized
to test both hypotheses. For the ergonomic evaluation, motion primitives with
known ergonomic risks were modeled for their detection in professional tasks
and to estimate a risk score based on the European Assembly Worksheet (EAWS). A
Deep Learning gesture recognition module trained with egocentric television
assembly data was used to complement the collaboration between the human
operator and the robot. Additionally, a skeleton-tracking algorithm provided
the robot with information about the operator's pose, allowing it to spatially
adapt its motion to the operator's anthropometrics. Three experiments were
conducted to determine the effect of gesture recognition and spatial adaptation
on the operator's range of motion. The rate of spatial adaptation was used as a
key performance indicator (KPI), and a new KPI for measuring the reduction in
the operator's motion is presented in this paper.
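The pipeline described above (detect risky motion primitives, score them against the EAWS, delegate the riskiest tasks to the robot, and track a spatial-adaptation-rate KPI) can be illustrated with a minimal sketch. This is not the paper's implementation: the primitive names, point values, and threshold below are hypothetical placeholders standing in for the paper's EAWS-based scoring.

```python
# Illustrative sketch only: hypothetical EAWS-style penalty points per
# detected motion primitive (the paper's actual primitives and scores differ).
EAWS_POINTS = {
    "trunk_bending": 4.0,
    "arms_above_shoulder": 3.0,
    "neutral_reach": 0.5,
}

def task_risk_score(detected_primitives):
    """Sum penalty points over the motion primitives detected in a task."""
    return sum(EAWS_POINTS.get(p, 0.0) for p in detected_primitives)

def delegate_tasks(tasks, threshold):
    """Split tasks into (robot, human): tasks at or above the risk
    threshold are delegated to the collaborative robot."""
    robot, human = [], []
    for name, primitives in tasks.items():
        (robot if task_risk_score(primitives) >= threshold else human).append(name)
    return robot, human

def adaptation_rate(adapted_handovers, total_handovers):
    """KPI: fraction of handovers in which the robot spatially
    adapted its motion to the operator's pose."""
    return adapted_handovers / total_handovers if total_handovers else 0.0

# Hypothetical tasks from a TV-assembly scenario.
tasks = {
    "mount_back_cover": ["trunk_bending", "arms_above_shoulder"],  # score 7.0
    "insert_cable": ["neutral_reach"],                             # score 0.5
}
robot_tasks, human_tasks = delegate_tasks(tasks, threshold=5.0)
print(robot_tasks)              # ['mount_back_cover']
print(human_tasks)              # ['insert_cable']
print(adaptation_rate(18, 20))  # 0.9
```

In this toy version, delegation is a simple threshold on the summed risk score; the paper additionally uses gesture recognition and skeleton tracking to decide when and where the robot should act.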
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Data-Driven Ergonomic Risk Assessment of Complex Hand-intensive
Manufacturing Processes [1.5837588732514762]
Hand-intensive manufacturing processes require significant human dexterity to accommodate task complexity.
These strenuous hand motions often lead to musculoskeletal disorders and rehabilitation surgeries.
We develop a data-driven ergonomic risk assessment system to better identify and address ergonomic issues related to hand-intensive manufacturing processes.
arXiv Detail & Related papers (2024-03-05T23:32:45Z) - Offline Risk-sensitive RL with Partial Observability to Enhance
Performance in Human-Robot Teaming [1.3980986259786223]
We propose a method to incorporate model uncertainty, thus enabling risk-sensitive sequential decision-making.
Experiments were conducted with a group of twenty-six human participants within a simulated robot teleoperation environment.
arXiv Detail & Related papers (2024-02-08T14:27:34Z) - Real-time Addressee Estimation: Deployment of a Deep-Learning Model on
the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z) - Rearrange Indoor Scenes for Human-Robot Co-Activity [82.22847163761969]
We present an optimization-based framework for rearranging indoor furniture to accommodate human-robot co-activities better.
Our algorithm preserves the functional relations among furniture by integrating spatial and semantic co-occurrence extracted from SUNCG and ConceptNet.
Our experiments show that rearranged scenes provide an average of 14% more accessible space and 30% more objects to interact with.
arXiv Detail & Related papers (2023-03-10T03:03:32Z) - Active Predicting Coding: Brain-Inspired Reinforcement Learning for
Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z) - Dynamic Human-Robot Role Allocation based on Human Ergonomics Risk
Prediction and Robot Actions Adaptation [35.91053423341299]
We propose a novel method that optimizes assembly strategies and distributes the effort among the workers in human-robot cooperative tasks.
The proposed approach succeeds in controlling the task allocation process to ensure safe and ergonomic conditions for the human worker.
arXiv Detail & Related papers (2021-11-05T17:29:41Z) - Ergonomically Intelligent Physical Human-Robot Interaction: Postural
Estimation, Assessment, and Optimization [3.681892767755111]
We show that we can estimate human posture solely from the trajectory of the interacting robot.
We propose DULA, a differentiable ergonomics model, and use it in gradient-free postural optimization for physical human-robot interaction tasks.
arXiv Detail & Related papers (2021-08-12T21:13:06Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.