Enhancing team performance with transfer-learning during real-world
human-robot collaboration
- URL: http://arxiv.org/abs/2211.13070v1
- Date: Wed, 23 Nov 2022 16:02:00 GMT
- Authors: Athanasios C. Tsitos and Maria Dagioglou
- Abstract summary: Transfer learning was integrated into a deep Reinforcement Learning (dRL) agent.
A probabilistic policy reuse method was used for the transfer learning (TL).
TL also affected the subjective performance of the teams and enhanced the perceived fluency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Socially aware robots should be able, among other things, to support
fluent human-robot collaboration in tasks that require interdependent actions in
order to be solved. Towards enhancing mutual performance, collaborative robots
should be equipped with adaptation and learning capabilities. However,
co-learning can be a time-consuming procedure. For this reason, transferring
knowledge from an expert could potentially boost overall team performance. In
the present study, transfer learning was integrated into a deep Reinforcement
Learning (dRL) agent. In a real-time, real-world set-up, two groups of
participants had to collaborate with a cobot under two different dRL-agent
conditions: one that transferred knowledge and one that did not. A probabilistic
policy reuse method was used for the transfer learning (TL). The results showed
a significant difference between the performance of the two groups; TL halved
the time needed to train new participants on the task. Moreover, TL also
affected the subjective performance of the teams and enhanced the perceived
fluency. Finally, in many cases the objective performance metrics did not
correlate with the subjective ones, providing interesting insights for the
design of transparent and explainable cobot behaviour.
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and second combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Exploring CausalWorld: Enhancing robotic manipulation via knowledge transfer and curriculum learning [6.683222869973898]
This study explores a learning-based tri-finger robotic arm manipulating task, which requires complex movements and coordination among the fingers.
By employing reinforcement learning, we train an agent to acquire the necessary skills for proficient manipulation.
Two knowledge transfer strategies, fine-tuning and curriculum learning, were utilized within the soft actor-critic architecture.
arXiv Detail & Related papers (2024-03-25T23:19:19Z) - Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z) - Progressively Efficient Learning [58.6490456517954]
We develop a novel learning framework named Communication-Efficient Interactive Learning (CEIL).
CEIL leads to the emergence of a human-like pattern in which the learner and the teacher communicate efficiently by exchanging increasingly abstract intentions.
Agents trained with CEIL quickly master new tasks, outperforming non-hierarchical and hierarchical imitation learning by up to 50% and 20% in absolute success rate.
arXiv Detail & Related papers (2023-10-13T07:52:04Z) - Human Decision Makings on Curriculum Reinforcement Learning with
Difficulty Adjustment [52.07473934146584]
We guide the curriculum reinforcement learning results towards a preferred performance level that is neither too hard nor too easy via learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows reinforcement learning performance can successfully adjust in sync with the human desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z) - Identifying Suitable Tasks for Inductive Transfer Through the Analysis
of Feature Attributions [78.55044112903148]
We use explainability techniques to predict whether task pairs will be complementary, through comparison of neural network activation between single-task models.
Our results show that, through this approach, it is possible to reduce training time by up to 83.5% at a cost of only 0.034 reduction in positive-class F1 on the TREC-IS 2020-A dataset.
arXiv Detail & Related papers (2022-02-02T15:51:07Z) - Accelerating the Convergence of Human-in-the-Loop Reinforcement Learning
with Counterfactual Explanations [1.8275108630751844]
Human-in-the-loop Reinforcement Learning (HRL) addresses this issue by combining human feedback and reinforcement learning techniques.
We extend the existing TAMER Framework with the possibility to enhance human feedback with two different types of counterfactual explanations.
arXiv Detail & Related papers (2021-08-03T08:27:28Z) - PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via
Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z) - Two-stage training algorithm for AI robot soccer [2.0757564643017092]
Two-stage heterogeneous centralized training is proposed to improve the learning performance of heterogeneous agents.
The proposed method is applied to 5 versus 5 AI robot soccer for validation.
arXiv Detail & Related papers (2021-04-13T04:24:13Z) - Deep Reinforcement Learning with Interactive Feedback in a Human-Robot
Environment [1.2998475032187096]
We propose a deep reinforcement learning approach with interactive feedback to learn a domestic task in a human-robot scenario.
We compare three different learning methods using a simulated robotic arm for the task of organizing different objects.
The obtained results show that a learner agent, using either agent-IDeepRL or human-IDeepRL, completes the given task earlier and has fewer mistakes compared to the autonomous DeepRL approach.
arXiv Detail & Related papers (2020-07-07T11:55:27Z) - Human-Robot Team Coordination with Dynamic and Latent Human Task
Proficiencies: Scheduling with Learning Curves [0.0]
We introduce a novel resource coordination approach that enables robots to explore the relative strengths and learning abilities of their human teammates.
We generate and evaluate a robust schedule while discovering the latent proficiency of individual workers.
Results indicate that scheduling strategies favoring exploration tend to be beneficial for human-robot collaboration.
arXiv Detail & Related papers (2020-07-03T19:44:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.