How Do Human Users Teach a Continual Learning Robot in Repeated
Interactions?
- URL: http://arxiv.org/abs/2307.00123v1
- Date: Fri, 30 Jun 2023 20:29:48 GMT
- Title: How Do Human Users Teach a Continual Learning Robot in Repeated
Interactions?
- Authors: Ali Ayub, Jainish Mehta, Zachary De Francesco, Patrick Holthaus,
Kerstin Dautenhahn and Chrystopher L. Nehaniv
- Abstract summary: We take a human-centered approach to continual learning, to understand how humans teach continual learning robots over the long term.
We conducted an in-person study with 40 participants that interacted with a continual learning robot in 200 sessions.
An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users.
- Score: 7.193217430660011
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning (CL) has emerged as an important avenue of research in
recent years, at the intersection of Machine Learning (ML) and Human-Robot
Interaction (HRI), to allow robots to continually learn in their environments
over long-term interactions with humans. Most research in continual learning,
however, has been robot-centered to develop continual learning algorithms that
can quickly learn new information on static datasets. In this paper, we take a
human-centered approach to continual learning, to understand how humans teach
continual learning robots over the long term and if there are variations in
their teaching styles. We conducted an in-person study with 40 participants
that interacted with a continual learning robot in 200 sessions. In this
between-participant study, we used two different CL models deployed on a Fetch
mobile manipulator robot. An extensive qualitative and quantitative analysis of
the data collected in the study shows that there is significant variation among
the teaching styles of individual users indicating the need for personalized
adaptation to their distinct teaching styles. The results also show that
although there is a difference in the teaching styles between expert and
non-expert users, the style does not have an effect on the performance of the
continual learning robot. Finally, our analysis shows that the constrained
experimental setups that have been widely used to test most continual learning
techniques are not adequate, as real users interact with and teach continual
learning robots in a variety of ways. Our code is available at
https://github.com/aliayub7/cl_hri.
Related papers
- On the Effect of Robot Errors on Human Teaching Dynamics [1.7249361224827533]
We investigate how the presence and severity of robot errors affect three dimensions of human teaching dynamics.
Results show that people tend to spend more time teaching robots with errors.
Our findings offer valuable insights for designing effective interfaces for interactive learning.
arXiv Detail & Related papers (2024-09-15T19:02:34Z)
- Human-Robot Mutual Learning through Affective-Linguistic Interaction and Differential Outcomes Training [Pre-Print] [0.3811184252495269]
We test how affective-linguistic communication, in combination with differential outcomes training, affects mutual learning in a human-robot context.
Taking inspiration from child-caregiver dynamics, our human-robot interaction setup consists of a (simulated) robot attempting to learn how best to communicate internal, homeostatically-controlled needs.
arXiv Detail & Related papers (2024-07-01T13:35:08Z)
- Evaluating Continual Learning on a Home Robot [30.620205237707342]
We show how continual learning methods can be adapted for use on a real, low-cost home robot.
We propose SANER, a method for continuously learning a library of skills, and ABIP, as the backbone to support it.
arXiv Detail & Related papers (2023-06-04T17:14:49Z)
- Continual Learning through Human-Robot Interaction -- Human Perceptions of a Continual Learning Robot in Repeated Interactions [7.717214217542406]
We developed a system that integrates CL models for object recognition with a Fetch mobile manipulator robot.
We conducted a study with 60 participants who interacted with our system in 300 sessions (5 sessions per participant).
Our results suggest that participants' perceptions of trust, competence, and usability of a continual learning robot significantly decrease over multiple sessions if the robot forgets previously learned objects.
arXiv Detail & Related papers (2023-05-22T01:14:46Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the field that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
- The State of Lifelong Learning in Service Robots: Current Bottlenecks in Object Perception and Manipulation [3.7858180627124463]
The state of the art continues to improve the coupling between object perception and manipulation.
In most cases, robots are able to recognize various objects and quickly plan a collision-free trajectory to grasp a target object.
In such environments, no matter how extensive the training data used for batch learning, a robot will always face new objects.
Apart from robot self-learning, non-expert users could interactively guide the process of experience acquisition.
arXiv Detail & Related papers (2020-03-18T11:00:55Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.