Continual Learning through Human-Robot Interaction -- Human Perceptions
of a Continual Learning Robot in Repeated Interactions
- URL: http://arxiv.org/abs/2305.16332v1
- Date: Mon, 22 May 2023 01:14:46 GMT
- Title: Continual Learning through Human-Robot Interaction -- Human Perceptions
of a Continual Learning Robot in Repeated Interactions
- Authors: Ali Ayub, Zachary De Francesco, Patrick Holthaus, Chrystopher L.
Nehaniv, Kerstin Dautenhahn
- Abstract summary: We developed a system that integrates CL models for object recognition with a Fetch mobile manipulator robot.
We conducted a study with 60 participants who interacted with our system in 300 sessions (5 sessions per participant).
Our results suggest that participants' perceptions of trust, competence, and usability of a continual learning robot significantly decrease over multiple sessions if the robot forgets previously learned objects.
- Score: 7.717214217542406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For long-term deployment in dynamic real-world environments, assistive robots
must continue to learn and adapt to their environments. Researchers have
developed various computational models for continual learning (CL) that can
allow robots to continually learn from limited training data, and avoid
forgetting previous knowledge. While these CL models can mitigate forgetting on
static, systematically collected datasets, it is unclear how human users might
perceive a robot that continually learns over multiple interactions with them.
In this paper, we developed a system that integrates CL models for object
recognition with a Fetch mobile manipulator robot and allows human participants
to directly teach and test the robot over multiple sessions. We conducted an
in-person study with 60 participants who interacted with our system in 300
sessions (5 sessions per participant). We conducted a between-participant study
with three different CL models (3 experimental conditions) to understand human
perceptions of continual learning robots over multiple sessions. Our results
suggest that participants' perceptions of trust, competence, and usability of a
continual learning robot significantly decrease over multiple sessions if the
robot forgets previously learned objects. However, the perceived task load on
participants for teaching and testing the robot remains the same over multiple
sessions even if the robot forgets previously learned objects. Our results also
indicate that state-of-the-art CL models might perform unreliably when applied
to robots interacting with human participants. Further, continual learning
robots are not perceived as very trustworthy or competent by human
participants, regardless of the underlying continual learning model or the
session number.
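The session-wise teach-and-test protocol described in the abstract can be illustrated with a toy continual learner. The nearest-class-mean model, the 8-dimensional object features, and the two-objects-per-session schedule below are illustrative assumptions for the sketch, not the actual CL models or data used in the study:

```python
import numpy as np

class NearestClassMean:
    """Minimal continual learner: one running mean per object class.

    A sketch in the nearest-class-mean family of CL models (assumed
    here for illustration); it learns incrementally without storing
    raw training data.
    """
    def __init__(self):
        self.means = {}   # label -> running mean feature vector
        self.counts = {}  # label -> number of examples seen

    def learn(self, label, feature):
        # Incrementally update the class mean from one new example.
        if label not in self.means:
            self.means[label] = np.array(feature, dtype=float)
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.means[label] += (feature - self.means[label]) / self.counts[label]

    def predict(self, feature):
        # Classify by the nearest stored class mean (Euclidean distance).
        labels = list(self.means)
        dists = [np.linalg.norm(feature - self.means[l]) for l in labels]
        return labels[int(np.argmin(dists))]

# Simulated 5-session protocol: teach a few new objects each session,
# then test on everything taught so far to probe forgetting.
rng = np.random.default_rng(0)
model = NearestClassMean()
taught = {}  # object label -> true feature center
for session in range(5):
    for obj in range(session * 2, session * 2 + 2):  # 2 new objects per session
        center = rng.normal(size=8) * 5
        taught[obj] = center
        for _ in range(3):  # a few noisy teaching examples
            model.learn(obj, center + rng.normal(scale=0.1, size=8))
    # Test recognition of all objects taught in this and earlier sessions.
    correct = sum(model.predict(c + rng.normal(scale=0.1, size=8)) == o
                  for o, c in taught.items())
    print(f"session {session + 1}: {correct}/{len(taught)} objects recognized")
```

A model that mitigates forgetting keeps the per-session accuracy high as the set of taught objects grows; a forgetful model would show the declining recognition of earlier objects that the study links to decreasing trust, competence, and usability ratings.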
Related papers
- Human Reactions to Incorrect Answers from Robots [0.0]
The study systematically examines how human responses to robot failures affect trust dynamics and system design.
Results show that participants' trust in robotic technologies increased significantly when robots acknowledged their errors or limitations.
The study advances the science of human-robot interaction and promotes a wider adoption of robotic technologies.
arXiv Detail & Related papers (2024-03-21T11:00:11Z)
- How Do Human Users Teach a Continual Learning Robot in Repeated Interactions? [7.193217430660011]
We take a human-centered approach to continual learning, to understand how humans teach continual learning robots over the long term.
We conducted an in-person study with 40 participants who interacted with a continual learning robot in 200 sessions.
An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users.
arXiv Detail & Related papers (2023-06-30T20:29:48Z)
- AR2-D2: Training a Robot Without a Robot [53.10633639596096]
We introduce AR2-D2, a system for collecting demonstrations which does not require people with specialized training.
AR2-D2 is a framework in the form of an iOS app that people can use to record a video of themselves manipulating any object.
We show that data collected via our system enables the training of behavior cloning agents in manipulating real objects.
arXiv Detail & Related papers (2023-06-23T23:54:26Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Continual Learning of Visual Concepts for Robots through Limited Supervision [9.89901717499058]
My research focuses on developing robots that continually learn in dynamic, unseen environments and scenarios.
I develop machine learning models that produce state-of-the-art results on benchmark datasets.
arXiv Detail & Related papers (2021-01-26T01:26:07Z)
- The State of Lifelong Learning in Service Robots: Current Bottlenecks in Object Perception and Manipulation [3.7858180627124463]
The state of the art continues to improve the coupling between object perception and manipulation.
In most of the cases, robots are able to recognize various objects, and quickly plan a collision-free trajectory to grasp a target object.
In such environments, no matter how extensive the training data used for batch learning, a robot will always face new objects.
Apart from robot self-learning, non-expert users could interactively guide the process of experience acquisition.
arXiv Detail & Related papers (2020-03-18T11:00:55Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
- A Survey of Behavior Learning Applications in Robotics -- State of the Art and Perspectives [44.45953630612019]
Recent success of machine learning in many domains has been overwhelming.
We will give a broad overview of behaviors that have been learned and used on real robots.
arXiv Detail & Related papers (2019-06-05T07:54:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.