Human-guided Robot Behavior Learning: A GAN-assisted Preference-based
Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2010.07467v1
- Date: Thu, 15 Oct 2020 01:44:06 GMT
- Title: Human-guided Robot Behavior Learning: A GAN-assisted Preference-based
Reinforcement Learning Approach
- Authors: Huixin Zhan, Feng Tao, and Yongcan Cao
- Abstract summary: We propose a new GAN-assisted human preference-based reinforcement learning approach.
It uses a generative adversarial network (GAN) to actively learn human preferences and then replaces the human in assigning preferences.
Our method reduces human time by about 99.8% without sacrificing performance.
- Score: 2.9764834057085716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human demonstrations can provide trustworthy samples to train reinforcement
learning algorithms for robots to learn complex behaviors in real-world
environments. However, obtaining sufficient demonstrations may be impractical
because many behaviors are difficult for humans to demonstrate. A more
practical approach is to replace human demonstrations with human queries, i.e.,
preference-based reinforcement learning. One key limitation of the existing
algorithms is the need for a significant number of human queries, because a
large amount of labeled data is needed to train neural networks for the
approximation of a continuous, high-dimensional reward function. To minimize
the need for human queries, we propose a new GAN-assisted human
preference-based reinforcement learning approach that uses a generative
adversarial network (GAN) to actively learn human preferences and then replace
the human in assigning preferences. The adversarial neural network is simple
and has only a binary output, hence it requires far fewer human queries to
train. Moreover, a maximum-entropy-based reinforcement learning algorithm is
designed to shape the loss towards the desired regions or away from the
undesired regions. To show the effectiveness of the proposed approach, we
present some studies on complex robotic tasks without access to the environment
reward in a typical MuJoCo robot locomotion environment. The results show that
our method reduces human time by about 99.8% without sacrificing performance.
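To make the core mechanism concrete, here is a minimal sketch of the idea described above: a small binary-output preference predictor is trained on a limited set of human-labeled trajectory-segment pairs and then stands in for the human on subsequent queries. The class names, feature shapes, and training loop are illustrative assumptions rather than the authors' implementation, and the sketch omits the generator side of the GAN and the maximum-entropy policy update.

```python
# Minimal sketch (not the authors' code): a binary-output preference predictor
# trained on a small set of human-labeled segment pairs, then used in place of
# the human labeler. Shapes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class PreferencePredictor(nn.Module):
    """Maps a pair of trajectory-segment features to a logit for P(first segment preferred)."""
    def __init__(self, seg_feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * seg_feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit -> binary preference
        )

    def forward(self, seg_a: torch.Tensor, seg_b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([seg_a, seg_b], dim=-1)).squeeze(-1)

def train_on_human_queries(model, pairs, labels, epochs=100, lr=1e-3):
    """pairs: (N, 2, seg_feat_dim) human-queried segment pairs;
    labels: (N,) 1.0 if the human preferred the first segment, else 0.0."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        logits = model(pairs[:, 0], pairs[:, 1])
        loss = loss_fn(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def synthetic_preference(model, seg_a, seg_b):
    """Once trained, the predictor answers new preference queries instead of the human."""
    with torch.no_grad():
        return (torch.sigmoid(model(seg_a, seg_b)) > 0.5).float()
```

In this reading, the small number of human answers is spent only on training the binary predictor; the predictor's synthetic labels then supervise the reward model used by the downstream policy optimization.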
Related papers
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insight is to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Few-Shot Preference Learning for Human-in-the-Loop RL [13.773589150740898]
Motivated by the success of meta-learning, we pre-train preference models on prior task data and quickly adapt them for new tasks using only a handful of queries.
We reduce the amount of online feedback needed to train manipulation policies in Meta-World by 20×, and demonstrate the effectiveness of our method on a real Franka Panda robot.
arXiv Detail & Related papers (2022-12-06T23:12:26Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a minimal sketch of this distance-based reward appears after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - Human Decision Makings on Curriculum Reinforcement Learning with
Difficulty Adjustment [52.07473934146584]
We guide curriculum reinforcement learning towards a preferred performance level, neither too hard nor too easy, by learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows that reinforcement learning performance can successfully adjust in sync with the human-desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z) - Learning from humans: combining imitation and deep reinforcement
learning to accomplish human-level performance on a virtual foraging task [6.263481844384228]
We develop a method to learn bio-inspired foraging policies using human data.
We conduct an experiment where humans are virtually immersed in an open-field foraging environment and are trained to collect the highest amount of reward.
arXiv Detail & Related papers (2022-03-11T20:52:30Z) - Skill Preferences: Learning to Extract and Execute Robotic Skills from
Human Feedback [82.96694147237113]
We present Skill Preferences, an algorithm that learns a model over human preferences and uses it to extract human-aligned skills from offline data.
We show that SkiP enables a simulated kitchen robot to solve complex multi-step manipulation tasks.
arXiv Detail & Related papers (2021-08-11T18:04:08Z) - What Matters in Learning from Offline Human Demonstrations for Robot
Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z) - Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning
Systems [0.8223798883838329]
This research investigates how to integrate human interaction modalities into the reinforcement learning loop.
Results show that a reward signal learned from human interaction accelerates the learning of reinforcement learning algorithms.
arXiv Detail & Related papers (2020-08-30T17:28:18Z) - Weak Human Preference Supervision For Deep Reinforcement Learning [48.03929962249475]
Reward learning from human preferences can be used to solve complex reinforcement learning (RL) tasks without access to a reward function.
We propose a weak human preference supervision framework, for which we develop a human preference scaling model.
Our human-demonstration estimator requires human feedback for less than 0.01% of the agent's interactions with the environment.
arXiv Detail & Related papers (2020-07-25T10:37:15Z)
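As referenced in the "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above, the learned reward there is described as a distance to a goal in a learned embedding space. Below is a minimal sketch of that reward form under stated assumptions: the encoder is a stand-in random projection used purely to show the call pattern, not a network trained with the paper's time-contrastive objective, and the function name and scale parameter are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): reward as the negative
# distance between the current observation and a goal observation in a
# learned embedding space.
import numpy as np

def embedding_reward(embed, obs, goal_obs, scale: float = 1.0) -> float:
    """embed: a learned encoder (e.g., trained with a time-contrastive
    objective on human videos) mapping observations to feature vectors."""
    z_obs = embed(obs)
    z_goal = embed(goal_obs)
    return -scale * float(np.linalg.norm(z_obs - z_goal))

# Stand-in encoder (fixed random projection) purely to illustrate usage;
# a real encoder would be a trained network.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))
embed = lambda x: W @ np.asarray(x, dtype=float)
obs, goal = rng.standard_normal(64), rng.standard_normal(64)
print(embedding_reward(embed, obs, goal))
```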