Learning to Shape Rewards using a Game of Switching Controls
- URL: http://arxiv.org/abs/2103.09159v1
- Date: Tue, 16 Mar 2021 15:56:57 GMT
- Title: Learning to Shape Rewards using a Game of Switching Controls
- Authors: David Mguni, Jianhong Wang, Taher Jafferjee, Nicolas Perez-Nieves,
Wenbin Song, Yaodong Yang, Feifei Tong, Hui Chen, Jiangcheng Zhu, Yali Du,
Jun Wang
- Abstract summary: We introduce an automated RS framework in which the shaping-reward function is constructed in a novel game between two agents.
We prove theoretically that our framework, which readily accommodates existing RL algorithms, learns to construct a shaping-reward function that is tailored to the task.
We demonstrate the superior performance of our method over state-of-the-art RS algorithms in Cartpole and the challenging console games Gravitar, Solaris and Super Mario.
- Score: 21.456451774045465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reward shaping (RS) is a powerful method in reinforcement learning (RL) for
overcoming the problem of sparse and uninformative rewards. However, RS relies
on manually engineered shaping-reward functions whose construction is typically
time-consuming and error-prone. It also requires domain knowledge, which runs
contrary to the goal of autonomous learning. In this paper, we introduce an
automated RS framework in which the shaping-reward function is constructed in a
novel stochastic game between two agents. One agent learns both in which states
to add shaping rewards and their optimal magnitudes, and the other agent learns the
optimal policy for the task using the shaped rewards. We prove theoretically
that our framework, which readily accommodates existing RL algorithms, learns to
construct a shaping-reward function that is tailored to the task and ensures
convergence to higher-performing policies for the given task. We demonstrate
the superior performance of our method over state-of-the-art RS algorithms
in Cartpole and the challenging console games Gravitar, Solaris and Super
Mario.
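The abstract describes a switching-control loop: a shaping agent decides, state by state, whether to switch shaping rewards on and at what magnitude, while the task agent learns from the resulting shaped reward. Below is a minimal tabular sketch of that loop on a toy sparse-reward chain; the environment, hyperparameters, bonus menu, and the choice to score the shaper on extrinsic reward only are our assumptions, not the paper's construction.

```python
import random
from collections import defaultdict

# Toy sparse-reward chain: the only extrinsic reward sits at the far-right state.
N_STATES, ACTIONS, GAMMA, ALPHA, EPS = 10, (-1, +1), 0.95, 0.1, 0.1
BONUSES = (0.0, 0.1, 0.5)  # 0.0 acts as the shaper's "switch off" control

def env_step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, float(s2 == N_STATES - 1), s2 == N_STATES - 1

def eps_greedy(q, s, choices):
    if random.random() < EPS:
        return random.choice(choices)
    return max(choices, key=lambda c: q[(s, c)])

q_task = defaultdict(float)    # task agent: learns from the shaped reward
q_shaper = defaultdict(float)  # shaping agent: picks a per-state bonus magnitude

for _ in range(500):
    s, done, t = 0, False, 0
    while not done and t < 100:
        bonus = eps_greedy(q_shaper, s, BONUSES)  # shaper moves first
        a = eps_greedy(q_task, s, ACTIONS)
        s2, r, done = env_step(s, a)
        shaped = r + (bonus if s2 > s else 0.0)   # crude progress shaping
        # Task agent: ordinary Q-learning on the shaped reward.
        nxt = 0.0 if done else max(q_task[(s2, a2)] for a2 in ACTIONS)
        q_task[(s, a)] += ALPHA * (shaped + GAMMA * nxt - q_task[(s, a)])
        # Shaping agent: scored on extrinsic reward only, so it cannot
        # profit from bonuses that do not improve the real task.
        nxtb = 0.0 if done else max(q_shaper[(s2, b)] for b in BONUSES)
        q_shaper[(s, bonus)] += ALPHA * (r + GAMMA * nxtb - q_shaper[(s, bonus)])
        s, t = s2, t + 1
```

The structural point is the turn order: the shaper commits to a bonus before the task agent acts, mirroring the game of switching controls described above.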
Related papers
- ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization [41.074747242532695]
Online Reward Selection and Policy Optimization (ORSO) is a novel approach that frames shaping reward selection as an online model selection problem.
ORSO employs principled exploration strategies to automatically identify promising shaping reward functions without human intervention.
We demonstrate ORSO's effectiveness across various continuous control tasks using the Isaac Gym simulator.
arXiv Detail & Related papers (2024-10-17T17:55:05Z)
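ORSO's framing of shaping-reward selection as online model selection can be pictured as a bandit over candidate shaping functions. This is a toy sketch, not ORSO's actual algorithm: the UCB1 rule, the candidate list, and the stubbed training routine are our stand-ins.

```python
import math, random

# Candidate shaping-reward designs; in ORSO these would come from a
# human-written or generated library of task-specific shaping functions.
CANDIDATES = ["no shaping", "progress bonus", "step penalty"]
TRUE_QUALITY = [0.2, 0.8, 0.4]  # hidden stand-in for downstream task return

def train_and_evaluate(i):
    """Stub for 'train the policy briefly under shaping i, report task return'."""
    return random.gauss(TRUE_QUALITY[i], 0.1)

counts = [0] * len(CANDIDATES)
means = [0.0] * len(CANDIDATES)

for t in range(1, 201):
    if 0 in counts:                      # try every candidate once first
        i = counts.index(0)
    else:                                # UCB1: optimism under uncertainty
        i = max(range(len(CANDIDATES)),
                key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
    ret = train_and_evaluate(i)
    counts[i] += 1
    means[i] += (ret - means[i]) / counts[i]

print("selected:", CANDIDATES[means.index(max(means))])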
- Reinforcement Learning with Foundation Priors: Let the Embodied Agent Efficiently Learn on Its Own [59.11934130045106]
We propose Reinforcement Learning with Foundation Priors (RLFP) to utilize guidance and feedback from policy, value, and success-reward foundation models.
Within this framework, we introduce the Foundation-guided Actor-Critic (FAC) algorithm, which enables embodied agents to explore more efficiently with automatic reward functions.
Our method achieves remarkable performances in various manipulation tasks on both real robots and in simulation.
arXiv Detail & Related papers (2023-10-04T07:56:42Z)
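One way to picture RLFP's "automatic reward functions" is a reward assembled from frozen foundation models. The sketch below is speculative: success_model, policy_prior, and the weighting are hypothetical stand-ins for the paper's policy, value, and success-reward models, not its actual interfaces.

```python
def success_model(obs):
    """Stand-in for a success-reward foundation model scoring task completion;
    in RLFP this would be a large pretrained model, not a threshold check."""
    return 1.0 if obs["gripper_at_goal"] else 0.0

def policy_prior(obs):
    """Stand-in for a policy foundation model suggesting an action."""
    return "reach_toward_goal"

def automatic_reward(obs, action):
    # One plausible reading of FAC's automatic reward: the success signal
    # plus a small bonus for agreeing with the policy prior's suggestion.
    return success_model(obs) + 0.1 * float(action == policy_prior(obs))

print(automatic_reward({"gripper_at_goal": False}, "reach_toward_goal"))  # 0.1
```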
- Inverse Preference Learning: Preference-based RL without a Reward Function [34.31087304327075]
Inverse Preference Learning (IPL) is specifically designed for learning from offline preference data.
Our key insight is that for a fixed policy, the $Q$-function encodes all information about the reward function, effectively making them interchangeable.
IPL attains competitive performance compared to more complex approaches that leverage transformer-based and non-Markovian reward functions.
arXiv Detail & Related papers (2023-05-24T17:14:10Z)
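The interchangeability claim above has a concrete form: for a fixed policy $\pi$, $Q^\pi(s,a) = r(s,a) + \gamma \mathbb{E}[V^\pi(s')]$, so the reward can be read back out as $r(s,a) = Q^\pi(s,a) - \gamma \mathbb{E}[V^\pi(s')]$. The toy MDP below verifies that identity numerically; the MDP itself is ours, not from the paper.

```python
import numpy as np

# Tiny deterministic 2-state, 2-action MDP with a fixed policy pi.
gamma = 0.9
P = np.array([[1, 0], [0, 1]])          # next state for each (state, action)
R = np.array([[0.0, 1.0], [0.5, 0.0]])  # the "unknown" reward to recover
pi = np.array([[0.5, 0.5], [1.0, 0.0]]) # action probabilities per state

# Policy evaluation: iterate Q(s, a) = R(s, a) + gamma * V(next state).
Q = np.zeros((2, 2))
for _ in range(500):
    V = (pi * Q).sum(axis=1)
    Q = R + gamma * V[P]

# IPL's key identity: with the policy fixed, Q determines the reward,
# r(s, a) = Q(s, a) - gamma * V(s'), so Q and r are interchangeable.
V = (pi * Q).sum(axis=1)
print(np.allclose(Q - gamma * V[P], R))  # True
```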
- Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals [69.76245723797368]
Read and Reward speeds up RL algorithms on Atari games by reading manuals released by the Atari game developers.
Various RL algorithms achieve significant improvements in performance and training speed when assisted by our design.
arXiv Detail & Related papers (2023-02-09T05:47:03Z)
- Learning of Parameters in Behavior Trees for Movement Skills [0.9562145896371784]
Behavior Trees (BTs) can provide a policy representation that supports modular and composable skills.
We present a novel algorithm that can learn the parameters of a BT policy in simulation and then generalize to the physical robot without any additional training.
arXiv Detail & Related papers (2021-09-27T13:46:39Z)
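At its core, the BT-parameter learning described above is black-box optimization of a parameterized tree against simulated rollouts. The sketch below uses plain random search and a stub simulator; the two-node tree, parameter names, and search method are our simplifications, not the paper's algorithm.

```python
import random

def bt_policy(obs, params):
    """A two-node behavior tree: a guarded 'approach' skill with a 'search'
    fallback. params = (trigger_threshold, approach_gain)."""
    threshold, gain = params
    if obs["distance"] < threshold:        # condition node
        return gain * obs["distance"]      # approach action node
    return 0.3                             # fallback: search action

def rollout(params):
    """Stub simulator: a smaller final distance means a higher return."""
    d = 1.0
    for _ in range(50):
        d = max(0.0, d - bt_policy({"distance": d}, params) * 0.1)
    return -d

# Black-box search over BT parameters in simulation; the best parameters
# would then be transferred to the physical robot without retraining.
best = max(((random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)),
           key=rollout)
print("best (threshold, gain):", best)
```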
- Composable Learning with Sparse Kernel Representations [110.19179439773578]
We present a reinforcement learning algorithm for learning sparse non-parametric controllers in a Reproducing Kernel Hilbert Space.
We improve the sample complexity of this approach by imposing structure on the state-action value function through a normalized advantage function.
We demonstrate the performance of this algorithm on learning obstacle-avoidance policies in multiple simulations of a robot equipped with a laser scanner while navigating in a 2D environment.
arXiv Detail & Related papers (2021-03-26T13:58:23Z)
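The normalized advantage function mentioned above imposes the quadratic structure $Q(s,a) = V(s) - \tfrac{1}{2}(a-\mu(s))^\top P(s)(a-\mu(s))$, which makes the greedy continuous action simply $\mu(s)$. A minimal numeric sketch, with toy $V$, $\mu$, and $P$ standing in for the paper's kernel expansions:

```python
import numpy as np

# Normalized advantage function: Q(s, a) = V(s) - 0.5 (a - mu(s))^T P(s) (a - mu(s)),
# so the maximizing continuous action is mu(s) in closed form.
def make_naf(V, mu, P):
    def Q(s, a):
        d = a - mu(s)
        return V(s) - 0.5 * d @ P(s) @ d
    return Q

V = lambda s: 1.0 - 0.1 * np.linalg.norm(s)  # toy value estimate
mu = lambda s: -0.5 * s                      # toy greedy action
P = lambda s: np.eye(2)                      # positive definite precision

Q = make_naf(V, mu, P)
s = np.array([0.4, -0.2])
print(Q(s, mu(s)) >= Q(s, mu(s) + 0.1))  # True: mu(s) maximizes Q(s, .)
```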
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi\Phi$-learning.
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
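$\Psi\Phi$-learning builds on successor features: if rewards are linear in features, $r = \phi \cdot w$, then $Q^\pi(s,a) = \psi^\pi(s,a) \cdot w$, where $\psi$ accumulates expected discounted features; ITD's role is essentially to fit $w$ (and $\psi$) from demonstrations. A toy sketch of the decomposition on a two-state MDP of our own devising:

```python
import numpy as np

# Successor features: psi(s, a) accumulates expected discounted features phi;
# with rewards linear in features (r = phi . w), Q(s, a) = psi(s, a) . w.
gamma = 0.9
phi = np.array([[1.0, 0.0], [0.0, 1.0]])  # feature vector of each of 2 states
P = np.array([[1, 0], [0, 1]])            # next state for each (state, action)
pi = np.array([0, 1])                     # fixed policy: one action per state

psi = np.zeros((2, 2, 2))                 # psi[s, a] lives in feature space
for _ in range(200):
    for s in range(2):
        for a in range(2):
            s2 = P[s, a]
            psi[s, a] = phi[s] + gamma * psi[s2, pi[s2]]

w = np.array([0.0, 1.0])                  # task weights; ITD fits these from data
Q = psi @ w                               # Q[s, a] = psi(s, a) . w for this task
print(Q)
```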
- Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping [71.214923471669]
Reward shaping is an effective technique for incorporating domain knowledge into reinforcement learning (RL).
In this paper, we consider the problem of adaptively utilizing a given shaping reward function.
Experiments in sparse-reward cartpole and MuJoCo environments show that our algorithms can fully exploit beneficial shaping rewards.
arXiv Detail & Related papers (2020-11-05T05:34:14Z)
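The "adaptively utilizing a given shaping reward" problem above can be caricatured as learning a single weight $z$ on the shaping term from extrinsic performance alone, so that a harmful shaping reward is driven toward $z = 0$. The paper's actual bi-level algorithms are more sophisticated; the finite-difference outer loop and stubbed inner loop below are our stand-ins.

```python
import random

def extrinsic_return(z):
    """Stub inner loop: 'train briefly with reward r + z * f, report the
    extrinsic return'. Here the given shaping helps most at z = 0.6."""
    return -(z - 0.6) ** 2 + random.gauss(0, 0.01)

# Outer loop: adapt the shaping weight from extrinsic performance alone,
# so shaping is exploited only insofar as it improves the real task.
z, lr, delta = 1.0, 0.5, 0.05
for _ in range(100):
    grad = (extrinsic_return(z + delta) - extrinsic_return(z - delta)) / (2 * delta)
    z = max(0.0, z + lr * grad)
print("learned shaping weight ~", round(z, 2))
```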
- Reward Machines: Exploiting Reward Function Structure in Reinforcement Learning [22.242379207077217]
We show how to expose the reward function's code to the RL agent so that it can exploit the function's internal structure to learn optimal policies.
First, we propose reward machines, a type of finite state machine that supports the specification of reward functions.
We then describe different methodologies to exploit this structure to support learning, including automated reward shaping, task decomposition, and counterfactual reasoning with off-policy learning.
arXiv Detail & Related papers (2020-10-06T00:10:16Z)
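A reward machine in the sense above is a finite state machine whose transitions fire on propositional events and emit rewards, exposing task structure the agent can exploit. A minimal sketch; the "coffee delivery" events are a standard illustrative example from this literature, not code from the paper:

```python
class RewardMachine:
    """Minimal reward machine: a finite state machine whose transitions fire
    on propositional events and emit rewards. Task: get coffee, then deliver
    it to the office."""

    def __init__(self):
        self.state = "u0"
        # (machine state, event) -> (next machine state, reward)
        self.delta = {
            ("u0", "coffee"): ("u1", 0.0),
            ("u1", "office"): ("u2", 1.0),  # task complete
        }

    def step(self, event):
        self.state, reward = self.delta.get((self.state, event),
                                            (self.state, 0.0))
        return reward

rm = RewardMachine()
for event in ["office", "coffee", "office"]:
    print(event, "->", rm.step(event), "machine state:", rm.state)
```

Because the machine state is visible, an agent can shape rewards per machine state, decompose the task at transitions, or reason counterfactually about events it did not trigger, as the summary describes.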
- Active Finite Reward Automaton Inference and Reinforcement Learning Using Queries and Counterexamples [31.31937554018045]
Deep reinforcement learning (RL) methods require intensive data from the exploration of the environment to achieve satisfactory performance.
We propose a framework that enables an RL agent to reason over its exploration process and distill high-level knowledge for effectively guiding its future explorations.
Specifically, we propose a novel RL algorithm that learns high-level knowledge in the form of a finite reward automaton by using the L* learning algorithm.
arXiv Detail & Related papers (2020-06-28T21:13:08Z)
- Forgetful Experience Replay in Hierarchical Reinforcement Learning from Demonstrations [55.41644538483948]
In this paper, we propose a combination of approaches that allow the agent to use low-quality demonstrations in complex vision-based environments.
Our proposed goal-oriented structuring of the replay buffer allows the agent to automatically highlight sub-goals for solving complex hierarchical tasks in demonstrations.
The solution based on our algorithm outperforms all other entries in the well-known MineRL competition and enables the agent to mine a diamond in the Minecraft environment.
arXiv Detail & Related papers (2020-06-17T15:38:40Z)
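The goal-oriented replay structuring described above can be sketched as a buffer that relabels transitions with later states as sub-goals (in the spirit of hindsight relabeling) while old entries fall off a bounded deque, which is one reading of "forgetful". The capacity, relabeling rule, and reward convention below are our guesses, not the paper's.

```python
import random
from collections import deque

class GoalRelabelingBuffer:
    """Sketch of a goal-oriented replay buffer: transitions are stored with
    the trajectory's final state as the goal and also relabeled with a later
    state as a sub-goal, giving dense signal from sparse demonstrations."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # old entries fall off: "forgetful"

    def add_trajectory(self, traj):
        # traj: list of (state, action, next_state) tuples.
        goal = traj[-1][2]
        for i, (s, a, s2) in enumerate(traj):
            self.buffer.append((s, a, s2, goal, float(s2 == goal)))
            sub = traj[random.randrange(i, len(traj))][2]  # later state as sub-goal
            self.buffer.append((s, a, s2, sub, float(s2 == sub)))

    def sample(self, n):
        return random.sample(list(self.buffer), min(n, len(self.buffer)))

buf = GoalRelabelingBuffer()
buf.add_trajectory([(0, "right", 1), (1, "right", 2), (2, "mine", 3)])
print(buf.sample(2))
```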