Intrinsically Motivated Hierarchical Policy Learning in Multi-objective
Markov Decision Processes
- URL: http://arxiv.org/abs/2308.09733v1
- Date: Fri, 18 Aug 2023 02:10:45 GMT
- Title: Intrinsically Motivated Hierarchical Policy Learning in Multi-objective
Markov Decision Processes
- Authors: Sherif Abdelfattah, Kathryn Merrick, Jiankun Hu
- Abstract summary: We propose a novel dual-phase intrinsically motivated reinforcement learning method for multi-objective Markov decision processes in non-stationary environments.
We show experimentally that the proposed method significantly outperforms state-of-the-art multi-objective reinforcement learning methods in a dynamic robotics environment.
- Score: 15.50007257943931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-objective Markov decision processes are sequential decision-making
problems that involve multiple conflicting reward functions which cannot be
optimized simultaneously without a compromise. Unlike the conventional
single-objective case, such problems cannot be solved by a single optimal
policy. Instead, multi-objective reinforcement learning methods evolve a
coverage set of optimal policies that can satisfy all possible preferences in
solving the problem. However, many of these methods cannot generalize their
coverage sets to work in non-stationary environments, in which the parameters
of the state transition and reward distributions vary over time. This
limitation results in significant performance degradation for the evolved
policy sets. To overcome it, there is a need to learn a generic skill set that
can bootstrap the evolution of the policy coverage set for each shift in the
environment dynamics, thereby facilitating a continuous learning process. In
this work, we deploy intrinsically motivated reinforcement learning to evolve
generic skill sets for learning hierarchical policies that solve
multi-objective Markov decision processes. We propose a novel dual-phase
intrinsically motivated reinforcement learning method: in the first phase, a
generic set of skills is learned; in the second phase, this set is used to
bootstrap policy coverage sets for each shift in the environment dynamics. We
show experimentally that the proposed method significantly outperforms
state-of-the-art multi-objective reinforcement learning methods in a dynamic
robotics environment.
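
To make the dual-phase idea concrete, the sketch below is a minimal tabular illustration under stated assumptions, not the authors' implementation. A common multi-objective formulation (not necessarily the paper's exact one) attaches a vector reward with one entry per objective and scalarizes it as a weighted sum under each preference weight. Phase 1 learns a generic skill from a count-based intrinsic novelty reward; Phase 2, after a simulated dynamics shift, warm-starts one policy per preference from that skill, yielding a small coverage set. All environment details and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, N_OBJECTIVES = 16, 4, 2


def make_dynamics(seed):
    # Toy deterministic transition table; a new seed models a dynamics shift.
    table = np.random.default_rng(seed).integers(N_STATES, size=(N_STATES, N_ACTIONS))
    return lambda s, a: int(table[s, a])


def make_reward(seed):
    # Vector-valued reward: one entry per (possibly conflicting) objective.
    table = np.random.default_rng(seed).random((N_STATES, N_OBJECTIVES))
    return lambda s: table[s]


def q_learning(reward_fn, step_fn, q_init=None, episodes=200,
               alpha=0.1, gamma=0.95, eps=0.1):
    # Generic tabular Q-learning; reward_fn decides whether we are learning
    # an intrinsically motivated skill or a preference-specific policy.
    q = np.zeros((N_STATES, N_ACTIONS)) if q_init is None else q_init.copy()
    for _ in range(episodes):
        s = int(rng.integers(N_STATES))
        for _ in range(50):
            a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(q[s].argmax())
            s2 = step_fn(s, a)
            q[s, a] += alpha * (reward_fn(s, a, s2) + gamma * q[s2].max() - q[s, a])
            s = s2
    return q


# Phase 1: learn a generic, task-agnostic skill using a count-based
# novelty bonus as the intrinsic reward (rarely visited states pay more).
step = make_dynamics(seed=1)
visits = np.ones(N_STATES)

def novelty_reward(s, a, s2):
    visits[s2] += 1
    return 1.0 / np.sqrt(visits[s2])

skill_q = q_learning(novelty_reward, step)

# Phase 2: after a dynamics shift, warm-start one policy per preference
# weighting from the skill, forming the policy coverage set.
step = make_dynamics(seed=2)          # the environment has changed
reward_vec = make_reward(seed=2)
coverage_set = []
for w in (np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])):
    def extrinsic(s, a, s2, w=w):
        # Linear scalarization of the vector reward under preference w.
        return float(w @ reward_vec(s2))
    coverage_set.append((w, q_learning(extrinsic, step, q_init=skill_q)))

print(f"learned a coverage set of {len(coverage_set)} policies")
```

The warm start (`q_init=skill_q`) stands in for the "bootstrap" step: rather than learning each preference-specific policy from scratch after the shift, learning resumes from the generic skill's value estimates.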