Intrinsically Motivated Hierarchical Policy Learning in Multi-objective
Markov Decision Processes
- URL: http://arxiv.org/abs/2308.09733v1
- Date: Fri, 18 Aug 2023 02:10:45 GMT
- Title: Intrinsically Motivated Hierarchical Policy Learning in Multi-objective
Markov Decision Processes
- Authors: Sherif Abdelfattah, Kathryn Merrick, Jiankun Hu
- Abstract summary: We propose a novel dual-phase intrinsically motivated reinforcement learning method to address the inability of evolved policy coverage sets to generalize to non-stationary environments.
We show experimentally that the proposed method significantly outperforms state-of-the-art multi-objective reinforcement learning methods in a dynamic robotics environment.
- Score: 15.50007257943931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-objective Markov decision processes are sequential decision-making
problems that involve multiple conflicting reward functions that cannot be
optimized simultaneously without a compromise. Unlike the conventional case,
this type of problem cannot be solved by a single optimal policy. Instead,
multi-objective reinforcement learning methods evolve a coverage set of optimal
policies that can satisfy all possible preferences in solving the problem.
However, many of these methods cannot generalize their coverage sets to
non-stationary environments, in which the parameters of the state transition
and reward distributions vary over time. This limitation results in significant
performance degradation for the evolved policy sets. To overcome this
limitation, there is a need for a generic skill set that can bootstrap the
evolution of the policy coverage set after each shift in the environment
dynamics, thereby facilitating a continuous learning process. In this work,
intrinsically motivated reinforcement learning is deployed to evolve generic
skill sets for learning hierarchical policies that solve multi-objective Markov
decision processes. We propose a novel dual-phase intrinsically motivated
reinforcement learning method to address this limitation. In the first phase, a
generic set of skills is learned; in the second phase, this set is used to
bootstrap policy coverage sets for each shift in the environment dynamics. We
show experimentally that the proposed method significantly outperforms
state-of-the-art multi-objective reinforcement learning methods in a dynamic
robotics environment.
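The abstract describes the two phases only at a high level, so the toy Python sketch below is purely illustrative and is not the authors' algorithm: it wires the dual-phase idea together on a made-up two-objective corridor task. Phase one trains a small set of per-skill Q-tables from a count-based novelty bonus (a stand-in for the intrinsic-motivation signal), ignoring extrinsic rewards; phase two freezes those skills and, for each assumed preference vector, trains a meta Q-table that selects among them using the linearly scalarized vector reward. The environment, hyperparameters, and intrinsic-reward choice are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-objective corridor MDP (all details made up for illustration):
# states 0..N_STATES-1, actions 0 = left, 1 = right. Reaching the right end
# pays objective 0, reaching the left end pays objective 1, so they conflict.
N_STATES = 10
N_SKILLS, GAMMA, ALPHA, EPS = 2, 0.95, 0.1, 0.1

def step(s, a):
    s2 = min(N_STATES - 1, s + 1) if a == 1 else max(0, s - 1)
    r_vec = np.array([1.0 if s2 == N_STATES - 1 else 0.0,
                      1.0 if s2 == 0 else 0.0])      # vector-valued reward
    return s2, r_vec

def epsilon_greedy(q_row):
    if rng.random() < EPS:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

# ---- Phase 1: evolve a generic skill set from intrinsic rewards only ----
# A count-based novelty bonus stands in for "intrinsic motivation".
skills = np.zeros((N_SKILLS, N_STATES, 2))           # one Q-table per skill
visits = np.ones((N_SKILLS, N_STATES))

for episode in range(400):
    k = episode % N_SKILLS                           # round-robin over skills
    s = N_STATES // 2
    for _ in range(30):
        a = epsilon_greedy(skills[k, s])
        s2, _ = step(s, a)                           # extrinsic reward ignored
        visits[k, s2] += 1
        r_int = 1.0 / np.sqrt(visits[k, s2])         # novelty bonus
        skills[k, s, a] += ALPHA * (r_int + GAMMA * skills[k, s2].max()
                                    - skills[k, s, a])
        s = s2

# ---- Phase 2: bootstrap a coverage set of hierarchical policies ----
# For each preference vector w, a meta Q-table picks which frozen skill to
# execute and is trained on the linearly scalarized extrinsic reward w . r.
preferences = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
coverage_set = {}

for w in preferences:
    meta_q = np.zeros((N_STATES, N_SKILLS))
    for episode in range(400):
        s = N_STATES // 2
        for _ in range(10):
            k = epsilon_greedy(meta_q[s])
            scalar_r, s2 = 0.0, s
            for _ in range(3):                       # run the skill greedily
                a = int(np.argmax(skills[k, s2]))
                s2, r_vec = step(s2, a)
                scalar_r += float(w @ r_vec)         # linear scalarization
            meta_q[s, k] += ALPHA * (scalar_r + GAMMA * meta_q[s2].max()
                                     - meta_q[s, k])
            s = s2
    coverage_set[tuple(w.tolist())] = meta_q         # one policy per preference

# Which skill each preference selects from the middle of the corridor:
print({w: int(np.argmax(q[N_STATES // 2])) for w, q in coverage_set.items()})
```

The point the sketch tries to convey is that the skill set is learned once, independently of any preference, so only the lightweight meta-policy over skills has to be re-trained when the preference or the environment dynamics shift.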
Related papers
- HarmoDT: Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning [72.25707314772254]
We introduce the Harmony Multi-Task Decision Transformer (HarmoDT), a novel solution designed to identify an optimal harmony subspace of parameters for each task.
The upper level of this framework is dedicated to learning a task-specific mask that delineates the harmony subspace, while the inner level focuses on updating parameters to enhance the overall performance of the unified policy.
arXiv Detail & Related papers (2024-05-28T11:41:41Z)
- Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning [26.244121960815907]
We propose a primal-based framework that orchestrates policy optimization between multi-objective learning and constraint adherence.
Our method employs a novel natural policy gradient manipulation method to optimize multiple RL objectives.
Empirically, our proposed method also outperforms prior state-of-the-art methods on challenging safe multi-objective reinforcement learning tasks.
arXiv Detail & Related papers (2024-05-26T00:42:10Z)
- A Robust Policy Bootstrapping Algorithm for Multi-objective Reinforcement Learning in Non-stationary Environments [15.794728813746397]
Multi-objective reinforcement learning methods fuse the reinforcement learning paradigm with multi-objective optimization techniques.
One major drawback of these methods is the lack of adaptability to non-stationary dynamics in the environment.
We propose a novel multi-objective reinforcement learning algorithm that can robustly evolve a convex coverage set of policies in an online manner in non-stationary environments.
arXiv Detail & Related papers (2023-08-18T02:15:12Z)
- Stepsize Learning for Policy Gradient Methods in Contextual Markov Decision Processes [35.889129338603446]
Policy-based algorithms are among the most widely adopted techniques in model-free RL.
They tend to struggle when asked to accomplish a series of heterogeneous tasks.
We introduce a new formulation, known as meta-MDP, that can be used to solve any hyperparameter selection problem in RL.
arXiv Detail & Related papers (2023-06-13T12:58:12Z)
- Policy Dispersion in Non-Markovian Environment [53.05904889617441]
This paper aims to learn diverse policies from the history of state-action pairs in a non-Markovian environment.
We first adopt a transformer-based method to learn policy embeddings.
Then, we stack the policy embeddings to construct a dispersion matrix to induce a set of diverse policies.
arXiv Detail & Related papers (2023-02-28T11:58:39Z)
- UNIFY: a Unified Policy Designing Framework for Solving Constrained Optimization Problems with Machine Learning [18.183339583346005]
We propose a unified framework to design a solution policy for complex decision-making problems.
Our approach relies on a clever decomposition of the policy into two stages, namely an unconstrained ML model and a constrained optimization (CO) problem.
We demonstrate the method effectiveness on two practical problems, namely an Energy Management System and the Set Multi-cover with coverage requirements.
arXiv Detail & Related papers (2022-10-25T14:09:24Z)
- Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space [76.46113138484947]
General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments.
To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach goals for a wide range of tasks on command.
We propose Planning to Practice, a method that makes it practical to train goal-conditioned policies for long-horizon tasks.
arXiv Detail & Related papers (2022-05-17T06:58:17Z)
- Constructing a Good Behavior Basis for Transfer using Generalized Policy Updates [63.58053355357644]
We study the problem of learning a good set of policies, so that when combined together, they can solve a wide variety of unseen reinforcement learning tasks.
We show theoretically that having access to a specific set of diverse policies, which we call a set of independent policies, can allow for instantaneously achieving high-level performance.
arXiv Detail & Related papers (2021-12-30T12:20:46Z)
- State Augmented Constrained Reinforcement Learning: Overcoming the Limitations of Learning with Rewards [88.30521204048551]
A common formulation of constrained reinforcement learning involves multiple rewards that must individually accumulate to given thresholds.
We show a simple example in which the desired optimal policy cannot be induced by any weighted linear combination of rewards.
This work addresses this shortcoming by augmenting the state with Lagrange multipliers and reinterpreting primal-dual methods.
arXiv Detail & Related papers (2021-02-23T21:07:35Z)
- One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL [142.36621929739707]
We show that learning diverse behaviors for accomplishing a task can lead to behavior that generalizes to varying environments.
By identifying multiple solutions for the task in a single environment during training, our approach can generalize to new situations.
arXiv Detail & Related papers (2020-10-27T17:41:57Z)
- SOAC: The Soft Option Actor-Critic Architecture [25.198302636265286]
Methods have been proposed for concurrently learning low-level intra-option policies and a high-level option selection policy.
Existing methods typically suffer from two major challenges: ineffective exploration and unstable updates.
We present a novel and stable off-policy approach that builds on the maximum entropy model to address these challenges.
arXiv Detail & Related papers (2020-06-25T13:06:59Z)