Sample-Efficient Multi-Objective Learning via Generalized Policy
Improvement Prioritization
- URL: http://arxiv.org/abs/2301.07784v2
- Date: Thu, 23 Mar 2023 16:29:23 GMT
- Title: Sample-Efficient Multi-Objective Learning via Generalized Policy
Improvement Prioritization
- Authors: Lucas N. Alegre and Ana L. C. Bazzan and Diederik M. Roijers and Ann
Nowé and Bruno C. da Silva
- Abstract summary: Multi-objective reinforcement learning (MORL) algorithms tackle sequential decision problems where agents may have different preferences.
We introduce a novel algorithm that uses Generalized Policy Improvement (GPI) to define principled, formally-derived prioritization schemes.
We empirically show that our method outperforms state-of-the-art MORL algorithms in challenging multi-objective tasks.
- Score: 8.836422771217084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-objective reinforcement learning (MORL) algorithms tackle sequential
decision problems where agents may have different preferences over (possibly
conflicting) reward functions. Such algorithms often learn a set of policies
(each optimized for a particular agent preference) that can later be used to
solve problems with novel preferences. We introduce a novel algorithm that uses
Generalized Policy Improvement (GPI) to define principled, formally-derived
prioritization schemes that improve sample-efficient learning. They implement
active-learning strategies by which the agent can (i) identify the most
promising preferences/objectives to train on at each moment, to more rapidly
solve a given MORL problem; and (ii) identify which previous experiences are
most relevant when learning a policy for a particular agent preference, via a
novel Dyna-style MORL method. We prove our algorithm is guaranteed to always
converge to an optimal solution in a finite number of steps, or an
$\epsilon$-optimal solution (for a bounded $\epsilon$) if the agent is limited
and can only identify possibly sub-optimal policies. We also prove that our
method monotonically improves the quality of its partial solutions while
learning. Finally, we introduce a bound that characterizes the maximum utility
loss (with respect to the optimal solution) incurred by the partial solutions
computed by our method throughout learning. We empirically show that our method
outperforms state-of-the-art MORL algorithms in challenging multi-objective
tasks, both with discrete and continuous state and action spaces.
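To make the prioritization idea concrete, here is a minimal Python sketch of how GPI over a set of previously learned policies yields, for any linear preference, both a guaranteed value and an optimism gap that can drive the choice of which preference to train on next. The toy value vectors and the names V_PI, gpi_value, upper_bound, and next_preference_to_train are illustrative assumptions; the paper derives its prioritization bounds formally from GPI rather than from the simple per-objective maximum used here.

    import numpy as np

    # Toy setting: each previously learned policy is summarized by its multi-objective
    # value vector at the start state; under linear utility, its scalar value for a
    # preference w is the dot product w . v_pi. (Illustrative numbers, not from the paper.)
    V_PI = np.array([
        [1.0, 0.0],   # policy specialized for objective 0
        [0.0, 1.0],   # policy specialized for objective 1
        [0.4, 0.4],   # a mediocre compromise policy
    ])

    def gpi_value(w):
        """Best scalarized value guaranteed by reusing any known policy (GPI lower bound)."""
        return max(float(w @ v) for v in V_PI)

    def upper_bound(w):
        """Optimistic estimate of the optimal value for w; a per-objective maximum
        stands in for the formally derived GPI-based bounds of the paper."""
        return float(w @ V_PI.max(axis=0))

    def next_preference_to_train(candidate_ws):
        """Prioritize the preference with the largest gap between the optimistic estimate
        and what GPI over the current policy set already achieves."""
        gaps = [upper_bound(w) - gpi_value(w) for w in candidate_ws]
        return candidate_ws[int(np.argmax(gaps))]

    candidates = [np.array([a, 1.0 - a]) for a in np.linspace(0.0, 1.0, 11)]
    print("train next on preference:", next_preference_to_train(candidates))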
Related papers
- C-MORL: Multi-Objective Reinforcement Learning through Efficient Discovery of Pareto Front [9.04360155372014]
Constrained MORL is a seamless bridge between constrained policy optimization and MORL.
Our algorithm achieves more consistent and superior performances in terms of hypervolume, expected utility, and sparsity on both discrete and continuous control tasks.
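For reference, the hypervolume and expected-utility metrics mentioned above can be computed for a small two-objective front as sketched below. The sketch assumes maximization, linear utilities, and uniformly sampled preference weights; the function names and the toy front are illustrative and not taken from the C-MORL paper.

    import numpy as np

    def hypervolume_2d(front, ref):
        """Area dominated by a two-objective front (maximization) w.r.t. a reference point."""
        pts = sorted((p for p in front if p[0] > ref[0] and p[1] > ref[1]),
                     key=lambda p: p[0], reverse=True)
        hv, prev_y = 0.0, ref[1]
        for x, y in pts:
            if y > prev_y:
                hv += (x - ref[0]) * (y - prev_y)
                prev_y = y
        return hv

    def expected_utility(front, n_weights=1000, seed=0):
        """Average best linear utility over preference weights drawn from the simplex."""
        rng = np.random.default_rng(seed)
        w = rng.dirichlet(np.ones(2), size=n_weights)
        return float(np.mean(np.max(w @ np.asarray(front).T, axis=1)))

    front = [(1.0, 0.0), (0.8, 0.6), (0.5, 0.9), (0.0, 1.0)]
    print(hypervolume_2d(front, ref=(0.0, 0.0)), expected_utility(front))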
arXiv Detail & Related papers (2024-10-03T06:13:56Z)
- Take a Step and Reconsider: Sequence Decoding for Self-Improved Neural Combinatorial Optimization [1.1510009152620668]
We present a simple and problem-independent sequence decoding method for self-improved learning.
By modifying the policy to ignore previously sampled sequences, we force it to consider only unseen alternatives.
Our method outperforms previous NCO approaches on the Job Shop Scheduling Problem.
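The idea of modifying the policy to ignore previously sampled sequences can be sketched as masking, at every decoding step, any action whose entire completion subtree has already been sampled. The fixed per-step logits, the tiny alphabet, and the helper names exhausted and sample_unseen below are illustrative toys, not the paper's neural decoder.

    import numpy as np

    K, L = 3, 4                        # toy alphabet size and sequence length
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(L, K))   # fixed per-step logits stand in for a learned policy
    seen = set()                       # previously sampled sequences

    def exhausted(prefix):
        """True if every completion of this prefix has already been sampled."""
        count = sum(1 for s in seen if s[:len(prefix)] == prefix)
        return count == K ** (L - len(prefix))

    def sample_unseen():
        """Sample one sequence while masking actions with fully explored subtrees."""
        prefix = ()
        for t in range(L):
            p = np.exp(logits[t] - logits[t].max())
            mask = np.array([0.0 if exhausted(prefix + (a,)) else 1.0 for a in range(K)])
            p = p * mask
            p = p / p.sum()
            prefix = prefix + (int(rng.choice(K, p=p)),)
        seen.add(prefix)
        return prefix

    for _ in range(5):
        print(sample_unseen())         # five distinct sequences by construction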
arXiv Detail & Related papers (2024-07-24T12:06:09Z)
- Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning [139.53668999720605]
We present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO.
We prove that with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate.
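A minimal sketch of the per-agent update described above: each agent maximizes the standard PPO clipped surrogate on its own action probabilities and advantages. The function name and toy arrays are illustrative, assuming importance ratios and advantages have already been computed.

    import numpy as np

    def ppo_clip_objective(ratio, advantage, eps=0.2):
        """Vanilla PPO clipped surrogate (to be maximized) for one agent's local policy."""
        return float(np.mean(np.minimum(ratio * advantage,
                                        np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)))

    # ratio_i = pi_new_i(a | s) / pi_old_i(a | s), evaluated on agent i's own actions.
    ratio = np.array([1.1, 0.7, 1.4, 0.95])
    advantage = np.array([0.5, -0.2, 1.0, 0.1])
    print(ppo_clip_objective(ratio, advantage))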
arXiv Detail & Related papers (2023-05-08T16:20:03Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for TMDPs, obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer [7.970144204429356]
We introduce an SF-based extension of the Optimistic Linear Support algorithm to learn a set of policies whose SFs form a convex coverage set.
We prove that policies in this set can be combined via generalized policy improvement to construct optimal behaviors for any new linearly-expressible tasks.
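The GPI step referred to here is easy to state when each policy pi comes with successor features psi_pi such that Q_w^pi(s, a) = psi_pi(s, a) . w: act greedily with respect to the best known evaluation, argmax_a max_pi psi_pi(s, a) . w. The toy PSI array and the name gpi_action in the sketch below are illustrative, for a single state.

    import numpy as np

    # Successor features of two known policies at one state, for three actions and
    # a two-dimensional reward feature space (illustrative numbers).
    PSI = np.array([
        [[1.0, 0.0], [0.5, 0.1], [0.0, 0.0]],   # psi of policy 0 at state s
        [[0.0, 0.9], [0.2, 0.6], [0.1, 1.0]],   # psi of policy 1 at state s
    ])

    def gpi_action(psi_s, w):
        """Generalized Policy Improvement: argmax_a max_pi psi_pi(s, a) . w."""
        q = psi_s @ w                  # shape (n_policies, n_actions)
        return int(np.argmax(q.max(axis=0)))

    print(gpi_action(PSI, np.array([0.3, 0.7])))   # best action for a new task weight w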
arXiv Detail & Related papers (2022-06-22T19:00:08Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment [79.5678820246642]
We show that certain action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process.
arXiv Detail & Related papers (2021-06-28T21:29:13Z)
- Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z)
- Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
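For intuition, the envelope-style backup that the enveloped multi-objective $Q$-learning family relies on bootstraps from the action/preference pair whose vector value is best for the current preference. The tabular sketch below illustrates only that backup; the model-based EVI details are in the paper, and the shapes and names here are illustrative assumptions.

    import numpy as np

    def envelope_backup(r, Q_next, w, gamma=0.99):
        """One envelope-style backup of a vector-valued Q estimate.
        r: reward vector (d,); Q_next: (n_actions, n_prefs, d) vector Q-values at the
        next state under a set of candidate preferences; w: current preference (d,)."""
        scalarized = np.tensordot(Q_next, w, axes=([2], [0]))    # (n_actions, n_prefs)
        a_star, j_star = np.unravel_index(np.argmax(scalarized), scalarized.shape)
        return r + gamma * Q_next[a_star, j_star]

    r = np.array([1.0, 0.0])
    Q_next = np.zeros((2, 3, 2))
    Q_next[1, 2] = [0.5, 2.0]
    print(envelope_backup(r, Q_next, w=np.array([0.2, 0.8])))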
arXiv Detail & Related papers (2020-11-19T22:35:31Z)
- Submodular Meta-Learning [43.15332631500541]
We introduce a discrete variant of the meta-learning framework to improve performance on future tasks.
Our approach aims at using prior data, i.e., previously visited tasks, to train a proper initial solution set.
We show that our framework leads to a significant reduction in computational complexity in solving the new tasks while incurring a small performance loss.
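In the discrete setting, training a good initial solution set is an instance of monotone submodular maximization under a cardinality constraint, for which the classic greedy routine below (with its 1 - 1/e guarantee) is the standard building block. The toy coverage objective and names are illustrative, not the paper's meta-learning objective.

    def greedy_submodular(ground_set, f, k):
        """Greedy maximization of a monotone submodular set function f, subject to |S| <= k."""
        S = set()
        for _ in range(k):
            remaining = ground_set - S
            if not remaining:
                break
            gains = {x: f(S | {x}) - f(S) for x in remaining}
            best = max(gains, key=gains.get)
            if gains[best] <= 0:
                break
            S.add(best)
        return S

    # Toy coverage objective: each element covers some tasks; f(S) = number of tasks covered.
    coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"a"}}
    f = lambda S: len(set().union(*(coverage[x] for x in S))) if S else 0
    print(greedy_submodular(set(coverage), f, k=2))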
arXiv Detail & Related papers (2020-07-11T21:02:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.