Pareto Conditioned Networks
- URL: http://arxiv.org/abs/2204.05036v1
- Date: Mon, 11 Apr 2022 12:09:51 GMT
- Title: Pareto Conditioned Networks
- Authors: Mathieu Reymond, Eugenio Bargiacchi, Ann Nowé
- Abstract summary: We propose a method that uses a single neural network to encompass all non-dominated policies.
PCN associates every past transition with its episode's return and trains the network such that, when conditioned on this same return, it should reenact said transition.
Our method is stable as it learns in a supervised fashion, thus avoiding moving target issues.
- Score: 1.7188280334580197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In multi-objective optimization, learning all the policies that reach
Pareto-efficient solutions is an expensive process. The set of optimal policies
can grow exponentially with the number of objectives, and recovering all
solutions requires an exhaustive exploration of the entire state space. We
propose Pareto Conditioned Networks (PCN), a method that uses a single neural
network to encompass all non-dominated policies. PCN associates every past
transition with its episode's return. It trains the network such that, when
conditioned on this same return, it should reenact said transition. In doing so
we transform the optimization problem into a classification problem. We recover
a concrete policy by conditioning the network on the desired Pareto-efficient
solution. Our method is stable as it learns in a supervised fashion, thus
avoiding moving target issues. Moreover, by using a single network, PCN scales
efficiently with the number of objectives. Finally, it makes minimal
assumptions on the shape of the Pareto front, which makes it suitable to a
wider range of problems than previous state-of-the-art multi-objective
reinforcement learning algorithms.
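As a rough illustration of the conditioning-and-classification idea described in the abstract, the sketch below conditions a small policy network on a desired return and trains it with a cross-entropy loss on logged transitions. This is a minimal sketch only, assuming a discrete action space and a PyTorch setup; the class and function names (ReturnConditionedPolicy, train_step, act) are illustrative rather than taken from the paper, and the full PCN method additionally conditions on a desired horizon, which this sketch omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReturnConditionedPolicy(nn.Module):
    """Small policy network conditioned on a desired multi-objective return."""

    def __init__(self, obs_dim: int, n_objectives: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_objectives, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor, desired_return: torch.Tensor) -> torch.Tensor:
        # Concatenate the observation with the return the agent is asked to achieve.
        return self.net(torch.cat([obs, desired_return], dim=-1))


def train_step(policy, optimizer, obs, episode_return, action):
    """Supervised update: when conditioned on the return its episode actually
    obtained, the network should reproduce the logged action (classification)."""
    logits = policy(obs, episode_return)
    loss = F.cross_entropy(logits, action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def act(policy, obs, target_return):
    """At execution time, condition on a desired (e.g. Pareto-efficient) return."""
    with torch.no_grad():
        logits = policy(obs.unsqueeze(0), target_return.unsqueeze(0))
    return int(logits.argmax(dim=-1))
```

At evaluation time, a concrete policy is then recovered by fixing `target_return` to one of the non-dominated returns discovered during training, matching the abstract's description of conditioning on the desired Pareto-efficient solution.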
Related papers
- Joint Admission Control and Resource Allocation of Virtual Network Embedding via Hierarchical Deep Reinforcement Learning [69.00997996453842]
We propose a hierarchical deep reinforcement learning approach (HRL-ACRA) to learn a joint admission control and resource allocation policy for virtual network embedding.
We show that HRL-ACRA outperforms state-of-the-art baselines in terms of both the acceptance ratio and long-term average revenue.
arXiv Detail & Related papers (2024-06-25T07:42:30Z) - Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Non-orthogonal Age-Optimal Information Dissemination in Vehicular
Networks: A Meta Multi-Objective Reinforcement Learning Approach [0.0]
A roadside unit (RSU) provides timely updates about a set of physical processes to vehicles.
The formulated problem is a multi-objective mixed-integer nonlinear programming problem.
We develop a hybrid deep Q-network (DQN)-deep deterministic policy gradient (DDPG) model to solve each optimization sub-problem.
arXiv Detail & Related papers (2024-02-15T16:51:47Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Latent-Conditioned Policy Gradient for Multi-Objective Deep Reinforcement Learning [2.1408617023874443]
We propose a novel multi-objective reinforcement learning (MORL) algorithm that trains a single neural network via policy gradient.
The proposed method works in both continuous and discrete action spaces with no design change of the policy network.
arXiv Detail & Related papers (2023-03-15T20:07:48Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for topological MDPs (TMDPs), obtained as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - PD-MORL: Preference-Driven Multi-Objective Reinforcement Learning
Algorithm [0.18416014644193063]
We propose a novel MORL algorithm that trains a single universal network to cover the entire preference space scalable to continuous robotic tasks.
PD-MORL achieves up to 25% larger hypervolume for challenging continuous control tasks and uses an order of magnitude fewer trainable parameters compared to prior approaches.
arXiv Detail & Related papers (2022-08-16T19:23:02Z) - Optimistic Linear Support and Successor Features as a Basis for Optimal
Policy Transfer [7.970144204429356]
We introduce an SF-based extension of the Optimistic Linear Support algorithm to learn a set of policies whose SFs form a convex coverage set.
We prove that policies in this set can be combined via generalized policy improvement to construct optimal behaviors for any new linearly-expressible tasks (see the short sketch after this list).
arXiv Detail & Related papers (2022-06-22T19:00:08Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Efficient Multi-Objective Optimization for Deep Learning [2.0305676256390934]
Multi-objective optimization (MOO) is a prevalent challenge for Deep Learning.
There exists no scalable MOO solution for truly deep neural networks.
arXiv Detail & Related papers (2021-03-24T17:59:42Z) - Deep Policy Dynamic Programming for Vehicle Routing Problems [89.96386273895985]
We propose Deep Policy Dynamic Programming (DPDP) to combine the strengths of learned neural heuristics with those of dynamic programming algorithms.
DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions.
We evaluate our framework on the travelling salesman problem (TSP) and the vehicle routing problem (VRP) and show that the neural policy improves the performance of (restricted) DP algorithms.
arXiv Detail & Related papers (2021-02-23T15:33:57Z)
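The generalized policy improvement (GPI) step relied on by the "Optimistic Linear Support and Successor Features" entry above can be written compactly. The following is a minimal tabular sketch under assumed array shapes; the function name, argument names, and shapes are illustrative and not taken from that paper.

```python
import numpy as np

def gpi_action(sf_set: np.ndarray, state: int, w: np.ndarray) -> int:
    """Generalized policy improvement over a set of successor features.

    sf_set: shape (n_policies, n_states, n_actions, n_features), where
            sf_set[i, s, a] approximates the successor features psi^{pi_i}(s, a).
    w:      shape (n_features,), the weights of a new linearly-expressible task,
            so that Q_i(s, a) = psi^{pi_i}(s, a) . w.
    """
    q = sf_set[:, state] @ w                # (n_policies, n_actions)
    return int(q.max(axis=0).argmax())      # best action across all known policies
```

Which policies' successor features populate `sf_set` is exactly what the convex coverage set construction in that entry determines; the sketch only shows how they are combined at decision time.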
This list is automatically generated from the titles and abstracts of the papers on this site.