Multi-Objective Learning to Predict Pareto Fronts Using Hypervolume
Maximization
- URL: http://arxiv.org/abs/2102.04523v1
- Date: Mon, 8 Feb 2021 20:41:21 GMT
- Title: Multi-Objective Learning to Predict Pareto Fronts Using Hypervolume
Maximization
- Authors: Timo M. Deist, Monika Grewal, Frank J.W.M. Dankers, Tanja
Alderliesten, Peter A.N. Bosman
- Abstract summary: Real-world problems are often multi-objective with decision-makers unable to specify a priori which trade-off between the conflicting objectives is preferable.
We propose a novel learning approach to estimate the Pareto front by maximizing the dominated hypervolume (HV) of the average loss vectors corresponding to a set of learners.
In our approach, the set of learners are trained multi-objectively with a dynamic loss function, wherein each learner's losses are weighted by their HV maximizing gradients.
Experiments on three different multi-objective tasks show that the outputs of the set of learners are indeed well-spread on the Pareto front.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world problems are often multi-objective with decision-makers unable to
specify a priori which trade-off between the conflicting objectives is
preferable. Intuitively, building machine learning solutions in such cases
would entail providing multiple predictions that span and uniformly cover the
Pareto front of all optimal trade-off solutions. We propose a novel learning
approach to estimate the Pareto front by maximizing the dominated hypervolume
(HV) of the average loss vectors corresponding to a set of learners, leveraging
established multi-objective optimization methods. In our approach, the set of
learners are trained multi-objectively with a dynamic loss function, wherein
each learner's losses are weighted by their HV maximizing gradients.
Consequently, the learners get trained according to different trade-offs on the
Pareto front, which otherwise is not guaranteed for fixed linear scalarizations
or when optimizing for specific trade-offs per learner without knowing the
shape of the Pareto front. Experiments on three different multi-objective tasks
show that the outputs of the set of learners are indeed well-spread on the
Pareto front. Further, the outputs corresponding to validation samples are also
found to closely follow the trade-offs that were learned from training samples
for our set of benchmark problems.
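The training scheme in the abstract can be illustrated with a minimal sketch: a set of toy "learners" (here, scalar parameters with two conflicting quadratic losses) is updated by gradient ascent on the dominated hypervolume of their loss vectors, with each learner's parameter update weighted by the HV gradient of its losses. The bi-objective toy problem, the finite-difference HV gradient, and all hyperparameters are illustrative assumptions, not the paper's implementation; note also that this naive version leaves dominated learners with zero gradient.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Dominated hypervolume of a set of 2-D loss vectors (minimization)."""
    pts = sorted((p for p in points if p[0] < ref[0] and p[1] < ref[1]),
                 key=lambda p: p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                        # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hv_gradient(points, ref, eps=1e-6):
    """Finite-difference gradient of the hypervolume w.r.t. each loss entry."""
    grads = np.zeros_like(points)
    base = hypervolume_2d(points, ref)
    for i in range(points.shape[0]):
        for j in range(points.shape[1]):
            bumped = points.copy()
            bumped[i, j] += eps
            grads[i, j] = (hypervolume_2d(bumped, ref) - base) / eps
    return grads

# Toy stand-in for "a set of learners": each learner is a scalar parameter x
# with conflicting losses f1 = x^2 and f2 = (x - 1)^2 (Pareto set: x in [0, 1]).
rng = np.random.default_rng(0)
xs = rng.uniform(-0.5, 1.5, size=5)
ref = np.array([4.0, 4.0])                      # assumed HV reference point

losses = np.stack([xs**2, (xs - 1.0) ** 2], axis=1)
hv_before = hypervolume_2d(losses, ref)

for _ in range(500):
    losses = np.stack([xs**2, (xs - 1.0) ** 2], axis=1)
    g = hv_gradient(losses, ref)                # dHV/dlosses (non-positive)
    # Chain rule through each learner's losses, then ascend the hypervolume.
    dx = g[:, 0] * 2 * xs + g[:, 1] * 2 * (xs - 1.0)
    xs += 0.01 * dx

hv_after = hypervolume_2d(np.stack([xs**2, (xs - 1.0) ** 2], axis=1), ref)
```

Because each learner's parameter update is driven by a different slice of the HV gradient, the learners drift toward distinct trade-offs rather than collapsing to one point, which is the spreading effect the abstract describes.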
Related papers
- Pareto Inverse Reinforcement Learning for Diverse Expert Policy Generation [6.876580618014666]
In this paper, we adapt inverse reinforcement learning (IRL) by using reward distance estimates for regularizing the discriminator.
We show that ParIRL outperforms other IRL algorithms for various multi-objective control tasks.
arXiv Detail & Related papers (2024-08-22T03:51:39Z)
- Pareto Front Shape-Agnostic Pareto Set Learning in Multi-Objective Optimization [6.810571151954673]
Existing methods rely on the mapping of preference vectors in the objective space to optimal solutions in the decision space.
Our proposed method can handle any shape of the Pareto front and learn the Pareto set without requiring prior knowledge.
arXiv Detail & Related papers (2024-08-11T14:09:40Z)
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
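The weight-ensembling idea in this summary can be sketched as interpolating the weights of two single-task "experts" to obtain a family of fused models tracing different trade-offs. The linear interpolation and the toy weight matrices below are illustrative assumptions; the paper's actual MoE fusion module is more involved than a fixed convex combination.

```python
import numpy as np

# Toy "experts": two single-task models sharing one weight-matrix shape.
rng = np.random.default_rng(1)
w_task_a = rng.normal(size=(4, 3))   # stand-in for an expert trained on task A
w_task_b = rng.normal(size=(4, 3))   # stand-in for an expert trained on task B

def fuse(alpha):
    """Convex combination of expert weights; alpha selects a trade-off."""
    return alpha * w_task_a + (1.0 - alpha) * w_task_b

# Sweeping alpha in [0, 1] yields fused models approximating different
# trade-offs between the two tasks without retraining either expert.
fused_models = [fuse(a) for a in np.linspace(0.0, 1.0, 11)]
```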
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-Objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
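The weight-vector search described here can be sketched generically as a UCB1 bandit over a discretized set of weight vectors, where each pull stands in for one (noisy) evaluation of the scalarized return under that weighting. The candidate grid, the simulated reward model, and the noise level are assumptions for illustration, not the paper's setup.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Candidate weight vectors over two objectives (assumed discretization).
candidates = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 6)]

def evaluate(w):
    """Stand-in for a noisy training run scored by the scalarized return."""
    true_value = -abs(w[0] - 0.75)          # pretend w ~ (0.75, 0.25) is best
    return true_value + rng.normal(scale=0.05)

counts = np.zeros(len(candidates))
means = np.zeros(len(candidates))
for t in range(1, 201):
    # UCB1: exploit high sample means, explore rarely tried weight vectors.
    ucb = np.where(counts > 0,
                   means + np.sqrt(2.0 * math.log(t) / np.maximum(counts, 1)),
                   np.inf)
    i = int(np.argmax(ucb))
    r = evaluate(candidates[i])
    counts[i] += 1
    means[i] += (r - means[i]) / counts[i]  # incremental mean update

best = candidates[int(np.argmax(means))]    # most promising weight vector
```

The confidence bonus shrinks for frequently evaluated weight vectors, so the search concentrates its budget on the most promising scalarizations over time.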
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Divide and Conquer: Provably Unveiling the Pareto Front with Multi-Objective Reinforcement Learning [2.5115843173830252]
We introduce IPRO, a principled algorithm that decomposes the task of finding the Pareto front into a sequence of single-objective problems.
Empirical evaluations demonstrate that IPRO matches or outperforms methods that require additional domain knowledge.
By leveraging problem-specific single-objective solvers, our approach holds promise for applications beyond multi-objective reinforcement learning.
arXiv Detail & Related papers (2024-02-11T12:35:13Z)
- Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z)
- Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models [50.33956216274694]
In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution.
We propose Pareto Manifold Learning, an ensembling method in weight space.
arXiv Detail & Related papers (2022-10-18T11:20:54Z)
- Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z)
- Efficient Continuous Pareto Exploration in Multi-Task Learning [34.41682709915956]
We present a novel, efficient method for continuous analysis of optimal solutions in machine learning problems.
We scale up theoretical results in multi-objective optimization to modern machine learning problems by proposing a sample-based sparse linear system.
arXiv Detail & Related papers (2020-06-29T23:36:20Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.