Bi-level Multi-objective Evolutionary Learning: A Case Study on
Multi-task Graph Neural Topology Search
- URL: http://arxiv.org/abs/2302.02565v1
- Date: Mon, 6 Feb 2023 04:59:51 GMT
- Title: Bi-level Multi-objective Evolutionary Learning: A Case Study on
Multi-task Graph Neural Topology Search
- Authors: Chao Wang, Licheng Jiao, Jiaxuan Zhao, Lingling Li, Xu Liu, Fang Liu,
Shuyuan Yang
- Abstract summary: This paper proposes a bi-level multi-objective learning framework (BLMOL)
that couples the decision-making process with the optimization process of the UL-MOP.
The preference surrogate model is constructed to replace the expensive evaluation process of the UL-MOP.
- Score: 47.59828447981408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The construction of machine learning models involves many bi-level
multi-objective optimization problems (BL-MOPs), where upper level (UL)
candidate solutions must be evaluated via training weights of a model in the
lower level (LL). Due to the Pareto optimality of sub-problems and the complex
dependency between UL solutions and LL weights, a UL solution is feasible if
and only if the LL weight is Pareto optimal. It is computationally expensive to
determine which LL Pareto weight in the LL Pareto weight set is the most
appropriate for each UL solution. This paper proposes a bi-level
multi-objective learning framework (BLMOL), coupling the above decision-making
process with the optimization process of the UL-MOP by introducing LL
preference $r$. Specifically, the UL variable and $r$ are simultaneously
searched to minimize multiple UL objectives by evolutionary multi-objective
algorithms. The LL weight with respect to $r$ is trained to minimize multiple
LL objectives via gradient-based preference multi-objective algorithms. In
addition, the preference surrogate model is constructed to replace the
expensive evaluation process of the UL-MOP. We consider a novel case study on
multi-task graph neural topology search. It aims to find a set of Pareto
topologies and their Pareto weights, representing different trade-offs across
tasks at UL and LL, respectively. The found graph neural network is employed to
solve multiple tasks simultaneously, including graph classification, node
classification, and link prediction. Experimental results demonstrate that
BLMOL can outperform some state-of-the-art algorithms and generate
well-representative UL solutions and LL weights.
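The bi-level loop described in the abstract can be sketched as follows. Everything here (the toy objectives, the closed-form stand-in for LL training, and the simple mutate-and-truncate evolutionary step) is an illustrative assumption rather than the authors' BLMOL implementation; the point is only the coupling of each UL candidate with an LL preference $r$, so that UL search and LL preference are optimized jointly.

```python
import random

def ll_train(topology, r):
    # Toy stand-in for gradient-based preference-conditioned LL training:
    # the "trained weights" are just the topology scaled by the preference r.
    return [t * p for t, p in zip(topology, r)]

def ul_objectives(topology, weights):
    # Two conflicting toy UL objectives evaluated on the "trained" model.
    f1 = sum((t - w) ** 2 for t, w in zip(topology, weights))  # fit proxy
    f2 = sum(abs(w) for w in weights)                          # cost proxy
    return (f1, f2)

def dominates(a, b):
    # Pareto dominance for minimization: a is no worse everywhere, better somewhere.
    return all(x <= y for x, y in zip(a, b)) and a != b

def mutate(vec, rng, sigma=0.1):
    return [min(1.0, max(0.0, v + rng.gauss(0.0, sigma))) for v in vec]

def blmol_sketch(pop_size=16, gens=25, dim=3, seed=0):
    rng = random.Random(seed)
    # Each UL individual couples a topology vector with an LL preference r.
    pop = [([rng.random() for _ in range(dim)],
            [rng.random() for _ in range(dim)]) for _ in range(pop_size)]
    for _ in range(gens):
        # UL variation: mutate topology and preference jointly.
        children = [(mutate(t, rng), mutate(r, rng)) for t, r in pop]
        combined = pop + children
        scored = [(ul_objectives(t, ll_train(t, r)), (t, r)) for t, r in combined]
        # Environmental selection: non-dominated individuals first,
        # then fill remaining slots by objective sum.
        front = [s for s in scored
                 if not any(dominates(o[0], s[0]) for o in scored)]
        rest = sorted((s for s in scored if s not in front),
                      key=lambda s: sum(s[0]))
        pop = [ind for _, ind in (front + rest)[:pop_size]]
    final = [(ul_objectives(t, ll_train(t, r)), (t, r)) for t, r in pop]
    # Return the non-dominated (objectives, (topology, r)) pairs.
    return [s for s in final if not any(dominates(o[0], s[0]) for o in final)]
```

In the real framework the `ll_train` step would be replaced by preference-conditioned multi-objective gradient training, and the UL evaluation by the preference surrogate model the abstract mentions.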
Related papers
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
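A minimal illustration of the weight-ensembling idea: a convex combination of specialized models' parameters traces out a family of trade-off models. This toy linear fusion is an assumption for illustration only; the paper's MoE-based fusion routes between experts and is more elaborate.

```python
def fuse(weight_sets, coeffs):
    """Convex combination of per-task model parameter vectors.

    weight_sets: one flat parameter list per specialized single-task model.
    coeffs: mixing coefficients, assumed non-negative and summing to 1.
    """
    assert abs(sum(coeffs) - 1.0) < 1e-9
    dim = len(weight_sets[0])
    return [sum(c * ws[i] for c, ws in zip(coeffs, weight_sets))
            for i in range(dim)]
```

Sweeping `coeffs` over the simplex yields candidate models at different trade-off points, which is the sense in which fusion can approximate a Pareto set.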
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
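A minimal UCB1 sketch of the idea above: treat each candidate weight vector w as a bandit arm whose reward is a (noisy) scalarized return, and pick arms by the standard mean-plus-exploration-bonus index. The function names and reward model are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def ucb_pick(counts, means, t, c=1.0):
    # UCB1 index: mean + c * sqrt(ln t / n); untried arms are picked first.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: means[i] + c * math.sqrt(math.log(t) / counts[i]))

def search_weights(weight_vectors, evaluate, rounds=200, seed=0):
    """Bandit search over a finite set of utility weight vectors.

    evaluate(w, rng) should return a noisy scalar utility for weights w.
    """
    rng = random.Random(seed)
    k = len(weight_vectors)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        i = ucb_pick(counts, means, t)
        reward = evaluate(weight_vectors[i], rng)
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # incremental mean
    return weight_vectors[max(range(k), key=lambda i: means[i])]
```

With enough rounds the exploration bonus shrinks and pulls concentrate on the weight vector with the highest expected utility.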
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- End-to-End Pareto Set Prediction with Graph Neural Networks for Multi-objective Facility Location [10.130342722193204]
Facility location problems (FLPs) are a typical class of NP-hard optimization problems, which are widely seen in the supply chain and logistics.
In this paper, we consider the multi-objective facility location problem (MO-FLP) that simultaneously minimizes the overall cost and maximizes the system reliability.
Two graph neural networks are constructed to learn the implicit graph representation on nodes and edges.
arXiv Detail & Related papers (2022-10-27T07:15:55Z)
- Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
arXiv Detail & Related papers (2022-10-23T16:15:36Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Multi-Objective Meta Learning [2.9932638148627104]
We propose a unified gradient-based Multi-Objective Meta Learning (MOML) framework.
We show the effectiveness of the proposed MOML framework in several meta learning problems.
arXiv Detail & Related papers (2021-02-14T10:23:09Z)
- Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.