Multi-objective Optimization by Learning Space Partitions
- URL: http://arxiv.org/abs/2110.03173v2
- Date: Sun, 10 Oct 2021 17:10:32 GMT
- Title: Multi-objective Optimization by Learning Space Partitions
- Authors: Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo,
Yuandong Tian
- Abstract summary: We propose LaMOO, a novel multi-objective optimizer that learns a model from observed samples to partition the search space and then focus on promising regions.
LaMOO substantially outperforms strong baselines on multiple real-world MOO tasks.
- Score: 34.84370438997276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contrast to single-objective optimization (SOO), multi-objective
optimization (MOO) requires an optimizer to find the Pareto frontier, a subset
of feasible solutions that are not dominated by other feasible solutions. In
this paper, we propose LaMOO, a novel multi-objective optimizer that learns a
model from observed samples to partition the search space and then focus on
promising regions that are likely to contain a subset of the Pareto frontier.
The partitioning is based on the dominance number, which measures "how close" a
data point is to the Pareto frontier among existing samples. To account for
possible partition errors due to limited samples and model mismatch, we
leverage Monte Carlo Tree Search (MCTS) to exploit promising regions while
exploring suboptimal regions that may turn out to contain good solutions later.
Theoretically, we prove the efficacy of learning space partitioning via LaMOO
under certain assumptions. Empirically, in terms of HyperVolume (HV), a
popular MOO metric, LaMOO substantially outperforms strong baselines on
multiple real-world MOO tasks, by up to 225% in sample efficiency for neural
architecture search on Nasbench201, and up to 10% for molecular design.
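As context for the dominance number described above, here is a minimal sketch of how it could be computed from observed samples, assuming all objectives are minimized; the function names and toy data are illustrative, not LaMOO's actual implementation.

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if point a Pareto-dominates point b (all objectives minimized):
    a is no worse than b everywhere and strictly better somewhere."""
    return bool(np.all(a <= b) and np.any(a < b))

def dominance_numbers(points: np.ndarray) -> np.ndarray:
    """Dominance number of each sample: how many other samples dominate it.
    Samples with dominance number 0 form the Pareto frontier of the observed
    set; smaller values mean "closer" to the frontier."""
    n = len(points)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[j], points[i]):
                counts[i] += 1
    return counts

# Toy 2-objective data, both objectives minimized.
samples = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(dominance_numbers(samples))  # [0 0 1 0]: only [3, 3] is dominated
```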
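Since results are reported in terms of HyperVolume, a short sketch of the two-objective case may also help; `hypervolume_2d` and the reference point below are illustrative choices, not the exact benchmark implementation.

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: np.ndarray) -> float:
    """Hypervolume of a 2-objective front w.r.t. a reference point, assuming
    minimization and a non-dominated `front` (dominance number 0 above).
    Larger HV means a better approximation of the true Pareto frontier."""
    pts = front[np.all(front < ref, axis=1)]   # drop points outside the box
    pts = pts[np.argsort(pts[:, 0])]           # ascending f1 => descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                         # sum horizontal strips
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
print(hypervolume_2d(front, ref=np.array([5.0, 5.0])))  # 11.0
```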
Related papers
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
- Multi-Objective Neural Architecture Search by Learning Search Space Partitions [8.4553113915588]
We implement a novel meta-algorithm called LaMOO on neural architecture search (NAS) tasks.
LaMOO speeds up the search process by learning a model from observed samples to partition the search space and then focusing on promising regions.
For real-world tasks, LaMOO achieves 97.36% accuracy with only 1.62M parameters on CIFAR10 using just 600 search samples.
arXiv Detail & Related papers (2024-06-01T03:51:34Z)
- Efficient Multi-agent Reinforcement Learning by Planning [33.51282615335009]
Multi-agent reinforcement learning (MARL) algorithms have accomplished remarkable breakthroughs in solving large-scale decision-making tasks.
Most existing MARL algorithms are model-free, limiting sample efficiency and hindering their applicability in more challenging scenarios.
We propose the MAZero algorithm, which combines a centralized model with Monte Carlo Tree Search (MCTS) for policy search.
arXiv Detail & Related papers (2024-05-20T04:36:02Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- Pareto Set Learning for Expensive Multi-Objective Optimization [5.419608513284392]
Expensive multi-objective optimization problems can be found in many real-world applications.
This paper develops a novel learning-based method to approximate the whole Pareto set for multi-objective Bayesian optimization (MOBO).
arXiv Detail & Related papers (2022-10-16T09:41:54Z)
- Optimal Scaling for Locally Balanced Proposals in Discrete Spaces [65.14092237705476]
We show that the efficiency of Metropolis-Hastings (M-H) algorithms in discrete spaces can be characterized by an acceptance rate that is independent of the target distribution.
Knowledge of the optimal acceptance rate allows one to automatically tune the neighborhood size of a proposal distribution in a discrete space, directly analogous to step-size control in continuous spaces (a toy sketch of this tuning rule appears after this list).
arXiv Detail & Related papers (2022-09-16T22:09:53Z)
- Late Fusion Multi-view Clustering via Global and Local Alignment Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z)
- Multi-Task Learning on Networks [0.0]
Multi-objective optimization problems arising in the multi-task learning context have specific features and require ad hoc methods.
In this thesis the solutions in the Input Space are represented as probability distributions encapsulating the knowledge contained in the function evaluations.
In this space of probability distributions, endowed with the metric given by the Wasserstein distance, a new algorithm, MOEA/WST, can be designed in which the model is not built directly on the objective function (a minimal example of this distance appears after this list).
arXiv Detail & Related papers (2021-12-07T09:13:10Z)
- Learning Space Partitions for Path Planning [54.475949279050596]
PlaLaM outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima.
These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 245% and in molecular design by up to 0.4 on properties on a 0-1 scale.
arXiv Detail & Related papers (2021-06-19T18:06:11Z)
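As promised above for "Optimal Scaling for Locally Balanced Proposals in Discrete Spaces", here is a toy sketch of tuning a discrete proposal neighborhood toward a target acceptance rate. The random-walk proposal, the update rule, and the classic `target_rate` value are illustrative placeholders, not the paper's locally balanced proposals or its derived optimum.

```python
import numpy as np

def adaptive_mh(log_prob, x0, n_steps=5000, target_rate=0.234, seed=0):
    """Random-walk Metropolis-Hastings on the integers whose neighborhood
    size is adapted toward a target acceptance rate, mirroring step-size
    control in continuous spaces."""
    rng = np.random.default_rng(seed)
    x, radius, accepted, samples = x0, 1.0, 0, []
    for t in range(1, n_steps + 1):
        # Propose uniformly from {x - r, ..., x + r} \ {x}; the proposal is
        # symmetric, so the M-H ratio reduces to the target density ratio.
        step = int(rng.integers(1, int(radius) + 1)) * int(rng.choice([-1, 1]))
        y = x + step
        if np.log(rng.random()) < log_prob(y) - log_prob(x):
            x, accepted = y, accepted + 1
        # Diminishing adaptation: grow the neighborhood when accepting too
        # often, shrink it when accepting too rarely.
        radius = max(1.0, radius * np.exp((accepted / t - target_rate) / np.sqrt(t)))
        samples.append(x)
    return samples

# Toy target: a discretized Gaussian over the integers.
draws = adaptive_mh(lambda x: -0.5 * (x / 10.0) ** 2, x0=0)
print(np.mean(draws), np.std(draws))  # roughly 0 and 10
```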
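And for the MOEA/WST entry, a minimal example of the metric it builds on: the 1-Wasserstein distance between two empirical distributions, here via `scipy.stats.wasserstein_distance` on made-up evaluation data.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two candidate solutions represented as empirical distributions over their
# observed function evaluations (illustrative numbers only).
evals_a = np.array([0.10, 0.11, 0.12, 0.15])
evals_b = np.array([0.28, 0.30, 0.33, 0.35])

# 1-Wasserstein (earth mover's) distance between the empirical distributions;
# MOEA/WST endows the space of such distributions with this metric.
print(wasserstein_distance(evals_a, evals_b))  # ~0.195
```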