MOLE: Digging Tunnels Through Multimodal Multi-Objective Landscapes
- URL: http://arxiv.org/abs/2204.10848v1
- Date: Fri, 22 Apr 2022 17:54:54 GMT
- Title: MOLE: Digging Tunnels Through Multimodal Multi-Objective Landscapes
- Authors: Lennart Schäpermeier, Christian Grimme, Pascal Kerschke
- Abstract summary: Locally efficient (LE) sets, often considered as traps for local search, are rarely isolated in the decision space.
The Multi-Objective Gradient Sliding Algorithm (MOGSA) is an algorithmic concept developed to exploit these superpositions.
We propose a new algorithm, the Multi-Objective Landscape Explorer (MOLE), which is able to efficiently model and exploit LE sets in MMMOO problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in the visualization of continuous multimodal multi-objective
optimization (MMMOO) landscapes brought a new perspective to their search
dynamics. Locally efficient (LE) sets, often considered as traps for local
search, are rarely isolated in the decision space. Rather, intersections by
superposing attraction basins lead to further solution sets that at least
partially contain better solutions. The Multi-Objective Gradient Sliding
Algorithm (MOGSA) is an algorithmic concept developed to exploit these
superpositions. While it has promising performance on many MMMOO problems with
linear LE sets, closer analysis of MOGSA revealed that it does not sufficiently
generalize to a wider set of test problems. Based on a detailed analysis of
shortcomings of MOGSA, we propose a new algorithm, the Multi-Objective
Landscape Explorer (MOLE). It is able to efficiently model and exploit LE sets
in MMMOO problems. An implementation of MOLE is presented for the bi-objective
case, and the practicality of the approach is shown in a benchmarking
experiment on the Bi-Objective BBOB testbed.
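The two-phase search dynamic described in the abstract, first descending a combined multi-objective gradient toward a locally efficient set and then sliding along a single objective's gradient into a superposing attraction basin, can be pictured with a minimal bi-objective sketch. The objectives, step sizes, and stopping thresholds below are placeholder assumptions and do not reproduce the actual MOLE implementation.

```python
# Illustrative two-phase search inspired by the MOGSA/MOLE idea:
# (1) descend the combined multi-objective gradient until a locally
#     efficient (LE) point is approached, then
# (2) "slide" along the gradient of a single objective to move along the
#     LE set and potentially drop into a superposing attraction basin.
# Objectives, step sizes, and thresholds are placeholder assumptions.
import numpy as np

def f1(x):  # placeholder objective 1
    return np.sum((x - np.array([1.0, 0.0])) ** 2)

def f2(x):  # placeholder objective 2
    return np.sum((x - np.array([-1.0, 0.0])) ** 2)

def grad(f, x, eps=1e-6):
    # central finite differences, good enough for a sketch
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def mo_gradient(x):
    # sum of normalized single-objective gradients; it (approximately)
    # vanishes on locally efficient points
    g1, g2 = grad(f1, x), grad(f2, x)
    n1, n2 = np.linalg.norm(g1), np.linalg.norm(g2)
    if n1 < 1e-12 or n2 < 1e-12:
        return np.zeros_like(x)
    return g1 / n1 + g2 / n2

x = np.array([0.5, 2.0])

# Phase 1: multi-objective gradient descent toward an LE set
for _ in range(1000):
    g = mo_gradient(x)
    if np.linalg.norm(g) < 1e-3:
        break
    x = x - 0.01 * g

# Phase 2: slide along the gradient of a single objective (here f1)
for _ in range(200):
    x = x - 0.01 * grad(f1, x)

print("final point:", x, "f1:", f1(x), "f2:", f2(x))
```

On these two placeholder quadratics the efficient set is the segment between the two single-objective optima, so phase 1 reaches that segment and phase 2 moves along it toward the optimum of f1.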
Related papers
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL) agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
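The UCB-based search over weight vectors summarized above can be pictured as a bandit problem whose arms are candidate weight vectors. The sketch below is a minimal illustration of that idea under strong simplifying assumptions: the candidate grid, the simulated per-weight returns, and the UCB1 exploration term are placeholders and do not reproduce the method from the paper.

```python
# Toy illustration of selecting linear-utility weight vectors with UCB1.
# Candidate weights, simulated returns, and the exploration constant are
# assumptions for illustration; this is not the algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)
candidates = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 11)]

# Hypothetical "true" vector returns obtained when training under each weight
true_returns = [np.array([1.0 - w[1] ** 2, 1.0 - w[0] ** 2]) for w in candidates]

counts = np.zeros(len(candidates))
means = np.zeros(len(candidates))

def pull(i):
    # noisy scalarized utility of the policy trained under candidate i
    w = candidates[i]
    return float(w @ true_returns[i]) + rng.normal(scale=0.05)

for t in range(1, 501):
    if t <= len(candidates):            # play each arm once first
        i = t - 1
    else:                               # UCB1 score: mean + exploration bonus
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)
        i = int(np.argmax(ucb))
    r = pull(i)
    counts[i] += 1
    means[i] += (r - means[i]) / counts[i]

best = int(np.argmax(means))
print("most promising weight vector:", candidates[best])
```

In the actual MORL setting, each "pull" would correspond to (partially) training and evaluating a policy under the chosen weight vector rather than sampling a synthetic return.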
- A First-Order Multi-Gradient Algorithm for Multi-Objective Bi-Level Optimization [7.097069899573992]
We study the Multi-Objective Bi-Level Optimization (MOBLO) problem.
Existing gradient-based MOBLO algorithms need to compute the Hessian matrix.
We propose an efficient first-order multi-gradient method for MOBLO, called FORUM.
arXiv Detail & Related papers (2024-01-17T15:03:37Z)
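First-order multi-gradient methods like the one above combine per-objective gradients into a single descent direction instead of differentiating through Hessians. The sketch below shows only that aggregation step, using the closed-form min-norm combination of two gradients on placeholder quadratic objectives; it is not the FORUM algorithm and ignores the bi-level structure entirely.

```python
# Closed-form min-norm aggregation of two gradients (an MGDA-style
# building block used by multi-gradient methods). The quadratic
# objectives are placeholder assumptions; this sketch shows only the
# first-order aggregation step, not the bi-level FORUM algorithm.
import numpy as np

def min_norm_direction(g1, g2):
    # alpha minimizing ||alpha*g1 + (1-alpha)*g2||^2, clipped to [0, 1]
    diff = g1 - g2
    denom = float(diff @ diff)
    alpha = 0.5 if denom < 1e-12 else float(((g2 - g1) @ g2) / denom)
    alpha = min(1.0, max(0.0, alpha))
    return alpha * g1 + (1.0 - alpha) * g2

def grad_f1(x):
    return 2.0 * (x - np.array([1.0, 1.0]))

def grad_f2(x):
    return 2.0 * (x - np.array([-1.0, 1.0]))

x = np.array([0.0, -2.0])
for _ in range(500):
    d = min_norm_direction(grad_f1(x), grad_f2(x))
    if np.linalg.norm(d) < 1e-6:   # (approximately) Pareto-stationary
        break
    x = x - 0.05 * d
print("Pareto-stationary point (approx.):", x)
```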
- Bi-level Multi-objective Evolutionary Learning: A Case Study on Multi-task Graph Neural Topology Search [47.59828447981408]
This paper proposes a bi-level multi-objective learning framework (BLMOL).
It couples the decision-making process with the optimization process of the UL-MOP.
The preference surrogate model is constructed to replace the expensive evaluation process of the UL-MOP.
arXiv Detail & Related papers (2023-02-06T04:59:51Z)
- Pareto Set Learning for Neural Multi-objective Combinatorial Optimization [6.091096843566857]
Multi-objective combinatorial optimization (MOCO) problems can be found in many real-world applications.
We develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure.
Our proposed method significantly outperforms some other methods on the multi-objective traveling salesman problem, multi-objective vehicle routing problem, and multi-objective knapsack problem in terms of solution quality, speed, and model efficiency.
arXiv Detail & Related papers (2022-03-29T09:26:22Z)
- Discovery-and-Selection: Towards Optimal Multiple Instance Learning for Weakly Supervised Object Detection [86.86602297364826]
We propose a discovery-and-selection approach fused with multiple instance learning (DS-MIL).
Our proposed DS-MIL approach can consistently improve the baselines, reporting state-of-the-art performance.
arXiv Detail & Related papers (2021-10-18T07:06:57Z)
- Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z)
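As context for the envelope idea above, the sketch below implements the simpler fixed-preference multi-objective Q-update (a vector-valued Q-table with actions ranked by w·Q) that envelope-style algorithms generalize. The toy MDP, preference vector, and hyperparameters are assumptions for illustration; this is not the proposed EVI algorithm.

```python
# Fixed-preference multi-objective Q-learning on a toy 2-state MDP with
# vector rewards. This is the simpler baseline that envelope-style
# methods generalize; the MDP, preference w, and hyperparameters are
# illustrative assumptions, not the paper's EVI algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, n_obj = 2, 2, 2
gamma, alpha, eps = 0.9, 0.1, 0.1
w = np.array([0.7, 0.3])                    # fixed preference over objectives

# rewards[s, a] is a reward *vector*; transitions are deterministic here
rewards = rng.uniform(0.0, 1.0, size=(n_states, n_actions, n_obj))
next_state = np.array([[0, 1], [1, 0]])

Q = np.zeros((n_states, n_actions, n_obj))  # vector-valued Q-table

s = 0
for _ in range(20000):
    # epsilon-greedy on the scalarized values w . Q[s, a]
    if rng.random() < eps:
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax(Q[s] @ w))
    r, s_next = rewards[s, a], next_state[s, a]
    a_next = int(np.argmax(Q[s_next] @ w))   # greedy action at next state
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
    s = s_next

print("greedy policy:", [int(np.argmax(Q[s] @ w)) for s in range(n_states)])
```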
- Empirical Study on the Benefits of Multiobjectivization for Solving Single-Objective Problems [0.0]
Local optima often prevent algorithms from making progress and thus pose a severe threat.
With the use of a sophisticated visualization technique based on the multi-objective gradients, the properties of the arising multi-objective landscapes are illustrated and examined.
We will empirically show that the multi-objective gradient sliding algorithm MOGSA is able to exploit these properties to overcome local traps.
arXiv Detail & Related papers (2020-06-25T14:04:37Z)
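Multiobjectivization, as studied above, pairs a single-objective function with a helper objective so that local traps can be escaped by moving through the resulting multi-objective landscape. The sketch below illustrates that construction on a 1-D multimodal test function, alternating plain descent on the original objective with sliding along the helper objective's gradient; the test function, helper objective, and step sizes are illustrative assumptions only.

```python
# Multiobjectivization sketch: a multimodal single-objective function f is
# paired with a helper objective g (squared distance to a reference point).
# A simple search then alternates between local descent on f and "sliding"
# along -grad g to climb out of f's local basins. The test function,
# helper objective, and step sizes are illustrative assumptions only.
import numpy as np

def f(x):                      # 1-D Rastrigin: many local minima, global at 0
    return x * x - 10.0 * np.cos(2.0 * np.pi * x) + 10.0

def df(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def g(x):                      # helper objective: squared distance to reference 0
    return x * x

def dg(x):
    return 2.0 * x

def descend_f(x, lr=1e-3, iters=2000):
    for _ in range(iters):
        x -= lr * df(x)
    return x

x = 4.5
for _ in range(10):                       # alternate descent and sliding
    x = descend_f(x)                      # gets trapped in a local basin of f
    fx = f(x)
    while True:
        step = x - 0.01 * dg(x)           # slide along -grad g ...
        if f(step) < fx:                  # ... until f improves again:
            x = step                      #     we crossed into a better basin
            break
        if abs(dg(step)) < 1e-8:          # reached the helper optimum, stop
            break
        x = step
print("final x:", x, "f(x):", f(x))       # ends near the global optimum of f
```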
- Decomposition in Decision and Objective Space for Multi-Modal Multi-Objective Optimization [15.681236469530397]
Multi-modal multi-objective optimization problems (MMMOPs) have multiple subsets within the Pareto-optimal Set.
Prevalent multi-objective evolutionary algorithms are not purely designed to search for multiple solution subsets, whereas algorithms designed for MMMOPs demonstrate degraded performance in the objective space.
This motivates the design of better algorithms for addressing MMMOPs.
arXiv Detail & Related papers (2020-06-04T03:18:47Z)
- Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning [63.64636047748605]
We develop a new theoretical framework to provide convergence guarantee for the general multi-step MAML algorithm.
In particular, our results suggest that the inner-stage stepsize needs to be chosen inversely proportional to the number $N$ of inner-stage steps in order for $N$-step MAML to have guaranteed convergence.
arXiv Detail & Related papers (2020-02-18T19:17:54Z)
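The stepsize condition summarized above can be made concrete with a toy multi-step MAML loop in which the inner learning rate is set to alpha / N for N inner steps. The quadratic task losses and all hyperparameters below are illustrative assumptions, not the setting analyzed in the paper.

```python
# Toy multi-step MAML on quadratic task losses, with the inner-loop
# stepsize chosen as alpha / N (inversely proportional to the number of
# inner steps N), echoing the convergence condition summarized above.
# Tasks, dimensions, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
task_centers = rng.normal(size=(8, 2))      # each task: f_t(x) = 0.5*||x - c_t||^2

def task_grad(x, c):
    return x - c

def inner_adapt(theta, c, N, alpha):
    # N inner gradient steps with stepsize alpha / N
    x = theta.copy()
    for _ in range(N):
        x -= (alpha / N) * task_grad(x, c)
    return x

def meta_loss(theta, N, alpha):
    return sum(0.5 * np.sum((inner_adapt(theta, c, N, alpha) - c) ** 2)
               for c in task_centers)

def meta_grad(theta, N, alpha, eps=1e-5):
    # finite-difference meta-gradient (keeps the sketch autodiff-free)
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta); e[i] = eps
        g[i] = (meta_loss(theta + e, N, alpha) - meta_loss(theta - e, N, alpha)) / (2 * eps)
    return g

theta = np.array([3.0, -3.0])
N, alpha, beta = 10, 0.5, 0.1               # beta: outer (meta) stepsize
for _ in range(200):
    theta -= beta * meta_grad(theta, N, alpha)
print("meta-initialization:", theta, "meta-loss:", meta_loss(theta, N, alpha))
```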
This list is automatically generated from the titles and abstracts of the papers on this site.