Accurate Peak Detection in Multimodal Optimization via Approximated Landscape Learning
- URL: http://arxiv.org/abs/2503.18066v1
- Date: Sun, 23 Mar 2025 13:21:53 GMT
- Title: Accurate Peak Detection in Multimodal Optimization via Approximated Landscape Learning
- Authors: Zeyuan Ma, Hongqiao Lian, Wenjie Qiu, Yue-Jiao Gong
- Abstract summary: We propose a novel optimization framework tailored for MMOPs, termed APDMMO, which facilitates peak detection by fully leveraging landscape knowledge. Specifically, we first design a novel surrogate landscape model which ensembles a group of non-linear activation units to improve regression accuracy on diverse MMOPs. Then we propose a free-of-trial peak detection method which efficiently locates potential peak areas through back-propagation on the learned surrogate landscape model.
- Score: 8.839347987566336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting potential optimal peak areas and locating the accurate peaks within these areas are two major challenges in multimodal optimization problems (MMOPs). To address them, much effort has been spent on developing novel search operators, niching strategies, and multi-objective problem transformation pipelines. Though promising, existing approaches more or less overlook the potential usage of landscape knowledge. In this paper, we propose a novel optimization framework tailored for MMOPs, termed APDMMO, which facilitates peak detection by fully leveraging landscape knowledge and is hence capable of providing strong optimization performance on MMOPs. Specifically, we first design a novel surrogate landscape model which ensembles a group of non-linear activation units to improve regression accuracy on diverse MMOPs. Then we propose a free-of-trial peak detection method which efficiently locates potential peak areas through back-propagation on the learned surrogate landscape model. Based on the detected peak areas, we employ SEP-CMA-ES for local search within these areas in parallel to further improve the accuracy of the found optima. Extensive benchmarking results demonstrate that APDMMO outperforms several up-to-date baselines. Further ablation studies verify the effectiveness of the proposed novel designs. The source code is available at https://github.com/GMC-DRL/APDMMO.
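As a rough illustration of the pipeline described in the abstract, the sketch below shows, in PyTorch, how a surrogate landscape model built from a group of non-linear activation units could be fit to sampled evaluations, and how candidate peak areas could then be located by gradient ascent through back-propagation on that surrogate, consuming no extra evaluations of the true objective. This is a minimal sketch under stated assumptions, not the authors' implementation; names such as `SurrogateLandscape`, `fit_surrogate`, and `detect_peaks` are hypothetical, and the SEP-CMA-ES refinement step is only indicated in a comment.

```python
# Minimal sketch (not the authors' code) of the two ideas described above:
# (1) a surrogate landscape model that ensembles several non-linear activation
#     units for regression, and
# (2) "free-of-trial" peak detection: gradient ascent on the learned surrogate
#     via back-propagation, consuming no evaluations of the true objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateLandscape(nn.Module):
    """Regresses f_hat(x) ~ f(x) using a group of diverse activation units."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.inp = nn.Linear(dim, hidden)
        # Diverse non-linearities whose responses are concatenated.
        self.acts = (torch.tanh, torch.sin, F.softplus, torch.erf)
        self.out = nn.Linear(hidden * len(self.acts), 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.inp(x)
        return self.out(torch.cat([a(h) for a in self.acts], dim=-1)).squeeze(-1)


def fit_surrogate(model, X, y, epochs=500, lr=1e-3):
    """Plain MSE regression on pre-sampled (x, f(x)) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.mse_loss(model(X), y).backward()
        opt.step()
    return model


def detect_peaks(model, dim, n_starts=256, steps=200, lr=0.05, bounds=(-5.0, 5.0)):
    """Locate candidate peak areas by ascending the surrogate from many starts."""
    x = torch.empty(n_starts, dim).uniform_(*bounds).requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-model(x)).sum().backward()  # maximize the surrogate prediction
        opt.step()
        with torch.no_grad():
            x.clamp_(*bounds)  # keep candidates inside the search domain
    # The converged points would next be grouped into peak areas and refined
    # in parallel with a local searcher (the paper uses SEP-CMA-ES).
    return x.detach()
```

In APDMMO itself, the abstract indicates that the detected peak areas then seed parallel SEP-CMA-ES local searches that spend the remaining evaluation budget refining each peak.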
Related papers
- Preference-Guided Diffusion for Multi-Objective Offline Optimization [64.08326521234228]
We propose a preference-guided diffusion model for offline multi-objective optimization.
Our guidance is a preference model trained to predict the probability that one design dominates another.
Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse and high-quality solutions.
arXiv Detail & Related papers (2025-03-21T16:49:38Z) - Offline Model-Based Optimization: Comprehensive Review [61.91350077539443]
Offline optimization is a fundamental challenge in science and engineering, where the goal is to optimize black-box functions using only offline datasets. Recent advances in model-based optimization have harnessed the generalization capabilities of deep neural networks to develop offline-specific surrogate and generative models. Despite its growing impact in accelerating scientific discovery, the field lacks a comprehensive review.
arXiv Detail & Related papers (2025-03-21T16:35:02Z) - Language-Based Bayesian Optimization Research Assistant (BORA) [0.0]
Contextualizing domain knowledge is a powerful approach to guide searches toward fruitful regions. Here, we propose the use of large language models (LLMs) to contextualize the optimization search.
arXiv Detail & Related papers (2025-01-27T17:20:04Z) - Meta-Learning Objectives for Preference Optimization [39.15940594751445]
We show that it is possible to gain insights into the efficacy of preference optimization algorithms on simpler benchmarks. We propose a novel family of PO algorithms based on mirror descent, which we call Mirror Preference Optimization (MPO).
arXiv Detail & Related papers (2024-11-10T19:11:48Z) - A Landscape-Aware Differential Evolution for Multimodal Optimization Problems [54.50341106632738]
How to simultaneously locate multiple global peaks and achieve a certain accuracy on the found peaks are two key challenges in solving multimodal optimization problems (MMOPs). In this paper, a landscape-aware differential evolution (LADE) algorithm is proposed for MMOPs, which utilizes landscape knowledge to maintain sufficient diversity and provide efficient search guidance. Experimental results show that LADE obtains generally better or competitive performance compared with seven well-performing algorithms proposed recently and four winner algorithms from the IEEE CEC competitions on multimodal optimization.
arXiv Detail & Related papers (2024-08-05T09:37:55Z) - RLEMMO: Evolutionary Multimodal Optimization Assisted By Deep Reinforcement Learning [8.389454219309837]
Solving multimodal optimization problems (MMOPs) requires finding all optimal solutions, which is challenging under a limited budget of function evaluations.
We propose RLEMMO, a Meta-Black-Box Optimization framework, which maintains a population of solutions and incorporates a reinforcement learning agent.
With a novel reward mechanism that encourages both quality and diversity, RLEMMO can be effectively trained using a policy gradient algorithm.
arXiv Detail & Related papers (2024-04-12T05:02:49Z) - Unleashing the Potential of Large Language Models as Prompt Optimizers: Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of large language model (LLM)-based prompt optimizers. We identify two pivotal factors in model parameter learning: update direction and update method. We develop a capable gradient-inspired prompt optimizer, GPO.
arXiv Detail & Related papers (2024-02-27T15:05:32Z) - Surpassing legacy approaches to PWR core reload optimization with single-objective Reinforcement learning [0.0]
We have developed methods based on Deep Reinforcement Learning (DRL) for both single- and multi-objective optimization.
In this paper, we demonstrate the advantage of our RL-based approach, specifically using Proximal Policy Optimization (PPO).
PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global and local search method.
arXiv Detail & Related papers (2024-02-16T19:35:58Z) - Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similar performance improvements in code generation tasks.
arXiv Detail & Related papers (2023-10-16T05:21:50Z) - Rectified Max-Value Entropy Search for Bayesian Optimization [54.26984662139516]
We develop a rectified MES acquisition function based on the notion of mutual information.
As a result, RMES shows a consistent improvement over MES in several synthetic function benchmarks and real-world optimization problems.
arXiv Detail & Related papers (2022-02-28T08:11:02Z) - Learning Space Partitions for Path Planning [54.475949279050596]
PlaLaM outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima.
These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 245% and in molecular design by up to 0.4 on properties measured on a 0-1 scale.
arXiv Detail & Related papers (2021-06-19T18:06:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.