EmoDM: A Diffusion Model for Evolutionary Multi-objective Optimization
- URL: http://arxiv.org/abs/2401.15931v1
- Date: Mon, 29 Jan 2024 07:41:44 GMT
- Title: EmoDM: A Diffusion Model for Evolutionary Multi-objective Optimization
- Authors: Xueming Yan and Yaochu Jin
- Abstract summary: This work proposes for the first time a diffusion model that can learn to perform evolutionary multi-objective search, called EmoDM.
EmoDM can generate a set of non-dominated solutions for a new MOP by means of its reverse diffusion without further evolutionary search.
Experimental results demonstrate the competitiveness of EmoDM in terms of both the search performance and computational efficiency.
- Score: 26.432301097788276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolutionary algorithms have been successful in solving multi-objective
optimization problems (MOPs). However, as a class of population-based search
methodology, evolutionary algorithms require a large number of evaluations of
the objective functions, preventing them from being applied to a wide range of
expensive MOPs. To tackle the above challenge, this work proposes for the first
time a diffusion model that can learn to perform evolutionary multi-objective
search, called EmoDM. This is achieved by treating the reversed convergence
process of evolutionary search as the forward diffusion and learning the noise
distributions from previously solved evolutionary optimization tasks. The
pre-trained EmoDM can then generate a set of non-dominated solutions for a new
MOP by means of its reverse diffusion without further evolutionary search,
thereby significantly reducing the required function evaluations. To enhance
the scalability of EmoDM, a mutual entropy-based attention mechanism is
introduced to capture the decision variables that are most important for the
objectives. Experimental results demonstrate the competitiveness of EmoDM in
terms of both the search performance and computational efficiency compared with
state-of-the-art evolutionary algorithms in solving MOPs having up to 5000
decision variables. The pre-trained EmoDM is shown to generalize well to unseen
problems, revealing its strong potential as a general and efficient MOP solver.
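The sampling procedure the abstract describes can be pictured as plain reverse diffusion over the decision variables: start from Gaussian noise and iteratively denoise into a candidate solution set, with no further function evaluations. The sketch below is a minimal illustration only, not the paper's method; `noise_predictor`, the linear schedule, and all parameters are assumptions standing in for EmoDM's trained network.

```python
import numpy as np

def reverse_diffusion(noise_predictor, n_solutions, n_vars, n_steps=50, rng=None):
    """Generate a candidate solution set by iterative denoising.

    Starting from pure Gaussian noise over the decision variables, each
    step subtracts a fraction of the predicted noise, mimicking how a
    pre-trained diffusion model would map noise toward near-Pareto
    solutions without evaluating the objective functions.
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal((n_solutions, n_vars))
    for t in range(n_steps, 0, -1):
        alpha = t / n_steps                      # simple linear schedule (assumed)
        x = x - alpha / n_steps * noise_predictor(x, t)
    return x

# Stand-in for a trained network: pretend the "noise" is the offset from 0.5.
dummy_predictor = lambda x, t: x - 0.5
population = reverse_diffusion(dummy_predictor, n_solutions=100, n_vars=30, rng=0)
print(population.shape)  # (100, 30)
```

With a real trained predictor, the same loop would output a population already close to the Pareto set of the new MOP.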
Related papers
- Evolutionary Large Language Model for Automated Feature Transformation [25.956740176321897]
We propose an evolutionary Large Language Model (LLM) framework for automated feature transformation.
This framework consists of two parts: 1) constructing a multi-population database through an RL data collector, and 2) leveraging the sequence-understanding ability of the LLM.
We empirically demonstrate the effectiveness and generality of our proposed method.
arXiv Detail & Related papers (2024-05-25T12:27:21Z)
- Pre-Evolved Model for Complex Multi-objective Optimization Problems [3.784829029016233]
Multi-objective optimization problems (MOPs) necessitate the simultaneous optimization of multiple objectives.
This paper proposes the concept of pre-evolving for MOEAs to generate high-quality populations for diverse complex MOPs.
arXiv Detail & Related papers (2023-12-11T05:16:58Z)
- Evolutionary Dynamic Optimization and Machine Learning [0.0]
Evolutionary Computation (EC) has emerged as a powerful field of Artificial Intelligence, inspired by nature's mechanisms of gradual development.
To overcome these limitations, researchers have integrated learning algorithms with evolutionary techniques.
This integration harnesses the valuable data generated by EC algorithms during iterative searches, providing insights into the search space and population dynamics.
arXiv Detail & Related papers (2023-10-12T22:28:53Z)
- Rank-Based Learning and Local Model Based Evolutionary Algorithm for High-Dimensional Expensive Multi-Objective Problems [1.0499611180329806]
The proposed algorithm consists of three parts: rank-based learning, hyper-volume-based non-dominated search, and local search in the relatively sparse objective space.
Experimental results on benchmark problems and a real-world geothermal reservoir heat-extraction optimization application demonstrate the superior performance of the proposed algorithm.
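The non-dominated search that this and several of the listed papers build on reduces to filtering a set of objective vectors for Pareto optimality: a point is discarded if another point is no worse in every objective and strictly better in at least one. A minimal sketch (minimization assumed; all names here are illustrative):

```python
import numpy as np

def non_dominated(points):
    """Return the indices of non-dominated points (minimization).

    A point is kept unless some other point is no worse in every
    objective and strictly better in at least one.
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(
            np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# Two conflicting objectives: (1, 4) and (3, 1) trade off against each
# other, while (2, 5) and (3, 3) are each dominated.
front = non_dominated([[1, 4], [3, 1], [2, 5], [3, 3]])
print(front)  # [0, 1]
```

A hypervolume-based search would then rank candidate fronts by the objective-space volume they dominate relative to a reference point.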
arXiv Detail & Related papers (2023-04-19T06:25:04Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
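ZO-signGD itself is simple to sketch: estimate the gradient from function-value differences along random directions, then update using only its sign. The step size, smoothing radius, and query budget below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.05, mu=1e-3, n_queries=20, n_steps=200, rng=0):
    """Zeroth-order sign-based gradient descent (ZO-signGD) sketch.

    The gradient is estimated from function-value differences along
    random Gaussian directions; only the sign of the estimate drives
    the update, which makes the method robust to estimation noise.
    """
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        fx = f(x)
        grad = np.zeros_like(x)
        for _ in range(n_queries):
            u = rng.standard_normal(x.shape)
            grad += (f(x + mu * u) - fx) / mu * u   # directional difference
        x -= lr * np.sign(grad / n_queries)
    return x

# Minimize a shifted quadratic without ever touching its true gradient.
target = np.array([1.0, -2.0, 0.5])
x_opt = zo_sign_gd(lambda x: np.sum((x - target) ** 2), x0=np.zeros(3))
print(np.round(x_opt, 2))  # ends up close to target
```

The same loop applies to molecular objectives, where only black-box scores (not gradients) are available.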
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
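The proxy-model approach can be sketched in a few lines: fit a cheap surrogate to the static dataset, then search the surrogate only inside the region the data covers, since stepping far outside it is exactly where adversarially optimized inputs appear. Everything below (the 1-D objective, the quadratic proxy) is an illustrative assumption, not the RoMA method:

```python
import numpy as np

# Static offline dataset: noisy queries of an unknown 1-D objective.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 200)
ys = -(xs - 0.3) ** 2 + 0.01 * rng.standard_normal(200)

# Proxy: least-squares quadratic fit to the logged (x, y) pairs.
proxy = np.poly1d(np.polyfit(xs, ys, deg=2))

# Search the proxy, but only within the range covered by the data.
candidates = np.linspace(xs.min(), xs.max(), 1000)
x_best = candidates[np.argmax(proxy(candidates))]
print(round(x_best, 1))  # close to the true optimum at 0.3
```

Robust methods like RoMA go further by adapting the proxy itself so its predictions stay trustworthy near the search trajectory.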
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- dMFEA-II: An Adaptive Multifactorial Evolutionary Algorithm for Permutation-based Discrete Optimization Problems [6.943742860591444]
We propose the first adaptation of the recently introduced Multifactorial Evolutionary Algorithm II (MFEA-II) to permutation-based discrete environments.
The performance of the proposed solver has been assessed over 5 different multitasking setups.
arXiv Detail & Related papers (2020-04-14T14:42:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.