A Simple Evolutionary Algorithm for Multi-modal Multi-objective
Optimization
- URL: http://arxiv.org/abs/2201.06718v2
- Date: Thu, 20 Oct 2022 21:51:02 GMT
- Title: A Simple Evolutionary Algorithm for Multi-modal Multi-objective
Optimization
- Authors: Tapabrata Ray, Mohammad Mohiuddin Mamun and Hemant Kumar Singh
- Abstract summary: We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from various test suites that are widely used for benchmarking using a low computational budget of 1000 function evaluations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In solving multi-modal, multi-objective optimization problems (MMOPs), the
objective is not only to find a good representation of the Pareto-optimal front
(PF) in the objective space but also to find all equivalent Pareto-optimal
subsets (PSS) in the variable space. Such problems are practically relevant
when a decision maker (DM) is interested in identifying alternative designs
with similar performance. There has been significant research interest in
recent years to develop efficient algorithms to deal with MMOPs. However, the
existing algorithms still require a prohibitive number of function evaluations
(often several thousand) to deal with problems involving as few as two
objectives and two variables. The algorithms are typically embedded with
sophisticated, customized mechanisms that require additional parameters to
manage the diversity and convergence in the variable and the objective spaces.
In this letter, we introduce a steady-state evolutionary algorithm for solving
MMOPs, with a simple design and no additional user-defined parameters that need
tuning compared to a standard EA. We report its performance on 21 MMOPs from
various test suites that are widely used for benchmarking, using a low
computational budget of 1000 function evaluations. The performance of the
proposed algorithm is compared with six state-of-the-art algorithms
(MO_Ring_PSO_SCD, DN-NSGAII, TriMOEA-TA&R, CPDEA, MMOEA/DC and MMEA-WI). The proposed
algorithm exhibits significantly better performance than the above algorithms
based on the established metrics including IGDX, PSP and IGD. We hope this
study will encourage the design of simple, efficient and generalized algorithms to
improve their uptake in practical applications.
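For context on the metrics named above (IGDX, PSP and IGD), here is a minimal sketch of how IGD and IGDX are typically computed. Both average the distance from a sampled reference set to the nearest obtained solution; IGD does this in the objective space and IGDX in the decision (variable) space, so a low IGDX indicates that the equivalent Pareto-optimal subsets have been located. PSP is commonly reported as CR/IGDX, where CR is a cover-rate term. The arrays in the usage example are illustrative only, not results from the paper.

```python
import numpy as np

def igd(reference, obtained):
    """Average distance from each reference point to its nearest obtained point."""
    reference = np.asarray(reference, dtype=float)
    obtained = np.asarray(obtained, dtype=float)
    dists = np.linalg.norm(reference[:, None, :] - obtained[None, :, :], axis=-1)
    return float(dists.min(axis=1).mean())

# IGD : call igd() with objective vectors (closeness to the Pareto front, PF).
# IGDX: call igd() with decision vectors (closeness to all Pareto-optimal subsets, PSS).
pf_samples = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])  # sampled true PF (illustrative)
obj_values = np.array([[0.1, 0.9], [0.9, 0.1]])              # objectives of obtained solutions
print(round(igd(pf_samples, obj_values), 4))
```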
Related papers
- Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
arXiv Detail & Related papers (2024-01-19T07:40:24Z)
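To make the quality-diversity idea above concrete, the sketch below is a minimal MAP-Elites-style loop, a canonical QD algorithm: it keeps one elite per behaviour niche, so the archive pursues high fitness and diversity at the same time. The bounds, mutation scale, descriptor discretization and toy fitness are illustrative assumptions, not details from the cited paper.

```python
import random

def map_elites(fitness, descriptor, dim=2, bins=20, evaluations=5000, seed=0):
    random.seed(seed)
    archive = {}  # niche index -> (solution, fitness)
    for _ in range(evaluations):
        if archive and random.random() < 0.9:
            parent, _ = random.choice(list(archive.values()))
            x = [min(1.0, max(0.0, xi + random.gauss(0, 0.1))) for xi in parent]  # mutate an elite
        else:
            x = [random.uniform(0.0, 1.0) for _ in range(dim)]                    # random restart
        niche = min(int(descriptor(x) * bins), bins - 1)   # map behaviour in [0, 1] to a bin
        f = fitness(x)
        if niche not in archive or f > archive[niche][1]:  # keep the best solution per niche
            archive[niche] = (x, f)
    return archive  # a diverse archive of high-performing solutions

# Toy usage: maximize closeness to 0.5 in each coordinate, descriptor = first coordinate.
elites = map_elites(lambda x: -sum((xi - 0.5) ** 2 for xi in x), lambda x: x[0])
print(len(elites), "niches filled")
```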
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC, and AML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Improving Performance Insensitivity of Large-scale Multiobjective Optimization via Monte Carlo Tree Search [7.34812867861951]
We propose an evolutionary algorithm for solving large-scale multiobjective optimization problems based on Monte Carlo tree search.
The proposed method samples the decision variables to construct new nodes on the Monte Carlo tree for optimization and evaluation.
It selects nodes with good evaluation for further search to reduce the performance sensitivity caused by large-scale decision variables.
arXiv Detail & Related papers (2023-04-08T17:15:49Z)
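The Monte Carlo tree search entry above keeps searching from nodes with good evaluations. A common selection rule for that step is UCB1; the generic sketch below assumes simple per-node statistics ("value", "visits") and is not the paper's actual scheme for sampling decision variables or evaluating nodes.

```python
import math

def ucb1_select(children, c=math.sqrt(2)):
    """Pick the child node to search next, balancing mean evaluation and visit count."""
    total_visits = sum(child["visits"] for child in children)
    def score(child):
        if child["visits"] == 0:
            return float("inf")  # always try unvisited nodes first
        exploit = child["value"] / child["visits"]  # average evaluation so far
        explore = c * math.sqrt(math.log(total_visits) / child["visits"])
        return exploit + explore
    return max(children, key=score)

# Toy usage with hypothetical node statistics (not data from the paper):
children = [{"value": 4.0, "visits": 5}, {"value": 1.5, "visits": 2}, {"value": 0.0, "visits": 0}]
print(ucb1_select(children))
```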
- Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimum values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We have proposed the Enhanced Opposition Differential Evolution (EODE) algorithm to solve such MMOPs.
arXiv Detail & Related papers (2022-08-23T16:18:27Z)
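As a rough illustration of the opposition idea behind EODE, the sketch below initializes a DE population together with its opposite points (x_opp = lower + upper - x), keeps the better half, and then runs a standard DE/rand/1/bin loop. It is a generic opposition-based DE sketch under these assumptions, not the EODE algorithm itself, and it omits the niching needed to return multiple optima.

```python
import numpy as np

def opposition_de(f, lower, upper, pop_size=30, gens=100, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = rng.uniform(lower, upper, (pop_size, dim))
    opposite = lower + upper - pop                 # opposition-based learning: x_opp = a + b - x
    both = np.vstack([pop, opposite])
    fit = np.array([f(x) for x in both])
    keep = np.argsort(fit)[:pop_size]              # keep the better half of {pop, opposite}
    pop, fit = both[keep], fit[keep]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lower, upper)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # guarantee at least one mutated component
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                  # greedy replacement (minimization)
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], float(fit.min())

# Toy usage on a multimodal test function (Rastrigin); values are illustrative only.
rastrigin = lambda x: float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
best_x, best_f = opposition_de(rastrigin, lower=[-5.12] * 5, upper=[5.12] * 5)
```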
- A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
- A novel multiobjective evolutionary algorithm based on decomposition and multi-reference points strategy [14.102326122777475]
Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a significantly promising approach for solving multiobjective optimization problems (MOPs).
We propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points.
arXiv Detail & Related papers (2021-10-27T02:07:08Z)
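The entry above decomposes a multiobjective problem with the Pascoletti-Serafini scalarization. Under the usual non-negative-orthant ordering cone, the subproblem min t subject to f(x) <= a + t*r (componentwise) reduces to minimizing max_i (f_i(x) - a_i) / r_i, which the sketch below implements. The bi-objective test function, reference point and directions are illustrative assumptions, not the paper's actual setup or its multi-reference-point strategy.

```python
import numpy as np

def pascoletti_serafini(f, a, r):
    """Scalarize f with reference point a and direction r > 0:
    g(x) = max_i (f_i(x) - a_i) / r_i, the optimal t in  min t  s.t.  f(x) <= a + t * r."""
    a, r = np.asarray(a, float), np.asarray(r, float)
    return lambda x: float(np.max((np.asarray(f(x), float) - a) / r))

# Toy bi-objective problem f(x) = (x^2, (x - 2)^2); each direction yields one subproblem
# whose minimizer is a different Pareto-optimal solution (grid search for illustration).
f = lambda x: (x**2, (x - 2.0) ** 2)
grid = np.linspace(-1.0, 3.0, 4001)
for direction in ([1.0, 1.0], [1.0, 3.0], [3.0, 1.0]):
    g = pascoletti_serafini(f, a=[0.0, 0.0], r=direction)
    x_best = min(grid, key=g)
    print(direction, round(float(x_best), 3))
```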
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
- Decomposition in Decision and Objective Space for Multi-Modal Multi-Objective Optimization [15.681236469530397]
Multi-modal multi-objective optimization problems (MMMOPs) have multiple subsets within the Pareto-optimal Set.
Prevalent multi-objective evolutionary algorithms are not purely designed to search for multiple solution subsets, whereas algorithms designed for MMMOPs demonstrate degraded performance in the objective space.
This motivates the design of better algorithms for addressing MMMOPs.
arXiv Detail & Related papers (2020-06-04T03:18:47Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)