The Cone epsilon-Dominance: An Approach for Evolutionary Multiobjective
Optimization
- URL: http://arxiv.org/abs/2008.04224v1
- Date: Tue, 14 Jul 2020 14:13:13 GMT
- Title: The Cone epsilon-Dominance: An Approach for Evolutionary Multiobjective
Optimization
- Authors: Lucas S. Batista, Felipe Campelo, Frederico G. Guimarães and Jaime A. Ramírez
- Abstract summary: We propose the cone epsilon-dominance approach to improve convergence and diversity in multiobjective evolutionary algorithms.
Sixteen well-known benchmark problems are considered in the experimental section.
Results suggest that the cone-eps-MOEA is capable of presenting an efficient and balanced performance over all the performance metrics considered.
- Score: 1.0323063834827413
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose the cone epsilon-dominance approach to improve convergence and
diversity in multiobjective evolutionary algorithms (MOEAs). A cone-eps-MOEA is
presented and compared with MOEAs based on the standard Pareto relation
(NSGA-II, NSGA-II*, SPEA2, and a clustered NSGA-II) and on the
epsilon-dominance (eps-MOEA). The comparison is performed both in terms of
computational complexity and on four performance indicators selected to
quantify the quality of the final results obtained by each algorithm: the
convergence, diversity, hypervolume, and coverage of many sets metrics. Sixteen
well-known benchmark problems are considered in the experimental section,
including the ZDT and the DTLZ families. To evaluate the possible differences
amongst the algorithms, a carefully designed experiment is performed for the
four performance metrics. The results obtained suggest that the cone-eps-MOEA
is capable of presenting an efficient and balanced performance over all the
performance metrics considered. These results strongly support the conclusion
that the cone-eps-MOEA is a competitive approach for obtaining an efficient
balance between convergence and diversity to the Pareto front, and as such
represents a useful tool for the solution of multiobjective optimization
problems.
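For readers unfamiliar with the dominance relations involved, the short Python sketch below illustrates the two baseline relations the paper builds on: standard Pareto dominance and additive epsilon-dominance. The cone epsilon-dominance proposed in the paper replaces the axis-aligned epsilon hyperboxes with cone-shaped regions; that construction is not reproduced here. The function names, the per-objective tolerance `eps`, and the example vectors are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch (not from the paper): standard Pareto dominance and the
# additive epsilon-dominance relation that cone epsilon-dominance generalizes.
# Objective vectors are assumed to be minimized; `eps` is a user-chosen
# per-objective tolerance.
from typing import Sequence


def pareto_dominates(f_a: Sequence[float], f_b: Sequence[float]) -> bool:
    """True if solution a Pareto-dominates solution b (minimization)."""
    return (all(x <= y for x, y in zip(f_a, f_b))
            and any(x < y for x, y in zip(f_a, f_b)))


def eps_dominates(f_a: Sequence[float], f_b: Sequence[float],
                  eps: Sequence[float]) -> bool:
    """Additive epsilon-dominance: a dominates b within tolerance eps."""
    return all(x - e <= y for x, y, e in zip(f_a, f_b, eps))


if __name__ == "__main__":
    a, b = [1.0, 2.0], [1.05, 2.05]
    print(pareto_dominates(a, b))           # True: a is better in both objectives
    print(eps_dominates(b, a, [0.1, 0.1]))  # True: b eps-dominates a within 0.1
```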
Related papers
- Expensive Multi-Objective Bayesian Optimization Based on Diffusion Models [17.19004913553654]
Multi-objective Bayesian optimization (MOBO) has shown promising performance on various expensive multi-objective optimization problems (EMOPs)
We propose a novel Composite Diffusion Model based Pareto Set Learning algorithm, namely CDM-PSL, for expensive MOBO.
Our proposed algorithm attains superior performance compared with various state-of-the-art MOBO algorithms.
arXiv Detail & Related papers (2024-05-14T14:55:57Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and AML.
This paper proposes conditional stochastic optimization algorithms for the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - A novel multiobjective evolutionary algorithm based on decomposition and
multi-reference points strategy [14.102326122777475]
Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a significantly promising approach for solving multiobjective optimization problems (MOPs)
We propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points.
arXiv Detail & Related papers (2021-10-27T02:07:08Z) - Batched Data-Driven Evolutionary Multi-Objective Optimization Based on
Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is so general that any off-the-shelf evolutionary multi-objective optimization algorithms can be applied in a plug-in manner.
Our proposed framework is featured with a faster convergence and a stronger resilience to various PF shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel optimization algorithm named stocBiO, featuring a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm
for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z) - Convergence of Meta-Learning with Task-Specific Adaptation over Partial
Parameters [152.03852111442114]
Although model-agnostic meta-learning (MAML) is a very successful algorithm in meta-learning practice, it can have high computational complexity.
Our paper shows that such complexity can significantly affect the overall convergence performance of ANIL.
arXiv Detail & Related papers (2020-06-16T19:57:48Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to deep learning (e.g., deep AUC maximization)
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - Hybrid Adaptive Evolutionary Algorithm for Multi-objective Optimization [0.0]
This paper proposes a new multi-objective algorithm, called MoHAEA, as an extension of the Hybrid Adaptive Evolutionary Algorithm (HAEA).
MoHAEA is compared with four state-of-the-art MOEAs, namely MOEA/D, pa$\lambda$-MOEA/D, MOEA/D-AWA, and NSGA-II.
arXiv Detail & Related papers (2020-04-29T02:16:49Z)