A Framework to Handle Multi-modal Multi-objective Optimization in
Decomposition-based Evolutionary Algorithms
- URL: http://arxiv.org/abs/2009.14700v1
- Date: Wed, 30 Sep 2020 14:32:57 GMT
- Title: A Framework to Handle Multi-modal Multi-objective Optimization in
Decomposition-based Evolutionary Algorithms
- Authors: Ryoji Tanabe and Hisao Ishibuchi
- Abstract summary: Decomposition-based evolutionary algorithms have good performance for multi-objective optimization.
However, they are likely to perform poorly for multi-modal multi-objective optimization due to the lack of mechanisms to maintain solution-space diversity.
This paper proposes a framework to improve the performance of decomposition-based evolutionary algorithms for multi-modal multi-objective optimization.
- Score: 7.81768535871051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modal multi-objective optimization aims to locate as many
(almost) equivalent Pareto optimal solutions as possible. While decomposition-based
evolutionary algorithms have good performance for multi-objective optimization,
they are likely to perform poorly for multi-modal multi-objective optimization
due to the lack of mechanisms to maintain the solution space diversity. To
address this issue, this paper proposes a framework to improve the performance
of decomposition-based evolutionary algorithms for multi-modal multi-objective
optimization. Our framework is based on three operations: assignment, deletion,
and addition. One or more individuals can be assigned to the same
subproblem to handle multiple equivalent solutions. In each iteration, a child
is assigned to a subproblem based on its objective vector, i.e., its location
in the objective space. The child is then compared with its solution-space
neighbors among the individuals assigned to the same subproblem. The performance
of six decomposition-based evolutionary algorithms improved by our framework is
evaluated on test problems with various numbers of objectives, decision
variables, and equivalent Pareto optimal solution sets. Results show that the
improved versions perform clearly better than their original algorithms.
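The assignment, deletion, and addition operations described above can be illustrated with a short sketch. The code below is a minimal, hypothetical rendering of the idea based only on the abstract, not the authors' implementation: the Tchebycheff scalarizing function used for assignment, the per-subproblem capacity, and the neighborhood size are all assumptions made for illustration. Each subproblem may keep several members because equivalent Pareto optimal solutions are distinct points in the decision space whose objective vectors (almost) coincide.

```python
import numpy as np

def tchebycheff(f, w, z):
    """Tchebycheff scalarizing value of objective vector f for weight
    vector w and ideal point z (an assumed, standard choice)."""
    return np.max(w * np.abs(f - z))

def assign_subproblem(f_child, weights, z):
    """Assignment: choose the subproblem whose weight vector gives the
    child the best scalarizing value, i.e. based on its location in the
    objective space."""
    return int(np.argmin([tchebycheff(f_child, w, z) for w in weights]))

def insert_child(x_child, f_child, members, weights, z,
                 capacity=3, n_neighbors=2):
    """Addition/deletion: members[j] holds the (x, f) pairs currently
    assigned to subproblem j; the child competes only with its nearest
    neighbors in the solution (decision) space within that subproblem."""
    j = assign_subproblem(f_child, weights, z)
    pool = members[j]
    if len(pool) < capacity:                      # addition: room left
        pool.append((x_child, f_child))
        return

    def g(f):
        return tchebycheff(f, weights[j], z)

    # distances in the decision space to the members of subproblem j
    dists = [np.linalg.norm(x_child - x) for x, _ in pool]
    neighbors = np.argsort(dists)[:n_neighbors]
    worst = max(neighbors, key=lambda i: g(pool[i][1]))
    if g(f_child) < g(pool[worst][1]):            # deletion: replace the
        pool[worst] = (x_child, f_child)          # worse neighbor
```

Restricting the comparison to solution-space neighbors is what preserves decision-space diversity: two equivalent solutions that lie far apart in the decision space never compete against each other, even though they are assigned to the same subproblem.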
Related papers
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
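The equivalence claimed above can be seen in a toy case. The sketch below is only a generic illustration of backpropagating through unrolled gradient steps, not the Folded Optimization system of the paper: the inner problem, step size, and iteration counts are assumptions chosen so that the exact derivative is known to be 1.

```python
# Inner problem: y*(x) = argmin_y 0.5 * (y - x)**2, so dy*/dx = 1 exactly.
# Differentiating K unrolled gradient steps y_{k+1} = y_k - alpha * (y_k - x)
# by the chain rule yields the recursion below.  It is a fixed-point iteration
# for the one-dimensional linear system d = (1 - alpha) * d + alpha, whose
# solution is the exact derivative -- the backward pass of the unrolled solver
# behaves like an iterative linear-system solver.

def unrolled_derivative(alpha: float, K: int) -> float:
    deriv = 0.0                                    # dy_0/dx with y_0 = 0
    for _ in range(K):
        # chain rule through one unrolled step:
        # dy_{k+1}/dx = (1 - alpha) * dy_k/dx + alpha
        deriv = (1.0 - alpha) * deriv + alpha
    return deriv

for K in (1, 5, 20, 100):
    print(K, unrolled_derivative(alpha=0.3, K=K))  # approaches 1.0 as K grows
```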
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
arXiv Detail & Related papers (2023-10-10T00:21:10Z) - Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process
Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend with two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) an active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
Results show that adaption with standard NSGA-II greatly reduces the number of evaluations required for optimization towards a target goal, while the proposed variants further reduce the adaption costs.
arXiv Detail & Related papers (2023-05-31T12:07:50Z) - Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
arXiv Detail & Related papers (2022-10-23T16:15:36Z) - Enhanced Opposition Differential Evolution Algorithm for Multimodal
Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimum values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We propose an algorithm known as Enhanced Opposition Differential Evolution (EODE) to solve such multimodal optimization problems (MMOPs).
arXiv Detail & Related papers (2022-08-23T16:18:27Z) - An Effective and Efficient Evolutionary Algorithm for Many-Objective
Optimization [2.5594423685710814]
We develop an effective evolutionary algorithm (E3A) that can handle various many-objective problems.
In E3A, inspired by SDE, a novel population maintenance method is proposed.
We conduct extensive experiments and show that E3A performs better than 11 state-of-the-art many-objective evolutionary algorithms.
arXiv Detail & Related papers (2022-05-31T15:35:46Z) - An Analysis of Phenotypic Diversity in Multi-Solution Optimization [118.97353274202749]
We show that multiobjective optimization does not always produce much diversity, multimodal optimization produces higher fitness solutions, and quality diversity is not sensitive to genetic neutrality.
An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set with quality diversity.
arXiv Detail & Related papers (2021-05-10T10:39:03Z) - Manifold Interpolation for Large-Scale Multi-Objective Optimization via
Generative Adversarial Networks [12.18471608552718]
Large-scale multiobjective optimization problems (LSMOPs) are characterized as involving hundreds or even thousands of decision variables and multiple conflicting objectives.
Previous research has shown that these optimal solutions are uniformly distributed on the manifold structure in the low-dimensional space.
In this work, a generative adversarial network (GAN)-based manifold framework is proposed to learn the manifold and generate high-quality solutions.
arXiv Detail & Related papers (2021-01-08T09:38:38Z) - Decomposition in Decision and Objective Space for Multi-Modal
Multi-Objective Optimization [15.681236469530397]
Multi-modal multi-objective optimization problems (MMMOPs) have multiple subsets within the Pareto-optimal Set.
Prevalent multi-objective evolutionary algorithms are not purely designed to search for multiple solution subsets, whereas algorithms designed for MMMOPs demonstrate degraded performance in the objective space.
This motivates the design of better algorithms for addressing MMMOPs.
arXiv Detail & Related papers (2020-06-04T03:18:47Z) - A Decomposition-based Large-scale Multi-modal Multi-objective
Optimization Algorithm [9.584279193016522]
We propose an efficient multi-modal multi-objective optimization algorithm based on the widely used MOEA/D algorithm.
Experimental results show that our proposed algorithm can effectively preserve the diversity of solutions in the decision space.
arXiv Detail & Related papers (2020-04-21T09:18:54Z) - Pareto Multi-Task Learning [53.90732663046125]
Multi-task learning is a powerful method for solving multiple correlated tasks simultaneously.
It is often impossible to find one single solution to optimize all the tasks, since different tasks might conflict with each other.
Recently, a novel method has been proposed to find one single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multi-objective optimization; a generic sketch of this casting follows below.
arXiv Detail & Related papers (2019-12-30T08:58:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.