Enhanced Innovized Repair Operator for Evolutionary Multi- and
Many-objective Optimization
- URL: http://arxiv.org/abs/2011.10760v1
- Date: Sat, 21 Nov 2020 10:29:15 GMT
- Title: Enhanced Innovized Repair Operator for Evolutionary Multi- and
Many-objective Optimization
- Authors: Sukrit Mittal and Dhish Kumar Saxena and Kalyanmoy Deb and Erik
Goodman
- Abstract summary: "Innovization" is a task of learning common relationships among some or all of the Pareto-optimal (PO) solutions in optimisation problems.
Recent studies have shown that a chronological sequence of non-dominated solutions also possess salient patterns that can be used to learn problem features.
We propose a machine-learning- (ML-) assisted modelling approach that learns the modifications in design variables needed to advance population members towards the Pareto-optimal set.
- Score: 5.885238773559015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: "Innovization" is a task of learning common relationships among some or all
of the Pareto-optimal (PO) solutions in multi- and many-objective optimization
problems. Recent studies have shown that a chronological sequence of
non-dominated solutions obtained in consecutive iterations during an
optimization run also possesses salient patterns that can be used to learn
problem features to help create new and improved solutions. In this paper, we
propose a machine-learning- (ML-) assisted modelling approach that learns the
modifications in design variables needed to advance population members towards
the Pareto-optimal set. We then propose to use the resulting ML model as an
additional innovized repair (IR2) operator to be applied on offspring solutions
created by the usual genetic operators, as a novel means of improving their
convergence properties. In this paper, the well-known random forest (RF) method
is used as the ML model and is integrated with various evolutionary multi- and
many-objective optimization algorithms, including NSGA-II, NSGA-III, and
MOEA/D. On several test problems ranging from two to five objectives, we
demonstrate improvement in convergence behaviour using the proposed IR2-RF
operator. Since the operator does not demand any additional solution
evaluations, but instead uses the history of gradual and progressive improvements
in solutions over generations, the proposed ML-based optimization opens up a
new direction of optimization algorithm development with advances in AI and ML
approaches.
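The operator described in the abstract can be pictured with a minimal sketch: collect, over generations, pairs of a solution's variables and the variable-wise change observed as it progressed towards the non-dominated front, fit a random forest on those pairs, and use the prediction to nudge offspring. The class name, pairing scheme, training frequency, and repair step below are illustrative assumptions, not the paper's exact IR2-RF implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class InnovizedRepairRF:
    """Hedged sketch of an IR2-style repair operator (not the paper's exact code)."""

    def __init__(self, n_trees=100):
        self.model = RandomForestRegressor(n_estimators=n_trees)
        self.X_hist, self.dX_hist = [], []

    def record(self, x_old, x_new):
        # x_old: variables of a solution at an earlier generation;
        # x_new: the improved solution it is paired with (pairing scheme assumed).
        self.X_hist.append(np.asarray(x_old, dtype=float))
        self.dX_hist.append(np.asarray(x_new, dtype=float) - np.asarray(x_old, dtype=float))

    def fit(self):
        # Learn the variable-wise modifications from the archived history.
        self.model.fit(np.vstack(self.X_hist), np.vstack(self.dX_hist))

    def repair(self, offspring, xl, xu):
        # Nudge an offspring created by the usual genetic operators and clip
        # to the variable bounds; no extra solution evaluation is needed.
        delta = self.model.predict(np.atleast_2d(offspring))
        return np.clip(np.atleast_2d(offspring) + delta, xl, xu)[0]
```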
Related papers
- Model Uncertainty in Evolutionary Optimization and Bayesian Optimization: A Comparative Analysis [5.6787965501364335]
Black-box optimization problems are common in many real-world applications.
These problems require optimization through input-output interactions without access to internal workings.
Evolutionary optimization and Bayesian optimization, two widely used gradient-free techniques, are employed to address such challenges.
This paper aims to elucidate the similarities and differences in the utilization of model uncertainty between these two methods.
arXiv Detail & Related papers (2024-03-21T13:59:19Z)
- Evolutionary Alternating Direction Method of Multipliers for Constrained Multi-Objective Optimization with Unknown Constraints [17.392113376816788]
Constrained multi-objective optimization problems (CMOPs) pervade real-world applications in science, engineering, and design.
We present a first-of-its-kind evolutionary optimization framework, inspired by the principles of the alternating direction method of multipliers (ADMM), which decouples objective and constraint functions.
Our framework tackles CMOPs with unknown constraints by reformulating the original problem into an additive form of two subproblems, each of which is allotted a dedicated evolutionary population.
arXiv Detail & Related papers (2024-01-02T00:38:20Z)
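For background, the textbook ADMM splitting that inspires the framework above separates two additive terms into subproblems that are solved alternately; the paper's population-based treatment of objectives and unknown constraints goes beyond this scalar form.

$$\min_{x,z}\; f(x) + g(z) \quad \text{s.t.}\quad x = z,\qquad L_\rho(x,z,\lambda) = f(x) + g(z) + \lambda^{\top}(x-z) + \tfrac{\rho}{2}\lVert x-z\rVert_2^2,$$
$$x^{k+1} = \arg\min_x L_\rho(x, z^k, \lambda^k),\quad z^{k+1} = \arg\min_z L_\rho(x^{k+1}, z, \lambda^k),\quad \lambda^{k+1} = \lambda^k + \rho\,(x^{k+1} - z^{k+1}).$$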
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
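As background for the linear-system view mentioned above: if the unrolled solver converges to a fixed point $x^\star(\theta)$ of an update map $T$, implicit differentiation expresses the backward pass as the solution of a linear system. This is the standard identity such analyses build on, not the paper's specific folded construction.

$$x^\star = T(x^\star,\theta)\ \Rightarrow\ \Big(I - \tfrac{\partial T}{\partial x}\Big)^{\!\top} v = \nabla_{x^\star}\ell,\qquad \nabla_\theta \ell = \Big(\tfrac{\partial T}{\partial \theta}\Big)^{\!\top} v,$$

so backpropagating through the solver amounts to solving this linear system, which can itself be done with an iterative method.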
- Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend with two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) an active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
Results show that adaption with standard NSGA-II greatly reduces the number of evaluations required for optimization to a target goal, while the proposed variants further reduce the adaptation costs.
arXiv Detail & Related papers (2023-05-31T12:07:50Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
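A minimal sketch of zeroth-order sign-based gradient descent as referenced above: the gradient is estimated from function queries along random directions and only its sign is used for the update. The step size, query count, and toy objective below are placeholder assumptions, not the molecular objectives studied in the paper.

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.01, mu=0.05, n_queries=20, n_iters=100, rng=None):
    """Zeroth-order sign-based gradient descent (generic sketch, minimization).

    f         : black-box objective, maps a 1-D array to a float
    mu        : smoothing radius for the finite-difference estimate
    n_queries : number of random probe directions per gradient estimate
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        grad_est = np.zeros_like(x)
        fx = f(x)
        for _ in range(n_queries):
            u = rng.standard_normal(x.shape)            # random probe direction
            grad_est += (f(x + mu * u) - fx) / mu * u   # directional difference
        grad_est /= n_queries
        x -= lr * np.sign(grad_est)                     # sign-only update
    return x

# Usage example on a toy quadratic
x_opt = zo_sign_gd(lambda x: float(np.sum((x - 1.0) ** 2)), x0=np.zeros(5))
```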
- A novel multiobjective evolutionary algorithm based on decomposition and multi-reference points strategy [14.102326122777475]
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a significantly promising approach for solving multiobjective optimization problems (MOPs).
We propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points.
arXiv Detail & Related papers (2021-10-27T02:07:08Z)
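For context, the Pascoletti-Serafini scalarization referenced above converts a multiobjective problem $\min_{x\in X} f(x)$, $f: X \to \mathbb{R}^m$, into a single-objective problem governed by a reference point $a \in \mathbb{R}^m$ and a direction $r \in \mathbb{R}^m_{\ge 0}\setminus\{0\}$; how the paper's multi-reference-point strategy chooses these parameters is not shown here.

$$\min_{t\in\mathbb{R},\; x\in X}\; t \quad\text{s.t.}\quad a + t\,r - f(x) \ge 0 \ \text{(componentwise)}.$$

Sweeping the reference point $a$ (and direction $r$) recovers different Pareto-optimal solutions.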
- Better call Surrogates: A hybrid Evolutionary Algorithm for Hyperparameter optimization [18.359749929678635]
We propose a surrogate-assisted evolutionary algorithm (EA) for hyperparameter optimization of machine learning (ML) models.
The proposed STEADE model initially estimates the objective function landscape using Radial Basis Function interpolation, and then transfers the knowledge to an EA technique called Differential Evolution.
We empirically evaluate our model on the hyperparameter optimization problems as a part of the black-box optimization challenge at NeurIPS 2020 and demonstrate the improvement brought about by STEADE over the vanilla EA.
arXiv Detail & Related papers (2020-12-11T16:19:59Z)
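A hedged sketch of the surrogate-then-evolve idea described above, using SciPy's radial basis function interpolator and differential evolution; STEADE's actual knowledge transfer between the surrogate phase and DE is more involved than simply optimizing the fitted surrogate.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def surrogate_assisted_de(objective, bounds, n_init=40, rng=None):
    """Fit an RBF surrogate on initial samples, then run DE on the surrogate.

    objective : expensive black-box function, 1-D array -> float
    bounds    : list of (low, high) tuples, one per dimension
    """
    rng = np.random.default_rng() if rng is None else rng
    lows, highs = np.array(bounds).T
    # Initial design: random samples evaluated with the true objective
    X = rng.uniform(lows, highs, size=(n_init, len(bounds)))
    y = np.array([objective(x) for x in X])
    surrogate = RBFInterpolator(X, y)
    # Cheap search on the surrogate landscape with differential evolution
    result = differential_evolution(
        lambda x: float(surrogate(x.reshape(1, -1))[0]), bounds, seed=0)
    # Re-check the candidate on the true objective before returning it
    return result.x, objective(result.x)

best_x, best_f = surrogate_assisted_de(lambda x: float(np.sum(x ** 2)),
                                        bounds=[(-5.0, 5.0)] * 3)
```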
- Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
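EOS builds on Differential Evolution; for readers unfamiliar with the base algorithm, one generation of the classic DE/rand/1/bin scheme that such improvements start from looks roughly as follows. EOS's self-adaptive parameter control, multi-population structure, and constraint handling are not shown.

```python
import numpy as np

def de_rand_1_bin_step(pop, fitness, objective, F=0.8, CR=0.9, rng=None):
    """One generation of the classic DE/rand/1/bin scheme (minimization sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])      # differential mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                   # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])         # binomial crossover
        f_trial = objective(trial)
        if f_trial <= fitness[i]:                       # greedy one-to-one selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```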
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- dMFEA-II: An Adaptive Multifactorial Evolutionary Algorithm for Permutation-based Discrete Optimization Problems [6.943742860591444]
We propose the first adaptation of the recently introduced Multifactorial Evolutionary Algorithm II (MFEA-II) to permutation-based discrete environments.
The performance of the proposed solver has been assessed over 5 different multitasking setups.
arXiv Detail & Related papers (2020-04-14T14:42:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.