Experience-Based Evolutionary Algorithms for Expensive Optimization
- URL: http://arxiv.org/abs/2304.04166v1
- Date: Sun, 9 Apr 2023 05:47:14 GMT
- Title: Experience-Based Evolutionary Algorithms for Expensive Optimization
- Authors: Xunzhao Yu, Yan Wang, Ling Zhu, Dimitar Filev, Xin Yao
- Abstract summary: We argue that hard optimization problems could be tackled efficiently by making better use of experiences gained in related problems.
We propose an experience-based surrogate-assisted evolutionary algorithm (SAEA) framework to enhance the optimization efficiency of expensive problems.
- Score: 8.466374531816427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization algorithms are very different from human optimizers. A human
being gains more experience through problem-solving, which helps her/him solve
new, unseen problems; yet an optimization algorithm never gains any experience
from solving more problems. In recent years, efforts have been made towards
endowing optimization algorithms with the ability to learn from experience,
which is regarded as experience-based optimization. In this paper, we
argue that hard optimization problems could be tackled efficiently by making
better use of experiences gained in related problems. We demonstrate our ideas
in the context of expensive optimization, where we aim to find a near-optimal
solution to an expensive optimization problem with as few fitness evaluations
as possible. To achieve this, we propose an experience-based surrogate-assisted
evolutionary algorithm (SAEA) framework to enhance the optimization efficiency
of expensive problems, where experiences are gained across related expensive
tasks via a novel meta-learning method. These experiences serve as the
task-independent parameters of a deep kernel learning surrogate, then the
solutions sampled from the target task are used to adapt task-specific
parameters for the surrogate. With the help of experience learning, competitive
regression-based surrogates can be initialized using only $1d$ solutions from
the target task ($d$ is the dimension of the decision space). Our experimental
results on expensive multi-objective and constrained optimization problems
demonstrate that experience gained from related tasks is beneficial for saving
evaluation budget on the target problem.
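The framework described in the abstract lends itself to a compact sketch. Below is a minimal, illustrative PyTorch implementation (an assumption-laden sketch, not the authors' code): a neural feature extractor `phi` stands in for the meta-learned, task-independent parameters of the deep kernel, while the kernel lengthscale and noise play the role of the task-specific parameters adapted on the roughly $1d$ solutions sampled from the target task. The cross-task meta-learning loop that would train `phi` is omitted.

```python
# Minimal deep-kernel-learning surrogate sketch (illustrative only).
# phi: task-independent parameters (meta-learned across related tasks).
# log_lengthscale / log_noise: task-specific parameters adapted per task.
import torch
import torch.nn as nn

class DeepKernelSurrogate(nn.Module):
    def __init__(self, d, feat_dim=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim))
        self.log_lengthscale = nn.Parameter(torch.zeros(1))
        self.log_noise = nn.Parameter(torch.tensor(-2.0))

    def kernel(self, X1, X2):
        # RBF kernel on learned features: the shared extractor phi feeds a
        # kernel whose lengthscale is task-specific.
        z1, z2 = self.phi(X1), self.phi(X2)
        return torch.exp(-0.5 * torch.cdist(z1, z2) ** 2
                         / torch.exp(self.log_lengthscale) ** 2)

    def fit_task_specific(self, X, y, steps=100, lr=0.05):
        # Adapt only the kernel hyperparameters on the ~1d target samples,
        # keeping the meta-learned feature extractor frozen.
        opt = torch.optim.Adam([self.log_lengthscale, self.log_noise], lr=lr)
        for _ in range(steps):
            K = self.kernel(X, X) + torch.exp(self.log_noise) * torch.eye(len(X))
            L = torch.linalg.cholesky(K)
            alpha = torch.cholesky_solve(y.unsqueeze(1), L)
            # Negative log marginal likelihood of the GP (up to a constant).
            nll = 0.5 * y @ alpha.squeeze(1) + torch.diag(L).log().sum()
            opt.zero_grad(); nll.backward(); opt.step()
        self.X_train, self.y_train = X, y

    def predict(self, Xq):
        # GP posterior mean at query points Xq.
        K = self.kernel(self.X_train, self.X_train) \
            + torch.exp(self.log_noise) * torch.eye(len(self.X_train))
        alpha = torch.cholesky_solve(self.y_train.unsqueeze(1),
                                     torch.linalg.cholesky(K))
        return self.kernel(Xq, self.X_train) @ alpha

# Usage: initialize from only d solutions of a toy target task (d = 8).
d = 8
surr = DeepKernelSurrogate(d)
X = torch.rand(d, d)
y = (X ** 2).sum(dim=1)                 # stand-in expensive objective
surr.fit_task_specific(X, y)
with torch.no_grad():
    print(surr.predict(torch.rand(3, d)).squeeze(1))
```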
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
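To make the contrast concrete, here is a hedged toy sketch (all names, the problem, and the nearest-centroid stand-in for the joint model are illustrative assumptions, not the paper's method): a "pick the cheapest of $k$ options" problem solved by two-stage Predict-Then-Optimize versus a direct feature-to-decision model.

```python
# Toy contrast: two-stage prediction + optimization vs. a joint model.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 5
features = rng.normal(size=(n, 3))
W_true = rng.normal(size=(3, k))
costs = features @ W_true + 0.1 * rng.normal(size=(n, k))

# Two-stage Predict-Then-Optimize: fit a cost predictor, then optimize.
W_hat, *_ = np.linalg.lstsq(features, costs, rcond=None)
dec_two_stage = (features @ W_hat).argmin(axis=1)

# Joint model: map features straight to the decision (a nearest-centroid
# classifier on the optimal choices stands in for a trained joint model).
labels = costs.argmin(axis=1)
centroids = np.stack([features[labels == j].mean(axis=0) for j in range(k)])
dec_joint = ((features[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)

# Regret of each approach relative to the oracle decision.
oracle = costs.min(axis=1).mean()
for name, dec in [("two-stage", dec_two_stage), ("joint", dec_joint)]:
    print(name, costs[np.arange(n), dec].mean() - oracle)
```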
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information [48.784330281177446]
Recent works in learning-integrated optimization have shown promise in settings where the optimization is only partially observed or where general-purpose optimizers perform poorly without expert tuning.
We propose using a smooth and learnable Landscape Surrogate as a replacement for $f \circ \mathbf{g}$.
This surrogate, learnable by neural networks, can be computed faster than the solver $\mathbf{g}$, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization.
arXiv Detail & Related papers (2023-07-18T04:29:16Z)
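A hedged sketch of the idea (the toy top-$k$ "solver" and all names are assumptions, not the paper's benchmarks): fit a smooth network $M \approx f \circ \mathbf{g}$ by regression on sampled pipeline outputs, then exploit its dense gradients to optimize the solver's inputs.

```python
# Landscape-surrogate sketch: replace a non-differentiable f(g(y)) pipeline
# with a smooth regression model M(y) for gradient-based training.
import torch
import torch.nn as nn

def g(y):                                  # toy "solver": top-2 items by score
    return torch.topk(y, k=2, dim=-1).indices

def f_of_g(y, values):                     # decision loss of the solver output
    return -values.gather(-1, g(y)).sum(dim=-1)

torch.manual_seed(0)
values = torch.rand(8)
ys = torch.randn(512, 8)
losses = f_of_g(ys, values.expand(512, 8))

# Fit the smooth landscape surrogate M(y) ~ f(g(y)) by regression.
M = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(M.parameters(), lr=1e-2)
for _ in range(300):
    loss = ((M(ys).squeeze(-1) - losses) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The surrogate provides dense gradients: tune y through M instead of g.
y = torch.zeros(1, 8, requires_grad=True)
y_opt = torch.optim.Adam([y], lr=0.1)
for _ in range(100):
    y_opt.zero_grad(); M(y).sum().backward(); y_opt.step()
print(g(y.detach()))                       # items the tuned scores select
```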
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
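The unrolling strategy itself is easy to illustrate with a minimal sketch (the toy inner problem is an assumption; the paper's folded-optimization system is not reproduced): run an inner gradient-descent solver inside autograd and differentiate the returned solution with respect to a parameter of the inner objective.

```python
# Algorithm unrolling: d(solution)/d(parameter) flows through every
# iteration of the inner solver via automatic differentiation.
import torch

def inner_solver(theta, steps=50, lr=0.1):
    # Inner problem: min_x (x - theta)^2 + 0.1 * x^4, solved by unrolled GD.
    x = torch.zeros_like(theta)
    for _ in range(steps):
        grad = 2 * (x - theta) + 0.4 * x ** 3
        x = x - lr * grad          # plain tensor ops, so the graph is kept
    return x

theta = torch.tensor(1.5, requires_grad=True)
x_star = inner_solver(theta)
x_star.backward()                  # backprop through all unrolled steps
print(x_star.item(), theta.grad.item())
```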
- A Data-Driven Evolutionary Transfer Optimization for Expensive Problems in Dynamic Environments [9.098403098464704]
Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems.
This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems.
Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-11-05T11:19:50Z)
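One hedged reading of the transfer idea, as a toy sketch (the quadratic surrogate and the weighting scheme are assumptions, not the paper's framework): warm-start the new environment's surrogate from down-weighted old-environment evaluations, so that a handful of fresh samples suffices after the environment changes.

```python
# Transfer for dynamic surrogate-assisted optimization: reuse decayed
# old-environment data to warm-start the new environment's surrogate.
import numpy as np

def fit_weighted_quadratic(X, y, w):
    # Weighted least-squares fit of a simple quadratic surrogate.
    Phi = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    A = Phi * w[:, None]
    coef, *_ = np.linalg.lstsq(A.T @ Phi, A.T @ y, rcond=None)
    return lambda Xq: np.hstack([np.ones((len(Xq), 1)), Xq, Xq ** 2]) @ coef

rng = np.random.default_rng(1)
X_old = rng.uniform(-2, 2, size=(50, 1))
y_old = (X_old[:, 0] - 0.5) ** 2            # previous environment
X_new = rng.uniform(-2, 2, size=(5, 1))
y_new = (X_new[:, 0] - 0.8) ** 2            # optimum has drifted

X = np.vstack([X_old, X_new])
y = np.concatenate([y_old, y_new])
w = np.concatenate([np.full(50, 0.2), np.full(5, 1.0)])  # decay old data
surrogate = fit_weighted_quadratic(X, y, w)
grid = np.linspace(-2, 2, 201)[:, None]
print("surrogate optimum:", grid[surrogate(grid).argmin(), 0])
```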
- GPSAF: A Generalized Probabilistic Surrogate-Assisted Framework for Constrained Single- and Multi-objective Optimization [7.8140593450932965]
This paper proposes a generalized probabilistic surrogate-assisted framework (GPSAF).
GPSAF is applicable to a broad category of unconstrained and constrained, single- and multi-objective optimization algorithms.
arXiv Detail & Related papers (2022-04-06T13:22:30Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce variance, we also explore methods to stabilize the amalgamation process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
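A minimal sketch of one plausible differentiable amalgamation mechanism (a learned softmax mixture over teacher update rules; the paper's three mechanisms and its perturbation-based stabilization are not reproduced here): the mixture weights are meta-trained by unrolling the optimizee and backpropagating its final loss.

```python
# Amalgamating two teacher update rules into one student by gradient descent.
import torch

torch.manual_seed(0)
logits = torch.zeros(2, requires_grad=True)   # mixture weights over teachers
meta_opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(200):
    w = torch.softmax(logits, dim=0)
    x = torch.full((4,), 3.0)                 # optimizee start
    for _ in range(10):                       # unrolled optimizee run
        g = 2 * x                             # gradient of f(x) = ||x||^2
        # Teachers: plain SGD and a soft-sign (tanh) update rule.
        update = w[0] * 0.1 * g + w[1] * 0.05 * torch.tanh(g)
        x = x - update
    meta_loss = (x ** 2).sum()                # final optimizee loss
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()

print(torch.softmax(logits, dim=0))           # learned teacher mixture
```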
- Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
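A hedged DeepSets-style sketch of the idea (the toy task and architecture are assumptions, not the paper's models): train a permutation-invariant set function to output the optimum directly, here for the 1-mean problem, whose analytic optimum (the set mean) provides supervision.

```python
# A trainable parametric set function standing in for an iterative solver.
import torch
import torch.nn as nn

class SetSolver(nn.Module):
    """Permutation-invariant set function mapping a point set to a solution."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, X):                       # X: (batch, set_size, dim)
        return self.rho(self.phi(X).mean(dim=1))

torch.manual_seed(0)
net = SetSolver()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    X = torch.randn(64, 10, 2) + torch.randn(64, 1, 2)   # random point sets
    target = X.mean(dim=1)                      # analytic optimum (1-mean)
    loss = ((net(X) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())  # once trained, each "solve" is a single forward pass
```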
- PAMELI: A Meta-Algorithm for Computationally Expensive Multi-Objective Optimization Problems [0.0]
The proposed algorithm is based on solving a set of surrogate problems defined by models of the real one.
Our algorithm also performs a meta-search for optimal surrogate models and navigation strategies for the optimization landscape.
arXiv Detail & Related papers (2021-03-19T11:18:03Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
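A toy sketch of the low-dimensional-surrogate idea (the reparameterization and problem family are illustrative assumptions, not the paper's quality-aware pipeline): solve $\min_z f(Pz)$ in a learned compact subspace instead of the full high-dimensional problem, training the basis $P$ end to end on the decision quality it achieves.

```python
# Compact surrogate: optimize a 5-dim reparameterization of a 100-dim
# problem, learning the basis P by backprop through the unrolled solve.
import torch

torch.manual_seed(0)
D, k = 100, 5
P = torch.nn.Parameter(0.1 * torch.randn(D, k))   # learned compact basis
meta = torch.optim.Adam([P], lr=1e-2)

def f(x, c):                                      # family of large problems
    return (x ** 2).sum(-1) + (c * x).sum(-1)

for _ in range(200):
    c = torch.randn(8, D)                         # sample problem instances
    z = torch.zeros(8, k)
    for _ in range(20):                           # unrolled low-dim solve
        x = z @ P.T
        grad_z = (2 * x + c) @ P                  # d f / d z by chain rule
        z = z - 0.1 * grad_z
    quality = f(z @ P.T, c).mean()                # decision quality reached
    meta.zero_grad(); quality.backward(); meta.step()

print(quality.item())
```

As with the other sketches above, this only gestures at the mechanism; per the summary, the paper couples such a low-dimensional surrogate to a predictive model and trains for decision quality end to end.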