Multiform Evolution for High-Dimensional Problems with Low Effective
Dimensionality
- URL: http://arxiv.org/abs/2401.00168v1
- Date: Sat, 30 Dec 2023 08:13:47 GMT
- Title: Multiform Evolution for High-Dimensional Problems with Low Effective
Dimensionality
- Authors: Yaqing Hou, Mingyang Sun, Abhishek Gupta, Yaochu Jin, Haiyin Piao,
Hongwei Ge, Qiang Zhang
- Abstract summary: We scale evolutionary algorithms to high-dimensional optimization problems that deceptively possess a low effective dimensionality.
A multiform evolutionary algorithm is developed for unifying all formulations into a single multi-task setting.
The resultant joint optimization enables the target task to efficiently reuse solutions evolved across various low-dimensional searches.
- Score: 36.44425198302701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we scale evolutionary algorithms to high-dimensional
optimization problems that deceptively possess a low effective dimensionality
(certain dimensions do not significantly affect the objective function). To
this end, an instantiation of the multiform optimization paradigm is presented,
where multiple low-dimensional counterparts of a target high-dimensional task
are generated via random embeddings. Since the exact relationship between the
auxiliary (low-dimensional) tasks and the target is a priori unknown, a
multiform evolutionary algorithm is developed for unifying all formulations
into a single multi-task setting. The resultant joint optimization enables the
target task to efficiently reuse solutions evolved across various
low-dimensional searches via cross-form genetic transfers, hence speeding up
overall convergence characteristics. To validate the overall efficacy of our
proposed algorithmic framework, comprehensive experimental studies are carried
out on well-known continuous benchmark functions as well as a set of practical
problems in the hyper-parameter tuning of machine learning models and deep
learning models in classification tasks and Predator-Prey games, respectively.
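The central mechanism behind these low-dimensional counterparts is the random embedding: search proceeds over a few variables, which a fixed random matrix maps back into the original high-dimensional space before every objective evaluation. The following is a minimal sketch of that idea, not the authors' implementation; the toy objective, the chosen dimensions, and the simple truncation-selection evolutionary loop are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1000          # ambient (high) dimensionality
EFFECTIVE = 2     # only these dimensions matter (assumption for illustration)

def objective(x):
    """High-dimensional objective with low effective dimensionality:
    only the first EFFECTIVE coordinates influence the value."""
    return np.sum(x[:EFFECTIVE] ** 2)

def evolve_in_embedding(d, generations=200, pop_size=20, sigma=0.3):
    """Evolve in a d-dimensional space; evaluate via a fixed random embedding A."""
    A = rng.normal(size=(D, d)) / np.sqrt(d)   # random embedding R^d -> R^D
    pop = rng.normal(size=(pop_size, d))
    for _ in range(generations):
        children = pop + sigma * rng.normal(size=pop.shape)     # Gaussian mutation
        union = np.vstack([pop, children])
        fitness = np.array([objective(A @ y) for y in union])   # evaluate in R^D
        pop = union[np.argsort(fitness)[:pop_size]]             # truncation selection
    best = pop[0]
    return A @ best, objective(A @ best)

# Several low-dimensional "forms" of the same target task; in the paper these
# are optimized jointly with cross-form genetic transfer, here just independently.
for d in (2, 5, 10):
    x, f = evolve_in_embedding(d)
    print(f"embedding dim {d:2d}: best objective {f:.3e}")
```

In the paper, the several forms with different embedding dimensionalities are unified in one multi-task search and exchange solutions, rather than being run independently as in this sketch.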
Related papers
- Large-scale Multi-objective Feature Selection: A Multi-phase Search Space Shrinking Approach [0.27624021966289597]
Feature selection is a crucial step in machine learning, especially for high-dimensional datasets.
This paper proposes a novel large-scale multi-objective evolutionary algorithm based on search space shrinking, termed LMSSS.
The effectiveness of the proposed algorithm is demonstrated through comprehensive experiments on 15 large-scale datasets.
arXiv Detail & Related papers (2024-10-13T23:06:10Z)
- Towards Multi-Objective High-Dimensional Feature Selection via Evolutionary Multitasking [63.91518180604101]
This paper develops a novel EMT framework for high-dimensional feature selection problems, namely MO-FSEMT.
A task-specific knowledge transfer mechanism is designed to leverage the advantage information of each task, enabling the discovery and effective transmission of high-quality solutions.
arXiv Detail & Related papers (2024-01-03T06:34:39Z)
- Rank-Based Learning and Local Model Based Evolutionary Algorithm for High-Dimensional Expensive Multi-Objective Problems [1.0499611180329806]
The proposed algorithm consists of three parts: rank-based learning, hyper-volume-based non-dominated search, and local search in the relatively sparse objective space.
Experimental results on benchmark problems and a real-world application to geothermal reservoir heat extraction optimization demonstrate that the proposed algorithm delivers superior performance.
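For readers unfamiliar with the terminology, a non-dominated (Pareto-optimal) solution is one that no other solution matches or improves in every objective while strictly improving at least one. A generic dominance filter, sketched below for minimization, is the basic building block such searches rely on; it is not the cited paper's hypervolume-based variant, and the example values are illustrative.

```python
import numpy as np

def non_dominated(F):
    """Return indices of non-dominated rows of F (minimization).
    F has shape (n_points, n_objectives)."""
    n = F.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            # j dominates i if it is no worse in all objectives and better in one
            if j != i and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(non_dominated(F))   # [0, 1, 3]; the point [3, 3] is dominated by [2, 2]
```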
arXiv Detail & Related papers (2023-04-19T06:25:04Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
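The randomized-prior recipe pairs each trainable model with a fixed, randomly initialized "prior" model and fits every ensemble member on a bootstrap resample; disagreement across members then serves as the uncertainty estimate driving BO. Below is a toy sketch of that recipe using random-feature linear models as stand-ins for the neural architectures; the helper names, constants, and data are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_feature_model(n_features=100, in_dim=1):
    """A small random-feature regressor used both as the fixed 'prior'
    and as the trainable part (simplified stand-in for a neural network)."""
    W = rng.normal(size=(n_features, in_dim))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return lambda X: np.cos(X @ W.T + b)

def fit_member(X, y, beta=1.0, lam=1e-3):
    """One ensemble member: fixed random prior + trainable part fitted on a
    bootstrap resample of the data (randomized-prior trick, simplified)."""
    idx = rng.integers(0, len(X), size=len(X))              # bootstrap resample
    Xb, yb = X[idx], y[idx]
    prior_feats = random_feature_model()
    prior_w = rng.normal(size=prior_feats(Xb).shape[1])     # fixed, never trained
    prior = lambda Z: beta * prior_feats(Z) @ prior_w
    train_feats = random_feature_model()
    Phi = train_feats(Xb)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                        Phi.T @ (yb - prior(Xb)))           # fit the residual
    return lambda Z: train_feats(Z) @ w + prior(Z)

X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
members = [fit_member(X, y) for _ in range(10)]

X_test = np.linspace(-5, 5, 7).reshape(-1, 1)
preds = np.stack([m(X_test) for m in members])
print("mean:", preds.mean(axis=0))
print("std :", preds.std(axis=0))   # ensemble spread ~ epistemic uncertainty
```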
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
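Algorithm unrolling simply keeps every iteration of the inner solver on the automatic-differentiation tape so the outer loss can be differentiated through all of them. A minimal sketch of that baseline follows (not the paper's folded-optimization analysis), using PyTorch and an assumed quadratic inner problem.

```python
import torch

# Inner problem: x*(c) = argmin_x 0.5 * x^T Q x - c^T x, solved approximately by
# unrolling T gradient-descent steps; autograd then differentiates the outer
# loss through every unrolled step. Q, c, and the outer loss are illustrative.
Q = torch.tensor([[3.0, 0.5], [0.5, 1.0]])
c = torch.tensor([1.0, -2.0], requires_grad=True)   # parameter we differentiate w.r.t.

def unrolled_solver(c, steps=50, lr=0.2):
    x = torch.zeros(2)
    for _ in range(steps):                 # every step stays on the autograd tape
        grad = Q @ x - c
        x = x - lr * grad
    return x

x_hat = unrolled_solver(c)
outer_loss = ((x_hat - torch.tensor([1.0, 1.0])) ** 2).sum()
outer_loss.backward()                      # backprop through the unrolled steps
print("d outer_loss / d c:", c.grad)
```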
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- An Effective and Efficient Evolutionary Algorithm for Many-Objective Optimization [2.5594423685710814]
We develop an effective evolutionary algorithm (E3A) that can handle various many-objective problems.
In E3A, a novel population maintenance method inspired by SDE (shift-based density estimation) is proposed.
We conduct extensive experiments and show that E3A performs better than 11 state-of-the-art many-objective evolutionary algorithms.
arXiv Detail & Related papers (2022-05-31T15:35:46Z)
- A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is general enough that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
The proposed framework features faster convergence and stronger resilience to various Pareto front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
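For reference, the DE/rand/1/bin scheme that such improvements build on mutates each individual with a scaled difference of two other population members and recombines via binomial crossover. The sketch below is plain textbook DE under assumed settings, not EOS itself, which layers self-adaptation, multiple populations, parallelism, and constraint handling on top of this scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    return np.sum(x ** 2)

def de_rand_1_bin(f, dim=10, pop_size=30, F=0.7, CR=0.9, generations=300):
    """Plain DE/rand/1/bin for an unconstrained real-valued objective."""
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])            # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                    # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])            # binomial crossover
            f_trial = f(trial)
            if f_trial <= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = de_rand_1_bin(sphere)
print(f"best objective after DE: {best_f:.3e}")
```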
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters typically requires first learning a predictive model that estimates those parameters and then solving the problem using the estimated values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions that lead to better downstream decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
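One way to picture a low-dimensional surrogate of a large optimization problem is to reparameterize the many decision variables through a small set of latent variables and solve only the reduced problem. The sketch below uses a fixed random projection for that reparameterization; in the cited work the mapping is learned end-to-end inside the decision-focused pipeline, so the problem sizes and the least-squares setup here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 500     # original number of decision variables
k = 10      # surrogate dimensionality

# A large convex problem: minimize 0.5 * ||A x - b||^2 over x in R^n.
A = rng.normal(size=(200, n))
b = rng.normal(size=200)

# Low-dimensional surrogate: reparameterize x = P y with y in R^k. Here P is a
# fixed random matrix; in the cited paper the reparameterization is learned so
# that the small problem preserves downstream decision quality.
P = rng.normal(size=(n, k)) / np.sqrt(k)

# The surrogate problem min_y 0.5 * ||(A P) y - b||^2 is tiny; solve it directly.
AP = A @ P
y_star, *_ = np.linalg.lstsq(AP, b, rcond=None)
x_surrogate = P @ y_star

# The surrogate solution is cheap but approximate; the full problem is exact but large.
print("surrogate objective :", 0.5 * np.sum((A @ x_surrogate - b) ** 2))
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print("full-problem optimum:", 0.5 * np.sum((A @ x_full - b) ** 2))
```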
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.