Transfer Learning Based Co-surrogate Assisted Evolutionary Bi-objective
Optimization for Objectives with Non-uniform Evaluation Times
- URL: http://arxiv.org/abs/2108.13339v1
- Date: Mon, 30 Aug 2021 16:10:15 GMT
- Title: Transfer Learning Based Co-surrogate Assisted Evolutionary Bi-objective
Optimization for Objectives with Non-uniform Evaluation Times
- Authors: Xilu Wang, Yaochu Jin, Sebastian Schmitt, Markus Olhofer
- Abstract summary: Multiobjective evolutionary algorithms assume that each objective function can be evaluated within the same period of time.
A co-surrogate is adopted to model the functional relationship between the fast and slow objective functions.
A transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective.
- Score: 9.139734850798124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing multiobjective evolutionary algorithms (MOEAs) implicitly assume
that each objective function can be evaluated within the same period of time.
Typically, this is untenable in many real-world optimization scenarios where
evaluation of different objectives involves different computer simulations or
physical experiments with distinct time complexity. To address this issue, a
transfer learning scheme based on surrogate-assisted evolutionary algorithms
(SAEAs) is proposed, in which a co-surrogate is adopted to model the functional
relationship between the fast and slow objective functions and a transferable
instance selection method is introduced to acquire useful knowledge from the
search process of the fast objective. Our experimental results on DTLZ and UF
test suites demonstrate that the proposed algorithm is competitive for solving
bi-objective optimization where objectives have non-uniform evaluation times.
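As a concrete illustration of the two ideas mentioned in the abstract, the sketch below builds a Gaussian-process co-surrogate that predicts the slow objective from a decision vector paired with its fast-objective value, and uses a simple uncertainty threshold as a stand-in for transferable instance selection. The toy objectives f_fast and f_slow, the sample sizes, and the median-based filter are illustrative assumptions, not the algorithm proposed in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy bi-objective problem (hypothetical): f_fast is cheap to evaluate,
# f_slow stands in for an expensive simulation or physical experiment.
def f_fast(x):
    return float(np.sum(x ** 2))

def f_slow(x):
    return float(np.sum((x - 1.0) ** 2))

rng = np.random.default_rng(0)
dim = 5

# The fast objective can be sampled densely; the slow one only sparsely.
X = rng.uniform(-2.0, 2.0, size=(200, dim))
y_fast = np.array([f_fast(x) for x in X])
n_slow = 20                                   # budget of slow evaluations
y_slow = np.array([f_slow(x) for x in X[:n_slow]])

# Co-surrogate: regress the slow objective on (x, f_fast(x)), i.e. model the
# functional relationship between the fast and slow objectives.
Z_slow = np.hstack([X[:n_slow], y_fast[:n_slow, None]])
co_surrogate = GaussianProcessRegressor(kernel=ConstantKernel() * RBF())
co_surrogate.fit(Z_slow, y_slow)

# Stand-in for transferable instance selection: keep only those fast-objective
# samples whose co-surrogate prediction is comparatively confident.
Z_all = np.hstack([X, y_fast[:, None]])
_, std = co_surrogate.predict(Z_all, return_std=True)
transferable = Z_all[std < np.median(std)]
print(f"{len(transferable)} of {len(Z_all)} fast evaluations selected for transfer")
```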
Related papers
- Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z)
- Large Language Model-Aided Evolutionary Search for Constrained Multiobjective Optimization [15.476478159958416]
We employ a large language model (LLM) to enhance evolutionary search for solving constrained multi-objective optimization problems.
Our aim is to speed up the convergence of the evolutionary population.
arXiv Detail & Related papers (2024-05-09T13:44:04Z)
- Computationally Efficient Optimisation of Elbow-Type Draft Tube Using Neural Network Surrogates [0.0]
This study aims to provide a comprehensive assessment of single-objective and multi-objective optimisation algorithms for the design of an elbow-type draft tube.
The proposed workflow leverages deep neural network surrogates trained on data obtained from numerical simulations.
arXiv Detail & Related papers (2024-01-14T14:05:26Z)
- Embedded feature selection in LSTM networks with multi-objective evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z)
- Bayesian Inverse Transfer in Evolutionary Multiobjective Optimization [29.580786235313987]
We introduce the first inverse transfer evolutionary multiobjective optimizer (invTrEMO).
InvTrEMO harnesses the common objective functions in many prevalent areas, even when decision spaces do not precisely align between tasks.
InvTrEMO yields high-precision inverse models as a significant byproduct, enabling the generation of tailored solutions on-demand.
arXiv Detail & Related papers (2023-12-22T14:12:18Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and AML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is so general that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
Our proposed framework features faster convergence and stronger resilience to various Pareto front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- A Federated Data-Driven Evolutionary Algorithm for Expensive Multi/Many-objective Optimization [11.92436948211501]
This paper proposes a federated data-driven evolutionary multi-objective/many-objective optimization algorithm.
We leverage federated learning for surrogate construction so that multiple clients collaboratively train a radial-basis-function network as the global surrogate.
A new federated acquisition function is proposed for the central server to approximate the objective values using the global surrogate and estimate the uncertainty level of the approximated objective values.
arXiv Detail & Related papers (2021-06-22T22:33:24Z)
- Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization [93.78811018928583]
This paper provides a framework to analyze the convergence of federated heterogeneous optimization algorithms.
We propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence (a minimal sketch of the aggregation step appears after this list).
arXiv Detail & Related papers (2020-07-15T05:01:23Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
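For the FedNova entry above, the following is a minimal sketch of what a normalized-averaging aggregation step can look like for plain local SGD without momentum. The helper name fednova_round, the choice of tau_eff as the weighted average of local step counts, and the toy usage values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fednova_round(x_global, client_deltas, client_steps, client_weights):
    """One hypothetical FedNova-style aggregation round (local SGD, no momentum).

    client_deltas[i]  : accumulated local update of client i (x_i_local - x_global)
    client_steps[i]   : number of local SGD steps tau_i taken by client i
    client_weights[i] : data fraction p_i of client i (weights sum to 1)
    """
    # Normalize each client's update by its own number of local steps, so clients
    # that ran more local steps do not pull the aggregate toward their objective.
    normalized = [d / t for d, t in zip(client_deltas, client_steps)]

    # Effective number of steps; one common choice is the weighted average tau.
    tau_eff = sum(p * t for p, t in zip(client_weights, client_steps))

    aggregate = sum(p * d for p, d in zip(client_weights, normalized))
    return x_global + tau_eff * aggregate

# Toy usage: two clients with very different numbers of local steps.
x0 = np.zeros(3)
deltas = [np.array([0.5, 0.5, 0.5]), np.array([2.0, 2.0, 2.0])]
x1 = fednova_round(x0, deltas, client_steps=[5, 20], client_weights=[0.5, 0.5])
print(x1)
```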