A Frequency-based Parent Selection for Reducing the Effect of Evaluation
Time Bias in Asynchronous Parallel Multi-objective Evolutionary Algorithms
- URL: http://arxiv.org/abs/2107.12053v2
- Date: Wed, 14 Dec 2022 03:55:23 GMT
- Title: A Frequency-based Parent Selection for Reducing the Effect of Evaluation
Time Bias in Asynchronous Parallel Multi-objective Evolutionary Algorithms
- Authors: Tomohiro Harada
- Abstract summary: This paper conducts experiments on multi-objective optimization problems that simulate the evaluation time bias.
The proposed method can reduce the effect of the evaluation time bias while reducing the computing time of the parallel NSGA-III.
- Score: 0.6853165736531939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Parallel evolutionary algorithms (PEAs) have been studied for reducing the
execution time of evolutionary algorithms by utilizing parallel computing. An
asynchronous PEA (APEA) is a scheme of PEAs that increases computational
efficiency by generating a new solution immediately after each solution
evaluation completes, avoiding idle time on the computing nodes. However,
because an APEA gives more search opportunities to solutions with shorter
evaluation times, the evaluation time bias of the solutions negatively affects
the search performance. To
overcome this drawback, this paper proposes a new parent selection method to
reduce the effect of evaluation time bias in APEAs. The proposed method
considers the search frequency of solutions and selects the parent solutions so
that the search progress in the population is uniform regardless of the
evaluation time bias. This paper conducts experiments on multi-objective
optimization problems that simulate the evaluation time bias. The experiments
use NSGA-III, a well-known multi-objective evolutionary algorithm, and compare
the proposed method with the conventional synchronous/asynchronous
parallelization. The experimental results reveal that the proposed method can
reduce the effect of the evaluation time bias while reducing the computing time
of the parallel NSGA-III.
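The frequency-based idea can be sketched as follows; this is a minimal illustration assuming each solution carries a search-frequency counter, with names and data layout invented here rather than taken from the paper:

```python
import random

def select_parents(population, num_parents, rng=random):
    """Pick parents from the least-frequently-searched solutions so that
    search opportunities stay uniform despite evaluation-time bias.
    Each solution is a dict carrying a 'freq' counter of how many times
    it has been chosen as a parent."""
    parents = []
    for _ in range(num_parents):
        min_freq = min(s["freq"] for s in population)
        candidates = [s for s in population if s["freq"] == min_freq]
        parent = rng.choice(candidates)  # random tie-break among least searched
        parent["freq"] += 1              # record this search opportunity
        parents.append(parent)
    return parents
```

With this rule, a solution whose short evaluation time has already earned it many search opportunities is passed over until the rest of the population catches up.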
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select the presumably best algorithm from a fixed set of candidates.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
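As a rough illustration of a runtime-oriented bandit with censored feedback (a generic UCB-style sketch, not the algorithms studied in the paper; the `runtimes` callables and the cutoff handling are assumptions made here):

```python
import math

def ucb_algorithm_selection(runtimes, cutoff, horizon):
    """UCB-style online algorithm selection under censored feedback:
    each algorithm's runtime is observed only up to `cutoff`, and the
    agent minimizes the censored runtime loss. `runtimes[a]` is a
    callable that draws one runtime sample for algorithm a."""
    n_algs = len(runtimes)
    counts = [0] * n_algs
    losses = [0.0] * n_algs
    choices = []
    for t in range(1, horizon + 1):
        if t <= n_algs:
            a = t - 1  # play every algorithm once to initialize
        else:
            # Pick the algorithm with the smallest lower confidence bound
            # on its mean (censored) runtime.
            a = min(range(n_algs),
                    key=lambda i: losses[i] / counts[i]
                    - math.sqrt(2.0 * math.log(t) / counts[i]))
        observed = min(runtimes[a](), cutoff)  # censored observation
        counts[a] += 1
        losses[a] += observed
        choices.append(a)
    return choices
```

Censoring at the cutoff keeps per-round cost bounded while still letting the bandit rank algorithms by their typical runtimes.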
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Transfer Learning Based Co-surrogate Assisted Evolutionary Bi-objective Optimization for Objectives with Non-uniform Evaluation Times [9.139734850798124]
Multiobjective evolutionary algorithms assume that each objective function can be evaluated within the same period of time.
A co-surrogate is adopted to model the functional relationship between the fast and slow objective functions.
A transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective.
arXiv Detail & Related papers (2021-08-30T16:10:15Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Optimizing the Parameters of A Physical Exercise Dose-Response Model: An Algorithmic Comparison [1.0152838128195467]
The purpose of this research was to compare the robustness and performance of a local and global optimization algorithm when given the task of fitting the parameters of a common non-linear dose-response model utilized in the field of exercise physiology.
The results of our comparison over 1000 experimental runs demonstrate the superior performance of the evolutionary computation based algorithm to consistently achieve a stronger model fit and holdout performance in comparison to the local search algorithm.
arXiv Detail & Related papers (2020-12-16T22:06:35Z)
- A Simulated Annealing Algorithm for Joint Stratification and Sample Allocation Designs [0.0]
This study combines simulated annealing with delta evaluation to solve the joint stratification and sample allocation problem.
In this problem, atomic strata are partitioned into mutually exclusive and collectively exhaustive strata.
The Bell number of possible solutions is enormous even for a moderate number of atomic strata, and the evaluation time of each solution adds a further layer of complexity.
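Delta evaluation, i.e. recomputing only the part of the objective a move changes, can be sketched on a toy partitioning problem (the actual joint stratification objective is more involved; everything below is illustrative):

```python
import math
import random

def simulated_annealing_delta(n_atoms, n_strata, target, steps, seed=0):
    """Simulated annealing with delta evaluation on a toy partitioning
    problem: assign atoms to strata so stratum sizes stay close to
    `target`. Moving one atom changes only two strata, so the cost
    change is computed incrementally instead of re-evaluating the
    full objective each step."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_strata) for _ in range(n_atoms)]
    sizes = [assign.count(k) for k in range(n_strata)]
    cost = sum((s - target) ** 2 for s in sizes)
    temp = 10.0
    for _ in range(steps):
        atom = rng.randrange(n_atoms)
        src, dst = assign[atom], rng.randrange(n_strata)
        if src == dst:
            continue
        # Delta evaluation: only the source and destination strata change.
        delta = ((sizes[src] - 1 - target) ** 2 - (sizes[src] - target) ** 2
                 + (sizes[dst] + 1 - target) ** 2 - (sizes[dst] - target) ** 2)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            assign[atom] = dst
            sizes[src] -= 1
            sizes[dst] += 1
            cost += delta
        temp *= 0.999  # geometric cooling
    return assign, cost
```

Because each proposed move touches only two strata, the O(1) delta replaces a full O(n_strata) objective evaluation per step.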
arXiv Detail & Related papers (2020-11-25T20:27:49Z)
- Batch Sequential Adaptive Designs for Global Optimization [5.825138898746968]
Efficient global optimization (EGO) is one of the most popular sequential adaptive design (SAD) methods for expensive black-box optimization problems.
For multiple-point EGO methods, heavy computation and point clustering are the main obstacles.
In this work, a novel batch SAD method, named "accelerated EGO", is put forward using a refined sampling/importance resampling (SIR) method.
The efficiency of the proposed SAD is validated by nine classic test functions with dimension from 2 to 12.
arXiv Detail & Related papers (2020-10-21T01:11:35Z)
- Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z)
- Time-varying Gaussian Process Bandit Optimization with Non-constant Evaluation Time [93.6788993843846]
We propose a novel time-varying Bayesian optimization algorithm that can effectively handle the non-constant evaluation time.
Our bound elucidates that a pattern of the evaluation time sequence can hugely affect the difficulty of the problem.
arXiv Detail & Related papers (2020-03-10T13:28:33Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
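The Jacobi variant can be sketched for a chain of layers; this is a minimal scalar illustration of the fixed-point view, not the paper's implementation:

```python
def jacobi_feedforward(layers, x0, max_iters=None):
    """Evaluate the composition f_L(...f_1(x0)) by solving the nonlinear
    system x_i = f_i(x_{i-1}) with Jacobi fixed-point iteration: every
    equation is updated in parallel from the previous iterate, and the
    result matches the sequential pass in at most L sweeps."""
    n = len(layers)
    state = [x0] * (n + 1)  # state[i] holds the current guess for x_i
    iters = max_iters or n
    for _ in range(iters):
        new_state = state[:]
        for i, f in enumerate(layers, start=1):
            new_state[i] = f(state[i - 1])  # all updates read old iterates
        if new_state == state:  # fixed point reached: equals sequential result
            break
        state = new_state
    return state[-1]
```

After sweep k, the first k entries of the state agree with the sequential computation, which is why at most L sweeps are needed; with enough parallel workers each sweep costs one layer evaluation of wall-clock time.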
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.