Batch Sequential Adaptive Designs for Global Optimization
- URL: http://arxiv.org/abs/2010.10698v1
- Date: Wed, 21 Oct 2020 01:11:35 GMT
- Title: Batch Sequential Adaptive Designs for Global Optimization
- Authors: Jianhui Ning and Yao Xiao and Zikang Xiong
- Abstract summary: Efficient global optimization (EGO) is one of the most popular sequential adaptive design (SAD) methods for expensive black-box optimization problems.
For existing multiple-point EGO methods, heavy computation and point clustering are the main obstacles.
In this work, a novel batch SAD method, named "accelerated EGO", is proposed; it uses a refined sampling/importance resampling (SIR) method.
The efficiency of the proposed SAD is validated on nine classic test functions with dimensions from 2 to 12.
- Score: 5.825138898746968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared with fixed-run designs, sequential adaptive designs (SAD)
are generally considered more efficient and effective. Efficient global
optimization (EGO) is one of the most popular SAD methods for expensive
black-box optimization problems. A well-recognized weakness of the original
EGO in complex computer experiments is that it is serial, so modern parallel
computing techniques cannot be used to speed up the simulator runs. For
existing multiple-point EGO methods, heavy computation and point clustering
are the main obstacles. In this work, a novel batch SAD method, named
"accelerated EGO", is proposed; it uses a refined sampling/importance
resampling (SIR) method to search for points with large expected improvement
(EI) values. The computational burden of the new method is much lighter, and
point clustering is avoided. The efficiency of the proposed SAD is validated
on nine classic test functions with dimensions from 2 to 12. The empirical
results show that the proposed algorithm can indeed parallelize the original
EGO and yields substantial improvement over a competing parallel EGO
algorithm, especially in high-dimensional cases. Additionally, we apply the
new method to the hyper-parameter tuning of Support Vector Machines (SVM).
Accelerated EGO obtains cross-validation accuracy comparable to that of other
methods while greatly reducing CPU time thanks to parallel computation and
the sampling method.
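To make the batch-selection idea concrete, below is a minimal Python sketch of one EI-weighted sampling/importance resampling (SIR) step on a Gaussian-process surrogate (scikit-learn GP, SciPy normal distribution). It illustrates the general idea only and is not the authors' refined algorithm: the helper names (`expected_improvement`, `propose_batch`), the uniform candidate pool, and the pool and batch sizes are assumptions made for this example.

```python
# Minimal sketch (not the authors' exact algorithm): one batch-selection step of an
# EI-driven sampling/importance resampling (SIR) scheme on a GP surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X, gp, y_best):
    """Classic EI for minimization under a GP posterior."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def propose_batch(gp, y_best, bounds, batch_size=4, n_candidates=5000, rng=None):
    """Draw a candidate pool, weight by EI, and resample a batch (SIR step)."""
    rng = np.random.default_rng(rng)
    dim = bounds.shape[0]
    # 1. Sampling: uniform candidate pool over the design space.
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, dim))
    # 2. Importance weights proportional to EI.
    w = np.maximum(expected_improvement(cand, gp, y_best), 0.0)
    if w.sum() == 0:                      # flat EI: fall back to uniform weights
        w = np.ones(n_candidates)
    # 3. Resampling: draw the batch without replacement to discourage clustering.
    idx = rng.choice(n_candidates, size=batch_size, replace=False, p=w / w.sum())
    return cand[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
    f = lambda X: np.sum(X ** 2, axis=1)          # toy objective (sphere function)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))
    y = f(X)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    batch = propose_batch(gp, y.min(), bounds, batch_size=4, rng=rng)
    print(batch)  # four points that could be evaluated in parallel
```

Resampling the batch without replacement from the EI-weighted pool is one simple way to discourage the point clustering mentioned above; the paper's refined SIR scheme differs in its details.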
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm for Hybrid Flow Shop Scheduling Problems with Multiple Parallel Batch Processing Stages [5.851739146497829]
This study generalizes the problem model, in which users can arbitrarily set certain stages as parallel batch processing stages.
An Adaptive Knowledge-based Multi-Objective Evolutionary Algorithm (AMOEA/D) is designed to simultaneously optimize both makespan and Total Energy Consumption.
The experimental results show that the AMOEA/D is superior to the comparison algorithms in solving the PBHFSP.
arXiv Detail & Related papers (2024-09-27T08:05:56Z)
- Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful approach to large-scale multimedia retrieval.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights (a sketch of this idea follows this entry).
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
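A heavily hedged illustration of the ALR idea from the entry above: this is my reading of "length re-scaling" (matching the L2 norms of predicted novel-class weights to those of the pretrained base-class weights), not the paper's exact formula; the function name `adaptive_length_rescale` is invented for the example.

```python
# Hypothetical sketch of adaptive length re-scaling: scale each predicted novel-class
# weight vector so its L2 norm matches the mean norm of the pretrained base weights.
import numpy as np

def adaptive_length_rescale(novel_weights, base_weights, eps=1e-12):
    """Scale each row of `novel_weights` to the mean L2 norm of `base_weights` rows."""
    target_norm = np.linalg.norm(base_weights, axis=1).mean()
    norms = np.linalg.norm(novel_weights, axis=1, keepdims=True)
    return novel_weights * (target_norm / np.maximum(norms, eps))

if __name__ == "__main__":
    base = np.random.default_rng(0).normal(size=(20, 128))        # pretrained base weights
    novel = 0.1 * np.random.default_rng(1).normal(size=(5, 128))  # predicted novel weights
    rescaled = adaptive_length_rescale(novel, base)
    print(np.linalg.norm(rescaled, axis=1))  # all equal to the mean base-weight norm
```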
- Recommender System Expedited Quantum Control Optimization [0.0]
Quantum control optimization algorithms are routinely used to generate optimal quantum gates or efficient quantum state transfers.
There are two main challenges in designing efficient optimization algorithms, namely overcoming the sensitivity to local optima and improving the computational speed.
Here, we propose and demonstrate the use of a machine learning method, specifically the recommender system (RS), to deal with the latter challenge.
arXiv Detail & Related papers (2022-01-29T10:25:41Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm that adopts an Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO (a sketch of this kind of estimator follows this entry).
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
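The entry above approximates the response Jacobian with Evolution Strategies. Below is a generic antithetic ES (Gaussian-smoothing) Jacobian estimator as a minimal sketch of that kind of estimator; it is not the paper's exact construction, and the sample count and smoothing radius are illustrative choices.

```python
# Minimal sketch (my illustration, not the paper's estimator): antithetic ES approximation
# of the Jacobian of a vector-valued map, replacing explicit Jacobian-vector products.
import numpy as np

def es_jacobian(f, x, sigma=1e-2, n_samples=256, rng=None):
    """Estimate J = df/dx (shape m x n) by Gaussian smoothing with antithetic samples."""
    rng = np.random.default_rng(rng)
    n = x.size
    m = np.atleast_1d(f(x)).size
    J = np.zeros((m, n))
    for _ in range(n_samples):
        u = rng.standard_normal(n)
        diff = np.atleast_1d(f(x + sigma * u)) - np.atleast_1d(f(x - sigma * u))
        J += np.outer(diff, u)              # per-sample outer product (f(x+su)-f(x-su)) u^T
    return J / (2.0 * sigma * n_samples)

if __name__ == "__main__":
    A = np.array([[1.0, 2.0], [0.0, -1.0], [3.0, 0.5]])
    f = lambda x: A @ x                      # linear map: true Jacobian is A
    J_hat = es_jacobian(f, np.array([0.3, -0.7]), rng=0)
    print(np.round(J_hat, 2))                # close to A for enough samples
```

For a linear map f(x) = Ax this estimator is unbiased for A, which the toy check at the bottom illustrates.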
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for solving many machine learning problems.
We propose a novel stochastic bilevel algorithm named stocBiO with a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Simple and Scalable Parallelized Bayesian Optimization [2.512827436728378]
We propose a simple and scalable BO method for asynchronous parallel settings.
Experiments are carried out with a benchmark function and hyperparameter optimization of multi-layer perceptrons.
arXiv Detail & Related papers (2020-06-24T10:25:27Z)
- Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization [11.956059322407437]
We leverage recent advances in programming models and hardware acceleration for multi-objective BO using Expected Hypervolume Improvement (EHVI).
We derive a novel formulation of q-Expected Hypervolume Improvement (qEHVI), an acquisition function that extends EHVI to the parallel, constrained evaluation setting.
Our empirical evaluation demonstrates that qEHVI is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time (a Monte Carlo sketch of qEHVI follows this entry).
arXiv Detail & Related papers (2020-06-09T06:57:47Z)
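The qEHVI acquisition described in the entry above can be estimated by Monte Carlo: average the hypervolume gained when a joint posterior sample at the q candidate points is added to the current Pareto front. The sketch below does this for two maximization objectives, with constraints omitted; `posterior_sampler` and the dummy sampler in the demo are assumptions, not the authors' implementation (the paper's formulation is exact per Monte Carlo sample and differentiable, unlike this naive sketch).

```python
# Minimal sketch (my illustration): Monte Carlo q-Expected Hypervolume Improvement
# for two maximization objectives. `posterior_sampler(q_points, n)` is an assumed
# helper returning n joint posterior samples of shape (n, q, 2).
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by `points` (maximization) w.r.t. reference point `ref`."""
    pts = np.asarray([p for p in points if np.all(p > ref)])
    if pts.size == 0:
        return 0.0
    pts = pts[np.argsort(-pts[:, 0])]          # sort by first objective, descending
    hv, y_prev = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > y_prev:                         # only non-dominated slices add area
            hv += (f1 - ref[0]) * (f2 - y_prev)
            y_prev = f2
    return hv

def q_ehvi(pareto_front, ref, q_points, posterior_sampler, n_mc=512):
    """Average hypervolume gain of adding a joint posterior sample at the q candidates."""
    base_hv = hypervolume_2d(pareto_front, ref)
    samples = posterior_sampler(q_points, n_mc)            # shape (n_mc, q, 2)
    gains = [hypervolume_2d(np.vstack([pareto_front, s]), ref) - base_hv for s in samples]
    return float(np.mean(gains))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pareto = np.array([[3.0, 1.0], [1.0, 3.0]])
    ref = np.array([0.0, 0.0])
    cand = np.array([[0.5, 0.5], [2.0, 2.0]])   # q = 2 candidate inputs (placeholder)
    # Dummy sampler: ignores the inputs and draws Gaussians around fixed "posterior means".
    means = np.array([[2.5, 2.5], [1.5, 1.5]])
    sampler = lambda X, n: means + 0.3 * rng.standard_normal((n, *means.shape))
    print(q_ehvi(pareto, ref, cand, sampler, n_mc=2000))
```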
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power (a sketch of the Jacobi variant follows this entry).
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
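A minimal sketch of the Jacobi variant described in the entry above: treat the layer outputs h_i = f_i(h_{i-1}) as unknowns of a fixed-point system and update all of them in parallel from the previous iterate. The toy check only confirms that the iteration reproduces the sequential values; the speed-up comes from running the per-layer updates on parallel hardware, which this illustration does not do.

```python
# Minimal sketch (my illustration, not the paper's implementation): evaluating a chain of
# layer functions by Jacobi fixed-point iteration instead of sequentially.
import numpy as np

def sequential_forward(layers, x):
    h = x
    for f in layers:
        h = f(h)
    return h

def jacobi_forward(layers, x, n_iters=None):
    """Fixed-point iteration on h_i = f_i(h_{i-1}); matches the sequential values after
    at most len(layers) sweeps, since information moves forward one layer per sweep."""
    L = len(layers)
    n_iters = L if n_iters is None else n_iters
    h = [np.zeros_like(x) for _ in range(L)]        # arbitrary initialization
    for _ in range(n_iters):
        prev = [x] + h[:-1]
        h = [f(p) for f, p in zip(layers, prev)]    # every layer updated "in parallel"
    return h[-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def make_layer(W):
        return lambda z: np.tanh(W @ z)

    layers = [make_layer(rng.standard_normal((4, 4))) for _ in range(6)]
    x = rng.standard_normal(4)
    print(np.allclose(sequential_forward(layers, x), jacobi_forward(layers, x)))  # True
```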