Performance Analysis and Improvement of Parallel Differential Evolution
- URL: http://arxiv.org/abs/2101.06599v1
- Date: Sun, 17 Jan 2021 05:57:12 GMT
- Title: Performance Analysis and Improvement of Parallel Differential Evolution
- Authors: Pan Zibin
- Abstract summary: This paper analyzes the design of parallel computation for differential evolution (DE).
We propose a new exponential crossover operator (NEC) that can be executed in parallel with MKL/CUDA.
In the end, we test the new parallel DE structure, showing that it is much faster than the sequential one.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential evolution (DE) is an effective global evolutionary optimization
algorithm used to solve global optimization problems, mainly in continuous
domains. In this field, researchers pay more attention to improving the
capability of DE to find better global solutions; however, the computational
performance of DE is also a very interesting aspect, especially when the
problem scale is quite large. Firstly, this paper analyzes the design of
parallel computation of DE, which can easily be executed with the Math Kernel
Library (MKL) and Compute Unified Device Architecture (CUDA). Then the essence
of the exponential crossover operator is described, and we point out that it
is ill-suited to parallel computation. Later, we propose a new exponential
crossover operator (NEC) that can be executed in parallel with MKL/CUDA. Next,
extended experiments show that the new crossover operator speeds up DE
greatly. In the end, we test the new parallel DE structure, showing that it is
much faster than the sequential implementation.
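The parallelization obstacle the abstract refers to is that classical exponential crossover copies a contiguous, circular block of mutant genes whose length follows a truncated geometric distribution, which forces a data-dependent loop per individual. Below is a minimal NumPy sketch contrasting that sequential operator with a mask-based reformulation that draws all block lengths in bulk so the whole population is processed in one batched array operation. The vectorized variant is an illustrative assumption in the spirit of the paper's NEC, not the authors' exact operator; the function names are hypothetical.

```python
import numpy as np

def exp_crossover_sequential(target, mutant, CR, rng):
    """Classic exponential crossover: copy a circular block of mutant
    genes starting at a random position. The block length comes from a
    data-dependent while-loop, which resists MKL/CUDA vectorization."""
    D = target.size
    trial = target.copy()
    j = rng.integers(D)                 # random start gene
    L = 0
    while True:
        trial[(j + L) % D] = mutant[(j + L) % D]
        L += 1
        if L == D or rng.random() >= CR:
            break
    return trial

def exp_crossover_vectorized(pop, mutants, CR, rng):
    """Hypothetical mask-based variant: the classic block length follows
    P(L = k) = CR^(k-1) * (1 - CR), truncated at D, so we can sample all
    lengths up front and build every trial vector with one boolean mask."""
    NP, D = pop.shape
    starts = rng.integers(0, D, size=NP)                      # one start per individual
    lengths = np.minimum(rng.geometric(1.0 - CR, size=NP), D) # truncated geometric
    offset = (np.arange(D)[None, :] - starts[:, None]) % D    # position relative to start
    mask = offset < lengths[:, None]                          # circular block of mutant genes
    return np.where(mask, mutants, pop)

# Tiny usage example on a toy population
rng = np.random.default_rng(0)
pop = rng.standard_normal((8, 10))
mutants = rng.standard_normal((8, 10))
trials = exp_crossover_vectorized(pop, mutants, CR=0.9, rng=rng)
```

Because the vectorized version touches the whole population through a few large array operations, it maps directly onto BLAS-style batched kernels (MKL) or GPU kernels (CUDA), which is the kind of structure the paper argues for.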
Related papers
- GPU Based Differential Evolution: New Insights and Comparative Study [7.5961910202572644]
This work reviews the main architectural choices made in the literature for GPU based Differential Evolution algorithms.
It introduces a new GPU based numerical optimisation benchmark to evaluate and compare GPU based DE algorithms.
arXiv Detail & Related papers (2024-05-26T12:40:39Z) - Decreasing the Computing Time of Bayesian Optimization using
Generalizable Memory Pruning [56.334116591082896]
Running BO on high-dimensional or massive data sets becomes intractable due to the time complexity of the surrogate model.
We show a wrapper of memory pruning and bounded optimization capable of being used with any surrogate model and acquisition function.
All model implementations are run on the MIT Supercloud state-of-the-art computing hardware.
arXiv Detail & Related papers (2023-09-08T14:05:56Z) - On a class of geodesically convex optimization problems solved via
Euclidean MM methods [50.428784381385164]
We show how a class of geodesically convex problems can be written as a difference of Euclidean convex functions, capturing several types of problems in statistics and machine learning.
Ultimately, we hope this helps broaden the reach of the work.
arXiv Detail & Related papers (2022-06-22T23:57:40Z) - FastDOG: Fast Discrete Optimization on GPU [23.281726932718232]
We present a massively parallel Lagrange decomposition method for solving 0-1 integer linear programs occurring in structured prediction.
Our primal and dual algorithms require little synchronization between subproblems, and optimization is carried out over binary decision diagrams (BDDs).
We come close to or outperform some state-of-the-art specialized algorithms while being problem agnostic.
arXiv Detail & Related papers (2021-11-19T15:20:10Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under mild assumptions on the gradient delays (see the sketch after this list).
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Batch Sequential Adaptive Designs for Global Optimization [5.825138898746968]
Efficient global optimization (EGO) is one of the most popular SAD methods for expensive black-box optimization problems.
For these multiple-point EGO methods, heavy computation and point clustering are the obstacles.
In this work, a novel batch SAD method, named "accelerated EGO", is put forward by using a refined sampling/importance resampling (SIR) method.
The efficiency of the proposed SAD method is validated on nine classic test functions with dimensions from 2 to 12.
arXiv Detail & Related papers (2020-10-21T01:11:35Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization has become a powerful tool for many machine learning problems.
We propose a novel algorithm named stocBiO, which features a sample-efficient stochastic hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
arXiv Detail & Related papers (2020-06-18T08:16:25Z) - Convergence of Meta-Learning with Task-Specific Adaptation over Partial
Parameters [152.03852111442114]
Although model-agnostic meta-learning (MAML) is a very successful meta-learning algorithm in practice, it can have high computational complexity.
Our paper shows that such complexity can significantly affect the overall convergence performance of ANIL (Almost No Inner Loop), a variant of MAML that adapts only partial parameters.
arXiv Detail & Related papers (2020-06-16T19:57:48Z) - Do optimization methods in deep learning applications matter? [0.0]
The paper presents arguments on which optimization methods to use and, further, which methods would benefit from parallelization efforts.
Our experiments compare off-the-shelf optimization methods (CG, SGD, LM, and L-BFGS) in standard CIFAR, MNIST, CartPole and FlappyBird experiments.
arXiv Detail & Related papers (2020-02-28T10:36:40Z)
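To make the DASGD entry above concrete: asynchronous schemes of that kind analyze updates where each worker applies a gradient computed at a stale iterate, x_{t+1} = x_t - eta_t * grad f(x_{t - d_t}). Below is a generic, hedged simulation of delayed-gradient SGD; it is not the paper's algorithm, and the delay model, step-size schedule, and toy objective are all assumptions for illustration.

```python
import numpy as np

def delayed_sgd(grad, x0, steps=500, max_delay=10, eta0=0.5, seed=0):
    """Generic delayed-gradient SGD: at step t, apply the gradient
    evaluated at a stale iterate x_{t - d_t} with a random delay d_t,
    mimicking the staleness introduced by asynchronous workers."""
    rng = np.random.default_rng(seed)
    history = [np.asarray(x0, dtype=float)]
    for t in range(steps):
        d = rng.integers(0, min(max_delay, t) + 1)  # random staleness d_t <= t
        stale_x = history[-1 - d]                   # iterate from d steps ago
        eta = eta0 / np.sqrt(t + 1)                 # diminishing step size
        history.append(history[-1] - eta * grad(stale_x))
    return history[-1]

# Toy objective f(x) = ||x||^2 / 2, so grad(x) = x; despite stale
# gradients, the iterates should still approach the optimum x* = 0.
x_final = delayed_sgd(lambda x: x, x0=np.ones(3))
print(np.linalg.norm(x_final))  # small residual norm
```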
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.