GPU-Accelerated Parallel Gene-pool Optimal Mixing in a Gray-Box
Optimization Setting
- URL: http://arxiv.org/abs/2203.08680v1
- Date: Wed, 16 Mar 2022 15:08:48 GMT
- Title: GPU-Accelerated Parallel Gene-pool Optimal Mixing in a Gray-Box
Optimization Setting
- Authors: Anton Bouter and Peter A.N. Bosman
- Abstract summary: We show how graph coloring can be used to group sets of variables that can undergo variation in parallel without violating dependencies.
We find that, for sufficiently large graphs with limited connectivity, high-quality solutions can be found up to 100 times faster.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a Gray-Box Optimization (GBO) setting that allows for partial evaluations,
the fitness of an individual can be updated efficiently after a subset of its
variables has been modified. This enables more efficient evolutionary
optimization with the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA)
due to its key strength: Gene-pool Optimal Mixing (GOM). For each solution, GOM
performs variation for many (small) sets of variables. To improve efficiency
even further, parallel computing can be leveraged. For EAs, typically, this
comprises population-wise parallelization. However, unless population sizes are
large, this offers limited gains. For large GBO problems, parallelizing
GOM-based variation holds greater speed-up potential, regardless of population
size. However, this potential cannot be directly exploited because of
dependencies between variables. We show how graph coloring can be used to group
sets of variables that can undergo variation in parallel without violating
dependencies. We test the performance of a CUDA implementation of parallel GOM
on a Graphics Processing Unit (GPU) for the Max-Cut problem, a well-known
problem for which the dependency structure can be controlled. We find that, for
sufficiently large graphs with limited connectivity, high-quality solutions
can be found up to 100 times faster, showcasing the great potential of our
approach.
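To make the core idea concrete, below is a minimal Python sketch, not the authors' CUDA implementation, of how greedy graph coloring can partition GOM's variable sets into conflict-free groups; the function names, the toy dependency structure, and the FOS-like subsets are all illustrative assumptions.
```python
# Minimal sketch: variable subsets that share no variable and no dependency
# edge receive the same "color" and can then undergo GOM-style variation in
# parallel. All names below are illustrative, not the paper's code.

def greedy_color(subsets, depends):
    """Group subsets so that no two subsets in a group conflict
    (share a variable or contain directly dependent variables)."""
    def conflict(a, b):
        return any(x == y or depends(x, y) for x in a for y in b)

    color = {}
    for i, s in enumerate(subsets):
        taken = {color[j] for j in range(i) if conflict(subsets[j], s)}
        color[i] = next(c for c in range(len(subsets)) if c not in taken)
    groups = {}
    for i, c in color.items():
        groups.setdefault(c, []).append(subsets[i])
    return list(groups.values())

# Toy Max-Cut-like instance: dependencies are the edges of the graph.
edges = {(0, 1), (1, 2), (2, 3), (3, 4)}
depends = lambda x, y: (x, y) in edges or (y, x) in edges
subsets = [{0, 1}, {2, 3}, {1, 2}, {3, 4}, {0, 4}]   # FOS-like variable sets
for group in greedy_color(subsets, depends):
    # Every subset in `group` could be varied concurrently, e.g., one CUDA
    # thread block per subset, without violating dependencies.
    print(group)
```
One natural GPU mapping, under these assumptions, is to launch one kernel per same-color group, with one thread block per variable set; groups run sequentially, so conflicting sets never execute concurrently.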
Related papers
- Improving genetic algorithms performance via deterministic population
shrinkage [9.334663477968027]
This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme for the performance of Genetic Algorithms (GAs).
It consists of decreasing the population size during a GA run, following a predetermined schedule configured by a speed and a severity parameter.
Results show several combinations of speed-severity where SVPS-GA preserves the solution quality while improving performances, by reducing the number of evaluations needed for success.
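As a rough illustration, the following hypothetical sketch implements one plausible deterministic shrinkage schedule driven by a speed and a severity parameter; the paper's exact schedule, and the meaning of its parameters, may differ.
```python
# Hypothetical shrinkage schedule in the spirit of SVPS; illustrative only.
def population_size(gen, max_gen, n0, speed=2.0, severity=0.5):
    """Shrink the population from n0 toward n0 * (1 - severity).

    speed    -- how early the shrinkage completes (higher = earlier)
    severity -- fraction of the initial population removed in total
    """
    progress = min(1.0, speed * gen / max_gen)
    return max(2, round(n0 * (1.0 - severity * progress)))

print([population_size(g, 100, n0=200) for g in range(0, 101, 20)])
# -> [200, 160, 120, 100, 100, 100]
```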
arXiv Detail & Related papers (2024-01-22T17:05:16Z)
- All-to-all reconfigurability with sparse and higher-order Ising machines [0.0]
We introduce a multiplexed architecture that emulates all-to-all network functionality.
We show that running the adaptive parallel tempering algorithm on this architecture yields competitive algorithmic and prefactor advantages.
Scaled magnetic versions of p-bit IMs could lead to orders-of-magnitude improvements over the state of the art for generic optimization.
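For orientation, here is a generic (non-adaptive) parallel tempering sketch on a small, software-simulated Ising instance; the paper's adaptive algorithm and its p-bit hardware realization are not reproduced here.
```python
# Generic parallel tempering on an Ising energy E(s) = -1/2 * s^T J s.
import numpy as np

rng = np.random.default_rng(0)
n, n_replicas = 16, 8
J = np.triu(rng.normal(size=(n, n)), 1)
J = J + J.T                                  # symmetric couplings, zero diagonal
betas = np.geomspace(0.1, 3.0, n_replicas)   # inverse temperatures, hot -> cold
spins = rng.choice([-1, 1], size=(n_replicas, n))

def energy(s):
    return -0.5 * s @ J @ s

for _ in range(300):
    for r in range(n_replicas):              # Metropolis sweep in each replica
        for i in range(n):
            dE = 2.0 * spins[r, i] * (J[i] @ spins[r])
            if dE <= 0 or rng.random() < np.exp(-betas[r] * dE):
                spins[r, i] *= -1
    for r in range(n_replicas - 1):          # swap neighboring temperatures
        d = (betas[r + 1] - betas[r]) * (energy(spins[r + 1]) - energy(spins[r]))
        if d >= 0 or rng.random() < np.exp(d):
            spins[[r, r + 1]] = spins[[r + 1, r]]

print("best energy found:", min(energy(s) for s in spins))
```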
arXiv Detail & Related papers (2023-11-21T20:27:02Z)
- Distributed Extra-gradient with Optimal Complexity and Communication Guarantees [60.571030754252824]
We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, the de facto algorithm for monotone VI problems, was not designed to be communication-efficient.
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
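As a rough sketch of the ingredients, the snippet below runs extra-gradient on a toy bilinear saddle point with a simple unbiased stochastic-rounding quantizer standing in for Q-GenX's adaptive compression; the quantizer and all names are illustrative assumptions, not the paper's method.
```python
# Extra-gradient on min_x max_y x^T A y, whose VI operator is
# F(x, y) = (A y, -A^T x), with a crude unbiased quantizer.
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.normal(size=(d, d))

def F(z):
    x, y = z[:d], z[d:]
    return np.concatenate([A @ y, -A.T @ x])

def quantize(v, levels=64):
    """Unbiased stochastic rounding onto a uniform grid: E[quantize(v)] = v."""
    scale = np.max(np.abs(v)) / levels or 1.0
    low = np.floor(v / scale)
    return (low + (rng.random(v.shape) < v / scale - low)) * scale

z, step = rng.normal(size=2 * d), 0.1
for _ in range(5000):
    z_half = z - step * quantize(F(z))     # extrapolation step
    z = z - step * quantize(F(z_half))     # update step
print("residual ||F(z)||:", np.linalg.norm(F(z)))  # shrinks over the run
```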
arXiv Detail & Related papers (2023-08-17T21:15:04Z)
- A Joint Python/C++ Library for Efficient yet Accessible Black-Box and Gray-Box Optimization with GOMEA [0.0]
We introduce the GOMEA library, making existing GOMEA code in C++ accessible through Python.
We show its performance in both Gray-Box Optimization (GBO) and Black-Box Optimization (BBO).
arXiv Detail & Related papers (2023-05-10T15:28:31Z)
- Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations [50.591267188664666]
Propulate is an evolutionary optimization algorithm and software package for global optimization.
We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration.
We find that Propulate is up to three orders of magnitude faster without sacrificing solution accuracy.
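The gist of asynchronous propagation, breeding from whatever individuals exist at the moment rather than waiting at generation barriers, can be caricatured with a thread pool standing in for MPI workers; this toy sketch is not Propulate's implementation or operator set.
```python
# Toy illustration of generation-free (asynchronous) propagation.
import random
from concurrent.futures import ThreadPoolExecutor

def sphere(x):                                 # toy objective, minimized
    return sum(v * v for v in x)

population = []                                # shared, append-only archive

def worker(seed, evals=300, dim=5):
    rng = random.Random(seed)
    for _ in range(evals):
        if len(population) >= 2:
            # Breed from whatever exists *right now*: no generation barrier.
            recent = population[-30:]
            a, b = sorted(rng.sample(recent, min(4, len(recent))))[:2]
            child = [(u + v) / 2 + rng.gauss(0, 0.1)
                     for u, v in zip(a[1], b[1])]
        else:
            child = [rng.uniform(-5, 5) for _ in range(dim)]
        population.append((sphere(child), child))

with ThreadPoolExecutor(max_workers=4) as pool:
    for seed in range(4):
        pool.submit(worker, seed)

print("best fitness:", min(population)[0])
```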
arXiv Detail & Related papers (2023-01-20T18:17:34Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
Proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Adaptive Elastic Training for Sparse Deep Learning on Heterogeneous Multi-GPU Servers [65.60007071024629]
We show experimentally that Adaptive SGD outperforms four state-of-the-art solutions in time-to-accuracy.
arXiv Detail & Related papers (2021-10-13T20:58:15Z)
- Implementation of Parallel Simplified Swarm Optimization in CUDA [2.322689362836168]
In optimization computing, swarm intelligence algorithms (SIAs) are well suited to parallelization.
This paper proposes a GPU-based Parallel Simplified Swarm Optimization (PSSO), designed for the CUDA platform with both computational ability and versatility in mind.
The results show that the time complexity is reduced by an order of magnitude (a factor of N), and that the problem of resource preemption is avoided entirely.
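For reference, a common form of the per-dimension SSO update looks as follows in plain Python; the paper's contribution is mapping such independent per-particle updates onto CUDA threads, and the thresholds and parameter values here are illustrative.
```python
# CPU sketch of a common SSO per-dimension update rule.
import random

def sso_step(x, pbest, gbest, bounds, cg=0.4, cp=0.7, cw=0.9, rng=random):
    """Each dimension copies from gbest, pbest, itself, or a random value."""
    lo, hi = bounds
    new = []
    for j, xj in enumerate(x):
        r = rng.random()
        if r < cg:
            new.append(gbest[j])    # follow the global best
        elif r < cp:
            new.append(pbest[j])    # follow the personal best
        elif r < cw:
            new.append(xj)          # keep the current value
        else:
            new.append(rng.uniform(lo, hi))  # random restart of one dimension
    return new

def sphere(x):
    return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(30)]
pbest = [list(x) for x in pop]
for _ in range(200):
    gbest = min(pbest, key=sphere)
    pop = [sso_step(x, pb, gbest, (-5, 5)) for x, pb in zip(pop, pbest)]
    pbest = [x if sphere(x) < sphere(pb) else pb for x, pb in zip(pop, pbest)]
print("best fitness:", sphere(min(pbest, key=sphere)))
```
Each particle's update touches only its own state plus the shared bests, which is why one CUDA thread per particle (or per dimension) is a natural fit.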
arXiv Detail & Related papers (2021-10-01T00:15:45Z)
- Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under the same delay assumptions.
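A single-process simulation makes the delayed-gradient setting tangible: gradients computed on stale iterates are applied several steps late; the fixed-delay model and step-size schedule below are illustrative assumptions, not the paper's exact setting.
```python
# Simulation of DASGD-style staleness on f(v) = ||v||^2 with noisy gradients.
import collections
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=10)
noisy_grad = lambda v: 2 * v + 0.1 * rng.normal(size=v.shape)

delay, queue = 5, collections.deque()
for t in range(1, 3001):
    queue.append(noisy_grad(x))            # gradient at the current iterate...
    if len(queue) > delay:
        g = queue.popleft()                # ...applied `delay` steps later
        x -= (0.05 / np.sqrt(t)) * g       # vanishing step size
print("||x|| after delayed updates:", np.linalg.norm(x))
```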
arXiv Detail & Related papers (2021-07-06T21:59:49Z)
- Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations [59.84561168501493]
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
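For contrast with the paper's increasing transformations, which may temporarily enlarge the graph, here are two classical weight-aware reduction rules (isolated vertex; dominated degree-1 vertex) in sketch form; the paper's new rules are more involved.
```python
# Two classical MWIS reductions, illustrative only.
def reduce_mwis(adj, w):
    """adj: dict vertex -> set of neighbors (modified in place);
    w: dict vertex -> weight. Returns (forced vertices, their total weight)."""
    forced, offset, changed = set(), 0, True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if not adj[v]:                       # isolated: always take it
                forced.add(v); offset += w[v]
                del adj[v]; changed = True
            elif len(adj[v]) == 1:
                u = next(iter(adj[v]))
                if w[v] >= w[u]:                 # v dominates its only neighbor
                    forced.add(v); offset += w[v]
                    for x in (v, u):             # delete both endpoints
                        for y in adj.pop(x, set()):
                            if y in adj:
                                adj[y].discard(x)
                    changed = True
    return forced, offset

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}, 5: set()}
w = {1: 3, 2: 2, 3: 1, 4: 2, 5: 7}
print(reduce_mwis(adj, w))  # -> ({1, 4, 5}, 12); the path is fully reduced
```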
arXiv Detail & Related papers (2020-08-12T08:52:50Z)
- More Effective Randomized Search Heuristics for Graph Coloring Through Dynamic Optimization [15.45783225341009]
We show that evolutionary algorithms can solve the graph coloring problem for bipartite graphs more efficiently by using dynamic optimization.
An island model using 3 islands succeeds in an optimal time of $\Theta(m)$ on every $m$-edge bipartite graph, outperforming offspring populations.
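As a baseline for what the dynamic and island-based approach improves upon, a bare (1+1) EA minimizing monochromatic edges looks like this; the islands and the dynamic, edge-by-edge optimization from the paper are not modeled in the sketch.
```python
# Bare (1+1) EA for 2-coloring, minimizing monochromatic edges.
import random

def conflicts(color, edges):
    return sum(color[u] == color[v] for u, v in edges)

def one_plus_one_ea(n, edges, max_steps=100_000, rng=random):
    color = [rng.randrange(2) for _ in range(n)]
    best = conflicts(color, edges)
    for _ in range(max_steps):
        if best == 0:
            break
        child = [c ^ (rng.random() < 1 / n) for c in color]  # bit mutation
        f = conflicts(child, edges)
        if f <= best:
            color, best = child, f
    return color, best

edges = [(i, (i + 1) % 10) for i in range(10)]  # even cycle C_10, bipartite
print(one_plus_one_ea(10, edges))               # best == 0 means proper 2-coloring
```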
arXiv Detail & Related papers (2020-05-28T07:55:12Z)