Speed Benchmarking of Genetic Programming Frameworks
- URL: http://arxiv.org/abs/2106.11919v1
- Date: Tue, 25 May 2021 22:06:42 GMT
- Title: Speed Benchmarking of Genetic Programming Frameworks
- Authors: Francisco Baeta, João Correia, Tiago Martins, Penousal Machado
- Abstract summary: Genetic Programming (GP) is known to suffer from the burden of being computationally expensive by design.
In this work, we employ a series of benchmarks meant to compare both the performance and evolution capabilities of different vectorized and iterative implementation approaches.
- Score: 1.1470070927586016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Genetic Programming (GP) is known to suffer from the burden of being
computationally expensive by design. While, over the years, many techniques
have been developed to mitigate this issue, data vectorization, in particular,
is arguably still the most attractive strategy due to the parallel nature of
GP. In this work, we employ a series of benchmarks meant to compare both the
performance and evolution capabilities of different vectorized and iterative
implementation approaches across several existing frameworks. Namely, TensorGP,
a novel open-source engine written in Python, is shown to greatly benefit from
the TensorFlow library to accelerate the domain evaluation phase in GP. The
presented performance benchmarks demonstrate that the TensorGP engine manages
to pull ahead, with relative speedups above two orders of magnitude for
problems with a higher number of fitness cases. Additionally, as a consequence
of being able to compute larger domains, we argue that TensorGP performance
gains aid the discovery of more accurate candidate solutions.
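To make the vectorization argument concrete, here is a minimal sketch. It does not use TensorGP's actual API; the toy individual f(x, y) = x * x + sin(y) and all function names below are illustrative assumptions. It contrasts per-case iterative evaluation with evaluating the whole domain as a handful of tensor operations, which is the step TensorFlow can offload to a GPU.

```python
# Minimal sketch (not TensorGP's API): vectorized vs. iterative domain
# evaluation of a toy GP individual f(x, y) = x * x + sin(y).
import math
import time

import numpy as np
import tensorflow as tf  # assumption: TensorFlow is installed


def eval_iterative(xs, ys):
    # One Python-level evaluation per fitness case.
    return [x * x + math.sin(y) for x, y in zip(xs, ys)]


def eval_vectorized(xs, ys):
    # The whole domain is evaluated with a few tensor ops, which
    # TensorFlow can dispatch to a GPU if one is available.
    x = tf.constant(xs, dtype=tf.float32)
    y = tf.constant(ys, dtype=tf.float32)
    return x * x + tf.sin(y)


if __name__ == "__main__":
    n = 2 ** 20  # number of fitness cases; larger domains favor vectorization
    xs = np.random.rand(n).astype(np.float32)
    ys = np.random.rand(n).astype(np.float32)

    t0 = time.perf_counter()
    eval_iterative(xs, ys)
    t1 = time.perf_counter()
    eval_vectorized(xs, ys)
    t2 = time.perf_counter()
    print(f"iterative:  {t1 - t0:.3f} s")
    print(f"vectorized: {t2 - t1:.3f} s")
```

In a real GP engine the same idea applies recursively: each node of the expression tree maps to one tensor operation applied over every fitness case at once, which is why the relative speedup grows with the number of fitness cases.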
Related papers
- GPU-accelerated Evolutionary Multiobjective Optimization Using Tensorized RVEA [13.319536515278191]
We introduce a large-scale tensorized Reference Vector Guided Evolutionary Algorithm (TensorRVEA) for harnessing the advancements of GPU acceleration.
In numerical benchmark tests involving large-scale populations and problem dimensions, TensorRVEA consistently demonstrates high computational performance, achieving speedups of over 1000x.
arXiv Detail & Related papers (2024-04-01T15:04:24Z)
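The summary above does not spell out what "tensorized" means in practice. A rough illustration under that reading (plain NumPy, not TensorRVEA's code; the bi-objective toy problem is a stand-in) is that the whole population is held in one array, so objective evaluation becomes a single batched operation rather than a per-individual loop:

```python
# Sketch only: "population as a tensor" evaluation for a multi-objective EA.
# Not TensorRVEA's implementation; the toy bi-objective problem is a stand-in.
import numpy as np

def evaluate_population(pop: np.ndarray) -> np.ndarray:
    """pop has shape (n_individuals, n_variables); returns (n_individuals, 2)."""
    f1 = np.sum(pop ** 2, axis=1)          # objective 1 for all individuals at once
    f2 = np.sum((pop - 1.0) ** 2, axis=1)  # objective 2, same batched pattern
    return np.stack([f1, f2], axis=1)

pop = np.random.rand(100_000, 50)  # a large-scale population fits in one array
objs = evaluate_population(pop)    # one call evaluates every individual
print(objs.shape)                  # (100000, 2)
```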
- AcceleratedLiNGAM: Learning Causal DAGs at the speed of GPUs [57.12929098407975]
We show that by efficiently parallelizing existing causal discovery methods, we can scale them to thousands of dimensions.
Specifically, we focus on the causal ordering subprocedure in DirectLiNGAM and implement GPU kernels to accelerate it.
This allows us to apply DirectLiNGAM to causal inference on large-scale gene expression data with genetic interventions yielding competitive results.
arXiv Detail & Related papers (2024-03-06T15:06:11Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Operation-Level Performance Benchmarking of Graph Neural Networks for Scientific Applications [0.15469452301122172]
We profile and select low-level operations pertinent to Graph Neural Networks (GNNs) for scientific computing implemented in the PyTorch Geometric software framework.
These are then rigorously benchmarked on NVIDIA A100 GPUs for various combinations of input values, including tensor sparsity.
At a high level, we conclude that on NVIDIA systems, confounding bottlenecks such as memory inefficiency often dominate runtime costs more so than data sparsity alone.
We hope that these results serve as a baseline for those developing these operations on specialized hardware and that our subsequent analysis helps to facilitate future software- and hardware-based optimizations of these operations.
arXiv Detail & Related papers (2022-07-20T15:01:12Z)
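Operation-level benchmarking of this kind can be approximated with a small timing harness. The sketch below is an illustration, not the paper's benchmark suite; the matrix sizes, densities, and repetition counts are arbitrary assumptions. It times a sparse-dense matrix product, a common GNN aggregation primitive, at several sparsity levels in plain PyTorch:

```python
# Illustrative timing harness (not the paper's benchmark suite): measure a
# sparse-dense matmul, a typical GNN aggregation primitive, at several sparsities.
import time
import torch

def time_spmm(n=4096, d=128, density=0.01,
              device="cuda" if torch.cuda.is_available() else "cpu"):
    a = (torch.rand(n, n, device=device) < density).float()  # random adjacency-like matrix
    a_sparse = a.to_sparse()                                  # COO sparse representation
    x = torch.randn(n, d, device=device)
    # Warm up, then synchronize so GPU timing is not skewed by async kernel launches.
    for _ in range(3):
        torch.sparse.mm(a_sparse, x)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(20):
        torch.sparse.mm(a_sparse, x)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / 20

for density in (0.001, 0.01, 0.1):
    print(f"density={density}: {time_spmm(density=density) * 1e3:.2f} ms")
```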
- Fast Gaussian Process Posterior Mean Prediction via Local Cross Validation and Precomputation [0.0]
We present a fast posterior mean prediction algorithm called FastMuyGPs.
It is based upon the MuyGPs hyperparameter estimation algorithm and utilizes a combination of leave-one-out cross-validation, nearest neighbors sparsification, and precomputation.
It attains superior accuracy and competitive or superior runtime to both deep neural networks and state-of-the-art GP algorithms.
arXiv Detail & Related papers (2022-05-22T17:38:36Z)
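As a rough illustration of the nearest-neighbors sparsification idea mentioned in this entry (generic local GP regression, not the MuyGPs or FastMuyGPs algorithm; the RBF kernel and its hyperparameters are arbitrary assumptions), the posterior mean at each test point can be computed from only its k nearest training points, so each prediction solves a small k-by-k system instead of an n-by-n one:

```python
# Sketch of local (k-nearest-neighbor) GP posterior mean prediction.
# Not the MuyGPs/FastMuyGPs algorithm; kernel and hyperparameters are arbitrary.
import numpy as np

def rbf(a, b, lengthscale=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def local_gp_mean(x_train, y_train, x_test, k=30, noise=1e-3):
    preds = np.empty(len(x_test))
    for i, xq in enumerate(x_test):
        # Nearest-neighbor sparsification: keep only the k closest training points.
        idx = np.argsort(((x_train - xq) ** 2).sum(-1))[:k]
        xn, yn = x_train[idx], y_train[idx]
        k_nn = rbf(xn, xn) + noise * np.eye(k)
        k_qn = rbf(xq[None, :], xn)
        # Standard GP posterior mean, restricted to the local neighborhood.
        preds[i] = k_qn[0] @ np.linalg.solve(k_nn, yn)
    return preds

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=(2000, 1))
y_train = np.sin(x_train[:, 0]) + 0.05 * rng.normal(size=2000)
x_test = rng.uniform(-3, 3, size=(5, 1))
print(local_gp_mean(x_train, y_train, x_test, k=30))
```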
- Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to adapt rules from batched GP-Opt.
arXiv Detail & Related papers (2022-01-30T20:42:14Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- TensorGP -- Genetic Programming Engine in TensorFlow [1.1470070927586016]
We investigate the benefits of applying data vectorization and fitness caching methods to domain evaluation in Genetic Programming.
Our performance benchmarks demonstrate that performance gains of up to two orders of magnitude can be achieved on a parallel approach running on dedicated hardware.
arXiv Detail & Related papers (2021-03-12T20:19:37Z)
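The fitness caching mentioned in this entry can be illustrated, in simplified form, by memoizing subtree evaluations so that identical subexpressions recurring within or across individuals are computed over the domain only once. This is a generic sketch under that assumption, not TensorGP's cache:

```python
# Simplified sketch of subtree-level fitness caching (not TensorGP's cache):
# identical subexpressions are evaluated over the domain only once.
import numpy as np

x = np.linspace(-1.0, 1.0, 1_000_000)  # the evaluation domain
_cache = {}                            # maps expression repr -> result array

def evaluate(expr):
    """expr is a nested tuple like ('add', ('mul', 'x', 'x'), ('sin', 'x'))."""
    key = repr(expr)
    if key in _cache:
        return _cache[key]             # reuse a previously evaluated subtree
    if expr == 'x':
        result = x
    else:
        op, *args = expr
        children = [evaluate(a) for a in args]
        result = {'add': np.add, 'mul': np.multiply, 'sin': np.sin}[op](*children)
    _cache[key] = result
    return result

ind_a = ('add', ('mul', 'x', 'x'), ('sin', 'x'))
ind_b = ('mul', ('mul', 'x', 'x'), ('sin', 'x'))  # shares both subtrees with ind_a
evaluate(ind_a)
evaluate(ind_b)                                   # hits the cache for the shared subtrees
print(len(_cache))                                # number of cached subexpressions
```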
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
- Near-linear Time Gaussian Process Optimization with Adaptive Batching and Resparsification [119.41129787351092]
We introduce BBKB, the first no-regret GP optimization algorithm that provably runs in near-linear time and selects candidates in batches.
We show that the same bound can be used to adaptively delay costly updates to the sparse GP approximation, achieving a near-constant per-step amortized cost.
arXiv Detail & Related papers (2020-02-23T17:43:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.