TensorGP -- Genetic Programming Engine in TensorFlow
- URL: http://arxiv.org/abs/2103.07512v1
- Date: Fri, 12 Mar 2021 20:19:37 GMT
- Title: TensorGP -- Genetic Programming Engine in TensorFlow
- Authors: Francisco Baeta, João Correia, Tiago Martins and Penousal Machado
- Abstract summary: We investigate the benefits of applying data vectorization and fitness caching methods to domain evaluation in Genetic Programming.
Our performance benchmarks demonstrate that performance gains of up to two orders of magnitude can be achieved on a parallel approach running on dedicated hardware.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we resort to the TensorFlow framework to investigate the
benefits of applying data vectorization and fitness caching methods to domain
evaluation in Genetic Programming. For this purpose, an independent engine was
developed, TensorGP, along with a testing suite to extract comparative timing
results across different architectures and amongst both iterative and
vectorized approaches. Our performance benchmarks demonstrate that by
exploiting the TensorFlow eager execution model, performance gains of up to two
orders of magnitude can be achieved on a parallel approach running on dedicated
hardware when compared to a standard iterative approach.
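The two ideas the abstract names, evaluating an individual over the whole problem domain as one vectorized tensor operation and caching fitness values so identical individuals are never re-evaluated, can be sketched as follows. This is a minimal illustration, not TensorGP's actual API; the primitive set, tree encoding, and fitness function are all assumptions, and NumPy stands in for TensorFlow's tensor operations.

```python
# Minimal sketch of vectorized domain evaluation plus fitness caching for GP.
# NOT TensorGP's API: primitives, tree encoding, and fitness are illustrative.
import numpy as np

# Hypothetical primitive set: each operator maps over whole arrays at once,
# so a single tree traversal evaluates every domain point in parallel.
PRIMITIVES = {
    "add": np.add,
    "mul": np.multiply,
    "sin": np.sin,
}

def evaluate(tree, domain):
    """Recursively evaluate an expression tree over the full domain array."""
    if tree == "x":                      # terminal: the input variable
        return domain
    if isinstance(tree, float):          # terminal: a constant
        return np.full_like(domain, tree)
    op, *args = tree                     # internal node: (operator, children...)
    return PRIMITIVES[op](*(evaluate(a, domain) for a in args))

fitness_cache = {}

def fitness(tree, domain, target):
    """Mean absolute error against a target, memoized on the tree's repr."""
    key = repr(tree)
    if key not in fitness_cache:
        err = evaluate(tree, domain) - target
        fitness_cache[key] = float(np.mean(np.abs(err)))
    return fitness_cache[key]

domain = np.linspace(-1.0, 1.0, 1024)           # whole domain as one tensor
target = np.sin(domain) + domain * domain
ind = ("add", ("sin", "x"), ("mul", "x", "x"))  # encodes sin(x) + x*x
print(fitness(ind, domain, target))             # exact match -> 0.0
```

A second call to `fitness` with the same tree returns the cached value without touching the domain, which is where the caching gains come from when populations contain duplicated individuals across generations.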
Related papers
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves the sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, achieving a speedup in computation compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z) - A Generic Performance Model for Deep Learning in a Distributed
Environment [0.7829352305480285]
We propose a generic performance model of an application in a distributed environment with a generic expression of the application execution time.
We have evaluated the proposed model on three deep learning frameworks, including MXnet and Pytorch.
arXiv Detail & Related papers (2023-05-19T13:30:34Z) - Generalized Relation Modeling for Transformer Tracking [13.837171342738355]
One-stream trackers let the template interact with all parts inside the search region throughout all the encoder layers.
This could potentially lead to target-background confusion when the extracted feature representations are not sufficiently discriminative.
We propose a generalized relation modeling method based on adaptive token division.
Our method is superior to the two-stream and one-stream pipelines and achieves state-of-the-art performance on six challenging benchmarks while running at real-time speed.
arXiv Detail & Related papers (2023-03-29T10:29:25Z) - Performance Embeddings: A Similarity-based Approach to Automatic
Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z) - Stochastic Generative Flow Networks [89.34644133901647]
Generative Flow Networks (or GFlowNets) learn to sample complex structures through the lens of "inference as control".
Existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics.
This paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments.
arXiv Detail & Related papers (2023-02-19T03:19:40Z) - Robust Scheduling with GFlowNets [6.6908747077585105]
We propose a new approach to scheduling by sampling proportionally to the proxy metric using a novel GFlowNet method.
We introduce a technique to control the trade-off between diversity and goodness of the proposed schedules at inference time.
arXiv Detail & Related papers (2023-01-17T18:59:15Z) - Joint Feature Learning and Relation Modeling for Tracking: A One-Stream
Framework [76.70603443624012]
We propose a novel one-stream tracking (OSTrack) framework that unifies feature learning and relation modeling.
In this way, discriminative target-oriented features can be dynamically extracted by mutual guidance.
OSTrack achieves state-of-the-art performance on multiple benchmarks, in particular, it shows impressive results on the one-shot tracking benchmark GOT-10k.
arXiv Detail & Related papers (2022-03-22T18:37:11Z) - Speed Benchmarking of Genetic Programming Frameworks [1.1470070927586016]
Genetic Programming (GP) is known to suffer from the burden of being computationally expensive by design.
In this work, we employ a series of benchmarks meant to compare both the performance and evolution capabilities of different vectorized and iterative implementation approaches.
arXiv Detail & Related papers (2021-05-25T22:06:42Z) - TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale
Language Models [60.23234205219347]
TeraPipe is a high-performance token-level pipeline parallel algorithm for synchronous model-parallel training of Transformer-based language models.
We show that TeraPipe can speed up the training by 5.0x for the largest GPT-3 model with 175 billion parameters on an AWS cluster.
arXiv Detail & Related papers (2021-02-16T07:34:32Z) - Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
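The complexity reduction above can be sketched for a single linear flow layer $z = Wx$; the notation here is an illustrative reconstruction, not taken verbatim from the paper. The expensive term in the exact gradient is the log-determinant, whose derivative is the transposed inverse weight matrix:

```latex
\[
  \nabla_W \log\lvert\det W\rvert \;=\; W^{-\top}
  \quad (\text{exact, costs } \mathcal{O}(D^3))
  \qquad\leadsto\qquad
  \nabla_W \log\lvert\det W\rvert \;\approx\; R^{\top},
  \quad R \approx W^{-1}
\]
\[
  \mathcal{L}_{\mathrm{rec}} \;=\; \lVert R W x - x \rVert_2^2
  \qquad \text{(keeps the learned } R \text{ close to } W^{-1}\text{)}
\]
```

Substituting the learned approximate inverse $R^{\top}$ for $W^{-\top}$ avoids the cubic-cost matrix inversion at each step, leaving only $\mathcal{O}(D^2)$ work per layer update.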
arXiv Detail & Related papers (2020-11-14T09:51:51Z) - TF-Coder: Program Synthesis for Tensor Manipulations [29.46838583290554]
We present a tool called TF-Coder for programming by example in TensorFlow.
We train models to predict operations from features of the input and output tensors and natural language descriptions of tasks.
TF-Coder solves 63 of 70 real-world tasks within 5 minutes, sometimes finding simpler solutions in less time compared to experienced human programmers.
arXiv Detail & Related papers (2020-03-19T22:53:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.