Trackable Island-model Genetic Algorithms at Wafer Scale
- URL: http://arxiv.org/abs/2405.03605v1
- Date: Mon, 6 May 2024 16:17:33 GMT
- Title: Trackable Island-model Genetic Algorithms at Wafer Scale
- Authors: Matthew Andres Moreno, Connor Yang, Emily Dolson, Luis Zaman
- Abstract summary: We present a tracking-enabled asynchronous island-based genetic algorithm (GA) framework for Cerebras Wafer-Scale Engine (WSE) hardware.
We validate phylogenetic reconstructions and demonstrate their suitability for inference of underlying evolutionary conditions.
These benchmark and validation trials reflect strong potential for highly scalable evolutionary computation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emerging ML/AI hardware accelerators, like the 850,000 processor Cerebras Wafer-Scale Engine (WSE), hold great promise to scale up the capabilities of evolutionary computation. However, challenges remain in maintaining visibility into underlying evolutionary processes while efficiently utilizing these platforms' large processor counts. Here, we focus on the problem of extracting phylogenetic information from digital evolution on the WSE platform. We present a tracking-enabled asynchronous island-based genetic algorithm (GA) framework for WSE hardware. Emulated and on-hardware GA benchmarks with a simple tracking-enabled agent model clock upwards of 1 million generations a minute for population sizes reaching 16 million. This pace enables quadrillions of evaluations a day. We validate phylogenetic reconstructions from these trials and demonstrate their suitability for inference of underlying evolutionary conditions. In particular, we demonstrate extraction of clear phylometric signals that differentiate wafer-scale runs with adaptive dynamics enabled versus disabled. Together, these benchmark and validation trials reflect strong potential for highly scalable evolutionary computation that is both efficient and observable. Kernel code implementing the island-model GA supports drop-in customization to support any fixed-length genome content and fitness criteria, allowing it to be leveraged to advance research interests across the community.
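The abstract describes an asynchronous island-model GA whose kernel code allows drop-in replacement of genome content and fitness criteria, with lineage information retained for later phylogenetic reconstruction. The Python sketch below is a rough, simplified analogue of that architecture under stated assumptions: islands exchange migrants through per-island buffers, and each genome carries a parent ID so a phylogeny can be rebuilt afterward. All names and parameters (Genome, step_island, tournament size, ring topology) are illustrative; the actual framework runs as kernel code on WSE hardware with its own tracking instrumentation, not this Python bookkeeping.

```python
# Minimal sketch of an asynchronous island-model GA with lineage tracking.
# Illustrative only; not the paper's WSE kernel implementation.
import random
from dataclasses import dataclass, field
from itertools import count

_uid = count()  # globally unique genome IDs, used for phylogeny records

@dataclass
class Genome:
    bits: list                 # fixed-length genome content (drop-in replaceable)
    uid: int = field(default_factory=lambda: next(_uid))
    parent: int | None = None  # parent uid -> enables phylogeny reconstruction

def fitness(g: Genome) -> int:
    # Placeholder fitness criterion ("one-max"): count of 1-bits.
    return sum(g.bits)

def mutate(g: Genome, rate: float = 0.02) -> Genome:
    bits = [b ^ (random.random() < rate) for b in g.bits]
    return Genome(bits=bits, parent=g.uid)

def step_island(pop, inbox, outbox, tournament=4, migrants=1):
    # Asynchronous-style migration: absorb whatever has arrived, no global sync.
    while inbox:
        pop[random.randrange(len(pop))] = inbox.pop()
    # Tournament selection + mutation produces the next generation.
    nxt = []
    for _ in pop:
        parent = max(random.sample(pop, tournament), key=fitness)
        nxt.append(mutate(parent))
    # Emit a few emigrants toward a neighboring island.
    outbox.extend(random.sample(nxt, migrants))
    return nxt

def run(n_islands=8, pop_size=32, genome_len=64, generations=200):
    phylogeny = {}  # uid -> parent uid, harvested as genomes are created
    islands = [[Genome([0] * genome_len) for _ in range(pop_size)]
               for _ in range(n_islands)]
    boxes = [[] for _ in range(n_islands)]  # ring-topology migration buffers
    for _ in range(generations):
        # Serial loop stands in for the truly parallel, asynchronous hardware.
        for i in range(n_islands):
            islands[i] = step_island(islands[i], boxes[i], boxes[(i + 1) % n_islands])
            for g in islands[i]:
                phylogeny[g.uid] = g.parent
    best = max((g for isl in islands for g in isl), key=fitness)
    return best, phylogeny

if __name__ == "__main__":
    best, phylo = run()
    print("best fitness:", fitness(best), "| phylogeny records:", len(phylo))
```

The fitness and mutate functions stand in for the drop-in customization points the abstract mentions: swapping them out changes the genome content and selection criterion without touching the island or migration logic.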
Related papers
- Graph Adapter of EEG Foundation Models for Parameter Efficient Fine Tuning [1.8946099300030472]
EEG-GraphAdapter (EGA) is a parameter-efficient fine-tuning (PEFT) approach for adapting EEG foundation models.
EGA is integrated into pre-trained temporal backbone models as a GNN-based module.
It improves performance by up to 16.1% in the F1-score compared with the backbone BENDR model.
arXiv Detail & Related papers (2024-11-25T07:30:52Z) - On-device Learning of EEGNet-based Network For Wearable Motor Imagery Brain-Computer Interface [2.1710886744493263]
This paper implements a lightweight and efficient on-device learning engine for wearable motor imagery recognition.
We demonstrate a remarkable accuracy gain of up to 7.31% with respect to the baseline with a memory footprint of 15.6 KByte.
Our tailored approach exhibits an inference time of 14.9 ms at 0.76 mJ per single inference, and 20 us at 0.83 uJ per single update during online training.
arXiv Detail & Related papers (2024-08-25T08:23:51Z) - Trackable Agent-based Evolution Models at Wafer Scale [0.0]
We focus on the problem of extracting phylogenetic information from agent-based evolution on the 850,000 processor Cerebras Wafer Scale Engine (WSE).
We present an asynchronous island-based genetic algorithm (GA) framework for WSE hardware.
We validate phylogenetic reconstructions from these trials and demonstrate their suitability for inference of underlying evolutionary conditions.
arXiv Detail & Related papers (2024-04-16T19:24:14Z) - GPU-accelerated Evolutionary Multiobjective Optimization Using Tensorized RVEA [13.319536515278191]
We introduce TensorRVEA, a tensorized Reference Vector Guided Evolutionary Algorithm that harnesses GPU acceleration.
In numerical benchmark tests involving large-scale populations and problem dimensions, TensorRVEA consistently demonstrates high computational performance, achieving speedups of over 1000x.
arXiv Detail & Related papers (2024-04-01T15:04:24Z) - DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN)
CMMN consists of filtering the signals to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on training data (a rough sketch follows this list).
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture.
arXiv Detail & Related papers (2023-05-30T08:24:01Z) - Optimization of a Hydrodynamic Computational Reservoir through Evolution [58.720142291102135]
We interface with a model of a hydrodynamic system, under development by a startup, as a computational reservoir.
We optimized the readout times and how inputs are mapped to the wave amplitude or frequency using an evolutionary search algorithm.
Applying evolutionary methods to this reservoir system substantially improved separability on an XNOR task, in comparison to implementations with hand-selected parameters.
arXiv Detail & Related papers (2023-04-20T19:15:02Z) - Deep metric learning improves lab of origin prediction of genetically engineered plasmids [63.05016513788047]
Genetic engineering attribution (GEA) is the ability to make sequence-lab associations.
We propose a method, based on metric learning, that ranks the most likely labs-of-origin.
We are able to extract key signatures in plasmid sequences for particular labs, allowing for an interpretable examination of the model's outputs.
arXiv Detail & Related papers (2021-11-24T16:29:03Z) - AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-direct, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state-of-the-art approaches in a variety of biologically motivated sequence design challenges.
arXiv Detail & Related papers (2020-10-05T16:40:38Z) - Maximum Mutation Reinforcement Learning for Scalable Control [25.935468948833073]
Reinforcement Learning (RL) has demonstrated data efficiency and optimal control over large state spaces at the cost of scalable performance.
We present the Evolution-based Soft Actor-Critic (ESAC), a scalable RL algorithm.
arXiv Detail & Related papers (2020-07-24T16:29:19Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
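The CMMN entry above summarizes a signal-normalization idea: estimate a Wasserstein barycenter of training PSDs and filter each signal so its PSD moves toward that barycenter. The numpy/scipy sketch below illustrates that idea only; the Welch parameters, filter length, and square-root-averaging barycenter formula are my assumptions, not the reference implementation.

```python
# Rough sketch of PSD normalization toward a Wasserstein barycenter (CMMN-style).
import numpy as np
from scipy.signal import welch, fftconvolve

def psd(x, nperseg=256):
    # Welch estimate of the power spectral density.
    _, p = welch(x, nperseg=nperseg)
    return p

def barycenter(psds):
    # For centered stationary Gaussian signals, the Wasserstein-2 barycenter
    # PSD is (assumption) the squared average of the square-root PSDs.
    return np.mean(np.sqrt(psds), axis=0) ** 2

def mapping_filter(p_src, p_bar, filter_len=128):
    # Frequency response that maps the source PSD onto the barycenter PSD.
    h_freq = np.sqrt(p_bar / np.maximum(p_src, 1e-12))
    h_time = np.fft.irfft(h_freq)
    return np.roll(h_time, filter_len // 2)[:filter_len]  # rough truncation

# Usage: estimate the barycenter on training signals, then normalize a new one.
rng = np.random.default_rng(0)
train = [rng.standard_normal(4096) for _ in range(5)]
p_bar = barycenter(np.stack([psd(x) for x in train]))
x_new = rng.standard_normal(4096)
x_norm = fftconvolve(x_new, mapping_filter(psd(x_new), p_bar), mode="same")
```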
This list is automatically generated from the titles and abstracts of the papers on this site.