NeuroEvo: A Cloud-based Platform for Automated Design and Training of
Neural Networks using Evolutionary and Particle Swarm Algorithms
- URL: http://arxiv.org/abs/2210.00286v1
- Date: Sat, 1 Oct 2022 14:10:43 GMT
- Title: NeuroEvo: A Cloud-based Platform for Automated Design and Training of
Neural Networks using Evolutionary and Particle Swarm Algorithms
- Authors: Philip Schroeder
- Abstract summary: This paper introduces a new web platform, NeuroEvo, that allows users to interactively design and train neural network classifiers.
The classification problem and training data are provided by the user and, upon completion of the training process, the best classifier is made available to download and implement in Python, Java, and JavaScript.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evolutionary algorithms (EAs) provide unique advantages for optimizing neural
networks in complex search spaces. This paper introduces a new web platform,
NeuroEvo (neuroevo.io), that allows users to interactively design and train
neural network classifiers using evolutionary and particle swarm algorithms.
The classification problem and training data are provided by the user and, upon
completion of the training process, the best classifier is made available to
download and implement in Python, Java, and JavaScript. NeuroEvo is a
cloud-based application that leverages GPU parallelization to improve the speed
with which the independent evolutionary steps, such as mutation, crossover, and
fitness evaluation, are executed across the population. This paper outlines the
training algorithms and opportunities for users to specify design decisions and
hyperparameter settings. The algorithms described in this paper are also made
available as a Python package, neuroevo (PyPI:
https://pypi.org/project/neuroevo/).
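The abstract does not include code, but the training loop it describes (a population of candidate classifiers refined by selection, crossover, and mutation, with fitness evaluated independently for each member) can be sketched in plain Python. The sketch below is illustrative only: it does not use the neuroevo package's API, and every function name and hyperparameter in it is an assumption.

```python
# Illustrative sketch of evolutionary neural network training in the spirit of
# the abstract above. NOT the neuroevo package's API; all names and
# hyperparameters here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def init_individual(n_in, n_hidden, n_out):
    """Flat parameter vector for a one-hidden-layer classifier."""
    n = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    return rng.normal(0.0, 0.5, size=n)

def forward(params, X, n_in, n_hidden, n_out):
    """Unpack the flat vector into weights and compute class logits."""
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = params[i:i + n_hidden]; i += n_hidden
    W2 = params[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = params[i:i + n_out]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(params, X, y, dims):
    """Classification accuracy. These independent per-individual evaluations
    are the kind of step the platform parallelizes on GPU."""
    logits = forward(params, X, *dims)
    return float(np.mean(np.argmax(logits, axis=1) == y))

def evolve(X, y, dims, pop_size=50, generations=100, mut_sigma=0.1, elite=5):
    pop = [init_individual(*dims) for _ in range(pop_size)]
    for _ in range(generations):
        scores = np.array([fitness(p, X, y, dims) for p in pop])
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[:elite]]   # selection
        children = list(parents)                    # elitism: keep the best
        while len(children) < pop_size:
            a, b = rng.choice(elite, size=2, replace=False)
            mask = rng.random(parents[0].size) < 0.5          # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            child = child + rng.normal(0, mut_sigma, child.size)  # mutation
            children.append(child)
        pop = children
    scores = np.array([fitness(p, X, y, dims) for p in pop])
    return pop[int(np.argmax(scores))]

# Toy usage: a two-class problem on random blobs.
dims = (2, 16, 2)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
best = evolve(X, y, dims)
print("train accuracy:", fitness(best, X, y, dims))
```

The particle swarm variant mentioned in the abstract replaces crossover and mutation with velocity updates of the standard form v <- w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x), followed by x <- x + v, where p_best and g_best are the best parameter vectors found so far by the particle and by the swarm. In both variants the per-individual fitness evaluations are independent, which is what makes the GPU parallelization described above effective.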
Related papers
- Convolutional Conditional Neural Processes [6.532867867011488]
This thesis advances neural processes in three ways.
ConvNPs improve data efficiency by building in a symmetry called translation equivariance.
GNPs directly parametrise dependencies in the predictions of a neural process.
AR CNPs train a neural process without any modifications to the model or training procedure and, at test time, roll out the model in an autoregressive fashion.
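The autoregressive rollout mentioned for AR CNPs can be pictured with a generic sketch: a model trained to predict one target at a time from a context set is applied to the targets sequentially, feeding each prediction back into the context. Everything below, including the toy model_predict stand-in, is hypothetical and only illustrates the rollout pattern, not the AR CNP itself.

```python
# Generic autoregressive rollout sketch (hypothetical; for illustration only).
import numpy as np

def model_predict(ctx_x, ctx_y, x):
    # Hypothetical stand-in for a trained conditional predictor:
    # a distance-weighted average of the context values.
    w = np.exp(-np.abs(np.asarray(ctx_x) - x))
    return float(np.dot(w, ctx_y) / w.sum())

def autoregressive_rollout(ctx_x, ctx_y, target_xs):
    ctx_x, ctx_y = list(ctx_x), list(ctx_y)
    preds = []
    for x in target_xs:
        y = model_predict(ctx_x, ctx_y, x)
        preds.append(y)
        ctx_x.append(x)   # feed the prediction back as context
        ctx_y.append(y)
    return preds

print(autoregressive_rollout([0.0, 1.0], [0.0, 1.0], [0.5, 1.5, 2.0]))
```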
arXiv Detail & Related papers (2024-08-18T19:53:38Z)
- Forward Direct Feedback Alignment for Online Gradient Estimates of Spiking Neural Networks [0.0]
Spiking neural networks can be simulated energy efficiently on neuromorphic hardware platforms.
We propose a novel neuromorphic algorithm, the Spiking Forward Direct Feedback Alignment (SFDFA) algorithm.
arXiv Detail & Related papers (2024-02-06T09:07:12Z)
- SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks [4.532517021515834]
Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials.
We introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs.
arXiv Detail & Related papers (2023-12-28T18:48:10Z)
- SA-CNN: Application to text categorization issues using simulated annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN, a neural network for text classification tasks based on the Text-CNN architecture.
arXiv Detail & Related papers (2023-03-13T14:27:34Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
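The symmetry in question is easy to check directly: permuting a hidden layer's neurons, together with the matching rows and columns of the adjacent weight matrices and the bias, leaves the network's input-output function unchanged. The snippet below is a minimal numpy check of that claim, not code from the paper.

```python
# Check that permuting hidden neurons leaves an MLP's outputs unchanged.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)   # hidden -> output

def mlp(x, W1, b1, W2, b2):
    return np.tanh(x @ W1 + b1) @ W2 + b2

perm = rng.permutation(8)                     # reorder the 8 hidden neurons
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

x = rng.normal(size=(5, 3))
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
print("outputs identical under hidden-neuron permutation")
```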
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs by training models in order to predict their performance.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Sampled Training and Node Inheritance for Fast Evolutionary Neural Architecture Search [22.483917379706725]
Evolutionary neural architecture search (ENAS) has received increasing attention due to the attractive global optimization capability of evolutionary algorithms.
This paper proposes a new framework for fast ENAS based on a directed acyclic graph, in which parents are randomly sampled and trained on each mini-batch of training data.
We evaluate the proposed algorithm on the widely used datasets, in comparison with 26 state-of-the-art peer algorithms.
arXiv Detail & Related papers (2020-03-07T12:33:01Z)
- Computational optimization of convolutional neural networks using separated filters architecture [69.73393478582027]
Use of convolutional neural networks (CNNs) is the standard approach to image recognition, despite the fact that they can be computationally demanding.
We consider a convolutional neural network transformation that reduces computational complexity and thus speeds up neural network processing.
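One standard instance of the separated-filters idea, assuming it matches what the title describes, factorizes a k x k convolution into a k x 1 followed by a 1 x k convolution, which is exact for rank-1 kernels and cuts the per-pixel cost from k^2 to 2k multiplies. The sketch below only demonstrates this general principle; the paper's actual transformation may differ.

```python
# Spatially separable convolution: a rank-1 k x k kernel applied as two 1-D passes.
import numpy as np
from scipy.signal import convolve2d

k = 5
col = np.array([1., 4., 6., 4., 1.])[:, None]   # k x 1 vertical filter
row = col.T                                      # 1 x k horizontal filter
kernel = col @ row                               # rank-1 k x k kernel

img = np.random.default_rng(0).random((32, 32))
full = convolve2d(img, kernel, mode="valid")     # k*k = 25 multiplies per pixel
sep = convolve2d(convolve2d(img, col, mode="valid"), row, mode="valid")  # 2k = 10
assert np.allclose(full, sep)
print("separable result matches the full k x k convolution")
```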
arXiv Detail & Related papers (2020-02-18T17:42:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.