Asynchronous Distributed Genetic Algorithms with Javascript and JSON
- URL: http://arxiv.org/abs/2401.17234v1
- Date: Tue, 30 Jan 2024 18:23:28 GMT
- Title: Asynchronous Distributed Genetic Algorithms with Javascript and JSON
- Authors: Juan Julián Merelo and Pedro A. Castillo and Juan Luis Jiménez Laredo and Antonio M. Mora and Alberto Prieto
- Abstract summary: We present a distributed evolutionary computation system that uses the computational capabilities of the ubiquitous web browser.
Since computing becomes a social activity and is inherently unpredictable, this paper explores the performance of this kind of virtual computer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a connected world, spare CPU cycles are up for grabs, if only obtaining them is made easy enough. In this paper we present a distributed evolutionary computation system that uses the computational capabilities of the ubiquitous web browser. Using Asynchronous JavaScript and JSON (JavaScript Object Notation, a serialization protocol) allows anybody with a web browser (that is, nearly everybody connected to the Internet) to participate in a genetic algorithm experiment with little effort, or none at all. Since, in this case, computing becomes a social activity and is inherently unpredictable, in this paper we explore the performance of this kind of virtual computer by solving simple problems such as the Royal Road function and analyzing how many machines and evaluations it yields. We also examine possible performance bottlenecks and how to solve them, and, finally, offer some advice on how to set up this kind of experiment to maximize turnout and, thus, performance.
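The mechanism the abstract outlines, browsers pulling work asynchronously and exchanging JSON with a server, can be pictured with a short sketch. What follows is an illustration only, not the paper's code: the endpoint paths, payload shape, block size, and the use of the fetch API are all assumptions made here, and the fitness function shown is one common variant of the Royal Road.

```javascript
// A minimal sketch of a browser-based evaluation client in the spirit of the
// paper: individuals arrive as JSON, are scored locally, and the fitness
// values are posted back. Endpoint paths, payload shape, and block size are
// illustrative assumptions, not the paper's actual protocol.

const BLOCK_SIZE = 8; // Royal Road: reward each block made entirely of ones

// One common variant of the Royal Road function: +BLOCK_SIZE for every
// BLOCK_SIZE-wide block of the bit string that contains no zeros.
function royalRoad(bits) {
  let fitness = 0;
  for (let i = 0; i < bits.length; i += BLOCK_SIZE) {
    if (!bits.slice(i, i + BLOCK_SIZE).includes('0')) fitness += BLOCK_SIZE;
  }
  return fitness;
}

// Asynchronous work loop: keep fetching individuals and returning results
// for as long as the (hypothetical) server hands out work.
async function workLoop(server) {
  for (;;) {
    const res = await fetch(`${server}/individuals`); // JSON in...
    if (!res.ok) break;
    const individuals = await res.json(); // e.g. ["01101111...", "11111111..."]
    const results = individuals.map((bits) => ({ bits, fitness: royalRoad(bits) }));
    await fetch(`${server}/results`, {    // ...and JSON back out
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(results),
    });
  }
}

workLoop('https://example.org/ga'); // any visitor's browser becomes a worker
```

Because each page load simply joins the loop, turnout (and therefore throughput) depends on how many visitors keep the page open, which is exactly the social, unpredictable behavior the paper sets out to measure.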
Related papers
- Pushing the Limits: Concurrency Detection in Acyclic Sound Free-Choice Workflow Nets in $O(P^2 + T^2)$ (arXiv, 2024-01-29)
  Knowing which places and transitions can execute in parallel helps in understanding computation nets. Kovalyov and Esparza have developed algorithms that compute all concurrent places in $O\big((P+T)TP^2\big)$ for live and bounded nets. This paper complements the palette of detection algorithms with the Concurrent Paths (CP) algorithm for sound free-choice workflow nets.
- Randomized Polar Codes for Anytime Distributed Machine Learning (arXiv, 2023-09-01)
  We present a novel distributed computing framework that is robust to slow compute nodes and capable of both approximate and exact computation of linear operations. We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery. We demonstrate potential applications of this framework in contexts such as large-scale matrix multiplication and black-box optimization.
- Performance and Energy Consumption of Parallel Machine Learning Algorithms (arXiv, 2023-05-01)
  Machine learning models have achieved remarkable success in various real-world applications. Training them requires large-scale data sets and multiple iterations before they work properly. Parallelization of training algorithms is a common strategy to speed up training.
- Improving Inference Performance of Machine Learning with the Divide-and-Conquer Principle (arXiv, 2023-01-12)
  Many popular machine learning models scale poorly when deployed on CPUs. We propose a simple yet effective approach based on the divide-and-conquer principle to tackle this problem. We implement this idea in the popular OnnxRuntime framework and evaluate its effectiveness with several use cases.
- The Basis of Design Tools for Quantum Computing: Arrays, Decision Diagrams, Tensor Networks, and ZX-Calculus (arXiv, 2023-01-10)
  Quantum computers promise to efficiently solve important problems that classical computers never will, but a fully automated quantum software stack has yet to be developed. This work provides a look "under the hood" of today's design tools and showcases how these means are utilized in them, e.g., for simulation, compilation, and verification of quantum circuits.
- PARTIME: Scalable and Parallel Processing Over Time with Deep Neural Networks (arXiv, 2022-10-17)
  We present PARTIME, a library designed to speed up neural networks whenever data is continuously streamed over time. PARTIME starts processing each data sample at the moment it becomes available from the stream. Experiments empirically compare PARTIME with classic non-parallel neural computations in online learning.
- Distributed Optimization using Heterogeneous Compute Systems (arXiv, 2021-10-03)
  We consider the training of deep neural networks on a distributed system of workers with varying compute power. A naive implementation of synchronous distributed training leaves the faster workers waiting for the slowest worker to finish processing. We propose to dynamically adjust the amount of data assigned to each worker during training.
- Online training for high-performance analogue readout layers in photonic reservoir computers (arXiv, 2020-12-19)
  Reservoir computing is a bio-inspired computing paradigm for processing time-dependent signals. The major bottleneck of these implementations is the readout layer, which relies on slow offline post-processing. Here we propose the use of online training to solve these issues.
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks (arXiv, 2020-04-17)
  We train a graph convolutional network to fit the performance of sampled sub-networks. With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits (arXiv, 2020-04-13)
  We propose Einsum Networks (EiNets), a novel implementation design for probabilistic circuits (PCs). At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum operation. We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs by leveraging automatic differentiation.
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving (arXiv, 2020-02-10)
  Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning. We frame feedforward computation as solving a system of nonlinear equations and propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrids of both (a toy sketch follows this list). Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
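The Jacobi fixed-point scheme named in the last entry can be illustrated with a toy sketch. This is not that paper's code: the layer functions, the dense representation, and the convergence test below are invented for illustration; the sketch only demonstrates the update rule, where every layer's output is recomputed from the previous iterate simultaneously (and hence in parallel) instead of strictly one layer after another.

```javascript
// Toy Jacobi fixed-point evaluation of a layered feedforward function.
// Sequential evaluation computes s[t+1] = f_t(s[t]) one layer at a time;
// the Jacobi view treats these equalities as a system of equations and
// updates ALL s[t] from the previous iterate, which is parallelizable.
// Reaching the exact sequential result takes at most layers.length
// iterations, and sometimes fewer.

const layers = [
  (x) => x.map((v) => v * 2),          // toy layer 0
  (x) => x.map((v) => v + 1),          // toy layer 1
  (x) => x.map((v) => Math.max(v, 0)), // toy layer 2 (ReLU-like)
];

function jacobiFeedforward(input) {
  // s[t] is the current guess for the output of layer t-1; s[0] is the input.
  let s = Array.from({ length: layers.length + 1 }, () => input.map(() => 0));
  s[0] = input;

  for (let iter = 0; iter < layers.length; iter++) {
    const next = s.slice();
    for (let t = 0; t < layers.length; t++) {
      next[t + 1] = layers[t](s[t]); // reads only the OLD iterate: Jacobi
    }
    if (JSON.stringify(next) === JSON.stringify(s)) break; // converged early
    s = next;
  }
  return s[layers.length]; // identical to plain sequential feedforward
}

console.log(jacobiFeedforward([1, -2, 3])); // -> [3, 0, 7]
```

A Gauss-Seidel variant would instead let `next[t + 1]` read values already updated in the current sweep, trading some parallelism for faster convergence.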
This list is automatically generated from the titles and abstracts of the papers on this site; the site does not guarantee its accuracy.