Evolvable Agents, a Fine Grained Approach for Distributed Evolutionary
Computing: Walking towards the Peer-to-Peer Computing Frontiers
- URL: http://arxiv.org/abs/2401.17224v1
- Date: Tue, 30 Jan 2024 18:11:31 GMT
- Title: Evolvable Agents, a Fine Grained Approach for Distributed Evolutionary
Computing: Walking towards the Peer-to-Peer Computing Frontiers
- Authors: Juan Luis Jiménez Laredo and Pedro A. Castillo and Antonio M. Mora
and Juan Julián Merelo
- Abstract summary: We propose a fine grained approach with self-adaptive migration rate for distributed evolutionary computation.
We analyse the approach viability by comparing how solution quality and algorithm speed change when the number of processors increases.
With this experimental setup, our approach shows better scalability than the Island model and equivalent robustness, on average, across the three test functions under study.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we propose a fine grained approach with self-adaptive migration
rate for distributed evolutionary computation. Our target is to gain some
insights on the effects caused by communication when the algorithm scales. To
this end, we consider a set of basic topologies in order to avoid the
overlapping of algorithmic effects between communication and topological
structures. We analyse the approach viability by comparing how solution quality
and algorithm speed change when the number of processors increases and compare
it with an Island model based implementation. A finer-grained approach implies
a better chance of building a larger, more scalable system; this feature is
crucial for large-scale parallel architectures such as Peer-to-Peer
systems. To check scalability, we perform a threefold experimental
evaluation of this model: First, we concentrate on the algorithmic results when
the problem scales up to eight nodes in comparison with how it does following
the Island model. Second, we analyse the computing time speedup of the approach
while scaling. Finally, we analyse the network performance with the proposed
self-adaptive migration rate policy that depends on the link latency and
bandwidth. With this experimental setup, our approach shows better scalability
than the Island model and equivalent robustness, on average, across the three
test functions under study.
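The abstract states only that the self-adaptive migration rate depends on link latency and bandwidth; a minimal sketch of one plausible such policy follows. The function name, normalisation constant, and scaling rule below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: a per-link self-adaptive migration interval for a
# fine-grained distributed EA. Slower links (higher latency, lower
# bandwidth) migrate less often, so communication cost per link stays
# roughly bounded as the system scales.

def migration_interval(latency_s, bandwidth_bps, individual_bytes,
                       base_interval=1.0):
    """Return the number of generations between migrations over one link.

    latency_s        -- one-way link latency in seconds
    bandwidth_bps    -- link bandwidth in bits per second
    individual_bytes -- serialized size of one migrated individual
    """
    # Estimated time to push one individual across the link.
    transfer_time = latency_s + (8 * individual_bytes) / bandwidth_bps
    # Normalise against a 1 ms reference; never migrate more than once
    # per generation.
    return base_interval * max(1.0, transfer_time / 0.001)

# A fast LAN link migrates every generation; a slow WAN link far less often.
fast = migration_interval(0.0002, 1e9, 256)
slow = migration_interval(0.05, 1e6, 256)
assert slow > fast
```

Under this assumed rule, the migration frequency falls as estimated transfer time grows, which matches the abstract's goal of keeping communication from dominating as the number of processors increases.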
Related papers
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that combines both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z) - Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network of which partial layers are iteratively exploited for refining its previous estimations.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency for widely used benchmarks.
arXiv Detail & Related papers (2021-11-11T23:31:34Z) - Topology-Guided Sampling for Fast and Accurate Community Detection [1.0609815608017064]
We present an approach based on topology-guided sampling for accelerating block partitioning.
We also introduce a degree-based thresholding scheme that improves the efficacy of our approach at the expense of speedup.
Our results show that our approach can lead to a speedup of up to 15X over block partitioning without sampling.
arXiv Detail & Related papers (2021-08-15T03:20:10Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - EQ-Net: A Unified Deep Learning Framework for Log-Likelihood Ratio
Estimation and Quantization [25.484585922608193]
We introduce EQ-Net: the first holistic framework that solves both the tasks of log-likelihood ratio (LLR) estimation and quantization using a data-driven method.
We carry out extensive experimental evaluation and demonstrate that our single architecture achieves state-of-the-art results on both tasks.
arXiv Detail & Related papers (2020-12-23T18:11:30Z) - Decentralized Deep Learning using Momentum-Accelerated Consensus [15.333413663982874]
We consider the problem of decentralized deep learning where multiple agents collaborate to learn from a distributed dataset.
We propose and analyze a novel decentralized deep learning algorithm where the agents interact over a fixed communication topology.
Our algorithm is based on the heavy-ball acceleration method used in gradient-based protocol.
arXiv Detail & Related papers (2020-10-21T17:39:52Z) - Distributed Optimization, Averaging via ADMM, and Network Topology [0.0]
We study the connection between network topology and convergence rates for different algorithms on a real world problem of sensor localization.
We also show interesting connections between ADMM and lifted Markov chains, besides providing an explicit characterization of its convergence.
arXiv Detail & Related papers (2020-09-05T21:44:39Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale learning with deep neural networks.
Our method provably requires far fewer communication rounds than naive parallelization.
Our experiments on several datasets demonstrate its effectiveness and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP)
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z) - Nonlinear Traffic Prediction as a Matrix Completion Problem with
Ensemble Learning [1.8352113484137629]
This paper addresses the problem of short-term traffic prediction for signalized traffic operations management.
We focus on predicting sensor states at high resolution (second by second).
Our contributions can be summarized as offering three insights.
arXiv Detail & Related papers (2020-01-08T13:10:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.