Distributed genetic algorithm for application placement in the compute continuum leveraging infrastructure nodes for optimization
- URL: http://arxiv.org/abs/2406.09478v1
- Date: Thu, 13 Jun 2024 09:58:21 GMT
- Title: Distributed genetic algorithm for application placement in the compute continuum leveraging infrastructure nodes for optimization
- Authors: Carlos Guerrero, Isaac Lera, Carlos Juiz
- Abstract summary: Three distributed designs of a genetic algorithm (GA) for resource optimization in fog computing are presented.
Results show that the design with the lowest distribution degree achieves comparable solution quality to the traditional approach but incurs a higher network load.
- Score: 1.3723120574076126
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The increasing complexity of fog computing environments calls for efficient resource optimization techniques. In this paper, we propose and evaluate three distributed designs of a genetic algorithm (GA) for resource optimization in fog computing, with an increasing degree of distribution. The designs leverage the execution of the GA in the fog devices themselves by dealing with the specific features of this domain: constrained resources and the wide geographical distribution of the devices. For their evaluation, we implemented a benchmark case using NSGA-II for the specific problem of optimizing the fog service placement, according to the guidelines of our three distributed designs. These three experimental scenarios were compared with a control case, a traditional centralized version of the GA, in terms of solution quality and network overhead. The results show that the design with the lowest distribution degree, which keeps centralized storage of the objective space, achieves comparable solution quality to the traditional approach but incurs a higher network load. The second design, which completely distributes the population between the workers, reduces network overhead but exhibits lower solution diversity while still obtaining good results in terms of minimizing the optimization objectives. Finally, the proposal with a distributed population that only interchanges solutions between neighboring workers achieves the lowest network load but with compromised solution quality.
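The abstract only outlines the three designs, so the following minimal Python sketch illustrates the island-style pattern that the most distributed design alludes to: each worker (fog device) evolves its own sub-population locally and exchanges only a few solutions with its ring neighbor. Everything here is an illustrative assumption, including the toy placement problem, the parameter values, and the single weighted objective; the paper itself uses NSGA-II with multiple objectives over a real fog infrastructure model.

```python
import random

# Toy fog-placement instance (all sizes and weights are illustrative assumptions).
N_SERVICES, N_DEVICES = 12, 6
random.seed(42)
# Hypothetical per-device latency to its users.
LATENCY = [random.uniform(1.0, 10.0) for _ in range(N_DEVICES)]

def fitness(placement):
    """Lower is better: weighted sum of total latency and load imbalance.
    (The paper optimizes several objectives with NSGA-II; a single weighted
    objective is used here only to keep the sketch short.)"""
    total_latency = sum(LATENCY[d] for d in placement)
    load = [placement.count(d) for d in range(N_DEVICES)]
    return total_latency + 2.0 * (max(load) - min(load))

def random_individual():
    # Genome: one device index per service.
    return [random.randrange(N_DEVICES) for _ in range(N_SERVICES)]

def mutate(ind, p=0.1):
    return [random.randrange(N_DEVICES) if random.random() < p else g for g in ind]

def crossover(a, b):
    cut = random.randrange(1, N_SERVICES)
    return a[:cut] + b[cut:]

def evolve(pop, generations=5, tournament=3):
    """Plain generational GA with tournament selection, run locally on one worker."""
    for _ in range(generations):
        offspring = []
        for _ in range(len(pop)):
            parents = [min(random.sample(pop, tournament), key=fitness) for _ in range(2)]
            offspring.append(mutate(crossover(*parents)))
        pop = sorted(pop + offspring, key=fitness)[:len(pop)]  # elitist survivor selection
    return pop

# Most distributed design (sketch): each worker keeps its own sub-population and,
# after a local run, sends a few of its best solutions to its ring neighbor only.
N_WORKERS, SUBPOP, MIGRANTS, ROUNDS = 4, 10, 2, 8
workers = [[random_individual() for _ in range(SUBPOP)] for _ in range(N_WORKERS)]

for _ in range(ROUNDS):
    workers = [evolve(pop) for pop in workers]       # local evolution on each fog device
    for w in range(N_WORKERS):                       # neighbor-only migration (ring topology)
        neighbor = (w + 1) % N_WORKERS
        migrants = sorted(workers[w], key=fitness)[:MIGRANTS]
        workers[neighbor] = sorted(workers[neighbor] + migrants, key=fitness)[:SUBPOP]

best = min((ind for pop in workers for ind in pop), key=fitness)
print("best placement:", best, "fitness:", round(fitness(best), 2))
```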
Related papers
- Evaluation and Efficiency Comparison of Evolutionary Algorithms for Service Placement Optimization in Fog Architectures [1.3723120574076126]
The study compares three evolutionary algorithms for the problem of fog service placement.
NSGA-II obtained the highest optimizations of the objectives and the highest diversity of the solution space.
The WSGA algorithm did not show any benefit over the other two algorithms.
arXiv Detail & Related papers (2025-01-17T05:21:00Z) - A RankNet-Inspired Surrogate-Assisted Hybrid Metaheuristic for Expensive Coverage Optimization [3.470566170862975]
We propose a RankNet-Inspired Surrogate-Assisted Hybrid Metaheuristic (RI-SHM) for expensive coverage optimization.
Our algorithm can effectively handle large-scale coverage optimization tasks of up to 300 dimensions and more than 1,800 targets within desirable runtime.
RI-SHM consistently outperforms state-of-the-art algorithms for EMVOPs by up to 56.5% across all tested instances.
arXiv Detail & Related papers (2025-01-13T14:49:05Z) - Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z) - DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z) - Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network (a generic sketch of this decentralized setting appears after this list).
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
arXiv Detail & Related papers (2023-10-10T00:21:10Z) - Concurrent build direction, part segmentation, and topology optimization for additive manufacturing using neural networks [2.2911466677853065]
We propose a neural network-based approach to topology optimization that aims to reduce the use of support structures in additive manufacturing.
Our approach uses a network architecture that allows the simultaneous determination of (1) an optimized part segmentation, (2) the topology of each part, and (3) the build direction of each part.
arXiv Detail & Related papers (2022-10-04T02:17:54Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - Large Scale Global Optimization Algorithms for IoT Networks: A Comparative Study [29.884417706421218]
This work studies the optimization of a wireless sensor network (WSN) at higher dimensions, focusing on the power allocation of decentralized detection.
We apply and compare four algorithms designed to tackle large-scale global optimization (LSGO) problems.
We evaluate the algorithms' performance in several cases with 300, 600, and 800 dimensions.
arXiv Detail & Related papers (2021-02-22T18:59:22Z) - Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under a model-free deep learning framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z) - Large Scale Many-Objective Optimization Driven by Distributional Adversarial Networks [1.2461503242570644]
We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring.
The proposed algorithm is tested on 9 benchmark problems from the large-scale multi-objective problem (LSMOP) suite.
arXiv Detail & Related papers (2020-03-16T04:14:15Z)
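Several of the decentralized-optimization entries above share the same basic setting: each network node holds a private objective and can only communicate with its immediate neighbors. The sketch below is a minimal, generic NumPy illustration of that setting via decentralized gradient descent with gossip averaging over a ring; it is not the algorithm of any listed paper, and the problem data, network size, and step size are arbitrary assumptions.

```python
import numpy as np

# Decentralized gradient descent (DGD) over a ring of n nodes: each node i holds a
# private quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2 and only talks to its two
# ring neighbors through a doubly stochastic mixing matrix W.
rng = np.random.default_rng(0)
n, d = 8, 5                                  # number of nodes, problem dimension (illustrative)
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]

def grad(i, x):
    # Gradient of the local objective f_i at x.
    return A[i].T @ (A[i] @ x - b[i])

# Uniform mixing weights for a ring: each node averages itself with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

X = np.zeros((n, d))                         # one local iterate per node
step = 0.01
for t in range(500):
    X = W @ X                                # gossip/averaging step with neighbors only
    X -= step * np.array([grad(i, X[i]) for i in range(n)])   # local gradient step

# Consensus check: the local iterates should end up close to a common point.
x_bar = X.mean(axis=0)
print("max deviation from consensus:", np.abs(X - x_bar).max())
```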
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.