Evaluation and Efficiency Comparison of Evolutionary Algorithms for Service Placement Optimization in Fog Architectures
- URL: http://arxiv.org/abs/2501.09958v1
- Date: Fri, 17 Jan 2025 05:21:00 GMT
- Title: Evaluation and Efficiency Comparison of Evolutionary Algorithms for Service Placement Optimization in Fog Architectures
- Authors: Carlos Guerrero, Isaac Lera, Carlos Juiz
- Abstract summary: The study compares three evolutionary algorithms for the problem of fog service placement.
NSGA-II achieved the strongest optimization of the objectives and the highest diversity of the solution space.
The WSGA algorithm showed no benefit over the other two algorithms.
- Score: 1.3723120574076126
- Abstract: This study compares three evolutionary algorithms for the fog service placement problem: the weighted sum genetic algorithm (WSGA), the non-dominated sorting genetic algorithm II (NSGA-II), and the multiobjective evolutionary algorithm based on decomposition (MOEA/D). A model of the problem domain (fog architecture and fog applications) and of the optimization (objective functions and solutions) is presented. Our main concerns are optimizing the network latency, the service spread, and the use of resources. The algorithms are evaluated on a random Barabási-Albert network topology with 100 devices and two experiment sizes of 100 and 200 application services. The results showed that NSGA-II obtained the strongest optimization of the objectives and the highest diversity of the solution space. In contrast, MOEA/D was better at reducing execution times. The WSGA algorithm showed no benefit over the other two algorithms.
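To illustrate the difference between WSGA's weighted-sum scalarization and the Pareto-based ranking used by NSGA-II, here is a minimal Python sketch. The placement encoding, objective formulas, and weights are hypothetical stand-ins, not the paper's actual model:

```python
import random

# Hypothetical toy model: placement[s] is the device hosting service s.
# The three objectives below are illustrative stand-ins for the paper's
# latency, spread, and resource-usage metrics, not its actual formulas.

N_DEVICES, N_SERVICES = 100, 100

def objectives(placement, hop_distance, capacity):
    latency = sum(hop_distance[d] for d in placement)   # lower is better
    spread = -len(set(placement))                       # more devices used -> lower value
    load = [0] * N_DEVICES
    for d in placement:
        load[d] += 1
    overuse = sum(max(0, l - c) for l, c in zip(load, capacity))
    return (latency, spread, overuse)                   # all to be minimized

def weighted_sum(objs, weights=(0.4, 0.3, 0.3)):
    # WSGA-style scalarization: collapses the objectives into one fitness value.
    return sum(w * o for w, o in zip(weights, objs))

def dominates(a, b):
    # Pareto dominance used by NSGA-II's non-dominated sorting:
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(0)
hop_distance = [random.randint(1, 5) for _ in range(N_DEVICES)]
capacity = [random.randint(1, 3) for _ in range(N_DEVICES)]
p1 = [random.randrange(N_DEVICES) for _ in range(N_SERVICES)]
p2 = [random.randrange(N_DEVICES) for _ in range(N_SERVICES)]
o1 = objectives(p1, hop_distance, capacity)
o2 = objectives(p2, hop_distance, capacity)
print(weighted_sum(o1), weighted_sum(o2), dominates(o1, o2))
```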
Related papers
- GDSG: Graph Diffusion-based Solution Generator for Optimization Problems in MEC Networks [109.17835015018532]
We present a Graph Diffusion-based Solution Generation (GDSG) method.
This approach is designed to work with suboptimal datasets while converging to the optimal solution with high probability.
We build GDSG as a multi-task diffusion model that uses a Graph Neural Network (GNN) to learn the distribution of high-quality solutions.
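A full diffusion model is beyond a short example, but the idea of a GNN acting as a denoiser over candidate solutions can be hinted at in a few lines of NumPy. Everything below (the graph, the features, the untrained one-layer network) is a greatly simplified assumption, not GDSG itself:

```python
import numpy as np

# Greatly simplified, hypothetical sketch of one denoising step: a one-layer
# GNN (mean aggregation) maps a noisy binary solution vector on a graph to
# per-node probabilities of the "clean" solution bits.

rng = np.random.default_rng(0)
n = 8
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                       # undirected adjacency
np.fill_diagonal(A, 1.0)                     # self-loops
A_hat = A / A.sum(axis=1, keepdims=True)     # row-normalized propagation matrix

W = rng.normal(size=(2, 1))                  # untrained toy weights

def denoise_step(noisy_bits, t):
    # Node features: the noisy bit and the diffusion timestep.
    h = np.stack([noisy_bits, np.full(n, t)], axis=1)
    msg = A_hat @ h                          # mean-aggregate neighbor features
    logits = (msg @ W).ravel()
    return 1.0 / (1.0 + np.exp(-logits))     # probability each bit is 1

noisy = rng.integers(0, 2, size=n).astype(float)
print(denoise_step(noisy, t=0.5))
```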
arXiv Detail & Related papers (2024-12-11T11:13:43Z)
- A new simplified MOPSO based on Swarm Elitism and Swarm Memory: MO-ETPSO [0.0]
The Elitist PSO algorithm (MO-ETPSO) is adapted for multi-objective optimization problems.
The proposed algorithm integrates core strategies from the well-established NSGA-II approach.
A novel aspect of the algorithm is the introduction of a swarm memory and swarm elitism.
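The swarm memory and swarm elitism mentioned above can be pictured as an external archive of non-dominated solutions that guides the particles. Below is a minimal, assumed sketch of such an archive update; the data layout and eviction rule are illustrative, not the paper's exact mechanism:

```python
def dominates(a, b):
    # a and b are objective tuples to be minimized.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep an elitist archive of non-dominated (position, objectives) pairs.

    Mirrors NSGA-II-style elitism: a new solution enters only if nothing in
    the archive dominates it, and it evicts any members it dominates.
    """
    pos, objs = candidate
    if any(dominates(a_objs, objs) for _, a_objs in archive):
        return archive                       # candidate is dominated: discard
    archive = [(p, o) for p, o in archive if not dominates(objs, o)]
    archive.append(candidate)
    return archive

archive = []
for cand in [((0.1, 0.2), (3.0, 5.0)), ((0.4, 0.1), (2.0, 6.0)), ((0.2, 0.3), (2.5, 4.0))]:
    archive = update_archive(archive, cand)
print([objs for _, objs in archive])         # the surviving non-dominated front
```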
arXiv Detail & Related papers (2024-02-20T09:36:18Z)
- GOOSE Algorithm: A Powerful Optimization Tool for Real-World Engineering Challenges and Beyond [4.939986309170004]
The GOOSE algorithm is benchmarked on 19 well-known test functions.
The proposed algorithm is tested on 10 modern benchmark functions.
The findings attest to the proposed algorithm's superior performance.
arXiv Detail & Related papers (2023-07-19T19:14:25Z)
- Genetically Modified Wolf Optimization with Stochastic Gradient Descent for Optimising Deep Neural Networks [0.0]
This research analyzes an alternative approach to optimizing neural network (NN) weights using population-based metaheuristic algorithms.
A hybrid between Grey Wolf Optimization (GWO) and Genetic Algorithms (GA) is explored, in conjunction with Stochastic Gradient Descent (SGD).
This algorithm allows for a combination of exploitation and exploration, while also tackling the issue of high dimensionality.
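A rough sketch of how such a hybrid loop might be organized follows; the toy quadratic loss, the simplified GWO move, the mutation rate, and the SGD fine-tuning step are all assumptions, not the paper's exact scheme:

```python
import numpy as np

# Hypothetical GWO/GA/SGD hybrid loop on a toy quadratic loss. GWO pulls
# wolves toward the three best ("alpha, beta, delta"), GA adds mutation for
# exploration, and SGD fine-tunes the current leader for exploitation.

rng = np.random.default_rng(1)
dim, pop_size = 10, 20

def loss(w):
    return float(np.sum(w ** 2))             # stand-in for a NN loss

def grad(w):
    return 2 * w                             # its exact gradient

pop = rng.normal(size=(pop_size, dim))
for step in range(50):
    pop = pop[np.argsort([loss(w) for w in pop])]
    leaders = pop[:3]                        # alpha, beta, delta wolves
    for i in range(3, pop_size):
        target = leaders[rng.integers(3)]    # GWO-style move toward a leader
        pop[i] += rng.random() * (target - pop[i])
        if rng.random() < 0.2:               # GA-style mutation for exploration
            pop[i] += rng.normal(scale=0.1, size=dim)
    pop[0] -= 0.1 * grad(pop[0])             # SGD fine-tuning of the best wolf

print(round(loss(pop[0]), 6))
```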
arXiv Detail & Related papers (2023-01-21T13:22:09Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- A Simple Evolutionary Algorithm for Multi-modal Multi-objective Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from widely used benchmark test suites, under a low computational budget of 1000 function evaluations.
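For context, a steady-state EA creates one offspring per iteration and reinserts it immediately, rather than replacing a whole generation at once. A minimal sketch under such a budget, with an assumed toy objective and operators:

```python
import random

# Minimal steady-state loop: one offspring per iteration replaces the worst
# member if it improves on it. The toy objective, the operators, and the
# 1000-evaluation budget (matching the summary) are illustrative.

def f(x):
    return x * x                              # toy single-objective stand-in

random.seed(0)
pop = [random.uniform(-10, 10) for _ in range(20)]
fit = [f(x) for x in pop]
evals = len(pop)
while evals < 1000:
    a, b = random.sample(pop, 2)
    child = (a + b) / 2 + random.gauss(0, 0.5)     # crossover + mutation
    child_fit = f(child)
    evals += 1
    worst = max(range(len(pop)), key=fit.__getitem__)
    if child_fit < fit[worst]:
        pop[worst], fit[worst] = child, child_fit  # steady-state replacement
print(round(min(fit), 6))
```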
arXiv Detail & Related papers (2022-01-18T03:31:11Z)
- Duck swarm algorithm: theory, numerical optimization, and applications [6.244015536594532]
A swarm intelligence-based optimization algorithm, named Duck Swarm Algorithm (DSA), is proposed in this study.
Two rules are modeled from ducks' food-finding and foraging behavior, corresponding to the exploration and exploitation phases of the proposed DSA.
Results show that DSA is a high-performance optimization method in terms of convergence speed and exploration-exploitation balance.
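A hypothetical two-phase update loosely mirroring that description (random roaming for exploration, movement toward the best food source for exploitation); the update equations are assumptions, not the paper's:

```python
import numpy as np

# Two-phase sketch in the spirit of the DSA summary: an exploration rule
# (roaming while finding food) and an exploitation rule (converging on the
# best food source found so far).

rng = np.random.default_rng(2)
dim, n_ducks = 5, 15

def f(x):
    return float(np.sum((x - 1.0) ** 2))      # toy objective, optimum at all-ones

ducks = rng.uniform(-5, 5, size=(n_ducks, dim))
for t in range(100):
    best = ducks[np.argmin([f(d) for d in ducks])].copy()
    exploring = t < 50                        # first half: exploration phase
    for i in range(n_ducks):
        if exploring:
            step = rng.normal(scale=1.0, size=dim)      # random roaming
        else:
            step = rng.random() * (best - ducks[i])     # move toward best
        cand = ducks[i] + step
        if f(cand) < f(ducks[i]):
            ducks[i] = cand                   # greedy acceptance
print(round(min(f(d) for d in ducks), 4))
```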
arXiv Detail & Related papers (2021-12-27T04:53:36Z)
- An Accelerated Variance-Reduced Conditional Gradient Sliding Algorithm for First-order and Zeroth-order Optimization [111.24899593052851]
The conditional gradient algorithm (also known as the Frank-Wolfe algorithm) has recently regained popularity in the machine learning community.
ARCS is the first zeroth-order conditional gradient sliding-type algorithm for solving convex problems in zeroth-order optimization.
In first-order optimization, the convergence results of ARCS substantially outperform previous algorithms in terms of the number of gradient oracle queries.
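For reference, the classic first-order Frank-Wolfe iteration that this line of work accelerates replaces projections with a linear minimization oracle. A textbook sketch on the probability simplex (this is the baseline, not ARCS itself):

```python
import numpy as np

# Classic Frank-Wolfe on the probability simplex: each step calls a linear
# minimization oracle (here a vertex of the simplex) instead of projecting.

def frank_wolfe(grad, x0, n_iters=200):
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0                # linear minimization oracle on simplex
        gamma = 2.0 / (t + 2.0)              # standard step-size schedule
        x = (1 - gamma) * x + gamma * s      # convex combination stays feasible
    return x

# Minimize ||x - c||^2 over the simplex for a fixed target c.
c = np.array([0.1, 0.5, 0.4])
x = frank_wolfe(lambda x: 2 * (x - c), np.ones(3) / 3)
print(np.round(x, 3))                        # approaches c, which lies in the simplex
```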
arXiv Detail & Related papers (2021-09-18T07:08:11Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
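The generic structure such algorithms refine is an inner solver for the lower-level problem plus an outer (hyper)gradient step. A crude sketch on toy quadratics, using plain gradient descent inside and a finite-difference hypergradient outside; both choices are illustrative simplifications, not the paper's algorithms:

```python
# Bilevel problem:  min_x f(x, y*(x))  s.t.  y*(x) = argmin_y g(x, y).

def g_grad_y(x, y):
    return 2 * (y - x)                        # gradient of g(x, y) = (y - x)^2, so y*(x) = x

def f(x, y):
    return (x - 1.0) ** 2 + y ** 2            # upper-level objective

def inner_solve(x, y0=0.0, steps=50, lr=0.1):
    y = y0
    for _ in range(steps):
        y -= lr * g_grad_y(x, y)              # inner gradient descent toward y*(x)
    return y

def outer_obj(x):
    return f(x, inner_solve(x))

x, eps = 3.0, 1e-5
for _ in range(100):
    # Finite-difference approximation of the hypergradient dF/dx.
    hypergrad = (outer_obj(x + eps) - outer_obj(x - eps)) / (2 * eps)
    x -= 0.1 * hypergrad
print(round(x, 3))                            # F(x) = (x-1)^2 + x^2, minimized at x = 0.5
```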
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
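As a baseline for intuition, classical decentralized gradient descent (DGD) mixes each node's iterate with its neighbors' and takes a local gradient step; the optimal algorithms in the paper improve on this. A sketch on an assumed fixed ring topology:

```python
import numpy as np

# Decentralized gradient descent on a ring: node i holds f_i(x) = ||x - t_i||^2,
# so the global minimizer is the mean of the targets t_i.

n_nodes, dim = 6, 3
rng = np.random.default_rng(3)
targets = rng.normal(size=(n_nodes, dim))

# Doubly stochastic mixing matrix for a ring: 1/2 self, 1/4 each neighbor.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

x = np.zeros((n_nodes, dim))
alpha = 0.05
for _ in range(500):
    x = W @ x - alpha * 2 * (x - targets)     # gossip mixing + local gradient step

# With a constant step size, DGD reaches only a neighborhood of the global
# minimizer; the residual bias shrinks with alpha.
print(np.abs(x - targets.mean(axis=0)).max())
```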
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proven to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
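One standard example of such adaptivity is an AdaGrad-Norm-style step size that tunes itself from observed gradients without knowing smoothness or PL constants. A generic sketch (not the paper's algorithm) on an assumed toy least-squares objective:

```python
import numpy as np

# AdaGrad-Norm-style SGD: the step size adapts from the running sum of
# squared gradient norms, so no learning rate needs to be hand-tuned.

rng = np.random.default_rng(4)
dim = 20
A = rng.normal(size=(dim, dim)) / np.sqrt(dim)

def stoch_grad(x):
    g = A.T @ (A @ x)                        # gradient of 0.5 * ||Ax||^2
    return g + 0.01 * rng.normal(size=dim)   # plus stochastic noise

x = rng.normal(size=dim)
accum = 1e-8                                 # running sum of squared grad norms
for _ in range(2000):
    g = stoch_grad(x)
    accum += float(g @ g)
    x -= (1.0 / np.sqrt(accum)) * g          # self-tuning adaptive step
print(round(float(0.5 * np.linalg.norm(A @ x) ** 2), 5))
```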
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.