Performance of Genetic Algorithms in the Context of Software Model
Refactoring
- URL: http://arxiv.org/abs/2308.13875v1
- Date: Sat, 26 Aug 2023 13:25:42 GMT
- Title: Performance of Genetic Algorithms in the Context of Software Model
Refactoring
- Authors: Vittorio Cortellessa, Daniele Di Pompeo, Michele Tucci
- Abstract summary: We conduct a performance analysis of three genetic algorithms to compare them in terms of performance and quality of solutions.
Results show that there are significant differences in performance among the algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software systems continuously evolve due to new functionalities,
requirements, or maintenance activities. In the context of software evolution,
software refactoring has gained strategic relevance. The space of possible
refactorings is usually very large, since it arises from combinations of
different refactoring actions, each producing an alternative version of the
software system.
Multi-objective algorithms have shown the ability to discover alternatives by
pursuing different objectives simultaneously. Performance of such algorithms in
the context of software model refactoring is of paramount importance.
Therefore, in this paper, we conduct a performance analysis of three genetic
algorithms to compare them in terms of performance and quality of solutions.
Our results show that there are significant differences in performance among
the algorithms (e.g., PESA2 seems to be the fastest one, while NSGA-II shows
the least memory usage).
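The comparison described in the abstract hinges on measuring execution time and memory usage for each algorithm. A minimal, stdlib-only harness for taking both measurements around any search routine could look like the following sketch; the toy random-search objective is a hypothetical stand-in, not one of the paper's genetic algorithms:

```python
import random
import time
import tracemalloc


def toy_search(n_iters: int, dim: int) -> list:
    """Stand-in for a search algorithm: random sampling on a sphere objective."""
    best = [random.random() for _ in range(dim)]
    best_f = sum(x * x for x in best)
    for _ in range(n_iters):
        cand = [random.random() for _ in range(dim)]
        f = sum(x * x for x in cand)
        if f < best_f:
            best, best_f = cand, f
    return best


def profile(fn, *args):
    """Run one call of fn(*args) and return (result, elapsed_seconds, peak_bytes)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) since start()
    tracemalloc.stop()
    return result, elapsed, peak


if __name__ == "__main__":
    sol, secs, peak = profile(toy_search, 1000, 5)
    print(f"time: {secs:.4f}s  peak memory: {peak} bytes")
```

Swapping `toy_search` for each algorithm under test (e.g. an NSGA-II or PESA2 run) and repeating the call over several seeds would yield the kind of per-algorithm time and memory distributions the paper compares.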
Related papers
- RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation [54.707460684650584]
Large Language Models (LLMs) demonstrate human-level capabilities in dialogue, reasoning, and knowledge retention.
Current research addresses the bottleneck of static parametric knowledge by equipping LLMs with external knowledge, a technique known as Retrieval-Augmented Generation (RAG).
RAGLAB is a modular and research-oriented open-source library that reproduces 6 existing algorithms and provides a comprehensive ecosystem for investigating RAG algorithms.
arXiv Detail & Related papers (2024-08-21T07:20:48Z)
- R2 Indicator and Deep Reinforcement Learning Enhanced Adaptive Multi-Objective Evolutionary Algorithm [1.8641315013048299]
We present a new evolutionary algorithm structure that utilizes a reinforcement learning-based agent.
The proposed R2-reinforcement learning multi-objective evolutionary algorithm (R2-RLMOEA) is compared with six other multi-objective algorithms that are based on R2 indicators.
arXiv Detail & Related papers (2024-04-11T23:50:30Z)
- Explainable Benchmarking for Iterative Optimization Heuristics [0.8192907805418583]
We introduce the IOH-Xplainer software framework, for analyzing and understanding the performance of various optimization algorithms.
We examine the impact of different algorithmic components and configurations, offering insights into their performance across diverse scenarios.
arXiv Detail & Related papers (2024-01-31T14:02:26Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z)
- Component-wise Analysis of Automatically Designed Multiobjective Algorithms on Constrained Problems [0.0]
This study introduces a new methodology to investigate the effects of the final configuration of an automatically designed algorithm.
We apply this methodology to a well-performing Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) designed by the irace package on nine constrained problems.
Our results indicate that the most influential components were the restart and update strategies, with higher increments in performance and more distinct metric values.
arXiv Detail & Related papers (2022-03-25T04:35:01Z)
- Generating Instances with Performance Differences for More Than Just Two Algorithms [2.061388741385401]
We propose fitness functions to evolve instances that show large performance differences across more than just two algorithms simultaneously.
As a proof of principle, we evolve instances of the multi-component Traveling Thief Problem (TTP) for three incomplete TTP solvers.
arXiv Detail & Related papers (2021-04-29T11:48:41Z)
- Identifying Co-Adaptation of Algorithmic and Implementational Innovations in Deep Reinforcement Learning: A Taxonomy and Case Study of Inference-based Algorithms [15.338931971492288]
We focus on a series of inference-based actor-critic algorithms to decouple their algorithmic innovations and implementation decisions.
We identify substantial performance drops whenever implementation details are mismatched for algorithmic choices.
Results show which implementation details are co-adapted and co-evolved with algorithms.
arXiv Detail & Related papers (2021-03-31T17:55:20Z)
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
- A Two-stage Framework and Reinforcement Learning-based Optimization Algorithms for Complex Scheduling Problems [54.61091936472494]
We develop a two-stage framework, in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined together.
The scheduling problem is solved in two stages, including a finite Markov decision process (MDP) and a mixed-integer programming process, respectively.
Results show that the proposed algorithms could stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems.
arXiv Detail & Related papers (2021-03-10T03:16:12Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.