Socio-cognitive agent-oriented evolutionary algorithm with trust-based optimization
- URL: http://arxiv.org/abs/2510.25095v1
- Date: Wed, 29 Oct 2025 01:59:26 GMT
- Title: Socio-cognitive agent-oriented evolutionary algorithm with trust-based optimization
- Authors: Aleksandra Urbańczyk, Krzysztof Czech, Piotr Urbańczyk, Marek Kisiel-Dorohinicki, Aleksander Byrski
- Abstract summary: Trust-Based Optimization (TBO) is a novel extension of the island model in evolutionary computation that replaces conventional periodic migrations with a flexible, agent-driven interaction mechanism based on trust or reputation. Experimental results demonstrate that TBO generally outperforms the standard island model evolutionary algorithm across various optimization problems.
- Score: 70.49434432747293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the Trust-Based Optimization (TBO), a novel extension of the island model in evolutionary computation that replaces conventional periodic migrations with a flexible, agent-driven interaction mechanism based on trust or reputation. Experimental results demonstrate that TBO generally outperforms the standard island model evolutionary algorithm across various optimization problems. Nevertheless, algorithm performance varies depending on the problem type, with certain configurations being more effective for specific landscapes or dimensions. The findings suggest that trust and reputation mechanisms provide a flexible and adaptive approach to evolutionary optimization, improving solution quality in many cases.
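A minimal sketch of the island-model idea behind TBO, assuming a simple reading of the abstract: each island tracks trust scores for its peers, draws immigrants from trusted peers instead of migrating on a fixed schedule, and reinforces or decays trust depending on whether the exchange helped. The trust-update constants and the (mu+lambda)-style inner loop are illustrative guesses, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                          # toy minimisation benchmark
    return float(np.sum(x * x))

N_ISLANDS, POP, DIM, GENS = 4, 20, 10, 200
islands = [rng.uniform(-5, 5, (POP, DIM)) for _ in range(N_ISLANDS)]
trust = np.ones((N_ISLANDS, N_ISLANDS))       # pairwise trust scores

for _ in range(GENS):
    for i, pop in enumerate(islands):
        # local evolution: mutate everyone, keep the best POP of parents+children
        children = pop + rng.normal(0, 0.1, pop.shape)
        both = np.vstack([pop, children])
        fit = np.array([sphere(x) for x in both])
        islands[i] = both[np.argsort(fit)[:POP]]  # sorted, so row 0 is the best
    # trust-driven interaction instead of fixed-interval migration
    for i in range(N_ISLANDS):
        p = trust[i] / trust[i].sum()
        j = int(rng.choice(N_ISLANDS, p=p))       # pick a partner by trust
        if j == i:
            continue
        immigrant = islands[j][0]                 # partner's best individual
        worst = int(np.argmax([sphere(x) for x in islands[i]]))
        if sphere(immigrant) < sphere(islands[i][worst]):
            islands[i][worst] = immigrant
            trust[i, j] += 0.5                    # the exchange helped
        else:
            trust[i, j] = max(0.1, trust[i, j] - 0.2)  # it did not

best = min(min(sphere(x) for x in pop) for pop in islands)
print(f"best fitness: {best:.3e}")
```

Compared with a fixed migration interval, the trust matrix lets productive links be used more often while unhelpful ones fade.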
Related papers
- AdaEvolve: Adaptive LLM Driven Zeroth-Order Optimization [61.535567824938205]
We introduce AdaEvolve, a framework that reformulates LLM-driven evolution as a hierarchical adaptive optimization problem.
AdaEvolve consistently outperforms the open-ended baselines across 185 different open-ended optimization problems.
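As a point of reference for the zeroth-order setting AdaEvolve operates in, here is a generic derivative-free loop with success-based step-size adaptation; the `propose` stub stands in for the LLM-driven candidate generator, and nothing here reproduces AdaEvolve's hierarchical scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):                       # multimodal test objective
    return float(10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x)))

def propose(x, sigma):
    # placeholder: in AdaEvolve an LLM would return an edited candidate here
    return x + rng.normal(0, sigma, x.size)

x, sigma = rng.uniform(-5, 5, 8), 0.5
fx = rastrigin(x)
for _ in range(2000):
    y = propose(x, sigma)
    fy = rastrigin(y)
    if fy < fx:                         # accept improvements only
        x, fx = y, fy
        sigma *= 1.1                    # success: widen the search
    else:
        sigma *= 0.98                   # failure: contract it
print(f"f(x) = {fx:.3f} after 2000 evaluations")
```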
arXiv Detail & Related papers (2026-02-23T18:45:31Z)
- Optimizers Qualitatively Alter Solutions And We Should Leverage This [62.662640460717476]
Training Deep Neural Networks (DNNs) with optimizers that rely only on local information, such as SGD, cannot guarantee convergence to a unique global minimum of the loss.
We argue that the community should aim to understand the biases of already existing methods and to build new DNNs with the explicit intent of inducing certain properties of the solution.
arXiv Detail & Related papers (2025-07-16T13:33:31Z)
- EvoGrad: Metaheuristics in a Differentiable Wonderland [1.065497990128313]
Differentiable programming has revolutionised optimisation by enabling efficient gradient-based training of complex models.
EvoGrad is a unified differentiable framework that integrates evolutionary computation (EC) and swarm intelligence (SI) with gradient-based optimisation.
Our results show the substantial benefits of fully differentiable evolutionary and swarm optimisation.
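One simple way to see evolution as a gradient-based update, in the spirit (but not the API) of EvoGrad, is a natural-evolution-strategies estimator: sample around the current mean and move it along a Monte Carlo estimate of the gradient of expected fitness.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):                         # maximise the negative sphere
    return -float(np.sum(x * x))

mu, sigma, lr, pop = rng.uniform(-3, 3, 5), 0.3, 0.05, 50
for _ in range(300):
    eps = rng.normal(0, 1, (pop, mu.size))          # sampled search directions
    f = np.array([fitness(mu + sigma * e) for e in eps])
    f = (f - f.mean()) / (f.std() + 1e-8)           # fitness shaping
    grad = (f[:, None] * eps).mean(axis=0) / sigma  # MC gradient of E[fitness]
    mu += lr * grad                                 # plain gradient ascent
print("||mu|| after 300 generations:", float(np.linalg.norm(mu)))
```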
arXiv Detail & Related papers (2025-05-28T15:42:07Z)
- Integrating Chaotic Evolutionary and Local Search Techniques in Decision Space for Enhanced Evolutionary Multi-Objective Optimization [1.8130068086063336]
This paper focuses on both Single-Objective Multi-Modal Optimization (SOMMOP) and Multi-Objective Optimization (MOO).
In SOMMOP, we integrate chaotic evolution with niching techniques, as well as Persistence-Based Clustering combined with Gaussian mutation.
For MOO, we extend these methods into a comprehensive framework that incorporates Uncertainty-Based Selection, Adaptive Tuning, and introduces a radius (R) concept in deterministic crowding.
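A hedged sketch of two of the named ingredients, deterministic crowding with a replacement radius R and Gaussian mutation whose scale follows a chaotic logistic map; the specific map, radius rule, and constants are illustrative simplifications, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):                               # multimodal objective to minimise
    return float(np.sum(x * x - 10 * np.cos(2 * np.pi * x)) + 10 * x.size)

POP, DIM, R = 40, 2, 1.0
pop = rng.uniform(-5, 5, (POP, DIM))
chaos = 0.7                             # logistic-map state in (0, 1)

for _ in range(500):
    for i in range(POP):
        chaos = 4.0 * chaos * (1.0 - chaos)               # chaotic sequence
        child = pop[i] + rng.normal(0, 0.5 * chaos, DIM)  # chaos-scaled mutation
        # deterministic crowding: the child competes with its nearest parent,
        # but only if that parent lies within radius R (niche preservation)
        d = np.linalg.norm(pop - child, axis=1)
        nearest = int(np.argmin(d))
        if d[nearest] <= R and f(child) < f(pop[nearest]):
            pop[nearest] = child

print("best fitness:", min(f(x) for x in pop))
```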
arXiv Detail & Related papers (2024-11-12T15:18:48Z)
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms, and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
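The asynchronous-plus-adaptive combination can be sketched as follows, assuming a generic AMSGrad-style server rule rather than FADAS's actual update: clients compute pseudo-gradients on stale model copies, and the server folds each one into its adaptive state as it arrives.

```python
import numpy as np

rng = np.random.default_rng(4)
DIM, CLIENTS, STEPS = 10, 5, 400
optimum = rng.normal(0, 1, DIM)             # shared optimum of the clients' losses

w = np.zeros(DIM)                           # server model
m, v, vhat = np.zeros(DIM), np.zeros(DIM), np.zeros(DIM)
stale = [w.copy() for _ in range(CLIENTS)]  # stale model copies held by clients
lr, b1, b2, eps = 0.05, 0.9, 0.99, 1e-8

for _ in range(STEPS):
    c = int(rng.integers(CLIENTS))          # whichever client finishes now
    # client's pseudo-gradient, computed on its stale copy of the model
    g = stale[c] - optimum + rng.normal(0, 0.1, DIM)
    # server folds the stale update into an AMSGrad-style adaptive step
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    vhat = np.maximum(vhat, v)              # monotone second moment (AMSGrad)
    w = w - lr * m / (np.sqrt(vhat) + eps)
    stale[c] = w.copy()                     # the client pulls the fresh model

print("distance to optimum:", float(np.linalg.norm(w - optimum)))
```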
arXiv Detail & Related papers (2024-07-25T20:02:57Z)
- Model Uncertainty in Evolutionary Optimization and Bayesian Optimization: A Comparative Analysis [5.6787965501364335]
Black-box optimization problems are common in many real-world applications.
These problems require optimization through input-output interactions without access to internal workings.
Two widely used gradient-free optimization techniques, evolutionary optimization and Bayesian optimization, are employed to address such challenges.
This paper aims to elucidate the similarities and differences in the utilization of model uncertainty between these two methods.
arXiv Detail & Related papers (2024-03-21T13:59:19Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
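For contrast with the paper's analytical backward models, this is what plain unrolling means: run T inner gradient steps and carry sensitivities through the iterations (forward-mode here, on a toy quadratic where the result can be checked against a finite difference).

```python
def unrolled_grad(theta, y, alpha=0.3, T=50):
    x, dx_dtheta = 0.0, 0.0
    for _ in range(T):                  # inner solver: x* = argmin (x - theta)^2 / 2
        x = x - alpha * (x - theta)     # one gradient step on the inner loss
        dx_dtheta = (1 - alpha) * dx_dtheta + alpha   # sensitivity of the step
    loss = 0.5 * (x - y) ** 2           # outer loss on the inner solution
    return loss, (x - y) * dx_dtheta    # chain rule through the unroll

loss, g = unrolled_grad(theta=2.0, y=5.0)
# check against a finite difference of the whole unrolled computation
h = 1e-6
fd = (unrolled_grad(2.0 + h, 5.0)[0] - unrolled_grad(2.0 - h, 5.0)[0]) / (2 * h)
print(f"analytic {g:.6f} vs finite-difference {fd:.6f}")
```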
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that classic approaches cannot solve.
In this paper we construct a novel socio-cognitive metaheuristic based on castes and apply several versions of this algorithm to the optimization of a time-delay system model.
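A loose sketch of the caste idea on a toy objective; the three caste behaviours used here (imitate the best, refine locally, explore widely) are invented for illustration and are not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    return float(np.sum(x * x))

POP, DIM = 30, 6
pop = rng.uniform(-5, 5, (POP, DIM))
castes = np.array([i % 3 for i in range(POP)])  # 0 follower, 1 refiner, 2 explorer

for _ in range(300):
    fits = np.array([f(x) for x in pop])
    best = pop[int(np.argmin(fits))]
    for i in range(POP):
        if castes[i] == 0:                      # followers imitate the best
            cand = pop[i] + 0.5 * (best - pop[i]) + rng.normal(0, 0.05, DIM)
        elif castes[i] == 1:                    # refiners search locally
            cand = pop[i] + rng.normal(0, 0.1, DIM)
        else:                                   # explorers jump widely
            cand = pop[i] + rng.normal(0, 1.0, DIM)
        if f(cand) < fits[i]:                   # greedy acceptance
            pop[i] = cand

print("best fitness:", min(f(x) for x in pop))
```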
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- Accelerating the Evolutionary Algorithms by Gaussian Process Regression with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method that estimates the elite individual with Gaussian process regression under an $\epsilon$-greedy acquisition function in order to accelerate the convergence of optimization.
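A hedged sketch of the mechanism named in the title: a small RBF-kernel Gaussian-process surrogate pre-screens candidate offspring, and an epsilon-greedy rule alternates between evaluating the predicted elite and a random candidate. The kernel, epsilon, and the surrounding (1+lambda) loop are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):
    return float(np.sum(x * x))

def gp_predict(X, y, Xq, ls=1.0, noise=1e-6):
    # standard GP posterior mean with an RBF kernel
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * ls ** 2))
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xq, X) @ np.linalg.solve(K, y)

DIM, EPS = 4, 0.2
x = rng.uniform(-3, 3, DIM)
X, y = [x.copy()], [f(x)]
for _ in range(100):
    cands = x + rng.normal(0, 0.3, (10, DIM))   # candidate offspring
    if rng.random() < EPS:
        pick = cands[rng.integers(len(cands))]  # explore: random candidate
    else:
        mu = gp_predict(np.array(X), np.array(y), cands)
        pick = cands[int(np.argmin(mu))]        # exploit: predicted elite
    fp = f(pick)                                # one true evaluation per step
    X.append(pick.copy())
    y.append(fp)                                # archive it for the surrogate
    if fp < f(x):
        x = pick
print("best:", f(x), "true evaluations:", len(y))
```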
arXiv Detail & Related papers (2022-10-13T07:56:47Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS achieves increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
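Self-adaptation of DE's control parameters, which the abstract credits for part of EOS's gains, can be illustrated with a jDE-style loop in which each individual carries its own F and CR and keeps them only when they produce a winning trial; EOS's multi-population and parallel machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def f(x):
    return float(np.sum(x * x - 10 * np.cos(2 * np.pi * x)) + 10 * x.size)

POP, DIM = 40, 10
pop = rng.uniform(-5, 5, (POP, DIM))
F = rng.uniform(0.1, 1.0, POP)              # per-individual scale factor
CR = rng.uniform(0.0, 1.0, POP)             # per-individual crossover rate

for _ in range(500):
    for i in range(POP):
        # self-adaptation: occasionally resample this individual's F and CR
        Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
        CRi = rng.uniform(0.0, 1.0) if rng.random() < 0.1 else CR[i]
        idx = rng.choice([j for j in range(POP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + Fi * (b - c)           # DE/rand/1 mutation
        cross = rng.random(DIM) < CRi
        cross[rng.integers(DIM)] = True     # guarantee at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if f(trial) <= f(pop[i]):           # greedy selection keeps the
            pop[i], F[i], CR[i] = trial, Fi, CRi   # successful parameters too

print("best fitness:", min(f(x) for x in pop))
```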
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- A Novel Meta-Heuristic Optimization Algorithm Inspired by the Spread of Viruses [0.0]
A novel nature-inspired meta-heuristic optimization algorithm called virus spread optimization (VSO) is proposed.
VSO loosely mimics the spread of viruses among hosts, and can be effectively applied to solving many challenging and continuous optimization problems.
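A deliberately loose rendering of the metaphor, assuming fit hosts "infect" others with mutated copies of their positions; the infection rate and acceptance rule are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def f(x):
    return float(np.sum(x * x))

POP, DIM, INFECT_RATE = 30, 5, 0.3
hosts = rng.uniform(-5, 5, (POP, DIM))
fits = np.array([f(x) for x in hosts])

for _ in range(400):
    order = np.argsort(fits)                # fittest hosts spread first
    for i in order[: POP // 3]:             # the top third act as spreaders
        for j in range(POP):
            if j != i and rng.random() < INFECT_RATE:
                # infection: j absorbs a mutated copy of i's position
                cand = hosts[i] + rng.normal(0, 0.2, DIM)
                fc = f(cand)
                if fc < fits[j]:
                    hosts[j], fits[j] = cand, fc

print("best fitness:", float(fits.min()))
```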
arXiv Detail & Related papers (2020-06-11T09:35:28Z)