TOPSIS-like metaheuristic for LABS problem
- URL: http://arxiv.org/abs/2511.05778v1
- Date: Sat, 08 Nov 2025 00:47:37 GMT
- Title: TOPSIS-like metaheuristic for LABS problem
- Authors: Aleksandra Urbańczyk, Bogumiła Papiernik, Piotr Magiera, Piotr Urbańczyk, Aleksander Byrski
- Abstract summary: We introduce socio-cognitive mutation mechanisms that integrate strategies of following the best solutions and avoiding the worst. By guiding search agents to imitate high-performing solutions and avoid poor ones, these operators enhance both solution diversity and convergence efficiency.
- Score: 70.49434432747293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the application of socio-cognitive mutation operators inspired by the TOPSIS method to the Low Autocorrelation Binary Sequence (LABS) problem. Traditional evolutionary algorithms, while effective, often suffer from premature convergence and poor exploration-exploitation balance. To address these challenges, we introduce socio-cognitive mutation mechanisms that integrate strategies of following the best solutions and avoiding the worst. By guiding search agents to imitate high-performing solutions and avoid poor ones, these operators enhance both solution diversity and convergence efficiency. Experimental results demonstrate that TOPSIS-inspired mutation outperforms the base algorithm in optimizing LABS sequences. The study highlights the potential of socio-cognitive learning principles in evolutionary computation and suggests directions for further refinement.
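The abstract can be made concrete with a short sketch. The LABS objective below (aperiodic autocorrelation sidelobe energy and the Golay merit factor) is the standard formulation; the mutation operator is only a hypothetical illustration of the "follow the best, avoid the worst" idea described in the abstract, not the authors' exact TOPSIS-inspired operator.

```python
import random

def labs_energy(s):
    """Sidelobe energy E(s) = sum_{k=1}^{n-1} C_k(s)^2 for a +/-1 sequence,
    where C_k(s) = sum_i s[i] * s[i+k] is the aperiodic autocorrelation."""
    n = len(s)
    return sum(
        sum(s[i] * s[i + k] for i in range(n - k)) ** 2
        for k in range(1, n)
    )

def merit_factor(s):
    """Golay merit factor F = n^2 / (2 * E); LABS maximizes F (minimizes E)."""
    e = labs_energy(s)
    return len(s) ** 2 / (2 * e) if e else float("inf")

def follow_avoid_mutation(s, best, worst, p=0.1):
    """Hypothetical socio-cognitive mutation: with probability p per bit,
    imitate the best-known sequence where it disagrees with the worst,
    and fall back to a plain flip where best and worst give no guidance."""
    child = list(s)
    for i in range(len(child)):
        if random.random() < p:
            if best[i] == worst[i]:
                child[i] = -child[i]   # no directional signal: plain flip
            else:
                child[i] = best[i]     # move toward best, away from worst
    return child
```

For example, the Barker sequence [1, 1, -1] has sidelobe energy 1 and merit factor 4.5, the optimum for length 3.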
Related papers
- Controlled Self-Evolution for Algorithmic Code Optimization [33.82967000330864]
Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles. Existing approaches fail to discover solutions with superior complexity within limited budgets. We propose Controlled Self-Evolution (CSE), which consists of three key components.
arXiv Detail & Related papers (2026-01-12T09:23:13Z) - Socio-cognitive agent-oriented evolutionary algorithm with trust-based optimization [70.49434432747293]
Trust-Based Optimization (TBO) is a novel extension of the island model in evolutionary computation that replaces conventional periodic migrations with a flexible, agent-driven interaction mechanism based on trust or reputation. Experimental results demonstrate that TBO generally outperforms the standard island model evolutionary algorithm across various optimization problems.
arXiv Detail & Related papers (2025-10-29T01:59:26Z) - Synergizing Reinforcement Learning and Genetic Algorithms for Neural Combinatorial Optimization [25.633698252033756]
We propose the Evolutionary Augmentation Mechanism (EAM) to synergize the learning efficiency of DRL with the global search power of GAs. EAM operates by generating solutions from a learned policy and refining them through domain-specific genetic operations such as crossover and mutation. EAM can be seamlessly integrated with state-of-the-art DRL solvers such as the Attention Model, POMO, and SymNCO.
arXiv Detail & Related papers (2025-06-11T05:17:30Z) - Preference Optimization for Combinatorial Optimization Problems [54.87466279363487]
Reinforcement Learning (RL) has emerged as a powerful tool for neural optimization, enabling models to learn to solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast action spaces. We propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling.
arXiv Detail & Related papers (2025-05-13T16:47:00Z) - Learning Strategies in Particle Swarm Optimizer: A Critical Review and Performance Analysis [0.6437284704257459]
Particle swarm optimization (PSO) is widely adopted among SI algorithms due to its simplicity and efficiency. We review and classify various learning strategies to address this gap, assessing their impact on optimization performance. We discuss open challenges and future directions, emphasizing the need for self-adaptive, intelligent PSO variants.
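The learning strategies the review classifies are variations on the canonical PSO update, where each particle blends its own momentum with a cognitive pull toward its personal best and a social pull toward the swarm's global best. A minimal sketch (parameter values are illustrative defaults, not from the paper):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical PSO: velocity = inertia + cognitive term (personal best)
    + social term (global best); positions are clamped to the bounds."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The learning-strategy variants surveyed typically change where the social term's exemplar comes from (neighborhood bests, comprehensive learning across dimensions, and so on) while keeping this update skeleton.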
arXiv Detail & Related papers (2025-04-16T06:50:02Z) - Un-evaluated Solutions May Be Valuable in Expensive Optimization [5.6787965501364335]
We propose a strategic approach that incorporates high-quality, un-evaluated solutions predicted by surrogate models during the selection phase. This approach aims to improve the distribution of evaluated solutions, thereby generating a superior next generation of solutions.
arXiv Detail & Related papers (2024-12-05T04:06:30Z) - Model Uncertainty in Evolutionary Optimization and Bayesian Optimization: A Comparative Analysis [5.6787965501364335]
Black-box optimization problems are common in many real-world applications.
These problems require optimization through input-output interactions without access to internal workings.
Two widely used gradient-free optimization techniques, evolutionary optimization and Bayesian optimization, are employed to address such challenges.
This paper aims to elucidate the similarities and differences in the utilization of model uncertainty between these two methods.
arXiv Detail & Related papers (2024-03-21T13:59:19Z) - Provably Efficient UCB-type Algorithms For Learning Predictive State
Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - Reparameterized Variational Divergence Minimization for Stable Imitation [57.06909373038396]
We study the extent to which variations in the choice of probabilistic divergence may yield more performant ILO algorithms.
We contribute a reparameterization trick for adversarial imitation learning to alleviate the challenges of the promising $f$-divergence minimization framework.
Empirically, we demonstrate that our design choices allow for ILO algorithms that outperform baseline approaches and more closely match expert performance in low-dimensional continuous-control tasks.
arXiv Detail & Related papers (2020-06-18T19:04:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.