Enabling Population-Based Architectures for Neural Combinatorial Optimization
- URL: http://arxiv.org/abs/2601.08696v1
- Date: Tue, 13 Jan 2026 16:19:34 GMT
- Title: Enabling Population-Based Architectures for Neural Combinatorial Optimization
- Authors: Andoni Irazusta Garmendia, Josu Ceberio, Alexander Mendiburu
- Abstract summary: We study how to make Neural Combinatorial Optimization (NCO) explicitly population-based by learning policies that act on sets of candidate solutions. We make these ideas concrete with two complementary tools: one that improves existing solutions using information shared across the whole population, and another that generates new candidate solutions that explicitly balance quality with diversity.
- Score: 48.248213281039334
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural Combinatorial Optimization (NCO) has mostly focused on learning policies, typically neural networks, that operate on a single candidate solution at a time, either by constructing one from scratch or iteratively improving it. In contrast, decades of work in metaheuristics have shown that maintaining and evolving populations of solutions improves robustness and exploration, and often leads to stronger performance. To close this gap, we study how to make NCO explicitly population-based by learning policies that act on sets of candidate solutions. We first propose a simple taxonomy of population awareness levels and use it to highlight two key design challenges: (i) how to represent a whole population inside a neural network, and (ii) how to learn population dynamics that balance intensification (generating good solutions) and diversification (maintaining variety). We make these ideas concrete with two complementary tools: one that improves existing solutions using information shared across the whole population, and another that generates new candidate solutions that explicitly balance quality with diversity. Experimental results on Maximum Cut and Maximum Independent Set indicate that incorporating population structure is advantageous for learned optimization methods and opens new connections between NCO and classical population-based search.
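The intensification/diversification trade-off described in the abstract can be illustrated with a minimal, non-learned selection rule. The sketch below is purely illustrative (it is not the paper's learned policy): it greedily builds a population that weighs each candidate's objective score against its Hamming distance to the solutions already selected, with a hypothetical trade-off weight `alpha`.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary solution vectors."""
    return sum(x != y for x, y in zip(a, b))

def select_population(candidates, scores, pop_size, alpha=0.5):
    """Greedy selection balancing quality with diversity.

    candidates: binary solution vectors (e.g. Max-Cut partition indicators)
    scores:     objective value of each candidate (higher is better)
    alpha:      1.0 = pure intensification, 0.0 = pure diversification
    """
    pool = list(range(len(candidates)))
    # Seed the population with the best-scoring solution.
    selected = [max(pool, key=lambda i: scores[i])]
    pool.remove(selected[0])
    while len(selected) < pop_size and pool:
        def combined(i):
            # Diversity = distance to the closest already-selected member.
            div = min(hamming(candidates[i], candidates[j]) for j in selected)
            return alpha * scores[i] + (1 - alpha) * div
        best = max(pool, key=combined)
        selected.append(best)
        pool.remove(best)
    return [candidates[i] for i in selected]
```

Sweeping `alpha` moves the selected set from a cluster of near-duplicates of the incumbent toward a spread-out population, which is the dial the paper proposes to learn rather than hand-tune.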
Related papers
- AdaEvolve: Adaptive LLM Driven Zeroth-Order Optimization [61.535567824938205]
We introduce AdaEvolve, a framework that reformulates LLM-driven evolution as a hierarchical adaptive optimization problem. AdaEvolve consistently outperforms the open-ended baselines across 185 different open-ended optimization problems.
arXiv Detail & Related papers (2026-02-23T18:45:31Z) - Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning [38.623998031868595]
The homotopy paradigm appears across diverse domains such as robust optimization, global optimization, root-finding, and sampling. We propose Neural Predictor-Corrector (NPC), which replaces hand-crafted predictor-corrector schemes with automatically learned policies. NPC consistently outperforms classical and specialized baselines in efficiency while demonstrating superior stability across tasks.
arXiv Detail & Related papers (2026-02-03T04:19:48Z) - Discovering Quality-Diversity Algorithms via Meta-Black-Box Optimization [8.5083347559272]
Quality-Diversity is a family of evolutionary algorithms that generate diverse populations of high-performing solutions. We propose using meta-learning to automatically discover novel Quality-Diversity algorithms.
arXiv Detail & Related papers (2025-02-04T10:13:13Z) - Neural Solver Selection for Combinatorial Optimization [23.449310200885893]
We propose the first general framework to coordinate neural solvers, comprising feature extraction, a selection model, and a selection strategy. We show that the proposed framework can effectively distribute instances and that the resulting composite solver achieves significantly better performance.
arXiv Detail & Related papers (2024-10-13T02:05:41Z) - Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z) - MaxMin-RLHF: Alignment with Diverse Human Preferences [101.57443597426374]
Reinforcement Learning from Human Feedback (RLHF) aligns language models to human preferences by employing a singular reward model derived from preference data. We learn a mixture of preference distributions via an expectation-maximization algorithm to better represent diverse human preferences. Our algorithm achieves an average improvement of more than 16% in win-rates over conventional RLHF algorithms.
arXiv Detail & Related papers (2024-02-14T03:56:27Z) - A two-stage algorithm in evolutionary product unit neural networks for classification [0.0]
This paper presents a procedure to add broader diversity at the beginning of the evolutionary process.
It consists of creating two initial populations with different parameter settings, evolving them for a small number of generations, selecting the best individuals from each population in the same proportion and combining them to constitute a new initial population.
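The two-stage initialization described above can be sketched generically. The code below is an illustrative toy (function names such as `two_stage_init` and the `(parent vs. mutated child)` warm-up loop are my own; the paper applies this to evolutionary product-unit networks): two populations are created under different settings, evolved briefly, and the best half of each is merged into a single, more diverse initial population.

```python
import random

def evolve(pop, fitness, generations, mutate):
    """Toy warm-up evolution: mutate each individual, keep the fitter of parent/child."""
    for _ in range(generations):
        pop = [max(p, mutate(p), key=fitness) for p in pop]
    return pop

def two_stage_init(make_a, make_b, fitness, mutate, pop_size=20, warmup_gens=5):
    """Build an initial population from two differently-parameterized seed populations."""
    pop_a = [make_a() for _ in range(pop_size)]  # first parameter setting
    pop_b = [make_b() for _ in range(pop_size)]  # second parameter setting
    pop_a = evolve(pop_a, fitness, warmup_gens, mutate)
    pop_b = evolve(pop_b, fitness, warmup_gens, mutate)
    # Select the best individuals from each population in equal proportion.
    top_a = sorted(pop_a, key=fitness, reverse=True)[: pop_size // 2]
    top_b = sorted(pop_b, key=fitness, reverse=True)[: pop_size // 2]
    return top_a + top_b
```

The equal-proportion merge is the key detail: it prevents the warm-up stage from letting one parameterization dominate the combined population before the main evolutionary run begins.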
arXiv Detail & Related papers (2024-02-09T18:56:07Z) - Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Efficient Diversity-Driven Ensemble for Deep Neural Networks [28.070540722925152]
We propose Efficient Diversity-Driven Ensemble (EDDE) to address both the diversity and the efficiency of an ensemble.
Compared with other well-known ensemble methods, EDDE achieves the highest ensemble accuracy with the lowest training cost.
We evaluate EDDE on Computer Vision (CV) and Natural Language Processing (NLP) tasks.
arXiv Detail & Related papers (2021-12-26T04:28:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.