Neural Combinatorial Optimization: a New Player in the Field
- URL: http://arxiv.org/abs/2205.01356v1
- Date: Tue, 3 May 2022 07:54:56 GMT
- Title: Neural Combinatorial Optimization: a New Player in the Field
- Authors: Andoni I. Garmendia, Josu Ceberio, Alexander Mendiburu
- Abstract summary: This paper presents a critical analysis of the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost and generalization to larger-sized instances.
- Score: 69.23334811890919
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural Combinatorial Optimization attempts to learn good heuristics for
solving a set of problems using Neural Network models and Reinforcement
Learning. Recently, its good performance has encouraged many practitioners to
develop neural architectures for a wide variety of combinatorial problems.
However, the incorporation of such algorithms in the conventional optimization
framework has raised many questions related to their performance and the
experimental comparison with other methods such as exact algorithms, heuristics
and metaheuristics. This paper presents a critical analysis of the
incorporation of algorithms based on neural networks into the classical
combinatorial optimization framework. Subsequently, a comprehensive study is
carried out to analyse the fundamental aspects of such algorithms, including
performance, transferability, computational cost and generalization to
larger-sized instances. To that end, we select the Linear Ordering Problem, an
NP-hard problem, as a case study, and develop a Neural Combinatorial
Optimization model to optimize it. Finally, we discuss how the analysed aspects
apply to a general learning framework, and suggest new directions for future
work in the area of Neural Combinatorial Optimization algorithms.
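The abstract describes learning heuristics for the Linear Ordering Problem (LOP) with neural models and Reinforcement Learning. As a minimal, hedged sketch of that idea (not the paper's architecture), the following uses a Plackett-Luce policy over learned per-item scores, trained with REINFORCE to favour orderings with a higher LOP objective; the toy instance, score parametrization, and hyperparameters are all illustrative assumptions.

```python
# Illustrative sketch only: REINFORCE on a toy Linear Ordering Problem (LOP).
# The LOP seeks a permutation maximizing the sum of matrix entries above the
# diagonal. A Plackett-Luce policy (per-item scores theta) samples orderings.
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.integers(0, 10, size=(n, n)).astype(float)  # hypothetical LOP instance

def lop_value(B, perm):
    # Sum of entries strictly above the diagonal under the ordering `perm`.
    P = B[np.ix_(perm, perm)]
    return np.triu(P, k=1).sum()

def sample_perm(theta, rng):
    # Plackett-Luce sampling via Gumbel perturbation: sort perturbed scores.
    g = rng.gumbel(size=theta.shape)
    return np.argsort(-(theta + g))

def grad_log_prob(theta, perm):
    # Gradient of the Plackett-Luce log-probability of `perm` w.r.t. theta:
    # at each position, +1 for the chosen item, minus the softmax over the
    # items still unplaced.
    g = np.zeros_like(theta)
    for i in range(len(perm)):
        rest = perm[i:]
        w = np.exp(theta[rest] - theta[rest].max())
        w /= w.sum()
        g[perm[i]] += 1.0
        g[rest] -= w
    return g

theta = np.zeros(n)
baseline = 0.0
for step in range(2000):
    perm = sample_perm(theta, rng)
    r = lop_value(B, perm)
    baseline = 0.9 * baseline + 0.1 * r            # moving-average baseline
    theta += 0.01 * (r - baseline) * grad_log_prob(theta, perm)

best = np.argsort(-theta)  # greedy ordering from the learned scores
print("learned ordering:", best, "LOP value:", lop_value(B, best))
```

The paper's actual model is more elaborate (a neural architecture conditioned on the instance); this sketch only shows the RL training loop such methods share.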
Related papers
- Neural Solver Selection for Combinatorial Optimization [23.449310200885893]
We propose the first general framework to coordinate neural solvers, which involves feature extraction, selection model, and selection strategy.
We show that the proposed framework can effectively distribute instances and the resulting composite solver can achieve significantly better performance.
arXiv Detail & Related papers (2024-10-13T02:05:41Z)
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- From Learning to Optimize to Learning Optimization Algorithms [4.066869900592636]
We identify key principles that classical algorithms obey but that have, up to now, not been used for Learning to Optimize (L2O).
We provide a general design pipeline, taking into account data, architecture and learning strategy, and thereby enabling a synergy between classical optimization and L2O.
We demonstrate the success of these novel principles by designing a new learning-enhanced BFGS algorithm and provide numerical experiments evidencing its adaptation to many settings at test time.
arXiv Detail & Related papers (2024-05-28T14:30:07Z)
- Neural Algorithmic Reasoning for Combinatorial Optimisation [20.36694807847833]
We propose leveraging recent advancements in neural reasoning to improve the learning of CO problems.
We suggest pre-training our neural model on relevant algorithms before training it on CO instances.
Our results demonstrate that by using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.
arXiv Detail & Related papers (2023-05-18T13:59:02Z)
- Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are universal optimization algorithms which should be used for solving difficult problems, unsolvable by classic approaches.
In this paper we aim at constructing a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting and are therefore incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradients.
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
- On the Generalization of Neural Combinatorial Optimization Heuristics [0.7049738935364298]
We show that our proposed meta-learning approach significantly improves the generalization of two state-of-the-art models.
We formalize solving a CO problem over a given instance distribution as a separate learning task.
We investigate meta-learning techniques to learn a model on a variety of tasks, in order to optimize its capacity to adapt to new tasks.
arXiv Detail & Related papers (2022-06-01T22:39:35Z)
- Acceleration techniques for optimization over trained neural network ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
arXiv Detail & Related papers (2021-12-13T20:50:54Z)
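The last summary mentions big-$M$ mixed-integer formulations for optimizing over a ReLU network. As a hedged numeric sketch (bounds and test values are illustrative, not from the paper), the standard encoding of $y = \mathrm{ReLU}(a)$ with pre-activation bounds $L \le a \le U$ uses a binary $z$ and the constraints $y \ge a$, $y \ge 0$, $y \le a - L(1-z)$, $y \le Uz$; the snippet below checks by enumeration that these constraints recover the ReLU exactly.

```python
# Illustrative check of the classic big-M encoding of y = ReLU(a), a in [L, U]:
#   y >= a,  y >= 0,  y <= a - L*(1 - z),  y <= U*z,  z in {0, 1}.
# Hypothetical bounds; a real MILP solver would handle z, here we enumerate it.
L_, U_ = -3.0, 4.0

def relu_from_bigM(a):
    sols = []
    for z in (0, 1):
        lo = max(a, 0.0)                    # from y >= a and y >= 0
        hi = min(a - L_ * (1 - z), U_ * z)  # from y <= a - L(1-z) and y <= U*z
        # For this encoding, the feasible interval collapses to a single point
        # whenever it is nonempty, so lo is the (unique) feasible y.
        if lo <= hi + 1e-9:
            sols.append(lo)
    assert sols, "pre-activation a must lie in [L, U]"
    return sols[0]

for a in (-3.0, -1.5, 0.0, 0.7, 4.0):
    assert abs(relu_from_bigM(a) - max(a, 0.0)) < 1e-9
```

Tightness of the bounds $L$ and $U$ matters in practice: looser bounds give weaker LP relaxations, which is precisely what acceleration techniques for such formulations target.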
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.