Learning fine-grained search space pruning and heuristics for
combinatorial optimization
- URL: http://arxiv.org/abs/2001.01230v1
- Date: Sun, 5 Jan 2020 13:10:39 GMT
- Title: Learning fine-grained search space pruning and heuristics for
combinatorial optimization
- Authors: Juho Lauri, Sourav Dutta, Marco Grassia, Deepak Ajwani
- Abstract summary: We propose a framework for leveraging machine learning techniques to scale up exact optimization algorithms.
Our framework learns the relatively simpler task of pruning the elements in order to reduce the size of the problem instances.
We show that our framework can prune a large fraction of the input graph and still detect almost all of the maximum cliques.
- Score: 5.72274610208488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combinatorial optimization problems arise in a wide range of applications
from diverse domains. Many of these problems are NP-hard and designing
efficient heuristics for them requires considerable time and experimentation.
On the other hand, the number of optimization problems in the industry
continues to grow. In recent years, machine learning techniques have been
explored to address this gap. We propose a framework for leveraging machine
learning techniques to scale up exact combinatorial optimization algorithms. In
contrast to the existing approaches based on deep-learning, reinforcement
learning and restricted Boltzmann machines that attempt to directly learn the
output of the optimization problem from its input (with limited success), our
framework learns the relatively simpler task of pruning the elements in order
to reduce the size of the problem instances. In addition, our framework uses
only interpretable learning models based on intuitive features and thus the
learning process provides deeper insights into the optimization problem and the
instance class, which can be used to design better heuristics. For the
classical maximum clique enumeration problem, we show that our framework can
prune a large fraction of the input graph (around 99% of nodes in the case of
sparse graphs) and still detect almost all of the maximum cliques. This results
in several-fold speedups of state-of-the-art algorithms. Furthermore, the model
used in our framework highlights that the chi-squared value of neighborhood
degree has a statistically significant correlation with the presence of a node
in a maximum clique, particularly in dense graphs which constitute a
significant challenge for modern solvers. We leverage this insight to design a
novel heuristic for this problem outperforming the state-of-the-art. Our
heuristic is also of independent interest for maximum clique detection and
enumeration.
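To make the pruning idea concrete, here is a minimal Python sketch of the framework described above: simple, interpretable per-node features feed a logistic-regression pruner, and exact clique enumeration then runs on the much smaller residual graph. The feature set, classifier, threshold, and use of networkx/scikit-learn are illustrative assumptions, not the paper's exact configuration.
```python
# Hedged sketch of learning-based node pruning for maximum clique enumeration.
# Features, labels, and the probability threshold are illustrative assumptions.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def node_features(G):
    """Cheap, interpretable per-node features (degree-based statistics)."""
    feats = {}
    for v in G.nodes():
        neigh_degs = np.array([G.degree(u) for u in G.neighbors(v)] or [0])
        feats[v] = [
            G.degree(v),           # node degree
            neigh_degs.mean(),     # mean neighbourhood degree
            neigh_degs.var(),      # spread of neighbourhood degrees
            nx.clustering(G, v),   # local clustering coefficient
        ]
    return feats

def label_nodes(G):
    """Training labels: 1 if the node lies in some maximum clique, else 0."""
    cliques = list(nx.find_cliques(G))
    omega = max(len(c) for c in cliques)
    in_max = set().union(*[set(c) for c in cliques if len(c) == omega])
    return {v: int(v in in_max) for v in G.nodes()}

def train_pruner(train_graphs):
    """Fit an interpretable linear model predicting max-clique membership."""
    X, y = [], []
    for G in train_graphs:
        feats, labels = node_features(G), label_nodes(G)
        for v in G.nodes():
            X.append(feats[v])
            y.append(labels[v])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y)
    return clf

def prune_and_enumerate(G, clf, keep_prob=0.05):
    """Drop nodes the model considers unlikely to be in a maximum clique,
    then run exact clique enumeration on the pruned subgraph."""
    feats = node_features(G)
    X = [feats[v] for v in G.nodes()]
    probs = clf.predict_proba(X)[:, 1]
    keep = [v for v, p in zip(G.nodes(), probs) if p >= keep_prob]
    return list(nx.find_cliques(G.subgraph(keep)))

if __name__ == "__main__":
    train = [nx.gnp_random_graph(60, 0.1, seed=s) for s in range(5)]
    clf = train_pruner(train)
    test = nx.gnp_random_graph(200, 0.05, seed=42)
    print(len(prune_and_enumerate(test, clf)), "maximal cliques on the pruned graph")
```
In this sketch the threshold `keep_prob` plays the role of the pruning aggressiveness; in practice it would be tuned so that almost no maximum cliques are lost while most of the graph is discarded.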
Related papers
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z) - Exact Combinatorial Optimization with Temporo-Attentional Graph Neural
Networks [17.128882942475]
We investigate two essential aspects of machine learning algorithms for optimization: temporal characteristics and attention.
We argue that for the task of variable selection in the branch-and-bound (B&B) algorithm, incorporating the temporal information as well as the bipartite graph attention improves the solver's performance.
arXiv Detail & Related papers (2023-11-23T08:07:15Z) - Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making [73.48977854003697]
This work introduces a new notion of complexity, the generalized bracketing numbers, which marries constraints on the adversary to the size of the space.
We then instantiate our bounds in several problems of interest, including online prediction and planning of piecewise continuous functions.
arXiv Detail & Related papers (2023-02-10T18:45:52Z) - Searching Large Neighborhoods for Integer Linear Programs with
Contrastive Learning [39.40838358438744]
Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of optimization problems.
Large Neighborhood Search (LNS), as a heuristic algorithm, can find high-quality solutions to ILPs faster than Branch and Bound.
We propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks across multiple metrics.
arXiv Detail & Related papers (2023-02-03T07:15:37Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of the baseline's, and the average solution gap to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - Solving Dynamic Graph Problems with Multi-Attention Deep Reinforcement
Learning [1.3534683694551497]
In recent years, using deep learning techniques to find solutions for NP-hard graph problems has gained much interest.
In this paper, we propose a novel architecture named Graph Temporal Attention with Reinforcement Learning (GTA-RL) to learn solutions for graph-based dynamic optimization problems.
arXiv Detail & Related papers (2022-01-13T11:36:05Z) - A Bi-Level Framework for Learning to Solve Combinatorial Optimization on
Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z) - Learning to Sparsify Travelling Salesman Problem Instances [0.5985204759362747]
We use machine learning-based pruning as a pre-processing step, followed by an exact Integer Programming approach, to sparsify travelling salesman problem instances.
Our learning approach requires very little training data and is amenable to mathematical analysis.
arXiv Detail & Related papers (2021-04-19T14:35:14Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Learning to Accelerate Heuristic Searching for Large-Scale Maximum
Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design, and has been widely applied to economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, either requiring intolerable running time or too many computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z) - Gumbel-softmax-based Optimization: A Simple General Framework for
Optimization Problems on Graphs [5.486093983007419]
We propose a simple, fast, and general algorithm framework based on advanced automatic differentiation techniques empowered by deep learning frameworks.
High-quality solutions can be obtained in much less time compared to traditional approaches (see the sketch after this list).
arXiv Detail & Related papers (2020-04-14T14:11:00Z)
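As an illustration of the Gumbel-softmax idea in the last entry, here is a minimal sketch that relaxes a graph problem (max cut, chosen here only for concreteness) into a differentiable objective and optimizes it by gradient ascent using PyTorch's gumbel_softmax. The problem choice, temperature, and optimizer settings are assumptions for illustration, not that paper's exact setup.
```python
# Minimal sketch of Gumbel-softmax-based optimization on a graph (relaxed max cut).
# Problem choice and hyperparameters are illustrative assumptions.
import networkx as nx
import torch
import torch.nn.functional as F

def gumbel_softmax_maxcut(G, steps=500, tau=1.0, lr=0.1, seed=0):
    torch.manual_seed(seed)
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    edges = torch.tensor([[idx[u], idx[v]] for u, v in G.edges()])
    # One pair of logits per node: which side of the cut it belongs to.
    logits = torch.zeros(len(nodes), 2, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Differentiable (soft) one-hot assignments via the Gumbel-softmax trick.
        assign = F.gumbel_softmax(logits, tau=tau, hard=False)
        p_u, p_v = assign[edges[:, 0]], assign[edges[:, 1]]
        # Expected number of cut edges under the relaxed assignment.
        soft_cut = (p_u[:, 0] * p_v[:, 1] + p_u[:, 1] * p_v[:, 0]).sum()
        (-soft_cut).backward()  # gradient ascent on the cut size
        opt.step()
    # Discretize: each node goes to its higher-probability side.
    side = logits.argmax(dim=1)
    return sum(int(side[idx[u]] != side[idx[v]]) for u, v in G.edges())

if __name__ == "__main__":
    G = nx.gnp_random_graph(50, 0.2, seed=1)
    print("cut edges:", gumbel_softmax_maxcut(G), "of", G.number_of_edges())
```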
This list is automatically generated from the titles and abstracts of the papers on this site.