Towards a Generic Representation of Combinatorial Problems for
Learning-Based Approaches
- URL: http://arxiv.org/abs/2403.06026v2
- Date: Wed, 13 Mar 2024 00:09:46 GMT
- Title: Towards a Generic Representation of Combinatorial Problems for
Learning-Based Approaches
- Authors: Léo Boisvert, Hélène Verhaeghe, Quentin Cappart
- Abstract summary: In recent years, there has been a growing interest in using learning-based approaches for solving combinatorial problems.
The challenge lies in encoding the targeted problems into a structure compatible with the learning algorithm.
Many existing works have proposed problem-specific representations, often in the form of a graph, to leverage the advantages of graph neural networks.
This paper advocates for a fully generic representation of combinatorial problems for learning-based approaches.
- Score: 2.2526069385327316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been a growing interest in using learning-based
approaches for solving combinatorial problems, either in an end-to-end manner
or in conjunction with traditional optimization algorithms. In both scenarios,
the challenge lies in encoding the targeted combinatorial problems into a
structure compatible with the learning algorithm. Many existing works have
proposed problem-specific representations, often in the form of a graph, to
leverage the advantages of \textit{graph neural networks}. However, these
approaches lack generality, as the representation cannot be easily transferred
from one combinatorial problem to another. While some attempts have been
made to bridge this gap, they still offer only partial generality. In
response to this challenge, this paper advocates for progress toward a fully
generic representation of combinatorial problems for learning-based approaches.
The approach we propose involves constructing a graph by breaking down any
constraint of a combinatorial problem into an abstract syntax tree and
expressing relationships (e.g., a variable involved in a constraint) through
the edges. Furthermore, we introduce a graph neural network architecture
capable of efficiently learning from this representation. The tool provided
operates on combinatorial problems expressed in the XCSP3 format, handling all
the constraints available in the 2023 mini-track competition. Experimental
results on four combinatorial problems demonstrate that our architecture
achieves performance comparable to dedicated architectures while maintaining
generality. Our code and trained models are publicly available at
\url{https://github.com/corail-research/learning-generic-csp}.
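To make the encoding described above more concrete, the following is a minimal, illustrative sketch (not the authors' actual XCSP3 pipeline) of how a constraint can be broken down into an abstract syntax tree and turned into a typed graph whose variable nodes are shared across constraints. All class names, node kinds, and the tiny expression format used here are hypothetical.

```python
# Illustrative sketch only: AST-to-graph encoding in the spirit of the abstract.
# The node types, features, and expression grammar are placeholders, not the
# paper's actual XCSP3 representation.
from dataclasses import dataclass, field


@dataclass
class Node:
    idx: int
    kind: str    # e.g. "variable", "constant", "operator", "constraint"
    label: str   # e.g. "x", "3", "+", "<="


@dataclass
class ConstraintGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (src, dst) pairs, both directions

    def add_node(self, kind, label):
        node = Node(len(self.nodes), kind, label)
        self.nodes.append(node)
        return node.idx

    def add_edge(self, a, b):
        self.edges.append((a, b))
        self.edges.append((b, a))


def encode_expression(expr, graph, var_nodes):
    """Recursively turn a nested tuple such as ("<=", ("+", "x", "y"), "z")
    into AST nodes and edges; variable nodes are shared across constraints."""
    if isinstance(expr, tuple):                  # operator node with children
        op, *children = expr
        op_idx = graph.add_node("operator", op)
        for child in children:
            graph.add_edge(op_idx, encode_expression(child, graph, var_nodes))
        return op_idx
    if isinstance(expr, str):                    # variable leaf (shared node)
        if expr not in var_nodes:
            var_nodes[expr] = graph.add_node("variable", expr)
        return var_nodes[expr]
    return graph.add_node("constant", str(expr))  # numeric literal


# Example: two constraints over shared variables x, y, z.
graph, var_nodes = ConstraintGraph(), {}
for constraint in [("<=", ("+", "x", "y"), "z"), ("!=", "x", "y")]:
    root = encode_expression(constraint, graph, var_nodes)
    c_idx = graph.add_node("constraint", constraint[0])
    graph.add_edge(c_idx, root)

print(len(graph.nodes), "nodes,", len(graph.edges) // 2, "undirected edges")
```

A heterogeneous graph neural network, such as the architecture described in the abstract, could then perform message passing over such typed nodes and edges; the node kinds and edge conventions chosen here are only placeholders.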
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to incorporate positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- GOAL: A Generalist Combinatorial Optimization Agent Learning [0.05461938536945722]
GOAL is a model capable of efficiently solving multiple hard combinatorial optimization problems (COPs).
GOAL consists of a single backbone plus light-weight problem-specific adapters for input and output processing.
We show that GOAL is only slightly inferior to the specialized baselines while being the first multi-task model that solves a wide range of COPs.
arXiv Detail & Related papers (2024-06-21T11:55:20Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle combinatorial optimization (CO) problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- A Unified Analysis of Dynamic Interactive Learning [5.474944738795308]
Previous work by Emamjomeh-Zadeh et al. [2020] introduced dynamics into interactive learning as a way to model non-static user preferences.
We give a framework that captures both of the models analyzed by [Emamjomeh-Zadeh et al., 2020], which allows us to study any type of concept evolution.
We also study an efficient algorithm where the learner simply follows the feedback at each round.
arXiv Detail & Related papers (2022-04-14T16:03:37Z)
- A Differentiable Approach to Combinatorial Optimization using Dataless Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
arXiv Detail & Related papers (2022-03-15T19:21:31Z)
- Deep Probabilistic Graph Matching [72.6690550634166]
We propose a deep learning-based graph matching framework that works for the original quadratic assignment problem (QAP) without compromising on the matching constraints.
The proposed method is evaluated on three widely used benchmarks (Pascal VOC, Willow Object, and SPair-71k) and outperforms all previous state-of-the-art methods on all of them.
arXiv Detail & Related papers (2022-01-05T13:37:27Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach that combines the best of both worlds: a bi-level framework in which an upper-level learning method optimizes the graph.
Such a bi-level approach simplifies learning on the original hard CO problem and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints [20.659237363210774]
We aim to integrate integer programming solvers into neural network architectures as layers capable of learning both the cost terms and the constraints.
The resulting end-to-end trainable architectures jointly extract features from raw data and solve a suitable (learned) problem with state-of-the-art integer programming solvers.
arXiv Detail & Related papers (2021-05-05T21:52:53Z)
- Neural Learning of One-of-Many Solutions for Combinatorial Problems in Structured Output Spaces [20.101005623256626]
We argue that being oblivious to the presence of multiple solutions can severely hamper the training of such prediction networks.
We present a generic learning framework that adapts an existing prediction network for an RL problem to handle solution multiplicity.
arXiv Detail & Related papers (2020-08-27T08:37:01Z)
- Multi-view Graph Learning by Joint Modeling of Consistency and Inconsistency [65.76554214664101]
Graph learning has emerged as a promising technique for multi-view clustering with its ability to learn a unified and robust graph from multiple views.
We propose a new multi-view graph learning framework, which for the first time simultaneously models multi-view consistency and multi-view inconsistency in a unified objective function.
Experiments on twelve multi-view datasets have demonstrated the robustness and efficiency of the proposed approach.
arXiv Detail & Related papers (2020-08-24T06:11:29Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimum spanning tree problems from examples without being given the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
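As a concrete illustration of the "inference as an integer linear program" viewpoint mentioned in the entry above, the sketch below hand-writes the Sudoku rules as an ILP and solves it. Note that this is the opposite direction from that paper's contribution, which mines such constraints from data rather than specifying them; the use of the PuLP modelling library, the variable names, and the example clues are assumptions for illustration only.

```python
# Illustrative sketch only: 9x9 Sudoku completion expressed as an ILP.
# Hand-written rules, not the constraint-mining method of the cited paper.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

VALS = range(1, 10)
ROWS = COLS = range(9)

prob = LpProblem("sudoku_inference", LpMinimize)
# x[r][c][v] = 1 iff cell (r, c) takes value v
x = LpVariable.dicts("x", (ROWS, COLS, VALS), cat="Binary")
prob += lpSum(x[0][0][v] for v in VALS)  # dummy objective: pure feasibility

for r in ROWS:
    for c in COLS:
        prob += lpSum(x[r][c][v] for v in VALS) == 1      # one value per cell
for v in VALS:
    for r in ROWS:
        prob += lpSum(x[r][c][v] for c in COLS) == 1      # once per row
    for c in COLS:
        prob += lpSum(x[r][c][v] for r in ROWS) == 1      # once per column
    for br in range(0, 9, 3):                             # once per 3x3 box
        for bc in range(0, 9, 3):
            prob += lpSum(x[br + i][bc + j][v]
                          for i in range(3) for j in range(3)) == 1

# Hypothetical given clues: (row, col, value)
for r, c, v in [(0, 0, 5), (1, 3, 6), (4, 4, 7)]:
    prob += x[r][c][v] == 1

prob.solve(PULP_CBC_CMD(msg=False))
solution = [[next(v for v in VALS if x[r][c][v].value() > 0.5) for c in COLS]
            for r in ROWS]
print(solution[0])  # first row of one feasible completion
```

In the constraint-mining setting, the equality constraints written by hand here would instead be inferred from example input-output pairs, and only the ILP inference step would remain fixed.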