Let the Flows Tell: Solving Graph Combinatorial Optimization Problems
with GFlowNets
- URL: http://arxiv.org/abs/2305.17010v3
- Date: Mon, 20 Nov 2023 16:57:12 GMT
- Title: Let the Flows Tell: Solving Graph Combinatorial Optimization Problems
with GFlowNets
- Authors: Dinghuai Zhang, Hanjun Dai, Nikolay Malkin, Aaron Courville, Yoshua
Bengio, Ling Pan
- Abstract summary: Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as a powerful machinery to efficiently sample from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
- Score: 86.43523688236077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combinatorial optimization (CO) problems are often NP-hard and thus out of
reach for exact algorithms, making them a tempting domain to apply machine
learning methods. The highly structured constraints in these problems can
hinder either optimization or sampling directly in the solution space. On the
other hand, GFlowNets have recently emerged as a powerful machinery to
efficiently sample from composite unnormalized densities sequentially and have
the potential to amortize such solution-searching processes in CO, as well as
generate diverse solution candidates. In this paper, we design Markov decision
processes (MDPs) for different combinatorial problems and propose to train
conditional GFlowNets to sample from the solution space. Efficient training
techniques are also developed to benefit long-range credit assignment. Through
extensive experiments on a variety of different CO tasks with synthetic and
realistic data, we demonstrate that GFlowNet policies can efficiently find
high-quality solutions. Our implementation is open-sourced at
https://github.com/zdhNarsil/GFlowNet-CombOpt.
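The MDP design can be illustrated with a toy sketch for maximum independent set (MIS), one of the tasks studied in the paper: states are partial vertex sets, actions add a vertex that keeps the set independent, and a trajectory terminates when no vertex can be added. This is a minimal illustration on a hypothetical toy graph, with a uniform rollout policy standing in for the learned GFlowNet forward policy; the actual MDPs and training code are in the linked repository.

```python
import random

# Hypothetical toy graph as adjacency sets (not from the paper).
GRAPH = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: set()}

def legal_actions(state):
    """Vertices that can be added while keeping the set independent."""
    return [v for v in GRAPH if v not in state and not GRAPH[v] & state]

def rollout(seed=0):
    """Sample one trajectory of the MIS MDP with a uniform policy.

    A trained GFlowNet would replace the uniform choice with a learned
    forward policy whose terminal flows are proportional to a reward
    that grows with the independent set's size."""
    rng = random.Random(seed)
    state = set()
    while True:
        actions = legal_actions(state)
        if not actions:        # terminal: the set is maximal
            return state
        state.add(rng.choice(actions))

solution = rollout()
# Every sampled terminal state is a valid (maximal) independent set.
assert all(GRAPH[u].isdisjoint(solution - {u}) for u in solution)
```

Sampling many such trajectories yields diverse candidate solutions, which is the amortization benefit the abstract describes.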
Related papers
- A Random-Key Optimizer for Combinatorial Optimization [0.0]
The Random-Key Optimizer (RKO) is a versatile and efficient local-search method tailored for combinatorial optimization problems.
Using the random-key concept, RKO encodes solutions as vectors of random keys that are subsequently decoded into feasible solutions via problem-specific decoders.
The RKO framework is able to combine a plethora of classic metaheuristics, each capable of operating independently or in parallel, with solution sharing facilitated through an elite solution pool.
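The random-key encoding can be sketched as follows: a candidate solution is a vector of keys in [0, 1), and a problem-specific decoder maps any such vector to a feasible solution, e.g. sorting indices by key value to obtain a permutation. This is an illustrative sketch of the general random-key idea, not RKO's exact decoder.

```python
import random

def decode_permutation(keys):
    """Decode a random-key vector into a permutation by sorting indices
    by their key values (the classic random-key decoder)."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

rng = random.Random(42)
keys = [rng.random() for _ in range(5)]
tour = decode_permutation(keys)
assert sorted(tour) == list(range(5))  # any key vector decodes to a valid permutation
```

Because every key vector decodes to a feasible solution, metaheuristics can search the unconstrained key space and share elite key vectors freely.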
arXiv Detail & Related papers (2024-11-06T22:23:29Z)
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
- Graph Q-Learning for Combinatorial Optimization [44.8086492019594]
Graph Neural Networks (GNNs) have been shown to be effective at solving prediction and inference problems on graph data.
We propose and demonstrate that GNNs can be applied to solve Combinatorial Optimization problems.
arXiv Detail & Related papers (2024-01-11T01:15:28Z)
- FedDRO: Federated Compositional Optimization for Distributionally Robust Learning [11.70892315284039]
Large-scale and distributed availability of data demands the development of efficient federated learning (FL) algorithms.
We propose efficient FedAvg-type algorithms for solving nonlinear compositional problems in the FL setting.
A key novelty of our work is to develop solution accuracy-independent algorithms that do not require large batch evaluations.
arXiv Detail & Related papers (2023-11-21T14:53:39Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of the baseline's, and the average solution gap of our model relative to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- End-to-End Pareto Set Prediction with Graph Neural Networks for Multi-objective Facility Location [10.130342722193204]
Facility location problems (FLPs) are a typical class of NP-hard optimization problems, widely seen in supply chains and logistics.
In this paper, we consider the multi-objective facility location problem (MO-FLP) that simultaneously minimizes the overall cost and maximizes the system reliability.
Two graph neural networks are constructed to learn the implicit graph representation on nodes and edges.
arXiv Detail & Related papers (2022-10-27T07:15:55Z)
- A Differentiable Approach to Combinatorial Optimization using Dataless Neural Networks [20.170140039052455]
We propose a radically different approach: no data is required to train the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
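The dataless idea can be sketched on a toy MaxCut instance: instead of learning from a dataset, per-vertex parameters are optimized by gradient ascent directly against a continuous relaxation of the instance's objective, and the converged parameters are rounded to read off a solution. The graph, parameterization, learning rate, and iteration count below are illustrative assumptions, not the paper's actual network or training scheme.

```python
import math
import random

# Hypothetical toy MaxCut instance: the edges of a 4-cycle.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]
N = 4

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cut_value(p):
    """Expected cut size when vertex i is on side 1 with probability p[i]."""
    return sum(p[i] + p[j] - 2 * p[i] * p[j] for i, j in EDGES)

# The "network" here is just one logit per vertex, the simplest
# possible dataless model; training is plain gradient ascent on the
# relaxed objective, with no data involved.
rng = random.Random(0)
theta = [rng.uniform(-0.1, 0.1) for _ in range(N)]
for _ in range(500):
    p = [sigmoid(t) for t in theta]
    grad = [0.0] * N
    for i, j in EDGES:
        # d/d theta_i of each edge term, via the chain rule.
        grad[i] += (1 - 2 * p[j]) * p[i] * (1 - p[i])
        grad[j] += (1 - 2 * p[i]) * p[j] * (1 - p[j])
    theta = [t + 0.5 * g for t, g in zip(theta, grad)]

# Round the relaxed solution to a discrete side assignment.
assignment = [int(sigmoid(t) > 0.5) for t in theta]
```

Rounding the converged probabilities yields the cut; the relaxation makes the combinatorial objective differentiable, which is what lets the parameters be refined with gradients alone.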
arXiv Detail & Related papers (2022-03-15T19:21:31Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach that combines the best of both worlds: a bi-level framework with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.