Unsupervised Learning for Combinatorial Optimization with Principled
Objective Relaxation
- URL: http://arxiv.org/abs/2207.05984v2
- Date: Thu, 14 Jul 2022 03:51:35 GMT
- Title: Unsupervised Learning for Combinatorial Optimization with Principled
Objective Relaxation
- Authors: Haoyu Wang, Nan Wu, Hang Yang, Cong Hao, Pan Li
- Abstract summary: This work proposes an unsupervised learning framework for combinatorial optimization (CO) problems.
Our key contribution is the observation that if the relaxed objective satisfies entry-wise concavity, a low optimization loss guarantees the quality of the final integral solutions; this condition is formalized in the sketch after this metadata block.
In particular, the observation can guide the design of objective models in applications where the objectives are not given explicitly and must instead be modeled beforehand.
- Score: 19.582494782591386
- License: http://creativecommons.org/licenses/by/4.0/
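The entry-wise concavity condition referenced in the summary can be stated compactly. The following is a sketch of our reading of the claim; the notation (f, f_r, x_bar) is introduced here for exposition and is not necessarily the paper's:

```latex
% Entry-wise concavity and the rounding guarantee it buys (sketch; notation
% introduced here for exposition). Let f : {0,1}^n -> R be the discrete
% objective and f_r : [0,1]^n -> R a relaxation agreeing with f on {0,1}^n.
\[
  \textbf{Condition:}\quad
  t \;\mapsto\; f_r(t, x_{-i}) \ \text{is concave on } [0,1]
  \quad \text{for every coordinate } i \text{ and every fixed } x_{-i}.
\]
% A concave function on [0,1] attains its minimum at an endpoint, so rounding
% coordinates one at a time to the better of {0,1} never increases f_r.
% The resulting integral point \bar{x} therefore satisfies
\[
  f(\bar{x}) \;=\; f_r(\bar{x}) \;\le\; f_r(x),
\]
% i.e., a low relaxed loss directly bounds the cost of the rounded solution.
```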
- Abstract: Using machine learning to solve combinatorial optimization (CO) problems is
challenging, especially when the data is unlabeled. This work proposes an
unsupervised learning framework for CO problems. Our framework follows a
standard relaxation-plus-rounding approach and adopts neural networks to
parameterize the relaxed solutions so that simple back-propagation can train
the model end-to-end. Our key contribution is the observation that if the
relaxed objective satisfies entry-wise concavity, a low optimization loss
guarantees the quality of the final integral solutions. This observation
significantly broadens the applicability of the previous framework inspired by
Erdos' probabilistic method. In particular, this observation can guide the
design of objective models in applications where the objectives are not given
explicitly and must instead be modeled beforehand. We evaluate our framework by
solving a synthetic graph optimization problem and two real-world applications:
resource allocation in circuit design and approximate computing. Our
framework largely outperforms baselines based on naïve relaxation,
reinforcement learning, and Gumbel-softmax tricks.
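To make the recipe concrete, below is a minimal, self-contained sketch of the relaxation-plus-rounding loop on a toy penalized vertex-cover-style objective. The instance, the network architecture, and the hyperparameters are illustrative assumptions rather than the paper's benchmarks; the one property deliberately carried over is that the relaxed loss is entry-wise concave (here, entry-wise linear), so the sequential rounding at the end cannot increase it.

```python
# Minimal sketch of the relaxation-plus-rounding framework described above.
# The instance, network, and hyperparameters are illustrative assumptions;
# the key carried-over property is an entry-wise concave relaxed objective.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 10                                    # number of binary decision variables
w = torch.rand(n)                         # hypothetical per-item costs
A = torch.triu((torch.rand(n, n) < 0.3).float(), diagonal=1)  # random "edges"
beta = 2.0                                # penalty weight for uncovered edges

def relaxed_objective(x):
    # Item cost plus a penalty for edges with neither endpoint picked
    # (a vertex-cover-style loss). Holding all other coordinates fixed,
    # this is linear in each x_i, hence entry-wise concave.
    uncovered = ((1 - x).unsqueeze(1) * (1 - x).unsqueeze(0) * A).sum()
    return w @ x + beta * uncovered

# A neural network parameterizes the relaxed solution in [0,1]^n, so plain
# back-propagation on the relaxed objective trains it end to end.
net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    loss = relaxed_objective(net(w))      # w doubles as the input features
    opt.zero_grad()
    loss.backward()
    opt.step()

def round_sequentially(x):
    # A concave function on [0,1] attains its minimum at an endpoint, so
    # fixing coordinates one at a time to the better of {0, 1} never
    # increases the relaxed loss.
    x = x.detach().clone()
    for i in range(n):
        x0, x1 = x.clone(), x.clone()
        x0[i], x1[i] = 0.0, 1.0
        x[i] = 0.0 if relaxed_objective(x0) <= relaxed_objective(x1) else 1.0
    return x

x_relaxed = net(w)
x_integral = round_sequentially(x_relaxed)
print(f"relaxed loss  : {relaxed_objective(x_relaxed).item():.4f}")
print(f"integral loss : {relaxed_objective(x_integral).item():.4f}")  # <= relaxed
```

As the abstract notes, the same skeleton extends to settings where the relaxed objective is itself a learned model, provided the model is designed to be entry-wise concave in the decision variables.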
Related papers
- An Unsupervised Learning Framework Combined with Heuristics for the Maximum Minimal Cut Problem [5.092968949752308]
This work proposes an unsupervised learning framework combined with heuristics for the maximum minimal cut problem (MMCP).
A crucial observation is that each solution corresponds to at least one spanning tree.
We conduct extensive experiments to evaluate our framework and give a specific application.
arXiv Detail & Related papers (2024-08-16T02:07:34Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of both GNNs and traditional solvers for CO, there is limited research on using the two together, and on how they relate, within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Unsupervised Learning for Combinatorial Optimization Needs Meta-Learning [14.86600327306136]
A general framework of unsupervised learning for combinatorial optimization (CO) is to train a neural network (NN) whose output gives a problem solution by directly optimizing the CO objective.
We propose a new objective of unsupervised learning for CO where the goal of learning is to search for good initialization for future problem instances rather than give direct solutions.
We observe that even just the initial solution given by our model before fine-tuning can significantly outperform the baselines under various evaluation settings.
arXiv Detail & Related papers (2023-01-08T22:14:59Z)
- A Differentiable Approach to Combinatorial Optimization using Dataless Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
arXiv Detail & Related papers (2022-03-15T19:21:31Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs [35.14404918074861]
This work proposes an unsupervised learning framework for combinatorial optimization problems on graphs.
Inspired by Erdos' probabilistic method, we use a neural network to parametrize a probability distribution over sets.
We show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution; a compact version of this argument is sketched after this list.
arXiv Detail & Related papers (2020-06-18T16:13:36Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
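For the Erdos Goes Neural entry above, the probabilistic-method step behind the guarantee can be sketched in a few lines. This is a simplified version under our own assumptions (a nonnegative cost f and a penalized loss), not the paper's exact theorem:

```latex
% Simplified probabilistic-method step (our assumptions: f \ge 0 and a
% penalty weight \beta > 0; a sketch, not the paper's exact theorem).
% The network defines a distribution p_\theta over sets S; training minimizes
\[
  L(\theta) \;=\; \mathbb{E}_{S \sim p_\theta}\!\big[ f(S) \big]
            \;+\; \beta \,\Pr_{S \sim p_\theta}\!\big[ S \notin \Omega \big]
            \;=\; \mathbb{E}_{S \sim p_\theta}\!\big[ f(S)
            + \beta\,\mathbf{1}[S \notin \Omega] \big],
\]
% where \Omega is the feasible family. An expectation upper-bounds the
% minimum over its support, so some set S in the support of p_\theta satisfies
\[
  f(S) + \beta\,\mathbf{1}[S \notin \Omega] \;\le\; L(\theta).
\]
% If training drives L(\theta) < \beta, that S must be feasible (otherwise
% the left-hand side would be at least \beta), and then f(S) \le L(\theta):
% a low loss certifies a low-cost feasible set, recoverable by
% derandomization (e.g., the method of conditional expectation).
```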