A Differentiable Approach to Combinatorial Optimization using Dataless
Neural Networks
- URL: http://arxiv.org/abs/2203.08209v1
- Date: Tue, 15 Mar 2022 19:21:31 GMT
- Title: A Differentiable Approach to Combinatorial Optimization using Dataless
Neural Networks
- Authors: Ismail R. Alkhouri, George K. Atia, Alvaro Velasquez
- Abstract summary: We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
- Score: 20.170140039052455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of machine learning solutions for reasoning about discrete
structures has brought attention to its adoption within combinatorial
optimization algorithms. Such approaches generally rely on supervised learning
by leveraging datasets of the combinatorial structures of interest drawn from
some distribution of problem instances. Reinforcement learning has also been
employed to find such structures. In this paper, we propose a radically
different approach in that no data is required for training the neural networks
that produce the solution. In particular, we reduce the combinatorial
optimization problem to a neural network and employ a dataless training scheme
to refine the parameters of the network such that those parameters yield the
structure of interest. We consider the combinatorial optimization problems of
finding maximum independent sets and maximum cliques in a graph. In principle,
since these problems belong to the NP-hard complexity class, our proposed
approach can be used to solve any other NP-hard problem. Additionally, we
propose a universal graph reduction procedure to handle large scale graphs. The
reduction exploits community detection for graph partitioning and is applicable
to any graph type and/or density. Experimental evaluation on both synthetic
graphs and real-world benchmarks demonstrates that our method performs on par
with or outperforms state-of-the-art heuristic, reinforcement learning, and
machine learning based methods without requiring any data.
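The dataless idea can be illustrated with a short sketch for maximum independent set: relax the binary node-selection vector to the box [0, 1]^n, treat the relaxed entries as the only trainable parameters, and run projected gradient ascent on the set size minus a penalty for edges with both endpoints selected. This is a minimal illustration of the principle, not the paper's actual network construction; the function name, the penalty weight `lam`, and the step schedule are assumptions.

```python
import numpy as np

def mis_dataless(adj, lam=2.0, lr=0.1, steps=500):
    """Projected gradient ascent on a continuous MIS relaxation.

    Maximizes f(x) = sum(x) - lam * sum_{(i,j) in E} x_i * x_j,
    i.e. set size minus a penalty for selecting both endpoints of
    an edge, with x kept inside the box [0, 1].
    """
    n = adj.shape[0]
    x = np.full(n, 0.5)                  # the only "parameters"; no data involved
    for _ in range(steps):
        grad = 1.0 - lam * adj @ x       # gradient of f with respect to x
        x = np.clip(x + lr * grad, 0.0, 1.0)
    return x > 0.5                       # threshold to a candidate set

# Path graph 0-1-2: the unique maximum independent set is {0, 2}.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
selected = mis_dataless(A)
```

Maximum clique can be handled by the same routine applied to the complement graph, since a clique in a graph is an independent set in its complement.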
Related papers
- Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data [23.661713049508375]
We propose an algorithm that learns over a submanifold in the federated setting, where data is distributed across clients.
We show that our proposed algorithm converges sublinearly to a neighborhood of a first-order optimal solution by using a novel analysis.
arXiv Detail & Related papers (2024-06-12T17:53:28Z)
- GASE: Graph Attention Sampling with Edges Fusion for Solving Vehicle Routing Problems [6.084414764415137]
We propose an adaptive Graph Attention Sampling with Edges Fusion framework to solve vehicle routing problems.
Our proposed model outperforms the existing methods by 2.08%-6.23% and shows stronger generalization ability.
arXiv Detail & Related papers (2024-05-21T03:33:07Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis of the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z)
- Resource-constrained Federated Edge Learning with Heterogeneous Data: Formulation and Analysis [8.863089484787835]
We propose a distributed approximate Newton-type training scheme, namely FedOVA, to solve the statistical challenge posed by heterogeneous data.
FedOVA decomposes a multi-class classification problem into more straightforward binary classification problems and then combines their respective outputs using ensemble learning.
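The one-vs-all decomposition that FedOVA builds on can be sketched in a few lines. This is a centralized, simplified illustration: the federated aggregation and the approximate Newton updates of the paper are omitted, and the plain logistic-regression base learners here are assumptions.

```python
import numpy as np

def train_ova(X, y, n_classes, lr=0.5, epochs=2000):
    """One-vs-all decomposition: one binary logistic-regression
    classifier per class, trained by batch gradient descent."""
    n, d = X.shape
    W = np.zeros((n_classes, d))
    b = np.zeros(n_classes)
    for k in range(n_classes):
        t = (y == k).astype(float)              # binary relabeling: class k vs rest
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ W[k] + b[k])))
            g = p - t                           # logistic-loss gradient per sample
            W[k] -= lr * X.T @ g / n
            b[k] -= lr * g.mean()
    return W, b

def predict_ova(X, W, b):
    # Ensemble step: pick the class whose binary classifier scores highest.
    return np.argmax(X @ W.T + b, axis=1)

# Toy 3-class dataset with well-separated clusters.
X = np.array([[0.0, 0.0], [0.0, 0.2],
              [2.0, 0.0], [2.0, 0.2],
              [0.0, 2.0], [0.2, 2.0]])
y = np.array([0, 0, 1, 1, 2, 2])
W, b = train_ova(X, y, 3)
pred = predict_ova(X, W, b)
```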
arXiv Detail & Related papers (2021-10-14T17:35:24Z)
- Combinatorial optimization and reasoning with graph neural networks [7.8107109904672365]
Combinatorial optimization is a well-established area in operations research and computer science.
Recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks.
arXiv Detail & Related papers (2021-02-18T18:47:20Z)
- Deep Reinforcement Learning of Graph Matching [63.469961545293756]
Graph matching (GM) under node and pairwise constraints has been a building block in areas from optimization to computer vision.
We present a reinforcement learning solver for GM, namely RGM, that seeks the node correspondence between pairs of graphs.
Our method differs from previous deep graph matching models, which focus on front-end feature extraction and affinity function learning.
arXiv Detail & Related papers (2020-12-16T13:48:48Z)
- Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs [35.14404918074861]
This work proposes an unsupervised learning framework for combinatorial optimization problems on graphs.
Inspired by Erdos' probabilistic method, we use a neural network to parametrize a probability distribution over sets.
We show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution.
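The step that turns such a learned distribution into a deterministic set can be sketched with the method of conditional expectations: fix each probability to 0 or 1, whichever does not increase the expected loss, so the final integral solution is at least as good as the expectation. The MIS-style expected loss below is chosen purely for illustration, not taken from the paper; `derandomize` and `lam` are assumptions.

```python
import numpy as np

def derandomize(adj, p, lam=2.0):
    """Method of conditional expectations for an MIS-style loss.

    With independent selection probabilities p, the expected loss is
    E[L] = lam * sum_{(i,j) in E} p_i p_j - sum_i p_i.  Greedily fixing
    each p_i to whichever of {0, 1} does not increase E[L] yields an
    integral solution whose loss is at most the initial expectation.
    """
    def loss(q):
        # q @ adj @ q counts each undirected edge twice, hence the / 2
        return lam * (q @ adj @ q) / 2.0 - q.sum()

    q = p.astype(float).copy()
    for i in range(len(q)):
        q[i] = 1.0
        loss_one = loss(q)
        q[i] = 0.0
        loss_zero = loss(q)
        q[i] = 1.0 if loss_one <= loss_zero else 0.0
    return q > 0.5

# Path graph 0-1-2, uniform starting probabilities.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
sel = derandomize(A, np.full(3, 0.5))
```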
arXiv Detail & Related papers (2020-06-18T16:13:36Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.