Graph Minors Meet Machine Learning: the Power of Obstructions
- URL: http://arxiv.org/abs/2006.04689v1
- Date: Mon, 8 Jun 2020 15:40:04 GMT
- Title: Graph Minors Meet Machine Learning: the Power of Obstructions
- Authors: Faisal N. Abu-Khzam, Mohamed Mahmoud Abd El-Wahab and Noureldin Yosri
- Abstract summary: We show the utility of using obstructions for training neural networks.
Experiments show that training with obstructions results in a substantial reduction in the number of iterations needed for convergence.
- Score: 0.90238471756546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational intractability has for decades motivated the development of a
plethora of methodologies that mainly aimed at a quality-time trade-off. The
use of Machine Learning techniques has finally emerged as one of the possible
tools to obtain approximate solutions to ${\cal NP}$-hard combinatorial
optimization problems. In a recent article, Dai et al. introduced a method for
computing such approximate solutions for instances of the Vertex Cover problem.
In this paper we consider the effectiveness of selecting a proper training
strategy by considering special problem instances called "obstructions" that we
believe carry some intrinsic properties of the problem itself. Capitalizing on
the recent work of Dai et al. on the Vertex Cover problem, and using the same
case study as well as 19 other problem instances, we show the utility of using
obstructions for training neural networks. Experiments show that training with
obstructions results in a huge reduction in the number of iterations needed for
convergence, thus yielding a substantial reduction in the time needed for
training the model.
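The training strategy described above can be illustrated with a minimal, hypothetical sketch (not the authors' actual pipeline): assemble a training set of small, structurally hard graphs standing in for "obstructions", and label each with a cover produced by the classic maximal-matching 2-approximation for Vertex Cover. The use of small complete graphs as obstruction-like instances is purely an assumption for illustration.

```python
import itertools

def greedy_matching_vertex_cover(edges):
    """Classic 2-approximation: take both endpoints of every edge
    added to a greedily built maximal matching."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

def complete_graph_edges(n):
    """Edge list of the complete graph K_n; small cliques serve as
    stand-in 'obstruction-like' hard instances here."""
    return list(itertools.combinations(range(n), 2))

# Hypothetical training set: each instance paired with an approximate
# cover that could serve as a supervision label for a neural model.
training_set = [
    (complete_graph_edges(n),
     greedy_matching_vertex_cover(complete_graph_edges(n)))
    for n in range(3, 7)
]
```

Any cover returned this way touches every edge and is at most twice the optimum, so the labels are consistent even though they are not exact.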
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advances toward developing models that satisfy robustness requirements.
Our results show that rich parametrizations effectively mitigate the difficulties of constrained, non-convex learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - A Novel Differentiable Loss Function for Unsupervised Graph Neural
Networks in Graph Partitioning [5.22145960878624]
The graph partitioning problem is recognized as an NP-hard problem.
We introduce a novel pipeline employing an unsupervised graph neural network to solve the graph partitioning problem.
We rigorously evaluate our methodology against contemporary state-of-the-art techniques, focusing on the metrics of cut quality and balance, and our findings reveal that our approach is competitive with these leading methods.
arXiv Detail & Related papers (2023-12-11T23:03:17Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems, including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and yields more stable and accurate training.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handling streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Unsupervised Learning for Robust Fitting: A Reinforcement Learning
Approach [25.851792661168698]
We introduce a novel framework that learns to solve robust model fitting problems.
Unlike other methods, our work is agnostic to the underlying input features.
We empirically show that our method outperforms existing learning approaches.
arXiv Detail & Related papers (2021-03-05T07:14:00Z) - Reversible Action Design for Combinatorial Optimization with
Reinforcement Learning [35.50454156611722]
Reinforcement learning (RL) has recently emerged as a new framework to tackle combinatorial optimization problems.
We propose a general RL framework that not only exhibits state-of-the-art empirical performance but also generalizes to a variety of COPs.
arXiv Detail & Related papers (2021-02-14T18:05:42Z) - Contrastive Losses and Solution Caching for Predict-and-Optimize [19.31153168397003]
We use a Noise Contrastive approach to motivate a family of surrogate loss functions.
We address a major bottleneck of all predict-and-optimize approaches.
We show that even a very slow growth rate is enough to match the quality of state-of-the-art methods.
arXiv Detail & Related papers (2020-11-10T19:09:12Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - Physarum Powered Differentiable Linear Programming Layers and
Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.