Deep Learning Chromatic and Clique Numbers of Graphs
- URL: http://arxiv.org/abs/2108.01810v1
- Date: Wed, 4 Aug 2021 02:02:53 GMT
- Title: Deep Learning Chromatic and Clique Numbers of Graphs
- Authors: Jason Van Hulse, Joshua S. Friedman
- Abstract summary: We develop deep learning models to predict the chromatic number and maximum clique size of graphs.
We show that deep neural networks, and convolutional neural networks in particular, obtain strong performance on this problem.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have been applied to a wide range of problems across
different application domains with great success. Recently, research into
combinatorial optimization problems in particular has generated much interest
in the machine learning community. In this work, we develop deep learning
models to predict the chromatic number and maximum clique size of graphs, both
of which are classical NP-hard combinatorial optimization problems encountered
in graph theory. The neural networks are trained using the most
basic representation of the graph, the adjacency matrix, as opposed to
undergoing complex domain-specific feature engineering. The experimental
results show that deep neural networks, and in particular convolutional neural
networks, obtain strong performance on this problem.
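The abstract describes the approach only at a high level. As a concrete illustration, here is a minimal PyTorch sketch of the kind of model the paper describes: a small convolutional network that reads a fixed-size adjacency matrix and regresses a graph invariant such as the chromatic number. The architecture, input size `N`, loss, and all hyperparameters below are assumptions for illustration, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' code): a CNN that reads a graph's
# adjacency matrix and regresses its chromatic number; the same setup
# could be trained for maximum clique size. All sizes are assumptions.
import torch
import torch.nn as nn

N = 64  # assumed fixed vertex count; smaller graphs would be zero-padded

class AdjacencyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (N // 4) * (N // 4), 128), nn.ReLU(),
            nn.Linear(128, 1),  # predicted invariant (chromatic number here)
        )

    def forward(self, adj):  # adj: (batch, 1, N, N) binary adjacency matrices
        return self.head(self.features(adj)).squeeze(-1)

model = AdjacencyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # regression against the true chromatic number

# One illustrative training step on random symmetric matrices standing in
# for real labeled graphs (targets here are placeholders, not real labels):
adj = (torch.rand(8, 1, N, N) > 0.5).float()
adj = torch.maximum(adj, adj.transpose(-1, -2))  # symmetrize: undirected graphs
target = torch.randint(2, 10, (8,)).float()
opt.zero_grad()
loss = loss_fn(model(adj), target)
loss.backward()
opt.step()
```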
Related papers
- Transductive Spiking Graph Neural Networks for Loihi [0.8684584813982095]
We present a fully neuromorphic implementation of spiking graph neural networks designed for Loihi 2.
We showcase the performance benefits of combining neuromorphic Bayesian optimization with our approach for citation graph classification using fixed-precision spiking neurons.
arXiv Detail & Related papers (2024-04-25T21:15:15Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Graph Convolutional Networks from the Perspective of Sheaves and the Neural Tangent Kernel [0.0]
Graph convolutional networks are a popular class of deep neural network algorithms.
Despite their success, graph convolutional networks exhibit a number of peculiar features, including a bias towards learning oversmoothed and homophilic functions, which remain poorly understood.
We propose to bridge this gap in understanding by studying the neural tangent kernel of sheaf convolutional networks.
arXiv Detail & Related papers (2022-08-19T12:46:49Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to addressing the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Analyzing the Performance of Graph Neural Networks with Pipe Parallelism [2.269587850533721]
We focus on Graph Neural Networks (GNNs), which have found great success in tasks such as node or edge classification and link prediction.
New approaches for processing larger graphs are needed to advance graph learning techniques.
We study how GNNs could be parallelized using existing tools and frameworks that are known to be successful in the deep learning community.
arXiv Detail & Related papers (2020-12-20T04:20:38Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, performance tends to deteriorate as more graph convolution layers are stacked; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields; a rough sketch of this idea appears after this entry.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
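The summary above only names the mechanism. The following is a rough sketch of the adaptive-aggregation idea as commonly described for DAGNN: propagate already-transformed node features over several hops, keep the representation from every depth, and learn retainment scores that gate each depth's contribution. The class and variable names (`AdaptiveAggregation`, `hops`, `score`) are illustrative assumptions, not the authors' code.

```python
# Rough sketch of DAGNN-style adaptive aggregation (an assumption, not the
# authors' code): keep representations from every propagation depth and let a
# learned score decide how much each depth (receptive field) contributes.
import torch
import torch.nn as nn

class AdaptiveAggregation(nn.Module):
    def __init__(self, dim, hops):
        super().__init__()
        self.hops = hops
        self.score = nn.Linear(dim, 1)  # shared retainment-score projection

    def forward(self, x, adj_norm):
        # x: (nodes, dim) already-transformed features;
        # adj_norm: (nodes, nodes) symmetrically normalized adjacency matrix
        reps = [x]
        for _ in range(self.hops):
            reps.append(adj_norm @ reps[-1])        # one more hop of propagation
        stacked = torch.stack(reps, dim=1)          # (nodes, hops + 1, dim)
        gates = torch.sigmoid(self.score(stacked))  # (nodes, hops + 1, 1)
        return (gates * stacked).sum(dim=1)         # adaptively mixed depths

# Toy usage with a random undirected graph plus self-loops:
n, d = 5, 8
x = torch.randn(n, d)
a = (torch.rand(n, n) > 0.5).float()
a = torch.maximum(a, a.T)                # symmetrize
a.fill_diagonal_(1.0)                    # self-loops keep degrees positive
d_inv_sqrt = a.sum(1).pow(-0.5)
adj_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
out = AdaptiveAggregation(d, hops=3)(x, adj_norm)   # shape (5, 8)
```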
- Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structures surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z)
- Hcore-Init: Neural Network Initialization based on Graph Degeneracy [22.923756039561194]
We propose an adapted version of the k-core structure for the complete weighted multipartite graph extracted from a deep learning architecture.
As a multipartite graph is a combination of bipartite graphs, which are in turn the incidence graphs of hypergraphs, we design a k-hypercore decomposition; the classical k-core primitive it adapts is sketched after this entry.
arXiv Detail & Related papers (2020-04-16T12:57:14Z)
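The entry above adapts the k-core structure to weighted multipartite graphs. As a grounding reference, here is a sketch of the classical unweighted k-core (peeling) decomposition that the paper's k-hypercore builds on; the function name and edge-list representation are assumptions, and this is the textbook primitive rather than the paper's method.

```python
# Sketch of classical k-core (peeling) decomposition, the textbook primitive
# behind the paper's k-hypercore (an illustration, not the paper's method):
# repeatedly remove the minimum-degree vertex; the peeling level at which a
# vertex is removed is its core number.
from collections import defaultdict

def core_numbers(edges):
    """Core number of every vertex of an undirected graph given as edge pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    core, k = {}, 0
    while degree:
        v = min(degree, key=degree.get)  # vertex of minimum remaining degree
        k = max(k, degree[v])            # the peeling level never decreases
        core[v] = k
        for u in adj[v]:                 # remove v: lower its neighbors' degrees
            if u in degree:
                degree[u] -= 1
        del degree[v]
    return core

# Triangle with a pendant vertex: the triangle is the 2-core.
print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))  # {4: 1, 1: 2, 2: 2, 3: 2}
```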
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator; see the sketch after this entry.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
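The description of affine skip connections is concrete enough to sketch: the block's output is the sum of any graph convolution applied to the input and a fully connected (affine) transform of the same input. The sketch below, including the stand-in `SimpleGraphConv`, is an illustrative assumption rather than the paper's implementation.

```python
# Sketch of an affine skip connection as described (illustrative, not the
# paper's code): any graph convolution's output plus a fully connected
# (affine) transform of the block's input.
import torch
import torch.nn as nn

class AffineSkipBlock(nn.Module):
    def __init__(self, graph_conv, in_dim, out_dim):
        super().__init__()
        self.conv = graph_conv                    # any (x, adj) -> features operator
        self.affine = nn.Linear(in_dim, out_dim)  # learned affine skip path

    def forward(self, x, adj):
        return self.conv(x, adj) + self.affine(x)  # conv path + affine skip path

# A plain one-hop graph convolution standing in for "any operator":
class SimpleGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return self.lin(adj @ x)                  # aggregate neighbors, transform

n, d_in, d_out = 6, 4, 8
x, adj = torch.randn(n, d_in), torch.eye(n)       # identity adjacency as a stub
out = AffineSkipBlock(SimpleGraphConv(d_in, d_out), d_in, d_out)(x, adj)  # (6, 8)
```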
- Analyzing Neural Networks Based on Random Graphs [77.34726150561087]
We perform a massive evaluation of neural networks with architectures corresponding to random graphs of various types.
We find that none of the classical numerical graph invariants by itself suffices to single out the best networks.
We also find that networks with primarily short-range connections perform better than networks which allow for many long-range connections.
arXiv Detail & Related papers (2020-02-19T11:04:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.