Can Hybrid Geometric Scattering Networks Help Solve the Maximal Clique Problem?
- URL: http://arxiv.org/abs/2206.01506v1
- Date: Fri, 3 Jun 2022 11:09:03 GMT
- Title: Can Hybrid Geometric Scattering Networks Help Solve the Maximal Clique Problem?
- Authors: Yimeng Min, Frederik Wenkel, Michael Perlmutter, Guy Wolf
- Abstract summary: We propose a geometric scattering-based graph neural network (GNN) for approximating solutions of the NP-hard maximal clique (MC) problem.
Our empirical results demonstrate that our method outperforms representative GNN baselines in terms of solution accuracy and inference speed.
- Score: 12.953897563456271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a geometric scattering-based graph neural network (GNN) for
approximating solutions of the NP-hard maximal clique (MC) problem. We
construct a loss function with two terms, one which encourages the network to
find a large set of nodes and the other which acts as a surrogate for the
constraint that the nodes form a clique. We then use this loss to train a novel
GNN architecture that outputs a vector representing the probability for each
node to be part of the MC and apply a rule-based decoder to make our final
prediction. The incorporation of the scattering transform alleviates the
so-called oversmoothing problem that is often encountered in GNNs and would
degrade the performance of our proposed setup. Our empirical results
demonstrate that our method outperforms representative GNN baselines in terms
of solution accuracy and inference speed as well as conventional solvers like
GUROBI with limited time budgets.
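To make the two-term construction concrete, here is a minimal PyTorch sketch, assuming a dense 0/1 adjacency matrix without self-loops; the exact surrogate form, the weight `beta`, and the greedy decoder rule are illustrative readings of the abstract, not the paper's actual formulas or code.

```python
import torch

def clique_loss(p, adj, beta=1.0):
    """Two-term loss in the spirit of the abstract: one term rewards a large
    selected set, the other penalizes probability mass on non-edges as a
    differentiable surrogate for the clique constraint."""
    n = p.shape[0]
    size_term = -p.sum()                      # encourage selecting many nodes
    non_edges = 1.0 - adj - torch.eye(n)      # 1 exactly where (i, j) is not an edge
    clique_term = p @ non_edges @ p           # mass placed on forbidden pairs
    return size_term + beta * clique_term

def greedy_decode(p, adj):
    """Rule-based decoder sketch: visit nodes by descending probability and
    keep a node only if it is adjacent to every node already selected."""
    clique = []
    for i in torch.argsort(p, descending=True).tolist():
        if all(bool(adj[i, j]) for j in clique):
            clique.append(i)
    return clique
```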
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
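A minimal sketch of such a decomposition, assuming every node keeps a fixed set of `k` retained nodes; the module name, the learned per-node weights `alpha`, and the index-based selection are hypothetical stand-ins for the paper's mechanism.

```python
import torch
import torch.nn as nn

class SparseAggregator(nn.Module):
    """Approximate each node's aggregated representation by a weighted sum
    of linearly transformed features from a small subset of nodes."""

    def __init__(self, in_dim, out_dim, num_nodes, k):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.alpha = nn.Parameter(torch.full((num_nodes, k), 1.0 / k))

    def forward(self, x, retained_idx):
        # retained_idx: (num_nodes, k) indices of the nodes each node keeps.
        z = self.linear(x)                              # (n, out_dim)
        gathered = z[retained_idx]                      # (n, k, out_dim)
        return (self.alpha.unsqueeze(-1) * gathered).sum(dim=1)
```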
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
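One way to read the permutation-calibration idea is sketched below, with a simple degree sort standing in for the paper's learned, differentiable permutation; the class name and kernel size are illustrative.

```python
import torch
import torch.nn as nn

class CalibratedConv(nn.Module):
    """Reorder nodes into a canonical sequence, then apply an ordinary
    Euclidean (1-D) convolution along the node axis."""

    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)

    def forward(self, x, degrees):
        perm = torch.argsort(degrees)          # stand-in "calibration" permutation
        x_seq = x[perm].T.unsqueeze(0)         # (1, dim, n) sequence for Conv1d
        out = self.conv(x_seq).squeeze(0).T    # back to (n, dim)
        return out[torch.argsort(perm)]        # undo the permutation
```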
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Towards a General Recipe for Combinatorial Optimization with Multi-Filter GNNs [13.871690454501389]
We introduce GCON, a novel GNN architecture that leverages a complex filter bank and localized attention mechanisms to solve CO problems on graphs.
GCON is competitive across all tasks and consistently outperforms other specialized GNN-based approaches.
arXiv Detail & Related papers (2024-05-31T00:02:07Z)
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
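A rough sketch of one layer that adds a spatial (message-passing) filter to a spectrally parametrized one, assuming a precomputed, truncated Laplacian eigenbasis; the parametrization is illustrative, not the authors'.

```python
import torch
import torch.nn as nn

class SpatioSpectralLayer(nn.Module):
    """Sum of a one-hop spatial filter and a filter applied in a truncated
    spectral (Laplacian eigenvector) basis."""

    def __init__(self, dim, k_eigs):
        super().__init__()
        self.spatial = nn.Linear(dim, dim)
        self.spectral_gain = nn.Parameter(torch.ones(k_eigs))

    def forward(self, h, adj_norm, eigvecs):
        # Spatial part: one step of normalized neighborhood aggregation.
        h_spatial = self.spatial(adj_norm @ h)
        # Spectral part: rescale the k_eigs lowest-frequency components.
        coeffs = eigvecs.T @ h                                   # (k_eigs, dim)
        h_spectral = eigvecs @ (self.spectral_gain.unsqueeze(-1) * coeffs)
        return h_spatial + h_spectral
```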
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
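A minimal sketch of per-group weight matrices, with hypothetical degree boundaries defining the groups.

```python
import torch
import torch.nn as nn

class DegreeStratifiedLayer(nn.Module):
    """Apply a separate linear transform to the nodes of each degree group."""

    def __init__(self, in_dim, out_dim, boundaries=(2, 8)):
        super().__init__()
        self.register_buffer("boundaries", torch.tensor(boundaries))
        self.weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(len(boundaries) + 1)
        )

    def forward(self, h, degrees):
        groups = torch.bucketize(degrees, self.boundaries)   # bucket per node
        out = h.new_zeros(h.shape[0], self.weights[0].out_features)
        for g, lin in enumerate(self.weights):
            mask = groups == g
            out[mask] = lin(h[mask])                         # group-specific weights
        return out
```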
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- $\rm A^2Q$: Aggregation-Aware Quantization for Graph Neural Networks [18.772128348519566]
We propose the Aggregation-Aware mixed-precision Quantization ($\rm A^2Q$) for Graph Neural Networks (GNNs).
Our method can achieve up to 11.4% and 9.5% accuracy improvements on the node-level and graph-level tasks, respectively, and up to 2x speedup on a dedicated hardware accelerator.
arXiv Detail & Related papers (2023-02-01T02:54:35Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The $\rm scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
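One guess at what such a scale operation could look like, where `power` is an illustrative hyperparameter and the exact reshaping rule is not taken from the paper.

```python
import torch

def resnorm_scale(h, power=0.5, eps=1e-6):
    """Rescale each node so its feature-wise standard deviation (NStd)
    becomes NStd**power, pulling the long tail of the NStd distribution
    toward the head."""
    nstd = h.std(dim=1, keepdim=True) + eps   # per-node std across features
    return h / nstd * nstd.pow(power)
```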
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
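The core vector-quantization step might look like the sketch below; how the codebook is learned and combined with the low-rank convolution matrix is beyond this snippet.

```python
import torch

def vq_assign(h, codebook):
    """Replace each node representation with its nearest codeword, so
    downstream aggregation can work with a small set of codewords instead
    of all neighbor features."""
    dists = torch.cdist(h, codebook)    # (num_nodes, num_codewords)
    idx = dists.argmin(dim=1)           # nearest codeword per node
    return codebook[idx], idx
```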
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Edgeless-GNN: Unsupervised Inductive Edgeless Network Embedding [7.391641422048645]
We study the problem of embedding edgeless nodes such as users who newly enter the underlying network.
We propose Edgeless-GNN, a new framework that enables GNNs to generate node embeddings even for edgeless nodes through unsupervised inductive learning.
arXiv Detail & Related papers (2021-04-12T06:37:31Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum, and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)