Reply to: Modern graph neural networks do worse than classical greedy
algorithms in solving combinatorial optimization problems like maximum
independent set
- URL: http://arxiv.org/abs/2302.03602v1
- Date: Fri, 3 Feb 2023 17:21:02 GMT
- Title: Reply to: Modern graph neural networks do worse than classical greedy
algorithms in solving combinatorial optimization problems like maximum
independent set
- Authors: Martin J. A. Schuetz, J. Kyle Brubaker, Helmut G. Katzgraber
- Abstract summary: We argue that Chiara Angelini and Federico Ricci-Tersenghi's comment singles out one particular non-representative example problem.
We provide results showing run-time scaling superior to the results provided by Angelini and Ricci-Tersenghi.
We argue that the internal (parallel) anatomy of graph neural networks is very different from the (sequential) nature of greedy algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide a comprehensive reply to the comment written by Chiara Angelini
and Federico Ricci-Tersenghi [arXiv:2206.13211] and argue that the comment
singles out one particular non-representative example problem, entirely
focusing on the maximum independent set (MIS) on sparse graphs, for which
greedy algorithms are expected to perform well. Conversely, we highlight the
broader algorithmic development underlying our original work, and (within our
original framework) provide additional numerical results showing sizable
improvements over our original results, thereby refuting the comment's
performance statements. We also provide results showing run-time scaling
superior to the results provided by Angelini and Ricci-Tersenghi. Furthermore,
we show that the proposed set of random d-regular graphs does not provide a
universal set of benchmark instances, nor do greedy heuristics provide a
universal algorithmic baseline. Finally, we argue that the internal (parallel)
anatomy of graph neural networks is very different from the (sequential) nature
of greedy algorithms and emphasize that graph neural networks have demonstrated
their potential for superior scalability compared to existing heuristics such
as parallel tempering. We conclude by discussing the conceptual novelty of our
work and outlining some potential extensions.
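To make the comparison discussed in this reply concrete, the following is a minimal illustrative sketch (not the authors' code): it builds a random d-regular graph of the kind used as benchmark instances in this exchange, runs a simple minimum-degree greedy heuristic as the classical MIS baseline, and evaluates the relaxed, QUBO-style objective that physics-inspired GNN approaches of the kind described in the original work minimize over soft node assignments. The function names, penalty weight, and graph size are illustrative assumptions.

```python
# Illustrative sketch only: greedy MIS baseline vs. the relaxed objective a GNN would optimize.
import networkx as nx


def greedy_mis(G):
    """Greedy baseline: repeatedly pick a minimum-degree node, add it to the
    independent set, then delete it together with its neighbors."""
    H = G.copy()
    independent_set = set()
    while H.number_of_nodes() > 0:
        v = min(H.nodes, key=H.degree)  # lowest remaining degree (sequential choice)
        independent_set.add(v)
        H.remove_nodes_from(list(H.neighbors(v)) + [v])
    return independent_set


def relaxed_mis_objective(G, p, penalty=2.0):
    """QUBO-style soft MIS objective of the form used in unsupervised GNN
    training: reward selected nodes, penalize selecting both endpoints of an
    edge. `p` maps each node to a probability in [0, 1]."""
    reward = sum(p[v] for v in G.nodes)
    conflict = sum(p[u] * p[v] for u, v in G.edges)
    return -reward + penalty * conflict


if __name__ == "__main__":
    d, n = 3, 1000  # assumed benchmark parameters, chosen for illustration
    G = nx.random_regular_graph(d, n, seed=0)
    mis = greedy_mis(G)
    print(f"greedy independent set size on a {d}-regular graph: {len(mis)}")
    # A trained GNN would output p; a uniform guess just demonstrates the objective call.
    p = {v: 0.5 for v in G.nodes}
    print("relaxed objective at p = 0.5:", relaxed_mis_objective(G, p))
```

In the GNN setting, the node probabilities p would come from the network and be trained by gradient descent on this relaxed objective (updating all nodes in parallel) before rounding to a hard independent set; the greedy routine serves only as the sequential classical baseline referred to above.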
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Unitary convolutions for learning on graphs and groups [0.9899763598214121]
We study unitary group convolutions, which allow for deeper networks that are more stable during training.
The main focus of the paper are graph neural networks, where we show that unitary graph convolutions provably avoid over-smoothing.
Our experimental results confirm that unitary graph convolutional networks achieve competitive performance on benchmark datasets.
arXiv Detail & Related papers (2024-10-07T21:09:14Z)
- Improving the interpretability of GNN predictions through conformal-based graph sparsification [9.550589670316523]
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks.
We propose a GNN training approach that finds the most predictive subgraph by removing edges and/or nodes.
We rely on reinforcement learning to solve the resulting bi-level optimization with a reward function based on conformal predictions.
arXiv Detail & Related papers (2024-04-18T17:34:47Z)
- Generalization in Graph Neural Networks: Improved PAC-Bayesian Bounds on Graph Diffusion [17.70434437597516]
We present generalization bounds that scale with the largest singular value of the graph neural network's feature diffusion matrix.
These bounds are numerically much smaller than prior bounds for real-world graphs.
We measure the stability of graph neural networks against noise perturbations using Hessians.
arXiv Detail & Related papers (2023-02-09T05:54:17Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results [58.277290855841976]
We study tradeoffs between the computational cost and expressive power of Graph Neural Networks (GNNs).
We show that a new model can count subgraphs of size $k$, thereby overcoming a known limitation of low-order GNNs.
In several cases, the proposed algorithm can greatly reduce computational complexity compared to the existing higher-order $k$-GNNs.
arXiv Detail & Related papers (2020-12-06T03:42:54Z)
- Revisiting Graph Convolutional Network on Semi-Supervised Node Classification from an Optimization Perspective [10.178145000390671]
Graph convolutional networks (GCNs) have achieved promising performance on various graph-based tasks.
However, they suffer from over-smoothing when stacking more layers.
We present a quantitative study on this observation and develop novel insights towards the deeper GCN.
arXiv Detail & Related papers (2020-09-24T03:36:43Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.