Graph Neural Networks vs Convolutional Neural Networks for Graph Domination Number Prediction
- URL: http://arxiv.org/abs/2511.18150v1
- Date: Sat, 22 Nov 2025 18:34:32 GMT
- Title: Graph Neural Networks vs Convolutional Neural Networks for Graph Domination Number Prediction
- Authors: Randy Davila, Beyzanur Ispir
- Abstract summary: We investigate machine learning approaches to approximating the \emph{domination number} of graphs, the minimum size of a dominating set. We compare two neural paradigms: Convolutional Neural Networks (CNNs), which operate on adjacency matrix representations, and Graph Neural Networks (GNNs), which learn directly from graph structure through message passing. Both models offer substantial speedups over exact solvers, with GNNs delivering more than $200\times$ acceleration while retaining near-perfect fidelity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate machine learning approaches to approximating the \emph{domination number} of graphs, the minimum size of a dominating set. Exact computation of this parameter is NP-hard, restricting classical methods to small instances. We compare two neural paradigms: Convolutional Neural Networks (CNNs), which operate on adjacency matrix representations, and Graph Neural Networks (GNNs), which learn directly from graph structure through message passing. Across 2,000 random graphs with up to 64 vertices, GNNs achieve markedly higher accuracy ($R^2=0.987$, MAE $=0.372$) than CNNs ($R^2=0.955$, MAE $=0.500$). Both models offer substantial speedups over exact solvers, with GNNs delivering more than $200\times$ acceleration while retaining near-perfect fidelity. Our results position GNNs as a practical surrogate for combinatorial graph invariants, with implications for scalable graph optimization and mathematical discovery.
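For context, the target both models learn to predict can only be computed exactly by exhaustive methods on small instances, consistent with the NP-hardness noted above. Below is a minimal, hedged sketch of such a brute-force baseline; the use of networkx and the helper name `domination_number` are illustrative assumptions, not the authors' solver.

```python
# Hedged sketch: exact domination number by exhaustive search.
# An illustrative baseline, not the paper's exact solver.
import itertools

import networkx as nx

def domination_number(G: nx.Graph) -> int:
    """Smallest k such that some k-subset of vertices dominates G.

    Exponential in |V|, so practical only for small graphs -- which is
    precisely why the paper trains neural surrogates instead.
    """
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for subset in itertools.combinations(nodes, k):
            # S dominates G if every vertex is in S or adjacent to S.
            if nx.is_dominating_set(G, subset):
                return k
    return 0  # reached only for the empty graph

# Example: the domination number of the 4-cycle is 2.
print(domination_number(nx.cycle_graph(4)))  # -> 2
```

Because the search space grows exponentially with the vertex count, a learned surrogate that answers in a single forward pass is the source of the reported $200\times$-plus speedups.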
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational and Temporal Graphs [8.095679736030146]
We investigate $\mathcal{FOC}_2$, a fragment of first-order logic with two variables and counting quantifiers.
We propose a simple graph transformation technique, akin to a preprocessing step, which can be executed in linear time.
Our results consistently demonstrate that R$^2$-GNN with the graph transformation outperforms the baseline methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-11-03T00:33:24Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs drawn from the graphon via independent Bernoulli edge sampling.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient because they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z) - Random Features Strengthen Graph Neural Networks [40.60905158071766]
Graph neural networks (GNNs) are powerful machine learning models for various graph learning tasks.
In this paper, we demonstrate that GNNs become powerful just by adding a random feature to each node.
We show that the addition of random features enables GNNs to solve various problems that normal GNNs, including graph convolutional networks (GCNs) and graph isomorphism networks (GINs), cannot solve; a minimal sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-02-08T12:47:29Z)
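To make the message-passing paradigm from the main abstract concrete, the following is a minimal, hedged sketch of a GNN that regresses a graph-level scalar such as the domination number, feeding it random node features in the spirit of the entry above. The architecture, layer widths, and feature dimension are illustrative assumptions, not a model published by either paper.

```python
# Hedged sketch: a tiny mean-aggregation GNN regressing a graph-level
# scalar (e.g., the domination number). All sizes are assumptions.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, in_dim: int = 8, hidden: int = 32, layers: int = 3):
        super().__init__()
        dims = [in_dim] + [hidden] * layers
        self.lins = nn.ModuleList(
            nn.Linear(d_in, d_out) for d_in, d_out in zip(dims, dims[1:])
        )
        self.readout = nn.Linear(hidden, 1)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Row-normalized adjacency with self-loops: each node averages
        # over itself and its neighbors (one round of message passing).
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(dim=1, keepdim=True)
        for lin in self.lins:
            x = torch.relu(lin(a @ x))
        # Sum pooling yields a graph-level embedding, then one scalar.
        return self.readout(x.sum(dim=0))

# Random node features, as in "Random Features Strengthen Graph Neural
# Networks": they break symmetry between indistinguishable nodes.
n = 16
adj = (torch.rand(n, n) < 0.3).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.t()          # symmetric adjacency, zero diagonal
x = torch.rand(n, 8)         # random input features
model = TinyGNN()
print(model(adj, x).item())  # untrained scalar prediction
```

Sum pooling is a natural readout for size-sensitive targets like the domination number, which tends to grow with the number of vertices, whereas mean pooling would discard that size information.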