Node Centrality Approximation For Large Networks Based On Inductive
Graph Neural Networks
- URL: http://arxiv.org/abs/2403.04977v1
- Date: Fri, 8 Mar 2024 01:23:12 GMT
- Title: Node Centrality Approximation For Large Networks Based On Inductive
Graph Neural Networks
- Authors: Yiwei Zou, Ting Li, Zong-fu Luo
- Abstract summary: Closeness Centrality (CC) and Betweenness Centrality (BC) are crucial metrics in network analysis.
Their practical implementation on extensive networks remains computationally demanding due to their high time complexity.
We propose CNCA-IGE, an inductive graph encoder-decoder model that ranks nodes by a specified CC or BC metric.
- Score: 2.4012886591705738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Closeness Centrality (CC) and Betweenness Centrality (BC) are crucial metrics
in network analysis, providing an essential reference for discerning the
significance of nodes within complex networks. These measures find wide
applications in critical tasks, such as community detection and network
dismantling. However, their practical implementation on extensive networks
remains computationally demanding due to their high time complexity. To
mitigate these computational challenges, numerous approximation algorithms have
been developed to expedite the computation of CC and BC. Nevertheless, even
these approximations still necessitate substantial processing time when applied
to large-scale networks. Furthermore, their output proves sensitive to even
minor perturbations within the network structure.
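To make this cost concrete, here is a minimal sketch (not from the paper) of the exact baselines and of NetworkX's pivot-sampling approximation of BC; the graph generator, graph size, and sample count are illustrative assumptions.

```python
# Illustrative only: exact CC/BC and a sampling-based BC approximation.
import networkx as nx

G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=42)  # toy stand-in for a large network

cc = nx.closeness_centrality(G)    # one shortest-path pass per node: O(n(n + m)) unweighted
bc = nx.betweenness_centrality(G)  # Brandes' algorithm: O(nm) on unweighted graphs

# Sampling k pivot sources trades accuracy for speed, but the cost still grows
# with graph size and the estimates remain sensitive to small edge perturbations.
bc_approx = nx.betweenness_centrality(G, k=200, seed=42)

# The quantity targeted below is the node *ranking* induced by these scores:
top10 = sorted(bc, key=bc.get, reverse=True)[:10]
```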
In this work, we recast the CC and BC node-ranking problem as a machine
learning task and propose CNCA-IGE, an encoder-decoder model built on
inductive graph neural networks that ranks nodes by a specified CC or BC
metric. We incorporate the MLP-Mixer model as the decoder
in the BC ranking prediction task to enhance the model's robustness and
capacity. Our approach is evaluated on diverse synthetic and real-world
networks of varying scales, and the experimental results demonstrate that the
CNCA-IGE model outperforms state-of-the-art baseline models, significantly
reducing execution time while improving performance.
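As a rough illustration of the architecture the abstract describes, the following is a minimal sketch, assuming a GraphSAGE-style inductive encoder, a plain MLP score head, and a pairwise ranking loss; it is not the authors' code, and CNCA-IGE's actual input features, layer choices, and MLP-Mixer BC decoder are specified in the paper.

```python
# Hypothetical encoder-decoder centrality ranker (assumptions noted above).
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv  # inductive, neighbourhood-aggregating layer

class CentralityRanker(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        self.enc1 = SAGEConv(in_dim, hid_dim)
        self.enc2 = SAGEConv(hid_dim, hid_dim)
        self.dec = nn.Sequential(            # simple MLP decoder (e.g., for the CC task)
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1)
        )

    def forward(self, x, edge_index):
        h = torch.relu(self.enc1(x, edge_index))
        h = torch.relu(self.enc2(h, edge_index))
        return self.dec(h).squeeze(-1)       # one relative score per node

def pairwise_rank_loss(scores, target, n_pairs: int = 1024):
    """Penalize sampled node pairs whose predicted order contradicts the target."""
    i = torch.randint(0, scores.size(0), (n_pairs,))
    j = torch.randint(0, scores.size(0), (n_pairs,))
    sign = torch.sign(target[i] - target[j])   # ground-truth ordering of each pair
    return torch.relu(-sign * (scores[i] - scores[j])).mean()
```

A ranker trained this way is typically scored against the exact ordering with rank-correlation measures such as Kendall's tau or top-k% overlap; the abstract does not state this paper's exact evaluation protocol.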
Related papers
- CCDepth: A Lightweight Self-supervised Depth Estimation Network with Enhanced Interpretability [11.076431337488973]
This study proposes a novel hybrid self-supervised depth estimation network, CCDepth, comprising convolutional neural networks (CNNs) and the white-box CRATE network.
This novel network uses CNNs and CRATE modules to extract local and global information from images, respectively, thereby boosting learning efficiency and reducing model size.
arXiv Detail & Related papers (2024-09-30T04:19:40Z)
- Advanced Financial Fraud Detection Using GNN-CL Model [13.5240775562349]
The innovative GNN-CL model proposed in this paper marks a breakthrough in the field of financial fraud detection.
It combines the advantages of graph neural networks (GNNs), convolutional neural networks (CNNs), and long short-term memory (LSTM) networks.
A key novelty of this paper is the use of multilayer perceptrons (MLPs) to estimate node similarity.
arXiv Detail & Related papers (2024-07-09T03:59:06Z)
- Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss [0.0]
We introduce a simple, efficient, and clustering-oriented model based on an unsupervised Graph Attention AutoEncoder for community detection in attributed networks.
The proposed model adeptly learns representations from both the network's topology and attribute information, simultaneously addressing dual objectives: reconstruction and community discovery.
arXiv Detail & Related papers (2023-11-21T20:45:55Z)
- DANI: Fast Diffusion Aware Network Inference with Preserving Topological Structure Property [2.8948274245812327]
We propose a novel method called DANI to infer the underlying network while preserving its structural properties.
DANI has higher accuracy and lower run time while maintaining structural properties, including modular structure, degree distribution, connected components, density, and clustering coefficients.
arXiv Detail & Related papers (2023-10-02T23:23:00Z)
- Spike-and-slab shrinkage priors for structurally sparse Bayesian neural networks [0.16385815610837165]
Sparse deep learning addresses these challenges by recovering a sparse representation of the underlying target function.
Deep neural architectures compressed via structured sparsity provide low latency inference, higher data throughput, and reduced energy consumption.
We propose structurally sparse Bayesian neural networks which prune excessive nodes with (i) Spike-and-Slab Group Lasso (SS-GL), and (ii) Spike-and-Slab Group Horseshoe (SS-GHS) priors.
arXiv Detail & Related papers (2023-08-17T17:14:18Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow one to construct model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
It combines this network with a dueling double deep Q-network to create an end-to-end algorithm that solves CNP for the first time.
arXiv Detail & Related papers (2021-12-03T14:23:05Z)
- Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
arXiv Detail & Related papers (2021-08-20T11:53:05Z)
- On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both training set size and model size significantly improves robustness to distributional shift.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
- Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution [79.97180849505294]
We propose a novel coupled unmixing network with a cross-attention mechanism, CUCaNet, to enhance the spatial resolution of HSI (a generic sketch of cross-attention follows this list).
Experiments are conducted on three widely-used HS-MS datasets in comparison with state-of-the-art HSI-SR models.
arXiv Detail & Related papers (2020-07-10T08:08:20Z)
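To illustrate the cross-attention ingredient named in the last entry, here is a generic sketch of the mechanism (not CUCaNet's actual architecture); all dimensions and the two-branch framing are assumptions.

```python
# Generic cross-attention between two feature branches (illustrative only).
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, a, b):
        # Tokens of branch `a` query tokens of branch `b`.
        out, _ = self.attn(query=a, key=b, value=b)
        return self.norm(a + out)  # residual connection, as is conventional

x = torch.randn(2, 100, 64)      # e.g., tokens from a hyperspectral branch (assumed)
y = torch.randn(2, 400, 64)      # e.g., tokens from a multispectral branch (assumed)
fused = CrossAttention()(x, y)   # -> shape (2, 100, 64)
```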