Approximating Network Centrality Measures Using Node Embedding and
Machine Learning
- URL: http://arxiv.org/abs/2006.16392v4
- Date: Sun, 1 Nov 2020 19:32:15 GMT
- Title: Approximating Network Centrality Measures Using Node Embedding and
Machine Learning
- Authors: Matheus R. F. Mendonça, André M. S. Barreto, and Artur Ziviani
- Abstract summary: We propose an approach for efficiently approximating node centralities for large networks using Neural Networks and Graph Embedding techniques.
Our proposed model, entitled Network Centrality Approximation using Graph Embedding (NCA-GE), uses the adjacency matrix of a graph and a set of features for each node.
NCA-GE has a time complexity of $O(|E|)$, $E$ being the set of edges of a graph, making it suitable for large networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting information from real-world large networks is a key challenge
nowadays. For instance, computing a node centrality may become infeasible,
depending on the intended centrality due to its computational cost. One
solution is to develop fast methods capable of approximating network
centralities. Here, we propose an approach for efficiently approximating node
centralities for large networks using Neural Networks and Graph Embedding
techniques. Our proposed model, entitled Network Centrality Approximation using
Graph Embedding (NCA-GE), uses the adjacency matrix of a graph and a set of
features for each node (here, we use only the degree) as input and computes the
approximate desired centrality rank for every node. NCA-GE has a time
complexity of $O(|E|)$, $E$ being the set of edges of a graph, making it
suitable for large networks. NCA-GE also trains quickly, requiring only a
training set of a thousand small synthetic scale-free graphs (100 to 1,000
nodes each), and it works well across different node centralities, network
sizes, and topologies. Finally, we compare our approach against the
state-of-the-art method that approximates centrality ranks using the degree
and eigenvector centralities as input, and we show that NCA-GE outperforms it
in a variety of scenarios.
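The data flow described in the abstract (degree features as input, an embedding built by passing messages along edges, nodes ranked by the result) can be illustrated with a minimal, dependency-free sketch. The fixed 0.5/0.5 aggregation rule and the toy graph below are illustrative stand-ins, not the authors' trained architecture; each layer costs one pass over the edge set, which is where the $O(|E|)$ complexity comes from.

```python
# Minimal sketch of degree-feature message passing, O(|E|) per layer.
# NCA-GE's learned weights are replaced by a fixed aggregation rule,
# so this only illustrates the data flow, not the trained model.

def approx_centrality_rank(edges, n_nodes, n_layers=2):
    """Rank nodes by an embedding built from degree features."""
    # Node feature: degree (the only input feature NCA-GE uses).
    deg = [0.0] * n_nodes
    for u, v in edges:
        deg[u] += 1.0
        deg[v] += 1.0

    h = deg[:]  # initial embedding = degree
    for _ in range(n_layers):
        agg = [0.0] * n_nodes
        for u, v in edges:          # one pass over E -> O(|E|)
            agg[u] += h[v]
            agg[v] += h[u]
        # Combine self feature with neighbor aggregate (an untrained
        # stand-in for a learned GNN layer).
        h = [0.5 * h[i] + 0.5 * agg[i] for i in range(n_nodes)]

    # Rank nodes by final score, highest first.
    return sorted(range(n_nodes), key=lambda i: -h[i])

# Toy star-plus-path graph: node 0 is the hub.
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
ranking = approx_centrality_rank(edges, 5)
print(ranking[0])  # 0: the hub ranks first
```

Swapping the fixed combination for trained weights, and the final score for a regression head on each centrality of interest, recovers the general shape of the approach while keeping the per-layer cost linear in the number of edges.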
Related papers
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
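NodeFormer's kernelized operator builds on the standard Gumbel-Softmax relaxation. As a point of reference only, here is a minimal stdlib sketch of the plain (non-kernelized) operator; the temperature value and logits are illustrative, and the kernelized variant in the paper differs:

```python
import math
import random

def gumbel_softmax(logits, temperature=0.5):
    """Sample a soft one-hot vector: perturb logits with Gumbel noise,
    then apply a tempered softmax (the standard relaxation)."""
    def gumbel():
        # Gumbel(0, 1) noise: -log(-log(U)), U clamped into (0, 1)
        u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
        return -math.log(-math.log(u))

    scaled = [(l + gumbel()) / temperature for l in logits]
    m = max(scaled)                         # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]        # soft one-hot vector

probs = gumbel_softmax([2.0, 0.5, 0.1])
print(abs(sum(probs) - 1.0) < 1e-9)  # True: a valid probability vector
```

Lowering the temperature pushes the output toward a hard one-hot sample while keeping the operation differentiable, which is what makes it usable inside a message-passing scheme.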
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net)
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-19T09:08:43Z)
- Inferential SIR-GN: Scalable Graph Representation Learning [0.4699313647907615]
Graph representation learning methods generate numerical vector representations for the nodes in a network.
In this work, we propose Inferential SIR-GN, a model which is pre-trained on random graphs, then computes node representations rapidly.
We demonstrate that the model captures a node's structural role information and shows excellent performance on node and graph classification tasks on unseen networks.
arXiv Detail & Related papers (2021-11-08T20:56:37Z)
- Position-based Hash Embeddings For Scaling Graph Neural Networks [8.87527266373087]
Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be used.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required.
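The memory saving behind hashing-based embedding layers can be sketched with a toy hashed lookup table: many node ids share rows of a small table, cutting memory from one row per node to one row per bucket. The class name, bucket count, and hash choice below are illustrative; the position-aware scheme in the paper adds structure on top of this basic idea:

```python
import hashlib

class HashedEmbedding:
    """Toy hashed embedding: n nodes share n_buckets rows of storage,
    so memory is O(n_buckets * dim) instead of O(n * dim)."""

    def __init__(self, n_buckets, dim):
        self.n_buckets = n_buckets
        # One small shared table instead of one row per node.
        self.table = [[0.0] * dim for _ in range(n_buckets)]

    def _bucket(self, node_id):
        # Deterministic hash of the node id into a bucket index.
        digest = hashlib.md5(str(node_id).encode()).hexdigest()
        return int(digest, 16) % self.n_buckets

    def lookup(self, node_id):
        return self.table[self._bucket(node_id)]

emb = HashedEmbedding(n_buckets=8, dim=4)
print(len(emb.lookup(123456)))  # 4: every node id gets a dim-4 vector
# However many node ids we look up, at most 8 distinct rows are touched.
```

The cost of this sharing is collisions (unrelated nodes getting the same row), which is exactly what position-aware variants try to mitigate.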
arXiv Detail & Related papers (2021-08-31T22:42:25Z)
- Large-Scale Network Embedding in Apache Spark [1.3769786711365102]
We propose an efficient and effective distributed algorithm for network embedding on large graphs using Apache Spark.
We demonstrate in various experiments that our proposed approach is able to handle graphs with billions of edges within a few hours and is at least 4 times faster than the state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-20T04:42:24Z)
- A QUBO formulation for top-$\tau$ eigencentrality nodes [0.0]
We lay the foundations for solving the eigencentrality problem of ranking the
importance of the nodes of a network, with scores given by the principal
eigenvector of the network's adjacency matrix.
The problem is reformulated as a quadratic unconstrained binary optimization
(QUBO) that can be solved on both of the quantum architectures considered.
The results focus on correctly identifying a given number of the most
important nodes in numerous networks, using the sparse vector solution of our
QUBO formulation of the problem of identifying the top-$\tau$ highest
eigencentrality nodes, on both the D-Wave and IBM quantum computers.
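The selection problem can be made concrete with a small, classically brute-forced sketch: choose $\tau$ nodes maximizing total centrality score, with a quadratic penalty enforcing that exactly $\tau$ binary variables are set. The scores `v`, the penalty weight `P`, and the exact penalty form are illustrative assumptions; the paper's formulation may differ in detail.

```python
from itertools import product

def build_qubo(v, tau, P=10.0):
    """QUBO for: minimize -sum(v[i]*x[i]) + P*(sum(x) - tau)^2.
    Expanding the penalty (and using x_i^2 = x_i for binaries) gives
    diagonal terms -v[i] + P*(1 - 2*tau) and off-diagonal terms 2*P;
    the constant P*tau^2 is dropped."""
    n = len(v)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = -v[i] + P * (1 - 2 * tau)
        for j in range(i + 1, n):
            Q[i][j] = 2 * P
    return Q

def brute_force_solve(Q):
    """Exhaustive minimizer, feasible only for tiny n; a quantum
    annealer or gate-based sampler would replace this step."""
    n = len(Q)
    best, best_x = float("inf"), None
    for x in product([0, 1], repeat=n):
        e = sum(Q[i][j] * x[i] * x[j]
                for i in range(n) for j in range(i, n))
        if e < best:
            best, best_x = e, x
    return best_x

v = [0.9, 0.1, 0.7, 0.3]          # toy centrality scores
x = brute_force_solve(build_qubo(v, tau=2))
print([i for i, b in enumerate(x) if b])  # [0, 2]: the two top-scoring nodes
```

The same matrix `Q` could be handed to an annealer or a QAOA-style circuit; only the solver changes, not the formulation.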
arXiv Detail & Related papers (2021-05-01T05:35:44Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Rather than routing every input through the same fixed path, DG-Net aggregates features dynamically at each node, which gives the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear, local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.