CenGCN: Centralized Convolutional Networks with Vertex Imbalance for
Scale-Free Graphs
- URL: http://arxiv.org/abs/2202.07826v1
- Date: Wed, 16 Feb 2022 02:18:16 GMT
- Title: CenGCN: Centralized Convolutional Networks with Vertex Imbalance for
Scale-Free Graphs
- Authors: Feng Xia, Lei Wang, Tao Tang, Xin Chen, Xiangjie Kong, Giles Oatley,
Irwin King
- Abstract summary: We propose a novel centrality-based framework named CenGCN to address the inequality of information.
We present two variants CenGCN_D and CenGCN_E, based on degree centrality and eigenvector centrality, respectively.
Results demonstrate that the two variants significantly outperform state-of-the-art baselines.
- Score: 38.427695265783726
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Convolutional Networks (GCNs) have achieved impressive performance in a
wide variety of areas, attracting considerable attention. The core step of GCNs
is the information-passing framework, which treats all information passed from
neighbors to the central vertex as equally important. Such equal importance,
however, is inadequate for scale-free networks, where hub vertices propagate
more dominant information due to vertex imbalance. In this paper, we propose a
novel centrality-based framework named CenGCN to address the inequality of
information. This framework first quantifies the similarity between hub
vertices and their neighbors by label propagation with hub vertices. Based on
this similarity and centrality indices, the framework transforms the graph by
increasing or decreasing the weights of edges connecting hub vertices and
adding self-connections to vertices. In each non-output layer of the GCN, this
framework uses a hub attention mechanism to assign new weights to connected
non-hub vertices based on their common information with hub vertices. We
present two variants, CenGCN_D and CenGCN_E, based on degree centrality and
eigenvector centrality, respectively. We also conduct comprehensive
experiments, including vertex classification, link prediction, vertex
clustering, and network visualization. The results demonstrate that the two
variants significantly outperform state-of-the-art baselines.
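The graph-transformation step described in the abstract can be sketched roughly as follows. This is an illustrative approximation only, not the authors' code: the hub threshold (`hub_quantile`), the uniform placeholder similarity, and the scaling rule are all assumptions standing in for the paper's actual centrality-and-similarity formulas.

```python
import numpy as np

def transform_graph(A, hub_quantile=0.9, sim=None):
    """Re-weight edges incident to hub vertices and add self-connections.

    A: dense adjacency matrix (n x n).
    hub_quantile and the uniform similarity fallback are illustrative
    assumptions, not values from the paper.
    """
    n = A.shape[0]
    degree = A.sum(axis=1)
    threshold = np.quantile(degree, hub_quantile)
    hubs = degree >= threshold  # degree centrality as the hub criterion
    if sim is None:
        sim = np.full((n, n), 0.5)  # placeholder hub-neighbor similarity in [0, 1]
    W = A.astype(float).copy()
    for u in range(n):
        for v in range(n):
            if W[u, v] and (hubs[u] or hubs[v]):
                # scale the edge up when similarity is high, down when low
                W[u, v] *= 2.0 * sim[u, v]
    W += np.eye(n)  # self-connections for every vertex
    return W

# Tiny star graph: vertex 0 is the hub.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
W = transform_graph(A)
```

The transformed `W` would then replace the raw adjacency matrix inside the usual GCN propagation rule.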
Related papers
- Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention [27.29964380651613]
We envision the graph as a network of interconnected node sets without compressing each cluster into a single embedding.
To enable effective information transfer among these node sets, we propose the Node-to-Cluster Attention (N2C-Attn) mechanism.
We show how N2C-Attn combines bi-level feature maps of queries and keys, demonstrating its capability to merge dual-granularity information.
arXiv Detail & Related papers (2024-10-09T10:30:01Z) - A multi-core periphery perspective: Ranking via relative centrality [4.33459568143131]
Community and core-periphery are two widely studied graph structures.
The potential of inferring a graph's core-periphery structure to aid understanding of its community structure is not well utilized.
We introduce a novel framework for graphs with ground-truth communities, where each community has a densely connected part (the core) and a sparser remainder (the periphery).
arXiv Detail & Related papers (2024-06-06T20:21:27Z) - Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Most models estimate an initial graph beforehand in order to apply a GCN.
Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z) - Graph Spectral Embedding using the Geodesic Betweeness Centrality [76.27138343125985]
We introduce the Graph Sylvester Embedding (GSE), an unsupervised graph representation of local similarity, connectivity, and global structure.
GSE uses the solution of the Sylvester equation to capture both network structure and neighborhood proximity in a single representation.
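The Sylvester equation at the heart of GSE has the form A X + X B = Q. The snippet below only demonstrates solving such an equation via the standard Kronecker-product identity; the particular matrices A, B, Q that GSE builds from the graph are not shown here and this is not the GSE construction itself.

```python
import numpy as np

def solve_sylvester_kron(A, B, Q):
    """Solve A X + X B = Q using the identity
    (I ⊗ A + B^T ⊗ I) vec(X) = vec(Q), with column-stacked vec."""
    m, n = Q.shape
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    x = np.linalg.solve(K, Q.flatten(order="F"))
    return x.reshape((m, n), order="F")

# Small worked example with diagonal A and B = I.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.eye(2)
Q = np.array([[3.0, 4.0], [4.0, 8.0]])
X = solve_sylvester_kron(A, B, Q)
```

In practice a dedicated routine such as `scipy.linalg.solve_sylvester` would be preferred for larger systems, since the Kronecker formulation is O(n^6).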
arXiv Detail & Related papers (2022-05-07T04:11:23Z) - A Modular Framework for Centrality and Clustering in Complex Networks [0.6423239719448168]
In this paper, we study two important such network analysis techniques, namely, centrality and clustering.
An information-flow based model is adopted for clustering, which itself builds upon an information theoretic measure for computing centrality.
Our clustering naturally inherits the flexibility to accommodate edge directionality, as well as different interpretations and interplay between edge weights and node degrees.
arXiv Detail & Related papers (2021-11-23T03:01:29Z) - Feature Correlation Aggregation: on the Path to Better Graph Neural
Networks [37.79964911718766]
Prior to the introduction of Graph Neural Networks (GNNs), modeling and analyzing irregular data, particularly graphs, was thought to be the Achilles' heel of deep learning.
This paper introduces a central node permutation variant function through a frustratingly simple and innocent-looking modification to the core operation of a GNN.
A tangible boost in performance of the model is observed where the model surpasses previous state-of-the-art results by a significant margin while employing fewer parameters.
arXiv Detail & Related papers (2021-09-20T05:04:26Z) - SPAGAN: Shortest Path Graph Attention Network [187.75441278910708]
Graph convolutional networks (GCN) have recently demonstrated their potential in analyzing non-grid structure data that can be represented as graphs.
We propose a novel GCN model, which we term the Shortest Path Graph Attention Network (SPAGAN).
arXiv Detail & Related papers (2021-01-10T03:18:34Z) - Learning to Cluster Faces via Confidence and Connectivity Estimation [136.5291151775236]
We propose a fully learnable clustering framework without requiring a large number of overlapped subgraphs.
Our method significantly improves clustering accuracy, and thus the performance of recognition models trained on top, while being an order of magnitude more efficient than existing supervised methods.
arXiv Detail & Related papers (2020-04-01T13:39:37Z) - An Uncoupled Training Architecture for Large Graph Learning [20.784230322205232]
We present Node2Grids, a flexible uncoupled training framework for embedding graph data into grid-like data.
By ranking each node's influence through degree, Node2Grids selects the most influential first-order and second-order neighbors and fuses their information with the central node.
For further improving the efficiency of downstream tasks, a simple CNN-based neural network is employed to capture the significant information from the mapped grid-like data.
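The degree-based neighbor selection described above can be illustrated as follows. This is a hedged sketch of the selection idea only, with an assumed adjacency-dict representation and an arbitrary `k`; the grid mapping and CNN stages of Node2Grids are not shown.

```python
from collections import defaultdict

def top_k_neighbors(adj, node, k=2):
    """adj: dict mapping node -> set of neighbors.
    Return the k highest-degree first-order and second-order
    neighbors of `node` (degree used as the influence score)."""
    first = adj[node]
    second = set()
    for u in first:
        second |= adj[u]
    second -= first | {node}  # strictly second-order
    def rank(nodes):
        return sorted(nodes, key=lambda v: len(adj[v]), reverse=True)[:k]
    return rank(first), rank(second)

# Small undirected graph.
adj = defaultdict(set)
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (3, 5)]:
    adj[u].add(v)
    adj[v].add(u)

first, second = top_k_neighbors(adj, 0, k=2)
```

The selected neighbors would then be arranged into grid-like data for the downstream CNN.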
arXiv Detail & Related papers (2020-03-21T11:49:16Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
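The classical label propagation algorithm (LPA) that this last paper unifies with GCNs can be sketched as below. The clamped-iteration form and the small path-graph example are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def label_propagation(A, labels, mask, n_iter=20):
    """A: adjacency (n x n); labels: (n, c) one-hot for labeled nodes;
    mask: boolean (n,) marking labeled nodes whose labels are clamped."""
    D_inv = np.diag(1.0 / np.maximum(A.sum(axis=1), 1))
    P = D_inv @ A  # row-normalized transition matrix
    Y = labels.astype(float).copy()
    for _ in range(n_iter):
        Y = P @ Y               # diffuse labels to neighbors
        Y[mask] = labels[mask]  # clamp known labels each step
    return Y.argmax(axis=1)

# Path graph 0-1-2-3: node 0 labeled class 0, node 3 labeled class 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
labels = np.array([[1, 0], [0, 0], [0, 0], [0, 1]])
mask = np.array([True, False, False, True])
pred = label_propagation(A, labels, mask)
```

Each unlabeled node converges to the class of its nearest labeled node, which is the smoothing behavior the paper relates to GCN feature propagation.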
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.