Node Similarity Preserving Graph Convolutional Networks
- URL: http://arxiv.org/abs/2011.09643v2
- Date: Mon, 8 Mar 2021 10:48:14 GMT
- Title: Node Similarity Preserving Graph Convolutional Networks
- Authors: Wei Jin, Tyler Derr, Yiqi Wang, Yao Ma, Zitao Liu and Jiliang Tang
- Abstract summary: Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets including three assortative and four disassortative graphs.
- Score: 51.520749924844054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have achieved tremendous success in various
real-world applications due to their strong ability in graph representation
learning. GNNs explore the graph structure and node features by aggregating and
transforming information within node neighborhoods. However, through
theoretical and empirical analysis, we reveal that the aggregation process of
GNNs tends to destroy node similarity in the original feature space. There are
many scenarios where node similarity plays a crucial role. This motivates our
proposed framework SimP-GCN, which effectively and efficiently preserves node
similarity while exploiting graph structure. Specifically, to
balance information from graph structure and node features, we propose a
feature similarity preserving aggregation which adaptively integrates graph
structure and node features. Furthermore, we employ self-supervised learning to
explicitly capture the complex feature similarity and dissimilarity relations
between nodes. We validate the effectiveness of SimP-GCN on seven benchmark
datasets, including three assortative and four disassortative graphs. The results
demonstrate that SimP-GCN outperforms representative baselines. Further probing
reveals various advantages of the proposed framework. The implementation of
SimP-GCN is available at \url{https://github.com/ChandlerBang/SimP-GCN}.
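As a rough illustration of the adaptive integration described above, the sketch below propagates node representations over both the given graph and a kNN graph built from feature similarity, mixing the two with a learned per-node balance score. The kNN construction, the sigmoid gate, and all names and hyperparameters are illustrative assumptions rather than the authors' exact formulation, and the self-supervised component is omitted; the linked repository contains the reference implementation.

```python
# Minimal sketch (not the authors' implementation) of the feature-similarity
# preserving aggregation described in the abstract. The kNN graph, the sigmoid
# gate, and all names/hyperparameters are illustrative assumptions; the
# self-supervised pretext task is omitted. See
# https://github.com/ChandlerBang/SimP-GCN for the reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_feature_graph(x: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Row-normalized kNN graph built from cosine similarity of node features."""
    xn = F.normalize(x, dim=1)
    sim = xn @ xn.t()                                   # cosine similarity matrix
    topk = sim.topk(k + 1, dim=1).indices               # self plus k nearest neighbors
    adj = torch.zeros_like(sim).scatter_(1, topk, 1.0)  # binary kNN adjacency
    return adj / adj.sum(dim=1, keepdim=True)           # row-normalize


class SimilarityPreservingLayer(nn.Module):
    """One layer that adaptively mixes propagation over the original graph
    with propagation over a feature-similarity graph via a per-node gate."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.gate = nn.Linear(in_dim, 1)  # produces the balance score s in (0, 1)

    def forward(self, h, adj_norm, adj_feat):
        s = torch.sigmoid(self.gate(h))   # (N, 1) per-node balance score
        h_struct = adj_norm @ h           # aggregate over the given graph structure
        h_feat = adj_feat @ h             # aggregate over the feature kNN graph
        return self.linear(s * h_struct + (1.0 - s) * h_feat)


# Toy usage with random data (illustrative only).
x = torch.randn(100, 32)                  # node features
adj_norm = torch.eye(100)                 # stand-in for a normalized adjacency matrix
layer = SimilarityPreservingLayer(32, 16)
out = layer(x, adj_norm, knn_feature_graph(x, k=10))  # -> shape (100, 16)
```

Mixing the two propagated signals before a shared linear transform keeps the layer a drop-in replacement for a standard graph convolution.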
Related papers
- Probability Passing for Graph Neural Networks: Graph Structure and Representations Joint Learning [8.392545965667288]
Graph Neural Networks (GNNs) have achieved notable success in the analysis of non-Euclidean data across a wide range of domains.
Latent Graph Inference (LGI) infers a task-specific latent structure by computing similarities or edge probabilities from node features.
We introduce a novel method called Probability Passing to refine the generated graph structure by aggregating edge probabilities of neighboring nodes.
arXiv Detail & Related papers (2024-07-15T13:01:47Z) - Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework, the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z) - DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z) - GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
An innovative node relative entropy measure is used to quantify the mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
arXiv Detail & Related papers (2023-12-15T11:30:18Z) - LSGNN: Towards General Graph Neural Network in Node Classification by Local Similarity [59.41119013018377]
We propose to use the local similarity (LocalSim) to learn node-level weighted fusion, which can also serve as a plug-and-play module.
For better fusion, we propose a novel and efficient Initial Residual Difference Connection (IRDC) to extract more informative multi-hop information.
Our proposed method, namely Local Similarity Graph Neural Network (LSGNN), can offer comparable or superior state-of-the-art performance on both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2023-05-07T09:06:11Z) - Graph Ordering Attention Networks [22.468776559433614]
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data.
We introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood.
The GOAT layer demonstrates improved performance in modeling graph metrics that capture complex information.
arXiv Detail & Related papers (2022-04-11T18:13:19Z) - CatGCN: Graph Convolutional Networks with Categorical Node Features [99.555850712725]
CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
arXiv Detail & Related papers (2020-09-11T09:25:17Z) - Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
arXiv Detail & Related papers (2020-05-16T04:44:29Z) - Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
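The pairwise-interaction idea summarized in the BGNN entry above admits a compact illustration: the sum of element-wise products over all unordered pairs of (already transformed) neighbor representations equals half the difference between the squared sum and the sum of squares. The function name and shapes below are illustrative assumptions; the paper specifies the exact normalization and how this term is combined with the usual weighted-sum aggregation.

```python
# Hedged sketch of a bilinear neighbor-interaction term: the sum over unordered
# neighbor pairs of element-wise products, computed without an explicit double loop.
import torch


def bilinear_neighbor_interaction(h_neighbors: torch.Tensor) -> torch.Tensor:
    """h_neighbors: (num_neighbors, dim) transformed neighbor representations.
    Returns sum_{i<j} h_i * h_j (element-wise), using the identity
    sum_{i<j} x_i x_j = 0.5 * ((sum_i x_i)^2 - sum_i x_i^2)."""
    total = h_neighbors.sum(dim=0)
    squares = (h_neighbors ** 2).sum(dim=0)
    return 0.5 * (total ** 2 - squares)
```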