Should Graph Convolution Trust Neighbors? A Simple Causal Inference
Method
- URL: http://arxiv.org/abs/2010.11797v2
- Date: Sun, 6 Jun 2021 06:29:34 GMT
- Title: Should Graph Convolution Trust Neighbors? A Simple Causal Inference
Method
- Authors: Fuli Feng, Weiran Huang, Xiangnan He, Xin Xin, Qifan Wang, Tat-Seng
Chua
- Abstract summary: Graph Convolutional Network (GCN) is an emerging technique for information retrieval (IR) applications.
This work focuses on the local structure discrepancy of testing nodes, which has received little scrutiny.
We analyze the working mechanism of GCN with a causal graph, estimating the causal effect of a node's local structure on its prediction.
- Score: 114.48708191371524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Network (GCN) is an emerging technique for information
retrieval (IR) applications. While GCN assumes the homophily property of a
graph, real-world graphs are never perfect: the local structure of a node may
contain discrepancies, e.g., the labels of a node's neighbors could vary. This
pushes us to consider the discrepancy of local structure in GCN modeling.
Existing work approaches this issue by introducing an additional module such as
graph attention, which is expected to learn the contribution of each neighbor.
However, such a module may not work reliably as expected, especially when the
supervision signal is weak, e.g., when the labeled data is small. Moreover,
existing methods focus on modeling the nodes in the training data and never
consider the local structure discrepancy of testing nodes.
This work focuses on the local structure discrepancy issue for testing nodes,
which has received little scrutiny. From a novel perspective of causality, we
investigate whether a GCN should trust the local structure of a testing node
when predicting its label. To this end, we analyze the working mechanism of GCN
with a causal graph, estimating the causal effect of a node's local structure on
the prediction. The idea is simple yet effective: given a trained GCN model, we
first intervene on the prediction by blocking the graph structure; we then
compare the original prediction with the intervened prediction to assess the
causal effect of the local structure on the prediction. In this way, we can
eliminate the impact of local structure discrepancy and make more accurate
predictions. Extensive experiments on seven node classification datasets show
that our method effectively enhances the inference stage of GCN.
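The inference-time intervention described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' released implementation: the one-layer mean-aggregation GCN, the function names, and the confidence-based rule for choosing between the original and the intervened prediction are all assumptions made for the sketch.

```python
import math

def matvec(W, x):
    """Multiply weight matrix W by feature vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def gcn_predict(W, feats, neighbors, node, use_graph=True):
    """One-layer mean-aggregation GCN. With use_graph=False the graph
    structure is 'blocked' (the causal intervention): only the node's
    own features enter the prediction."""
    group = [node] + neighbors[node] if use_graph else [node]
    agg = [sum(feats[j][d] for j in group) / len(group)
           for d in range(len(feats[node]))]
    return softmax(matvec(W, agg))

def causal_inference_predict(W, feats, neighbors, node):
    """Compare the original prediction with the intervened one and keep
    the more confident of the two -- an illustrative stand-in for the
    paper's per-node decision of whether to trust the neighborhood."""
    with_graph = gcn_predict(W, feats, neighbors, node, use_graph=True)
    blocked = gcn_predict(W, feats, neighbors, node, use_graph=False)
    # Confidence = max class probability; trust neighbors only if they help.
    return with_graph if max(with_graph) >= max(blocked) else blocked

# Toy example: node 0's only neighbor has very different features (a local
# structure discrepancy), so the structure-blocked prediction is sharper
# and gets selected.
W = [[1.0, 0.0], [0.0, 1.0]]        # 2 classes, 2 feature dimensions
feats = [[2.0, 0.0], [0.0, 2.0]]    # node 0 leans class 0, node 1 class 1
neighbors = {0: [1], 1: [0]}
p = causal_inference_predict(W, feats, neighbors, 0)
```

Here the graph-based prediction for node 0 is the uninformative [0.5, 0.5] (its neighbor's features cancel its own), while the blocked prediction confidently favors class 0, so the latter is returned.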
Related papers
- Probability Passing for Graph Neural Networks: Graph Structure and Representations Joint Learning [8.392545965667288]
Graph Neural Networks (GNNs) have achieved notable success in the analysis of non-Euclidean data across a wide range of domains.
To solve this problem, Latent Graph Inference (LGI) is proposed to infer a task-specific latent structure by computing similarity or edge probability of node features.
We introduce a novel method called Probability Passing to refine the generated graph structure by aggregating edge probabilities of neighboring nodes.
arXiv Detail & Related papers (2024-07-15T13:01:47Z) - Structure Enhanced Graph Neural Networks for Link Prediction [6.872826041648584]
We propose Structure Enhanced Graph neural network (SEG) for link prediction.
SEG incorporates surrounding topological information of target nodes into an ordinary GNN model.
Experiments on the OGB link prediction datasets demonstrate that SEG achieves state-of-the-art results.
arXiv Detail & Related papers (2022-01-14T03:49:30Z) - Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances node features with their local subgraph structures.
Based on the local augmentation, we further design a novel framework: LA-GNN, which can apply to any GNN models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-08T18:10:08Z) - Node Feature Kernels Increase Graph Convolutional Network Robustness [19.076912727990326]
The robustness of Graph Convolutional Networks (GCNs) to perturbations of their input is becoming a topic of increasing importance.
This paper presents a random matrix theory analysis of GCN robustness.
It is observed that enhancing the message-passing step in GCNs by adding a node feature kernel to the adjacency matrix of the graph structure improves robustness to such perturbations.
arXiv Detail & Related papers (2021-09-04T04:20:45Z) - Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize the deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework which can find the $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z) - Node Similarity Preserving Graph Convolutional Networks [51.520749924844054]
Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets including three assortative and four disassortative graphs.
arXiv Detail & Related papers (2020-11-19T04:18:01Z) - On the Equivalence of Decoupled Graph Convolution Network and Label
Propagation [60.34028546202372]
Some work shows that coupling is inferior to decoupling, as decoupling better supports deep graph propagation.
Despite their effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z) - AM-GCN: Adaptive Multi-channel Graph Convolutional Networks [85.0332394224503]
We study whether Graph Convolutional Networks (GCNs) can optimally integrate node features and topological structures in a complex graph with rich information.
We propose an adaptive multi-channel graph convolutional networks for semi-supervised classification (AM-GCN)
Our experiments show that AM-GCN substantially benefits from extracting the most correlated information from both node features and topological structures.
arXiv Detail & Related papers (2020-07-05T08:16:03Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.