Graph Partner Neural Networks for Semi-Supervised Learning on Graphs
- URL: http://arxiv.org/abs/2110.09182v1
- Date: Mon, 18 Oct 2021 10:56:56 GMT
- Title: Graph Partner Neural Networks for Semi-Supervised Learning on Graphs
- Authors: Langzhang Liang, Cuiyun Gao, Shiyi Chen, Shishi Duan, Yu Pan, Junjin
Zheng, Lei Wang, Zenglin Xu
- Abstract summary: Graph Convolutional Networks (GCNs) are powerful for processing graph-structured data and have achieved state-of-the-art performance in several tasks such as node classification, link prediction, and graph classification.
Deep GCNs inevitably suffer from an over-smoothing issue, in which the representations of nodes tend to become indistinguishable after repeated graph convolution operations.
We propose the Graph Partner Neural Network (GPNN), which incorporates a de-parameterized GCN and a parameter-sharing MLP.
- Score: 16.489177915147785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Networks (GCNs) are powerful for processing
graph-structured data and have achieved state-of-the-art performance in several
tasks such as node classification, link prediction, and graph classification.
However, deep GCNs inevitably suffer from an over-smoothing issue, in which
the representations of nodes tend to become indistinguishable after repeated
graph convolution operations. To address this problem, we propose the
Graph Partner Neural Network (GPNN) which incorporates a de-parameterized GCN
and a parameter-sharing MLP. We provide empirical and theoretical evidence to
demonstrate the effectiveness of the proposed MLP partner on tackling
over-smoothing while benefiting from appropriate smoothness. To further tackle
over-smoothing and regulate the learning process, we introduce a well-designed
consistency contrastive loss and KL divergence loss. Besides, we present a
graph enhancement technique to improve the overall quality of edges in graphs.
While most GCNs work well only with shallow architectures, GPNN obtains
better results as model depth increases. Experiments on various node
classification tasks have demonstrated the state-of-the-art performance of
GPNN. Meanwhile, extensive ablation studies are conducted to investigate the
contributions of each component in tackling over-smoothing and improving
performance.
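The over-smoothing effect the abstract describes can be reproduced in a few lines of NumPy: repeatedly applying the parameter-free GCN propagation step to random node features collapses the representations toward a degree-dependent subspace, so nodes become nearly indistinguishable. This is an illustrative sketch, not code from the paper; the toy graph, feature dimensions, and the `spread` measure are arbitrary choices for demonstration.

```python
import numpy as np

# Toy graph: two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# GCN-style propagation matrix: D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + np.eye(n)
d = A_hat.sum(axis=1)
P = A_hat / np.sqrt(np.outer(d, d))

# Random node features; apply the propagation step repeatedly,
# with no learned weights, to isolate the smoothing effect.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))

def spread(Z):
    """Maximum pairwise distance between node representations."""
    diffs = Z[:, None, :] - Z[None, :, :]
    return np.linalg.norm(diffs, axis=-1).max()

X_deep = X.copy()
for _ in range(50):
    X_deep = P @ X_deep

# After many propagation steps the rows of X_deep differ only by a
# degree-dependent scaling: the representations have over-smoothed.
print(spread(X), spread(X_deep))
```

Running this shows the spread between node representations shrinking by more than an order of magnitude, which is the collapse that depth-limited GCNs exhibit and that GPNN's MLP partner is designed to counteract.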
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework, the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework gates the output of GNN layers with a mechanism that enables multi-rate flow of message-passing information across the nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional
Network [102.27090022283208]
Graph Convolutional Network (GCN) plays a pivotal role in many real-world applications.
GCN often exhibits performance disparity with respect to node degrees, resulting in worse predictive accuracy for low-degree nodes.
We formulate the problem of mitigating the degree-related performance disparity in GCN from the perspective of the Rawlsian difference principle.
arXiv Detail & Related papers (2022-02-28T05:07:57Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
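The core idea in this summary, penalizing a relaxed edge count so that task-irrelevant edges are driven toward zero, can be sketched with a differentiable edge mask. This is a hypothetical illustration of that general technique, not PTDNet's actual architecture or API; the score matrix `theta` and the step size are made-up names and values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Random symmetric adjacency matrix without self-loops.
A = (rng.random((n, n)) < 0.6).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Learnable per-edge scores, squashed into (0, 1) by a sigmoid
# to form a soft edge mask over the adjacency matrix.
theta = rng.standard_normal((n, n))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def penalty(t):
    """Relaxed count of surviving edges: sum of masked edge weights."""
    return (A * sigmoid(t)).sum()

# Hand-derived gradient of the penalty w.r.t. the edge scores:
# d/dtheta [A * sigmoid(theta)] = A * sigmoid(theta) * (1 - sigmoid(theta))
m = sigmoid(theta)
grad = A * m * (1.0 - m)

# One gradient-descent step on the penalty alone pushes edge
# masks toward zero, i.e. prunes edges; in training this term
# would be added to the task loss rather than optimized by itself.
theta_new = theta - 1.0 * grad
print(penalty(theta), penalty(theta_new))
```

In a full model the mask would multiply the adjacency used for message passing, so pruning trades off against task accuracy instead of being minimized in isolation.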
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - Revisiting Graph Convolutional Network on Semi-Supervised Node
Classification from an Optimization Perspective [10.178145000390671]
Graph convolutional networks (GCNs) have achieved promising performance on various graph-based tasks.
However, they suffer from over-smoothing when stacking more layers.
We present a quantitative study of this observation and develop novel insights toward deeper GCNs.
arXiv Detail & Related papers (2020-09-24T03:36:43Z) - Graph Convolutional Networks for Graphs Containing Missing Features [5.426650977249329]
We propose an approach that adapts Graph Convolutional Network (GCN) to graphs containing missing features.
In contrast to traditional strategies, our approach integrates the processing of missing features and graph learning within the same neural network architecture.
We demonstrate through extensive experiments that our approach significantly outperforms the imputation-based methods in node classification and link prediction tasks.
arXiv Detail & Related papers (2020-07-09T06:47:21Z) - DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs.
arXiv Detail & Related papers (2020-06-13T23:00:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.