Connecting Graph Convolutional Networks and Graph-Regularized PCA
- URL: http://arxiv.org/abs/2006.12294v2
- Date: Tue, 2 Mar 2021 18:08:13 GMT
- Title: Connecting Graph Convolutional Networks and Graph-Regularized PCA
- Authors: Lingxiao Zhao, Leman Akoglu
- Abstract summary: The graph convolution operator of the GCN model was originally motivated by a localized first-order approximation of spectral graph convolutions.
We establish a mathematical connection between graph convolution and graph-regularized PCA (GPCA).
- Score: 23.592657600394215
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph convolution operator of the GCN model is originally motivated from a
localized first-order approximation of spectral graph convolutions. This work
stands on a different view; establishing a \textit{mathematical connection
between graph convolution and graph-regularized PCA} (GPCA). Based on this
connection, GCN architecture, shaped by stacking graph convolution layers,
shares a close relationship with stacking GPCA. We empirically demonstrate that
the \textit{unsupervised} embeddings by GPCA paired with a 1- or 2-layer MLP
achieve similar or even better performance than GCN on semi-supervised node
classification tasks across five datasets including Open Graph Benchmark
\footnote{\url{https://ogb.stanford.edu/}}. This suggests that the prowess of
GCN is driven by graph-based regularization. In addition, we extend GPCA to the
(semi-)supervised setting and show that it is equivalent to GPCA on a graph
extended with "ghost" edges between nodes of the same label. Finally, we
capitalize on the discovered relationship to design an effective initialization
strategy based on stacking GPCA, enabling GCN to converge faster and achieve
robust performance at a large number of layers. Notably, the proposed
initialization is general-purpose and applies to other GNNs.
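The GPCA objective described in the abstract admits a closed-form solution. As a hedged sketch (not the authors' released code): minimizing ||X - ZW^T||_F^2 + alpha * tr(Z^T L Z) subject to W^T W = I gives Z = (I + alpha L)^{-1} X W, where W holds the top-k eigenvectors of X^T (I + alpha L)^{-1} X and L is the graph Laplacian. The function name and hyperparameter defaults below are illustrative assumptions:

```python
import numpy as np

def gpca_embed(X, A, alpha=5.0, k=16):
    """Sketch of graph-regularized PCA (GPCA) embeddings.

    X : (n, f) node feature matrix
    A : (n, n) symmetric adjacency matrix
    alpha : graph-regularization strength (assumed default)
    k : embedding dimension, must satisfy k <= f
    """
    n = A.shape[0]
    d = A.sum(axis=1)
    L = np.diag(d) - A                             # unnormalized graph Laplacian
    S = np.linalg.solve(np.eye(n) + alpha * L, X)  # (I + alpha L)^{-1} X
    M = X.T @ S                                    # X^T (I + alpha L)^{-1} X
    M = (M + M.T) / 2                              # symmetrize for numerical stability
    vals, vecs = np.linalg.eigh(M)                 # eigenvalues in ascending order
    W = vecs[:, -k:]                               # top-k eigenvectors
    return S @ W                                   # graph-smoothed embeddings Z
```

Per the abstract, such unsupervised embeddings would then be fed to a small 1- or 2-layer MLP for node classification; the smoothing operator (I + alpha L)^{-1} is what ties GPCA to graph convolution.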
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z) - Graph Convolutional Network For Semi-supervised Node Classification With Subgraph Sketching [0.27624021966289597]
We propose the Graph-Learning-Dual Graph Convolutional Neural Network called GLDGCN.
We apply GLDGCN to the semi-supervised node classification task.
Compared with the baseline methods, we achieve higher classification accuracy on three citation networks.
arXiv Detail & Related papers (2024-04-19T09:08:12Z) - From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited [51.24526202984846]
Graph-based semi-supervised learning (GSSL) has long been a hot research topic.
Graph convolutional networks (GCNs) have become the predominant technique owing to their promising performance.
arXiv Detail & Related papers (2023-09-24T10:10:21Z) - AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing a generative graph model, which is used to build graph convolutional networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that the simple update leads to degeneration, and we accordingly design a specific strategy to avoid it.
arXiv Detail & Related papers (2021-11-12T07:08:13Z) - Quadratic GCN for Graph Classification [1.1470070927586016]
Graph Convolutional Networks (GCNs) have been extensively used to classify vertices in graphs.
GCNs have been extended to graph classification tasks (GCT).
We propose a novel solution combining GCN, methods from knowledge graphs, and a new self-regularized activation function.
arXiv Detail & Related papers (2021-04-14T10:01:09Z) - Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI)
We propose a simple yet effective method that performs node-level message passing using a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z) - Directed Graph Convolutional Network [15.879411956536885]
We extend spectral-based graph convolution to directed graphs by using first- and second-order proximity.
A new GCN model, called DGCN, is then designed to learn representations on the directed graph.
arXiv Detail & Related papers (2020-04-29T06:19:10Z) - Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes [144.6048446370369]
Graph convolutional neural networks (GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs (GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-26T10:02:32Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.