Geom-GCN: Geometric Graph Convolutional Networks
- URL: http://arxiv.org/abs/2002.05287v2
- Date: Fri, 14 Feb 2020 01:47:35 GMT
- Title: Geom-GCN: Geometric Graph Convolutional Networks
- Authors: Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, Bo Yang
- Abstract summary: We propose a novel geometric aggregation scheme for graph neural networks to overcome two fundamental weaknesses of message-passing aggregators: loss of structural information of nodes in neighborhoods and failure to capture long-range dependencies in disassortative graphs.
The proposed aggregation scheme is permutation-invariant and consists of three modules: node embedding, structural neighborhood, and bi-level aggregation.
We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs.
- Score: 15.783571061254847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Message-passing neural networks (MPNNs) have been successfully applied to
representation learning on graphs in a variety of real-world applications.
However, two fundamental weaknesses of MPNNs' aggregators limit their ability
to represent graph-structured data: losing the structural information of nodes
in neighborhoods and lacking the ability to capture long-range dependencies in
disassortative graphs. Few studies have examined these weaknesses from
different perspectives. Motivated by observations on classical neural
networks and network geometry, we propose a novel geometric aggregation
scheme for graph neural networks that overcomes both weaknesses. The basic
idea is that aggregation on a graph can benefit from a continuous space
underlying the graph. The proposed aggregation scheme is permutation-invariant and consists of
three modules: node embedding, structural neighborhood, and bi-level
aggregation. We also present an implementation of the scheme in graph
convolutional networks, termed Geom-GCN (Geometric Graph Convolutional
Networks), to perform transductive learning on graphs. Experimental results
show that the proposed Geom-GCN achieves state-of-the-art performance on a wide
range of open graph datasets. Code is available at
https://github.com/graphdml-uiuc-jlu/geom-gcn.
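As a rough illustration of the bi-level aggregation described above, the sketch below mean-aggregates neighbors within each (neighborhood, relation) pair (low level), then concatenates and transforms the resulting features (high level). The quadrant-based relation operator, the function names, and all shapes are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def geometric_relation(zu, zv):
    """Hypothetical relation operator: which quadrant of node u's 2-D
    latent space its neighbor v falls into (4 possible relations)."""
    dx, dz = zv[0] - zu[0], zv[1] - zu[1]
    return (dx >= 0) * 2 + (dz >= 0)

def bi_level_aggregate(X, graph_nbrs, latent_nbrs, Z, W):
    """Sketch of one Geom-GCN-style bi-level aggregation layer.
    X: (n, d) node features; Z: (n, 2) latent embeddings;
    graph_nbrs / latent_nbrs: neighbor lists for the two neighborhoods;
    W: (8*d, d_out) weights (2 neighborhoods x 4 relations)."""
    n, d = X.shape
    out = np.zeros((n, W.shape[1]))
    for u in range(n):
        parts = []
        for nbrs in (graph_nbrs[u], latent_nbrs[u]):
            # Low level: mean-aggregate within each (neighborhood, relation) pair
            buckets = {r: [] for r in range(4)}
            for v in nbrs:
                buckets[geometric_relation(Z[u], Z[v])].append(X[v])
            for r in range(4):
                parts.append(np.mean(buckets[r], axis=0) if buckets[r]
                             else np.zeros(d))
        # High level: combine the aggregated features and transform
        out[u] = np.concatenate(parts) @ W
    return np.maximum(out, 0.0)  # ReLU
```

With two neighborhoods (graph and latent space) and four geometric relations, each node's output is built from 2 × 4 aggregated feature blocks, which is why `W` has 8·d input rows.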
Related papers
- Graph Convolutional Network For Semi-supervised Node Classification With Subgraph Sketching [0.27624021966289597]
We propose the Graph-Learning-Dual Graph Convolutional Neural Network called GLDGCN.
We apply GLDGCN to the semi-supervised node classification task.
Compared with the baseline methods, we achieve higher classification accuracy on three citation networks.
arXiv Detail & Related papers (2024-04-19T09:08:12Z)
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
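The kernelized Gumbel-Softmax operator builds on the standard Gumbel-Softmax relaxation, which can be sketched as follows. Note this is the plain O(n²) relaxation with illustrative names; NodeFormer's contribution is a kernelized approximation of it, which is omitted here:

```python
import numpy as np

def gumbel_softmax_edges(scores, tau=0.5, rng=None):
    """Differentiable edge sampling via the Gumbel-Softmax relaxation:
    perturb pairwise logits with Gumbel noise and apply a row-wise
    softmax, yielding a row-stochastic 'soft adjacency' matrix."""
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, scores.shape)))
    y = (scores + gumbel) / tau
    y -= y.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=1, keepdims=True)  # each row sums to 1
```

As `tau` approaches 0 the rows approach one-hot samples (discrete neighbor choices); larger `tau` gives smoother, denser edge weights.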
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
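The unroll-and-truncate idea can be sketched as a fixed number of ISTA-style proximal gradient steps for a sparse linear inverse problem. In the actual GDN the step sizes and thresholds are learned per layer; the fixed constants and function names below are simplifying assumptions:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_deconv(S_obs, H, n_layers=50, step=0.5, lam=0.05):
    """Truncated proximal gradient iterations for
        0.5 * ||S_obs - H @ S||_F^2 + lam * ||S||_1,
    i.e. recover a sparse latent graph S from an observed mixture S_obs.
    Each loop iteration corresponds to one 'layer' of the unrolled net."""
    S = np.zeros_like(S_obs)
    for _ in range(n_layers):
        grad = H.T @ (H @ S - S_obs)  # gradient of the data-fit term
        S = soft_threshold(S - step * grad, step * lam)
    return S
```

For `H = I` the iterations converge to the element-wise soft-thresholded observation, which is the closed-form lasso solution in that special case.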
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- Schema-Aware Deep Graph Convolutional Networks for Heterogeneous Graphs [10.526065883783899]
Graph convolutional network (GCN) based approaches have achieved significant progress for solving complex, graph-structured problems.
We propose our GCN framework, the Deep Heterogeneous Graph Convolutional Network (DHGCN).
It takes advantage of the schema of a heterogeneous graph and uses a hierarchical approach to effectively utilize information many hops away.
arXiv Detail & Related papers (2021-05-03T06:24:27Z)
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data into a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- Incomplete Graph Representation and Learning via Partial Graph Neural Networks [7.227805463462352]
In many applications, graphs may come in an incomplete form in which the attributes of some nodes are partially unknown or missing.
Existing GNNs are generally designed for complete graphs and cannot handle attribute-incomplete graph data directly.
We develop novel partial-aggregation-based GNNs, named Partial Graph Neural Networks (PaGNNs), for attribute-incomplete graph representation and learning.
arXiv Detail & Related papers (2020-03-23T08:29:59Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
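The permutation-equivariance property can be verified numerically for a simple polynomial graph filter, the building block such architectures are composed of (`graph_filter` and the setup are illustrative, not from the paper):

```python
import numpy as np

def graph_filter(S, x, h):
    """Polynomial graph filter: H(S) x = sum_k h[k] * S^k @ x,
    where S is a graph shift operator (e.g. adjacency or Laplacian)."""
    y = np.zeros_like(x)
    Sk = np.eye(S.shape[0])
    for hk in h:
        y += hk * (Sk @ x)  # add the k-hop term
        Sk = Sk @ S         # advance to the next power of S
    return y

# Permutation equivariance: relabeling the nodes before filtering gives
# the same result as filtering first and relabeling after,
#   H(P S P^T) (P x) = P H(S) x   for any permutation matrix P.
```

This holds because (P S Pᵀ)ᵏ = P Sᵏ Pᵀ for any permutation matrix P, so the polynomial commutes with relabeling.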
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
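The per-neighbor weighting idea can be sketched as follows: each filter tap gets its own parameter matrix supported on the graph, instead of a single scalar coefficient. This is a simplified form of the edge-varying recursion, with illustrative names:

```python
import numpy as np

def edge_varying_filter(Phis, x):
    """EdgeNet-style edge-varying graph filter (simplified):
        y = x + Phi_1 @ x + Phi_2 @ Phi_1 @ x + ...
    Each Phis[k] is an (n, n) matrix whose nonzeros follow the graph's
    edges (plus self-loops), so every node applies its own learned
    weight to each neighbor while the operation stays local."""
    y = x.copy()
    z = x
    for Phi in Phis:
        z = Phi @ z   # one local, edge-varying neighborhood exchange
        y = y + z
    return y
```

Choosing `Phis[k] = h_k * S` for scalars `h_k` collapses the layer to an ordinary polynomial graph filter, which is the sense in which EdgeNets subsume GCNN-style convolutions.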
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.