On the Impact of Communities on Semi-supervised Classification Using
Graph Neural Networks
- URL: http://arxiv.org/abs/2010.16245v2
- Date: Fri, 5 Mar 2021 15:56:27 GMT
- Title: On the Impact of Communities on Semi-supervised Classification Using
Graph Neural Networks
- Authors: Hussain Hussain, Tomislav Duricic, Elisabeth Lex, Roman Kern, and
Denis Helic
- Abstract summary: We systematically study the impact of community structure on the performance of GNNs in semi-supervised node classification on graphs.
Our results suggest that communities typically have a major impact on the learning process and classification performance.
- Score: 0.5872014229110213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are effective in many applications. Still, there
is a limited understanding of the effect of common graph structures on the
learning process of GNNs. In this work, we systematically study the impact of
community structure on the performance of GNNs in semi-supervised node
classification on graphs. Following an ablation study on six datasets, we
measure the performance of GNNs on the original graphs, and the change in
performance in the presence and the absence of community structure. Our results
suggest that communities typically have a major impact on the learning process
and classification performance. For example, in cases where the majority of
nodes from one community share a single classification label, breaking up
community structure results in a significant performance drop. On the other
hand, for cases where labels show low correlation with communities, we find
that the graph structure is rather irrelevant to the learning process, and a
feature-only baseline becomes hard to beat. With our work, we provide deeper
insights into the abilities and limitations of GNNs, including a set of general
guidelines for model selection based on the graph structure.
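The ablation idea in the abstract, removing community structure while keeping other graph properties fixed, can be illustrated with a small sketch. This is not the authors' code: it assumes networkx, a toy planted-partition graph as a stand-in for the paper's six datasets, and degree-preserving edge rewiring as one plausible way to break up communities; the paper's exact procedure may differ.

```python
# Hedged sketch (not the paper's implementation): ablating community
# structure by degree-preserving edge rewiring, then comparing modularity
# before and after. Node degrees are preserved, so any performance change
# of a GNN trained on the rewired graph can be attributed to the loss of
# community structure rather than to changed degree distributions.
import networkx as nx

# Toy graph with two planted communities (hypothetical stand-in dataset).
G = nx.planted_partition_graph(l=2, k=50, p_in=0.3, p_out=0.01, seed=0)

part_before = nx.community.greedy_modularity_communities(G)
mod_before = nx.community.modularity(G, part_before)

# Degree-preserving rewiring: repeated double-edge swaps keep every node's
# degree fixed while scrambling which nodes are connected to which.
H = G.copy()
nx.double_edge_swap(H, nswap=10 * H.number_of_edges(), max_tries=10**6, seed=0)

part_after = nx.community.greedy_modularity_communities(H)
mod_after = nx.community.modularity(H, part_after)

print(f"modularity before rewiring: {mod_before:.3f}")
print(f"modularity after rewiring:  {mod_after:.3f}")
```

In a full experiment one would train the same GNN (and a feature-only baseline) on both `G` and `H` and compare node-classification accuracy; a large drop on `H` would indicate that the model was exploiting community structure.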
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
Graph Neural Networks (GNNs) combine information from adjacent nodes by successive applications of graph convolutions.
We study the generalization gaps of GNNs on both node-level and graph-level tasks.
We show that the generalization gaps decrease with the number of nodes in the training graphs.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Harnessing Collective Structure Knowledge in Data Augmentation for Graph Neural Networks [25.12261412297796]
Graph neural networks (GNNs) have achieved state-of-the-art performance in graph representation learning.
We propose a novel approach, namely collective structure knowledge-augmented graph neural network (CoS-GNN)
arXiv Detail & Related papers (2024-05-17T08:50:00Z)
- T2-GNN: Graph Neural Networks for Graphs with Incomplete Features and Structure via Teacher-Student Distillation [65.43245616105052]
Graph Neural Networks (GNNs) have been a prevailing technique for tackling various analysis tasks on graph data.
In this paper, we propose a general GNN framework based on teacher-student distillation to improve the performance of GNNs on incomplete graphs.
arXiv Detail & Related papers (2022-12-24T13:49:44Z)
- Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification [5.000404730573809]
Graph Neural Networks (GNNs) have attracted increasing attention in recent years.
Recent studies have shown that GNNs are vulnerable to the complex underlying structure of the graph.
We propose a novel framework for semi-supervised classification.
arXiv Detail & Related papers (2022-12-04T06:17:11Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- A Variational Edge Partition Model for Supervised Graph Representation Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into communities, the community-specific GNNs themselves, and a GNN-based predictor that combines them for the end classification task.
arXiv Detail & Related papers (2022-02-07T14:37:50Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- From Local Structures to Size Generalization in Graph Neural Networks [53.3202754533658]
Graph neural networks (GNNs) can process graphs of different sizes.
Their ability to generalize across sizes, specifically from small to large graphs, is still not well understood.
arXiv Detail & Related papers (2020-10-17T19:36:54Z)
- Graph Clustering with Graph Neural Networks [5.305362965553278]
Graph Neural Networks (GNNs) have achieved state-of-the-art results on many graph analysis tasks.
Unsupervised problems on graphs, such as graph clustering, have proved more resistant to advances in GNNs.
We introduce Deep Modularity Networks (DMoN), an unsupervised pooling method inspired by the modularity measure of clustering quality.
arXiv Detail & Related papers (2020-06-30T15:30:49Z)
- The Impact of Global Structural Information in Graph Neural Networks Applications [5.629161809575013]
Graph Neural Networks (GNNs) rely on the graph structure to define an aggregation strategy.
A known limitation of GNNs is that, as the number of layers increases, information gets smoothed and squashed.
We give access to global information to several GNN models and observe the impact it has on downstream performance.
arXiv Detail & Related papers (2020-06-06T08:52:18Z)
- A Collective Learning Framework to Boost GNN Expressiveness [25.394456460032625]
We consider the task of inductive node classification using Graph Neural Networks (GNNs) in supervised and semi-supervised settings.
We propose a general collective learning approach to increase the representation power of any existing GNN.
We evaluate performance on five real-world network datasets and demonstrate consistent, significant improvement in node classification accuracy.
arXiv Detail & Related papers (2020-03-26T22:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.