Complete the Missing Half: Augmenting Aggregation Filtering with
Diversification for Graph Convolutional Neural Networks
- URL: http://arxiv.org/abs/2212.10822v1
- Date: Wed, 21 Dec 2022 07:24:03 GMT
- Title: Complete the Missing Half: Augmenting Aggregation Filtering with
Diversification for Graph Convolutional Neural Networks
- Authors: Sitao Luan, Mingde Zhao, Chenqing Hua, Xiao-Wen Chang, Doina Precup
- Abstract summary: We show that the aggregation operations of current Graph Neural Networks (GNNs) are potentially a problematic factor underlying all GNN models when learning on certain datasets.
We augment the aggregation operations with their dual, i.e., diversification operators that make the nodes more distinct and preserve their identity.
Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations.
In the experiments, we observe the desired characteristics of the models and significant performance boosts over the baselines on 9 node classification tasks.
- Score: 46.14626839260314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The core operation of current Graph Neural Networks (GNNs) is the aggregation
enabled by the graph Laplacian or message passing, which filters the
neighborhood information of nodes. Though effective for various tasks, in this
paper, we show that they are potentially a problematic factor underlying all
GNN models when learning on certain datasets, as they force the node
representations to become similar, making the nodes gradually lose their
identity and become indistinguishable. Hence, we augment the aggregation
operations with their dual, i.e., diversification operators that make the
nodes more distinct and preserve their identity. Such augmentation replaces
the aggregation with a
two-channel filtering process that, in theory, is beneficial for enriching the
node representations. In practice, the proposed two-channel filters can be
easily patched on existing GNN methods with diverse training strategies,
including spectral and spatial (message passing) methods. In the experiments,
we observe the desired characteristics of the models and significant
performance boosts over the baselines on 9 node classification tasks.
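To make the two-channel idea concrete, here is a minimal NumPy sketch in which the aggregation channel applies the usual low-pass filter A_hat and the diversification channel applies its complement I - A_hat (a graph high-pass filter). The function names, the ReLU, and the additive mixing of the two channels are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def two_channel_layer(A_hat, X, W_agg, W_div):
    """One two-channel filtering layer (illustrative sketch).

    Aggregation channel (low-pass): A_hat @ X smooths each node toward
    its neighborhood. Diversification channel (high-pass): (I - A_hat) @ X
    keeps the component that distinguishes a node from its neighborhood.
    Summing the transformed channels is one illustrative mixing choice.
    """
    I = np.eye(A_hat.shape[0])
    low = A_hat @ X          # aggregation: neighborhood averaging
    high = (I - A_hat) @ X   # diversification: neighborhood differencing
    return np.maximum(low @ W_agg + high @ W_div, 0.0)  # ReLU

# Toy usage: 4-node path graph with 3-dimensional features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
H = two_channel_layer(normalized_adjacency(A), X,
                      rng.normal(size=(3, 8)), rng.normal(size=(3, 8)))
print(H.shape)  # (4, 8)
```

Intuitively, the low-pass channel pulls each node toward its neighborhood while the high-pass channel retains what distinguishes it, so stacking such layers need not drive all representations together.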
Related papers
- SF-GNN: Self Filter for Message Lossless Propagation in Deep Graph Neural Network [38.669815079957566]
Graph Neural Networks (GNNs), whose main idea is to encode graph structure information via propagation and aggregation, have developed rapidly.
They achieve excellent performance in representation learning on multiple types of graphs, such as homogeneous graphs, heterogeneous graphs, and more complex graphs like knowledge graphs.
We propose a new perspective on the phenomenon of performance degradation in deep GNNs.
arXiv Detail & Related papers (2024-07-03T02:40:39Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that separate weight matrices are learned for the nodes in each degree group.
This simple-to-implement modification appears to improve performance across datasets and GNN methods.
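A minimal sketch of this stratification, assuming nodes are bucketed by degree thresholds and each bucket receives its own weight matrix after a shared aggregation step; the bucketing scheme and layer form are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def degree_stratified_layer(A_hat, X, weights, thresholds):
    """Apply a different weight matrix per node depending on its degree.

    weights: list of (in_dim, out_dim) matrices, one per degree bucket
             (len(weights) == len(thresholds) + 1).
    thresholds: ascending degree cut-points defining the buckets.
    """
    deg = (A_hat > 0).sum(axis=1)              # node degrees from adjacency pattern
    bucket = np.searchsorted(thresholds, deg)  # bucket index per node
    H = A_hat @ X                              # shared neighborhood aggregation
    out = np.zeros((X.shape[0], weights[0].shape[1]))
    for b, W in enumerate(weights):
        mask = bucket == b
        out[mask] = H[mask] @ W                # per-bucket transformation
    return out
```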
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Feature Correlation Aggregation: on the Path to Better Graph Neural Networks [37.79964911718766]
Prior to the introduction of Graph Neural Networks (GNNs), modeling and analyzing irregular data, particularly graphs, was thought to be the Achilles' heel of deep learning.
This paper introduces a central node permutation variant function through a frustratingly simple and innocent-looking modification to the core operation of a GNN.
A tangible boost in performance is observed: the model surpasses previous state-of-the-art results by a significant margin while employing fewer parameters.
arXiv Detail & Related papers (2021-09-20T05:04:26Z)
- Node Similarity Preserving Graph Convolutional Networks [51.520749924844054]
Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets, including three assortative and four disassortative graphs.
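One way to preserve node similarity during propagation, sketched below, is to blend the normalized adjacency with a kNN graph built from feature cosine similarity. SimP-GCN learns a node-wise balancing score; the fixed scalar mixing weight here is an illustrative simplification:

```python
import numpy as np

def knn_feature_graph(X, k=2):
    """Build a kNN graph from cosine similarity of node features."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)            # exclude self edges
    A_f = np.zeros_like(S)
    for i in range(S.shape[0]):
        for j in np.argsort(S[i])[-k:]:     # k most feature-similar nodes
            A_f[i, j] = A_f[j, i] = 1.0
    return A_f

def similarity_preserving_propagation(A_hat, A_f_hat, X, s=0.5):
    """Blend structural and feature-similarity propagation.

    A_hat / A_f_hat: normalized structural and feature kNN adjacencies.
    SimP-GCN learns a node-wise score for this balance; a fixed scalar s
    is used here purely for illustration.
    """
    P = s * A_hat + (1.0 - s) * A_f_hat
    return P @ X
```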
arXiv Detail & Related papers (2020-11-19T04:18:01Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
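As a sketch of this connection, consider the standard graph signal denoising objective with graph Laplacian L (the usual formulation; the regularizer the paper attaches to each specific model may differ):

```latex
\min_{F}\; \|F - X\|_F^2 + c\,\operatorname{tr}\!\left(F^{\top} L F\right)
```

Here X is the noisy input feature matrix, F the recovered signal, and c > 0 a smoothness weight. A single gradient step from F = X gives F = X - 2bc\,LX for stepsize b, which with L = I - \hat{A} and 2bc = 1 reduces to the familiar aggregation F = \hat{A}X, i.e., neighborhood smoothing.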
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks [46.14626839260314]
We show that the aggregation operations of current Graph Neural Networks (GNNs) are potentially a problematic factor underlying all GNN methods when learning on certain datasets.
We augment the aggregation operations with their dual, i.e., diversification operators that make the nodes more distinct and preserve their identity.
Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations.
In the experiments, we observe the desired characteristics of the models and significant performance boosts over the baselines on 9 node classification tasks.
arXiv Detail & Related papers (2020-08-20T08:45:16Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that formulates the sampling procedure and message passing of GNNs as a combined learning process.
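A toy sketch of the core interface: a per-node policy selects how many aggregation hops each node uses. Policy-GNN trains this choice with reinforcement learning; the fixed linear scorer below stands in for the learned meta-policy purely for illustration:

```python
import numpy as np

def propagate(A_hat, X, k):
    """k rounds of neighborhood aggregation (k GNN hops)."""
    H = X
    for _ in range(k):
        H = A_hat @ H
    return H

def policy_select_hops(X, policy_W, max_hops=3):
    """Toy stand-in policy: a linear scorer picks an aggregation depth per node.

    Policy-GNN learns this choice with reinforcement learning; this fixed
    scorer only illustrates the interface.
    """
    scores = X @ policy_W                    # (n_nodes, max_hops)
    return scores.argmax(axis=1) + 1         # hops in {1, ..., max_hops}

def policy_gnn_layer(A_hat, X, policy_W):
    hops = policy_select_hops(X, policy_W)
    H = np.zeros_like(X)
    for k in np.unique(hops):
        mask = hops == k
        H[mask] = propagate(A_hat, X, int(k))[mask]  # per-node depth
    return H
```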
arXiv Detail & Related papers (2020-06-26T17:03:06Z)