Graph Convolutional Neural Networks with Node Transition
Probability-based Message Passing and DropNode Regularization
- URL: http://arxiv.org/abs/2008.12578v2
- Date: Thu, 18 Mar 2021 13:48:49 GMT
- Title: Graph Convolutional Neural Networks with Node Transition
Probability-based Message Passing and DropNode Regularization
- Authors: Tien Huu Do, Duc Minh Nguyen, Giannis Bekoulis, Adrian Munteanu, Nikos
Deligiannis
- Abstract summary: Graph convolutional neural networks (GCNNs) have received much attention recently, owing to their ability to handle graph-structured data.
This work presents a new method to improve the message passing process based on node transition probabilities.
We also propose a novel regularization method termed DropNode to address the over-fitting and over-smoothing issues simultaneously.
- Score: 32.260055351563324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional neural networks (GCNNs) have received much attention
recently, owing to their ability to handle graph-structured data. Among
the existing GCNNs, many methods can be viewed as instances of a neural message
passing motif: node features are passed among neighbors, aggregated, and
transformed to produce better node representations. Nevertheless, these
methods seldom use node transition probabilities, a measure that has been found
useful in exploring graphs. Furthermore, when the transition probabilities are
used, their transition direction is often improperly considered in the feature
aggregation step, resulting in an inefficient weighting scheme. In addition,
although a great number of GCNN models of increasing complexity have been
introduced, GCNNs often suffer from over-fitting when trained on small graphs.
Another issue is over-smoothing, which tends to make node representations
indistinguishable. This work presents a new method to
improve the message passing process based on node transition probabilities by
properly considering the transition direction, leading to a better weighting
scheme for node feature aggregation than the existing counterpart.
Moreover, we propose a novel regularization method termed DropNode to address
the over-fitting and over-smoothing issues simultaneously. DropNode randomly
discards part of a graph, creating multiple deformed versions of the graph and
yielding a data-augmentation regularization effect. Additionally,
DropNode lessens the connectivity of the graph, mitigating the effect of
over-smoothing in deep GCNNs. Extensive experiments on eight benchmark datasets
for node and graph classification tasks demonstrate the effectiveness of the
proposed methods in comparison with the state of the art.
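One way to read the direction-aware weighting described in the abstract: with the row-stochastic transition matrix P = D^{-1}A, entry P[j, i] is the probability of a random walker stepping from node j into node i, so node i weights neighbor j's message by P[j, i] (i.e., aggregates with P^T) rather than by P[i, j]. The NumPy sketch below illustrates this reading; the function name and the exact normalization are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def transition_prob_aggregate(adj, features):
    """Direction-aware feature aggregation via node transition probabilities.

    P[i, j] = A[i, j] / out_degree(i) is the probability that a random
    walker at node i steps to node j.  Aggregating with P.T means node i
    weights neighbor j's features by P[j, i], the probability of
    transitioning *into* i -- one plausible reading of "properly
    considering the transition direction" (a sketch, not the paper's code).
    """
    out_deg = adj.sum(axis=1, keepdims=True)
    out_deg[out_deg == 0] = 1.0          # guard isolated nodes against division by zero
    P = adj / out_deg                    # row-stochastic transition matrix
    return P.T @ features                # column j of P carries the weight of message j -> i

# Toy path graph 0 - 1 - 2 with one-hot features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)
print(transition_prob_aggregate(A, X))
```

On this toy graph, node 1 receives the features of nodes 0 and 2 with weight 1.0 each (both transition into node 1 with certainty), while nodes 0 and 2 receive node 1's features with weight 0.5, reflecting the asymmetry of the walk.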
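DropNode, as described above, discards a random subset of nodes during training. A minimal sketch of one plausible implementation follows; the drop rate, the masking scheme, and the function name are assumptions for illustration.

```python
import numpy as np

def drop_node(adj, features, drop_rate=0.1, rng=None):
    """Randomly discard nodes from the graph (a sketch of the DropNode idea).

    Each node survives with probability 1 - drop_rate.  Dropped nodes get
    zeroed features and lose all incident edges, yielding a deformed copy
    of the graph.  Sampling a fresh mask every epoch acts as data
    augmentation, and the thinned connectivity limits how far features
    propagate, which counteracts over-smoothing in deep GCNNs.
    """
    rng = rng or np.random.default_rng()
    keep = (rng.random(adj.shape[0]) >= drop_rate).astype(adj.dtype)
    adj_dropped = adj * keep[:, None] * keep[None, :]  # remove edges touching dropped nodes
    feat_dropped = features * keep[:, None]            # zero out dropped nodes' features
    return adj_dropped, feat_dropped
```

As with dropout-style regularizers, one would sample a new mask at each training step and use the intact graph at evaluation time.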
Related papers
- SF-GNN: Self Filter for Message Lossless Propagation in Deep Graph Neural Network [38.669815079957566]
Graph Neural Networks (GNNs), whose main idea is to encode graph structure information through propagation and aggregation, have developed rapidly.
They have achieved excellent performance in representation learning on multiple types of graphs, such as homogeneous graphs, heterogeneous graphs, and more complex graphs like knowledge graphs.
To explain the phenomenon of performance degradation in deep GNNs, we propose a new perspective.
arXiv Detail & Related papers (2024-07-03T02:40:39Z) - Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - BOURNE: Bootstrapped Self-supervised Learning Framework for Unified
Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z) - $\rm A^2Q$: Aggregation-Aware Quantization for Graph Neural Networks [18.772128348519566]
We propose the Aggregation-Aware mixed-precision Quantization ($\rm A^2Q$) for Graph Neural Networks (GNNs).
Our method can achieve up to 11.4% and 9.5% accuracy improvements on the node-level and graph-level tasks, respectively, and up to 2x speedup on a dedicated hardware accelerator.
arXiv Detail & Related papers (2023-02-01T02:54:35Z) - ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural
Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The scale operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
arXiv Detail & Related papers (2022-06-16T13:49:09Z) - NCGNN: Node-level Capsule Graph Neural Network [45.23653314235767]
Node-level Capsule Graph Neural Network (NCGNN) represents nodes as groups of capsules.
A novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation.
NCGNN can well address the over-smoothing issue and outperforms the state of the art by producing better node embeddings for classification.
arXiv Detail & Related papers (2020-12-07T06:46:17Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.