Tree Decomposed Graph Neural Network
- URL: http://arxiv.org/abs/2108.11022v1
- Date: Wed, 25 Aug 2021 02:47:16 GMT
- Title: Tree Decomposed Graph Neural Network
- Authors: Yu Wang, Tyler Derr
- Abstract summary: We propose a tree decomposition method to disentangle neighborhoods in different layers to alleviate feature smoothing.
We also characterize the multi-hop dependency via graph diffusion within our tree decomposition formulation to construct the Tree Decomposed Graph Neural Network (TDGNN).
Comprehensive experiments demonstrate the superior performance of TDGNN on both homophily and heterophily networks.
- Score: 11.524511007436791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved significant success in learning
better representations by performing feature propagation and transformation
iteratively to leverage neighborhood information. Nevertheless, iterative
propagation forces the information from higher-layer neighborhoods to be
transported through, and fused with, that of the lower-layer neighborhoods,
which unavoidably results in feature smoothing between neighborhoods in
different layers and can thus compromise performance, especially on
heterophily networks. Furthermore, most deep GNNs recognize only the
importance of higher-layer neighborhoods and have yet to fully explore the
importance of multi-hop dependency across neighborhoods in different layers for
learning better representations. In this work, we first theoretically analyze
the feature smoothing between neighborhoods in different layers and empirically
demonstrate the variance of the homophily level across neighborhoods at
different layers. Motivated by these analyses, we further propose a tree
decomposition method to disentangle neighborhoods in different layers to
alleviate feature smoothing among these layers. Moreover, we characterize the
multi-hop dependency via graph diffusion within our tree decomposition
formulation to construct Tree Decomposed Graph Neural Network (TDGNN), which
can flexibly incorporate information from large receptive fields and aggregate
this information utilizing the multi-hop dependency. Comprehensive experiments
demonstrate the superior performance of TDGNN on both homophily and heterophily
networks under a variety of node classification settings. Extensive parameter
analysis highlights the ability of TDGNN to prevent over-smoothing and
incorporate features from shallow layers with deeper multi-hop dependencies,
which provides new insights towards deeper graph neural networks. Code of
TDGNN: http://github.com/YuWVandy/TDGNN
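As a concrete illustration of the two ingredients above, here is a minimal sketch: hop-wise propagation that keeps each layer's neighborhood representation separate (the disentanglement that tree decomposition provides), followed by a diffusion-weighted combination standing in for the multi-hop dependency. The dense adjacency, the function name, and the PPR-style coefficients are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch

def tdgnn_sketch(X, A_hat, K, theta):
    """Hedged sketch of hop-disentangled propagation with diffusion
    weights (illustrative only; official code is in the repo above).

    X:      [n, d] node features
    A_hat:  [n, n] symmetrically normalized adjacency (dense for clarity)
    K:      number of hops (receptive-field size)
    theta:  [K+1] diffusion coefficients, one per hop
    """
    hops, H = [X], X
    for _ in range(K):
        H = A_hat @ H            # one more hop of propagation
        hops.append(H)           # keep each hop's representation separate
    # Combine the disentangled hops in one shot, so higher-hop information
    # is never transported through (and smoothed with) lower-hop neighborhoods.
    return sum(t * h for t, h in zip(theta, hops))

# Toy usage with PPR-style coefficients theta_k = alpha * (1 - alpha)^k:
n, d, K, alpha = 5, 4, 3, 0.15
A = torch.bernoulli(torch.full((n, n), 0.4)); A = ((A + A.T) > 0).float()
deg = A.sum(1).clamp(min=1.0)
A_hat = A / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)
theta = torch.tensor([alpha * (1 - alpha) ** k for k in range(K + 1)])
Z = tdgnn_sketch(torch.randn(n, d), A_hat, K, theta)   # -> [5, 4]
```

Because the hop-k representation is computed directly as a power of the normalized adjacency rather than passing through lower layers' transformations, shallow-layer features survive even with deep multi-hop dependencies.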
Related papers
- Graph Elimination Networks [8.806990624643333]
Graph Neural Networks (GNNs) are widely applied across various domains, yet they perform poorly in deep layers.
We show that the root cause of GNNs' performance degradation in deep layers lies in ineffective neighborhood feature propagation.
We introduce Graph Elimination Networks (GENs), which employ a specific algorithm to eliminate redundancies during neighborhood propagation.
arXiv Detail & Related papers (2024-01-02T14:58:59Z)
- DepWiGNN: A Depth-wise Graph Neural Network for Multi-hop Spatial Reasoning in Text [52.699307699505646]
We propose a novel Depth-Wise Graph Neural Network (DepWiGNN) to handle multi-hop spatial reasoning.
Specifically, we design a novel node memory scheme and aggregate the information over the depth dimension instead of the breadth dimension of the graph.
Experimental results on two challenging multi-hop spatial reasoning datasets show that DepWiGNN outperforms existing spatial reasoning methods.
arXiv Detail & Related papers (2023-10-19T08:07:22Z)
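The depth-versus-breadth idea in the DepWiGNN summary can be pictured as follows: a per-node memory stores the representation obtained at every hop depth, and the final aggregation pools over that depth axis rather than over neighbors. This is a generic sketch under an assumed mean pooling, not DepWiGNN's actual node-memory scheme.

```python
import torch

def depthwise_aggregate(X, A_hat, K):
    """Illustrative depth-wise scheme (assumed, not DepWiGNN's exact
    node-memory design): store each node's hop-0..K representations in
    a memory, then pool over the *depth* axis, not over neighbors."""
    memory, H = [X], X
    for _ in range(K):
        H = A_hat @ H                  # one breadth step per depth level
        memory.append(H)
    M = torch.stack(memory, dim=1)     # [n, K+1, d] per-node depth memory
    return M.mean(dim=1)               # aggregation over depth (assumed mean)
```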
- AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing [29.618952407794776]
We propose an Alternating Graph-regularized Neural Network (AGNN) composed of a Graph Convolutional Layer (GCL) and a Graph Embedding Layer (GEL).
GEL is derived from a graph-regularized optimization problem containing a Laplacian embedding term, which can alleviate over-smoothing.
AGNN is evaluated via a large number of experiments including performance comparison with some multi-layer or multi-order graph neural networks.
arXiv Detail & Related papers (2023-04-14T09:20:03Z)
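The "Laplacian embedding term" behind GEL can be illustrated with the classic graph-regularized objective min_H ||H - X||_F^2 + lam * tr(H^T L H), whose closed-form minimizer is H = (I + lam * L)^{-1} X. The sketch below implements this textbook derivation; AGNN's actual GEL layer may differ.

```python
import torch

def graph_regularized_embedding(X, A, lam=1.0):
    """Closed-form minimizer of ||H - X||_F^2 + lam * tr(H^T L H),
    i.e. H = (I + lam * L)^(-1) X, with L = D - A the graph Laplacian.
    Textbook objective for illustration, not AGNN's exact layer."""
    D = torch.diag(A.sum(dim=1))
    L = D - A                                  # unnormalized Laplacian
    I = torch.eye(A.shape[0])
    return torch.linalg.solve(I + lam * L, X)  # smooths X along edges
```

The fidelity term keeps H anchored to the raw features X, which is why alternating such a layer with standard graph convolutions can counteract the collapse toward indistinguishable embeddings as depth grows.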
- Simple and Efficient Heterogeneous Graph Neural Network [55.56564522532328]
Heterogeneous graph neural networks (HGNNs) have a powerful capability to embed the rich structural and semantic information of a heterogeneous graph into node representations.
Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) over homogeneous graphs, especially the attention mechanism and the multi-layer structure.
This paper conducts an in-depth study of these mechanisms and proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN).
arXiv Detail & Related papers (2022-07-06T10:01:46Z)
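One way to read "simple and efficient" is that neighbor aggregation becomes a one-time precomputation with a light mean aggregator per relation, and the per-relation embeddings are then fused. The concatenation-plus-projection fusion below is an assumption standing in for SeHGNN's actual semantic-fusion module.

```python
import torch

def precompute_relation_features(X, adj_by_relation):
    """One-time, attention-free neighbor aggregation: a plain mean over
    each relation's (row-normalized) adjacency, done before training."""
    return {rel: A @ X for rel, A in adj_by_relation.items()}

class SemanticFusionSketch(torch.nn.Module):
    """Assumed fusion: concatenate per-relation embeddings and project.
    SeHGNN's actual semantic-fusion module may differ."""
    def __init__(self, num_relations, d, out_dim):
        super().__init__()
        self.proj = torch.nn.Linear(num_relations * d, out_dim)

    def forward(self, rel_feats):              # dict of [n, d] tensors
        return self.proj(torch.cat(list(rel_feats.values()), dim=1))
```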
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
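Both graph construction methods contrasted in that study are standard and easy to sketch from point coordinates; the helper names below are hypothetical.

```python
import torch

def knn_graph(pos, k):
    """K-nearest-neighbor graph: each point connects to its k closest
    points by Euclidean distance (excluding itself)."""
    dist = torch.cdist(pos, pos)                 # [n, n] pairwise distances
    dist.fill_diagonal_(float("inf"))            # forbid self-edges
    idx = dist.topk(k, largest=False).indices    # k nearest per node
    A = torch.zeros_like(dist)
    A.scatter_(1, idx, 1.0)                      # directed KNN edges
    return A

def fc_graph(n):
    """Fully-connected graph: every pair of distinct nodes is linked."""
    return torch.ones(n, n) - torch.eye(n)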
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by simply filtering out "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Simplifying approach to Node Classification in Graph Neural Networks [7.057970273958933]
We decouple the node feature aggregation step from the depth of the graph neural network and empirically analyze how different aggregated features contribute to prediction performance.
We show that not all features generated via aggregation steps are useful, and that using the less informative ones can be detrimental to the GNN model's performance.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that the proposed model achieves comparable or even higher accuracy than state-of-the-art GNN models.
arXiv Detail & Related papers (2021-11-12T14:53:22Z)
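FSGNN's feature selection reads naturally as softmax-normalized scalar weights over precomputed hop features, so near-zero weights effectively drop uninformative hops. The sketch below assumes simple powers of the normalized adjacency as the feature list, which may differ from the paper's exact set.

```python
import torch

class FeatureSelectionSketch(torch.nn.Module):
    """Shallow hop-feature selection in the spirit of FSGNN (the exact
    feature list and combination are assumptions, not the paper's code)."""
    def __init__(self, num_feats):
        super().__init__()
        # one learnable scalar per precomputed feature matrix
        self.scores = torch.nn.Parameter(torch.zeros(num_feats))

    def forward(self, feats):                    # feats: list of [n, d]
        w = torch.softmax(self.scores, dim=0)    # selection weights sum to 1
        # weights near zero suppress uninformative hop features
        return sum(wi * F for wi, F in zip(w, feats))

# Hop features are precomputed once, decoupling them from model depth:
# feats = [X, A_hat @ X, A_hat @ (A_hat @ X), ...]
```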
- Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks [68.9026534589483]
RioGNN is a novel reinforced, recursive, and flexible neighborhood-selection-guided multi-relational Graph Neural Network architecture.
RioGNN can learn more discriminative node embedding with enhanced explainability due to the recognition of individual importance of each relation.
arXiv Detail & Related papers (2021-04-16T04:30:06Z)
- RAN-GNNs: breaking the capacity limits of graph neural networks [43.66682619000099]
Graph neural networks have become a staple in problems involving learning and analysis of data defined over graphs.
Recent works attribute their capacity limits to the need to consider multiple neighborhood sizes at the same time and to adaptively tune them.
We show that employing a randomly-wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations.
arXiv Detail & Related papers (2021-03-29T12:34:36Z)
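"Randomly-wired" can be pictured as sampling a random DAG over message-passing blocks, each block consuming its predecessors' outputs. The generator and averaging rule below are generic assumptions for illustration, not the paper's exact wiring scheme.

```python
import torch

def random_wiring(num_blocks, p=0.5, seed=0):
    """Sample a random DAG: block j may receive input from any earlier
    block i < j with probability p."""
    g = torch.Generator().manual_seed(seed)
    edges = [(i, j) for j in range(1, num_blocks) for i in range(j)
             if torch.rand(1, generator=g).item() < p]
    # guarantee connectivity: every block at least hears its predecessor
    edges += [(j - 1, j) for j in range(1, num_blocks)]
    return sorted(set(edges))

def run_wired(X, A_hat, weights, wiring, num_blocks):
    """Each block averages the outputs on its in-edges and applies one
    propagation step; weights[j] is block j's [d, d] weight matrix."""
    outs = [torch.relu(A_hat @ X @ weights[0])]  # block 0 reads the input
    for j in range(1, num_blocks):
        preds = [outs[i] for i, jj in wiring if jj == j]
        h = torch.stack(preds).mean(0)
        outs.append(torch.relu(A_hat @ h @ weights[j]))
    return outs[-1]
```

Because different root-to-block paths traverse different numbers of propagation steps, a single randomly-wired network mixes many effective neighborhood sizes at once.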
- Enhance Information Propagation for Graph Neural Network by Heterogeneous Aggregations [7.3136594018091134]
Graph neural networks are emerging as a continuation of deep learning's success on graph data.
We propose to enhance information propagation among GNN layers by combining heterogeneous aggregations.
We empirically validate the effectiveness of the resulting model, HAG-Net, on a number of graph classification benchmarks.
arXiv Detail & Related papers (2021-02-08T08:57:56Z)
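"Combining heterogeneous aggregations" suggests running several aggregators over the same neighborhood and merging the results; the sum/mean/max trio and the concatenation below are assumed choices for illustration, not necessarily HAG-Net's.

```python
import torch

def hetero_aggregate(X, A):
    """Aggregate each node's neighborhood with several heterogeneous
    aggregators and concatenate them (assumed combination scheme;
    assumes every node has at least one neighbor)."""
    deg = A.sum(1, keepdim=True).clamp(min=1.0)
    s = A @ X                                    # sum aggregation
    m = s / deg                                  # mean aggregation
    # max aggregation: mask non-neighbors with -inf before taking the max
    mask = (A > 0).unsqueeze(-1)                 # [n, n, 1]
    Xn = X.unsqueeze(0).expand(A.shape[0], -1, -1)
    mx = Xn.masked_fill(~mask, float("-inf")).amax(dim=1)
    return torch.cat([s, m, mx], dim=1)          # [n, 3d]
```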
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
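DAGNN's adaptive mechanism, as described in its paper, decouples transformation from propagation and learns a per-node, per-hop retention score that gates each receptive field. A compact sketch (hyperparameters and layer sizes are illustrative):

```python
import torch

class DAGNNSketch(torch.nn.Module):
    """Sketch of DAGNN-style adaptive adjustment: one shared transformation,
    parameter-free propagation, and learned per-hop retention scores."""
    def __init__(self, in_dim, hid_dim, K):
        super().__init__()
        self.mlp = torch.nn.Linear(in_dim, hid_dim)  # transformation
        self.gate = torch.nn.Linear(hid_dim, 1)      # retention scoring
        self.K = K

    def forward(self, X, A_hat):
        Z = torch.relu(self.mlp(X))
        hops, H = [Z], Z
        for _ in range(self.K):                      # parameter-free propagation
            H = A_hat @ H
            hops.append(H)
        Hs = torch.stack(hops, dim=1)                # [n, K+1, d]
        s = torch.sigmoid(self.gate(Hs))             # [n, K+1, 1] hop scores
        return (s * Hs).sum(dim=1)                   # adaptive combination
```

Like TDGNN, this keeps hop representations separate until a final weighted combination, which is why both can reach large receptive fields without over-smoothing.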
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.