All Nodes are created Not Equal: Node-Specific Layer Aggregation and Filtration for GNN
- URL: http://arxiv.org/abs/2405.07892v1
- Date: Mon, 13 May 2024 16:27:12 GMT
- Title: All Nodes are created Not Equal: Node-Specific Layer Aggregation and Filtration for GNN
- Authors: Shilong Wang, Hao Wu, Yifan Duan, Guibin Zhang, Guohao Li, Yuxuan Liang, Shirui Pan, Kun Wang, Yang Wang
- Abstract summary: We propose a Node-Specific Layer Aggregation and Filtration architecture, termed NoSAF, for Graph Neural Networks.
NoSAF provides a reliable information filter for each layer's nodes to sieve out information beneficial for the subsequent layer.
NoSAF also incorporates a compensation mechanism that replenishes information in every layer of the model, allowing NoSAF to perform meaningful computations even in very deep layers.
- Score: 61.28174338723467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ever-growing family of Graph Neural Networks, though opening a promising path for modeling graph-structured data, unfortunately introduces two daunting obstacles to deployment on devices. (I) Most existing GNNs are shallow, due mostly to the over-smoothing and gradient-vanishing problems that arise as they deepen, much as in convolutional architectures. (II) The vast majority of GNNs adhere to the homophily assumption, where a central node and its adjacent nodes share the same label; this assumption often poses challenges for GNNs working with heterophilic graphs. Addressing these two issues has become a looming challenge for the robustness and scalability of GNN applications. In this paper, we take a comprehensive and systematic approach to overcoming both challenges for the first time. We propose a Node-Specific Layer Aggregation and Filtration architecture, termed NoSAF, a framework capable of filtering and processing information from each individual node. NoSAF introduces the concept of "All Nodes are Created Not Equal" into every layer of a deep network, providing a reliable information filter for each layer's nodes that sieves out the information beneficial to the subsequent layer. By incorporating a dynamically updated codebank, NoSAF optimizes the information passed downward at each layer. This effectively overcomes heterophilic issues and aids in deepening the network. To compensate for the information loss caused by the continuous filtering in NoSAF, we also propose NoSAF-D (Deep), which incorporates a compensation mechanism that replenishes information in every layer of the model, allowing NoSAF to perform meaningful computations even at very deep layers.
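As a rough illustration of the architecture the abstract describes, here is a minimal PyTorch sketch of one layer with a node-specific filter, a codebank update, and the NoSAF-D compensation step. The gate network, the codebank update rule, and the compensation formula are all assumptions made for illustration; they are not the authors' implementation.

```python
import torch
import torch.nn as nn

class NoSAFLayerSketch(nn.Module):
    """One GNN layer with a node-specific filter and a codebank.

    A reading of the abstract, not the authors' code: the gate network,
    the codebank update rule, and the NoSAF-D compensation step are all
    illustrative assumptions.
    """

    def __init__(self, dim: int, compensate: bool = False):
        super().__init__()
        self.conv = nn.Linear(dim, dim)      # stand-in for any GNN conv
        self.gate = nn.Linear(2 * dim, dim)  # node-specific filter
        self.compensate = compensate         # True ~ the NoSAF-D variant

    def forward(self, h, adj, codebank):
        # Mean aggregation over neighbors (placeholder message passing).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        m = torch.relu(self.conv(adj @ h / deg))

        # "All Nodes are Created Not Equal": each node gets its own filter.
        # A sigmoid gate conditioned on the node's new message and its
        # codebank entry decides what flows to the next layer.
        g = torch.sigmoid(self.gate(torch.cat([m, codebank], dim=-1)))
        kept = g * m

        # Dynamically update the codebank with what each node kept.
        codebank = codebank + kept

        # NoSAF-D: replenish part of what the filter removed so very deep
        # stacks still perform meaningful computation.
        out = kept + (1.0 - g) * h if self.compensate else kept
        return out, codebank


# Toy usage: 5 nodes, 8 features, a random adjacency with self-loops.
n, d = 5, 8
adj = ((torch.rand(n, n) > 0.5).float() + torch.eye(n)).clamp(max=1.0)
h = torch.randn(n, d)
codebank = torch.zeros(n, d)
layer = NoSAFLayerSketch(d, compensate=True)
h, codebank = layer(h, adj, codebank)
print(h.shape, codebank.shape)
```

Stacking such layers while carrying the codebank forward is what, per the abstract, keeps the filtered information useful even at depth.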
Related papers
- The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive Field [39.679151680622375]
We introduce the Snowflake Hypothesis, a novel paradigm underpinning the concept of "one node, one receptive field".
We employ the simplest gradient and node-level cosine distance as guiding principles to regulate the aggregation depth for each node; a sketch of this per-node gating follows this entry.
Observational results demonstrate that the hypothesis can serve as a universal operator for a range of tasks.
arXiv Detail & Related papers (2023-08-19T15:21:12Z)
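The per-node depth regulation mentioned above might look like the following sketch, which freezes a node's embedding once its layer-to-layer cosine distance falls below a threshold. The freezing rule and the threshold `eps` are assumptions; the paper's criterion also consults gradients, which this sketch omits.

```python
import torch
import torch.nn.functional as F

def freeze_converged_nodes(h_prev, h_new, frozen, eps=0.02):
    """Per-node early stopping of aggregation (a sketch, not the paper's
    exact criterion).

    Nodes whose embedding barely moved between two layers (cosine distance
    below eps) keep their previous embedding from then on, so their
    receptive field stops growing; eps is an illustrative threshold.
    """
    cos_dist = 1.0 - F.cosine_similarity(h_prev, h_new, dim=-1)
    frozen = frozen | (cos_dist < eps)           # once frozen, stay frozen
    h_out = torch.where(frozen.unsqueeze(-1), h_prev, h_new)
    return h_out, frozen


# Toy loop: plain mean aggregation, stopping per node upon convergence.
n, d, depth = 6, 4, 8
adj = ((torch.rand(n, n) > 0.6).float() + torch.eye(n)).clamp(max=1.0)
h = torch.randn(n, d)
frozen = torch.zeros(n, dtype=torch.bool)
for _ in range(depth):
    h_new = adj @ h / adj.sum(dim=1, keepdim=True)
    h, frozen = freeze_converged_nodes(h, h_new, frozen)
print("frozen nodes:", frozen.tolist())
```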
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- DPAR: Decoupled Graph Neural Networks with Node-Level Differential Privacy [30.15971370844865]
We aim to achieve node-level differential privacy (DP) for training GNNs so that a node and its edges are protected.
We propose a Decoupled GNN with Differentially Private Approximate Personalized PageRank (DPAR) for training GNNs with an enhanced privacy-utility tradeoff.
arXiv Detail & Related papers (2022-10-10T05:34:25Z)
- NDGGNET-A Node Independent Gate based Graph Neural Networks [6.155450481110693]
For nodes with sparse connectivity, it is difficult to obtain enough information through a single GNN layer.
In this work, we define a novel framework that allows a normal GNN model to accommodate more layers; a gate sketch in this spirit follows this entry.
Experimental results show that our proposed model can effectively increase the model depth and perform well on several datasets.
arXiv Detail & Related papers (2022-05-11T08:51:04Z)
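A node-independent gate in the spirit of the entry above could be sketched as follows; the scalar sigmoid gate and the residual mixing are our assumptions, not NDGGNET's actual formulation.

```python
import torch
import torch.nn as nn

class NodeGateLayerSketch(nn.Module):
    """Residual GNN layer with a node-independent gate (an assumption about
    the flavor of the idea, not NDGGNET's design): each node decides how
    much new aggregation to absorb versus how much of its previous state
    to keep, so sparsely connected nodes can keep accumulating information
    across many layers while dense nodes saturate early."""

    def __init__(self, dim: int):
        super().__init__()
        self.conv = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, 1)  # one scalar gate per node

    def forward(self, h, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        m = torch.relu(self.conv(adj @ h / deg))
        g = torch.sigmoid(self.gate(torch.cat([h, m], dim=-1)))
        return g * m + (1.0 - g) * h  # gated residual update
```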
- Enhance Information Propagation for Graph Neural Network by Heterogeneous Aggregations [7.3136594018091134]
Graph neural networks are emerging as a continuation of deep learning's success on graph data.
We propose to enhance information propagation among GNN layers by combining heterogeneous aggregations; a sketch of such a mix follows this entry.
We empirically validate the effectiveness of HAG-Net on a number of graph classification benchmarks.
arXiv Detail & Related papers (2021-02-08T08:57:56Z)
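Mixing heterogeneous aggregations within one layer could look like the sketch below, which concatenates sum, mean, and masked-max neighbor aggregates before a linear fusion. Which aggregators HAG-Net actually combines, and how, goes beyond this summary, so treat the choice here as illustrative.

```python
import torch
import torch.nn as nn

class HeteroAggLayerSketch(nn.Module):
    """Combines several neighbor aggregations in one layer. The sum/mean/max
    mix and the linear fusion are illustrative assumptions, not HAG-Net's
    actual design."""

    def __init__(self, dim: int):
        super().__init__()
        self.mix = nn.Linear(3 * dim, dim)

    def forward(self, h, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        sum_agg = adj @ h
        mean_agg = sum_agg / deg
        # Masked max over neighbors; assumes self-loops so no row is empty.
        mask = adj.bool().unsqueeze(-1)                      # (n, n, 1)
        expanded = h.unsqueeze(0).expand(h.size(0), -1, -1)  # (n, n, d)
        max_agg = torch.where(
            mask, expanded, torch.full_like(expanded, float('-inf'))
        ).amax(dim=1)
        # Fuse the heterogeneous aggregates into one representation.
        return torch.relu(self.mix(torch.cat([sum_agg, mean_agg, max_agg], dim=-1)))
```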
- NCGNN: Node-level Capsule Graph Neural Network [45.23653314235767]
Node-level Capsule Graph Neural Network (NCGNN) represents nodes as groups of capsules.
A novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation.
NCGNN can well address the over-smoothing issue and outperforms the state of the art by producing better node embeddings for classification.
arXiv Detail & Related papers (2020-12-07T06:46:17Z)
- Deoscillated Graph Collaborative Filtering [74.55967586618287]
Collaborative Filtering (CF) signals are crucial for a Recommender System (RS) model to learn user and item embeddings.
Recent Graph Neural Networks (GNNs) propose to stack multiple aggregation layers to propagate high-order signals.
We propose a new RS model, named Deoscillated Graph Collaborative Filtering (DGCF).
arXiv Detail & Related papers (2020-11-04T02:26:53Z)
- Tackling Over-Smoothing for General Graph Convolutional Networks [88.71154017107257]
We study how general GCNs act with the increase in depth, including generic GCN, GCN with bias, ResGCN, and APPNP.
We propose DropEdge to alleviate over-smoothing by randomly removing a certain number of edges at each training epoch; a minimal version of this operation follows this entry.
arXiv Detail & Related papers (2020-08-22T16:14:01Z)
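DropEdge itself is simple to sketch: resample a retained edge set once per epoch and run message passing on the reduced graph. The minimal version below assumes the common (2, E) COO edge-index convention.

```python
import torch

def drop_edge(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Minimal DropEdge: keep each edge independently with probability 1 - p.

    edge_index is assumed to be a (2, E) COO tensor, the convention common
    in GNN libraries; resample once per training epoch.
    """
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]


# Usage: run message passing on the reduced graph each epoch.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
print(drop_edge(edge_index, p=0.5))
```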
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model local graph structures and capture hierarchical patterns by aggregating information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs as a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance local and global dense feature flow by fully exploiting hierarchical features from all the convolution layers.
Technically, we propose an efficient and effective CNN framework, i.e., the Fast Dense Residual Network (FDRN), for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.