Enhancing Intra-class Information Extraction for Heterophilous Graphs:
One Neural Architecture Search Approach
- URL: http://arxiv.org/abs/2211.10990v1
- Date: Sun, 20 Nov 2022 14:37:09 GMT
- Title: Enhancing Intra-class Information Extraction for Heterophilous Graphs:
One Neural Architecture Search Approach
- Authors: Lanning Wei, Zhiqiang He, Huan Zhao, Quanming Yao
- Abstract summary: We propose IIE-GNN (Intra-class Information Enhanced Graph Neural Networks) to achieve two improvements.
A unified framework is proposed based on the literature, in which the intra-class information from the node itself and neighbors can be extracted.
We also conduct experiments to show that IIE-GNN can improve the model performance by designing node-wise GNNs to enhance intra-class information extraction.
- Score: 41.84399177525008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Graph Neural Networks (GNNs) have been popular in graph
representation learning which assumes the homophily property, i.e., the
connected nodes have the same label or have similar features. However, they may
fail to generalize to heterophilous graphs, which have a low/medium level
of homophily. Existing methods tend to address this problem by enhancing the
intra-class information extraction, i.e., either by designing better GNNs to
improve the model effectiveness, or re-designing the graph structures to
incorporate more potential intra-class nodes from distant hops. Despite the
success, we observe two aspects that can be further improved: (a) enhancing the
ego feature information extraction from the node itself, which is more reliable in
extracting the intra-class information; (b) designing node-wise GNNs can better
adapt to the nodes with different homophily ratios. In this paper, we propose a
novel method IIE-GNN (Intra-class Information Enhanced Graph Neural Networks)
to achieve two improvements. A unified framework is proposed based on the
literature, in which the intra-class information from the node itself and
neighbors can be extracted based on seven carefully designed blocks. With the
help of neural architecture search (NAS), we propose a novel search space based
on the framework, and then provide an architecture predictor to design GNNs for
each node. We further conduct experiments to show that IIE-GNN can improve the
model performance by designing node-wise GNNs to enhance intra-class
information extraction.
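The homophily property the abstract builds on can be made concrete with a short sketch. The function and toy graph below are hypothetical illustrations, not code from the paper: the edge homophily ratio is simply the fraction of edges whose endpoints share a label, and heterophilous graphs are those where this ratio is low.

```python
# Minimal sketch of the edge homophily ratio, the quantity behind the
# abstract's "low/medium level of homophily". Toy graph only; real
# pipelines would use a graph library such as PyTorch Geometric.

def edge_homophily(edges, labels):
    """Fraction of edges whose two endpoints share the same label."""
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Toy graph: 4 nodes in two classes, 2 of 3 edges are intra-class.
labels = {0: "a", 1: "a", 2: "b", 3: "b"}
edges = [(0, 1), (1, 2), (2, 3)]
print(edge_homophily(edges, labels))  # 0.6666666666666666
```

A ratio near 1 indicates a homophilous graph; heterophilous benchmarks sit well below that.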
Related papers
- Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network [4.078409998614025]
Under heterophily, where nodes with different labels tend to be connected based on semantic meanings, Graph Neural Networks (GNNs) often exhibit suboptimal performance.
We propose and demonstrate that the valuable semantic information inherent in heterophily can be utilized effectively in graph learning.
We propose HiGNN, an innovative approach that constructs an additional graph structure integrating heterophilous information by leveraging node distribution.
arXiv Detail & Related papers (2024-03-26T03:29:42Z)
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge Adaptation framework via reinforcement learning for Dynamic Graph Neural Networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves the state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z)
- Network In Graph Neural Network [9.951298152023691]
We present a model-agnostic methodology that allows arbitrary GNN models to increase their model capacity by making the model deeper.
Instead of adding or widening GNN layers, NGNN deepens a GNN model by inserting non-linear feedforward neural network layer(s) within each GNN layer.
arXiv Detail & Related papers (2021-11-23T03:58:56Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z)
- Enhance Information Propagation for Graph Neural Network by Heterogeneous Aggregations [7.3136594018091134]
Graph neural networks are emerging as a continuation of deep learning's success on graph data.
We propose to enhance information propagation among GNN layers by combining heterogeneous aggregations.
We empirically validate the effectiveness of HAG-Net on a number of graph classification benchmarks.
arXiv Detail & Related papers (2021-02-08T08:57:56Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
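The Node2Seq entry above describes a two-step aggregation: score neighbors with attention, sort them into a sequence, then weight each rank position separately, as a 1D convolution would. The sketch below is a hedged, pure-Python illustration of that idea; the function name, dot-product attention, and toy numbers are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of attention-sorted, position-weighted aggregation
# in the spirit of Node2Seq. Dot product stands in for attention; one
# scalar weight per rank position mimics a length-k 1D conv kernel.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def node2seq_aggregate(target, neighbors, position_weights):
    """Sort neighbors by attention score, then weight by rank position."""
    scores = [dot(target, n) for n in neighbors]
    order = sorted(range(len(neighbors)), key=lambda i: -scores[i])
    seq = [neighbors[i] for i in order]  # rank-ordered neighbor features
    dim = len(target)
    return [sum(w * n[j] for w, n in zip(position_weights, seq))
            for j in range(dim)]

target = [1.0, 0.0]
neighbors = [[0.0, 1.0], [2.0, 0.0], [1.0, 1.0]]
weights = [0.5, 0.3, 0.2]  # highest-scoring neighbor gets weight 0.5
print(node2seq_aggregate(target, neighbors, weights))  # [1.3, 0.5]
```

The point of the sorting step is that the learned per-position weights become meaningful: position 0 always holds the most relevant neighbor, regardless of node ordering in the input graph.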
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.