NAFS: A Simple yet Tough-to-beat Baseline for Graph Representation
Learning
- URL: http://arxiv.org/abs/2206.08583v1
- Date: Fri, 17 Jun 2022 06:53:04 GMT
- Title: NAFS: A Simple yet Tough-to-beat Baseline for Graph Representation
Learning
- Authors: Wentao Zhang, Zeang Sheng, Mingyu Yang, Yang Li, Yu Shen, Zhi Yang,
Bin Cui
- Abstract summary: We present node-adaptive feature smoothing (NAFS), a simple non-parametric method that constructs node representations without parameter learning.
We conduct experiments on four benchmark datasets in two different application scenarios: node clustering and link prediction.
Remarkably, NAFS with feature ensemble outperforms the state-of-the-art GNNs on these tasks.
- Score: 26.79012993334157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, graph neural networks (GNNs) have shown prominent performance in
graph representation learning by leveraging knowledge from both graph structure
and node features. However, most of them have two major limitations. First,
GNNs can learn higher-order structural information by stacking more layers, but
cannot be made very deep due to the over-smoothing issue. Second, these methods
are hard to apply to large graphs because of their high computation cost and
memory usage. In this paper, we present node-adaptive feature smoothing (NAFS),
a simple non-parametric method that constructs node representations without
parameter learning. NAFS first extracts the features of each node together with
those of its neighbors at different hops via feature smoothing, and then
adaptively combines the smoothed features. Moreover, the constructed node
representations can be further enhanced by ensembling smoothed features
extracted via different smoothing strategies. We conduct experiments on four
benchmark datasets in two different application scenarios: node clustering and
link prediction. Remarkably, NAFS with feature ensemble outperforms
state-of-the-art GNNs on these tasks and mitigates the aforementioned two
limitations of most learning-based GNN counterparts.
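To make the smoothing-and-combination pipeline concrete, here is a minimal sketch of NAFS-style node-adaptive feature smoothing. The symmetric normalization, the use of the deepest smoothing step as a proxy for the over-smoothed limit, and the softmax weighting are illustrative assumptions; the paper's exact weighting scheme may differ.

```python
# A minimal sketch of NAFS-style node-adaptive feature smoothing, assuming a
# symmetrically normalized adjacency and a softmax weighting over hop-wise
# distances to an over-smoothed limit; the paper's exact weighting may differ.
import numpy as np
import scipy.sparse as sp

def normalize_adj(adj):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    return d_inv_sqrt @ adj @ d_inv_sqrt

def nafs_features(adj, X, num_hops=8):
    """Smooth features over 0..num_hops hops, then combine them per node."""
    A_hat = normalize_adj(adj)
    smoothed = [X]
    for _ in range(num_hops):
        smoothed.append(A_hat @ smoothed[-1])
    # Proxy for the over-smoothed stationary state: the deepest smoothing step.
    limit = smoothed[-1]
    # Per-node distance of each hop's features from that limit: (num_hops+1, n).
    dist = np.stack([np.linalg.norm(S - limit, axis=1) for S in smoothed])
    # Hops whose features are still far from the limit get larger weights.
    e = np.exp(dist - dist.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)
    # Node-adaptive combination: every node mixes its own hop-wise features.
    return sum(w[:, None] * S for w, S in zip(weights, smoothed))
```

Running this with several smoothing operators (e.g., a row-normalized adjacency instead of the symmetric one) and averaging or concatenating the outputs gives a feature ensemble in the spirit of the paper; the resulting embeddings can then be fed to k-means for clustering or to a dot-product decoder for link prediction.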
Related papers
- Graph Sparsification via Mixture of Graphs [67.40204130771967]
We introduce Mixture-of-Graphs (MoG) to dynamically select tailored pruning solutions for each node.
MoG incorporates multiple sparsifier experts, each characterized by unique sparsity levels and pruning criteria, and selects the appropriate experts for each node.
Experiments on four large-scale OGB datasets and two superpixel datasets, equipped with five GNNs, demonstrate that MoG identifies subgraphs at higher sparsity levels.
arXiv Detail & Related papers (2024-05-23T07:40:21Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
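As a rough illustration of this stratification, the sketch below buckets nodes by degree quantiles and applies a bucket-specific weight matrix after a shared aggregation step. The quantile bucketing and single-layer form are assumptions for the example, not the paper's exact architecture.

```python
# A minimal sketch of degree-based stratification: nodes are bucketed by
# degree, and each bucket gets its own weight matrix (illustrative design).
import numpy as np

def degree_buckets(degrees, num_buckets=3):
    """Assign nodes to buckets by degree quantiles."""
    edges = np.quantile(degrees, np.linspace(0, 1, num_buckets + 1)[1:-1])
    return np.digitize(degrees, edges)

def stratified_layer(A_hat, X, weights, bucket_of):
    """One propagation layer where node i is transformed by weights[bucket_of[i]].

    A_hat: (n, n) normalized adjacency; X: (n, d) features;
    weights: list of (d, d_out) matrices, one per degree bucket;
    bucket_of: (n,) bucket index of each node.
    """
    H = A_hat @ X                      # shared neighborhood aggregation
    out = np.empty((X.shape[0], weights[0].shape[1]))
    for b, W in enumerate(weights):
        mask = bucket_of == b
        out[mask] = H[mask] @ W        # group-specific transformation
    return np.maximum(out, 0.0)        # ReLU
```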
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the models that perform best on such features in standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
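A minimal sketch of this kind of fusion, assuming scikit-learn models and a normalized adjacency `A_hat`: base-model probabilities are smoothed over the graph and stacked with the raw features for a second-stage model. Standard stacking safeguards such as cross-fitting are omitted for brevity.

```python
# A minimal sketch of stacking an off-the-shelf tabular model with graph-aware
# propagation (illustrative; not the paper's exact framework).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def stacked_predict(A_hat, X, y, train_idx, test_idx, hops=2):
    # Stage 1: an IID model that knows nothing about the graph.
    base = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    P = base.predict_proba(X)          # (n, num_classes) base predictions
    # Graph-aware step: smooth the base predictions over the graph.
    for _ in range(hops):
        P = A_hat @ P
    # Stage 2: fuse raw features with the graph-smoothed predictions.
    Z = np.hstack([X, P])
    meta = LogisticRegression(max_iter=1000).fit(Z[train_idx], y[train_idx])
    return meta.predict(Z[test_idx])
```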
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Learning heterophilious edge to drop: A general framework for boosting graph neural networks [19.004710957882402]
This work is the first to mitigate the negative impact of heterophily by optimizing the graph structure.
We propose a structure learning method called LHE to identify heterophilious edges to drop.
Experiments demonstrate the remarkable performance improvement of GNNs with LHE on multiple datasets across the full spectrum of homophily levels.
arXiv Detail & Related papers (2022-05-23T14:07:29Z)
- Simplifying approach to Node Classification in Graph Neural Networks [7.057970273958933]
We decouple the node feature aggregation step from the depth of the graph neural network, and empirically analyze how different aggregated features contribute to prediction performance.
We show that not all features generated via aggregation steps are useful, and that using these less informative features can be detrimental to the performance of the GNN model.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that the proposed model achieves comparable or even higher accuracy than state-of-the-art GNN models.
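The sketch below illustrates the decoupling: hop-wise features A_hat^k X are precomputed once outside the model, and a shallow network learns softmax scores that select among them. The hidden size and the summation-based combination are illustrative choices, not FSGNN's exact design.

```python
# A minimal sketch of FSGNN-style feature selection over precomputed
# hop-wise aggregated features (hyperparameters are illustrative).
import torch
import torch.nn as nn

class FeatureSelectionGNN(nn.Module):
    def __init__(self, num_hops, in_dim, num_classes, hidden=64):
        super().__init__()
        # One learnable selection score per precomputed feature matrix.
        self.scores = nn.Parameter(torch.zeros(num_hops))
        self.proj = nn.ModuleList(nn.Linear(in_dim, hidden) for _ in range(num_hops))
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, hop_feats):          # list of (n, in_dim) tensors
        # Soft feature selection: down-weight uninformative hops.
        w = torch.softmax(self.scores, dim=0)
        h = sum(w[k] * torch.relu(p(hop_feats[k])) for k, p in enumerate(self.proj))
        return self.out(h)
```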
arXiv Detail & Related papers (2021-11-12T14:53:22Z)
- Improving Graph Neural Networks with Simple Architecture Design [7.057970273958933]
We introduce several key design strategies for graph neural networks.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN).
We show that the proposed model outperforms other state-of-the-art GNN models and achieves up to 64% improvement in accuracy on node classification tasks.
arXiv Detail & Related papers (2021-05-17T06:46:01Z)
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
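A minimal sketch of the sort-then-convolve step, assuming each node comes with a fixed number of sampled neighbors; the pairwise scoring function is a simplifying assumption, and the adaptive non-local part is omitted.

```python
# A minimal sketch of the Node2Seq idea: neighbors are ordered by attention
# scores and a 1D convolution assigns position-specific weights (illustrative).
import torch
import torch.nn as nn

class Node2SeqLayer(nn.Module):
    def __init__(self, dim, num_neighbors):
        super().__init__()
        self.att = nn.Linear(2 * dim, 1)                 # scores (target, neighbor) pairs
        self.conv = nn.Conv1d(dim, dim, kernel_size=num_neighbors)

    def forward(self, target, neighbors):
        # target: (n, d); neighbors: (n, k, d), k sampled/padded neighbors per node
        n, k, d = neighbors.shape
        pairs = torch.cat([target.unsqueeze(1).expand(-1, k, -1), neighbors], dim=-1)
        scores = self.att(pairs).squeeze(-1)             # (n, k)
        order = scores.argsort(dim=1, descending=True)   # sort neighbors by attention
        seq = torch.gather(neighbors, 1, order.unsqueeze(-1).expand(-1, -1, d))
        # 1D conv over the ordered sequence gives explicit per-position weights.
        return self.conv(seq.transpose(1, 2)).squeeze(-1)  # (n, d)
```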
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks by operating directly on graph-smoothed node features.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structure-related features for graph representation learning.
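As a rough illustration of the DE idea, the sketch below augments node features with one-hot shortest-path distances to a target node set; the distance cap and sum-pooling are illustrative simplifications of the measures the paper studies (which also include landing probabilities).

```python
# A rough illustration of distance encoding: every node gets extra features
# built from its (capped) shortest-path distances to a target node set.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import shortest_path

def distance_encoding(adj, target_nodes, max_dist=4):
    # Shortest-path distances from each target node to every node: (t, n).
    dists = shortest_path(sp.csr_matrix(adj), directed=False,
                          unweighted=True, indices=target_nodes)
    # Cap unreachable/large distances, then one-hot encode.
    dists = np.nan_to_num(dists, posinf=max_dist).clip(0, max_dist).astype(int)
    onehot = np.eye(max_dist + 1)[dists]   # (t, n, max_dist + 1)
    # Pool over the target set so every node gets one fixed-size feature vector.
    return onehot.sum(axis=0)              # (n, max_dist + 1)
```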
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.