Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias
- URL: http://arxiv.org/abs/2309.14907v1
- Date: Tue, 26 Sep 2023 13:09:43 GMT
- Title: Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias
- Authors: Zhihao Shi, Jie Wang, Fanghua Lu, Hanzhu Chen, Defu Lian, Zheng Wang, Jieping Ye, Feng Wu
- Abstract summary: We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
- Score: 75.44877675117749
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node representation learning on attributed graphs -- whose nodes are
associated with rich attributes (e.g., texts and protein sequences) -- plays a
crucial role in many important downstream tasks. To encode the attributes and
graph structures simultaneously, recent studies integrate pre-trained models
with graph neural networks (GNNs), where pre-trained models serve as node
encoders (NEs) to encode the attributes. As jointly training large NEs and GNNs
on large-scale graphs suffers from severe scalability issues, many methods
propose to train NEs and GNNs separately. Consequently, they do not take the
feature convolutions in GNNs into account during the training phase of NEs,
which leads to a significant learning bias relative to joint training. To
address this challenge, we propose an efficient label regularization technique,
namely Label Deconvolution (LD), to alleviate the learning bias by a novel and
highly scalable approximation to the inverse mapping of GNNs. The inverse
mapping yields an objective function that is equivalent to that of joint
training, while effectively incorporating GNNs into the training phase of
NEs to counteract the learning bias. More importantly, we show that LD converges to
the optimal objective function values attained by joint training under mild
assumptions. Experiments demonstrate that LD significantly outperforms
state-of-the-art methods on Open Graph Benchmark datasets.
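The following is a minimal PyTorch sketch of the core idea, under the assumption that the inverse mapping of the GNN's feature convolution is approximated by a learnable polynomial of the normalized adjacency applied to the labels; the class and parameter names (`LabelDeconvolution`, `num_hops`, `coeffs`) are illustrative stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the Label Deconvolution idea. Assumptions: a sparse
# normalized adjacency `adj` and (soft) one-hot label rows `y` are provided
# elsewhere; the node encoder is trained separately against the output below.

class LabelDeconvolution(nn.Module):
    """Approximate the inverse of K-hop GNN feature propagation with a
    learnable polynomial of the normalized adjacency applied to labels."""

    def __init__(self, num_hops: int):
        super().__init__()
        # One coefficient per propagation power; this learnable mixture
        # stands in for the intractable exact inverse mapping of the GNN.
        self.coeffs = nn.Parameter(torch.ones(num_hops + 1) / (num_hops + 1))

    def forward(self, adj: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = self.coeffs[0] * y
        for k in range(1, self.coeffs.numel()):
            y = torch.sparse.mm(adj, y)   # k-hop label propagation
            out = out + self.coeffs[k] * y
        return out  # "deconvolved" labels used as NE training targets

# Hypothetical usage:
#   deconv = LabelDeconvolution(num_hops=3)
#   y_deconv = deconv(adj_norm, y_onehot)  # targets for node-encoder training
```

The node encoder is then trained to fit these deconvolved labels, so the GNN's feature convolution shapes the NE objective without end-to-end backpropagation through the GNN on the full graph.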
Related papers
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and constrain the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Breaking the Entanglement of Homophily and Heterophily in Semi-supervised Node Classification [25.831508778029097]
We introduce AMUD, which quantifies the relationship between node profiles and topology from a statistical perspective.
We also propose ADPA as a new directed graph learning paradigm for AMUD.
arXiv Detail & Related papers (2023-12-07T07:54:11Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, which aims to assess the performance of a specific GNN model, trained on labeled and observed graphs, on unseen graphs without labels.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Simple yet Effective Gradient-Free Graph Convolutional Networks [20.448409424929604]
Linearized Graph Neural Networks (GNNs) have attracted great attention in recent years for graph representation learning.
In this paper, we relate over-smoothing with the vanishing gradient phenomenon and craft a gradient-free training framework.
Our methods achieve better and more stable performance on node classification tasks with varying depths while requiring much less training time.
arXiv Detail & Related papers (2023-02-01T11:00:24Z)
- Neighborhood Convolutional Network: A New Paradigm of Graph Neural Networks for Node Classification [12.062421384484812]
The decoupled Graph Convolutional Network (GCN) separates neighborhood aggregation from feature transformation in each convolutional layer (see the sketch after this list).
In this paper, we propose a new paradigm of GCN, termed Neighborhood Convolutional Network (NCN).
In this way, the model inherits the merit of decoupled GCNs in aggregating neighborhood information while developing much more powerful feature learning modules.
arXiv Detail & Related papers (2022-11-15T02:02:51Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
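As referenced in the Neighborhood Convolutional Network entry above, the following is a minimal sketch of the decoupled-GCN pattern that NCN builds on: neighborhood aggregation is precomputed as repeated propagation over the normalized adjacency (SGC-style), and only a feature-transformation module is trained. The names `precompute_propagation` and `num_hops` are illustrative, and this is a generic sketch of decoupling, not the NCN model itself.

```python
import torch
import torch.nn as nn

# Decoupled GCN sketch: aggregation (fixed, precomputed) is separated from
# feature transformation (learned). Assumes a sparse normalized adjacency
# `adj_norm` and dense node features `x` are provided elsewhere.

def precompute_propagation(adj_norm: torch.Tensor, x: torch.Tensor,
                           num_hops: int) -> torch.Tensor:
    """Neighborhood aggregation step: propagate features num_hops times."""
    for _ in range(num_hops):
        x = torch.sparse.mm(adj_norm, x)
    return x  # computed once, outside the training loop

# Feature transformation step: any trainable module, here a small MLP
# (input/hidden/output sizes are placeholders).
mlp = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Training then touches only the MLP, e.g.:
#   x_agg = precompute_propagation(adj_norm, x, num_hops=2)
#   logits = mlp(x_agg)
```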