Noise-robust Graph Learning by Estimating and Leveraging Pairwise
Interactions
- URL: http://arxiv.org/abs/2106.07451v2
- Date: Sat, 3 Jun 2023 01:42:37 GMT
- Title: Noise-robust Graph Learning by Estimating and Leveraging Pairwise
Interactions
- Authors: Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu,
Wenbing Huang, Yixuan Li, Junzhou Huang
- Abstract summary: This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs.
PI-GNN relies on PI as a primary learning proxy, in addition to pointwise learning from the noisy node class labels.
Our proposed framework PI-GNN contributes two novel components: (1) a confidence-aware PI estimation model that adaptively estimates the PI labels, and (2) a decoupled training approach that leverages the estimated PI labels.
- Score: 123.07967420310796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Teaching Graph Neural Networks (GNNs) to accurately classify nodes under
severely noisy labels is an important problem in real-world graph learning
applications, but is currently underexplored. Although pairwise training
methods have demonstrated promise in supervised metric learning and
unsupervised contrastive learning, they remain less studied on noisy graphs,
where structural pairwise interactions (PI) between nodes are abundant and
thus might benefit label-noise learning more than pointwise methods do. This
paper bridges the gap by proposing a pairwise framework for noisy node
classification on graphs, which relies on the PI as a primary learning proxy in
addition to the pointwise learning from the noisy node class labels. Our
proposed framework PI-GNN contributes two novel components: (1) a
confidence-aware PI estimation model that adaptively estimates the PI labels,
which are defined as whether the two nodes share the same node labels, and (2)
a decoupled training approach that leverages the estimated PI labels to
regularize a node classification model for robust node classification.
Extensive experiments on different datasets and GNN architectures demonstrate
the effectiveness of PI-GNN, yielding a promising improvement over the
state-of-the-art methods. Code is publicly available at
https://github.com/TianBian95/pi-gnn.
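As a rough illustration of the recipe described in the abstract, the sketch below estimates PI labels from confident model predictions and adds a pairwise term next to the pointwise cross-entropy on the noisy labels. It is a minimal PyTorch sketch, not the released PI-GNN code: the two-layer GCN backbone, the helper names (`estimate_pi_labels`, `train_step`), the confidence threshold `tau`, and the loss weight `lam` are all assumptions.
```python
# Minimal sketch (assumed details, not the authors' code): estimate PI labels
# from confident predictions and use them as a pairwise regularizer next to
# the pointwise cross-entropy on the noisy labels.
import torch
import torch.nn.functional as F

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_norm):           # a_norm: dense normalized adjacency [N, N]
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)         # node logits [N, C]

def estimate_pi_labels(logits, edge_index, tau=0.9):
    """Confidence-aware PI labels for node pairs (here: the graph's edges):
    1 if both endpoints are confidently predicted to share a class, 0 if
    confidently different; low-confidence pairs are masked out."""
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)
    src, dst = edge_index
    keep = (conf[src] > tau) & (conf[dst] > tau)
    pi = (pred[src] == pred[dst]).float()
    return pi, keep

def train_step(model, optimizer, x, a_norm, edge_index, noisy_y, train_mask, lam=1.0):
    model.train()
    optimizer.zero_grad()
    logits = model(x, a_norm)
    # Pointwise term: ordinary cross-entropy against the noisy labels.
    loss = F.cross_entropy(logits[train_mask], noisy_y[train_mask])
    # Pairwise term: PI labels are estimated without gradient (a crude stand-in
    # for the decoupled schedule) and supervise the similarity of the two
    # endpoints' predicted class distributions.
    with torch.no_grad():
        pi, keep = estimate_pi_labels(logits, edge_index)
    if keep.any():
        src, dst = edge_index
        p = logits.softmax(dim=-1)
        sim = F.cosine_similarity(p[src], p[dst], dim=-1).clamp(1e-6, 1 - 1e-6)
        loss = loss + lam * F.binary_cross_entropy(sim[keep], pi[keep])
    loss.backward()
    optimizer.step()
    return loss.item()
```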
Related papers
- Learning on Graphs with Out-of-Distribution Nodes [33.141867473074264]
Graph Neural Networks (GNNs) are state-of-the-art models for performing prediction tasks on graphs.
This work defines the problem of graph learning with out-of-distribution nodes.
We propose Out-of-Distribution Graph Attention Network (OODGAT), a novel GNN model which explicitly models the interaction between different kinds of nodes.
- Learning on Graphs under Label Noise [5.909452203428086]
We develop a novel approach dubbed Consistent Graph Neural Network (CGNN) to solve the problem of learning on graphs with label noise.
Specifically, we employ graph contrastive learning as a regularization term, which encourages two augmented views of each node to have consistent representations.
To detect noisy labels on the graph, we present a sample selection technique based on the homophily assumption.
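A minimal sketch of these two ingredients, in plain PyTorch rather than the CGNN implementation: a consistency term between two randomly augmented views of each node, and a homophily-based score that flags labeled nodes whose label disagrees with most of their labeled neighbors. The dropout-based augmentation, the `min_agree` threshold, and the function names are assumptions.
```python
# Illustrative sketch (assumed details): contrastive consistency regularization
# plus a homophily-based detector for suspicious training labels.
import torch
import torch.nn.functional as F

def augment(x, drop_p=0.2):
    # Random feature dropout as a cheap stand-in for graph augmentation.
    return F.dropout(x, p=drop_p, training=True)

def consistency_loss(encoder, x, a_norm):
    # Two stochastic views of the same nodes should map to similar embeddings;
    # `encoder` is any GNN taking (features, normalized adjacency).
    z1 = encoder(augment(x), a_norm)
    z2 = encoder(augment(x), a_norm)
    return (1.0 - F.cosine_similarity(z1, z2, dim=-1)).mean()

def homophily_suspect_mask(edge_index, noisy_y, train_mask, min_agree=0.5):
    """Flag labeled nodes whose label agrees with fewer than `min_agree` of
    their labeled neighbors -- a rough homophily-based noise detector."""
    n = noisy_y.size(0)
    src, dst = edge_index
    both = (train_mask[src] & train_mask[dst]).float()
    agree = (noisy_y[src] == noisy_y[dst]).float() * both
    deg = torch.zeros(n, device=noisy_y.device).index_add_(0, src, both)
    agree_sum = torch.zeros(n, device=noisy_y.device).index_add_(0, src, agree)
    ratio = agree_sum / deg.clamp(min=1.0)
    return train_mask & (deg > 0) & (ratio < min_agree)
```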
- A Variational Edge Partition Model for Supervised Graph Representation Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into communities, the community-specific GNNs, and a GNN-based predictor that combines them for the final classification task.
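A deliberately simplified, non-variational sketch of this edge-partition idea: a small network softly assigns each edge to K communities, each community induces its own weighted adjacency and GNN, and the community-specific outputs are summed for classification. The class name, the choice of `k`, and dropping the variational inference step are assumptions made for brevity.
```python
# Simplified sketch (assumed details, variational inference omitted).
import torch
import torch.nn.functional as F

class CommunityPartitionGNN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes, k=4):
        super().__init__()
        self.k = k
        self.edge_scorer = torch.nn.Linear(2 * in_dim, k)     # soft edge-to-community assignment
        self.community_gnns = torch.nn.ModuleList(
            [torch.nn.Linear(in_dim, hid_dim) for _ in range(k)])
        self.classifier = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, edge_index):
        n = x.size(0)
        src, dst = edge_index
        # Each edge distributes its weight over the K communities.
        assign = self.edge_scorer(torch.cat([x[src], x[dst]], dim=-1)).softmax(dim=-1)
        h = 0
        for c in range(self.k):
            a_c = torch.zeros(n, n, device=x.device)
            a_c[src, dst] = assign[:, c]                       # community-c weighted adjacency
            a_c = a_c + torch.eye(n, device=x.device)          # self-loops
            h = h + F.relu(a_c @ self.community_gnns[c](x))    # community-specific propagation
        return self.classifier(h)
```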
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to apply explicit weights during information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
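A hedged sketch of this kind of aggregation: neighbors are scored by dot-product attention against the target node, sorted by that score, truncated or padded to a fixed length `k`, and aggregated with a 1-D convolution so that each rank position receives its own learned weight. The fixed neighborhood size and the padding scheme are assumptions, not details from the paper.
```python
# Sketch of attention-sorted neighbor aggregation with a 1-D convolution
# (assumed details such as the fixed neighborhood size k).
import torch

class AttentionSortConv(torch.nn.Module):
    def __init__(self, in_dim, out_dim, k=8):
        super().__init__()
        self.k = k
        self.query = torch.nn.Linear(in_dim, in_dim)
        self.conv = torch.nn.Conv1d(in_dim, out_dim, kernel_size=k)  # one weight per rank position

    def forward(self, x, neighbor_lists):
        """x: [N, in_dim]; neighbor_lists: list of LongTensors of neighbor ids."""
        outputs = []
        for i, nbrs in enumerate(neighbor_lists):
            h = x[nbrs]                                   # [deg, in_dim]
            scores = h @ self.query(x[i])                 # attention score per neighbor
            order = scores.argsort(descending=True)[: self.k]
            seq = h[order]                                # top-k neighbors, sorted by score
            if seq.size(0) < self.k:                      # pad short neighborhoods with zeros
                pad = torch.zeros(self.k - seq.size(0), h.size(1), device=x.device)
                seq = torch.cat([seq, pad], dim=0)
            # Conv1d expects [batch, channels, length].
            out = self.conv(seq.t().unsqueeze(0)).squeeze(-1).squeeze(0)
            outputs.append(out)
        return torch.stack(outputs)
```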
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, our proposed CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation.
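One way to read this cycle, sketched under assumed details (the number of propagation steps, the confidence threshold `tau`): propagate the available labels over the graph to obtain confident pseudo-labels, refit the GNN on the augmented label set, and repeat.
```python
# Sketch of one label-propagation / GNN-update round (assumed details).
import torch
import torch.nn.functional as F

def label_propagation(a_norm, y, train_mask, n_classes, steps=10, alpha=0.9):
    """Propagate one-hot training labels over a normalized adjacency."""
    y0 = torch.zeros(y.size(0), n_classes, device=y.device)
    y0[train_mask] = F.one_hot(y[train_mask], n_classes).float()
    z = y0.clone()
    for _ in range(steps):
        z = alpha * (a_norm @ z) + (1.0 - alpha) * y0
    return z

def cycprop_round(model, optimizer, x, a_norm, y, train_mask, n_classes, tau=0.9):
    # (1) Label propagation augments the supervision with confident pseudo-labels.
    with torch.no_grad():
        soft = label_propagation(a_norm, y, train_mask, n_classes)
        probs = soft / soft.sum(dim=-1, keepdim=True).clamp(min=1e-9)
        conf, pseudo = probs.max(dim=-1)
        aug_mask = train_mask | (conf > tau)
        targets = torch.where(train_mask, y, pseudo)
    # (2) The GNN is refit on the augmented label set; repeating both steps
    # gives the cyclic, mutually reinforcing loop described above.
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x, a_norm)[aug_mask], targets[aug_mask])
    loss.backward()
    optimizer.step()
    return loss.item()
```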
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between label propagation (LPA) and graph convolution (GCN) in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
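A bare-bones sketch of one such unification: a learnable weight per edge parameterizes a shared adjacency, a GCN step uses it for classification, and a label-propagation step on the same adjacency is penalized for mispredicting the training labels, which makes the edge weights label-aware. The sigmoid parameterization and the single propagation step are simplifying assumptions, not the paper's exact model.
```python
# Sketch of jointly training learnable edge weights with a GCN, regularized
# by a label-propagation loss on the same weighted adjacency (assumed details).
import torch
import torch.nn.functional as F

class WeightedGCNWithLPA(torch.nn.Module):
    def __init__(self, in_dim, n_classes, n_edges):
        super().__init__()
        self.edge_weight = torch.nn.Parameter(torch.ones(n_edges))  # one learnable weight per edge
        self.lin = torch.nn.Linear(in_dim, n_classes)

    def weighted_adj(self, edge_index, n):
        src, dst = edge_index
        a = torch.zeros(n, n, device=self.edge_weight.device)
        a[src, dst] = torch.sigmoid(self.edge_weight)               # keep weights in (0, 1)
        a = a + torch.eye(n, device=a.device)                       # self-loops
        return a / a.sum(dim=1, keepdim=True)                       # row-normalize

    def forward(self, x, edge_index, y_onehot):
        # y_onehot: one-hot labels for labeled nodes, zero rows elsewhere.
        a = self.weighted_adj(edge_index, x.size(0))
        gcn_logits = a @ self.lin(x)      # one GCN propagation step
        lpa_probs = a @ y_onehot          # one label-propagation step with the same weights
        return gcn_logits, lpa_probs

def joint_loss(gcn_logits, lpa_probs, y, train_mask, lam=1.0):
    ce = F.cross_entropy(gcn_logits[train_mask], y[train_mask])
    # LPA term: propagation with the learned weights should recover the training
    # labels, which pushes edge weights toward same-label connections.
    lpa = F.nll_loss(lpa_probs[train_mask].clamp(min=1e-9).log(), y[train_mask])
    return ce + lam * lpa
```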
- Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning (GIL) framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.