CSGNN: Conquering Noisy Node labels via Dynamic Class-wise Selection
- URL: http://arxiv.org/abs/2311.11473v2
- Date: Thu, 14 Dec 2023 17:17:52 GMT
- Title: CSGNN: Conquering Noisy Node labels via Dynamic Class-wise Selection
- Authors: Yifan Li, Zhen Tan, Kai Shu, Zongsheng Cao, Yu Kong, Huan Liu
- Abstract summary: We introduce a novel Class-wise Selection for Graph Neural Networks, dubbed CSGNN.
To tackle the class imbalance issue, we introduce a dynamic class-wise selection mechanism, leveraging the clustering technique to identify clean nodes.
To alleviate the problem of noisy labels, built on the concept of the memorization effect, CSGNN prioritizes learning from clean nodes before noisy ones.
- Score: 45.83801634434111
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as a powerful tool for
representation learning on graphs, but they often suffer from overfitting and
label noise issues, especially when the data is scarce or imbalanced. Different
from the paradigm of previous methods that rely on single-node confidence, in
this paper, we introduce a novel Class-wise Selection for Graph Neural
Networks, dubbed CSGNN, which employs a neighbor-aggregated latent space to
adaptively select reliable nodes across different classes. Specifically, 1) to
tackle the class imbalance issue, we introduce a dynamic class-wise selection
mechanism, leveraging the clustering technique to identify clean nodes based on
the neighbor-aggregated confidences. In this way, our approach avoids the
biased sampling that is common with global threshold techniques.
2) To alleviate the problem of noisy labels, built on the concept of the
memorization effect, CSGNN prioritizes learning from clean nodes before noisy
ones, thereby iteratively enhancing model performance while mitigating label
noise. Through extensive experiments, we demonstrate that CSGNN outperforms
state-of-the-art methods in terms of both effectiveness and robustness.
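The abstract describes, but does not spell out, the selection procedure. Below is a minimal sketch of the idea, assuming a two-component Gaussian mixture over per-node training losses as the per-class clean/noisy split; the function and variable names (`select_clean_nodes_per_class`, `agg_logits`) are illustrative placeholders, not CSGNN's actual implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def select_clean_nodes_per_class(agg_logits, labels, train_idx):
    """Class-wise clean-node selection (illustrative sketch, not CSGNN itself).

    agg_logits: [N, C] neighbor-aggregated logits from a GNN
    labels:     [N] possibly noisy integer labels
    train_idx:  [T] indices of labeled training nodes
    Returns the subset of train_idx flagged as likely clean.
    """
    # Small-loss nodes tend to carry clean labels (memorization effect).
    losses = F.cross_entropy(agg_logits[train_idx], labels[train_idx],
                             reduction="none").detach().cpu().numpy()
    clean = []
    for c in torch.unique(labels[train_idx]).tolist():
        mask = (labels[train_idx] == c).cpu().numpy()
        cls_losses = losses[mask].reshape(-1, 1)
        cls_nodes = train_idx[torch.from_numpy(mask)]
        if len(cls_losses) < 2:            # too few nodes to cluster
            clean.extend(cls_nodes.tolist())
            continue
        # A two-component mixture per class yields a dynamic, class-specific
        # threshold instead of one global cutoff shared across classes.
        gmm = GaussianMixture(n_components=2, random_state=0).fit(cls_losses)
        low = int(np.argmin(gmm.means_.ravel()))   # low-loss = likely clean
        p_clean = gmm.predict_proba(cls_losses)[:, low]
        clean.extend(cls_nodes[torch.from_numpy(p_clean > 0.5)].tolist())
    return torch.tensor(clean, dtype=torch.long)
```

A training loop in the spirit of the memorization effect would rerun this selector every few epochs, fitting the model on the selected nodes first and only gradually admitting the rest.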
Related papers
- BANGS: Game-Theoretic Node Selection for Graph Self-Training [39.70859692050266]
Graph self-training is a semi-supervised learning method that iteratively selects a set of unlabeled data to retrain the underlying graph neural network (GNN) model.
We propose BANGS, a novel framework that unifies the labeling strategy with conditional mutual information as the objective of node selection.
Our approach, grounded in game theory, selects nodes in a combinatorial fashion and provides theoretical guarantees for robustness under a noisy objective; the naive per-node selection it improves on is sketched below for contrast.
arXiv Detail & Related papers (2024-10-12T03:31:28Z)
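BANGS's conditional-mutual-information objective cannot be reconstructed from the summary alone; for contrast, here is the naive self-training step it improves on, where nodes are selected independently by model confidence. The PyG-style `model(data.x, data.edge_index)` call and all names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def self_training_step(model, data, labeled_idx, unlabeled_idx, k=100):
    """One round of naive confidence-based graph self-training.

    Picks the k unlabeled nodes the model is most confident about,
    pseudo-labels them, and enlarges the training set. BANGS replaces
    this independent top-k rule with a game-theoretic, combinatorial
    selection that accounts for how chosen nodes influence each other.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(data.x, data.edge_index), dim=1)
    conf, pseudo = probs[unlabeled_idx].max(dim=1)
    top = conf.topk(min(k, len(unlabeled_idx))).indices
    picked = unlabeled_idx[top]
    labels = data.y.clone()
    labels[picked] = pseudo[top]        # pseudo-labels may be wrong (noisy)
    new_labeled = torch.cat([labeled_idx, picked])
    remaining = unlabeled_idx[~torch.isin(unlabeled_idx, picked)]
    return labels, new_labeled, remaining
```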
- Cluster-based Graph Collaborative Filtering [55.929052969825825]
Graph Convolution Networks (GCNs) have succeeded in learning user and item representations for recommendation systems.
Most existing GCN-based methods overlook the multiple interests of users while performing high-order graph convolution.
We propose a novel GCN-based recommendation model, termed Cluster-based Graph Collaborative Filtering (ClusterGCF).
arXiv Detail & Related papers (2024-04-16T07:05:16Z)
- Robust Training of Graph Neural Networks via Noise Governance [27.767913371777247]
Graph Neural Networks (GNNs) have become widely used models for semi-supervised learning.
In this paper, we consider an important yet challenging scenario where labels on nodes of graphs are not only noisy but also scarce.
We propose a novel RTGNN framework that achieves better robustness by learning to explicitly govern label noise.
arXiv Detail & Related papers (2022-11-12T09:25:32Z)
- Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge Adaptation framework via reinforcement learning for Dynamic Graph Neural Networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves the state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class, but also explicitly encodes the label semantics into the learning process of GNNs; one possible construction of such label centers is sketched below.
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
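The summary above says only that each label acts as a virtual center for its intra-class nodes. One plausible reading, sketched below purely as an assumption, appends one extra node per class and wires it to that class's training nodes so that message passing pulls same-class representations together; `add_label_nodes` and the mean-feature initialization are hypothetical, not the paper's design.

```python
import torch

def add_label_nodes(x, edge_index, y, train_idx, num_classes):
    """Append one virtual 'label node' per class (illustrative sketch).

    x:          [N, F] node features
    edge_index: [2, E] COO edge list
    Each label node connects bidirectionally to the training nodes of
    its class; its feature starts as the mean of those members.
    """
    n = x.size(0)
    label_feats, new_edges = [], []
    for c in range(num_classes):
        members = train_idx[y[train_idx] == c]
        feat = x[members].mean(dim=0) if len(members) else x.new_zeros(x.size(1))
        label_feats.append(feat)
        lid = n + c                              # index of this label node
        for m in members.tolist():
            new_edges += [(m, lid), (lid, m)]    # bidirectional edges
    x_aug = torch.cat([x, torch.stack(label_feats)], dim=0)
    if new_edges:
        extra = torch.tensor(new_edges, dtype=torch.long).t()
        edge_index = torch.cat([edge_index, extra], dim=1)
    return x_aug, edge_index
```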
- Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions [123.07967420310796]
This paper proposes a pairwise framework for noisy node classification on graphs.
PI-GNN relies on pairwise interactions (PI) as a primary learning proxy in addition to the pointwise learning from the noisy node class labels.
Our proposed framework PI-GNN contributes two novel components: (1) a confidence-aware PI estimation model that adaptively estimates the PI labels, and (2) a decoupled training approach that leverages the estimated PI labels; the pairwise proxy itself is sketched below.
arXiv Detail & Related papers (2021-06-14T14:23:08Z)
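PI-GNN's confidence-aware estimation model is not detailed above, but the pairwise idea itself is easy to sketch: supervise a scorer on whether two nodes share a class, a signal that degrades more gracefully than pointwise labels because a single mislabeled node corrupts only the pairs it appears in. The dot-product scorer and the raw label-agreement targets below are simplifying stand-ins for PI-GNN's estimated PI labels.

```python
import torch
import torch.nn.functional as F

def pairwise_interaction_loss(embeddings, noisy_labels, pairs):
    """Auxiliary pairwise loss over sampled node pairs (illustrative).

    embeddings:   [N, D] node embeddings from a GNN
    noisy_labels: [N] observed (possibly noisy) class labels
    pairs:        [P, 2] sampled node-index pairs
    Target is 1 if the two observed labels agree, else 0.
    """
    zi = embeddings[pairs[:, 0]]
    zj = embeddings[pairs[:, 1]]
    logits = (zi * zj).sum(dim=1)                 # dot-product pair scorer
    target = (noisy_labels[pairs[:, 0]] ==
              noisy_labels[pairs[:, 1]]).float()  # PI-GNN estimates these
    return F.binary_cross_entropy_with_logits(logits, target)
```

In a decoupled setup this term would simply be added to the pointwise objective, e.g. `loss = ce_loss + lam * pairwise_interaction_loss(z, y_noisy, pairs)`.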
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, our proposed CycProp updates the node embeddings learned by GNN module with the augmented information by label propagation.
arXiv Detail & Related papers (2020-11-24T02:55:40Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some work shows that coupling is inferior to decoupling, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation Then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN; the bare propagate-then-train pipeline it builds on is sketched below.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
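The decoupled GCN discussed above separates parameter-free propagation from graph-free training, which is straightforward to make concrete. The SGC-style sketch below precomputes k hops of symmetrically normalized propagation and then fits a linear classifier; PTA's adaptive weighting is deliberately omitted, and the dense adjacency is for clarity only.

```python
import torch
import torch.nn.functional as F

def propagate_then_train(x, edge_index, y, train_idx, k=2, epochs=200):
    """Decoupled GCN in two stages (illustrative sketch).

    Stage 1: propagate features k hops with D^{-1/2}(A + I)D^{-1/2};
             no learnable parameters are involved.
    Stage 2: train a plain linear classifier on the propagated
             features, with no further use of the graph.
    """
    n = x.size(0)
    a = torch.zeros(n, n)
    a[edge_index[0], edge_index[1]] = 1.0
    a = a + torch.eye(n)                       # add self-loops
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
    for _ in range(k):                         # stage 1: propagation
        x = a_norm @ x
    clf = torch.nn.Linear(x.size(1), int(y.max()) + 1)
    opt = torch.optim.Adam(clf.parameters(), lr=0.01)
    for _ in range(epochs):                    # stage 2: graph-free training
        opt.zero_grad()
        loss = F.cross_entropy(clf(x[train_idx]), y[train_idx])
        loss.backward()
        opt.step()
    return clf, x
```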
- GANs for learning from very high class conditional noisy labels [1.6516902135723865]
We use Generative Adversarial Networks (GANs) to design a class conditional label noise (CCN) robust scheme for binary classification.
It first generates a set of correctly labelled data points from noisy labelled data and 0.1% or 1% clean labels.
arXiv Detail & Related papers (2020-10-19T15:01:11Z)