Resist Label Noise with PGM for Graph Neural Networks
- URL: http://arxiv.org/abs/2311.02116v1
- Date: Fri, 3 Nov 2023 02:47:06 GMT
- Title: Resist Label Noise with PGM for Graph Neural Networks
- Authors: Qingqing Ge, Jianxiang Yu, Zeyuan Zhao and Xiang Li
- Abstract summary: We propose a novel probabilistic graphical model (PGM) based framework, LNP.
Given a noisy label set and a clean label set, our goal is to maximize the likelihood of labels in the clean set.
We show that LNP achieves strong performance in high noise-rate situations.
- Score: 4.566850249315913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While robust graph neural networks (GNNs) have been widely studied for graph
perturbation and attack, those for label noise have received significantly less
attention. Most existing methods heavily rely on the label smoothness
assumption to correct noisy labels, which adversely affects their performance
on heterophilous graphs. Further, they generally perform poorly in high
noise-rate scenarios. To address these problems, in this paper, we propose a
novel probabilistic graphical model (PGM) based framework LNP. Given a noisy
label set and a clean label set, our goal is to maximize the likelihood of
labels in the clean set. We first present LNP-v1, which generates clean labels
based on graphs only in the Bayesian network. To further leverage the
information of clean labels in the noisy label set, we put forward LNP-v2,
which incorporates the noisy label set into the Bayesian network to generate
clean labels. The generative process can then be used to predict labels for
unlabeled nodes. We conduct extensive experiments to show the robustness of LNP
on varying noise types and rates, and also on graphs with different levels of
heterophily. In particular, we show that LNP achieves strong performance in
high noise-rate situations.
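The abstract does not spell out LNP's generative process, but its core objective (maximizing the likelihood of clean labels given noisy ones) can be illustrated with a minimal Bayes-rule sketch. The uniform prior, symmetric noise transition matrix, and the `clean_label_posterior` helper below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def clean_label_posterior(noisy_label, prior, T):
    """Posterior P(clean=c | noisy=y) via Bayes' rule.

    prior: P(clean=c), shape (C,)
    T:     noise transition matrix, T[c, y] = P(noisy=y | clean=c)
    """
    joint = prior * T[:, noisy_label]   # P(clean=c, noisy=y), up to normalization
    return joint / joint.sum()          # normalize over the clean classes c

# Toy setup: 3 classes, uniform prior, 30% symmetric flip noise
C = 3
prior = np.full(C, 1.0 / C)
eps = 0.3
T = np.full((C, C), eps / (C - 1))  # off-diagonal: flip uniformly to a wrong class
np.fill_diagonal(T, 1.0 - eps)      # diagonal: label kept with prob 1 - eps

post = clean_label_posterior(noisy_label=0, prior=prior, T=T)
```

Under symmetric noise the posterior simply concentrates on the observed class (here P(clean=0 | noisy=0) = 0.7); the interesting cases arise when the prior is non-uniform or the transition matrix is class- or instance-dependent.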
Related papers
- Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning [8.387189407144403]
Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (a partial label).
Noisy PLL (NPLL) relaxes this constraint by allowing some partial labels not to contain the true label, enhancing the practicality of the problem.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm.
arXiv Detail & Related papers (2024-02-07T13:32:47Z) - Resurrecting Label Propagation for Graphs with Heterophily and Label Noise [40.11022005996222]
Label noise is a common challenge in large datasets, as it can significantly degrade the generalization ability of deep neural networks.
We study graph label noise in the context of arbitrary heterophily, with the aim of rectifying noisy labels and assigning labels to previously unlabeled nodes.
$R2LP$ is an iterative algorithm with three steps: (1) reconstruct the graph to recover the homophily property, (2) utilize label propagation to rectify the noisy labels, and (3) select high-confidence labels to retain for the next iteration.
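The summary does not give $R2LP$'s exact propagation scheme, but step (2) can be illustrated with a generic label-propagation iteration in the style of Zhou et al.'s "local and global consistency" method; the toy graph, the seed choices, and the `alpha` value below are illustrative assumptions:

```python
import numpy as np

def label_propagation(A, Y, alpha=0.9, iters=50):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y on a graph.

    A: (n, n) adjacency matrix; Y: (n, C) one-hot seed labels (zero rows = unlabeled).
    S is the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}.
    """
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y  # diffuse, then pull back to seeds
    return F

# Toy homophilous graph: two triangles joined by a single edge, one seed per class
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0   # node 0 seeded as class 0
Y[5, 1] = 1.0   # node 5 seeded as class 1
F = label_propagation(A, Y)
pred = F.argmax(axis=1)   # each triangle adopts its seed's class
```

This is why step (1) matters: plain propagation assumes homophily, so on a heterophilous graph the adjacency must first be reconstructed before labels are diffused.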
arXiv Detail & Related papers (2023-10-25T11:28:26Z) - Label-Retrieval-Augmented Diffusion Models for Learning from Noisy
Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z) - BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise
Learning [113.8799653759137]
We introduce a novel label noise type called BadLabel, which can significantly degrade the performance of existing label-noise learning (LNL) algorithms.
BadLabel is crafted based on the label-flipping attack against standard classification.
We propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
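BadLabel's adversarial flipping procedure is not detailed in this summary, but as a baseline point of comparison, here is a sketch of plain symmetric label-flipping noise, the simplest noise type LNL methods are usually evaluated against; the function name and the 30% rate are illustrative choices:

```python
import numpy as np

def flip_labels(y, num_classes, rate, rng):
    """Inject symmetric label-flipping noise: with probability `rate`,
    replace a label with a uniformly chosen *different* class."""
    y = np.asarray(y).copy()
    flip = rng.random(len(y)) < rate
    # Offsets in [1, num_classes) guarantee a flipped label always changes
    offsets = rng.integers(1, num_classes, size=len(y))
    y[flip] = (y[flip] + offsets[flip]) % num_classes
    return y

rng = np.random.default_rng(0)
y = np.zeros(1000, dtype=int)
y_noisy = flip_labels(y, num_classes=10, rate=0.3, rng=rng)
frac_flipped = (y_noisy != y).mean()   # close to the requested rate
```

BadLabel differs from this in that its flips are crafted adversarially against the classifier rather than drawn uniformly, which is what makes it hard for methods that separate clean from noisy samples by loss value.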
arXiv Detail & Related papers (2023-05-28T06:26:23Z) - Robust Training of Graph Neural Networks via Noise Governance [27.767913371777247]
Graph Neural Networks (GNNs) have become widely used models for semi-supervised learning.
In this paper, we consider an important yet challenging scenario where labels on nodes of graphs are not only noisy but also scarce.
We propose a novel RTGNN framework that achieves better robustness by learning to explicitly govern label noise.
arXiv Detail & Related papers (2022-11-12T09:25:32Z) - Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise degrades the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z) - NRGNN: Learning a Label Noise-Resistant Graph Neural Network on Sparsely
and Noisily Labeled Graphs [20.470934944907608]
Graph Neural Networks (GNNs) have achieved promising results for semi-supervised learning tasks on graphs such as node classification.
Many real-world graphs are often sparsely and noisily labeled, which could significantly degrade the performance of GNNs.
We propose to develop a label noise-resistant GNN for semi-supervised node classification.
arXiv Detail & Related papers (2021-06-08T22:12:44Z) - Tackling Instance-Dependent Label Noise via a Universal Probabilistic
Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z) - Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z) - A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.