Combating Bilateral Edge Noise for Robust Link Prediction
- URL: http://arxiv.org/abs/2311.01196v1
- Date: Thu, 2 Nov 2023 12:47:49 GMT
- Title: Combating Bilateral Edge Noise for Robust Link Prediction
- Authors: Zhanke Zhou, Jiangchao Yao, Jiaxu Liu, Xiawei Guo, Quanming Yao, Li
He, Liang Wang, Bo Zheng, Bo Han
- Abstract summary: We propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies.
Experiments on six datasets and three GNNs with diverse noisy scenarios verify the effectiveness of our RGIB instantiations.
- Score: 56.43882298843564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although link prediction on graphs has achieved great success with the
development of graph neural networks (GNNs), its robustness under edge noise
remains under-investigated. To close this gap, we first conduct an empirical
study showing that edge noise bilaterally perturbs both the input topology and
the target labels, yielding severe performance degradation and
representation collapse. To address this dilemma, we propose an
information-theory-guided principle, Robust Graph Information Bottleneck
(RGIB), to extract reliable supervision signals and avoid representation
collapse. Different from the basic information bottleneck, RGIB further
decouples and balances the mutual dependence among graph topology, target
labels, and representation, building new learning objectives for robust
representation against the bilateral noise. Two instantiations, RGIB-SSL and
RGIB-REP, are explored to leverage the merits of different methodologies, i.e.,
self-supervised learning and data reparameterization, for implicit and explicit
data denoising, respectively. Extensive experiments on six datasets and three
GNNs with diverse noisy scenarios verify the effectiveness of our RGIB
instantiations. The code is publicly available at:
https://github.com/tmlr-group/RGIB.
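The abstract frames RGIB as balancing the mutual dependence among the (noisy) graph topology, the (noisy) target labels, and the learned representation, with RGIB-SSL realizing this through self-supervised learning. The snippet below is a minimal, illustrative sketch of what such a balanced objective could look like in PyTorch; the loss weights, the cross-entropy surrogate for the supervision term, and the two-view alignment surrogate are assumptions for illustration, not the authors' implementation (see https://github.com/tmlr-group/RGIB for the official code).
```python
# Minimal sketch of an RGIB-SSL-style objective (illustrative assumptions only,
# not the official RGIB implementation).
import torch
import torch.nn.functional as F

def rgib_ssl_style_loss(h_view1, h_view2, logits, noisy_labels,
                        lambda_sup=1.0, lambda_align=0.1):
    """Trade off label supervision against agreement between two augmented views.

    h_view1, h_view2 : edge representations from two augmented graph views
    logits           : link-prediction scores for the candidate edges
    noisy_labels     : possibly noisy 0/1 edge labels
    """
    # Supervision term: retain information about the target labels,
    # approximated here by binary cross-entropy on the link logits.
    sup = F.binary_cross_entropy_with_logits(logits, noisy_labels.float())

    # Alignment term: encourage the two augmented views to produce consistent
    # representations, acting as an implicit regularizer against bilateral noise.
    z1 = F.normalize(h_view1, dim=-1)
    z2 = F.normalize(h_view2, dim=-1)
    align = 1.0 - (z1 * z2).sum(dim=-1).mean()

    return lambda_sup * sup + lambda_align * align

# Example usage with random tensors (shapes are illustrative).
if __name__ == "__main__":
    n_edges, dim = 128, 64
    h1, h2 = torch.randn(n_edges, dim), torch.randn(n_edges, dim)
    logits = torch.randn(n_edges)
    labels = torch.randint(0, 2, (n_edges,))
    print(rgib_ssl_style_loss(h1, h2, logits, labels))
```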
Related papers
- NoisyGL: A Comprehensive Benchmark for Graph Neural Networks under Label Noise [21.65452861777135]
Graph Neural Networks (GNNs) exhibit strong potential in the node classification task through a message-passing mechanism.
Label noise is common in real-world graph data, negatively impacting GNNs by propagating incorrect information during training.
We introduce NoisyGL, the first comprehensive benchmark for graph neural networks under label noise.
arXiv Detail & Related papers (2024-06-06T17:45:00Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Graph Neural Network-based Early Bearing Fault Detection [0.18275108630751835]
A novel graph neural network-based fault detection method is proposed.
It builds a bridge between AI and mechanical systems running in the real world.
We find that the proposed method can successfully detect faulty objects that are mixed in the normal object region.
arXiv Detail & Related papers (2022-04-24T08:54:55Z)
- Toward Enhanced Robustness in Unsupervised Graph Representation Learning: A Graph Information Bottleneck Perspective [48.01303380298564]
We propose a novel unbiased robust UGRL method called Robust Graph Information Bottleneck (RGIB).
Our RGIB attempts to learn robust node representations against adversarial perturbations by preserving the original information in the benign graph while eliminating the adversarial information in the adversarial graph.
arXiv Detail & Related papers (2022-01-21T06:26:50Z)
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)