Uncertainty-Aware Robust Learning on Noisy Graphs
- URL: http://arxiv.org/abs/2306.08210v2
- Date: Thu, 13 Mar 2025 14:30:06 GMT
- Title: Uncertainty-Aware Robust Learning on Noisy Graphs
- Authors: Shuyi Chen, Kaize Ding, Shixiang Zhu
- Abstract summary: We propose a novel uncertainty-aware graph learning framework inspired by distributionally robust optimization. We use a graph neural network-based encoder to embed the node features and find the optimal node embeddings. Such an uncertainty-aware learning process leads to improved node representations and a more robust graph predictive model.
- Score: 22.848589361600382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have excelled in various graph learning tasks, particularly node classification. However, their performance is often hampered by noisy measurements in real-world graphs, which can corrupt critical patterns in the data. To address this, we propose a novel uncertainty-aware graph learning framework inspired by distributionally robust optimization. Specifically, we use a graph neural network-based encoder to embed the node features and find the optimal node embeddings by minimizing the worst-case risk through a minimax formulation. Such an uncertainty-aware learning process leads to improved node representations and a more robust graph predictive model that effectively mitigates the impact of uncertainty arising from data noise. Our experimental results demonstrate superior predictive performance over baselines across noisy scenarios.
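The worst-case risk minimization described in the abstract can be written as a standard distributionally robust objective. The following is a generic sketch of such a minimax formulation; the symbols (theta for the encoder parameters, P0 for the empirical data distribution, epsilon for the ambiguity radius) are our notation, not taken from the paper:

```latex
\min_{\theta} \; \max_{P \in \mathcal{B}_{\epsilon}(P_0)} \;
  \mathbb{E}_{(x, y) \sim P} \left[ \ell\left( f_{\theta}(x), y \right) \right]
```

Here f_theta is the GNN-based encoder with its classifier head, ell is the node classification loss, and B_epsilon(P0) is an ambiguity set of distributions within distance epsilon of the empirical distribution. Minimizing the worst case over this set is what hedges the learned embeddings against noise in the observed graph.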
Related papers
- BetaExplainer: A Probabilistic Method to Explain Graph Neural Networks [1.798554018133928]
Graph neural networks (GNNs) are powerful tools for conducting inference on graph data.
Many interpretable GNN methods exist, but they cannot quantify uncertainty in edge weights.
We propose BetaExplainer, which addresses these issues by using a sparsity-inducing prior to mask unimportant edges.
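As a rough illustration of a sparsity-inducing prior over an edge mask (our sketch; the class name, variational family, and hyperparameters are assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class BetaEdgeMask(nn.Module):
    """Hypothetical sketch: a variational Beta distribution per edge with a
    prior skewed toward 0 (prior_a < prior_b), pushing unimportant edges
    toward being masked out."""

    def __init__(self, num_edges, prior_a=0.5, prior_b=2.0):
        super().__init__()
        # Variational Beta parameters, stored in log-space for positivity.
        self.log_a = nn.Parameter(torch.zeros(num_edges))
        self.log_b = nn.Parameter(torch.zeros(num_edges))
        self.prior = torch.distributions.Beta(prior_a, prior_b)

    def forward(self):
        q = torch.distributions.Beta(self.log_a.exp(), self.log_b.exp())
        mask = q.rsample()  # per-edge keep probability in (0, 1)
        kl = torch.distributions.kl_divergence(q, self.prior).sum()
        return mask, kl     # the KL term acts as the sparsity regularizer
```

Training would multiply the mask into the GNN's edge weights and minimize the prediction loss plus the KL term; the learned Beta parameters then provide an uncertainty estimate for each edge's importance.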
arXiv Detail & Related papers (2024-12-16T16:45:26Z)
- GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
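A minimal sketch of the low-rank idea for a learnable graph structure (our illustration with assumed names; the paper's exact parameterization may differ):

```python
import torch
import torch.nn as nn

class LowRankAdjacency(nn.Module):
    """Illustrative low-rank parameterization A ~= sigmoid(U V^T): with
    rank r << n the structure has O(n * r) parameters instead of O(n^2),
    which is where the reduction in time complexity comes from."""

    def __init__(self, num_nodes, rank=16):
        super().__init__()
        self.U = nn.Parameter(0.01 * torch.randn(num_nodes, rank))
        self.V = nn.Parameter(0.01 * torch.randn(num_nodes, rank))

    def forward(self):
        logits = self.U @ self.V.T
        # Symmetrize for an undirected graph; entries land in (0, 1).
        return torch.sigmoid(0.5 * (logits + logits.T))
```

In a bi-level setup, an inner loop would fit the GNN weights on the current adjacency while an outer loop updates U and V against a validation objective.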
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector (see the sketch below).
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
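A minimal sketch of what a distributional (rather than deterministic) node embedding can look like, using a Gaussian latent with the reparameterization trick; the layer names and shapes are our assumptions:

```python
import torch
import torch.nn as nn

class GaussianNodeEmbedding(nn.Module):
    """Sketch: each node is represented by N(mu, sigma^2) in latent space
    instead of a single point; samples are drawn with reparameterization
    so the encoder stays differentiable."""

    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.mu_head = nn.Linear(in_dim, latent_dim)
        self.logvar_head = nn.Linear(in_dim, latent_dim)

    def forward(self, h):
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar  # logvar doubles as a per-node uncertainty signal
```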
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Deep Fraud Detection on Non-attributed Graph [61.636677596161235]
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
arXiv Detail & Related papers (2021-10-04T03:42:09Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
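As a generic illustration of the projected-gradient idea for discrete edges (our sketch, not the GraphMI implementation): relax the adjacency to continuous values so gradients exist, then project back onto the feasible box after each step.

```python
import torch

def projected_gradient_step(adj, loss_fn, lr=0.1):
    """One projected-gradient update on a relaxed adjacency matrix.
    Edge entries are treated as continuous values in [0, 1]; after the
    gradient step they are projected back onto that box."""
    adj = adj.detach().requires_grad_(True)
    loss_fn(adj).backward()
    with torch.no_grad():
        adj = (adj - lr * adj.grad).clamp(0.0, 1.0)  # step, then project
    return adj
```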
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Bayesian Inductive Learner for Graph Resiliency under uncertainty [1.9254132307399257]
We propose a Bayesian graph neural network-based framework for identifying critical nodes in a large graph.
The fidelity of the framework and the computational savings it offers are illustrated.
arXiv Detail & Related papers (2020-12-26T07:22:29Z)
- Unsupervised Adversarially-Robust Representation Learning on Graphs [26.48111798048012]
Recent works have demonstrated that deep learning on graphs is vulnerable to adversarial attacks.
In this paper, we focus on the underlying problem of learning robust representations on graphs via mutual information.
arXiv Detail & Related papers (2020-12-04T09:29:16Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
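A heavily simplified sketch of the adversarial-filtering game (the module names and plain linear heads are our assumptions; the actual framework uses total variation and Wasserstein distances rather than this cross-entropy stand-in):

```python
import torch
import torch.nn as nn

filter_net = nn.Linear(64, 64)  # locally filters node embeddings
task_head = nn.Linear(64, 7)    # downstream task classifier
adversary = nn.Linear(64, 2)    # tries to recover the sensitive attribute

def filter_objective(h, y_task, y_sensitive, lam=1.0):
    z = filter_net(h)
    task_loss = nn.functional.cross_entropy(task_head(z), y_task)
    leak_loss = nn.functional.cross_entropy(adversary(z), y_sensitive)
    # The filter keeps the task solvable while maximizing the adversary's
    # loss; the adversary is trained separately to minimize leak_loss.
    return task_loss - lam * leak_loss
```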
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
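A conceptual sketch of a meta-gradient: unroll a few training steps of a tiny surrogate model as a differentiable function of the (relaxed) adjacency, then differentiate the attack objective through the unrolled training. Everything here (the one-layer linear surrogate, step counts) is our simplification, not the paper's setup:

```python
import torch
import torch.nn.functional as F

def meta_gradient(adj, feats, labels, inner_steps=5, lr=0.1):
    """Gradient of the attacker's objective w.r.t. the adjacency, taken
    through `inner_steps` unrolled SGD steps of a one-layer surrogate."""
    adj = adj.detach().requires_grad_(True)
    w = torch.zeros(feats.size(1), int(labels.max()) + 1, requires_grad=True)
    for _ in range(inner_steps):
        loss = F.cross_entropy(adj @ feats @ w, labels)
        (grad_w,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - lr * grad_w  # unrolled step stays on the autograd tape
    attack_loss = F.cross_entropy(adj @ feats @ w, labels)
    return torch.autograd.grad(attack_loss, adj)[0]  # ascend this to attack
```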
arXiv Detail & Related papers (2019-02-22T09:20:05Z)