Uncertainty-Aware Robust Learning on Noisy Graphs
- URL: http://arxiv.org/abs/2306.08210v1
- Date: Wed, 14 Jun 2023 02:45:14 GMT
- Title: Uncertainty-Aware Robust Learning on Noisy Graphs
- Authors: Shuyi Chen, Kaize Ding, Shixiang Zhu
- Abstract summary: This paper proposes a novel uncertainty-aware graph learning framework motivated by distributionally robust optimization.
Specifically, we use a graph neural network-based encoder to embed the node features and find the optimal node embeddings by minimizing the worst-case risk through a minimax formulation.
Such an uncertainty-aware learning process leads to improved node representations and a more robust graph predictive model.
- Score: 16.66112191539017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks have shown impressive capabilities in solving various
graph learning tasks, particularly excelling in node classification. However,
their effectiveness can be hindered by the challenges arising from the
widespread existence of noisy measurements associated with the topological or
nodal information present in real-world graphs. These inaccuracies in
observations can corrupt the crucial patterns within the graph data, ultimately
resulting in undesirable performance in practical applications. To address
these issues, this paper proposes a novel uncertainty-aware graph learning
framework motivated by distributionally robust optimization. Specifically, we
use a graph neural network-based encoder to embed the node features and find
the optimal node embeddings by minimizing the worst-case risk through a minimax
formulation. Such an uncertainty-aware learning process leads to improved node
representations and a more robust graph predictive model that effectively
mitigates the impact of uncertainty arising from data noise. Our experimental
results show that the proposed framework achieves superior predictive
performance compared to the state-of-the-art baselines under various noisy
settings.
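To make the minimax formulation concrete: the objective can be read as minimizing, over model parameters, the maximum expected loss over an uncertainty set of data distributions around the observations. The paper's code is not reproduced here; the sketch below is a minimal, hypothetical PyTorch instantiation in which the uncertainty set is approximated by a norm-bounded perturbation of the node features and the inner maximization by a few steps of projected gradient ascent. All names (GCNEncoder, worst_case_step, rho, inner_steps) are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of uncertainty-aware minimax training.
# Not the authors' code: the uncertainty set is approximated by an
# L2-bounded perturbation of node features, and the worst case is
# found by a few steps of projected gradient ascent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution H' = A_hat @ H @ W, where a_hat is the
    symmetrically normalized adjacency matrix with self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return self.lin(a_hat @ h)

class GCNEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.g1 = GCNLayer(in_dim, hid_dim)
        self.g2 = GCNLayer(hid_dim, n_classes)

    def forward(self, a_hat, x):
        return self.g2(a_hat, F.relu(self.g1(a_hat, x)))

def worst_case_step(model, opt, a_hat, x, y, train_mask,
                    rho=0.1, inner_steps=5, inner_lr=0.01):
    """One minimax update: ascend on a feature perturbation delta
    (inner max over the rho-ball), then descend on the model (outer min)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(inner_steps):
        loss = F.cross_entropy(model(a_hat, x + delta)[train_mask],
                               y[train_mask])
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += inner_lr * grad
            norm = delta.norm()
            if norm > rho:          # project back onto the rho-ball
                delta *= rho / norm
    opt.zero_grad()
    worst = F.cross_entropy(model(a_hat, x + delta.detach())[train_mask],
                            y[train_mask])
    worst.backward()
    opt.step()
    return worst.item()
```

A fully distributionally robust variant would perturb the data distribution itself (e.g., within a Wasserstein ball); the feature-perturbation proxy above is only the simplest stand-in for that inner maximization.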
Related papers
- GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective of graph contrastive learning, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
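As a rough illustration of the distributional-embedding idea (not this paper's implementation; the encoder name and shapes are assumptions): a stochastic encoder can output a per-node Gaussian mean and log-variance and sample embeddings with the reparameterization trick.

```python
# Hypothetical sketch: each node is represented by a diagonal Gaussian
# in latent space instead of a single deterministic vector.
import torch
import torch.nn as nn

class StochasticNodeEncoder(nn.Module):
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)        # per-node mean
        self.log_var = nn.Linear(in_dim, z_dim)   # per-node log-variance

    def forward(self, h):
        # h: node features or GNN hidden states, shape [N, in_dim]
        mu, log_var = self.mu(h), self.log_var(h)
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn_like(std)      # reparameterization trick
        return z, mu, log_var
```

Downstream, the samples z can feed a contrastive loss, while (mu, log_var) gives a per-node uncertainty estimate.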
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Bayesian Inductive Learner for Graph Resiliency under uncertainty [1.9254132307399257]
We propose a Bayesian graph neural network-based framework for identifying critical nodes in a large graph.
The fidelity of the framework and the computational savings it offers are illustrated.
arXiv Detail & Related papers (2020-12-26T07:22:29Z)
- Unsupervised Adversarially-Robust Representation Learning on Graphs [26.48111798048012]
Recent works have demonstrated that deep learning on graphs is vulnerable to adversarial attacks.
In this paper, we focus on the underlying problem of learning robust representations on graphs via mutual information.
arXiv Detail & Related papers (2020-12-04T09:29:16Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
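The filtering recipe above can be pictured as alternating min-max updates: an adversary learns to recover the sensitive attribute from node embeddings, and the encoder is trained to defeat it while preserving task accuracy. The sketch below is a generic stand-in, not the paper's method: plain cross-entropy replaces the total-variation and Wasserstein objectives, and every name in it is hypothetical.

```python
# Generic adversarial-obfuscation sketch; cross-entropy stands in for
# the paper's total-variation / Wasserstein losses.
import torch.nn.functional as F

def obfuscation_step(encoder, task_head, adversary, enc_opt, adv_opt,
                     a_hat, x, y_task, y_sensitive, lam=1.0):
    z = encoder(a_hat, x)  # node embeddings

    # 1) Adversary step: learn to predict the sensitive attribute from z.
    adv_opt.zero_grad()
    adv_loss = F.cross_entropy(adversary(z.detach()), y_sensitive)
    adv_loss.backward()
    adv_opt.step()

    # 2) Encoder step: solve the task while making the adversary fail.
    #    enc_opt should cover both encoder and task_head parameters.
    enc_opt.zero_grad()
    task_loss = F.cross_entropy(task_head(z), y_task)
    leak_loss = F.cross_entropy(adversary(z), y_sensitive)
    (task_loss - lam * leak_loss).backward()
    enc_opt.step()
    return task_loss.item(), adv_loss.item()
```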
arXiv Detail & Related papers (2020-09-28T17:55:04Z)