Trustworthy Graph Neural Networks: Aspects, Methods and Trends
- URL: http://arxiv.org/abs/2205.07424v2
- Date: Wed, 21 Feb 2024 09:54:52 GMT
- Title: Trustworthy Graph Neural Networks: Aspects, Methods and Trends
- Authors: He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei
- Abstract summary: Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
- Score: 115.84291569988748
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph neural networks (GNNs) have emerged as a series of competent graph
learning methods for diverse real-world scenarios, ranging from daily
applications like recommendation systems and question answering to cutting-edge
technologies such as drug discovery in life sciences and n-body simulation in
astrophysics. However, task performance is not the only requirement for GNNs.
Performance-oriented GNNs have exhibited potential adverse effects like
vulnerability to adversarial attacks, unexplainable discrimination against
disadvantaged groups, or excessive resource consumption in edge computing
environments. To avoid these unintentional harms, it is necessary to build
competent GNNs characterised by trustworthiness. To this end, we propose a
comprehensive roadmap to build trustworthy GNNs from the view of the various
computing technologies involved. In this survey, we introduce basic concepts
and comprehensively summarise existing efforts for trustworthy GNNs from six
aspects, including robustness, explainability, privacy, fairness,
accountability, and environmental well-being. Additionally, we highlight the
intricate cross-aspect relations between the above six aspects of trustworthy
GNNs. Finally, we present a thorough overview of trending directions for
facilitating the research and industrialisation of trustworthy GNNs.
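To make concrete what the "graph learning methods" surveyed here compute, the following is a minimal sketch of one message-passing layer with GCN-style symmetric normalisation. All names and the toy graph are illustrative, not taken from the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy graph: 3 nodes in a path (0-1-2), 2 input features, 2 output features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3, 2)            # simple one-hot-style node features
W = np.ones((2, 2))         # illustrative weight matrix
H_next = gcn_layer(A, H, W)
print(H_next.shape)         # (3, 2)
```

Each node's new embedding mixes its neighbours' features; trustworthiness issues such as adversarial vulnerability and bias propagation arise precisely because this mixing is driven by the (possibly perturbed or biased) input graph.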
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges [75.37448213291668]
This paper systematically reviews existing Graph Neural Networks (GNNs).
We first highlight the four key challenges faced by existing GNNs, paving the way for our exploration of real-world GNN models.
arXiv Detail & Related papers (2024-03-07T13:10:37Z)
- When Graph Neural Network Meets Causality: Opportunities, Methodologies and An Outlook [23.45046265345568]
Graph Neural Networks (GNNs) have emerged as powerful representation learning tools for capturing complex dependencies within diverse graph-structured data.
GNNs have raised serious concerns regarding their trustworthiness, including susceptibility to distribution shift, biases towards certain populations, and lack of explainability.
Since many GNN trustworthiness issues can be alleviated by causal learning, integrating causal learning techniques into GNNs has sparked numerous ground-breaking studies.
arXiv Detail & Related papers (2023-12-19T13:26:14Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study the novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
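A toy analogue of the constrained-connectivity idea above: aggregate neighbour embeddings only over edges that a domain-knowledge mask permits, rather than over the full graph. This is an illustrative sketch, not the paper's actual method; all names and values are made up.

```python
import numpy as np

def masked_propagation(A, H, allowed):
    """Mean-aggregate neighbour embeddings, restricted to edges
    permitted by a domain-knowledge mask."""
    A_eff = A * allowed                               # drop forbidden edges
    deg = np.maximum(A_eff.sum(axis=1, keepdims=True), 1.0)
    return (A_eff @ H) / deg                          # mean over allowed neighbours

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)                # fully connected triangle
allowed = np.array([[0, 1, 0],
                    [1, 0, 0],
                    [0, 0, 0]], dtype=float)          # only nodes 0 and 1 may exchange messages
H = np.array([[1.0], [3.0], [100.0]])
H_next = masked_propagation(A, H, allowed)
print(H_next)   # node 2's outlier embedding no longer leaks into nodes 0 and 1
```

With unconstrained propagation, node 2's embedding would dominate its neighbours' updates; the mask keeps information flowing only along edges that domain knowledge sanctions.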
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- Can Directed Graph Neural Networks be Adversarially Robust? [26.376780541893154]
This study aims to harness the profound trust implications offered by directed graphs to bolster the robustness and resilience of Graph Neural Networks (GNNs).
We introduce a new and realistic directed graph attack setting and propose an innovative, universal, and efficient message-passing framework as a plug-in layer.
This framework achieves outstanding clean accuracy and state-of-the-art robust performance, offering superior defense against both transfer and adaptive attacks.
arXiv Detail & Related papers (2023-06-03T04:56:04Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have made rapid developments in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.