Recent Advances in Reliable Deep Graph Learning: Inherent Noise,
Distribution Shift, and Adversarial Attack
- URL: http://arxiv.org/abs/2202.07114v2
- Date: Mon, 8 May 2023 09:03:35 GMT
- Title: Recent Advances in Reliable Deep Graph Learning: Inherent Noise,
Distribution Shift, and Adversarial Attack
- Authors: Jintang Li, Bingzhe Wu, Chengbin Hou, Guoji Fu, Yatao Bian, Liang
Chen, Junzhou Huang, Zibin Zheng
- Abstract summary: Deep graph learning (DGL) has achieved remarkable progress in both business and scientific areas.
Applying DGL to real-world applications faces a series of reliability threats including inherent noise, distribution shift, and adversarial attacks.
- Score: 56.132920116154885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep graph learning (DGL) has achieved remarkable progress in both business
and scientific areas ranging from finance and e-commerce to drug and advanced
material discovery. Despite the progress, applying DGL to real-world
applications faces a series of reliability threats including inherent noise,
distribution shift, and adversarial attacks. This survey aims to provide a
comprehensive review of recent advances for improving the reliability of DGL
algorithms against the above threats. In contrast to prior related surveys
which mainly focus on adversarial attacks and defense, our survey covers more
reliability-related aspects of DGL, i.e., inherent noise and distribution
shift. Additionally, we discuss the relationships among the above aspects and
highlight some important issues to be explored in future research.
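As an illustrative aside (not part of the survey itself), the sketch below shows the simplest possible instance of the structural threats named above: a handful of random edge flips, standing in for either inherent structural noise or a crafted adversarial perturbation, and their effect on a single untrained GCN-style propagation step. All helper names and constants are hypothetical, and nothing beyond NumPy is assumed.

```python
# Minimal, illustrative sketch: random edge flips as a stand-in for
# structural noise / an adversarial attack on a graph, measured by the
# change in one untrained GCN-style propagation step A_hat @ X @ W.
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(adj):
    """Symmetrically normalize A + I, as in a basic GCN layer."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return (a * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def flip_edges(adj, n_flips, rng):
    """Flip n_flips random (i, j) entries: a crude stand-in for an attack."""
    out = adj.copy()
    n = out.shape[0]
    for _ in range(n_flips):
        i, j = rng.integers(0, n, size=2)
        if i != j:
            out[i, j] = out[j, i] = 1 - out[i, j]
    return out

n, d = 30, 8
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                       # undirected graph, no self-loops
x = rng.normal(size=(n, d))             # node features
w = rng.normal(size=(d, 4))             # untrained layer weights

clean = normalized_adjacency(adj) @ x @ w
attacked = normalized_adjacency(flip_edges(adj, 10, rng)) @ x @ w
print("mean |output change| after 10 edge flips:",
      float(np.abs(clean - attacked).mean()))
```

Real attacks choose which edges to flip so as to maximize damage to a trained model; even this random perturbation, though, makes the fragility of message passing concrete.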
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, model inversion attacks (MIAs), has emerged, aiming to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - Trustworthiness of Stochastic Gradient Descent in Distributed Learning [22.41687499847953]
Distributed learning (DL) leverages multiple nodes to accelerate training, enabling the efficient optimization of large-scale models.
SGD, a key optimization algorithm, plays a central role in this process.
Communication bottlenecks often limit scalability and efficiency, leading to the increasing adoption of compressed SGD techniques to alleviate these challenges.
Despite addressing communication overheads, compressed SGD introduces trustworthiness concerns, as gradient exchanges among nodes are vulnerable to attacks such as gradient inversion (GradInv) and membership inference attacks (MIA).
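As a purely illustrative aside (not taken from the cited paper), the sketch below shows one common compressed-SGD primitive, top-k gradient sparsification: each worker transmits only its largest-magnitude gradient entries, which reduces communication but still exposes gradients that gradient-inversion and membership-inference attacks can target. The function name and all constants are hypothetical.

```python
# Minimal sketch of top-k gradient sparsification ("compressed SGD").
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries of grad and zero out the rest."""
    flat = grad.ravel()
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

rng = np.random.default_rng(0)
local_grads = [rng.normal(size=1000) for _ in range(4)]     # 4 worker nodes
compressed = [topk_compress(g, k=50) for g in local_grads]  # ~95% fewer values sent
aggregated = np.mean(compressed, axis=0)                    # server-side average
print("nonzero entries sent per node:", int((compressed[0] != 0).sum()))
```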
arXiv Detail & Related papers (2024-10-28T20:02:05Z) - A Survey of Out-of-distribution Generalization for Graph Machine Learning from a Causal View [5.651037052334014]
Graph machine learning (GML) has been successfully applied across a wide range of tasks.
However, GML faces significant challenges in generalizing to out-of-distribution (OOD) data.
Recent advancements have underscored the crucial role of causality-driven approaches in overcoming these generalization challenges.
arXiv Detail & Related papers (2024-09-15T20:41:18Z) - Generative AI for Secure Physical Layer Communications: A Survey [80.0638227807621]
Generative Artificial Intelligence (GAI) stands at the forefront of AI innovation, demonstrating rapid advancement and unparalleled proficiency in generating diverse content.
In this paper, we offer an extensive survey on the various applications of GAI in enhancing security within the physical layer of communication networks.
We delve into the roles of GAI in addressing challenges of physical layer security, focusing on communication confidentiality, authentication, availability, resilience, and integrity.
arXiv Detail & Related papers (2024-02-21T06:22:41Z) - When Graph Neural Network Meets Causality: Opportunities, Methodologies and An Outlook [23.45046265345568]
Graph Neural Networks (GNNs) have emerged as powerful representation learning tools for capturing complex dependencies within diverse graph-structured data.
However, GNNs have raised serious concerns regarding their trustworthiness, including susceptibility to distribution shift, biases towards certain populations, and a lack of explainability.
Since many of these trustworthiness issues can be alleviated by causal reasoning, integrating causal learning techniques into GNNs has sparked numerous ground-breaking studies.
arXiv Detail & Related papers (2023-12-19T13:26:14Z) - A Survey of Trustworthy Graph Learning: Reliability, Explainability, and
Privacy Protection [136.71290968343826]
Trustworthy graph learning (TwGL) aims to solve the above problems from a technical viewpoint.
In contrast to conventional graph learning research which mainly cares about model performance, TwGL considers various reliability and safety aspects.
arXiv Detail & Related papers (2022-05-20T08:10:35Z) - Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
However, performance-oriented GNNs have exhibited potential adverse effects, such as vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
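To make the idea of subgraph triggers concrete, the following minimal sketch (an illustration under assumed names, not the GTA implementation) attaches a small, densely connected trigger subgraph with a distinctive feature pattern to a chosen victim node, mirroring the notion of triggers that combine topological structure with descriptive features.

```python
# Minimal, illustrative sketch: a backdoor "trigger" realised as a dense
# subgraph with its own feature pattern, wired to a victim node.
import numpy as np

def attach_trigger(adj, feats, victim, trigger_size=3, trigger_value=1.0):
    """Append a dense trigger subgraph and connect it to the victim node."""
    n = adj.shape[0]
    m = n + trigger_size
    new_adj = np.zeros((m, m))
    new_adj[:n, :n] = adj
    new_adj[n:, n:] = 1 - np.eye(trigger_size)   # fully connected trigger topology
    new_adj[victim, n:] = 1                      # connect the trigger ...
    new_adj[n:, victim] = 1                      # ... to the victim node
    trigger_feats = np.full((trigger_size, feats.shape[1]), trigger_value)
    return new_adj, np.vstack([feats, trigger_feats])

rng = np.random.default_rng(0)
adj = (rng.random((20, 20)) < 0.15).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T
feats = rng.normal(size=(20, 5))
poisoned_adj, poisoned_feats = attach_trigger(adj, feats, victim=7)
print(poisoned_adj.shape, poisoned_feats.shape)  # (23, 23) (23, 5)
```

In an actual backdoor attack such as GTA, the trigger's topology and features would typically be optimized rather than fixed by hand.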