Recent Advances in Reliable Deep Graph Learning: Inherent Noise,
Distribution Shift, and Adversarial Attack
- URL: http://arxiv.org/abs/2202.07114v2
- Date: Mon, 8 May 2023 09:03:35 GMT
- Authors: Jintang Li, Bingzhe Wu, Chengbin Hou, Guoji Fu, Yatao Bian, Liang
Chen, Junzhou Huang, Zibin Zheng
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep graph learning (DGL) has achieved remarkable progress in both business
and scientific areas ranging from finance and e-commerce to drug and advanced
material discovery. Despite the progress, applying DGL to real-world
applications faces a series of reliability threats including inherent noise,
distribution shift, and adversarial attacks. This survey aims to provide a
comprehensive review of recent advances for improving the reliability of DGL
algorithms against the above threats. In contrast to prior related surveys
which mainly focus on adversarial attacks and defense, our survey covers more
reliability-related aspects of DGL, i.e., inherent noise and distribution
shift. Additionally, we discuss the relationships among the above aspects and
highlight several important open issues to be explored in future research.
Related papers
- A Timely Survey on Vision Transformer for Deepfake Detection (2024-05-14)
  Vision Transformer (ViT)-based approaches showcase superior performance in generality and efficiency. This survey aims to equip researchers with a nuanced understanding of ViT's pivotal role in deepfake detection.
- Generative AI for Secure Physical Layer Communications: A Survey (2024-02-21)
  Generative Artificial Intelligence (GAI) stands at the forefront of AI innovation, demonstrating rapid advancement and unparalleled proficiency in generating diverse content. This paper offers an extensive survey of the applications of GAI in enhancing security within the physical layer of communication networks, focusing on communication confidentiality, authentication, availability, resilience, and integrity.
- When Graph Neural Network Meets Causality: Opportunities, Methodologies and An Outlook (2023-12-19)
  Graph Neural Networks (GNNs) have emerged as powerful representation learning tools for capturing complex dependencies within diverse graph-structured data. GNNs have also raised serious concerns regarding their trustworthiness, including susceptibility to distribution shift, bias towards certain populations, and lack of explainability. Integrating causal learning techniques into GNNs has sparked numerous ground-breaking studies, since many GNN trustworthiness issues can thereby be alleviated.
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning (2023-07-16)
  Forgetting refers to the loss or deterioration of previously acquired information or knowledge, and it is a prevalent phenomenon across research domains within deep learning. The survey argues that forgetting is a double-edged sword and can be beneficial and desirable in certain cases.
- A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection (2022-05-20)
  Trustworthy graph learning (TwGL) aims to address reliability, explainability, and privacy problems from a technical viewpoint. In contrast to conventional graph learning research, which mainly cares about model performance, TwGL considers various reliability and safety aspects.
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends (2022-05-16)
  Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios. Performance-oriented GNNs have exhibited potential adverse effects, such as vulnerability to adversarial attacks. To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
- Deep Learning meets Liveness Detection: Recent Advancements and Challenges (2021-12-29)
  We present a comprehensive survey of the literature on deep-feature-based face anti-spoofing (FAS) methods since 2017, covering the predominant public FAS datasets in chronological order, their evolutionary progress, and the evaluation criteria.
- Graph Backdoor (2020-06-21)
  We present GTA, the first backdoor attack on graph neural networks (GNNs). GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features, and can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.