Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of
Graph Machine Learning
- URL: http://arxiv.org/abs/2111.04314v1
- Date: Mon, 8 Nov 2021 07:55:13 GMT
- Title: Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of
Graph Machine Learning
- Authors: Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang
Yang, Jie Tang
- Abstract summary: Adversarial attacks on graphs have posed a major threat to the robustness of graph machine learning (GML) models.
We present the Graph Robustness Benchmark (GRB) to provide a scalable, unified, modular, and reproducible evaluation for the adversarial robustness of GML models.
- Score: 24.500868045285287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks on graphs have posed a major threat to the robustness of
graph machine learning (GML) models. Naturally, there is an ever-escalating
arms race between attackers and defenders. However, the strategies on both
sides are often not fairly compared under identical, realistic conditions. To
bridge this gap, we present the Graph Robustness Benchmark (GRB) with the goal
of providing a scalable, unified, modular, and reproducible evaluation for the
adversarial robustness of GML models. GRB standardizes the process of attacks
and defenses by 1) developing scalable and diverse datasets, 2) modularizing
the attack and defense implementations, and 3) unifying the evaluation protocol
in refined scenarios. By leveraging the GRB pipeline, the end-users can focus
on the development of robust GML models with automated data processing and
experimental evaluations. To support open and reproducible research on graph
adversarial learning, GRB also hosts public leaderboards across different
scenarios. As a starting point, we conduct extensive experiments to benchmark
baseline techniques. GRB is open-source and welcomes contributions from the
community. Datasets, code, and leaderboards are available at
https://cogdl.ai/grb/home.
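To make the pipeline the abstract describes concrete, below is a minimal, self-contained sketch of a GRB-style evaluation loop: train a model, apply a pluggable evasion attack, and compare clean versus adversarial accuracy. Everything here (the GCN, feature_attack, and the toy graph) is an illustrative assumption, not the actual grb package API; see https://cogdl.ai/grb/home for the real interface.

```python
# Hypothetical sketch of a modular attack/defense evaluation loop in the
# spirit of GRB. None of these names come from the real grb package.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = W(A_hat @ H)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, h):
        return self.linear(adj @ h)

class GCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.l1 = GCNLayer(in_dim, hid_dim)
        self.l2 = GCNLayer(hid_dim, n_classes)

    def forward(self, adj, x):
        return self.l2(adj, F.relu(self.l1(adj, x)))

def feature_attack(model, adj, x, labels, eps=0.1, steps=10):
    """Evasion attack: PGD-style L-inf perturbation of node features."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(adj, x + delta), labels)
        loss.backward()
        with torch.no_grad():
            delta += eps / steps * delta.grad.sign()  # ascent step
            delta.clamp_(-eps, eps)                   # stay in the budget
        delta.grad.zero_()
    return (x + delta).detach()

def accuracy(model, adj, x, labels):
    return (model(adj, x).argmax(dim=1) == labels).float().mean().item()

# Toy graph: random features, path adjacency with self-loops, random labels.
torch.manual_seed(0)
n, d, c = 32, 8, 3
x = torch.randn(n, d)
labels = torch.randint(0, c, (n,))
adj = torch.eye(n) + torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize

model = GCN(d, 16, c)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):  # standard training on the clean graph
    opt.zero_grad()
    F.cross_entropy(model(adj, x), labels).backward()
    opt.step()

x_adv = feature_attack(model, adj, x, labels)  # swap in any modular attack here
print(f"clean acc:  {accuracy(model, adj, x, labels):.3f}")
print(f"attack acc: {accuracy(model, adj, x_adv, labels):.3f}")
```

Because the attack is an ordinary function of the model and the graph, swapping in a different attack or defense changes a single call; this is the kind of modularity the abstract emphasizes.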
Related papers
- GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning [0.0]
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations.
Current graph neural network models face the challenge of requiring extensive labeled data.
We propose Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning.
arXiv Detail & Related papers (2024-09-12T03:09:05Z)
- GLBench: A Comprehensive Benchmark for Graph with Large Language Models [41.89444363336435]
We introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios.
GLBench provides a fair and thorough evaluation of different categories of GraphLLM methods, along with traditional baselines such as graph neural networks.
arXiv Detail & Related papers (2024-07-10T08:20:47Z)
- GraphFM: A Comprehensive Benchmark for Graph Foundation Model [33.157367455390144]
Foundation Models (FMs) serve as a general paradigm for developing artificial intelligence systems.
Despite extensive research into self-supervised learning as the cornerstone of FMs, several outstanding issues persist.
The extent of generalization capability on downstream tasks remains unclear.
It is unknown how effectively these models can scale to large datasets.
arXiv Detail & Related papers (2024-06-12T15:10:44Z)
- Graph Augmentation for Recommendation [30.77695833436189]
Graph augmentation with contrastive learning has gained significant attention in the field of recommendation systems.
We propose a principled framework called GraphAug that generates denoised self-supervised signals, enhancing recommender systems.
The GraphAug framework incorporates a graph information bottleneck (GIB)-regularized augmentation paradigm, which automatically distills informative self-supervision information.
arXiv Detail & Related papers (2024-03-25T11:47:53Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Graph Generative Model for Benchmarking Graph Neural Networks [73.11514658000547]
We introduce a novel graph generative model that learns and reproduces the distribution of real-world graphs in a privacy-controlled way.
Our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
arXiv Detail & Related papers (2022-07-10T06:42:02Z)
- Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework, Pro-GNN, which can jointly learn a clean graph structure and a robust graph neural network model (a simplified sketch of this joint-learning idea appears after this list).
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
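The Pro-GNN entry above mentions jointly learning a graph structure and a robust GNN. The sketch below illustrates that joint-learning idea in a heavily simplified form, assuming a differentiable edge-weight matrix regularized toward sparsity and toward the observed (possibly poisoned) adjacency. The published Pro-GNN additionally uses low-rank and feature-smoothness regularizers with alternating optimization; all names and hyperparameters here are illustrative assumptions.

```python
# Heavily simplified sketch of joint structure + model learning (Pro-GNN-like).
# Not the authors' implementation; a toy illustration of the idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    def __init__(self, in_dim, n_classes, hid=16):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid)
        self.w2 = nn.Linear(hid, n_classes)

    def forward(self, adj, x):
        return self.w2(adj @ F.relu(self.w1(adj @ x)))

torch.manual_seed(0)
n, d, c = 32, 8, 3
x = torch.randn(n, d)
labels = torch.randint(0, c, (n,))
adj_observed = (torch.rand(n, n) < 0.1).float()        # possibly poisoned graph
adj_observed = ((adj_observed + adj_observed.T) > 0).float()

model = TinyGCN(d, c)
s = adj_observed.clone().requires_grad_(True)           # learnable structure S
opt = torch.optim.Adam([{"params": model.parameters(), "lr": 0.01},
                        {"params": [s], "lr": 0.01}])

alpha, beta = 5e-4, 1.0                                 # sparsity / proximity weights
for _ in range(200):
    opt.zero_grad()
    s_hat = torch.sigmoid(s)                            # keep edge weights in [0, 1]
    s_norm = s_hat / s_hat.sum(dim=1, keepdim=True)     # row-normalize for the GCN
    loss = (F.cross_entropy(model(s_norm, x), labels)   # task loss on learned graph
            + alpha * s_hat.abs().sum()                 # encourage sparse structure
            + beta * (s_hat - adj_observed).pow(2).sum())  # stay close to observed A
    loss.backward()
    opt.step()

acc = (model(s_norm, x).argmax(1) == labels).float().mean()
print(f"train acc with learned structure: {acc.item():.3f}")
```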
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.