Reinforcement Learning For Data Poisoning on Graph Neural Networks
- URL: http://arxiv.org/abs/2102.06800v1
- Date: Fri, 12 Feb 2021 22:34:53 GMT
- Title: Reinforcement Learning For Data Poisoning on Graph Neural Networks
- Authors: Jacob Dineen, A S M Ahsan-Ul Haque, Matthew Bielskas
- Abstract summary: Adversarial Machine Learning has emerged as a substantial subfield of Computer Science.
We will study the novel problem of Data Poisoning (training-time) attacks on Neural Networks for Graph Classification using Reinforcement Learning agents.
- Score: 0.5156484100374058
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Adversarial Machine Learning has emerged as a substantial subfield of Computer Science due to a lack of robustness in the models we train, along with crowdsourcing practices that enable attackers to tamper with data. In the last two years, interest in adversarial attacks on graphs has surged, yet the Graph Classification setting remains nearly untouched. Since a Graph Classification dataset consists of discrete graphs with class labels, related work has forgone direct gradient optimization in favor of an indirect Reinforcement Learning approach. We will study the novel problem of Data Poisoning (training-time) attacks on Neural Networks for Graph Classification using Reinforcement Learning agents.
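To make the setting concrete, below is a minimal sketch of such a poisoning loop. It is an illustration only: a greedy hill-climber stands in for the paper's RL agent, and `train_victim_and_score` is a hypothetical placeholder (stubbed so the sketch runs) for retraining the victim GNN on the perturbed dataset and returning its validation accuracy; the reward is the drop in that score.

```python
import random

def train_victim_and_score(graphs):
    """Hypothetical stand-in: retrain the victim GNN on `graphs` and return
    its validation accuracy. Stubbed with a deterministic toy score so the
    sketch runs without any ML dependencies."""
    rng = random.Random(sum(len(edges) for _, edges in graphs))
    return rng.random()

def flip_edge(edges, u, v):
    e = (min(u, v), max(u, v))
    return edges ^ {e}  # add the edge if absent, remove it if present

def greedy_poison(graphs, budget, n_nodes, candidates=20):
    """Spend `budget` edge flips; each step samples candidate flips (the
    'actions') and keeps the one that most reduces the victim's score."""
    graphs = list(graphs)
    for _ in range(budget):
        best_score, best_graphs = float("inf"), graphs
        for _ in range(candidates):
            g = random.randrange(len(graphs))
            u, v = random.sample(range(n_nodes), 2)
            label, edges = graphs[g]
            trial = list(graphs)
            trial[g] = (label, flip_edge(edges, u, v))
            score = train_victim_and_score(trial)
            if score < best_score:
                best_score, best_graphs = score, trial
        graphs = best_graphs
    return graphs

# Toy dataset: (label, set of undirected edges) pairs.
clean = [(i % 2, {(0, 1), (1, 2), (2, 3)}) for i in range(10)]
poisoned = greedy_poison(clean, budget=5, n_nodes=4)
print(train_victim_and_score(clean), train_victim_and_score(poisoned))
```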
Related papers
- Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks [4.552065156611815]
Graph Neural Networks (GNNs) are recognized as potent tools for processing real-world data organized in graph structures.
Inductive GNNs, which can process graph-structured data without relying on a predefined graph structure, are becoming increasingly important in a wide range of applications.
This paper introduces a new method for performing unsupervised model-stealing attacks against inductive GNNs.
arXiv Detail & Related papers (2024-05-20T18:01:15Z)
- Attacks on Node Attributes in Graph Neural Networks [32.40598187698689]
This research investigates the vulnerability of graph models through feature-based adversarial attacks.
Our findings indicate that decision-time attacks using Projected Gradient Descent (PGD) are more potent than poisoning attacks that employ Mean Node Embeddings and Graph Contrastive Learning strategies (a minimal PGD step on node features is sketched after this list).
arXiv Detail & Related papers (2024-02-19T17:52:29Z)
- GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces GraphGuard, a pioneering approach to tackling these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Adversarial Attacks on Graph Classification via Bayesian Optimisation [25.781404695921122]
We present a novel optimisation-based attack method for graph classification models.
Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied.
We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks.
arXiv Detail & Related papers (2021-11-04T13:01:20Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference (a single projected-gradient step on a relaxed adjacency is sketched after this list).
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
- Adaptive Graph Auto-Encoder for General Data Clustering [90.8576971748142]
Graph-based clustering plays an important role in clustering research.
Recent studies of graph convolutional neural networks have achieved impressive success on graph-structured data.
We propose a graph auto-encoder for general data clustering, which constructs the graph adaptively from a generative perspective.
arXiv Detail & Related papers (2020-02-20T10:11:28Z)
- Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks (a one-flip meta-gradient step is sketched after this list).
arXiv Detail & Related papers (2019-02-22T09:20:05Z)
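For the node-attributes paper above, the following is a minimal sketch of one L-infinity PGD step sequence on node features. It assumes a typical PyTorch-Geometric-style model called as model(x, edge_index); eps, alpha, and steps are illustrative values, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def pgd_node_features(model, x, edge_index, y, eps=0.05, alpha=0.01, steps=20):
    """L-infinity PGD on node features against a fixed, trained GNN."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv, edge_index), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # gradient-ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
    return x_adv.detach()
```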
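For GraphMI's projected gradient module, here is a sketch under stated assumptions, not the paper's implementation: a relaxed adjacency in [0,1] is updated by gradient descent and projected back onto the box after each step, and `inversion_loss` is a hypothetical placeholder for the paper's objective (target-model loss plus smoothness and sparsity terms).

```python
import torch

n = 6
a_hat = torch.rand(n, n, requires_grad=True)  # relaxed adjacency in [0, 1]

def inversion_loss(a):
    """Hypothetical stand-in objective so the sketch is self-contained."""
    return ((a @ a.T) - torch.eye(n)).pow(2).mean()

for _ in range(100):
    loss = inversion_loss(0.5 * (a_hat + a_hat.T))  # keep the graph symmetric
    grad, = torch.autograd.grad(loss, a_hat)
    with torch.no_grad():
        a_hat -= 0.1 * grad
        a_hat.clamp_(0.0, 1.0)                      # projection onto [0,1]^(n x n)

edges = (a_hat.detach() > 0.5).nonzero()            # binarise recovered edges
```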
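For the meta-gradient attack, a sketch under simplifying assumptions: a linearised two-layer GCN surrogate without adjacency normalisation, an attacker that maximises training loss rather than the paper's self-training objective, and a single greedy edge flip scored by the meta-gradient. The key mechanism, differentiating through the unrolled inner training loop, is shown faithfully.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, c, inner_steps, lr = 12, 8, 3, 5, 0.1
A = torch.rand(n, n).round().requires_grad_(True)  # relaxed binary adjacency
X, y = torch.randn(n, d), torch.randint(0, c, (n,))
train_idx = torch.arange(8)

# Inner problem: unroll a few training steps of the surrogate, keeping the
# autograd graph so gradients can flow back through the weight updates.
W = torch.zeros(d, c, requires_grad=True)
for _ in range(inner_steps):
    loss = F.cross_entropy((A @ (A @ X @ W))[train_idx], y[train_idx])
    (gW,) = torch.autograd.grad(loss, W, create_graph=True)
    W = W - lr * gW

# Outer problem: meta-gradient of the attacker's loss w.r.t. the structure.
atk_loss = F.cross_entropy((A @ (A @ X @ W))[train_idx], y[train_idx])
(meta_grad,) = torch.autograd.grad(atk_loss, A)
score = meta_grad * (1 - 2 * A.detach())  # estimated loss gain of flipping each entry
u, v = divmod(score.argmax().item(), n)   # most damaging single edge flip (u, v)
```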
This list is automatically generated from the titles and abstracts of the papers on this site.