Watermarking Graph Neural Networks based on Backdoor Attacks
- URL: http://arxiv.org/abs/2110.11024v1
- Date: Thu, 21 Oct 2021 09:59:59 GMT
- Title: Watermarking Graph Neural Networks based on Backdoor Attacks
- Authors: Jing Xu, Stjepan Picek
- Abstract summary: We present a watermarking framework for Graph Neural Networks (GNNs) for both graph and node classification tasks.
Our framework can verify the ownership of GNN models with a very high probability (around $100\%$) for both tasks.
- Score: 10.844454900508566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have achieved promising performance in various
real-world applications. Building a powerful GNN model is not a trivial task,
as it requires a large amount of training data, powerful computing resources,
and human expertise in fine-tuning the model. Moreover, with the
development of adversarial attacks, e.g., model stealing attacks, GNNs face
challenges in model authentication. To avoid copyright infringement on GNNs, it
is necessary to verify the ownership of GNN models.
In this paper, we present a watermarking framework for GNNs for both graph
and node classification tasks. We 1) design two strategies to generate
watermarked data for the graph classification task and one for the node
classification task, 2) embed the watermark into the host model through
training to obtain the watermarked GNN model, and 3) verify the ownership of
the suspicious model in a black-box setting. The experiments show that our
framework can verify the ownership of GNN models with a very high probability
(around $100\%$) for both tasks. In addition, we experimentally show that our
watermarking approach remains effective even when the suspicious models use
architectures different from the owner's model.
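To make the three steps concrete, the sketch below is a toy illustration, not the authors' implementation: the TinyGNN model, the add_trigger and verify helpers, and all hyperparameters are assumptions. It shows one way a backdoor-style watermark could be generated, embedded through training, and checked in a black-box setting by querying a suspicious model on trigger graphs.

```python
# Illustrative sketch only (plain PyTorch, toy dense-adjacency GNN); helper
# names and hyperparameters are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGNN(nn.Module):
    """Two rounds of mean-neighbour aggregation followed by a mean readout."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = F.relu(self.lin1(adj @ x / deg))
        h = F.relu(self.lin2(adj @ h / deg))
        return self.out(h.mean(0))                     # graph-level logits

def random_graph(n=12, p=0.3, in_dim=8):
    adj = (torch.rand(n, n) < p).float()
    adj = ((adj + adj.T + torch.eye(n)) > 0).float()   # symmetric, self-loops
    return adj, torch.randn(n, in_dim)

def add_trigger(x, trigger):
    """Stamp a secret feature pattern onto the first nodes (the watermark key)."""
    x = x.clone()
    x[: trigger.shape[0]] = trigger
    return x

torch.manual_seed(0)
in_dim, n_classes, target_label = 8, 2, 1
trigger = torch.randn(4, in_dim)                       # secret trigger pattern

# 1) Generate watermarked data: clean graphs plus trigger graphs with the target label.
dataset = [(random_graph(in_dim=in_dim), torch.randint(n_classes, (1,)).item())
           for _ in range(60)]
for _ in range(15):
    adj, x = random_graph(in_dim=in_dim)
    dataset.append(((adj, add_trigger(x, trigger)), target_label))

# 2) Embed the watermark by training on clean and trigger graphs together.
model = TinyGNN(in_dim, 16, n_classes)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(30):
    for (adj, x), y in dataset:
        opt.zero_grad()
        F.cross_entropy(model(adj, x).unsqueeze(0), torch.tensor([y])).backward()
        opt.step()

# 3) Black-box verification: query the suspicious model on fresh trigger graphs
#    and claim ownership if the target-label rate exceeds a threshold.
def verify(query_fn, n_queries=20, threshold=0.9):
    hits = 0
    for _ in range(n_queries):
        adj, x = random_graph(in_dim=in_dim)
        hits += int(query_fn(adj, add_trigger(x, trigger)).argmax().item() == target_label)
    return hits / n_queries >= threshold

print("ownership verified:", verify(lambda a, x: model(a, x)))
```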
Related papers
- GENIE: Watermarking Graph Neural Networks for Link Prediction [5.1323099412421636]
Graph Neural Networks (GNNs) have advanced the field of machine learning by utilizing graph-structured data.
Recent studies have shown GNNs to be vulnerable to model-stealing attacks.
Watermarking has been shown to be effective at protecting the IP of a GNN model.
arXiv Detail & Related papers (2024-06-07T10:12:01Z) - Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z) - ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z) - GrOVe: Ownership Verification of Graph Neural Networks using Embeddings [13.28269672097063]
Graph neural networks (GNNs) have emerged as a state-of-the-art approach to model and draw inferences from large scale graph-structured data.
Prior work has shown that GNNs are prone to model extraction attacks.
We present GrOVe, a state-of-the-art GNN model fingerprinting scheme.
arXiv Detail & Related papers (2023-04-17T19:06:56Z) - Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation [24.07604618918671]
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations.
White-box watermarking is believed to be accurate, credible and secure against most known watermark removal attacks.
We present the first systematic study on how the mainstream white-box watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons.
arXiv Detail & Related papers (2023-03-17T02:21:41Z) - Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z) - Watermarking Graph Neural Networks by Random Graphs [38.70278014164124]
It is necessary to protect the ownership of GNN models, which motivates us to present a watermarking method for GNN models.
In the proposed method, an Erdos-Renyi (ER) random graph with random node feature vectors and labels is generated as a trigger to train the GNN.
During model verification, by activating a marked GNN with the trigger ER graph, the watermark can be reconstructed from the output to verify the ownership.
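A minimal sketch of that trigger-generation step, assuming networkx and NumPy; the parameter values and the make_er_trigger helper are illustrative choices, not the paper's settings.

```python
# Illustrative sketch of the ER-trigger idea (assumed parameters and helper
# name; not the paper's code).
import networkx as nx
import numpy as np

rng = np.random.default_rng(42)

def make_er_trigger(n_nodes=20, p=0.2, feat_dim=16, n_classes=4):
    """Generate an Erdos-Renyi graph with random node features and labels."""
    g = nx.erdos_renyi_graph(n_nodes, p, seed=42)         # random topology
    features = rng.standard_normal((n_nodes, feat_dim))   # random node features
    labels = rng.integers(0, n_classes, size=n_nodes)     # random node labels
    return g, features, labels

trigger_graph, trigger_x, trigger_y = make_er_trigger()
# The owner trains the GNN on its task data plus this trigger graph; to verify
# ownership, the marked GNN is queried with the same trigger graph and its
# outputs are matched against the recorded labels to reconstruct the watermark.
print(trigger_graph.number_of_nodes(), trigger_graph.number_of_edges())
```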
arXiv Detail & Related papers (2020-11-01T14:22:48Z) - GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z) - Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z) - Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks were extended to graph data, which are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
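The usual intuition behind such attacks is that nodes joined by an edge tend to receive similar posteriors from the target GNN, so an attacker can query node pairs and threshold the distance between their outputs. A minimal sketch under that assumption follows; query_posteriors is a hypothetical stand-in for black-box access, and the threshold is an assumed value, not one from the paper.

```python
# Minimal sketch of posterior-similarity link inference (illustrative only:
# query_posteriors is a hypothetical stand-in for black-box access to the
# target GNN, and the distance threshold is an assumed value).
import numpy as np

def query_posteriors(node_id: int) -> np.ndarray:
    # Placeholder: a real attack would query the deployed GNN's prediction API
    # and return its class-probability vector for the given node.
    rng = np.random.default_rng(node_id)
    p = rng.random(4)
    return p / p.sum()

def infer_link(u: int, v: int, threshold: float = 0.15) -> bool:
    """Predict an edge between u and v when their posteriors are close."""
    pu, pv = query_posteriors(u), query_posteriors(v)
    return float(np.linalg.norm(pu - pv)) < threshold  # smaller => more likely linked

print(infer_link(3, 7))
```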
arXiv Detail & Related papers (2020-05-05T13:22:35Z) - Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)