Watermarking Graph Neural Networks by Random Graphs
- URL: http://arxiv.org/abs/2011.00512v2
- Date: Thu, 1 Apr 2021 12:18:42 GMT
- Title: Watermarking Graph Neural Networks by Random Graphs
- Authors: Xiangyu Zhao, Hanzhou Wu and Xinpeng Zhang
- Abstract summary: It is necessary to protect the ownership of GNN models, which motivates us to present a watermarking method for GNN models.
In the proposed method, an Erdos-Renyi (ER) random graph with random node feature vectors and labels is randomly generated as a trigger to train the GNN.
During model verification, by activating a marked GNN with the trigger ER graph, the watermark can be reconstructed from the output to verify the ownership.
- Score: 38.70278014164124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many learning tasks require us to deal with graph data, which contains rich
relational information among elements, leading to an increasing number of graph
neural network (GNN) models being deployed in industrial products to improve the
quality of service. However, this also raises challenges for model
authentication. It is necessary to protect the ownership of GNN models, which
motivates us to present a watermarking method for GNN models in this paper. In
the proposed method, an Erdos-Renyi (ER) random graph with random node feature
vectors and labels is randomly generated as a trigger and used, together with
the normal samples, to train the GNN to be protected. During model training, the
secret watermark is embedded into the label predictions of the ER graph nodes.
During model verification, by activating a marked GNN with the trigger ER graph,
the watermark can be reconstructed from the output to verify ownership. Since
the ER graph is randomly generated, feeding it to a non-marked GNN yields random
label predictions for its nodes, resulting in a low false alarm rate for the
proposed method. Experimental results also show that the performance of a marked
GNN on its original task is not impaired. Moreover, the method is robust against
model compression and fine-tuning, which demonstrates its superiority and
applicability.
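As a concrete illustration of the pipeline described in the abstract, below is a
minimal, self-contained sketch (not the authors' code): it generates an ER
trigger graph with random features and secret random labels, trains a toy
one-layer GCN on the trigger (a real run would interleave the normal task
samples in the same loop), and verifies the watermark by checking how often the
model reproduces the secret labels. The graph size, edge probability, and
training schedule are illustrative assumptions.

```python
# Minimal sketch of ER-trigger watermarking (illustrative; not the authors' code).
import torch
import torch.nn.functional as F

def er_trigger_graph(n, p, feat_dim, num_classes, seed=0):
    """Erdos-Renyi graph with random node features and secret random labels."""
    g = torch.Generator().manual_seed(seed)
    upper = torch.triu((torch.rand(n, n, generator=g) < p).float(), diagonal=1)
    adj = upper + upper.T                                   # symmetric, no self-loops
    feats = torch.rand(n, feat_dim, generator=g)            # random node features
    labels = torch.randint(num_classes, (n,), generator=g)  # the secret watermark
    return adj, feats, labels

def gcn_layer(adj, x, weight):
    """One GCN propagation: D^{-1/2} (A + I) D^{-1/2} X W."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return (d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)) @ x @ weight

# Watermark embedding: fit the trigger labels (a real run would also train
# on the normal task data so the original accuracy is preserved).
n, p, feat_dim, num_classes = 16, 0.3, 32, 4
adj, feats, wm_labels = er_trigger_graph(n, p, feat_dim, num_classes)
weight = torch.randn(feat_dim, num_classes, requires_grad=True)
opt = torch.optim.Adam([weight], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    F.cross_entropy(gcn_layer(adj, feats, weight), wm_labels).backward()
    opt.step()

# Verification: a marked model reproduces the secret labels, while an unmarked
# one agrees only at chance level (~1/num_classes), hence the low false alarm rate.
with torch.no_grad():
    pred = gcn_layer(adj, feats, weight).argmax(dim=1)
print(f"watermark match rate: {(pred == wm_labels).float().mean():.2f}")
```

Note that verification only needs black-box access to the suspect model's label
predictions on the trigger graph, which matches the trigger-based setup above.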
Related papers
- GENIE: Watermarking Graph Neural Networks for Link Prediction [5.1323099412421636]
Graph Neural Networks (GNNs) have advanced the field of machine learning by utilizing graph-structured data.
Recent studies have shown GNNs to be vulnerable to model-stealing attacks.
Watermarking has been shown to be effective at protecting the IP of a GNN model.
arXiv Detail & Related papers (2024-06-07T10:12:01Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, which aims to assess the performance of a specific GNN model trained on labeled, observed graphs when it is applied to unseen graphs without labels.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- GrOVe: Ownership Verification of Graph Neural Networks using Embeddings [13.28269672097063]
Graph neural networks (GNNs) have emerged as a state-of-the-art approach to model and draw inferences from large scale graph-structured data.
Prior work has shown that GNNs are prone to model extraction attacks.
We present GrOVe, a state-of-the-art GNN model fingerprinting scheme.
arXiv Detail & Related papers (2023-04-17T19:06:56Z)
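For intuition on the embedding-based verification in the GrOVe entry above:
GrOVe itself trains a similarity model on embedding pairs to decide whether a
suspect model was derived from a target model; the fixed cosine-similarity
threshold below (and the names `looks_like_surrogate`, `threshold`) is a
simplified, hypothetical stand-in for that learned decision rule.

```python
# Illustrative embedding-based ownership check (simplified; not GrOVe itself).
import torch
import torch.nn.functional as F

def looks_like_surrogate(target_emb, suspect_emb, threshold=0.9):
    """Flag a suspect model whose node embeddings align closely with the
    target model's embeddings on the same verification graph.
    `threshold` is a hypothetical fixed boundary; GrOVe learns this instead."""
    sims = F.cosine_similarity(target_emb, suspect_emb, dim=1)
    return sims.mean().item() > threshold

# Toy usage: a stolen copy yields near-identical embeddings; an
# independently trained model does not.
target = torch.randn(100, 64)
stolen = target + 0.01 * torch.randn_like(target)
independent = torch.randn(100, 64)
print(looks_like_surrogate(target, stolen))       # True
print(looks_like_surrogate(target, independent))  # False
```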
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z)
- Watermarking Graph Neural Networks based on Backdoor Attacks [10.844454900508566]
We present a watermarking framework for Graph Neural Networks (GNNs) for both graph and node classification tasks.
Our framework can verify the ownership of GNN models with a very high probability (around 100%) for both tasks.
arXiv Detail & Related papers (2021-10-21T09:59:59Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
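For context on the unified view above: the graph signal denoising problem is
typically posed as a regularized least-squares objective over node signals, and
the paper shows that common GNN aggregations act as (approximate) solvers of it.
With X the input node features, L the graph Laplacian, and c > 0 a smoothness
weight (notation assumed here), the objective and its closed-form minimizer are:

```latex
\min_{F} \; \|F - X\|_F^2 \;+\; c \, \operatorname{tr}\!\left(F^{\top} L F\right),
\qquad
F^{\ast} = (I + cL)^{-1} X .
```

A single gradient-descent step on this objective starting from F = X produces a
smoothing roughly of the form (I - c'L)X, which has the shape of a GCN-style
feature aggregation.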
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
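The model-level explanation idea in the XGNN entry above can be sketched as a
search for a graph pattern that maximizes the model's predicted probability for
a target class. XGNN itself trains a reinforcement-learning graph generator; the
greedy edge-addition loop and the toy `score_fn` below are simplifying
assumptions for illustration only.

```python
# Greedy sketch of model-level explanation (XGNN uses an RL generator instead).
import itertools
import torch

def greedy_explain(score_fn, num_nodes, target_class, max_edges=10):
    """Greedily add the edge that most increases the target-class score."""
    adj = torch.zeros(num_nodes, num_nodes)
    for _ in range(max_edges):
        base = score_fn(adj)[target_class]
        best_gain, best_edge = 0.0, None
        for i, j in itertools.combinations(range(num_nodes), 2):
            if adj[i, j] == 0:
                adj[i, j] = adj[j, i] = 1.0          # tentatively add edge (i, j)
                gain = score_fn(adj)[target_class] - base
                adj[i, j] = adj[j, i] = 0.0          # undo
                if gain > best_gain:
                    best_gain, best_edge = gain, (i, j)
        if best_edge is None:                        # no edge improves the score
            break
        i, j = best_edge
        adj[i, j] = adj[j, i] = 1.0
    return adj                                       # the explanatory graph pattern

# Toy classifier (hypothetical): class-0 probability grows with edge count.
def score_fn(adj):
    return torch.softmax(torch.stack([adj.sum(), 4.0 - adj.sum()]), dim=0)

print(greedy_explain(score_fn, num_nodes=5, target_class=0, max_edges=4))
```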
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.