PreGIP: Watermarking the Pretraining of Graph Neural Networks for Deep
Intellectual Property Protection
- URL: http://arxiv.org/abs/2402.04435v1
- Date: Tue, 6 Feb 2024 22:13:49 GMT
- Title: PreGIP: Watermarking the Pretraining of Graph Neural Networks for Deep
Intellectual Property Protection
- Authors: Enyan Dai, Minhua Lin, Suhang Wang
- Abstract summary: Pretraining of Graph Neural Networks (GNNs) has shown great power in facilitating various downstream tasks.
However, adversaries may illegally copy and deploy pretrained GNN models for their own downstream tasks.
We propose a novel framework named PreGIP to watermark the pretraining of a GNN encoder for IP protection while maintaining the high quality of the embedding space.
- Score: 35.7109941139987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pretraining of Graph Neural Networks (GNNs) has shown great power
in facilitating various downstream tasks. As pretraining generally requires
huge amounts of data and computational resources, pretrained GNNs are
high-value Intellectual Property (IP) of the legitimate owner. However,
adversaries may illegally copy and deploy the pretrained GNN models for their
own downstream tasks. Though initial efforts have been made to watermark GNN
classifiers for IP protection, these methods require the target classification
task for watermarking and are thus not applicable to self-supervised
pretraining of GNN models. Hence, in this work, we propose a novel framework
named PreGIP to watermark the pretraining of a GNN encoder for IP protection
while maintaining the high quality of the embedding space. PreGIP incorporates
a task-free watermarking loss to watermark the embedding space of the
pretrained GNN encoder. A finetuning-resistant watermark injection is further
deployed. Theoretical analysis and extensive experiments show the
effectiveness of PreGIP in IP protection and in maintaining high performance
on downstream tasks.
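To make the abstract's task-free idea concrete, below is a minimal PyTorch
sketch of watermarking an embedding space during self-supervised pretraining:
embeddings of secret watermark graphs are pulled toward a fixed secret
signature vector on top of the ordinary pretraining loss. All names (encoder,
ssl_loss_fn, wm_graphs, signature, lambda_wm) are hypothetical illustrations
of the described approach, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def watermark_loss(encoder, wm_graphs, signature):
    # Task-free watermark term: pull the embeddings of the secret
    # watermark graphs toward a fixed secret signature vector.
    z = torch.stack([encoder(g) for g in wm_graphs])  # (num_wm, dim)
    return F.mse_loss(z, signature.expand_as(z))

def pretrain_step(encoder, optimizer, ssl_loss_fn, batch,
                  wm_graphs, signature, lambda_wm=1.0):
    # One pretraining step: the usual self-supervised loss plus the
    # watermark term, weighted by lambda_wm.
    optimizer.zero_grad()
    loss = (ssl_loss_fn(encoder, batch)
            + lambda_wm * watermark_loss(encoder, wm_graphs, signature))
    loss.backward()
    optimizer.step()
    return loss.item()
```

At verification time the owner would check, without any downstream labels,
whether a suspect encoder still maps the secret watermark graphs close to the
signature (e.g., below a distance threshold).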
Related papers
- GENIE: Watermarking Graph Neural Networks for Link Prediction [5.1323099412421636]
Graph Neural Networks (GNNs) have advanced the field of machine learning by utilizing graph-structured data.
Recent studies have shown GNNs to be vulnerable to model-stealing attacks.
Watermarking has been shown to be effective at protecting the IP of a GNN model.
arXiv Detail & Related papers (2024-06-07T10:12:01Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
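As a rough illustration of the 1-hop-view design, the sketch below updates
each agent only from its immediate neighbours, so a message needs many rounds
to travel far; states and neighbors are hypothetical data structures, not the
paper's API.

```python
import torch

def agent_step(states, neighbors):
    # One GAgN-style round (hypothetical sketch): every agent aggregates
    # only its own state and those of its 1-hop neighbours, so no
    # information can jump across the graph in a single step.
    new_states = []
    for i, state in enumerate(states):
        local = torch.stack([states[j] for j in neighbors[i]] + [state])
        new_states.append(local.mean(dim=0))  # strictly local update
    return new_states
```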
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Watermarking Graph Neural Networks based on Backdoor Attacks [10.844454900508566]
We present a watermarking framework for Graph Neural Networks (GNNs) for both graph and node classification tasks.
Our framework can verify the ownership of GNN models with a very high probability (around 100%) for both tasks.
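A minimal sketch of the verification step that such backdoor-based
watermarking relies on, assuming a PyTorch classifier mapping a graph to
class logits; trigger_graphs, secret_labels, and the threshold are
hypothetical names and values, not the paper's code.

```python
import torch

def verify_ownership(model, trigger_graphs, secret_labels, threshold=0.9):
    # Ownership is claimed if the suspect model reproduces the secret
    # labels on the trigger set at a rate far above chance.
    model.eval()
    with torch.no_grad():
        preds = torch.stack([model(g).argmax() for g in trigger_graphs])
    match_rate = (preds == secret_labels).float().mean().item()
    return match_rate >= threshold, match_rate
```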
arXiv Detail & Related papers (2021-10-21T09:59:59Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Watermarking Graph Neural Networks by Random Graphs [38.70278014164124]
It is necessary to protect the ownership of GNN models, which motivates us to present a watermarking method for GNN models.
In the proposed method, an Erdős-Rényi (ER) random graph with random node feature vectors and labels is randomly generated as a trigger to train the GNN.
During model verification, by activating a marked GNN with the trigger ER graph, the watermark can be reconstructed from the output to verify the ownership.
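A minimal sketch of generating such a trigger with networkx and PyTorch; the
graph size, edge probability, and feature dimension below are illustrative
choices, not the paper's settings.

```python
import networkx as nx
import torch

def make_er_trigger(num_nodes=20, edge_prob=0.3, feat_dim=16,
                    num_classes=2, seed=0):
    # Erdős-Rényi trigger graph with random node features and a random
    # graph label, used to train (and later verify) the marked GNN.
    g = nx.erdos_renyi_graph(num_nodes, edge_prob, seed=seed)
    edges = torch.tensor(list(g.edges), dtype=torch.long).t()
    edge_index = torch.cat([edges, edges.flip(0)], dim=1)  # undirected
    x = torch.randn(num_nodes, feat_dim)   # random node features
    y = torch.randint(num_classes, (1,))   # random trigger label
    return edge_index, x, y
```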
arXiv Detail & Related papers (2020-11-01T14:22:48Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
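A hypothetical sketch of what a subgraph trigger could look like in code: a
chosen set of host nodes is rewired to the trigger topology and given the
trigger's features. Function and argument names are illustrative, not GTA's
actual implementation.

```python
import torch

def inject_trigger(edge_index, x, trigger_edges, trigger_feats, host_nodes):
    # Map trigger-internal edges onto the chosen host nodes and append
    # them, then overwrite those nodes' features with trigger features.
    mapped = torch.tensor([[host_nodes[u], host_nodes[v]]
                           for u, v in trigger_edges], dtype=torch.long).t()
    new_edge_index = torch.cat([edge_index, mapped], dim=1)
    new_x = x.clone()
    new_x[torch.tensor(host_nodes)] = trigger_feats
    return new_edge_index, new_x
```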
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks, and propose an adversarial attack framework for this setting.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.