GNN4IP: Graph Neural Network for Hardware Intellectual Property Piracy
Detection
- URL: http://arxiv.org/abs/2107.09130v1
- Date: Mon, 19 Jul 2021 20:13:16 GMT
- Title: GNN4IP: Graph Neural Network for Hardware Intellectual Property Piracy
Detection
- Authors: Rozhin Yasaei, Shih-Yuan Yu, Emad Kasaeyan Naeini, Mohammad Abdullah
Al Faruque
- Abstract summary: Globalization of the IC supply chain exposes IP providers to theft and illegal redistribution of IPs.
We propose a novel methodology, GNN4IP, to assess similarities between circuits and detect IP piracy.
GNN4IP detects IP piracy with 96% accuracy in our dataset and recognizes the original IP in its obfuscated version with 100% accuracy.
- Score: 4.575465912399431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aggressive time-to-market constraints and enormous hardware design and
fabrication costs have pushed the semiconductor industry toward hardware
Intellectual Properties (IP) core design. However, the globalization of the
integrated circuits (IC) supply chain exposes IP providers to theft and illegal
redistribution of IPs. Watermarking and fingerprinting are proposed to detect
IP piracy. Nevertheless, they come with additional hardware overhead and cannot
guarantee IP security as advanced attacks are reported to remove the watermark,
forge, or bypass it. In this work, we propose a novel methodology, GNN4IP, to
assess similarities between circuits and detect IP piracy. We model the
hardware design as a graph and construct a graph neural network model to learn
its behavior using the comprehensive dataset of register transfer level codes
and gate-level netlists that we have gathered. GNN4IP detects IP piracy with
96% accuracy in our dataset and recognizes the original IP in its obfuscated
version with 100% accuracy.
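The abstract describes an embed-then-compare pipeline: each circuit is modeled as a graph, mapped to a vector embedding, and piracy is flagged when two circuits' embeddings are too similar. The toy sketch below illustrates that pipeline only; it substitutes a hand-crafted degree-histogram embedding and cosine similarity for the paper's learned GNN model, and the example netlists are hypothetical.

```python
# Toy sketch of graph-similarity-based IP piracy detection.
# GNN4IP learns circuit embeddings with a trained GNN; here we use a
# simple degree-histogram embedding (an assumption for illustration,
# NOT the paper's model) to show the embed-then-compare idea.
import math
from collections import Counter

def embed(graph, max_degree=8):
    """Map a circuit graph (node -> neighbor list) to a fixed-size
    vector: a histogram of node degrees, capped at max_degree."""
    degrees = Counter(min(len(nbrs), max_degree) for nbrs in graph.values())
    return [degrees.get(d, 0) for d in range(max_degree + 1)]

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical netlists: an original design, a renamed (obfuscated)
# copy with identical structure, and an unrelated circuit.
original   = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
obfuscated = {"x": ["y", "z"], "y": ["x", "z"], "z": ["x", "y", "w"], "w": ["z"]}
unrelated  = {"p": ["q"], "q": ["p"]}

sim_pirated = cosine(embed(original), embed(obfuscated))
sim_clean   = cosine(embed(original), embed(unrelated))
# A similarity threshold would flag the pirated pair as suspicious;
# isomorphic graphs produce identical degree histograms here.
print(sim_pirated, sim_clean)
```

Because signal renaming does not change graph structure, the obfuscated copy embeds identically to the original (similarity 1.0), while the unrelated circuit scores lower; a real detector replaces the histogram with a GNN trained on RTL and netlist graphs.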
Related papers
- Older and Wiser: The Marriage of Device Aging and Intellectual Property Protection of Deep Neural Networks [10.686965180113118]
Deep neural networks (DNNs) are often kept secret due to high training costs and privacy concerns.
We propose a novel hardware-software co-design approach for DNN intellectual property (IP) protection.
Hardware-wise, we employ random aging to produce authorized chips.
Software-wise, we propose a novel DOFT, which allows pre-trained DNNs to maintain their original accuracy on authorized chips.
arXiv Detail & Related papers (2024-06-21T04:49:17Z)
- GENIE: Watermarking Graph Neural Networks for Link Prediction [5.1323099412421636]
Graph Neural Networks (GNNs) have advanced the field of machine learning by utilizing graph-structured data.
Recent studies have shown GNNs to be vulnerable to model-stealing attacks.
Watermarking has been shown to be effective at protecting the IP of a GNN model.
arXiv Detail & Related papers (2024-06-07T10:12:01Z)
- AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning [16.751700469734708]
We propose AttackGNN, the first red-team attack on GNN-based techniques in hardware security.
We target five GNN-based techniques for four crucial classes of problems in hardware security: IP piracy, detecting/localizing HTs, reverse engineering, and hardware obfuscation.
arXiv Detail & Related papers (2024-02-21T17:18:25Z)
- PreGIP: Watermarking the Pretraining of Graph Neural Networks for Deep Intellectual Property Protection [35.7109941139987]
Pretraining on Graph Neural Networks (GNNs) has shown great power in facilitating various downstream tasks.
Adversaries may illegally copy and deploy the pretrained GNN models for their downstream tasks.
We propose a novel framework named PreGIP to watermark the pretraining of a GNN encoder for IP protection while maintaining the high quality of the embedding space.
arXiv Detail & Related papers (2024-02-06T22:13:49Z)
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
- Stealing Maggie's Secrets -- On the Challenges of IP Theft Through FPGA Reverse Engineering [5.695727681053481]
We present a real-world case study on a Lattice iCE40 FPGA found inside the iPhone 7.
By reverse engineering the proprietary signal-processing algorithm implemented on Maggie, we generate novel insights into the actual efforts required to commit FPGA IP theft.
We then introduce general netlist reverse engineering techniques that drastically reduce the required manual effort.
arXiv Detail & Related papers (2023-12-11T08:17:04Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP) represented by the model parameters that should be protected and shared by the whole party rather than an individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Deep Intellectual Property Protection: A Survey [70.98782484559408]
Deep Neural Networks (DNNs) have made revolutionary progress in recent years, and are widely used in various fields.
The goal of this paper is to provide a comprehensive survey of two mainstream DNN IP protection methods: deep watermarking and deep fingerprinting.
arXiv Detail & Related papers (2023-04-28T03:34:43Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of Deep Neural Networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Passport-aware Normalization for Deep Model Protection [122.61289882357022]
We propose a new passport-aware normalization formulation for deep learning models.
It only needs to add another passport-aware branch for IP protection.
It is demonstrated to be robust not only to common attack techniques like fine-tuning and model compression, but also to ambiguity attacks.
arXiv Detail & Related papers (2020-10-29T17:57:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.