Deep Intellectual Property Protection: A Survey
- URL: http://arxiv.org/abs/2304.14613v2
- Date: Sat, 17 Jun 2023 09:08:18 GMT
- Title: Deep Intellectual Property Protection: A Survey
- Authors: Yuchen Sun, Tianpeng Liu, Panhe Hu, Qing Liao, Shaojing Fu, Nenghai
Yu, Deke Guo, Yongxiang Liu, Li Liu
- Abstract summary: Deep Neural Networks (DNNs) have made revolutionary progress in recent years, and are widely used in various fields.
The goal of this paper is to provide a comprehensive survey of two mainstream DNN IP protection methods: deep watermarking and deep fingerprinting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs), from AlexNet to ResNet to ChatGPT, have made
revolutionary progress in recent years, and are widely used in various fields.
The high performance of DNNs requires a huge amount of high-quality data,
expensive computing hardware, and excellent DNN architectures that are costly
to obtain. Therefore, trained DNNs are becoming valuable assets and must be
considered the Intellectual Property (IP) of the legitimate owner who created
them, in order to protect trained DNN models from illegal reproduction, theft,
redistribution, and abuse. Although Deep IP protection is a newly emerging and
interdisciplinary field, numerous DNN model IP protection methods have been
proposed. Given this period of rapid evolution, the goal of this paper is to
provide a comprehensive survey of two mainstream DNN IP protection methods:
deep watermarking and deep fingerprinting, with a proposed taxonomy. More than
190 research contributions are included in this survey, covering many aspects
of Deep IP Protection: problem definition, main threats and challenges, merits
and demerits of deep watermarking and deep fingerprinting methods, evaluation
metrics, and performance discussion. We finish the survey by identifying
promising directions for future research.
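To ground the first of the two pillars, consider white-box deep watermarking, where a secret bit string is embedded into a host layer's weights via a training regularizer (in the spirit of early weight-embedding schemes). The sketch below is a minimal illustration, not the survey's prescribed method; the sizes, projection matrix X, bit string b, and weight lam are all assumptions made here.

```python
import torch
import torch.nn.functional as F

# Secret watermarking material, kept private by the owner.
# Sizes are illustrative: k watermark bits, d flattened host-layer weights.
k, d = 64, 3 * 3 * 3 * 16
X = torch.randn(k, d)                    # secret projection matrix
b = torch.randint(0, 2, (k,)).float()    # secret bit string in {0, 1}

def watermark_penalty(weights, lam=0.01):
    """Regularizer nudging a secret projection of the host weights
    toward the owner's bit string; added to the usual task loss."""
    soft_bits = torch.sigmoid(X @ weights.flatten())
    return lam * F.binary_cross_entropy(soft_bits, b)

def extract_watermark(weights):
    """Owner-side extraction: project the weights and threshold at zero."""
    return (X @ weights.flatten() > 0).int()

# During training: total_loss = task_loss + watermark_penalty(layer.weight)
# Verification: bit-error rate between extract_watermark(...) and b.
```

Deep fingerprinting, by contrast, embeds nothing: it derives an identifier from an unmodified model's intrinsic behavior (for example, its decision boundary) and matches that identifier against suspect models.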
Related papers
- Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks [3.4673556247932225]
Spiking Neural Networks (SNNs) are gaining traction as neuromorphic computing solutions are deployed.
Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication, or misuse.
We pioneer an investigation into adapting two prominent watermarking approaches to SNNs, namely fingerprint-based and backdoor-based mechanisms.
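For reference, backdoor-based watermark verification generally reduces to a trigger-set accuracy test. The sketch below is a generic illustration of that step, not this paper's SNN-specific procedure; the function names and the 0.9 threshold are assumptions.

```python
import numpy as np

def verify_trigger_set(predict, trigger_inputs, trigger_labels, threshold=0.9):
    """Backdoor-style ownership check: a model carrying the watermark should
    reproduce the owner's secret labels on the secret trigger inputs."""
    preds = np.array([predict(x) for x in trigger_inputs])
    match_rate = float(np.mean(preds == np.asarray(trigger_labels)))
    return match_rate >= threshold, match_rate
```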
arXiv Detail & Related papers (2024-05-07T06:42:30Z)
- DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification [46.47446944218544]
This paper introduces DNNShield, a novel approach for protecting Deep Neural Networks (DNNs).
DNNShield embeds unique identifiers within the model architecture using specialized protection layers.
We validate the effectiveness and efficiency of DNNShield through extensive evaluations across three datasets and four model architectures.
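The summary does not specify how DNNShield's protection layers are constructed. Purely as a hypothetical illustration of the general idea of architecture-embedded identifiers, one could imagine a near-identity layer whose fixed perturbation pattern encodes an owner ID; the class below, its eps parameter, and the encoding are invented here and are not the paper's design.

```python
import torch
import torch.nn as nn

class IdentifierLayer(nn.Module):
    """Hypothetical near-identity layer: each feature is scaled by 1 +/- eps,
    and the sign pattern spells out the owner's identifier bits."""
    def __init__(self, bits, eps=1e-3):
        super().__init__()
        signs = torch.tensor([1.0 if b else -1.0 for b in bits])
        self.register_buffer("scale", 1.0 + eps * signs)  # shape: (len(bits),)

    def forward(self, x):                # x: (batch, len(bits))
        return x * self.scale            # numerically close to identity

def read_identifier(layer: IdentifierLayer) -> list:
    """Recover the embedded bits by inspecting the layer's parameters."""
    return (layer.scale > 1.0).int().tolist()
```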
arXiv Detail & Related papers (2024-03-11T10:27:36Z)
- Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have achieved great success in many application areas and brought profound changes to our society.
How to protect the intellectual property (IP) of DNNs against infringement is one of the most important yet very challenging topics.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
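The exact PMI construction is beyond this summary, but membership-inference-style ownership tests share a common shape: a model derived from the protected one behaves measurably differently on the owner's training samples than on fresh, statistically similar samples. A generic sketch with invented names, not the paper's algorithm:

```python
import numpy as np

def membership_gap(loss_on, owned_samples, fresh_samples):
    """Compare the suspect model's average loss on the owner's training
    samples vs. held-out look-alikes; a large positive gap suggests the
    suspect model was derived from the protected one."""
    owned = np.mean([loss_on(x, y) for x, y in owned_samples])
    fresh = np.mean([loss_on(x, y) for x, y in fresh_samples])
    return fresh - owned
```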
arXiv Detail & Related papers (2022-09-09T04:06:29Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs from the computational perspectives of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose HufuNet, a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernel cutoff/supplement, functionality-equivalent attacks, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z)
- Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection [53.40245698216239]
DSN (Deep Serial Number) is a watermarking algorithm designed specifically for deep neural networks (DNNs).
Inspired by the role of serial numbers in safeguarding conventional software IP, the authors propose the first implementation of serial-number embedding within DNNs.
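The summary gives no implementation detail. As a toy illustration of the input-output contract such a scheme aims for, consider a hypothetical gate; note that DSN itself trains this dependence into the network, whereas a wrapper like the one below is trivially removable and serves only to show the intended behavior.

```python
import hashlib
import numpy as np

def gated_predict(predict, serial, valid_serial_hash, x, num_classes=10):
    """Toy serial-number gate: useful predictions only with a valid serial.
    A real scheme must embed this dependence in the weights themselves."""
    if hashlib.sha256(serial.encode()).hexdigest() != valid_serial_hash:
        return int(np.random.randint(num_classes))  # uninformative output
    return predict(x)
```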
arXiv Detail & Related papers (2020-11-17T21:42:40Z)
- DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs [2.3651168422805027]
"DeepPeep" is a two-stage attack methodology to reverse-engineer the architecture of building blocks in compact DNNs.
"Secure MobileNet-V1" provides a significant reduction in inference latency and improvement in predictive performance.
arXiv Detail & Related papers (2020-07-30T06:01:41Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Industrial Scale Privacy Preserving Deep Neural Network [23.690146141150407]
We propose an industrial-scale, privacy-preserving neural network learning paradigm, which is secure against semi-honest adversaries.
We conduct experiments on a real-world fraud detection dataset and a financial distress prediction dataset.
arXiv Detail & Related papers (2020-03-11T10:15:37Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
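As a concrete instance of the input-perturbation attacks such surveys categorize (shown here in the image domain, which GNN attacks conceptually inherit), the classic Fast Gradient Sign Method (FGSM) takes one gradient step on the input in the direction that increases the loss. A minimal PyTorch sketch; the eps budget is illustrative.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """FGSM: perturb the input along the sign of the loss gradient,
    within an L-infinity budget of eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```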
arXiv Detail & Related papers (2020-03-02T04:32:38Z)