NaturalFinger: Generating Natural Fingerprint with Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2305.17868v1
- Date: Mon, 29 May 2023 03:17:03 GMT
- Title: NaturalFinger: Generating Natural Fingerprint with Generative
Adversarial Networks
- Authors: Kang Yang, Kunhao Lai
- Abstract summary: We propose NaturalFinger, which generates natural fingerprints with generative adversarial networks (GANs).
Our approach achieves a 0.91 ARUC value on the FingerBench dataset (154 models), exceeding the optimal baseline (MetaV) by over 17%.
- Score: 4.536351805614037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network (DNN) models have become a critical asset of the model
owner, as training them requires a large amount of resources (e.g., labeled data).
Therefore, many fingerprinting schemes have been proposed to safeguard the
intellectual property (IP) of the model owner against model extraction and
illegal redistribution. However, previous schemes adopt unnatural images as the
fingerprint, such as adversarial examples and noisy images, which can be easily
perceived and rejected by the adversary. In this paper, we propose
NaturalFinger, which generates natural fingerprints with generative adversarial
networks (GANs). Moreover, NaturalFinger fingerprints the decision
difference areas rather than the decision boundary, which is more robust. The
application of GANs not only allows us to generate more imperceptible samples,
but also enables us to generate unrestricted samples to explore the decision
boundary. To demonstrate the effectiveness of our fingerprint approach, we
evaluate it against four model modification attacks, including
adversarial training, and two model extraction attacks. Experiments show that
our approach achieves a 0.91 ARUC value on the FingerBench dataset (154 models),
exceeding the optimal baseline (MetaV) by over 17%.
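The verification step common to fingerprinting schemes like this can be sketched as follows. This is a minimal illustration, not the paper's method: the fingerprint samples here are stand-in values rather than GAN outputs, the models are plain functions, and all names (`record_fingerprint`, `matching_rate`, `is_stolen`) and the 0.9 threshold are hypothetical.

```python
# Hypothetical sketch of fingerprint-based ownership verification.
# In NaturalFinger the fingerprint samples would be GAN-generated natural
# images; here they are stand-in scalars and "models" are plain functions.
from typing import Callable, List, Sequence


def record_fingerprint(victim: Callable[[float], int],
                       samples: Sequence[float]) -> List[int]:
    """Record the victim model's predictions on the fingerprint samples."""
    return [victim(x) for x in samples]


def matching_rate(suspect: Callable[[float], int],
                  samples: Sequence[float],
                  expected: Sequence[int]) -> float:
    """Fraction of fingerprint samples on which the suspect agrees."""
    hits = sum(suspect(x) == y for x, y in zip(samples, expected))
    return hits / len(samples)


def is_stolen(suspect: Callable[[float], int],
              samples: Sequence[float],
              expected: Sequence[int],
              threshold: float = 0.9) -> bool:
    """Flag the suspect as derived from the victim if agreement is high."""
    return matching_rate(suspect, samples, expected) >= threshold


# Toy demo: the "stolen" model copies the victim's decision rule,
# the independent model uses a different one.
victim = lambda x: int(x > 0.5)
stolen = lambda x: int(x > 0.5)
independent = lambda x: int(x > -0.5)

samples = [0.1, 0.4, 0.6, 0.9]
expected = record_fingerprint(victim, samples)
print(is_stolen(stolen, samples, expected))       # True
print(is_stolen(independent, samples, expected))  # False
```

The point of using natural (GAN-generated) samples in the paper is that an adversary inspecting the queries cannot easily tell fingerprint inputs apart from benign ones and so cannot selectively reject them before this matching-rate check runs.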
Related papers
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional Diffusion Model, results in the natural adversarial sample misclassified by the Model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
- Robust Retraining-free GAN Fingerprinting via Personalized Normalization [21.63902009635896]
The proposed method can embed different fingerprints inside the GAN by just changing the input of the ParamGen Nets.
The performance of the proposed method in terms of robustness against both model-level and image-level attacks is superior to the state-of-the-art.
arXiv Detail & Related papers (2023-11-09T16:09:12Z)
- Attributing Image Generative Models using Latent Fingerprints [33.037718660732544]
Generative models have enabled the creation of contents that are indistinguishable from those taken from nature.
One potential risk mitigation strategy is to attribute generative models via fingerprinting.
This paper investigates the use of latent semantic dimensions as fingerprints.
arXiv Detail & Related papers (2023-04-17T00:13:10Z)
- Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations [22.89321897726347]
We propose a novel and practical mechanism which enables the service provider to verify whether a suspect model is stolen from the victim model.
Our framework can detect model IP breaches with 99.99% confidence using only 20 fingerprints of the suspect model.
arXiv Detail & Related papers (2022-02-17T11:29:50Z)
- Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain models robust against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z)
- Fingerprinting Multi-exit Deep Neural Network Models via Inference Time [18.12409619358209]
We propose a novel approach to fingerprint multi-exit models via inference time rather than inference predictions.
Specifically, we design an effective method to generate a set of fingerprint samples to craft the inference process with a unique and robust inference time cost.
arXiv Detail & Related papers (2021-10-07T04:04:01Z)
- Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the Intellectual Property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z)
- Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images [36.08924910193875]
State-of-the-art (SOTA) Generative Models (GMs) can synthesize photo-realistic images that are hard for humans to distinguish from genuine photos.
We propose reverse engineering of GMs to infer model hyperparameters from the images generated by these models.
We show that our fingerprint estimation can be leveraged for deepfake detection and image attribution.
arXiv Detail & Related papers (2021-06-15T04:19:26Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns on how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables a responsible disclosure of such state-of-the-art generative models, that allows researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.