Robust Retraining-free GAN Fingerprinting via Personalized Normalization
- URL: http://arxiv.org/abs/2311.05478v1
- Date: Thu, 9 Nov 2023 16:09:12 GMT
- Title: Robust Retraining-free GAN Fingerprinting via Personalized Normalization
- Authors: Jianwei Fei, Zhihua Xia, Benedetta Tondi, and Mauro Barni
- Abstract summary: The proposed method can embed different fingerprints inside the GAN by just changing the input of the ParamGen Nets.
The performance of the proposed method in terms of robustness against both model-level and image-level attacks is superior to the state-of-the-art.
- Score: 21.63902009635896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been significant growth in the commercial
applications of generative models, licensed and distributed by model developers
to users, who in turn use them to offer services. In this scenario, there is a
need to track and identify the responsible user in the event of a license
violation or any other malicious usage. Although there are
methods enabling Generative Adversarial Networks (GANs) to include invisible
watermarks in the images they produce, generating a model with a different
watermark, referred to as a fingerprint, for each user is time- and
resource-consuming due to the need to retrain the model to include the desired
fingerprint. In this paper, we propose a retraining-free GAN fingerprinting
method that allows model developers to easily generate model copies with the
same functionality but different fingerprints. The generator is modified by
inserting additional Personalized Normalization (PN) layers whose parameters
(scaling and bias) are generated by two dedicated shallow networks (ParamGen
Nets) taking the fingerprint as input. A watermark decoder is trained
simultaneously to extract the fingerprint from the generated images. The
proposed method can embed different fingerprints inside the GAN by just
changing the input of the ParamGen Nets and performing a feedforward pass,
without finetuning or retraining. The performance of the proposed method in
terms of robustness against both model-level and image-level attacks is also
superior to the state-of-the-art.
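Since the abstract describes the architecture only in prose, the following minimal PyTorch sketch illustrates one plausible way the pieces fit together: Personalized Normalization layers draw their scale and bias from shallow ParamGen Nets conditioned on a fingerprint, and a decoder is trained alongside the generator (here with a binary cross-entropy loss) to recover the fingerprint from generated images. Class names, the fingerprint length, and all layer sizes are illustrative assumptions, not the paper's actual implementation.
```python
# Minimal sketch of the mechanism described in the abstract (assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParamGenNet(nn.Module):
    """Shallow network mapping a fingerprint to per-channel scale or bias."""
    def __init__(self, fp_bits: int, num_channels: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fp_bits, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_channels),
        )

    def forward(self, fingerprint: torch.Tensor) -> torch.Tensor:
        return self.net(fingerprint)

class PersonalizedNorm(nn.Module):
    """Normalizes features, then applies fingerprint-conditioned scale and bias."""
    def __init__(self, fp_bits: int, num_channels: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.scale_gen = ParamGenNet(fp_bits, num_channels)
        self.bias_gen = ParamGenNet(fp_bits, num_channels)

    def forward(self, x: torch.Tensor, fingerprint: torch.Tensor) -> torch.Tensor:
        # Embedding a different fingerprint only changes this feedforward pass;
        # no generator weights are retrained or fine-tuned.
        scale = self.scale_gen(fingerprint).unsqueeze(-1).unsqueeze(-1)
        bias = self.bias_gen(fingerprint).unsqueeze(-1).unsqueeze(-1)
        return self.norm(x) * (1.0 + scale) + bias

class FingerprintDecoder(nn.Module):
    """Toy decoder predicting fingerprint bits from a generated image."""
    def __init__(self, fp_bits: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, fp_bits)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image).flatten(1))

if __name__ == "__main__":
    fp_bits, channels = 64, 256
    pn = PersonalizedNorm(fp_bits, channels)
    decoder = FingerprintDecoder(fp_bits)

    feats = torch.randn(1, channels, 16, 16)          # intermediate GAN features
    fp = torch.randint(0, 2, (1, fp_bits)).float()    # one user's fingerprint
    modulated = pn(feats, fp)                         # fingerprinted features

    fake_image = torch.randn(1, 3, 64, 64)            # stand-in for generator output
    logits = decoder(fake_image)
    decode_loss = F.binary_cross_entropy_with_logits(logits, fp)
    print(modulated.shape, decode_loss.item())
```
Because the generator's own weights never depend on the fingerprint in this sketch, issuing a new model copy amounts to feeding a new fingerprint through the ParamGen Nets, which mirrors the retraining-free property claimed in the abstract.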
Related papers
- How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish images generated by the inspected model from other images with high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z)
- WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models [32.29120988096214]
This paper introduces a novel approach to model fingerprinting that assigns responsibility for the generated images.
Our method modifies generative models based on each user's unique digital fingerprint, imprinting a unique identifier onto the resultant content that can be traced back to the user.
arXiv Detail & Related papers (2023-06-07T19:44:14Z)
- Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
arXiv Detail & Related papers (2023-05-31T17:00:31Z)
- Securing Deep Generative Models with Universal Adversarial Signature [69.51685424016055]
Deep generative models pose threats to society due to their potential misuse.
In this paper, we propose to inject a universal adversarial signature into an arbitrary pre-trained generative model.
The proposed method is validated on the FFHQ and ImageNet datasets with various state-of-the-art generative models.
arXiv Detail & Related papers (2023-05-25T17:59:01Z)
- Comparative analysis of segmentation and generative models for fingerprint retrieval task [0.0]
Fingerprint images deteriorate in quality when fingers are dirty, wet, or injured, or when sensors malfunction.
This paper proposes a deep learning approach to address these issues using generative (GAN) and segmentation models.
In our research, the U-Net model performed better than the GAN networks.
arXiv Detail & Related papers (2022-09-13T17:21:14Z)
- FIGO: Enhanced Fingerprint Identification Approach Using GAN and One Shot Learning Techniques [0.0]
We propose a Fingerprint Identification approach based on Generative adversarial network and One-shot learning techniques.
First, we propose a Pix2Pix model to transform low-quality fingerprint images into higher-quality fingerprint images pixel by pixel in the fingerprint enhancement tier.
Second, we construct a fully automated fingerprint feature extraction model using a one-shot learning approach to differentiate each fingerprint from the others in the fingerprint identification process.
arXiv Detail & Related papers (2022-08-11T02:45:42Z)
- FDeblur-GAN: Fingerprint Deblurring using Generative Adversarial Network [22.146795282680667]
We propose a fingerprint deblurring model, FDeblur-GAN, based on conditional Generative Adversarial Networks (cGANs) and the multi-stage framework of the stack GAN.
We integrate two auxiliary sub-networks into the model for the deblurring task.
We achieve an accuracy of 95.18% on our fingerprint database for the task of matching deblurred and ground truth fingerprints.
arXiv Detail & Related papers (2021-06-21T18:37:20Z)
- Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the Intellectual Property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z)
- Learning to Disentangle GAN Fingerprint for Fake Image Attribution [25.140200292000046]
We propose a GAN Fingerprint Disentangling Network (GFD-Net) to disentangle the fingerprint from GAN-generated images and produce a content-irrelevant representation for fake image attribution.
A series of constraints are provided to guarantee the stability and discriminability of the fingerprint, which in turn helps content-irrelevant feature extraction.
Experiments show that our GFD-Net achieves superior fake image attribution performance in both closed-world and open-world testing.
arXiv Detail & Related papers (2021-06-16T12:50:40Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns on how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables a responsible disclosure of such state-of-the-art generative models, allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution to deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)