Attributing Image Generative Models using Latent Fingerprints
- URL: http://arxiv.org/abs/2304.09752v2
- Date: Fri, 26 May 2023 23:25:19 GMT
- Title: Attributing Image Generative Models using Latent Fingerprints
- Authors: Guangyu Nie, Changhoon Kim, Yezhou Yang, Yi Ren
- Abstract summary: Generative models have enabled the creation of content that is indistinguishable from content captured from nature.
One potential risk mitigation strategy is to attribute generative models via fingerprinting.
This paper investigates the use of latent semantic dimensions as fingerprints.
- Score: 33.037718660732544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models have enabled the creation of content that is
indistinguishable from content captured from nature. Open-source development of such
models has raised concerns about the risks of their misuse for malicious purposes.
One potential risk mitigation strategy is to attribute generative models via
fingerprinting. Current fingerprinting methods exhibit a significant tradeoff
between robust attribution accuracy and generation quality, and lack design
principles for improving this tradeoff. This paper investigates the use of latent
semantic dimensions as fingerprints, from which we can analyze the effects of
design variables, including the choice of fingerprinting dimensions, strength,
and capacity, on the accuracy-quality tradeoff. Compared with the previous SOTA,
our method requires minimal computation and is more applicable to large-scale
models. We use StyleGAN2 and the latent diffusion model to demonstrate the
efficacy of our method.
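To make the idea concrete, here is a minimal sketch of latent-dimension fingerprinting under toy assumptions (not the authors' implementation): a random linear map stands in for the generator, random orthonormal directions stand in for semantic latent directions, and attribution inverts the toy generator rather than using a learned decoder on images. All names and parameter values below are illustrative; the sketch only shows where the design variables from the abstract (choice of fingerprinting dimensions, strength, and capacity) enter the embedding and attribution steps.

```python
# Toy sketch of latent fingerprinting; assumes a linear stand-in generator.
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 64    # size of the latent space (illustrative)
num_fp_dims = 8    # capacity: number of dimensions carrying fingerprint bits
strength = 0.5     # fingerprint strength: magnitude of the latent shift

# Orthonormal directions standing in for semantic directions of a real latent space.
directions, _ = np.linalg.qr(rng.standard_normal((latent_dim, num_fp_dims)))

# Toy generator: a fixed linear map from latent space to "image" space.
G = rng.standard_normal((256, latent_dim))

def embed_fingerprint(z, bits):
    """Shift latent code z along the chosen directions according to the bit string."""
    signs = 2 * np.asarray(bits) - 1                 # map {0,1} -> {-1,+1}
    return z + strength * directions @ signs

def attribute(image, z_original):
    """Recover fingerprint bits from a sample (toy: invert the linear generator)."""
    z_hat, *_ = np.linalg.lstsq(G, image, rcond=None)
    shift = z_hat - z_original
    return (directions.T @ shift > 0).astype(int)

# Assign a user a random fingerprint, embed it, generate, and read it back.
user_bits = rng.integers(0, 2, num_fp_dims)
z = rng.standard_normal(latent_dim)
z_fp = embed_fingerprint(z, user_bits)
image = G @ z_fp                                     # toy generation step

print("embedded: ", user_bits)
print("recovered:", attribute(image, z))
```

In the paper's actual setting, the generator is StyleGAN2 or a latent diffusion model and the fingerprint is recovered from pixels alone by a decoder; increasing the strength or the number of fingerprinted dimensions improves attribution accuracy at some cost to generation quality, which is the tradeoff the design variables control.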
Related papers
- MergePrint: Robust Fingerprinting against Merging Large Language Models [1.9249287163937978]
We propose a novel fingerprinting method, MergePrint, which embeds robust fingerprints designed to preserve ownership claims even after model merging.
By optimizing against a pseudo-merged model, MergePrint generates fingerprints that remain detectable after merging.
This approach provides a practical fingerprinting strategy for asserting ownership in cases of misappropriation through model merging.
arXiv Detail & Related papers (2024-10-11T08:00:49Z)
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which enhances the traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behavior in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, turning a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models [32.29120988096214]
This paper introduces a novel approach to model fingerprinting that assigns responsibility for generated images.
Our method modifies generative models based on each user's unique digital fingerprint, imprinting a unique identifier onto the resulting content that can be traced back to the user.
arXiv Detail & Related papers (2023-06-07T19:44:14Z)
- NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks [4.536351805614037]
We propose NaturalFinger, which generates natural fingerprints with generative adversarial networks (GANs).
Our approach achieves a 0.91 ARUC value on the FingerBench dataset (154 models), exceeding the optimal baseline (MetaV) by over 17%.
arXiv Detail & Related papers (2023-05-29T03:17:03Z)
- Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z)
- Learning Robust Representations Of Generative Models Using Set-Based Artificial Fingerprints [14.191129493685212]
Existing methods approximate the distance between the models via their sample distributions.
We consider unique traces (a.k.a. "artificial fingerprints") as representations of generative models.
We propose a new learning method based on set-encoding and contrastive training.
arXiv Detail & Related papers (2022-06-04T23:20:07Z)
- Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the intellectual property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z)
- High-Robustness, Low-Transferability Fingerprinting of Neural Networks [78.2527498858308]
This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks.
These examples remain robust under pruning of the base model while exhibiting low transferability to unassociated models.
arXiv Detail & Related papers (2021-05-14T21:48:23Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns about how this technology can be misused to spoof sensors, generate deepfakes, and enable misinformation at scale.
Our work enables responsible disclosure of such state-of-the-art generative models, allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to breakthroughs in generative adversarial networks (GANs).
Yet the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution to deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)