ManiFPT: Defining and Analyzing Fingerprints of Generative Models
- URL: http://arxiv.org/abs/2402.10401v2
- Date: Thu, 29 Feb 2024 08:02:27 GMT
- Title: ManiFPT: Defining and Analyzing Fingerprints of Generative Models
- Authors: Hae Jin Song, Mahyar Khayatkhoei, Wael AbdAlmageed
- Abstract summary: We formalize the definition of artifact and fingerprint in generative models.
We propose an algorithm for computing them in practice.
We study the structure of the fingerprints and observe that it is very predictive of the effect of different design choices on the generative process.
- Score: 16.710998621718193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have shown that generative models leave traces of their
underlying generative process on the generated samples, broadly referred to as
fingerprints of a generative model, and have studied their utility in detecting
synthetic images from real ones. However, the extent to which these
fingerprints can distinguish between various types of synthetic images and help
identify the underlying generative process remains under-explored. In
particular, the very definition of a fingerprint remains unclear, to our
knowledge. To that end, in this work, we formalize the definition of artifact
and fingerprint in generative models, propose an algorithm for computing them
in practice, and finally study its effectiveness in distinguishing a large
array of different generative models. We find that using our proposed
definition can significantly improve the performance on the task of identifying
the underlying generative process from samples (model attribution) compared to
existing methods. Additionally, we study the structure of the fingerprints, and
observe that it is very predictive of the effect of different design choices on
the generative process.
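To make the abstract's definitions concrete, the following is a minimal, hypothetical sketch of the idea: treat an artifact as a generated sample's deviation from the real-data manifold (approximated here by its nearest real sample in some embedding space), collect those deviations as the model's fingerprint, and train a simple classifier on them for model attribution. The feature space, nearest-neighbor approximation, and all names below are assumptions for illustration, not the paper's exact algorithm.

```python
# Hypothetical sketch of the artifact/fingerprint idea described in the abstract.
# Assumption: an "artifact" is a generated sample's deviation from the real-data
# manifold, approximated here by the nearest real sample in an embedding space;
# a "fingerprint" is the collection of such artifacts for one generative model.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

def artifacts(gen_feats: np.ndarray, real_feats: np.ndarray) -> np.ndarray:
    """Deviation of each generated feature from its nearest real feature."""
    nn = NearestNeighbors(n_neighbors=1).fit(real_feats)
    _, idx = nn.kneighbors(gen_feats)
    return gen_feats - real_feats[idx[:, 0]]

def attribute(gen_feats_per_model: dict, real_feats: np.ndarray):
    """Model attribution: label each artifact by the model that produced it
    and fit a simple classifier on top."""
    X = np.concatenate([artifacts(f, real_feats) for f in gen_feats_per_model.values()])
    y = np.concatenate([[name] * len(f) for name, f in gen_feats_per_model.items()])
    return LogisticRegression(max_iter=1000).fit(X, y)
```

In practice the features would come from a pretrained encoder, and the choice of embedding space largely determines how informative the resulting artifacts are.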
Related papers
- How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish between images generated by the inspected model and other images with high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z)
- RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model [93.8067369210696]
Text-to-image generation (TTI) refers to the use of models that process text input and generate high-fidelity images based on text descriptions.
Diffusion models are one prominent type of generative model used for image generation through the systematic introduction of noise over repeated steps.
In the era of large models, scaling up model size and integrating with large language models have further improved the performance of TTI models.
arXiv Detail & Related papers (2023-09-02T03:27:20Z)
- Model Synthesis for Zero-Shot Model Attribution [26.835046772924258]
Generative models are shaping various fields such as art, design, and human-computer interaction.
We propose a model synthesis technique, which generates numerous synthetic models mimicking the fingerprint patterns of real-world generative models.
Our experiments demonstrate that this fingerprint extractor, trained solely on synthetic models, achieves impressive zero-shot generalization on a wide range of real-world generative models.
arXiv Detail & Related papers (2023-07-29T13:00:42Z)
- Learning Robust Representations Of Generative Models Using Set-Based Artificial Fingerprints [14.191129493685212]
Existing methods approximate the distance between the models via their sample distributions.
We consider unique traces (a.k.a. "artificial fingerprints") as representations of generative models.
We propose a new learning method based on set-encoding and contrastive training; a minimal illustrative sketch of this idea appears after the list below.
arXiv Detail & Related papers (2022-06-04T23:20:07Z)
- Self-supervised GAN Detector [10.963740942220168]
Generative models can be abused for malicious purposes, such as fraud, defamation, and fake news.
We propose a novel framework to distinguish unseen generated images that fall outside the training settings.
Our proposed method comprises an artificial fingerprint generator that reconstructs high-quality artificial fingerprints of GAN images.
arXiv Detail & Related papers (2021-11-12T06:19:04Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns about how this technology can be misused to spoof sensors, generate deepfakes, and enable misinformation at scale.
Our work enables responsible disclosure of such state-of-the-art generative models, allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)
- Unsupervised Discovery of Disentangled Manifolds in GANs [74.24771216154105]
An interpretable generation process is beneficial to various image editing applications.
We propose a framework to discover interpretable directions in the latent space given arbitrary pre-trained generative adversarial networks.
arXiv Detail & Related papers (2020-11-24T02:18:08Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
- Reverse Engineering Configurations of Neural Text Generation Models [86.9479386959155]
The study of artifacts that emerge in machine generated text as a result of modeling choices is a nascent research area.
We conduct an extensive suite of diagnostic tests to observe whether modeling choices leave detectable artifacts in the text they generate.
Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone.
arXiv Detail & Related papers (2020-04-13T21:02:44Z)
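As referenced in the set-based artificial fingerprints entry above, here is a minimal, hypothetical sketch of set-encoding with contrastive training: encode a set of samples from one model into a single vector, then train so that two sets drawn from the same model land close together while a set from a different model is pushed apart. The architecture, pooling choice, and loss below are illustrative assumptions, not that paper's implementation.

```python
# Minimal sketch of set-based contrastive fingerprint learning (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetEncoder(nn.Module):
    """Encode a set of samples from one generative model into a single vector."""
    def __init__(self, in_dim: int, out_dim: int = 128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, x_set: torch.Tensor) -> torch.Tensor:
        # x_set: (set_size, in_dim) -> mean-pool per-sample embeddings (Deep Sets style)
        return self.phi(x_set).mean(dim=0)

def contrastive_loss(z_anchor, z_same_model, z_other_model, margin: float = 1.0):
    """Pull two sets from the same model together; push a set from another model away."""
    pos = F.pairwise_distance(z_anchor.unsqueeze(0), z_same_model.unsqueeze(0))
    neg = F.pairwise_distance(z_anchor.unsqueeze(0), z_other_model.unsqueeze(0))
    return F.relu(pos - neg + margin).mean()
```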