Learning Robust Representations Of Generative Models Using Set-Based
Artificial Fingerprints
- URL: http://arxiv.org/abs/2206.02067v1
- Date: Sat, 4 Jun 2022 23:20:07 GMT
- Title: Learning Robust Representations Of Generative Models Using Set-Based
Artificial Fingerprints
- Authors: Hae Jin Song, Wael AbdAlmageed
- Abstract summary: Existing methods approximate the distance between the models via their sample distributions.
We consider unique traces (a.k.a. "artificial fingerprints") as representations of generative models.
We propose a new learning method based on set-encoding and contrastive training.
- Score: 14.191129493685212
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With recent progress in deep generative models, the problem of identifying
synthetic data and comparing their underlying generative processes has become
an imperative task for various reasons, including fighting visual
misinformation and source attribution. Existing methods often approximate the
distance between the models via their sample distributions. In this paper, we
approach the problem of fingerprinting generative models by learning
representations that encode the residual artifacts left by the generative
models as unique signals that identify the source models. We consider these
unique traces (a.k.a. "artificial fingerprints") as representations of
generative models, and demonstrate their usefulness in both the discriminative
task of source attribution and the unsupervised task of defining a similarity
between the underlying models. We first extend the existing studies on
fingerprints of GANs to four representative classes of generative models (VAEs,
Flows, GANs and score-based models), and demonstrate their existence and
attributability. We then improve the stability and attributability of the
fingerprints by proposing a new learning method based on set-encoding and
contrastive training. Our set-encoder, unlike existing methods that operate on
individual images, learns fingerprints from a *set* of images. We
demonstrate improvements in the stability and attributability through
comparisons to state-of-the-art fingerprint methods and ablation studies.
Further, our method employs contrastive training to learn an implicit
similarity between models. We discover latent families of generative models
using this metric in a standard hierarchical clustering algorithm.
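A minimal sketch of the set-based idea described above: a CNN extracts per-image features, a permutation-invariant set encoder pools them into one fingerprint per set, and a supervised contrastive loss pulls sets drawn from the same source model together. The mean-pooling architecture, loss form, and all sizes below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetFingerprintEncoder(nn.Module):
    """Encode a *set* of images from one generative model into a single
    fingerprint vector (DeepSets-style mean pooling, an assumption here)."""
    def __init__(self, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-image feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
        self.head = nn.Linear(dim, dim)                # post-pooling projection

    def forward(self, image_sets):                     # (B, S, 3, H, W)
        b, s = image_sets.shape[:2]
        feats = self.cnn(image_sets.flatten(0, 1))     # (B*S, dim)
        pooled = feats.view(b, s, -1).mean(dim=1)      # permutation-invariant pooling
        return F.normalize(self.head(pooled), dim=-1)  # unit-norm fingerprints

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Pull fingerprints of sets from the same source model together."""
    eye = torch.eye(len(z), dtype=torch.bool)
    sim = (z @ z.t() / temperature).masked_fill(eye, float('-inf'))
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -log_prob[pos].mean()

encoder = SetFingerprintEncoder()
sets = torch.randn(8, 16, 3, 64, 64)                   # 8 sets of 16 images each
labels = torch.randint(0, 4, (8,))                     # source-model IDs
loss = supervised_contrastive_loss(encoder(sets), labels)
loss.backward()
```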
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using counterfactual images generated under language guidance.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
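A schematic sketch of the loop the abstract above describes: probe a classifier with counterfactual edits, collect its failures, and fine-tune on them as augmented data. `edit_image` stands in for the paper's language-guided generator and is a placeholder, not an API from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def edit_image(images, instruction):
    """Stand-in for a language-guided counterfactual editor (an
    instruction-tuned diffusion model in the paper); here just a
    label-preserving brightness shift for illustration."""
    return (images + 0.2).clamp(0, 1)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))]

# 1) Identify weaknesses: counterfactuals the model misclassifies.
hard = []
model.eval()
with torch.no_grad():
    for images, labels in loader:
        cf = edit_image(images, "change the background")
        wrong = model(cf).argmax(dim=1) != labels
        if wrong.any():
            hard.append((cf[wrong], labels[wrong]))

# 2) Reinforce: fine-tune on the failure cases as an augmented dataset.
model.train()
for images, labels in hard:
    optimizer.zero_grad()
    F.cross_entropy(model(images), labels).backward()
    optimizer.step()
```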
- ManiFPT: Defining and Analyzing Fingerprints of Generative Models [16.710998621718193]
We formalize the definition of artifact and fingerprint in generative models.
We propose an algorithm for computing them in practice.
We study the structure of the fingerprints and observe that it is highly predictive of how different design choices affect the generative process.
arXiv Detail & Related papers (2024-02-16T01:58:35Z)
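Reading the above loosely (an artifact is a generated sample's deviation from the real-data manifold), a toy computation might look like the following, with the manifold projection approximated by a nearest real neighbor in feature space; that estimator choice is a simplification for illustration, not necessarily the paper's.

```python
import torch

def artifact(generated, real_bank):
    """Artifact of each generated sample: residual to its nearest
    real sample (a crude stand-in for projection onto the data manifold)."""
    d = torch.cdist(generated, real_bank)          # pairwise distances
    nearest = real_bank[d.argmin(dim=1)]           # approximate manifold projection
    return generated - nearest                     # the residual / artifact

def fingerprint(generated, real_bank):
    """Model fingerprint: a summary statistic of its samples' artifacts."""
    return artifact(generated, real_bank).mean(dim=0)

real = torch.randn(1000, 64)                       # features of real images
gen_a = torch.randn(200, 64) + 0.1                 # samples from model A
gen_b = torch.randn(200, 64) - 0.1                 # samples from model B
fp_a, fp_b = fingerprint(gen_a, real), fingerprint(gen_b, real)
print(torch.norm(fp_a - fp_b))                     # fingerprints differ by source
```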
- Model Synthesis for Zero-Shot Model Attribution [26.835046772924258]
Generative models are shaping various fields such as art, design, and human-computer interaction.
We propose a model synthesis technique, which generates numerous synthetic models mimicking the fingerprint patterns of real-world generative models.
Our experiments demonstrate that a fingerprint extractor trained solely on these synthetic models achieves impressive zero-shot generalization on a wide range of real-world generative models.
arXiv Detail & Related papers (2023-07-29T13:00:42Z)
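One way to picture the model-synthesis idea above: each "synthetic model" is a cheap frozen random transform that stamps a consistent, model-specific artifact onto images, giving unlimited labeled data for training a fingerprint extractor. The random-filter construction below is an illustrative assumption, not the paper's procedure.

```python
import torch
import torch.nn as nn

def make_synthetic_model(seed):
    """A 'synthetic generative model': a frozen random filter whose
    output carries a consistent, model-specific artifact."""
    g = torch.Generator().manual_seed(seed)
    kernel = torch.randn(3, 3, 3, 3, generator=g) * 0.1
    def model(images):
        return images + nn.functional.conv2d(images, kernel, padding=1)
    return model

# Build a pool of synthetic models and labeled data for the extractor.
models = [make_synthetic_model(s) for s in range(50)]
images = torch.rand(50, 4, 3, 32, 32)                # 4 clean images per model
data = torch.stack([m(x) for m, x in zip(models, images)])
labels = torch.arange(50).repeat_interleave(4)       # which model made which image

extractor = nn.Sequential(                           # fingerprint extractor
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 50),
)
loss = nn.functional.cross_entropy(extractor(data.flatten(0, 1)), labels)
loss.backward()
```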
- Diffusion Models Beat GANs on Image Classification [37.70821298392606]
Diffusion models have risen to prominence as a state-of-the-art method for image generation, denoising, inpainting, super-resolution, manipulation, etc.
We present our findings that the embeddings learned for the noise-prediction task are useful beyond it: they contain discriminative information and can also be leveraged for classification.
We find that with careful feature selection and pooling, diffusion models outperform comparable generative-discriminative methods for classification tasks.
arXiv Detail & Related papers (2023-07-17T17:59:40Z)
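The recipe sketched in this abstract, reading off intermediate U-Net activations at a chosen timestep and fitting a linear probe on pooled features, looks roughly like the following. `TinyUNet` is a stand-in for a real pretrained diffusion backbone; the block and timestep choices are the "careful feature selection" knobs.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Stand-in for a pretrained diffusion U-Net (noise predictor)."""
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(3, 64, 3, stride=2, padding=1)  # block to probe
        self.up = nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1)
    def forward(self, x, t):
        return self.up(torch.relu(self.down(x)))

unet, feats = TinyUNet().eval(), {}
unet.down.register_forward_hook(lambda m, i, o: feats.update(h=o))

def diffusion_features(images, t=100):
    """Noise the input to timestep t, run the denoiser, grab mid activations."""
    noisy = images + 0.1 * torch.randn_like(images)   # schematic forward noising
    with torch.no_grad():
        unet(noisy, t)
    return feats["h"].mean(dim=(2, 3))                # global average pooling

probe = nn.Linear(64, 10)                             # linear classifier on features
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(probe(diffusion_features(x)), y)
loss.backward()
```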
- MAUVE Scores for Generative Models: Theory and Practice [95.86006777961182]
We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images.
We find that MAUVE can quantify the gaps between the distributions of human-written text and those of modern neural language models.
We demonstrate in the vision domain that MAUVE can identify known properties of generated images on par with or better than existing metrics.
arXiv Detail & Related papers (2022-12-30T07:37:40Z)
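A self-contained miniature of the MAUVE computation on already-quantized samples: both sample sets are reduced to histograms over shared bins, a divergence frontier is traced through their mixtures, and the score is the area under that curve. Real MAUVE quantizes deep embeddings with k-means; the histogram shortcut and the constant c below follow the paper's spirit, not its exact pipeline.

```python
import numpy as np

def kl(p, q):
    """KL(p || q) over the bins where p > 0 (q > 0 there by construction)."""
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def mauve_score(p, q, c=5.0):
    """Area under the divergence frontier traced by mixtures of p and q."""
    xs, ys = [0.0], [1.0]                            # anchor one end of the frontier
    for lam in np.linspace(0.99, 0.01, 99):
        r = lam * p + (1 - lam) * q                  # mixture distribution
        xs.append(float(np.exp(-c * kl(q, r))))      # one-sided divergences,
        ys.append(float(np.exp(-c * kl(p, r))))      # squashed into (0, 1]
    xs.append(1.0)
    ys.append(0.0)                                   # anchor the other end
    return sum((x1 - x0) * (y0 + y1) / 2             # trapezoidal area under curve
               for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]))

p = np.array([0.25, 0.25, 0.25, 0.25])               # histogram of "human" samples
q = np.array([0.40, 0.30, 0.20, 0.10])               # histogram of "model" samples
print(mauve_score(p, p))                             # identical distributions -> 1.0
print(mauve_score(p, q))                             # distribution gap -> below 1.0
```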
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model with a previously proposed model based on an ensemble of simpler neural networks that detect firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
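The comparison above boils down to two inference pipelines over the same frame: a single end-to-end classifier versus a segmentation network whose mask is post-processed into a detection decision. A schematic sketch; the tiny architectures and the pixel threshold are illustrative only.

```python
import torch
import torch.nn as nn

end_to_end = nn.Sequential(                          # monolithic frame classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
segmenter = nn.Sequential(                           # per-pixel weapon mask
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),
)

def detect_end_to_end(frame):
    return end_to_end(frame).argmax(dim=1) == 1      # weapon class = 1 (assumed)

def detect_via_segmentation(frame, min_pixels=50):
    mask = segmenter(frame) > 0.5                    # threshold the soft mask,
    return mask.flatten(1).sum(dim=1) > min_pixels   # then a simple decision rule

frame = torch.rand(1, 3, 128, 128)                   # one video frame
print(detect_end_to_end(frame), detect_via_segmentation(frame))
```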
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
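A compressed sketch of the two-stage recipe above: stage one is a penalty-based disentangling autoencoder (a beta-VAE-style KL weight below), and stage two trains a second decoder on the frozen code plus extra latent variables to recover the correlations stage one sacrificed. Shapes, the beta value, and the plain regression used for stage two are simplifications for illustration.

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 20)                             # stage-1 encoder: mean and logvar
dec1 = nn.Linear(10, 784)                            # stage-1 (blurry) decoder
dec2 = nn.Linear(10 + 5, 784)                        # stage-2 decoder: code + extras

def stage1_loss(x, beta=4.0):
    """beta-VAE style: a heavy KL penalty buys disentanglement, costs fidelity."""
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    return ((dec1(z) - x) ** 2).sum(dim=1).mean() + beta * kl

def stage2_loss(x):
    """Refine reconstruction from the frozen code plus latents that model the
    missing correlations (a full generative model in the paper; MSE here)."""
    with torch.no_grad():
        z = enc(x).chunk(2, dim=1)[0]                # frozen disentangled factors
    u = torch.randn(x.shape[0], 5)                   # extra latent variables
    return ((dec2(torch.cat([z, u], dim=1)) - x) ** 2).sum(dim=1).mean()

x = torch.rand(32, 784)
stage1_loss(x).backward()                            # train enc + dec1 first,
stage2_loss(x).backward()                            # then dec2 on top
```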
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
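The proactive idea above is to stamp a decodable fingerprint into the training data itself, so that models trained on it inherit the mark and their outputs can be attributed. A toy version using a fixed additive spread-spectrum carrier per bit; the paper learns its embedder and decoder end-to-end, so this fixed-pattern variant is for intuition only.

```python
import torch

torch.manual_seed(0)
N_BITS, SHAPE = 16, (3, 32, 32)
patterns = torch.randn(N_BITS, *SHAPE)               # one secret carrier per bit
patterns -= patterns.mean(dim=(1, 2, 3), keepdim=True)  # zero-mean carriers so
                                                        # image content adds no bias

def embed(images, bits, strength=0.02):
    """Add +pattern for a 1 bit, -pattern for a 0 bit, to every training image."""
    signs = bits.float() * 2 - 1                     # {0,1} -> {-1,+1}
    mark = (signs.view(-1, 1, 1, 1) * patterns).sum(dim=0)
    return (images + strength * mark).clamp(0, 1)

def decode(images):
    """Correlate against each carrier; the sign of the mean recovers each bit."""
    corr = (images.unsqueeze(1) * patterns).sum(dim=(2, 3, 4)).mean(dim=0)
    return corr > 0

key = torch.randint(0, 2, (N_BITS,), dtype=torch.bool)  # the dataset's fingerprint
marked = embed(torch.rand(64, *SHAPE), key)             # fingerprinted training set
print((decode(marked) == key).float().mean())           # bit recovery (~1.0 here)
```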
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information-Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns a disentangled representation using VAE-based models, and distills the learned representation, together with an additional nuisance variable, into a separate GAN-based generator for high-fidelity synthesis.
Despite its simplicity, we show that the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods while using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
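ID-GAN's division of labor, in schematic form: a frozen VAE-style encoder owns the disentangled code, a separate GAN generator consumes that code plus a nuisance noise vector, and an information-distillation term forces the generator's output to re-encode to the code it was given. All sizes and loss weights below are illustrative assumptions.

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 8)                              # disentangled encoder (pre-trained VAE)
enc.requires_grad_(False)                            # frozen; used only for distillation
gen = nn.Linear(8 + 32, 784)                         # generator: code + nuisance variable
disc = nn.Linear(784, 1)                             # discriminator (its update is omitted)

def generator_loss(batch_size=16, lam=1.0):
    c = torch.randn(batch_size, 8)                   # disentangled code to render
    z = torch.randn(batch_size, 32)                  # nuisance variable for fine detail
    fake = torch.sigmoid(gen(torch.cat([c, z], dim=1)))
    adv = nn.functional.softplus(-disc(fake)).mean() # non-saturating GAN loss
    distill = ((enc(fake) - c) ** 2).mean()          # output must re-encode to its code
    return adv + lam * distill

generator_loss().backward()                          # one generator update's gradients
```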