Reverse Engineering of Generative Models: Inferring Model
Hyperparameters from Generated Images
- URL: http://arxiv.org/abs/2106.07873v3
- Date: Sun, 30 Jul 2023 00:48:21 GMT
- Title: Reverse Engineering of Generative Models: Inferring Model
Hyperparameters from Generated Images
- Authors: Vishal Asnani, Xi Yin, Tal Hassner, Xiaoming Liu
- Abstract summary: State-of-the-art (SOTA) Generative Models (GMs) can synthesize photo-realistic images that are hard for humans to distinguish from genuine photos.
We propose reverse engineering of GMs to infer model hyperparameters from the images generated by these models.
We show that our fingerprint estimation can be leveraged for deepfake detection and image attribution.
- Score: 36.08924910193875
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: State-of-the-art (SOTA) Generative Models (GMs) can synthesize
photo-realistic images that are hard for humans to distinguish from genuine
photos. Identifying and understanding manipulated media are crucial to mitigating
the social concerns over the potential misuse of GMs. We propose to perform
reverse engineering of GMs to infer model hyperparameters from the images
generated by these models. We define a novel problem, "model parsing", as
estimating GM network architectures and training loss functions by examining
their generated images -- a task seemingly impossible for human beings. To
tackle this problem, we propose a framework with two components: a Fingerprint
Estimation Network (FEN), which estimates a GM fingerprint from a generated
image by training with four constraints to encourage the fingerprint to have
desired properties, and a Parsing Network (PN), which predicts network
architecture and loss functions from the estimated fingerprints. To evaluate
our approach, we collect a fake image dataset with 100K images generated by
116 different GMs. Extensive experiments show encouraging results in parsing
the hyperparameters of the unseen models. Finally, our fingerprint estimation
can be leveraged for deepfake detection and image attribution, as we show by
reporting SOTA results on both the deepfake detection (Celeb-DF) and image
attribution benchmarks.
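To make the two-component design concrete, here is a minimal PyTorch sketch of a fingerprint estimator feeding a parsing head. The class names, layer sizes, number of architecture hyperparameters (15), number of loss types (10), and the omission of the four fingerprint constraints are all simplifying assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: layer sizes, head dimensions, and class names are
# assumptions; the paper's FEN/PN architectures and the four fingerprint
# constraints are not reproduced here.
import torch
import torch.nn as nn

class FingerprintEstimationNet(nn.Module):
    """FEN: maps a generated image to an image-sized fingerprint (residual)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, image):               # (B, 3, H, W) -> (B, 3, H, W)
        return self.body(image)

class ParsingNet(nn.Module):
    """PN: predicts architecture hyperparameters and loss-function types
    from an estimated fingerprint."""
    def __init__(self, n_arch_params=15, n_loss_types=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.arch_head = nn.Linear(64, n_arch_params)  # architecture hyperparameters
        self.loss_head = nn.Linear(64, n_loss_types)   # multi-label loss prediction

    def forward(self, fingerprint):
        feat = self.encoder(fingerprint)
        return self.arch_head(feat), torch.sigmoid(self.loss_head(feat))

# Usage: estimate a fingerprint, then parse hyperparameters from it.
fen, pn = FingerprintEstimationNet(), ParsingNet()
fake_image = torch.randn(4, 3, 128, 128)   # stand-in for generated images
arch_pred, loss_pred = pn(fen(fake_image))
```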
Related papers
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
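A toy illustration of the "surprise" measurement only: ZED uses a learned lossless model of real images, whereas here a naive causal pixel predictor with a Laplacian residual model stands in for it, and the decision threshold is arbitrary.

```python
# Hedged illustration of measuring how "surprising" an image is relative to
# real-image statistics; not ZED's actual model.
import numpy as np

def prediction_residuals(img):
    """Residuals of a naive causal predictor: each pixel predicted by the
    mean of its left and top neighbours (grayscale image in [0, 1])."""
    pred = 0.5 * (np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1))
    return (img - pred)[1:, 1:]                      # drop wrap-around border

def fit_real_scale(real_images):
    """Fit the scale of a Laplacian residual model on real images."""
    res = np.concatenate([prediction_residuals(x).ravel() for x in real_images])
    return np.mean(np.abs(res)) + 1e-8

def surprise(img, scale):
    """Mean negative log-likelihood (nats/pixel) under the Laplacian model:
    higher = more surprising relative to real-image statistics."""
    r = prediction_residuals(img)
    return np.mean(np.abs(r) / scale + np.log(2.0 * scale))

# Usage: flag an image whose surprise deviates strongly from real images.
rng = np.random.default_rng(0)
real = [rng.random((64, 64)) for _ in range(8)]      # stand-ins for real photos
scale = fit_real_scale(real)
test = rng.random((64, 64))
is_suspect = abs(surprise(test, scale) - surprise(real[0], scale)) > 1.0
```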
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images [13.089550724738436]
Diffusion models (DMs) have revolutionized image generation, producing high-quality images with applications spanning various fields.
Their ability to create hyper-realistic images poses significant challenges in distinguishing between real and synthetic content.
This work introduces a robust detection framework that integrates image and text features extracted by the CLIP model with a Multilayer Perceptron (MLP) classifier.
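A rough sketch of that pipeline, assuming the open-source open_clip package, a ViT-B-32 backbone, and an arbitrary MLP head with illustrative prompts; the paper's exact backbone and training setup are not reproduced here.

```python
# Sketch under assumptions: CLIP image features + prompt text features feed a
# small MLP that outputs a real/fake logit. Downloads pretrained weights on
# first run.
import torch
import torch.nn as nn
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Text prompts whose CLIP embeddings are concatenated with the image features.
prompts = tokenizer(["a real photograph", "an AI-generated image"])

class ClipMlpDetector(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        # Image features + the two prompt embeddings -> binary real/fake score.
        self.mlp = nn.Sequential(
            nn.Linear(dim * 3, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, images, text_tokens):
        with torch.no_grad():
            img_f = model.encode_image(images)                # (B, 512)
            txt_f = model.encode_text(text_tokens).flatten()  # (2*512,)
        txt_f = txt_f.unsqueeze(0).expand(img_f.shape[0], -1)
        return self.mlp(torch.cat([img_f, txt_f], dim=1))     # logits

detector = ClipMlpDetector()
logits = detector(torch.randn(2, 3, 224, 224), prompts)       # real/fake logits
```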
arXiv Detail & Related papers (2024-04-19T14:30:41Z)
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- Robust Retraining-free GAN Fingerprinting via Personalized Normalization [21.63902009635896]
The proposed method can embed different fingerprints inside the GAN by just changing the input of the ParamGen Nets.
The performance of the proposed method in terms of robustness against both model-level and image-level attacks is superior to the state-of-the-art.
arXiv Detail & Related papers (2023-11-09T16:09:12Z)
- Exploring the Robustness of Human Parsers Towards Common Corruptions [99.89886010550836]
We construct three corruption robustness benchmarks, termed LIP-C, ATR-C, and Pascal-Person-Part-C, to assist us in evaluating the risk tolerance of human parsing models.
Inspired by the data augmentation strategy, we propose a novel heterogeneous augmentation-enhanced mechanism to bolster robustness under commonly corrupted conditions.
arXiv Detail & Related papers (2023-09-02T13:32:14Z)
- WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models [32.29120988096214]
This paper introduces a novel approach to model fingerprinting that assigns responsibility for the generated images.
Our method modifies generative models based on each user's unique digital fingerprint, imprinting a unique identifier onto the resultant content that can be traced back to the user.
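One way to picture weight modulation, as a hedged sketch: a small mapping layer turns a user's fingerprint vector into per-channel scales applied to a generator convolution, so the same base weights imprint a user-specific identifier. WOUAF's actual architecture and its fingerprint decoder are not reproduced; the dimensions here are arbitrary.

```python
# Minimal sketch of fingerprint-conditioned weight modulation; dimensions and
# the mapping layer are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedConv(nn.Module):
    def __init__(self, fingerprint_dim=48, in_ch=64, out_ch=64):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.02)
        # Maps the user's fingerprint to a per-output-channel scale.
        self.mapper = nn.Linear(fingerprint_dim, out_ch)

    def forward(self, x, fingerprint):
        scale = 1.0 + self.mapper(fingerprint)        # (out_ch,)
        w = self.weight * scale.view(-1, 1, 1, 1)     # modulate generator weights
        return F.conv2d(x, w, padding=1)

# Each user gets a unique fingerprint; the same generator weights, modulated
# differently, leave a traceable identifier in the output.
layer = ModulatedConv()
user_fp = torch.randint(0, 2, (48,)).float()
features = torch.randn(1, 64, 32, 32)
out = layer(features, user_fp)
```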
arXiv Detail & Related papers (2023-06-07T19:44:14Z)
- Comparative analysis of segmentation and generative models for fingerprint retrieval task [0.0]
Fingerprints deteriorate in quality if the fingers are dirty, wet, or injured, or when sensors malfunction.
This paper proposes a deep learning approach to address these issues using generative (GAN) and segmentation models.
In our research, the U-Net model performed better than the GAN networks.
arXiv Detail & Related papers (2022-09-13T17:21:14Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models, such as Inception Score (IS) and Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
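Since FID is computed from feature statistics, any pixel perturbation that shifts the extracted features shifts the score. The sketch below illustrates that sensitivity with random vectors standing in for Inception-v3 features; it is not the paper's attack.

```python
# FID from feature statistics, plus a small additive feature shift (the effect
# an adversarial pixel perturbation aims to produce). Random stand-in features.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b).real
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 16))
fake_feats = rng.normal(size=(500, 16))
print("clean FID:    ", fid(real_feats, fake_feats))
print("perturbed FID:", fid(real_feats, fake_feats + 0.2))  # score moves markedly
```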
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- Self-supervised GAN Detector [10.963740942220168]
Generative models can be abused for malicious purposes, such as fraud, defamation, and fake news.
We propose a novel framework to distinguish unseen generated images that fall outside of the training settings.
Our proposed method is built around an artificial fingerprint generator that reconstructs high-quality artificial fingerprints of GAN images.
arXiv Detail & Related papers (2021-11-12T06:19:04Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
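A toy spread-spectrum version of the rooting idea, under assumptions: the actual work trains an encoder/decoder network, whereas here one fixed pseudo-random pattern per fingerprint bit is added at low amplitude to each training image, so a model trained on the marked data would inherit a decodable identifier.

```python
# Hedged sketch: embed a binary fingerprint into training images as a
# low-amplitude pattern and recover it by correlation. Not the paper's method.
import numpy as np

def bit_patterns(n_bits, shape, seed=0):
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(n_bits, *shape))

def embed(image, bits, patterns, strength=0.01):
    """Add one low-amplitude pattern per fingerprint bit to a training image."""
    signs = 2.0 * np.asarray(bits) - 1.0                 # {0,1} -> {-1,+1}
    return image + strength * np.tensordot(signs, patterns, axes=1)

def decode(residual, patterns):
    """Recover bits by correlating a residual with each pattern."""
    corr = np.tensordot(patterns, residual, axes=([1, 2], [0, 1]))
    return (corr > 0).astype(int)

patterns = bit_patterns(n_bits=32, shape=(64, 64))
fingerprint = np.random.randint(0, 2, 32)
training_image = np.random.rand(64, 64)                  # stand-in image
marked = embed(training_image, fingerprint, patterns)
# In the paper, a trained decoder reads the fingerprint directly from generated
# images; here we only check the embed/decode round trip on the residual.
recovered = decode(marked - training_image, patterns)
assert (recovered == fingerprint).all()
```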
arXiv Detail & Related papers (2020-07-16T16:49:55Z)