Alteration-free and Model-agnostic Origin Attribution of Generated
Images
- URL: http://arxiv.org/abs/2305.18439v1
- Date: Mon, 29 May 2023 01:35:37 GMT
- Title: Alteration-free and Model-agnostic Origin Attribution of Generated
Images
- Authors: Zhenting Wang, Chen Chen, Yi Zeng, Lingjuan Lyu, Shiqing Ma
- Abstract summary: Concerns have emerged regarding potential misuse of image generation models.
It is necessary to analyze the origin of images by inferring if a specific image was generated by a particular model.
- Score: 28.34437698362946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, image generation models have attracted growing attention.
However, concerns have emerged regarding potential misuse and intellectual
property (IP) infringement associated with these models. Therefore, it is
necessary to analyze the origin of images by inferring whether a specific image
was generated by a particular model, i.e., origin attribution. Existing methods are
limited in their applicability to specific types of generative models and
require additional steps during training or generation. This restricts their
use with pre-trained models that lack these specific operations and may
compromise the quality of image generation. To overcome this problem, we first
develop an alteration-free and model-agnostic origin attribution method via
input reverse-engineering on image generation models, i.e., inverting the input
of a particular model for a specific image. Given a particular model, we first
analyze the differences in the hardness of reverse-engineering tasks for the
generated images of the given model and other images. Based on our analysis, we
propose a method that utilizes the reconstruction loss of reverse-engineering
to infer the origin. Our proposed method effectively distinguishes between
generated images from a specific generative model and other images, including
those generated by different models and real images.
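The attribution rule described above can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's implementation: the optimizer settings, latent dimensionality, and the `threshold` value are assumptions chosen for clarity, and `generator` stands in for any inspected model that maps a latent input to an image.

```python
# Sketch: origin attribution via input reverse-engineering.
# Assumption: the inspected model is a differentiable generator G(z) -> image.
# All hyperparameters here are illustrative, not from the paper.
import torch

def reconstruction_loss(generator, x, latent_dim=128, steps=300, lr=0.05):
    """Invert the generator's input for image x; return the final MSE loss."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), x)
        loss.backward()
        opt.step()
    return loss.item()

def belongs_to_model(generator, x, threshold=0.01):
    # Key observation from the abstract: images produced by the model itself
    # are easier to reverse-engineer, so their reconstruction loss is lower.
    # `threshold` would need calibration on held-out samples in practice.
    return reconstruction_loss(generator, x) < threshold
```

In this sketch the decision reduces to thresholding a single scalar, which is what makes the method alteration-free: nothing is embedded at training or generation time, and only black-box-style inversion of the inspected model is required.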
Related papers
- A Simple Approach to Unifying Diffusion-based Conditional Generation [63.389616350290595]
We introduce a simple, unified framework to handle diverse conditional generation tasks.
Our approach enables versatile capabilities via different inference-time sampling schemes.
Our model supports additional capabilities like non-spatially aligned and coarse conditioning.
arXiv Detail & Related papers (2024-10-15T09:41:43Z) - Avoiding Generative Model Writer's Block With Embedding Nudging [8.3196702956302]
We focus on latent diffusion image generative models and how one can prevent them from generating particular images while still generating similar images with limited overhead.
Our method successfully prevents the generation of memorized training images while maintaining comparable image quality and relevance to the unmodified model.
arXiv Detail & Related papers (2024-08-28T00:07:51Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation [52.509092010267665]
We introduce LlamaGen, a new family of image generation models that apply the original "next-token prediction" paradigm of large language models to the visual generation domain.
It is an affirmative answer to whether vanilla autoregressive models, e.g., Llama, without inductive biases on visual signals can achieve state-of-the-art image generation performance if scaled properly.
arXiv Detail & Related papers (2024-06-10T17:59:52Z) - How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish the images generated by the inspected model from other images with high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z) - Which Model Generated This Image? A Model-Agnostic Approach for Origin Attribution [23.974575820244944]
In this work, we study the origin attribution of generated images in a practical setting.
The goal is to check if a given image is generated by the source model.
We propose OCC-CLIP, a CLIP-based framework for few-shot one-class classification.
arXiv Detail & Related papers (2024-04-03T12:54:16Z) - DiffGAR: Model-Agnostic Restoration from Generative Artifacts Using Image-to-Image Diffusion Models [46.46919194633776]
This work aims to develop a plugin post-processing module for diverse generative models.
Unlike traditional degradation patterns, generative artifacts are non-linear and the transformation function is highly complex.
arXiv Detail & Related papers (2022-10-16T16:08:47Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.