Open Set Synthetic Image Source Attribution
- URL: http://arxiv.org/abs/2308.11557v1
- Date: Tue, 22 Aug 2023 16:37:51 GMT
- Title: Open Set Synthetic Image Source Attribution
- Authors: Shengbang Fang, Tai D. Nguyen, Matthew C. Stamm
- Abstract summary: We propose a new metric learning-based approach to identify synthetic images.
Our technique works by learning transferrable embeddings capable of discriminating between generators.
We demonstrate our approach's ability to attribute the source of synthetic images in open-set scenarios.
- Score: 9.179652505898332
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI-generated images have become increasingly realistic and have garnered
significant public attention. While synthetic images are intriguing due to
their realism, they also pose an important misinformation threat. To address
this new threat, researchers have developed multiple algorithms to detect
synthetic images and identify their source generators. However, most existing
source attribution techniques are designed to operate in a closed-set scenario,
i.e. they can only be used to discriminate between known image generators. By
contrast, new image-generation techniques are rapidly emerging. To contend with
this, there is a great need for open-set source attribution techniques that can
identify when synthetic images have originated from new, unseen generators. To
address this problem, we propose a new metric learning-based approach. Our
technique works by learning transferrable embeddings capable of discriminating
between generators, even when they are not seen during training. An image is
first assigned to a candidate generator, then is accepted or rejected based on
its distance in the embedding space from known generators' learned reference
points. Importantly, we identify that initializing our source attribution
embedding network by pretraining it on image camera identification can improve
our embeddings' transferability. Through a series of experiments, we
demonstrate our approach's ability to attribute the source of synthetic images
in open-set scenarios.
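The accept/reject rule described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's code: the embedding network, reference-point values, and threshold below are all assumptions, and a single global threshold stands in for whatever per-generator rule the paper actually uses.

```python
# Hypothetical sketch of the open-set attribution rule: a query embedding is
# assigned to the nearest known generator's reference point, then accepted or
# rejected based on its distance to that point.
import numpy as np

def attribute_open_set(embedding, reference_points, threshold):
    """Return a known generator label, or "unknown" for unseen generators.

    embedding        : (d,) embedding of the query image
    reference_points : dict mapping generator name -> (d,) reference point
    threshold        : maximum accepted distance in embedding space
    """
    # Step 1: assign the image to the candidate generator whose learned
    # reference point is closest in the embedding space.
    names = list(reference_points)
    dists = [np.linalg.norm(embedding - reference_points[n]) for n in names]
    best = int(np.argmin(dists))
    # Step 2: accept the candidate only if the image lies close enough to it;
    # otherwise attribute the image to a new, unseen generator.
    return names[best] if dists[best] <= threshold else "unknown"

# Toy usage with 2-D embeddings and made-up generator names:
refs = {"stylegan": np.array([0.0, 0.0]), "dalle": np.array([5.0, 5.0])}
print(attribute_open_set(np.array([0.3, -0.2]), refs, threshold=1.0))
print(attribute_open_set(np.array([10.0, -9.0]), refs, threshold=1.0))
```

The thresholded distance test is what makes the classifier open-set: a closed-set classifier would always return the nearest known generator.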
Related papers
- Training-free Source Attribution of AI-generated Images via Resynthesis [15.553070492553298]
We present a new training-free one-shot attribution method based on image resynthesis.
We also introduce a new dataset for synthetic image attribution consisting of face images from commercial and open-source text-to-image generators.
arXiv Detail & Related papers (2025-10-28T10:39:04Z)
- Low Resource Reconstruction Attacks Through Benign Prompts [12.077836270816622]
We devise a new attack that requires low resources, assumes little to no access to the actual training set, and identifies seemingly benign prompts that lead to potentially risky image reconstructions.
This highlights the risk that images might be reconstructed unintentionally, even by an uninformed user.
arXiv Detail & Related papers (2025-07-10T17:32:26Z)
- Forensic Self-Descriptions Are All You Need for Zero-Shot Detection, Open-Set Source Attribution, and Clustering of AI-generated Images [8.167678851224121]
Traditional methods fail to generalize to unseen generators due to reliance on features specific to known sources during training.
We propose a novel approach that explicitly models forensic microstructures.
This self-description enables us to perform zero-shot detection of synthetic images, open-set source attribution of images, and clustering based on source without prior knowledge.
arXiv Detail & Related papers (2025-03-26T21:34:37Z)
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
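The contrastive training pattern behind an embedding space like CoDE's can be illustrated with a standard InfoNCE-style loss. This is a generic sketch under the assumption that CoDE follows the usual contrastive recipe; its specific global-local similarity terms are not reproduced here.

```python
# Minimal InfoNCE-style contrastive loss: matched (anchor, positive) pairs are
# pulled together while all other pairs in the batch act as negatives.
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """anchors, positives: (n, d) arrays of L2-normalized embedding pairs,
    where row i of `positives` is the positive for row i of `anchors`."""
    # Cosine similarity between every anchor and every positive, scaled by
    # the temperature hyperparameter.
    sims = anchors @ positives.T / temperature
    # Subtract each row's max for numerical stability, then take log-softmax.
    logits = sims - sims.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The loss is the mean negative log-probability of the true pairings
    # (the diagonal entries).
    return -np.mean(np.diag(log_probs))
```

With well-separated matched pairs the loss is near zero; mismatched pairings drive it up, which is what shapes the embedding space during training.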
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
- How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish the images generated by the inspected model and other images with a high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z)
- DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images [6.75641797020186]
Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics.
We propose a novel approach based on three blocks called Base Models.
The generalization features extracted from each block are then processed to discriminate the origin of the input image.
arXiv Detail & Related papers (2024-04-24T07:25:36Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
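The "start from real images" idea can be sketched as a simple one-class rule: model the dense region occupied by real-image features, and flag anything falling outside it. The centroid-plus-radius model and 95th-percentile cutoff below are assumptions chosen for illustration, not the paper's actual method.

```python
# Illustrative one-class sketch: fit a dense region to real-image features,
# then flag features that project outside it as generated.
import numpy as np

def fit_real_subspace(real_features):
    """Estimate a centroid and radius covering most real-image features.

    real_features : (n, d) array of features extracted from real images
    """
    center = real_features.mean(axis=0)
    dists = np.linalg.norm(real_features - center, axis=1)
    radius = np.percentile(dists, 95)  # cover 95% of the real features
    return center, radius

def is_generated(feature, center, radius):
    # Features outside the dense real-image region are flagged as generated,
    # regardless of which generative model produced them.
    return np.linalg.norm(feature - center) > radius

# Toy usage: a tight cluster of "real" features and one distant outlier.
rng = np.random.default_rng(0)
real = rng.normal(scale=0.1, size=(200, 4))
center, radius = fit_real_subspace(real)
print(is_generated(np.full(4, 5.0), center, radius))
```

Because only real images are modeled, nothing about any particular generator is assumed, which is the source of the claimed generalization.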
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Online Detection of AI-Generated Images [17.30253784649635]
We study generalization in this setting, training on N models and testing on the next (N+k)
We extend this approach to pixel prediction, demonstrating strong performance using automatically-generated inpainted data.
In addition, for settings where commercial models are not publicly available for automatic data generation, we evaluate if pixel detectors can be trained solely on whole synthetic images.
arXiv Detail & Related papers (2023-10-23T17:53:14Z)
- Generalizable Synthetic Image Detection via Language-guided Contrastive Learning [22.4158195581231]
Malevolent use of synthetic images, such as the dissemination of fake news or the creation of fake profiles, raises significant concerns regarding the authenticity of images.
We propose a simple yet very effective synthetic image detection method via language-guided contrastive learning and a new formulation of the detection problem.
It is shown that our proposed LanguAge-guided SynThEsis Detection (LASTED) model achieves much improved generalizability to unseen image generation models.
arXiv Detail & Related papers (2023-05-23T08:13:27Z)
- Exploring Incompatible Knowledge Transfer in Few-shot Image Generation [107.81232567861117]
Few-shot image generation learns to generate diverse and high-fidelity images from a target domain using a few reference samples.
Existing FSIG methods select, preserve and transfer prior knowledge from a source generator to learn the target generator.
We propose knowledge truncation, which is a complementary operation to knowledge preservation and is implemented by a lightweight pruning-based method.
arXiv Detail & Related papers (2023-04-15T14:57:15Z)
- On the detection of synthetic images generated by diffusion models [18.12766911229293]
Methods based on diffusion models (DM) have been gaining the spotlight.
DM enables the creation of text-based visual content.
Malicious users can generate and distribute fake media perfectly adapted to their attacks.
arXiv Detail & Related papers (2022-11-01T18:10:55Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation and content reconstruction, along with coarse-to-fine-grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Just Noticeable Difference for Machines to Generate Adversarial Images [0.34376560669160383]
The proposed method is based on a popular concept of experimental psychology called, Just Noticeable Difference.
The adversarial images generated in this study look more natural than the output of state-of-the-art adversarial image generators.
arXiv Detail & Related papers (2020-01-29T19:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.