BOSC: A Backdoor-based Framework for Open Set Synthetic Image Attribution
- URL: http://arxiv.org/abs/2405.11491v1
- Date: Sun, 19 May 2024 09:17:43 GMT
- Title: BOSC: A Backdoor-based Framework for Open Set Synthetic Image Attribution
- Authors: Jun Wang, Benedetta Tondi, Mauro Barni
- Abstract summary: Synthetic image attribution addresses the problem of tracing back the origin of images produced by generative models.
We propose a framework for open set attribution of synthetic images, named BOSC, that relies on the concept of backdoor attacks.
- Score: 22.81354665006496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic image attribution addresses the problem of tracing back the origin of images produced by generative models. Extensive efforts have been made to explore unique representations of generative models and use them to attribute a synthetic image to the model that produced it. Most of the methods classify the models or the architectures among those in a closed set without considering the possibility that the system is fed with samples produced by unknown architectures. With the continuous progress of AI technology, new generative architectures continuously appear, thus driving the attention of researchers towards the development of tools capable of working in open-set scenarios. In this paper, we propose a framework for open set attribution of synthetic images, named BOSC (Backdoor-based Open Set Classification), that relies on the concept of backdoor attacks to design a classifier with rejection option. BOSC works by purposely injecting class-specific triggers inside a portion of the images in the training set to induce the network to establish a matching between class features and trigger features. The behavior of the trained model with respect to triggered samples is then exploited at test time to perform sample rejection using an ad-hoc score. Experiments show that the proposed method has good performance, always surpassing the state-of-the-art. Robustness against image processing is also very good. Although we designed our method for the task of synthetic image attribution, the proposed framework is a general one and can be used for other image forensic applications.
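The abstract describes two mechanisms: injecting a class-specific trigger into a portion of the training images, and computing a rejection score from how the trained model reacts to triggered samples at test time. The following is a minimal sketch of both ideas; the patch-style trigger, the blending strength, and the confidence-gain score are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def inject_trigger(image, class_id, patch_size=4, strength=0.5):
    """Stamp a class-specific pseudo-random patch onto the image corner.

    Hypothetical trigger design: one fixed random pattern per class,
    alpha-blended into the top-left corner. BOSC's actual triggers
    may be constructed differently.
    """
    trig_rng = np.random.default_rng(class_id)  # deterministic per class
    patch = trig_rng.random((patch_size, patch_size, image.shape[-1]))
    out = image.copy()
    out[:patch_size, :patch_size] = (
        (1 - strength) * out[:patch_size, :patch_size] + strength * patch
    )
    return out

def rejection_score(probs_clean, probs_triggered_per_class):
    """Toy rejection score: a sample from a known class should react
    strongly to its own class trigger, while an unknown sample reacts
    to none of them (low score -> reject).

    probs_clean: (C,) softmax output on the clean sample
    probs_triggered_per_class: (C, C), row c = softmax after trigger c
    """
    c = int(np.argmax(probs_clean))
    # confidence gain on the predicted class under its own trigger
    return probs_triggered_per_class[c, c] - probs_clean[c]
```

A low score means no trigger resonates with the predicted class, so the sample is flagged as coming from an unknown generator.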
Related papers
- Are CLIP features all you need for Universal Synthetic Image Origin Attribution? [13.96698277726253]
We propose a framework that incorporates features from large pre-trained foundation models to perform Open-Set origin attribution of synthetic images.
We show that our method leads to remarkable attribution performance, even in the low-data regime.
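The blurb above proposes attributing images via features from a large pre-trained model. A common open-set head over such features is nearest-centroid matching with a similarity threshold; the sketch below assumes that design (the paper's actual classifier may differ), with the threshold value purely illustrative.

```python
import numpy as np

def attribute_open_set(feat, centroids, threshold=0.8):
    """Nearest-centroid attribution with a rejection option.

    feat:      (D,) L2-normalized feature of the query image
    centroids: (C, D) L2-normalized mean feature per known generator
    Returns the best-matching class index, or -1 (reject) when the
    maximum cosine similarity falls below `threshold`.
    """
    sims = centroids @ feat          # cosine similarity (unit vectors)
    c = int(np.argmax(sims))
    return c if sims[c] >= threshold else -1
```

Because the backbone is frozen and only centroids are estimated, this kind of head needs very few samples per generator, which is consistent with the low-data claim above.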
arXiv Detail & Related papers (2024-08-17T09:54:21Z) - DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images [6.75641797020186]
Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics.
We propose a novel approach based on three blocks called Base Models.
The generalization features extracted from each block are then processed to discriminate the origin of the input image.
arXiv Detail & Related papers (2024-04-24T07:25:36Z) - Which Model Generated This Image? A Model-Agnostic Approach for Origin Attribution [23.974575820244944]
In this work, we study the origin attribution of generated images in a practical setting.
The goal is to check if a given image is generated by the source model.
We propose OCC-CLIP, a CLIP-based framework for few-shot one-class classification.
arXiv Detail & Related papers (2024-04-03T12:54:16Z) - Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional Diffusion Model, results in the natural adversarial sample misclassified by the Model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
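The EvoSeed loop above searches seed space with CMA-ES. As a hedged stand-in, here is a much simpler population-based evolutionary search over a seed vector (plain elite averaging with a fixed step size, not full CMA-ES with covariance adaptation); in the paper, `fitness` would measure how strongly the diffusion model's output misleads the target classifier, whereas here it is an arbitrary callable.

```python
import numpy as np

def evolve_seed(fitness, dim=8, iters=60, pop=16, sigma=0.3, seed=0):
    """Simplified (mu, lambda)-style evolutionary search over a seed vector.

    Each iteration samples `pop` Gaussian perturbations of the current
    mean, scores them with `fitness` (higher is better), and moves the
    mean to the average of the top quarter of candidates.
    """
    rng = np.random.default_rng(seed)
    mean = rng.standard_normal(dim)
    for _ in range(iters):
        cand = mean + sigma * rng.standard_normal((pop, dim))
        scores = np.array([fitness(c) for c in cand])
        elite = cand[np.argsort(scores)[-pop // 4:]]  # keep top 25%
        mean = elite.mean(axis=0)
    return mean
```

CMA-ES additionally adapts the covariance of the sampling distribution, which matters in the high-dimensional latent spaces of diffusion models; this sketch only conveys the sample-score-select loop.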
arXiv Detail & Related papers (2024-02-07T09:39:29Z) - Improving Few-shot Image Generation by Structural Discrimination and Textural Modulation [10.389698647141296]
Few-shot image generation aims to produce plausible and diverse images for one category given a few images from this category.
Existing approaches either globally interpolate different images or fuse local representations with pre-defined coefficients.
This paper proposes a novel mechanism to inject external semantic signals into internal local representations.
arXiv Detail & Related papers (2023-08-30T16:10:21Z) - Progressive Open Space Expansion for Open-Set Model Attribution [19.985618498466042]
We focus on a challenging task, namely Open-Set Model Attribution (OSMA), to simultaneously attribute images to known models and identify those from unknown ones.
Compared to existing open-set recognition (OSR) tasks, OSMA is more challenging as the distinction between images from known and unknown models may only lie in visually imperceptible traces.
We propose a Progressive Open Space Expansion (POSE) solution, which simulates open-set samples that maintain the same semantics as closed-set samples but embedded with different imperceptible traces.
arXiv Detail & Related papers (2023-03-13T05:53:11Z) - Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto GAN-based approaches.
arXiv Detail & Related papers (2022-06-30T18:31:51Z) - DiVAE: Photorealistic Images Synthesis with Denoising Diffusion Decoder [73.1010640692609]
We propose a VQ-VAE architecture model with a diffusion decoder (DiVAE) to work as the reconstructing component in image synthesis.
Our model achieves state-of-the-art results and generates more photorealistic images specifically.
arXiv Detail & Related papers (2022-06-01T10:39:12Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with only a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.