DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images
- URL: http://arxiv.org/abs/2404.15697v1
- Date: Wed, 24 Apr 2024 07:25:36 GMT
- Title: DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images
- Authors: Orazio Pontorno, Luca Guarnera, Sebastiano Battiato
- Abstract summary: Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics.
We propose a novel approach based on three blocks called Base Models.
The generalization features extracted from each block are then processed to discriminate the origin of the input image.
- Score: 6.75641797020186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics. The scientific community is working to develop approaches that can discriminate the origin of digital images (real or AI-generated). However, these methodologies face the challenge of generalization, that is, the ability to discern the nature of an image even if it is generated by an architecture not seen during training. This usually leads to a drop in performance. In this context, we propose a novel approach based on three blocks called Base Models, each of which is responsible for extracting the discriminative features of a specific image class (Diffusion Model-generated, GAN-generated, or real), as it is trained by exploiting deliberately unbalanced datasets. The features extracted from each block are then concatenated and processed to discriminate the origin of the input image. Experimental results showed that this approach not only demonstrates good robustness to JPEG compression but also outperforms state-of-the-art methods in several generalization tests. Code, models and dataset are available at https://github.com/opontorno/block-based_deepfake-detection.
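The abstract describes a three-branch design: one class-specialized "Base Model" per image class (Diffusion Model-generated, GAN-generated, real), whose features are concatenated and fed to a classification head. A minimal sketch of that wiring is below, using stand-in linear extractors rather than the paper's actual CNNs; all names, dimensions, and the class-index convention here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the three-branch "Base Models" idea: each branch is a
# feature extractor specialized for one image class; their outputs are
# concatenated and passed to a shared classification head. The linear maps
# below are stand-ins for the paper's trained CNN blocks.

rng = np.random.default_rng(0)
FEAT_DIM = 8   # hypothetical per-branch feature size
IN_DIM = 16    # hypothetical flattened input size

class BaseModel:
    """Stand-in for one class-specialized feature extractor."""
    def __init__(self, in_dim, out_dim=FEAT_DIM):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.1

    def extract(self, x):
        # ReLU over a linear projection, standing in for deep features
        return np.maximum(x @ self.W, 0.0)

def classify(x, branches, head_W):
    # Concatenate all branches' features, then apply a linear head
    feats = np.concatenate([b.extract(x) for b in branches], axis=-1)
    logits = feats @ head_W
    return int(np.argmax(logits))  # 0 = DM, 1 = GAN, 2 = real (hypothetical)

branches = [BaseModel(IN_DIM) for _ in range(3)]  # DM / GAN / real extractors
head_W = rng.standard_normal((3 * FEAT_DIM, 3)) * 0.1

x = rng.standard_normal(IN_DIM)  # a dummy flattened "image"
pred = classify(x, branches, head_W)
print(pred)
```

The point of the structure is that each branch can be trained on a deliberately unbalanced dataset favoring its own class, so the concatenated feature vector carries class-specific evidence that the head can weigh.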
Related papers
- Detection of Synthetic Face Images: Accuracy, Robustness, Generalization [1.757194730633422]
We find that a simple model trained on a specific image generator can achieve near-perfect accuracy in separating synthetic and real images.
The model turned out to be vulnerable to adversarial attacks and does not generalize to unseen generators.
arXiv Detail & Related papers (2024-06-25T13:34:50Z) - RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z) - How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish the images generated by the inspected model and other images with a high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z) - BOSC: A Backdoor-based Framework for Open Set Synthetic Image Attribution [22.81354665006496]
Synthetic image attribution addresses the problem of tracing back the origin of images produced by generative models.
We propose a framework for open set attribution of synthetic images, named BOSC, that relies on the concept of backdoor attacks.
arXiv Detail & Related papers (2024-05-19T09:17:43Z) - Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection [86.97062579515833]
We introduce the concept of Neighboring Pixel Relationships (NPR) as a means to capture and characterize the generalized structural artifacts stemming from up-sampling operations.
A comprehensive analysis is conducted on an open-world dataset, comprising samples generated by 28 distinct generative models.
This analysis culminates in the establishment of a novel state-of-the-art performance, showcasing a remarkable 11.6% improvement over existing methods.
arXiv Detail & Related papers (2023-12-16T14:27:06Z) - Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on deepfake detection generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Deepfake Network Architecture Attribution [23.375381198124014]
Existing works on fake image attribution perform multi-class classification on several Generative Adversarial Network (GAN) models.
We present the first study on Deepfake Network Architecture Attribution to attribute fake images at the architecture level.
arXiv Detail & Related papers (2022-02-28T14:54:30Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - Just Noticeable Difference for Machines to Generate Adversarial Images [0.34376560669160383]
The proposed method is based on a popular concept of experimental psychology called Just Noticeable Difference.
The adversarial images generated in this study look more natural compared to the output of state-of-the-art adversarial image generators.
arXiv Detail & Related papers (2020-01-29T19:42:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.