Deepfake Style Transfer Mixture: a First Forensic Ballistics Study on
Synthetic Images
- URL: http://arxiv.org/abs/2203.09928v1
- Date: Fri, 18 Mar 2022 13:11:54 GMT
- Title: Deepfake Style Transfer Mixture: a First Forensic Ballistics Study on
Synthetic Images
- Authors: Luca Guarnera (1 and 2), Oliver Giudice (1 and 3), Sebastiano Battiato
(1 and 2) ((1) University of Catania, (2) iCTLab s.r.l. - Spin-off of
University of Catania, (3) Applied Research Team, IT dept., Banca d'Italia,
Italy)
- Abstract summary: This paper describes a study on detecting how many times a digital image has been processed by a generative architecture for style transfer.
To accurately address and study forensic ballistics on deepfake images, some mathematical properties of style-transfer operations were investigated.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The most recent style-transfer techniques based on generative
architectures can produce synthetic multimedia content, commonly called
deepfakes, with almost no visible artifacts. Researchers have already
demonstrated that synthetic images contain patterns that reveal not only
whether an image is a deepfake but also which generative architecture was
employed to create it. These traces can be exploited to study problems that
have never been addressed in the context of deepfakes. To this aim, this
paper proposes a first approach to investigating image ballistics on deepfake
images subject to style-transfer manipulations. Specifically, it describes a
study on detecting how many times a digital image has been processed by a
generative architecture for style transfer. Moreover, to accurately address
and study forensic ballistics on deepfake images, some mathematical
properties of style-transfer operations were investigated.
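As a rough illustration of the counting task the abstract describes, the sketch below applies an image-to-image style-transfer generator to an image several times, then uses a small classifier to predict the number of passes. Both the generator and BallisticsCounter are illustrative placeholders under my own assumptions, not the authors' models.

    import torch
    import torch.nn as nn

    def apply_style_transfer_n_times(generator: nn.Module,
                                     image: torch.Tensor, n: int) -> torch.Tensor:
        """Run the image through a style-transfer generator n times
        (the 'deepfake style transfer mixture' setting)."""
        out = image
        with torch.no_grad():
            for _ in range(n):
                out = generator(out)
        return out

    class BallisticsCounter(nn.Module):
        """Toy CNN predicting how many times (1..max_passes) an image
        was processed by the generator; a stand-in for the paper's detector."""
        def __init__(self, max_passes: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, max_passes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x))

Training data would pair each image transformed n times with the label n; the study's point is that such labels are learnable because each pass leaves cumulative, analyzable traces.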
Related papers
- Synthetic Art Generation and DeepFake Detection: A Study on Jamini Roy Inspired Dataset [1.0742675209112622]
This study takes a unique approach by examining diffusion-based generative models in the context of Indian art.
To explore this, we fine-tuned Stable Diffusion 3 and used techniques like ControlNet and IPAdapter to generate realistic images.
We employed various qualitative and quantitative methods, such as Fourier domain assessments and autocorrelation metrics, to uncover subtle differences between synthetic images and authentic pieces.
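As a minimal sketch of those two measurements, the snippet below computes a centered log-magnitude Fourier spectrum and a normalized autocorrelation for a grayscale image array; the function names are illustrative, not the study's code.

    import numpy as np

    def log_fourier_spectrum(gray: np.ndarray) -> np.ndarray:
        """Centered log-magnitude FFT; synthetic images often show
        grid-like spectral peaks that real photographs lack."""
        return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))

    def autocorrelation(gray: np.ndarray) -> np.ndarray:
        """Normalized autocorrelation via the Wiener-Khinchin theorem:
        the inverse FFT of the power spectrum."""
        g = gray - gray.mean()
        power = np.abs(np.fft.fft2(g)) ** 2
        ac = np.fft.ifft2(power).real
        return np.fft.fftshift(ac / ac.flat[0])  # zero lag normalized to 1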
arXiv Detail & Related papers (2025-03-29T21:12:16Z)
- Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection [54.26588902144298]
We propose a knowledge-guided prompt learning method for deepfake facial image detection.
Specifically, we retrieve forgery-related prompts from large language models as expert knowledge to guide the optimization of learnable prompts.
Our proposed approach notably outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-01-01T02:18:18Z)
- Solutions to Deepfakes: Can Camera Hardware, Cryptography, and Deep Learning Verify Real Images? [51.3344199560726]
It is imperative to establish methods that can separate real data from synthetic data with high confidence.
This document presents known strategies in detection and cryptography that can be employed to verify which images are real.
arXiv Detail & Related papers (2024-07-04T22:01:21Z)
- DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images [6.75641797020186]
Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics.
We propose a novel approach based on three blocks called Base Models.
The generalization features extracted from each block are then processed to discriminate the origin of the input image.
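Read literally, that description suggests three parallel feature extractors whose outputs are fused before a final classifier. The sketch below is only a guess at that shape, with toy convolutional blocks standing in for the paper's Base Models and a hypothetical three-way label set.

    import torch
    import torch.nn as nn

    def _base_model() -> nn.Sequential:
        """Toy convolutional block standing in for one 'Base Model'."""
        return nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class DeepFeatureXSketch(nn.Module):
        """Three extractors fused for origin classification
        (hypothetical labels: real / GAN-generated / diffusion-generated)."""
        def __init__(self, num_origins: int = 3):
            super().__init__()
            self.base_models = nn.ModuleList([_base_model() for _ in range(3)])
            self.head = nn.Linear(64 * 3, num_origins)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feats = torch.cat([m(x) for m in self.base_models], dim=1)
            return self.head(feats)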
arXiv Detail & Related papers (2024-04-24T07:25:36Z)
- Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection [86.97062579515833]
We introduce the concept of Neighboring Pixel Relationships (NPR) as a means to capture and characterize the generalized structural artifacts stemming from up-sampling operations.
A comprehensive analysis is conducted on an open-world dataset comprising samples generated by 28 distinct generative models.
This analysis culminates in a novel state-of-the-art performance, showcasing a remarkable 11.6% improvement over existing methods.
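A rough sketch of the NPR idea, under my own simplified reading of the summary: within each small cell, express every pixel relative to a reference neighbor so that periodic up-sampling residue stands out as structured differences.

    import numpy as np

    def npr_map(img: np.ndarray, cell: int = 2) -> np.ndarray:
        """Simplified Neighboring Pixel Relationships: subtract each
        cell's top-left pixel from every pixel in that cell, leaving
        structured residuals where up-sampling repeated local values.
        Not the authors' exact formulation."""
        h = img.shape[0] - img.shape[0] % cell
        w = img.shape[1] - img.shape[1] % cell
        x = img[:h, :w].astype(np.float32)
        # Top-left pixel of each cell, tiled back over the cell.
        anchors = x[::cell, ::cell].repeat(cell, axis=0).repeat(cell, axis=1)
        return x - anchors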
arXiv Detail & Related papers (2023-12-16T14:27:06Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated-image detection problem from a new perspective: start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are projected outside that subspace.
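That decision rule, score how far a sample falls from the dense region real images occupy, can be mimicked with a simple Mahalanobis scorer over precomputed features. The paper learns the feature mapping itself; this stand-in assumes the features are given.

    import numpy as np

    class RealImageSubspaceScorer:
        """One-class stand-in: fit a Gaussian to real-image features and
        score new samples by squared Mahalanobis distance (higher means
        farther from the real-image region, i.e. more likely generated)."""
        def fit(self, real_feats: np.ndarray) -> "RealImageSubspaceScorer":
            self.mean = real_feats.mean(axis=0)
            # Pseudo-inverse keeps this stable if the covariance is rank-deficient.
            self.prec = np.linalg.pinv(np.cov(real_feats, rowvar=False))
            return self

        def score(self, feats: np.ndarray) -> np.ndarray:
            d = feats - self.mean
            return np.einsum("ij,jk,ik->i", d, self.prec, d)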
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Deepfake detection by exploiting surface anomalies: the SurFake approach [29.088218634944116]
This paper investigates how deepfake creation can affect the characteristics that the whole scene had at acquisition time.
By analyzing the characteristics of the surfaces depicted in an image, it is possible to obtain a descriptor that can be used to train a CNN for deepfake detection.
arXiv Detail & Related papers (2023-10-31T16:54:14Z)
- Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on deepfake detection generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- On the detection of synthetic images generated by diffusion models [18.12766911229293]
Methods based on diffusion models (DMs) have been gaining the spotlight.
DMs enable the creation of visual content from text prompts.
Malicious users can generate and distribute fake media perfectly adapted to their attacks.
arXiv Detail & Related papers (2022-11-01T18:10:55Z)
- Deepfake Network Architecture Attribution [23.375381198124014]
Existing works on fake image attribution perform multi-class classification on several Generative Adversarial Network (GAN) models.
We present the first study on Deepfake Network Architecture Attribution, attributing fake images at the architecture level.
arXiv Detail & Related papers (2022-02-28T14:54:30Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
- Leveraging Frequency Analysis for Deep Fake Image Recognition [35.1862941141084]
Deep neural networks can generate images that are astonishingly realistic, so much so that it is often hard for humans to distinguish them from actual photos.
These achievements have been largely made possible by Generative Adversarial Networks (GANs).
In this paper, we show that in frequency space, GAN-generated images exhibit severe artifacts that can be easily identified.
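That frequency-space inspection is easy to reproduce in spirit: average log-magnitude DCT spectra over many images and look for grid-like peaks. This sketch assumes equally sized grayscale inputs and scipy; it mirrors the analysis style rather than the paper's exact pipeline.

    import numpy as np
    from scipy.fft import dctn

    def mean_log_dct(images):
        """Average log-magnitude 2D DCT spectrum over same-size grayscale
        images; GAN outputs tend to show periodic peaks absent in photos."""
        spectra = [np.log1p(np.abs(dctn(img.astype(np.float64), norm="ortho")))
                   for img in images]
        return np.mean(spectra, axis=0)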
arXiv Detail & Related papers (2020-03-19T11:06:54Z)