On the Exploitation of Deepfake Model Recognition
- URL: http://arxiv.org/abs/2204.04513v1
- Date: Sat, 9 Apr 2022 16:48:23 GMT
- Title: On the Exploitation of Deepfake Model Recognition
- Authors: Luca Guarnera (1), Oliver Giudice (2), Matthias Niessner (3),
Sebastiano Battiato (1) ((1) University of Catania, (2) Applied Research
Team, IT dept., Banca d'Italia, Italy, (3) Technical University of Munich,
Germany)
- Abstract summary: The recognition of the specific GAN model that generated a deepfake image is a task not yet completely addressed in the state-of-the-art.
A robust processing pipeline for evaluating whether analytic fingerprints for Deepfake model recognition can be identified is presented.
The study takes an important step in countering the Deepfake phenomenon by introducing a signature similar to those employed in the multimedia forensics field.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent advances in Generative Adversarial Networks (GANs), and
with special focus on the Deepfake phenomenon, there is no clear understanding
of the involved models, either in terms of explainability or of recognition.
In particular, recognizing the specific GAN model that generated a deepfake
image, among the many other possible models created by the same generative
architecture (e.g. StyleGAN), is a task not yet completely addressed in the
state-of-the-art. In this work, a robust processing pipeline for evaluating
whether analytic fingerprints for Deepfake model recognition can be identified
is presented. After exploiting the latent space of 50 slightly different models
through an in-depth analysis of the generated images, a proper encoder was
trained to discriminate among these models, obtaining a classification
accuracy of over 96%. Having demonstrated that extremely similar images can be
discriminated, a dedicated metric exploiting the insights discovered in the
latent space was introduced. By achieving a final accuracy of more than 94% on
the Model Recognition task for images generated by models not employed in the
training phase, this study takes an important step in countering the Deepfake
phenomenon, introducing a signature similar to those employed in the
multimedia forensics field (e.g. for the camera source identification and
image ballistics tasks).
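The pipeline itself is not released with this listing; as a minimal sketch of the first stage only, assuming a PyTorch ResNet-18 as a stand-in encoder and one folder of generated images per model under data/gan_models/<model_id>/ (both assumptions, not the authors' setup):

```python
# Minimal sketch (not the authors' code): train an encoder to classify
# which of N similar GAN models produced a given image.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

data_root = "data/gan_models"   # hypothetical layout: one subfolder per model
num_models = 50                 # 50 slightly different models, as in the paper

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
ds = datasets.ImageFolder(data_root, transform=tfm)   # class label = source model
loader = DataLoader(ds, batch_size=32, shuffle=True)

encoder = models.resnet18(weights=None)               # stand-in encoder choice
encoder.fc = nn.Linear(encoder.fc.in_features, num_models)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(encoder(x), y)
        loss.backward()
        opt.step()
```

The dedicated latent-space metric used for unseen models would sit on top of the embeddings such an encoder learns; it is not reproduced here.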
Related papers
- FakeScope: Large Multimodal Expert Model for Transparent AI-Generated Image Forensics [66.14786900470158]
We propose FakeScope, an expert multimodal model (LMM) tailored for AI-generated image forensics.
FakeScope identifies AI-synthetic images with high accuracy and provides rich, interpretable, and query-driven forensic insights.
FakeScope achieves state-of-the-art performance in both closed-ended and open-ended forensic scenarios.
arXiv Detail & Related papers (2025-03-31T16:12:48Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, built on generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
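Neither method's implementation is reproduced here; one plausible reading of the blur-contrast idea, sketched with an assumed open_clip feature extractor (the backbone choice and decision rule are illustrative, and MINDER is not covered), is to compare an image's embedding against that of a blurred copy:

```python
# Hedged sketch of a blur-contrast score for training-free detection.
# Assumes the open_clip package; the thresholding rule is illustrative.
import torch
import open_clip
from PIL import Image, ImageFilter

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai")

def blur_contrast_score(path: str, radius: float = 2.0) -> float:
    img = Image.open(path).convert("RGB")
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    with torch.no_grad():
        f1 = model.encode_image(preprocess(img).unsqueeze(0))
        f2 = model.encode_image(preprocess(blurred).unsqueeze(0))
    # Low similarity between the two embeddings is treated as suspicious.
    return torch.cosine_similarity(f1, f2).item()
```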
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
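A minimal sketch of the reinforcement step, assuming the counterfactual images have already been generated and stored alongside the original data with matching class folders (hypothetical paths, not the paper's pipeline):

```python
# Illustrative sketch: fine-tune a pre-trained classifier on the union of
# the original data and the counterfactual augmentation set.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms, models

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
original = datasets.ImageFolder("data/original", transform=tfm)        # hypothetical
counterfactual = datasets.ImageFolder("data/counterfactual", transform=tfm)
# Assumes both folders share the same class subdirectories / label indices.
loader = DataLoader(ConcatDataset([original, counterfactual]),
                    batch_size=32, shuffle=True)

model = models.resnet50(weights="IMAGENET1K_V2")   # stand-in pre-trained model
opt = torch.optim.Adam(model.parameters(), lr=1e-5)  # small lr for fine-tuning
loss_fn = nn.CrossEntropyLoss()
for x, y in loader:
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```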
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen about the potential misuse of images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish the images generated by the inspected model and other images with a high accuracy and efficiency.
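LatentTracer's own optimization details are not reproduced here; a bare-bones inversion sketch, assuming access to the inspected model's decoder, illustrates the principle that images produced by the model reconstruct with unusually low loss:

```python
# Sketch of inversion-based tracing (not LatentTracer itself): optimize a
# latent code to reconstruct the query image through the inspected decoder.
import torch

def inversion_loss(decoder, image, latent_dim=512, steps=500, lr=0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((decoder(z) - image) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Threshold this final reconstruction error to decide whether the
        # image plausibly came from this model.
        return torch.mean((decoder(z) - image) ** 2).item()
```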
arXiv Detail & Related papers (2024-05-22T05:33:47Z)
- Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection [86.97062579515833]
We introduce the concept of Neighboring Pixel Relationships (NPR) as a means to capture and characterize the generalized structural artifacts stemming from up-sampling operations.
A comprehensive analysis is conducted on an open-world dataset, comprising samples generated by 28 distinct generative models.
This analysis culminates in a new state-of-the-art performance, showcasing a remarkable 11.6% improvement over existing methods.
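A simplified reading of NPR (not the authors' implementation) treats each 2x2 cell as one up-sampling unit and keeps the residuals against its anchor pixel:

```python
# Sketch of neighboring-pixel residuals over 2x2 up-sampling cells.
import torch

def npr(x: torch.Tensor) -> torch.Tensor:
    """x: (B, C, H, W) image batch with H and W divisible by 2."""
    # Broadcast the top-left pixel of each 2x2 cell over the whole cell and
    # subtract, exposing correlations introduced by 2x up-sampling.
    anchor = (x[:, :, 0::2, 0::2]
              .repeat_interleave(2, dim=2)
              .repeat_interleave(2, dim=3))
    return x - anchor   # residual map fed to a binary real/fake classifier
```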
arXiv Detail & Related papers (2023-12-16T14:27:06Z)
- An Ambiguity Measure for Recognizing the Unknowns in Deep Learning [0.0]
We study what deep neural networks understand relative to the scope of the data they are trained on.
We propose a measure for quantifying the ambiguity of inputs for any given model.
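The paper defines its own measure; as a generic stand-in illustration, the predictive entropy of the softmax output is one common way to quantify how ambiguous an input is for a given model:

```python
# Stand-in ambiguity score (not the paper's measure): predictive entropy.
# High entropy means the model is uncertain, i.e. the input is closer to
# the unknowns relative to the model's training scope.
import torch
import torch.nn.functional as F

def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1)
```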
arXiv Detail & Related papers (2023-12-11T02:57:12Z)
- Hierarchical Uncertainty Estimation for Medical Image Segmentation Networks [1.9564356751775307]
Uncertainty exists in both images (noise) and manual annotations (human errors and bias) used for model training.
We propose a simple yet effective method for estimating uncertainties at multiple levels.
We demonstrate that a deep learning segmentation network, such as U-net, can achieve high segmentation performance.
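As an illustrative sketch only (Monte Carlo dropout is a stand-in here, not necessarily the paper's estimator), per-pixel uncertainty can be sampled and then pooled upward to the image level:

```python
# Sketch: MC-dropout uncertainty at pixel level, pooled to image level.
import torch

def mc_dropout_uncertainty(unet, image, n_samples=20):
    unet.train()   # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.sigmoid(unet(image)) for _ in range(n_samples)])
    pixel_unc = probs.var(dim=0)   # per-pixel uncertainty map
    image_unc = pixel_unc.mean()   # pooled, image-level uncertainty
    return pixel_unc, image_unc
```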
arXiv Detail & Related papers (2023-08-16T16:09:23Z)
- Level Up the Deepfake Detection: a Method to Effectively Discriminate Images Generated by GAN Architectures and Diffusion Models [0.0]
The deepfake detection and recognition task was investigated by collecting a dedicated dataset of pristine and fake images.
A hierarchical multi-level approach was introduced to solve three different deepfake detection and recognition tasks.
Experimental results demonstrated, in each case, more than 97% classification accuracy, outperforming state-of-the-art methods.
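A hedged sketch of how such a hierarchical cascade might be wired, assuming three pre-trained classifiers (real vs fake, GAN vs diffusion, and a per-family model identifier; all names are assumptions):

```python
# Hedged sketch of a three-level cascade; the classifier callables and the
# label vocabulary are assumptions, not the paper's components.
def classify(image, real_vs_fake, gan_vs_diffusion, identify_model):
    if real_vs_fake(image) == "real":           # level 1: detection
        return "real"
    family = gan_vs_diffusion(image)            # level 2: "gan" or "diffusion"
    return f"fake/{family}/{identify_model(image, family)}"  # level 3: model
```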
arXiv Detail & Related papers (2023-03-01T16:01:46Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
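A minimal RBF unit in PyTorch, with the paper's multi-branch wiring omitted; the centers and widths are learned, and each unit fires on proximity to its center:

```python
# Minimal RBF layer sketch: Gaussian response to distance from learned centers.
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    def __init__(self, in_features: int, n_units: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_units, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(n_units))

    def forward(self, x):                        # x: (B, in_features)
        d2 = torch.cdist(x, self.centers) ** 2   # squared distances to centers
        return torch.exp(-d2 / (2 * torch.exp(self.log_sigma) ** 2))
```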
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- An application of a pseudo-parabolic modeling to texture image recognition [0.0]
We present a novel methodology for texture image recognition based on a partial differential equation model.
We employ the pseudo-parabolic Buckley-Leverett equation to provide dynamics for the digital image representation and collect local descriptors from the images as they evolve in time.
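A heavily simplified sketch under strong assumptions (explicit finite differences, periodic boundaries, added diffusion for stability; not the paper's numerical scheme) of evolving a normalized grayscale image with the Buckley-Leverett flux:

```python
# Simplified sketch: evolve a grayscale image in [0, 1] with a
# Buckley-Leverett flux plus light diffusion, and keep the frames
# from which local texture descriptors would be extracted.
import numpy as np

def bl_flux(u, m=2.0):
    return u**2 / (u**2 + m * (1.0 - u)**2)

def evolve(u, steps=10, dt=0.1, eps=0.05):
    frames = []
    for _ in range(steps):
        f = bl_flux(u)
        div = (np.roll(f, -1, 0) - np.roll(f, 1, 0)
               + np.roll(f, -1, 1) - np.roll(f, 1, 1)) / 2.0
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = np.clip(u - dt * div + eps * lap, 0.0, 1.0)
        frames.append(u.copy())
    return frames
```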
arXiv Detail & Related papers (2021-02-09T18:08:42Z)
- DeepFake Detection by Analyzing Convolutional Traces [0.0]
We focus on the analysis of Deepfakes of human faces with the objective of creating a new detection method.
The proposed technique, by means of an Expectation Maximization (EM) algorithm, extracts a set of local features specifically designed to model the underlying convolutional generative process.
Results demonstrated the effectiveness of the technique in distinguishing the different architectures and the corresponding generation process.
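A simplified EM sketch in the spirit of the technique (not the authors' code): model each pixel as a linear combination of its neighbors, alternate between weighting residuals (E-step) and re-fitting the kernel (M-step), and use the kernel weights as the feature vector:

```python
# Simplified EM sketch for estimating a kernel of local pixel correlations.
import numpy as np

def em_kernel(img, k=3, iters=10, sigma=5.0):
    """img: 2-D float array (grayscale). Returns k*k - 1 kernel weights."""
    pad = k // 2
    cols = []
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            if (dy, dx) != (0, 0):   # center pixel excluded from its predictors
                cols.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1).ravel())
    A, b = np.stack(cols, axis=1), img.ravel()
    w = np.full(A.shape[1], 1.0 / A.shape[1])    # initial kernel guess
    for _ in range(iters):
        r = b - A @ w                            # residuals under current kernel
        g = np.exp(-r**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        p = g / (g + 1.0 / 256.0)                # E-step: posterior vs uniform outliers
        sw = np.sqrt(p)
        w = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]  # M-step
    return w   # kernel weights characterize the convolutional traces
```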
arXiv Detail & Related papers (2020-04-22T09:02:55Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
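A hedged sketch of the latent-space search, assuming a pre-trained generator gen and audited classifier clf (both stand-ins): optimize the latent toward the target class while penalizing distance from a plausible seed:

```python
# Sketch: search a GAN's latent space for an image that flips the audited
# classifier while staying near a plausible seed latent z0.
import torch
import torch.nn.functional as F

def counterfactual_search(gen, clf, z0, target, steps=300, lam=0.1, lr=0.05):
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = clf(gen(z))
        # Classification objective plus a plausibility penalty on the latent.
        loss = F.cross_entropy(logits, target) + lam * torch.norm(z - z0)
        loss.backward()
        opt.step()
    return gen(z).detach()
```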
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
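A toy spread-spectrum sketch, far simpler than the paper's framework: add a faint keyed pattern to every model output, then look for it by correlation in images from a suspected surrogate:

```python
# Toy watermark sketch (illustrative only): embed a faint pseudo-random
# pattern in model outputs and detect it by normalized correlation.
import numpy as np

rng = np.random.default_rng(seed=42)        # the seed acts as the secret key
pattern = rng.standard_normal((256, 256))   # assumed output resolution

def embed(output, alpha=0.01):
    return output + alpha * pattern

def detect(image):
    flat = (image - image.mean()).ravel()
    w = pattern.ravel()
    # Correlation score; calibrate a decision threshold empirically.
    return float(flat @ w / (np.linalg.norm(flat) * np.linalg.norm(w) + 1e-12))
```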
arXiv Detail & Related papers (2020-02-25T18:36:18Z)