Deepfake Detection: A Comparative Analysis
- URL: http://arxiv.org/abs/2308.03471v1
- Date: Mon, 7 Aug 2023 10:57:20 GMT
- Title: Deepfake Detection: A Comparative Analysis
- Authors: Sohail Ahmed Khan and Duc-Tien Dang-Nguyen
- Abstract summary: We evaluate eight supervised deep learning architectures and two transformer-based models pre-trained using self-supervised strategies on four benchmarks.
Our analysis includes intra-dataset and inter-dataset evaluations, examining the best performing models, generalisation capabilities, and impact of augmentations.
- Score: 2.644723682054489
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a comprehensive comparative analysis of supervised and
self-supervised models for deepfake detection. We evaluate eight supervised
deep learning architectures and two transformer-based models pre-trained using
self-supervised strategies (DINO, CLIP) on four benchmarks (FakeAVCeleb,
CelebDF-V2, DFDC, and FaceForensics++). Our analysis includes intra-dataset and
inter-dataset evaluations, examining the best performing models, generalisation
capabilities, and impact of augmentations. We also investigate the trade-off
between model size and performance. Our main goal is to provide insights into
the effectiveness of different deep learning architectures (transformers,
CNNs), training strategies (supervised, self-supervised), and deepfake
detection benchmarks. These insights can help guide the development of more
accurate and reliable deepfake detection systems, which are crucial in
mitigating the harmful impact of deepfakes on individuals and society.
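For concreteness, below is a minimal sketch of the two evaluation protocols the abstract describes, using standard PyTorch/torchvision components; the backbone, datasets, and training details are illustrative stand-ins, not the authors' exact configuration.
```python
# Minimal sketch of intra- vs inter-dataset deepfake evaluation.
# Backbone choice and data loaders are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_detector() -> nn.Module:
    # Supervised baseline: ImageNet-pretrained CNN with a new binary head.
    # A self-supervised backbone (e.g. a DINO or CLIP image encoder) would
    # replace `backbone` here, typically frozen with only the head trained.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # real vs fake
    return backbone

@torch.no_grad()
def accuracy(model: nn.Module, loader) -> float:
    model.eval()
    correct = total = 0
    for images, labels in loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total

# Intra-dataset: train and test splits come from the same benchmark.
# Inter-dataset: train on one benchmark (e.g. FaceForensics++) and test on
# another (e.g. CelebDF-V2) to probe generalisation across datasets.
```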
Related papers
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, which is a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- How Close are Other Computer Vision Tasks to Deepfake Detection? [42.79190870582115]
We present a new measurement, "model separability," for assessing a model's raw capacity to separate data in an unsupervised manner.
Our analysis shows that pre-trained face recognition models are more closely related to deepfake detection than other models.
We found that self-supervised models deliver the best results, but there is a risk of overfitting.
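The exact definition of the separability measurement is not given in this summary; as a rough illustration, one unsupervised stand-in scores how cleanly frozen embeddings cluster, e.g. with the silhouette coefficient (the function below is hypothetical, not the paper's metric).
```python
# Hypothetical separability probe: cluster frozen embeddings without labels
# and score how cleanly they split. The paper's actual "model separability"
# metric may differ; silhouette score is used here as a simple stand-in.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def separability(features: np.ndarray, n_clusters: int = 2) -> float:
    """features: (n_samples, dim) embeddings from a frozen, pre-trained model."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return silhouette_score(features, labels)  # in [-1, 1]; higher = more separable
```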
arXiv Detail & Related papers (2023-10-02T06:32:35Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, using only the intrinsic deepfake-related information for real/fake discrimination.
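A generic sketch of that decomposition idea follows: two projections split a shared backbone feature into a deepfake-related part and an irrelevant part, with an orthogonality penalty discouraging overlap. This is an assumption-laden illustration, not the paper's actual DID architecture or losses.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposeHead(nn.Module):
    """Illustrative decomposition of a backbone feature into a deepfake-related
    part and an irrelevant part; hypothetical, not the paper's DID design."""
    def __init__(self, dim: int):
        super().__init__()
        self.relevant = nn.Linear(dim, dim)    # deepfake-related component
        self.irrelevant = nn.Linear(dim, dim)  # identity, content, etc.
        self.classifier = nn.Linear(dim, 2)    # real/fake head on relevant part only

    def forward(self, feat: torch.Tensor):
        r, i = self.relevant(feat), self.irrelevant(feat)
        # Orthogonality penalty discouraging the two parts from encoding
        # the same information.
        ortho = (F.normalize(r, dim=1) * F.normalize(i, dim=1)).sum(dim=1).pow(2).mean()
        return self.classifier(r), ortho
```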
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
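As a rough illustration of this kind of probe, evaluation can simply be re-run with synthetic "sensor noise" added to the inputs; the fault types and magnitudes in the actual study may differ.
```python
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, sigma: float = 0.1) -> float:
    """Illustrative robustness probe: additive Gaussian noise stands in
    for sensor faults. Assumed setup, not the study's exact perturbations."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_noisy = x + sigma * torch.randn_like(x)
        correct += (model(x_noisy).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```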
arXiv Detail & Related papers (2023-06-13T12:43:59Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning [16.13790238416691]
This work introduces two attacks, AdvEdge and AdvEdge+, that deceive both the target deep learning model and the coupled interpretation model.
Our analysis shows the effectiveness of our attacks in terms of deceiving the deep learning models and their interpreters.
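Schematically, such an attack optimises a joint objective: push the classifier toward a wrong prediction while keeping the interpretation map close to the benign one. The sketch below is a generic PGD-style step under that objective, assuming a differentiable `interpreter`; AdvEdge's actual edge-weighted perturbation scheme is not reproduced here.
```python
# Generic sketch of a joint model-plus-interpreter attack step. The
# `interpreter` is assumed to be a differentiable saliency function;
# this is not AdvEdge's actual objective or edge weighting.
import torch
import torch.nn.functional as F

def joint_attack_step(x, y, model, interpreter, eps=8/255, step=2/255):
    benign_map = interpreter(x).detach()           # saliency map of the clean input
    x_adv = x.clone().detach().requires_grad_(True)
    loss = (-F.cross_entropy(model(x_adv), y)      # maximise classification loss
            + F.mse_loss(interpreter(x_adv), benign_map))  # keep interpretation stable
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv - step * x_adv.grad.sign()   # descend on the joint objective
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project to eps-ball
    return x_adv.detach()
```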
arXiv Detail & Related papers (2022-11-29T04:45:10Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
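As a rough illustration of adversarial training in general (not the paper's jet-specific setup), each batch can be perturbed on the fly with an FGSM-style attack and the model updated on the perturbed inputs:
```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.01):
    """Illustrative FGSM-based adversarial training step; the paper's
    attack model and magnitudes for jet inputs may differ."""
    # Craft adversarial inputs on the fly.
    x_pert = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_pert), y)
    loss.backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).detach()
    # Train on the perturbed batch (optionally mixed with the clean one).
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```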
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications [27.033174829788404]
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning.
We will highlight state-of-the-art techniques in adversarial attacks and robustness verification of deep neural networks (DNNs).
We will also introduce some effective countermeasures to improve the robustness of deep learning models.
arXiv Detail & Related papers (2021-08-24T00:08:33Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on ML-Doctor, a modular, reusable software tool that enables ML model owners to assess the risks of deploying their models.
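To give a flavour of the simplest of these attacks, a baseline membership inference can threshold the target model's confidence, since models tend to be more confident on their training data; ML-Doctor's actual attacks are learned and considerably more sophisticated.
```python
import numpy as np

def membership_inference(confidences: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Baseline membership inference sketch: predict 'member' when the target
    model's top softmax confidence on a sample exceeds a threshold.
    confidences: (n_samples, n_classes) softmax outputs of the target model."""
    return confidences.max(axis=1) > threshold  # True = predicted training member
```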
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- A Comprehensive Evaluation Framework for Deep Model Robustness [44.20580847861682]
Deep neural networks (DNNs) have achieved remarkable performance across a wide range of applications.
However, they are vulnerable to adversarial examples, which motivates research on adversarial defenses.
This paper presents a model evaluation framework containing a comprehensive, rigorous, and coherent set of evaluation metrics.
arXiv Detail & Related papers (2021-01-24T01:04:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.