Beyond Detection: Visual Realism Assessment of Deepfakes
- URL: http://arxiv.org/abs/2306.05985v1
- Date: Fri, 9 Jun 2023 15:53:01 GMT
- Title: Beyond Detection: Visual Realism Assessment of Deepfakes
- Authors: Luka Dragar, Peter Peer, Vitomir Štruc, Borut Batagelj
- Abstract summary: We utilize an ensemble of two Convolutional Neural Network (CNN) models: Eva and ConvNeXt.
We aim to predict Mean Opinion Scores (MOS) from DeepFake videos based on features extracted from sequences of frames.
Our method secured third place in the recent DFGC on Visual Realism Assessment held in conjunction with the 2023 International Joint Conference on Biometrics.
- Score: 1.0832844764942349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the era of rapid digitalization and artificial intelligence advancements,
the development of DeepFake technology has posed significant security and
privacy concerns. This paper presents an effective measure to assess the visual
realism of DeepFake videos. We utilize an ensemble of two Convolutional Neural
Network (CNN) models: Eva and ConvNeXt. These models have been trained on the
DeepFake Game Competition (DFGC) 2022 dataset and aim to predict Mean Opinion
Scores (MOS) from DeepFake videos based on features extracted from sequences of
frames. Our method secured third place in the recent DFGC on Visual Realism
Assessment held in conjunction with the 2023 International Joint Conference on
Biometrics (IJCB 2023). We provide an overview of the models, data
preprocessing, and training procedures. We also report the performance of our
models against the competition's baseline model and discuss the implications of
our findings.
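The method described in the abstract, two CNN backbones each scoring sampled frames and their predictions fused into a single video-level MOS, can be sketched as below. This is only an illustrative sketch: the per-frame scores would in practice come from the trained Eva and ConvNeXt models, and the equal-weight averaging used here is an assumption, since the abstract does not specify the pooling or fusion scheme.

```python
# Illustrative sketch of ensemble MOS prediction for a DeepFake video.
# Assumption: each backbone (e.g. Eva, ConvNeXt) produces one realism
# score per sampled frame; the fusion weight below is hypothetical.

def predict_video_mos(frame_scores_a, frame_scores_b, weight_a=0.5):
    """Fuse two models' per-frame realism scores into one video-level MOS.

    frame_scores_a, frame_scores_b: per-frame scores from the two backbones.
    weight_a: ensemble weight given to model A (model B gets 1 - weight_a).
    """
    if len(frame_scores_a) != len(frame_scores_b):
        raise ValueError("both models must score the same set of frames")
    # Temporal pooling: average each model's scores over the frame sequence.
    mos_a = sum(frame_scores_a) / len(frame_scores_a)
    mos_b = sum(frame_scores_b) / len(frame_scores_b)
    # Ensemble fusion: weighted average of the two video-level predictions.
    return weight_a * mos_a + (1.0 - weight_a) * mos_b

# Usage: per-frame scores for a 4-frame clip from each (hypothetical) model.
print(predict_video_mos([3.0, 3.2, 2.8, 3.0], [3.4, 3.6, 3.2, 3.4]))
```

Averaging frame scores before fusing is one simple choice; learned temporal pooling or per-frame fusion would be equally plausible readings of the abstract.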
Related papers
- SDFR: Synthetic Data for Face Recognition Competition [51.9134406629509]
Large-scale face recognition datasets are collected by crawling the Internet without individuals' consent, raising legal, ethical, and privacy concerns.
Several recent works have proposed generating synthetic face recognition datasets to mitigate these concerns.
This paper presents the summary of the Synthetic Data for Face Recognition (SDFR) Competition held in conjunction with the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024).
The SDFR competition was split into two tasks, allowing participants to train face recognition systems using new synthetic datasets and/or existing ones.
arXiv Detail & Related papers (2024-04-06T10:30:31Z)
- Unmasking Deepfake Faces from Videos Using An Explainable Cost-Sensitive Deep Learning Approach [0.0]
Deepfake technology is widely used, raising serious concerns about the authenticity of digital media.
This study employs a resource-effective and transparent cost-sensitive deep learning method to effectively detect deepfake faces in videos.
arXiv Detail & Related papers (2023-12-17T14:57:10Z)
- Large-scale Robustness Analysis of Video Action Recognition Models [10.017292176162302]
We study robustness of six state-of-the-art action recognition models against 90 different perturbations.
The study reveals several interesting findings: 1) transformer-based models are consistently more robust than CNN-based models, 2) pretraining improves robustness more for transformer-based models than for CNN-based models, and 3) all of the studied models are robust to temporal perturbations on all datasets except SSv2.
arXiv Detail & Related papers (2022-07-04T13:29:34Z)
- Two-Stream Consensus Network: Submission to HACS Challenge 2021 Weakly-Supervised Learning Track [78.64815984927425]
The goal of weakly-supervised temporal action localization is to temporally locate and classify action of interest in untrimmed videos.
We adopt the two-stream consensus network (TSCN) as the main framework in this challenge.
Our solution ranked 2nd in this challenge, and we hope our method can serve as a baseline for future academic research.
arXiv Detail & Related papers (2021-06-21T03:36:36Z)
- Modeling Object Dissimilarity for Deep Saliency Prediction [86.14710352178967]
We introduce a detection-guided saliency prediction network that explicitly models the differences between multiple objects.
Our approach is general, allowing us to fuse our object dissimilarities with features extracted by any deep saliency prediction network.
arXiv Detail & Related papers (2021-04-08T16:10:37Z)
- Deepfake Detection Scheme Based on Vision Transformer and Distillation [4.716110829725784]
We propose a Vision Transformer model with distillation methodology for detecting fake videos.
We verify that the proposed scheme with patch embedding as input outperforms the state-of-the-art using the combined CNN features.
arXiv Detail & Related papers (2021-04-03T09:13:05Z)
- Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
arXiv Detail & Related papers (2021-02-11T11:28:00Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z) - SimAug: Learning Robust Representations from Simulation for Trajectory
Prediction [78.91518036949918]
We propose a novel approach to learn robust representation through augmenting the simulation training data.
We show that SimAug achieves promising results on three real-world benchmarks using zero real training data.
arXiv Detail & Related papers (2020-04-04T21:22:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.