Deepfake Detection: A Comprehensive Study from the Reliability Perspective
- URL: http://arxiv.org/abs/2211.10881v1
- Date: Sun, 20 Nov 2022 06:31:23 GMT
- Title: Deepfake Detection: A Comprehensive Study from the Reliability Perspective
- Authors: Tianyi Wang and Kam Pui Chow and Xiaojun Chang and Yinglong Wang
- Abstract summary: The Deepfake synthetic materials that have mushroomed across the internet have had a serious social impact.
This paper defines the research challenges of Deepfake detection in three aspects, namely, transferability, interpretability, and reliability.
- Score: 46.15242479794739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Deepfake synthetic materials that have mushroomed across the internet
have had a serious social impact on politicians, celebrities, and every human being
on earth. In this paper, we provide a thorough review of the existing models,
following the development history of Deepfake detection studies, and define the
research challenges of Deepfake detection in three aspects, namely,
transferability, interpretability, and reliability. While the transferability and
interpretability challenges have both been frequently discussed and addressed with
quantitative evaluations, the reliability issue has barely been considered, leaving
a lack of reliable evidence for real-life use and even for the prosecution of
Deepfake-related cases in court. We therefore conduct a model reliability study
using statistical random sampling and publicly available benchmark datasets to
qualitatively validate the detection performance of the existing models on
arbitrary Deepfake candidate suspects. A rarely discussed systematic data
pre-processing procedure is demonstrated, along with fair training and testing
experiments on the existing detection models. Case studies are further conducted
on real-life Deepfake cases involving different groups of victims, with the help
of reliably qualified detection models. The model reliability study provides a
workflow by which the detection models, once approved by authentication experts or
institutions, can serve as or assist with evidence in Deepfake forensic
investigations in court.
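The statistical sampling at the core of the reliability study can be sketched briefly. The following minimal Python example is an illustration of the general idea only, not the paper's exact scheme; the detector, candidate pool, and sample size are hypothetical placeholders. It draws a simple random sample of candidate suspects, scores a detector on it, and reports a normal-approximation 95% confidence interval:

```python
import math
import random

def reliability_estimate(detector, candidates, labels, n=384, z=1.96, seed=0):
    """Estimate detection accuracy on a simple random sample of candidates.

    detector: callable mapping a candidate to a predicted label (0=real, 1=fake).
    candidates, labels: the full pool of suspects and their ground truth.
    n: sample size; z: critical value for the confidence level (1.96 ~ 95%).
    """
    rng = random.Random(seed)
    idx = rng.sample(range(len(candidates)), n)        # simple random sample
    correct = sum(detector(candidates[i]) == labels[i] for i in idx)
    p = correct / n                                    # sample accuracy
    margin = z * math.sqrt(p * (1 - p) / n)            # normal-approximation CI
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical usage: a toy pool and a dummy detector that errs on every
# tenth candidate, so the true accuracy is about 0.9.
pool = list(range(10_000))
truth = [i % 2 for i in pool]
detector = lambda x: (1 - x % 2) if x % 10 == 0 else x % 2
acc, (lo, hi) = reliability_estimate(detector, pool, truth, n=384)
print(f"accuracy {acc:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

With n = 384, the margin of error is at most about 5% at 95% confidence for an arbitrarily large population, which is the standard survey-sampling choice of sample size.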
Related papers
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, with which we demonstrate the racial bias of public state-of-the-art (SOTA) methods.
Unlike existing forgery detection datasets, the self-constructed FairFD dataset contains a balanced racial ratio and images from diverse forgery generation methods, with the largest-scale set of subjects.
We design novel metrics, including the Approach Averaged Metric and the Utility Regularized Metric, which avoid deceptive results.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- Towards a Framework for Deep Learning Certification in Safety-Critical Applications Using Inherently Safe Design and Run-Time Error Detection [0.0]
We consider real-world problems arising in aviation and other safety-critical areas, and investigate their requirements for a certified model.
We establish a new framework towards deep learning certification based on (i) inherently safe design, and (ii) run-time error detection.
arXiv Detail & Related papers (2024-03-12T11:38:45Z)
- Demonstrative Evidence and the Use of Algorithms in Jury Trials [0.0]
We investigate how the use of bullet comparison algorithms and demonstrative evidence may affect juror perceptions of reliability, credibility, and understanding of expert witnesses and presented evidence.
We find that individuals overwhelmingly gave high Likert-scale ratings for reliability, credibility, and scientificity regardless of experimental condition.
arXiv Detail & Related papers (2023-11-18T04:21:01Z)
- GazeForensics: DeepFake Detection via Gaze-guided Spatial Inconsistency Learning [63.547321642941974]
We introduce GazeForensics, an innovative DeepFake detection method that utilizes gaze representation obtained from a 3D gaze estimation model.
Experimental results reveal that our proposed GazeForensics outperforms the current state-of-the-art methods.
arXiv Detail & Related papers (2023-11-13T04:48:33Z)
- Improving Cross-dataset Deepfake Detection with Deep Information Decomposition [57.284370468207214]
Deepfake technology poses a significant threat to security and social trust.
Existing detection methods suffer from sharp performance degradation when faced with cross-dataset scenarios.
We propose a deep information decomposition (DID) framework in this paper.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- The Challenges of Machine Learning for Trust and Safety: A Case Study on Misinformation Detection [0.8057006406834466]
We examine the disconnect between scholarship and practice in applying machine learning to trust and safety problems.
We survey literature on automated detection of misinformation across a corpus of 248 well-cited papers in the field.
We conclude that the current state-of-the-art in fully-automated detection has limited efficacy in detecting human-generated misinformation.
arXiv Detail & Related papers (2023-08-23T15:52:20Z)
- DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection [55.70982767084996]
A critical yet frequently overlooked challenge in the field of deepfake detection is the lack of a standardized, unified, comprehensive benchmark.
We present the first comprehensive benchmark for deepfake detection, called DeepfakeBench, which offers three key contributions.
DeepfakeBench contains 15 state-of-the-art detection methods, 9 deepfake datasets, a series of deepfake detection evaluation protocols and analysis tools, as well as comprehensive evaluations.
arXiv Detail & Related papers (2023-07-04T01:34:41Z)
- A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification [0.491574468325115]
We present the first large-scale empirical study enabling the benchmarking of confidence scoring functions.
The revelation of a simple softmax-response baseline as the overall best-performing method underlines the drastic shortcomings of current evaluation.
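For context, the softmax-response baseline referenced above is simply the maximum softmax probability used as a confidence score, with low values flagging likely failures. A minimal sketch follows; the logits and the 0.5 threshold are made-up placeholders, not values from the study:

```python
import numpy as np

def softmax_response(logits):
    """Max softmax probability per sample; logits shape (n_samples, n_classes)."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize the exponentials
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)                         # confidence in (0, 1]

# Made-up logits for three samples over four classes.
logits = np.array([[4.0, 0.5, 0.1, -1.0],   # confident prediction
                   [1.0, 0.9, 0.8, 0.7],    # ambiguous prediction
                   [0.0, 0.0, 0.0, 0.0]])   # uniform prediction
confidence = softmax_response(logits)
flagged = confidence < 0.5                  # crude failure-detection threshold
print(confidence, flagged)
```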
arXiv Detail & Related papers (2022-11-28T12:25:27Z)
- A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials [97.69553832500547]
This paper suggests a continual deepfake detection benchmark (CDDB) over a new collection of deepfakes from both known and unknown generative models.
We exploit multiple approaches to adapt multiclass incremental learning methods, commonly used in continual visual recognition, to the continual deepfake detection problem.
arXiv Detail & Related papers (2022-05-11T13:07:19Z)
- Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [3.706481388415728]
Data poisoning attacks tamper with the data samples fed to a model during the training phase, leading to a degradation in the model's accuracy during the inference phase.
This work compiles the most relevant insights and findings from the latest literature addressing this type of attack.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions.
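As a toy illustration of such training-phase tampering, the sketch below applies a label-flipping attack, one of the simplest poisoning strategies and not any specific attack from the survey, and compares a classifier trained on clean versus poisoned labels; the synthetic dataset and the 30% flip rate are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poison the training set: flip the labels of 30% of the samples.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print(f"clean: {clean.score(X_test, y_test):.3f}, "
      f"poisoned: {poisoned.score(X_test, y_test):.3f}")  # accuracy typically drops
```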
arXiv Detail & Related papers (2022-02-21T14:43:38Z)