A Novel Framework for Assessment of Learning-based Detectors in
Realistic Conditions with Application to Deepfake Detection
- URL: http://arxiv.org/abs/2203.11797v1
- Date: Tue, 22 Mar 2022 15:03:56 GMT
- Title: A Novel Framework for Assessment of Learning-based Detectors in
Realistic Conditions with Application to Deepfake Detection
- Authors: Yuhang Lu, Ruizhi Luo, Touradj Ebrahimi
- Abstract summary: This paper proposes a rigorous framework to assess the performance of learning-based detectors in more realistic situations.
Inspired by the assessment results, a data augmentation strategy based on a natural image degradation process is designed.
- Score: 11.287342793740876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks have shown remarkable results on multiple
detection tasks. Despite the significant progress, the performance of such
detectors is often assessed in public benchmarks under non-realistic
conditions. Specifically, the impact of conventional distortions and processing
operations such as compression, noise, and enhancement is not sufficiently
studied. This paper proposes a rigorous framework to assess performance of
learning-based detectors in more realistic situations. An illustrative example
is shown in the context of deepfake detection. Inspired by the assessment
results, a data augmentation strategy based on a natural image degradation
process is designed, which significantly improves the generalization ability
of two deepfake detectors.
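To make the augmentation idea concrete, the following is a minimal sketch of a degradation-based augmentation step, assuming images are float NumPy arrays in [0, 1]. The specific operations (noise, blur, resampling, contrast change) and their parameter ranges are illustrative assumptions standing in for real-world processing, not the exact recipe used in the paper.

```python
import numpy as np

def degrade(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one randomly chosen, randomly parameterized degradation.

    img: grayscale image as a float array in [0, 1], shape (H, W).
    The operation set below mimics a typical imaging/processing chain;
    ranges are illustrative, not taken from the paper.
    """
    op = rng.integers(0, 4)
    if op == 0:  # additive Gaussian noise, as from a sensor
        sigma = rng.uniform(0.01, 0.05)
        img = img + rng.normal(0.0, sigma, img.shape)
    elif op == 1:  # separable box blur along both spatial axes
        k = int(rng.integers(2, 5))
        kernel = np.ones(k) / k
        blur = lambda r: np.convolve(r, kernel, mode="same")
        img = np.apply_along_axis(blur, 0, img)
        img = np.apply_along_axis(blur, 1, img)
    elif op == 2:  # nearest-neighbour down/up-sampling, mimicking resizing
        f = int(rng.integers(2, 4))
        small = img[::f, ::f]
        up = np.repeat(np.repeat(small, f, axis=0), f, axis=1)
        img = up[: img.shape[0], : img.shape[1]]
    else:  # contrast scaling around mid-grey, mimicking enhancement
        c = rng.uniform(0.7, 1.3)
        img = 0.5 + c * (img - 0.5)
    return np.clip(img, 0.0, 1.0)

# Augment a small batch: each training image gets a random degradation.
rng = np.random.default_rng(0)
batch = rng.random((4, 32, 32))
augmented = np.stack([degrade(x, rng) for x in batch])
```

In a training pipeline, a step like this would be applied on the fly so the detector sees a different degradation of each sample every epoch, which is what drives the generalization gains the abstract reports.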
Related papers
- Improving Cross-dataset Deepfake Detection with Deep Information
Decomposition [57.284370468207214]
Deepfake technology poses a significant threat to security and social trust.
Existing detection methods suffer from sharp performance degradation when faced with cross-dataset scenarios.
We propose a deep information decomposition (DID) framework in this paper.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Attention Consistency Refined Masked Frequency Forgery Representation for
Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods suffer from unsatisfactory generalization when determining authenticity in unseen domains.
We propose a novel Attention Consistency Refined masked frequency forgery representation model toward a generalizing face forgery detection algorithm (ACMF).
Experiment results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z)
- Assessment Framework for Deepfake Detection in Real-world Situations [13.334500258498798]
Deep learning-based deepfake detection methods have exhibited remarkable performance.
The impact of various image and video processing operations and typical workflow distortions on detection accuracy has not been systematically measured.
A more reliable assessment framework is proposed to evaluate the performance of learning-based deepfake detectors in more realistic settings.
arXiv Detail & Related papers (2023-04-12T19:09:22Z)
- Impact of Video Processing Operations in Deepfake Detection [13.334500258498798]
Digital face manipulation in video has attracted extensive attention due to the increased risk to public trust.
Deep learning-based deepfake detection methods have been developed and have shown impressive results.
The performance of these detectors is often evaluated using benchmarks that hardly reflect real-world situations.
arXiv Detail & Related papers (2023-03-30T09:24:17Z)
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing
Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
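The adversarial training idea summarized above can be sketched on a toy model. Below, a logistic-regression "tagger" is trained on inputs perturbed by the Fast Gradient Sign Method (FGSM); all function names, the FGSM choice, and the parameter values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: move each input in the direction that increases the
    binary cross-entropy loss, within an L-infinity budget eps."""
    p = sigmoid(x @ w + b)
    grad_x = np.outer(p - y, w)  # d(loss)/dx = (p - y) * w per sample
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, epochs=200, lr=0.1, eps=0.1, seed=0):
    """Gradient descent where each step trains on freshly attacked inputs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, x.shape[1])
    b = 0.0
    for _ in range(epochs):
        x_adv = fgsm_perturb(x, y, w, b, eps)  # attack the current model
        p = sigmoid(x_adv @ w + b)             # then train on the perturbed batch
        w -= lr * x_adv.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b
```

The key design point, which carries over to the flavor-tagging setting, is that the attack is recomputed against the current model at every step, so the classifier is always optimized against its own worst-case perturbations rather than a fixed set of corrupted samples.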
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- A New Approach to Improve Learning-based Deepfake Detection in Realistic
Conditions [13.334500258498798]
Deep convolutional neural networks have achieved exceptional results on multiple detection and recognition tasks.
The impact of conventional distortions and processing operations found in imaging, such as compression, noise, and enhancement, is not sufficiently studied.
This paper proposes a more effective data augmentation scheme based on real-world image degradation process.
arXiv Detail & Related papers (2022-03-22T15:16:54Z)
- Impact of Benign Modifications on Discriminative Performance of Deepfake
Detectors [11.881119750753648]
Deepfakes are increasingly popular both in good-faith applications such as entertainment and in maliciously intended manipulations such as image and video forgery.
A large number of deepfake detectors have been proposed recently in order to identify such content.
This paper proposes a more rigorous and systematic framework to assess the performance of deepfake detectors in more realistic situations.
arXiv Detail & Related papers (2021-11-14T22:50:39Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Lips Don't Lie: A Generalisable and Robust Approach to Face Forgery
Detection [118.37239586697139]
LipForensics is a detection approach capable of both generalising to unseen manipulations and withstanding various distortions.
It consists in first pretraining a spatio-temporal network to perform visual speech recognition (lipreading).
A temporal network is subsequently finetuned on fixed mouth embeddings of real and forged data in order to detect fake videos based on mouth movements without over-fitting to low-level, manipulation-specific artefacts.
arXiv Detail & Related papers (2020-12-14T15:53:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.