How Close are Other Computer Vision Tasks to Deepfake Detection?
- URL: http://arxiv.org/abs/2310.00922v1
- Date: Mon, 2 Oct 2023 06:32:35 GMT
- Title: How Close are Other Computer Vision Tasks to Deepfake Detection?
- Authors: Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
- Abstract summary: We present a new measurement, "model separability," for assessing a model's raw capacity to separate data in an unsupervised manner.
Our analysis shows that pre-trained face recognition models are more closely related to deepfake detection than other models.
We found that self-supervised models deliver the best results, but there is a risk of overfitting.
- Score: 42.79190870582115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we challenge the conventional belief that supervised ImageNet-trained models have strong generalizability and are suitable for use as feature extractors in deepfake detection. We present a new measurement, "model separability," for visually and quantitatively assessing a model's raw capacity to separate data in an unsupervised manner. We also present a systematic benchmark for determining the correlation between deepfake detection and other computer vision tasks using pre-trained models. Our analysis shows that pre-trained face recognition models are more closely related to deepfake detection than other models. Additionally, models trained using self-supervised methods are more effective in separation than those trained using supervised methods. After fine-tuning all models on a small deepfake dataset, we found that self-supervised models deliver the best results, but there is a risk of overfitting. Our results provide valuable insights that should help researchers and practitioners develop more effective deepfake detection models.
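The paper's exact formulation of "model separability" is not reproduced in this summary, so the following is only a minimal sketch of the idea under assumed choices: embed images with a frozen pre-trained model, cluster the embeddings without labels, and use a cluster-quality statistic (here, silhouette score over a 2-way k-means, an illustrative proxy) as the separability signal.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def separability_proxy(embeddings: np.ndarray) -> float:
    """embeddings: (n_samples, dim) features from a frozen pre-trained model."""
    # Cluster without labels; if the model separates real from fake well,
    # a 2-way clustering should recover two compact, distant groups.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
    # Silhouette score lies in [-1, 1]: higher = cleaner separation in feature space.
    return float(silhouette_score(embeddings, clusters))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(200, 128))  # toy "real" embeddings
    fake = rng.normal(3.0, 1.0, size=(200, 128))  # toy "fake" embeddings
    print(separability_proxy(np.vstack([real, fake])))  # well above 0 for this toy case
```

A stronger encoder should push this score up on real-vs-fake data without any fine-tuning, which matches the paper's framing of separability as a model's raw, unsupervised capacity.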
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Forte: Finding Outliers with Representation Typicality Estimation [0.14061979259370275]
Generative models can now produce synthetic data that is virtually indistinguishable from the real data used to train them.
Recent work on OOD detection has raised doubts that generative model likelihoods are optimal OOD detectors.
We introduce a novel approach that leverages representation learning and informative summary statistics based on manifold estimation.
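Forte's specific statistics are not given in this blurb; the sketch below shows only a common building block for representation-space typicality scoring, a kNN-distance statistic (`knn_outlier_score` is an illustrative name, not the paper's API).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_outlier_score(train_feats: np.ndarray, test_feats: np.ndarray, k: int = 5) -> np.ndarray:
    """Mean distance to the k nearest training representations.

    Samples far from the training manifold get high scores
    (higher = less typical = more likely OOD).
    """
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(test_feats)  # (n_test, k) distances
    return dists.mean(axis=1)
```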
arXiv Detail & Related papers (2024-10-02T08:26:37Z)
- Masked Conditional Diffusion Model for Enhancing Deepfake Detection [20.018495944984355]
We propose a Masked Conditional Diffusion Model (MCDM) for enhancing deepfake detection.
It generates a variety of forged faces from a masked pristine one, encouraging the deepfake detection model to learn generic and robust representations.
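The diffusion model itself is beyond a short sketch; the snippet below illustrates only the masking interface such a pipeline might use, with `generator` a hypothetical stand-in for a trained conditional model that resynthesizes the masked region.

```python
import numpy as np

def random_region_mask(h, w, min_frac=0.2, max_frac=0.5, rng=None):
    """Binary mask with one random rectangle zeroed out (0 = region to resynthesize)."""
    rng = rng or np.random.default_rng()
    mh = int(h * rng.uniform(min_frac, max_frac))
    mw = int(w * rng.uniform(min_frac, max_frac))
    top, left = rng.integers(0, h - mh + 1), rng.integers(0, w - mw + 1)
    mask = np.ones((h, w), dtype=np.float32)
    mask[top:top + mh, left:left + mw] = 0.0
    return mask

def make_pseudo_fake(image, generator, rng=None):
    """image: (H, W, C) float array; `generator` is a hypothetical stand-in
    for a trained conditional model filling in the masked region."""
    mask = random_region_mask(image.shape[0], image.shape[1], rng=rng)
    return generator(image * mask[..., None], mask)  # train the detector with label "fake"
```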
arXiv Detail & Related papers (2024-02-01T12:06:55Z)
- Deepfake Detection: A Comparative Analysis [2.644723682054489]
We evaluate eight supervised deep learning architectures and two transformer-based models pre-trained using self-supervised strategies on four benchmarks.
Our analysis includes intra-dataset and inter-dataset evaluations, examining the best performing models, generalisation capabilities, and impact of augmentations.
arXiv Detail & Related papers (2023-08-07T10:57:20Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method uses a mask to identify memorized atypical samples, then finetunes the model or prunes it with the introduced mask so that it forgets them.
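The paper's exact mask-construction procedure is not spelled out in this blurb; the sketch below uses per-sample loss rank as an assumed proxy for "atypical" and simply drops the flagged tail before finetuning.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def keep_mask(model, loader, keep_frac=0.9, device="cpu"):
    """True = keep the sample when finetuning; False = forget it.

    Assumes `loader` iterates the training set once, in order, unshuffled.
    Loss rank is only an assumed proxy for the paper's mask criterion.
    """
    model.eval()
    per_sample = []
    for x, y in loader:
        logits = model(x.to(device))
        per_sample.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    losses = torch.cat(per_sample)
    cutoff = torch.quantile(losses, keep_frac)
    return losses <= cutoff  # finetune (or prune) with the high-loss tail masked out
```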
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Effective Robustness against Natural Distribution Shifts for Models with Different Training Data [113.21868839569]
"Effective robustness" measures the extra out-of-distribution robustness beyond what can be predicted from the in-distribution (ID) performance.
We propose a new evaluation metric to evaluate and compare the effective robustness of models trained on different data.
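As a reference point for the metric being refined, here is the standard effective-robustness computation: fit the ID-to-OOD accuracy trend across a pool of baseline models and measure how far a candidate model sits above it (a plain linear fit is assumed here; logit-axis fits are also common).

```python
import numpy as np

def effective_robustness(id_acc, ood_acc, baseline_id, baseline_ood):
    """Extra OOD accuracy beyond what the ID->OOD trend predicts.

    baseline_id/baseline_ood: accuracies of reference models used to fit
    the trend; id_acc/ood_acc: the model being scored.
    """
    slope, intercept = np.polyfit(baseline_id, baseline_ood, deg=1)
    predicted_ood = slope * id_acc + intercept
    return ood_acc - predicted_ood  # > 0: more robust than the trend predicts
```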
arXiv Detail & Related papers (2023-02-02T19:28:41Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
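For context, a minimal version of the likelihood-threshold baseline that such metrics improve on, with the threshold calibrated on held-out in-distribution data (an assumed calibration scheme, not the paper's metric):

```python
import numpy as np

def likelihood_ood_flags(log_likelihoods: np.ndarray, id_val_lls: np.ndarray,
                         quantile: float = 0.05) -> np.ndarray:
    """Flag samples whose log-likelihood under a trained generative model
    falls below a quantile of held-out in-distribution likelihoods."""
    threshold = np.quantile(id_val_lls, quantile)
    return log_likelihoods < threshold  # True = flagged as OOD
```

A known caveat motivating more robust metrics: raw likelihoods from deep generative models can rank some OOD data higher than ID data, so a simple threshold like this can fail.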
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
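DoGo's exact losses and schedules are not given in this blurb; the sketch below shows the generic single-stage online-distillation mechanic in a supervised form (DoGo applies it in a self-supervised setup), where two peer networks each match the other's softened predictions.

```python
import torch
import torch.nn.functional as F

def online_kd_losses(logits_a, logits_b, targets, temperature=4.0, alpha=0.5):
    """Mutual distillation step for two peer networks trained together."""
    t = temperature
    task_a = F.cross_entropy(logits_a, targets)
    task_b = F.cross_entropy(logits_b, targets)
    # Each peer matches the other's softened, detached predictions.
    kd_a = F.kl_div(F.log_softmax(logits_a / t, dim=1),
                    F.softmax(logits_b.detach() / t, dim=1),
                    reduction="batchmean") * t * t
    kd_b = F.kl_div(F.log_softmax(logits_b / t, dim=1),
                    F.softmax(logits_a.detach() / t, dim=1),
                    reduction="batchmean") * t * t
    return task_a + alpha * kd_a, task_b + alpha * kd_b
```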
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model with a previously proposed model, based on an ensemble of simpler neural networks, that detects firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923]
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
The search strategy can also be learned in a self-supervised manner. We evaluate the proposed algorithm on multi-homography estimation and demonstrate accuracy superior to state-of-the-art methods.
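For orientation, the classical baseline CONSAC improves on is sequential RANSAC with uniform minimal-set sampling; a toy 2D-line version is sketched below, with the understanding that CONSAC replaces the uniform draw with a learned, data-conditioned sampling distribution.

```python
import numpy as np

def fit_line(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0, unit normal.
    a, b = q[1] - p[1], p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    return np.array([a, b, c]) / (np.hypot(a, b) + 1e-12)

def sequential_ransac(points, n_models=2, iters=500, thresh=0.05, seed=0):
    """points: (N, 2) array. Fit one model, remove its inliers, repeat."""
    rng = np.random.default_rng(seed)
    remaining, models = points.copy(), []
    for _ in range(n_models):
        if len(remaining) < 2:
            break
        best_line, best_inliers = None, None
        for _ in range(iters):
            i, j = rng.choice(len(remaining), size=2, replace=False)  # uniform minimal set
            line = fit_line(remaining[i], remaining[j])
            inliers = np.abs(remaining @ line[:2] + line[2]) < thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_line, best_inliers = line, inliers
        models.append(best_line)
        remaining = remaining[~best_inliers]  # greedily remove explained points
    return models
```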
arXiv Detail & Related papers (2020-01-08T17:37:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.