Enhancing Abnormality Identification: Robust Out-of-Distribution Strategies for Deepfake Detection
- URL: http://arxiv.org/abs/2506.02857v1
- Date: Tue, 03 Jun 2025 13:24:33 GMT
- Title: Enhancing Abnormality Identification: Robust Out-of-Distribution Strategies for Deepfake Detection
- Authors: Luca Maiano, Fabrizio Casadei, Irene Amerini
- Abstract summary: We propose two novel Out-Of-Distribution (OOD) detection approaches. The first approach is trained to reconstruct the input image, while the second incorporates an attention mechanism for detecting OODs. Our methods achieve promising results in deepfake detection and rank among the top-performing configurations on the benchmark.
- Score: 2.4851820343103035
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detecting deepfakes has become a critical challenge in Computer Vision and Artificial Intelligence. Despite significant progress in detection techniques, generalizing them to open-set scenarios continues to be a persistent difficulty. Neural networks are often trained under the closed-world assumption, but with new generative models constantly emerging, encountering data generated by models outside the training distribution is inevitable. To address these challenges, in this paper, we propose two novel Out-Of-Distribution (OOD) detection approaches. The first approach is trained to reconstruct the input image, while the second incorporates an attention mechanism for detecting OODs. Our experiments validate the effectiveness of the proposed approaches compared to existing state-of-the-art techniques. Our methods achieve promising results in deepfake detection and rank among the top-performing configurations on the benchmark, demonstrating their potential for robust, adaptable solutions in dynamic, real-world applications.
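As a rough illustration of the reconstruction-based approach, the sketch below scores a sample by its autoencoder reconstruction error: a model trained to reconstruct in-distribution faces should reconstruct samples from unseen generators (OOD) poorly. The architecture, input size, and threshold are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder for 3x64x64 face crops (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def ood_score(model, x):
    """Per-sample reconstruction error; higher suggests out-of-distribution input."""
    with torch.no_grad():
        recon = model(x)
    return ((x - recon) ** 2).flatten(1).mean(dim=1)

# Usage: flag samples whose error exceeds a threshold calibrated on in-distribution data.
model = ConvAutoencoder().eval()
batch = torch.rand(8, 3, 64, 64)   # placeholder face crops in [0, 1]
threshold = 0.05                   # assumed value; tune on a validation set
is_ood = ood_score(model, batch) > threshold
```

The attention-based variant described in the abstract could, for instance, weight the per-pixel error by an attention map before pooling; that detail is not shown here.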
Related papers
- Unmasking Synthetic Realities in Generative AI: A Comprehensive Review of Adversarially Robust Deepfake Detection Systems [4.359154048799454]
The proliferation of deepfakes (synthetic media) poses challenges to digital security, misinformation mitigation, and identity preservation. This systematic review evaluates state-of-the-art deepfake detection methodologies, emphasizing reproducible implementations for transparency and validation. We delineate two core paradigms: (1) detection of fully synthetic media leveraging statistical anomalies and hierarchical feature extraction, and (2) localization of manipulated regions within authentic content employing multi-modal cues such as visual artifacts and temporal inconsistencies.
arXiv Detail & Related papers (2025-07-24T22:05:52Z) - Robust AI-Generated Face Detection with Imbalanced Data [10.360215701635674]
Current deepfake detection techniques have evolved from CNN-based methods focused on local artifacts to more advanced approaches using vision transformers and multimodal models like CLIP. Despite recent progress, state-of-the-art deepfake detectors still face major challenges in handling distribution shifts from emerging generative models. We propose a framework that combines dynamic loss reweighting and ranking-based optimization, which achieves superior generalization and performance under imbalanced dataset conditions.
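To make the dynamic loss reweighting idea concrete, here is a minimal, assumed sketch using effective-number class weights recomputed from the observed class counts, so the scarcer class (typically fakes) contributes more to the loss; the paper's exact reweighting rule and its ranking-based objective are not reproduced.

```python
import torch
import torch.nn.functional as F

def class_balanced_weights(class_counts, beta=0.999):
    """Effective-number class weights; an assumed stand-in for the paper's
    dynamic reweighting rule."""
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    effective_num = 1.0 - torch.pow(beta, counts)
    weights = (1.0 - beta) / effective_num
    return weights / weights.sum() * len(counts)   # normalize to mean 1

def reweighted_loss(logits, labels, class_counts):
    """Cross-entropy where rare classes (e.g. scarce fakes) are up-weighted."""
    weights = class_balanced_weights(class_counts).to(logits.device)
    return F.cross_entropy(logits, labels, weight=weights)

# Usage with an assumed 10:1 real-to-fake imbalance.
logits = torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
loss = reweighted_loss(logits, labels, class_counts=[1000, 100])
```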
arXiv Detail & Related papers (2025-05-04T17:02:10Z) - Robust Distribution Alignment for Industrial Anomaly Detection under Distribution Shift [51.24522135151649]
Anomaly detection plays a crucial role in quality control for industrial applications. Existing methods attempt to address domain shifts by training generalizable models. Our proposed method demonstrates superior results compared with state-of-the-art anomaly detection and domain adaptation methods.
arXiv Detail & Related papers (2025-03-19T05:25:52Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, driven by advances in generative models, pose serious risks. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce a general and parameter-efficient approach to face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z) - UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize more strongly.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z) - Forward-Forward Learning achieves Highly Selective Latent Representations for Out-of-Distribution Detection in Fully Spiking Neural Networks [6.7236795813629]
Spiking Neural Networks (SNNs), inspired by biological systems, offer a promising avenue for overcoming limitations. In this work, we explore the potential of the spiking Forward-Forward Algorithm (FFA) to address these challenges. We propose a novel, gradient-free attribution method to detect features that drive a sample away from class distributions.
arXiv Detail & Related papers (2024-07-19T08:08:17Z) - Deep Anomaly Detection in Text [3.4265828682659705]
This thesis aims to develop a method for detecting anomalies by exploiting pretext tasks tailored for text corpora.
This approach greatly improves the state-of-the-art on two datasets, 20Newsgroups and AG News, for both semi-supervised and unsupervised anomaly detection.
arXiv Detail & Related papers (2023-12-14T22:04:43Z) - CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
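A loose sketch of the feature-decomposition idea: split a backbone feature into a forgery-related part used for classification and a residual part penalized for carrying the same information. The two linear heads and the cosine-orthogonality penalty below are assumptions standing in for the paper's information-decomposition objective, not its actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDecomposer(nn.Module):
    """Splits a feature vector into forgery-related and irrelevant parts (sketch)."""
    def __init__(self, dim=512, sub_dim=256):
        super().__init__()
        self.fake_head = nn.Linear(dim, sub_dim)         # deepfake-related subspace
        self.irrelevant_head = nn.Linear(dim, sub_dim)   # everything else
        self.classifier = nn.Linear(sub_dim, 2)          # real / fake

    def forward(self, feat):
        z_fake = self.fake_head(feat)
        z_irr = self.irrelevant_head(feat)
        logits = self.classifier(z_fake)                 # only fake-related info is used
        return logits, z_fake, z_irr

def decomposition_loss(logits, labels, z_fake, z_irr, lam=0.1):
    """Classification loss plus a penalty that decorrelates the two parts
    (an assumed proxy for the paper's decomposition objective)."""
    ce = F.cross_entropy(logits, labels)
    cos = F.cosine_similarity(z_fake, z_irr, dim=1).abs().mean()
    return ce + lam * cos

# Usage with placeholder backbone features.
model = FeatureDecomposer()
feats = torch.randn(4, 512)
labels = torch.randint(0, 2, (4,))
logits, z_f, z_i = model(feats)
loss = decomposition_loss(logits, labels, z_f, z_i)
```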
arXiv Detail & Related papers (2023-09-30T12:30:25Z) - Detecting Out-of-distribution Objects Using Neuron Activation Patterns [0.0]
We introduce Neuron Activation PaTteRns for out-of-distribution samples detection in Object detectioN (NAPTRON).
Our approach outperforms state-of-the-art methods without affecting in-distribution (ID) performance.
We have created the largest open-source benchmark for OOD object detection.
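A generic sketch of activation-pattern-based OOD scoring (not the NAPTRON implementation): binarize which units fire in a chosen layer for in-distribution samples, then flag a test sample whose firing pattern lies far, in Hamming distance, from the patterns recorded for its predicted class. Layer choice, distance, and thresholding are all assumptions here.

```python
import torch

def binary_pattern(activations):
    """Binarize a layer's activations: 1 where a unit fired, 0 otherwise."""
    return (activations > 0).float()

def fit_patterns(id_activations, id_labels, num_classes):
    """Store per-class binary activation patterns from in-distribution data."""
    return {c: binary_pattern(id_activations[id_labels == c]) for c in range(num_classes)}

def ood_score(test_activation, patterns, pred_class):
    """Smallest Hamming distance to any stored pattern of the predicted class;
    a large distance suggests the sample is out-of-distribution."""
    p = binary_pattern(test_activation.unsqueeze(0))
    ref = patterns[pred_class]
    return (p != ref).float().sum(dim=1).min().item()

# Usage with made-up penultimate-layer activations.
id_acts = torch.randn(100, 64)            # placeholder in-distribution activations
id_labels = torch.randint(0, 2, (100,))
patterns = fit_patterns(id_acts, id_labels, num_classes=2)
score = ood_score(torch.randn(64), patterns, pred_class=1)
```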
arXiv Detail & Related papers (2023-07-31T06:41:26Z) - A Robust Likelihood Model for Novelty Detection [8.766411351797883]
Current approaches to novelty or anomaly detection are based on deep neural networks.
We propose a new prior that aims at learning a robust likelihood for the novelty test, as a defense against attacks.
We also integrate the same prior with a state-of-the-art novelty detection approach.
arXiv Detail & Related papers (2023-06-06T01:02:31Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)