Local Relation Learning for Face Forgery Detection
- URL: http://arxiv.org/abs/2105.02577v1
- Date: Thu, 6 May 2021 10:44:32 GMT
- Title: Local Relation Learning for Face Forgery Detection
- Authors: Shen Chen, Taiping Yao, Yang Chen, Shouhong Ding, Jilin Li, Rongrong Ji
- Abstract summary: We propose a novel perspective of face forgery detection via local relation learning.
Specifically, we propose a Multi-scale Patch Similarity Module (MPSM), which measures the similarity between features of local regions.
We also propose an RGB-Frequency Attention Module (RFAM) to fuse information in both RGB and frequency domains for more comprehensive local feature representation.
- Score: 73.73130683091154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of facial manipulation techniques, face forgery
detection has received considerable attention in digital media forensics due to
security concerns. Most existing methods formulate face forgery detection as a
classification problem and utilize binary labels or manipulated region masks as
supervision. However, without considering the correlation between local
regions, these global supervisions are insufficient to learn a generalized
feature and are prone to overfitting. To address this issue, we propose a novel
perspective of face forgery detection via local relation learning.
Specifically, we propose a Multi-scale Patch Similarity Module (MPSM), which
measures the similarity between features of local regions and forms a robust
and generalized similarity pattern. Moreover, we propose an RGB-Frequency
Attention Module (RFAM) to fuse information in both RGB and frequency domains
for more comprehensive local feature representation, which further improves the
reliability of the similarity pattern. Extensive experiments show that the
proposed method consistently outperforms state-of-the-art methods on widely
used benchmarks. Furthermore, detailed visualizations show the robustness and
interpretability of our method.
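The core of the MPSM described above is measuring similarity between features of local regions to form a similarity pattern. A minimal sketch of that idea, assuming non-overlapping patches pooled into channel vectors and cosine similarity as the metric (the paper's exact patch extraction, scales, and fusion are not specified here and these choices are illustrative):

```python
import numpy as np

def patch_similarity(feature_map, patch_size):
    """Split a C x H x W feature map into non-overlapping patch_size x
    patch_size patches, average-pool each patch into a C-dim vector, and
    return the pairwise cosine-similarity matrix between patches."""
    c, h, w = feature_map.shape
    ph, pw = h // patch_size, w // patch_size
    patches = []
    for i in range(ph):
        for j in range(pw):
            block = feature_map[:, i * patch_size:(i + 1) * patch_size,
                                   j * patch_size:(j + 1) * patch_size]
            patches.append(block.reshape(c, -1).mean(axis=1))
    p = np.stack(patches)                                  # (N, C)
    p /= np.linalg.norm(p, axis=1, keepdims=True) + 1e-8   # unit-normalize
    return p @ p.T                                         # (N, N) similarity pattern

# "Multi-scale": compute the pattern at several patch sizes; the specific
# scales (4, 8, 16) are an assumption for illustration.
feat = np.random.randn(64, 32, 32)
patterns = {s: patch_similarity(feat, s) for s in (4, 8, 16)}
```

The intuition is that patches within a pristine face correlate with each other differently than patches straddling a manipulated region, so the similarity matrix itself becomes the discriminative pattern.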
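The RFAM fuses RGB-domain and frequency-domain information with attention. A toy sketch of such a fusion, assuming an FFT-magnitude stand-in for the frequency branch and a per-channel softmax gate over the two domains (the actual RFAM's attention design is not reproduced here):

```python
import numpy as np

def rgb_frequency_fusion(rgb_feat, freq_feat):
    """Fuse RGB-domain and frequency-domain feature maps (each C x H x W)
    with per-channel attention weights; this gating scheme is a simplified
    assumption, not the paper's exact RFAM."""
    stacked = np.stack([rgb_feat, freq_feat])        # (2, C, H, W)
    desc = stacked.mean(axis=(2, 3))                 # (2, C) channel descriptors
    attn = np.exp(desc - desc.max(axis=0))           # stabilized softmax
    attn /= attn.sum(axis=0)                         # weights over the two domains
    return (attn[:, :, None, None] * stacked).sum(axis=0)   # (C, H, W)

# Frequency branch: per-channel 2-D FFT log-magnitude (an illustrative
# stand-in for a learned frequency representation).
rgb = np.random.randn(8, 16, 16)
freq = np.log1p(np.abs(np.fft.fft2(rgb, axes=(1, 2))))
fused = rgb_frequency_fusion(rgb, freq)
```

Because the gate is a convex combination per channel, each fused value lies between its RGB and frequency counterparts, which keeps the fused local features comparable across domains.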
Related papers
- Mixture-of-Noises Enhanced Forgery-Aware Predictor for Multi-Face Manipulation Detection and Localization [52.87635234206178]
This paper proposes a new framework, namely MoNFAP, specifically tailored for multi-face manipulation detection and localization.
The framework incorporates two novel modules: the Forgery-aware Unified Predictor (FUP) Module and the Mixture-of-Noises Module (MNM).
arXiv Detail & Related papers (2024-08-05T08:35:59Z)
- Exploiting Facial Relationships and Feature Aggregation for Multi-Face Forgery Detection [21.976412231332798]
Existing methods predominantly concentrate on single-face manipulation detection, leaving the more intricate and realistic realm of multi-face forgeries relatively unexplored.
This paper proposes a novel framework explicitly tailored for multi-face forgery detection, filling a critical gap in the current research.
Our experimental results on two publicly available multi-face forgery datasets demonstrate that the proposed approach achieves state-of-the-art performance in multi-face forgery detection scenarios.
arXiv Detail & Related papers (2023-10-07T15:09:18Z)
- Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods suffer from unsatisfactory generalization ability to determine the authenticity in the unseen domain.
We propose a novel Attention Consistency refined Masked frequency Forgery representation model toward a generalizing face forgery detection algorithm (ACMF).
Experiment results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z) - Detect Any Deepfakes: Segment Anything Meets Face Forgery Detection and
Localization [30.317619885984005]
We introduce the well-trained vision segmentation foundation model, i.e., the Segment Anything Model (SAM), into face forgery detection and localization.
Based on SAM, we propose the Detect Any Deepfakes (DADF) framework with the Multiscale Adapter.
The proposed framework seamlessly integrates end-to-end forgery localization and detection optimization.
arXiv Detail & Related papers (2023-06-29T16:25:04Z) - Multi-spectral Class Center Network for Face Manipulation Detection and Localization [52.569170436393165]
We propose a novel Multi-Spectral Class Center Network (MSCCNet) for face manipulation detection and localization.
Based on the features of different frequency bands, the MSCC module collects multi-spectral class centers and computes pixel-to-class relations.
Applying multi-spectral class-level representations suppresses the semantic information of visual concepts that is insensitive to the manipulated regions of forged images.
arXiv Detail & Related papers (2023-05-18T08:09:20Z) - Hierarchical Forgery Classifier On Multi-modality Face Forgery Clues [61.37306431455152]
We propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD).
The HFC-MFFD learns a robust patch-based hybrid representation to enhance forgery authentication in multiple-modality scenarios.
A specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance.
arXiv Detail & Related papers (2022-12-30T10:54:29Z) - Cross-Domain Local Characteristic Enhanced Deepfake Video Detection [18.430287055542315]
Deepfake detection has attracted increasing attention due to security concerns.
Many detectors cannot achieve accurate results when detecting unseen manipulations.
We propose a novel pipeline, Cross-Domain Local Forensics, for more general deepfake video detection.
arXiv Detail & Related papers (2022-11-07T07:44:09Z) - MC-LCR: Multi-modal contrastive classification by locally correlated
representations for effective face forgery detection [11.124150983521158]
We propose a novel framework named Multi-modal Contrastive Classification by Locally Correlated Representations (MC-LCR).
Our MC-LCR aims to amplify implicit local discrepancies between authentic and forged faces from both spatial and frequency domains.
We achieve state-of-the-art performance and demonstrate the robustness and generalization of our method.
arXiv Detail & Related papers (2021-10-07T09:24:12Z) - Generalizing Face Forgery Detection with High-frequency Features [63.33397573649408]
Current CNN-based detectors tend to overfit to method-specific color textures and thus fail to generalize.
We propose to utilize high-frequency noise for face forgery detection, via two key modules.
The first is a multi-scale high-frequency feature extraction module that extracts high-frequency noise at multiple scales.
The second is a residual-guided spatial attention module that guides the low-level RGB feature extractor to concentrate more on forgery traces from a new perspective.
arXiv Detail & Related papers (2021-03-23T08:19:21Z)
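The high-frequency route in the last entry can be illustrated with a simple residual trick: subtract a blurred copy of the image from the original at several blur radii, leaving only the high-frequency component at each scale. This box-blur residual is only an illustrative stand-in for the learned multi-scale extractors these papers describe:

```python
import numpy as np

def high_frequency_residual(gray, scales=(1, 2, 4)):
    """Extract high-frequency 'noise' at multiple scales by subtracting a
    box-blurred copy of an H x W image from itself; scale s uses a
    (2s+1) x (2s+1) box filter with edge-replication padding."""
    residuals = []
    for s in scales:
        k = 2 * s + 1
        padded = np.pad(gray, s, mode="edge")
        blurred = np.zeros(gray.shape, dtype=float)
        for i in range(k):                       # accumulate the box window
            for j in range(k):
                blurred += padded[i:i + gray.shape[0], j:j + gray.shape[1]]
        blurred /= k * k
        residuals.append(gray - blurred)         # high-frequency component
    return np.stack(residuals)                   # (len(scales), H, W)

img = np.random.rand(32, 32)
res = high_frequency_residual(img)
```

Smooth (method-specific) color textures mostly cancel in these residuals, which is why high-frequency cues tend to generalize better across manipulation methods.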
This list is automatically generated from the titles and abstracts of the papers in this site.