Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection
- URL: http://arxiv.org/abs/2501.00700v2
- Date: Tue, 14 Jan 2025 09:04:35 GMT
- Title: Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection
- Authors: Hao Wang, Cheng Deng, Zhidong Zhao
- Abstract summary: We propose a knowledge-guided prompt learning method for deepfake facial image detection.
Specifically, we retrieve forgery-related prompts from large language models as expert knowledge to guide the optimization of learnable prompts.
Our proposed approach notably outperforms state-of-the-art methods.
- Score: 54.26588902144298
- License:
- Abstract: Recent generative models demonstrate impressive performance in synthesizing photographic images, making it hard for humans to distinguish them from pristine ones, especially realistic-looking synthetic facial images. Previous works mostly focus on mining discriminative artifacts from vast amounts of visual data. However, they usually lack the exploration of prior knowledge and rarely pay attention to the domain shift between training categories (e.g., natural and indoor objects) and testing ones (e.g., fine-grained human facial images), resulting in unsatisfactory detection performance. To address these issues, we propose a novel knowledge-guided prompt learning method for deepfake facial image detection. Specifically, we retrieve forgery-related prompts from large language models as expert knowledge to guide the optimization of learnable prompts. Besides, we design test-time prompt tuning to alleviate the domain shift, achieving significant performance improvements and facilitating application in real-world scenarios. Extensive experiments on the DeepFakeFaceForensics dataset show that our proposed approach notably outperforms state-of-the-art methods.
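The abstract describes the method only at a high level. The sketch below illustrates the general recipe of CLIP-style prompt learning regularized by LLM-retrieved "expert" prompts, plus an entropy-based test-time tuning objective. The variable names, the cosine-alignment regularizer, the weight `lambda_kg`, and the TPT-style entropy term are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeGuidedPrompts(nn.Module):
    """Learnable soft prompts regularized toward LLM-retrieved expert prompt embeddings.

    In a full pipeline the learnable context vectors `ctx` would be prepended to
    class-name tokens and passed through a frozen CLIP-style text encoder; here we
    keep only the pieces needed to show the knowledge-guidance idea.
    """

    def __init__(self, embed_dim: int, n_ctx: int, expert_prompt_emb: torch.Tensor):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)  # learnable prompt
        # Frozen expert knowledge: mean embedding of forgery-related prompts from an LLM.
        self.register_buffer("expert", expert_prompt_emb.mean(dim=0))

    def knowledge_loss(self, class_prompt_emb: torch.Tensor) -> torch.Tensor:
        # Keep the encoded class prompts close (in cosine distance) to the expert prompt.
        return 1.0 - F.cosine_similarity(class_prompt_emb, self.expert, dim=-1).mean()

def training_loss(image_feat, real_prompt_emb, fake_prompt_emb, labels, prompts, lambda_kg=0.5):
    """Cross-entropy over real/fake prompt similarities plus the knowledge-guidance term."""
    logits = torch.stack(
        [F.cosine_similarity(image_feat, real_prompt_emb, dim=-1),
         F.cosine_similarity(image_feat, fake_prompt_emb, dim=-1)], dim=-1) / 0.07
    ce = F.cross_entropy(logits, labels)
    kg = prompts.knowledge_loss(torch.stack([real_prompt_emb, fake_prompt_emb]))
    return ce + lambda_kg * kg

def test_time_entropy(logits_over_augs: torch.Tensor) -> torch.Tensor:
    """TPT-style test-time objective: minimize prediction entropy over augmented views
    of a single test image (one common way to adapt prompts to a shifted domain)."""
    probs = logits_over_augs.softmax(dim=-1).mean(dim=0)
    return -(probs * probs.clamp_min(1e-8).log()).sum()

# Toy usage with random stand-ins for encoder outputs.
prompts = KnowledgeGuidedPrompts(embed_dim=512, n_ctx=16, expert_prompt_emb=torch.randn(8, 512))
loss = training_loss(torch.randn(4, 512), torch.randn(512), torch.randn(512),
                     torch.randint(0, 2, (4,)), prompts)
```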
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- LightFFDNets: Lightweight Convolutional Neural Networks for Rapid Facial Forgery Detection [0.0]
This study focuses on image-processing-based forgery detection using the Fake-Vs-Real-Faces (Hard) [10] and 140k Real and Fake Faces [61] datasets.
Two lightweight deep learning models are proposed to conduct forgery detection using these images.
The proposed lightweight models are shown to detect facial forgeries accurately and with low computational cost (a minimal sketch of such a lightweight classifier follows this entry).
arXiv Detail & Related papers (2024-11-18T18:44:10Z)
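As noted above, the study proposes lightweight CNNs for forgery detection, but the summary gives no architectural details. The sketch below is a generic, assumed example of a small real-vs-fake face classifier in that spirit; it is not the actual LightFFDNets design.

```python
import torch
import torch.nn as nn

class TinyForgeryNet(nn.Module):
    """A deliberately small CNN for real-vs-fake face classification.

    Illustrative stand-in for a 'lightweight' detector; channel widths and
    depth are arbitrary choices, not reproduced from the paper.
    """

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Quick shape check on a dummy batch of 224x224 face crops.
if __name__ == "__main__":
    model = TinyForgeryNet()
    print(model(torch.randn(4, 3, 224, 224)).shape)  # torch.Size([4, 2])
```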
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are projected outside the subspace (a minimal sketch of this subspace idea follows this entry).
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
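The entry above explains the core mechanism: map real images into a dense region of feature space and treat images that fall outside it as generated. Below is a minimal one-class sketch of that idea using PCA over features of real images only and a reconstruction-error score; the backbone features, the retained-variance threshold, and the scoring rule are assumptions, not the paper's actual formulation.

```python
import numpy as np

def fit_real_subspace(real_feats: np.ndarray, var_keep: float = 0.95):
    """Fit a low-dimensional 'real image' subspace via PCA on real-image features only."""
    mean = real_feats.mean(axis=0)
    centered = real_feats - mean
    # SVD gives the principal directions of the real-image feature distribution.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = (s ** 2) / (s ** 2).sum()
    k = int(np.searchsorted(np.cumsum(var), var_keep)) + 1
    return mean, vt[:k]  # basis spanning most of the real-image variance

def anomaly_score(feat: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> float:
    """Reconstruction error w.r.t. the real-image subspace; higher suggests 'generated'."""
    centered = feat - mean
    proj = basis.T @ (basis @ centered)
    return float(np.linalg.norm(centered - proj))

# Usage sketch: features would come from any frozen backbone applied to real images.
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 64))
mean, basis = fit_real_subspace(real_feats)
print(anomaly_score(rng.normal(size=64), mean, basis))
```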
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Despite this promise, unsupervised learning on diffusion-generated images remains under-explored.
We introduce customized solutions that fully exploit the attention masks diffusion models provide for free.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- A survey on facial image deblurring [3.6775758132528877]
When a facial image is blurred, high-level vision tasks such as face recognition are strongly affected.
This paper surveys and summarizes recently published methods for facial image deblurring, most of which are based on deep learning.
We report the performance of classical methods on standard datasets and metrics, and briefly discuss the differences between model-based and learning-based methods.
arXiv Detail & Related papers (2023-02-10T02:24:56Z)
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities (a simplified sketch of the prototype-based formulation follows this entry).
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
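SeeABLE's summary above mentions pushing perturbed faces toward predefined prototypes with a regression-based bounded contrastive loss. The snippet below sketches one simplified, assumed form of that idea (a bounded cosine pull toward an assigned prototype, with a nearest-prototype out-of-distribution score); it is not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def prototype_pull_loss(embeddings, proto_ids, prototypes):
    """Pull L2-normalized embeddings of perturbed faces toward their assigned prototypes.
    The per-sample term 1 - cos(z, p) is bounded in [0, 2]; this is an assumed,
    simplified stand-in for SeeABLE's regression-based bounded contrastive loss."""
    z = F.normalize(embeddings, dim=-1)
    p = F.normalize(prototypes[proto_ids], dim=-1)
    return (1.0 - (z * p).sum(dim=-1)).mean()

def ood_score(embedding, prototypes):
    """One-class scoring: distance to the nearest prototype; large values suggest the
    face does not match any of the 'known perturbation' prototypes."""
    z = F.normalize(embedding, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return float(1.0 - (p @ z).max())

# Usage sketch with random stand-ins for an encoder's outputs.
prototypes = torch.randn(8, 128)    # 8 predefined prototypes
emb = torch.randn(32, 128)          # embeddings of perturbed faces
ids = torch.randint(0, 8, (32,))    # which prototype each face is pushed toward
print(prototype_pull_loss(emb, ids, prototypes), ood_score(torch.randn(128), prototypes))
```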
- From A Glance to "Gotcha": Interactive Facial Image Retrieval with Progressive Relevance Feedback [72.29919762941029]
We propose an end-to-end framework to retrieve facial images with relevance feedback progressively provided by the witness.
Without requiring any extra annotations, our model can be applied at the cost of only a little response effort from the witness.
arXiv Detail & Related papers (2020-07-30T18:46:25Z)