Exploring Decision-based Black-box Attacks on Face Forgery Detection
- URL: http://arxiv.org/abs/2310.12017v1
- Date: Wed, 18 Oct 2023 14:49:54 GMT
- Title: Exploring Decision-based Black-box Attacks on Face Forgery Detection
- Authors: Zhaoyu Chen, Bo Li, Kaixun Jiang, Shuang Wu, Shouhong Ding, Wenqiang
Zhang
- Abstract summary: Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
- Score: 53.181920529225906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face forgery generation technologies generate vivid faces, which have raised
public concerns about security and privacy. Many intelligent systems, such as
electronic payment and identity verification, rely on face forgery detection.
Although face forgery detection has successfully distinguished fake faces,
recent studies have demonstrated that face forgery detectors are very
vulnerable to adversarial examples. Meanwhile, existing attacks rely on network
architectures or training datasets instead of the predicted labels, which leads
to a gap in attacking deployed applications. To narrow this gap, we first
explore the decision-based attacks on face forgery detection. However, applying
existing decision-based attacks directly suffers from perturbation
initialization failure and low image quality. First, we propose cross-task
perturbation to handle initialization failures by utilizing the high
correlation of face features on different tasks. Then, inspired by using
frequency cues by face forgery detection, we propose the frequency
decision-based attack. We add perturbations in the frequency domain and then
constrain the visual quality in the spatial domain. Finally, extensive
experiments demonstrate that our method achieves state-of-the-art attack
performance on FaceForensics++, CelebDF, and industrial APIs, with high query
efficiency and guaranteed image quality. Further, the fake faces by our method
can pass face forgery detection and face recognition, which exposes the
security problems of face forgery detectors.
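The frequency decision-based idea described in the abstract can be sketched minimally: perturb the image's frequency spectrum, then keep a candidate only if the detector's hard label flips. This is a hedged illustration under simplifying assumptions, not the authors' exact algorithm; `is_fake` is a hypothetical stand-in for a black-box detector that returns only the predicted label.

```python
import numpy as np

def frequency_perturb(image, rng, sigma=0.05):
    """Multiply the image's FFT spectrum by small random complex noise
    (assumed sketch of a frequency-domain perturbation)."""
    spectrum = np.fft.fft2(image)
    noise = rng.normal(0, sigma, spectrum.shape) \
        + 1j * rng.normal(0, sigma, spectrum.shape)
    perturbed = np.fft.ifft2(spectrum * (1 + noise)).real
    # Constrain the result back to a valid image range in the spatial domain.
    return np.clip(perturbed, 0.0, 1.0)

def decision_based_attack(image, is_fake, steps=100, seed=0):
    """Query-only loop: accept a candidate only when the detector's
    hard label flips from 'fake' to 'real'."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        candidate = frequency_perturb(image, rng)
        if not is_fake(candidate):  # detector now says 'real'
            return candidate
    return None
```

This sketch omits the paper's cross-task initialization and spatial-domain quality constraint; a practical attack would also minimize the perturbation norm rather than return the first successful candidate.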
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Learning Expressive And Generalizable Motion Features For Face Forgery Detection [52.54404879581527]
We propose an effective sequence-based forgery detection framework based on an existing video classification method.
To make the motion features more expressive for manipulation detection, we propose an alternative motion consistency block.
We make a general video classification network achieve promising results on three popular face forgery datasets.
arXiv Detail & Related papers (2024-03-08T09:25:48Z)
- Deep Learning based CNN Model for Classification and Detection of Individuals Wearing Face Mask [0.0]
This project utilizes deep learning to create a model that can detect face masks in real-time streaming video as well as images.
The primary focus of this research is to enhance security, particularly in sensitive areas.
The research unfolds in three stages: image pre-processing, image cropping, and image classification.
arXiv Detail & Related papers (2023-11-17T09:24:04Z)
- Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods suffer from unsatisfactory generalization ability to determine the authenticity in the unseen domain.
We propose a novel Attention Consistency Refined masked frequency forgery representation model toward a generalizing face forgery detection algorithm (ACMF).
Experiment results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z)
- Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
New attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they are still far behind human intelligence in perception and recognition.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
- Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z)
- Differential Anomaly Detection for Facial Images [15.54185745912878]
Identity attacks pose a big security threat as they can be used to gain unauthorised access and spread misinformation.
Most algorithms for detecting identity attacks generalise poorly to attack types that are unknown at training time.
We introduce a differential anomaly detection framework in which deep face embeddings are first extracted from pairs of images.
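The differential framework described above can be sketched minimally: the two images are compared through their embeddings rather than classified individually. This is a hedged sketch under stated assumptions; `embed` is a hypothetical stand-in for a deep face-embedding network, and the threshold is illustrative.

```python
import numpy as np

def embed(image):
    """Hypothetical stand-in for a deep face-embedding network:
    here simply a flattened, L2-normalised pixel vector."""
    v = np.asarray(image, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def differential_anomaly_score(reference, probe):
    """Score an image pair by the distance between their embeddings;
    a large distance suggests an identity attack (e.g. morphing)."""
    return float(np.linalg.norm(embed(reference) - embed(probe)))

def is_identity_attack(reference, probe, threshold=0.5):
    """Flag the pair as an attack when the differential score is large."""
    return differential_anomaly_score(reference, probe) > threshold
```

The design point is that scoring the *difference* of embeddings, rather than each image alone, is what lets the detector generalize to attack types unseen at training time.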
arXiv Detail & Related papers (2021-10-07T13:45:13Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images that are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suited for image anti-forensics, in the transformed color domain by considering visual perception.
arXiv Detail & Related papers (2020-10-29T18:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.