Imperceptible Face Forgery Attack via Adversarial Semantic Mask
- URL: http://arxiv.org/abs/2406.10887v1
- Date: Sun, 16 Jun 2024 10:38:11 GMT
- Title: Imperceptible Face Forgery Attack via Adversarial Semantic Mask
- Authors: Decheng Liu, Qixuan Su, Chunlei Peng, Nannan Wang, Xinbo Gao
- Abstract summary: We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
- Score: 59.23247545399068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of generative models, face forgery detection has drawn increasing attention. Researchers find that existing face forgery detection models are still vulnerable to adversarial examples crafted with pixel perturbations over the global image. However, such adversarial samples remain unsatisfactory because global perturbations are easy to detect. To address these problems, we propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility. Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for good stealthiness. The designed adaptive semantic mask selection strategy effectively leverages the class activation values of different semantic regions, further ensuring attack transferability and stealthiness. Extensive experiments on a public face forgery dataset show that the proposed method outperforms several representative adversarial attack methods. The code is publicly available at https://github.com/clawerO-O/ASMA.
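The authors' reference implementation lives at the GitHub link above; as a reading aid only, here is a minimal sketch of the two ideas the abstract names: selecting semantic regions by class-activation values and confining the perturbation to the resulting mask. Everything below is an illustrative assumption, not the paper's API: `detector` is any real/fake classifier, `region_masks` are binary per-region masks from a face parser, `cam` is a class-activation map, and a plain PGD loop stands in for the generative model the paper actually trains.

```python
import torch
import torch.nn.functional as F

def select_semantic_mask(cam, region_masks, k=2):
    """Adaptive mask selection: keep the k semantic regions (eyes, mouth, ...)
    with the highest mean class-activation value and merge them.
    cam: (H, W) activation map; region_masks: dict name -> (H, W) binary mask."""
    scores = {name: ((cam * m).sum() / m.sum().clamp(min=1)).item()
              for name, m in region_masks.items()}
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    mask = torch.zeros_like(cam)
    for name in top:
        mask = torch.clamp(mask + region_masks[name], 0, 1)
    return mask

def masked_attack(detector, image, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD stand-in for the paper's generative model: the perturbation is
    multiplied by the semantic mask, so it never leaves the chosen regions.
    Pushes the detector's prediction toward 'real' (assumed label 0)."""
    real = torch.zeros(image.size(0), dtype=torch.long, device=image.device)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(detector(image + delta * mask), real)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: look 'real'
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0, 1)
```

Confining `delta` to the mask is what keeps the change local and hard to spot; the CAM-weighted selection corresponds to what the abstract calls the adaptive semantic mask selection strategy.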
Related papers
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector which, when processed by the conditional diffusion model, yields a natural adversarial sample that the target model misclassifies.
Experiments show that the generated adversarial images are of high image quality, raising concerns that harmful content can bypass safety classifiers.
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
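A minimal sketch of the EvoSeed-style seed search described above, using the `cma` package. `generate` (seed vector to image through the conditional diffusion model) and `classifier` (image to class probabilities) are hypothetical placeholders for the models the paper assumes.

```python
import numpy as np
import cma  # pip install cma

def evoseed_search(dim, generate, classifier, true_label, sigma=0.5, iters=50):
    """Black-box search for a seed vector whose generated image is
    misclassified by the target model."""
    es = cma.CMAEvolutionStrategy(np.zeros(dim), sigma)
    for _ in range(iters):
        seeds = es.ask()                      # sample candidate seed vectors
        # Fitness: probability of the true class -- lower means the
        # generated image is closer to being misclassified.
        losses = [classifier(generate(np.asarray(z)))[true_label]
                  for z in seeds]
        es.tell(seeds, losses)                # update the search distribution
        if es.stop():
            break
    return np.asarray(es.result.xbest)        # best adversarial seed found
```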
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
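Below is a simplified sketch of the latent-space idea: perturb an autoencoder latent with identity gradients and decode, so pixel-level artifacts are smoothed away by the decoder. The paper's identity-sensitive conditioned diffusion model is replaced here by a plain gradient loop; `encoder`, `decoder`, and `face_embed` are assumed pretrained placeholders.

```python
import torch
import torch.nn.functional as F

def latent_identity_attack(encoder, decoder, face_embed, image, target_image,
                           alpha=0.01, steps=20):
    """Perturb the latent code, not the pixels: every edit passes back
    through the decoder, which hides pixel-level artifacts. Impersonation
    objective: pull the decoded face toward the target identity."""
    z = encoder(image).detach().requires_grad_(True)
    target_feat = face_embed(target_image).detach()
    for _ in range(steps):
        loss = 1 - F.cosine_similarity(face_embed(decoder(z)),
                                       target_feat).mean()
        loss.backward()
        with torch.no_grad():
            z -= alpha * z.grad.sign()
        z.grad.zero_()
    return decoder(z).detach().clamp(0, 1)
```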
- Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces [6.105361899083232]
We show that it is possible to successfully generate adversarial fake faces with a specified set of attributes.
We propose a framework to search for adversarial latent codes within the feature space of StyleGAN.
We also propose a meta-learning based optimization strategy to achieve transferable performance on unknown target models.
arXiv Detail & Related papers (2023-06-22T17:59:55Z)
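A rough sketch of searching StyleGAN's latent space for an adversarial face, omitting the paper's meta-learning component. `generator` (W-space latent to image) and `forensic_clf` (real/fake logits) are assumed pretrained placeholders.

```python
import torch
import torch.nn.functional as F

def attack_stylegan_latent(generator, forensic_clf, w_init, lr=0.05, steps=100):
    """Search W space for a latent whose rendered face the forensic
    classifier labels 'real' (assumed label 0), while staying close to
    the initial latent so the requested attributes are preserved."""
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    real = torch.zeros(w_init.size(0), dtype=torch.long, device=w_init.device)
    for _ in range(steps):
        loss = F.cross_entropy(forensic_clf(generator(w)), real)
        loss = loss + 0.1 * F.mse_loss(w, w_init)  # attribute-preservation prior
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```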
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds on a substitute model trained for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
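A sketch of the query-free transfer idea, assuming `substitute` is a white-box reconstruction network standing in for the inaccessible DeepFake model:

```python
import torch
import torch.nn.functional as F

def substitute_transfer_attack(substitute, image, eps=8 / 255, alpha=2 / 255,
                               steps=10):
    """Craft the perturbation entirely on the white-box substitute (no
    queries to the real model), then hand the result to the black-box
    DeepFake model. Maximizing the substitute's reconstruction error is
    the proxy objective that transfers."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(substitute(image + delta), image)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend: maximize distortion
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (image + delta.detach()).clamp(0, 1)
```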
- Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z)
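A minimal sketch of a frequency-domain attack in the spirit of this paper: run PGD on DCT coefficients instead of pixels, using a differentiable DCT built from a matrix product. The hybrid spatial-plus-frequency meta-attack is omitted; `detector` is an assumed real/fake classifier, and images are assumed square.

```python
import math
import torch
import torch.nn.functional as F

def dct_matrix(n):
    """Orthonormal DCT-II matrix M, so freq = M @ x @ M.T and
    pixels = M.T @ freq @ M (both transforms are differentiable)."""
    i = torch.arange(n).float()                    # spatial index
    j = torch.arange(n).float().unsqueeze(1)       # frequency index
    M = torch.cos(math.pi * (2 * i + 1) * j / (2 * n)) * math.sqrt(2.0 / n)
    M[0] *= 1 / math.sqrt(2.0)                     # normalize the DC row
    return M

def frequency_attack(detector, image, eps=0.02, alpha=0.005, steps=10):
    """PGD on DCT coefficients rather than pixels. Note eps bounds
    coefficients, not pixel values, so it is not directly comparable
    to a pixel-space budget. Assumes 'real' is label 0."""
    M = dct_matrix(image.size(-1))
    freq = M @ image @ M.t()
    real = torch.zeros(image.size(0), dtype=torch.long, device=image.device)
    delta = torch.zeros_like(freq, requires_grad=True)
    for _ in range(steps):
        adv = (M.t() @ (freq + delta) @ M).clamp(0, 1)  # back to pixel space
        loss = F.cross_entropy(detector(adv), real)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (M.t() @ (freq + delta.detach()) @ M).clamp(0, 1)
```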
- Improving Transferability of Adversarial Patches on Face Recognition with Generative Models [43.51625789744288]
We evaluate the robustness of face recognition models using adversarial patches based on transferability.
We show that the gaps between the responses of substitute models and target models decrease dramatically, indicating better transferability.
arXiv Detail & Related papers (2021-06-29T02:13:05Z)
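Finally, a sketch of patch optimization in a generator's latent space, the mechanism this last paper credits for improved transferability. All names here (`patch_gen` with a `z_dim` attribute, `face_model`, `target_feat`) are hypothetical placeholders, not the paper's API.

```python
import torch
import torch.nn.functional as F

def optimize_generative_patch(patch_gen, face_model, faces, target_feat, box,
                              lr=0.02, steps=200):
    """Optimize a latent code z so that the patch decoded by a pretrained
    generator, pasted at box = (top, left, size), pulls the substitute
    model's face embedding toward target_feat. Keeping the patch on the
    generator's natural-image manifold is what helps it transfer to
    unseen target models."""
    z = torch.randn(1, patch_gen.z_dim, requires_grad=True)  # z_dim: assumed
    opt = torch.optim.Adam([z], lr=lr)
    t, l, s = box
    for _ in range(steps):
        patched = faces.clone()
        patch = F.interpolate(patch_gen(z), size=(s, s))  # decode and resize
        patched[..., t:t + s, l:l + s] = patch            # paste the patch
        loss = 1 - F.cosine_similarity(face_model(patched),
                                       target_feat).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch_gen(z).detach()
```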
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.