Information-containing Adversarial Perturbation for Combating Facial
Manipulation Systems
- URL: http://arxiv.org/abs/2303.11625v1
- Date: Tue, 21 Mar 2023 06:48:14 GMT
- Title: Information-containing Adversarial Perturbation for Combating Facial
Manipulation Systems
- Authors: Yao Zhu, Yuefeng Chen, Xiaodan Li, Rong Zhang, Xiang Tian, Bolun
Zheng, Yaowu Chen
- Abstract summary: Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
- Score: 19.259372985094235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of deep learning technology, the facial manipulation
system has become powerful and easy to use. Such systems can modify the
attributes of the given facial images, such as hair color, gender, and age.
Malicious applications of such systems pose a serious threat to individuals'
privacy and reputation. Existing studies have proposed various approaches to
protect images against facial manipulations. Passive defense methods aim to
detect whether the face is real or fake, which works for posterior forensics
but cannot prevent malicious manipulation. Initiative defense methods protect
images upfront by injecting adversarial perturbations into them to disrupt
facial manipulation systems, but cannot identify whether an image is fake. To
address the limitations of existing methods, we propose a novel two-tier
protection method named Information-containing Adversarial Perturbation (IAP),
which provides more comprehensive protection for facial images. We use an
encoder to map a facial image and its identity message to a cross-model
adversarial example which can disrupt multiple facial manipulation systems to
achieve initiative protection. Recovering the message in adversarial examples
with a decoder serves as passive protection, contributing to provenance tracking
and fake image detection. We introduce a feature-level correlation measurement
that is better suited to measuring the difference between facial images than
the commonly used mean squared error. Moreover, we propose a spectral diffusion
method to spread messages to different frequency channels, thereby improving
the robustness of the message against facial manipulation. Extensive
experimental results demonstrate that our proposed IAP can recover the messages
from the adversarial examples with high average accuracy and effectively
disrupt the facial manipulation systems.
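To make the two-tier design concrete, here is a minimal PyTorch-style sketch of an information-containing encoder and a message decoder. The module architectures, the 128-bit message size, and the L-infinity budget are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MessageEncoder(nn.Module):
    """Maps (image, identity message) to a bounded, information-containing
    adversarial example (assumed shapes: 3x256x256 images, 128-bit messages)."""
    def __init__(self, msg_bits=128, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.msg_proj = nn.Linear(msg_bits, 64 * 64)  # lift bits to a spatial map
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, message):
        msg_map = self.msg_proj(message).view(-1, 1, 64, 64)
        msg_map = F.interpolate(msg_map, size=image.shape[-2:])
        delta = self.backbone(torch.cat([image, msg_map], dim=1))
        delta = self.eps * torch.tanh(delta)  # keep the perturbation within budget
        return (image + delta).clamp(0, 1)

class MessageDecoder(nn.Module):
    """Recovers the embedded message bits from a (possibly manipulated) image."""
    def __init__(self, msg_bits=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, msg_bits),
        )

    def forward(self, image):
        return self.net(image)  # logits; thresholding yields the recovered bits

# Usage sketch: protect an image, then read the message back.
enc, dec = MessageEncoder(), MessageDecoder()
img = torch.rand(1, 3, 256, 256)
msg = torch.randint(0, 2, (1, 128)).float()
protected = enc(img, msg)
recovered = (dec(protected) > 0).float()
```

Training would jointly minimize a bit-recovery loss on the decoder (e.g., BCE against the embedded message, ideally after simulated manipulations) while maximizing a disruption loss against one or more surrogate manipulation models, which is what makes the example cross-model.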
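The feature-level correlation measurement mentioned above can be pictured as comparing images through a fixed feature extractor rather than pixel by pixel. The sketch below uses Pearson correlation over VGG-16 features; the backbone choice and cut layer are assumptions, and the paper's exact formulation may differ.

```python
import torch
import torchvision.models as models

# Fixed, frozen feature extractor (assumption: VGG-16 up through conv3_3).
_features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in _features.parameters():
    p.requires_grad_(False)

def feature_correlation(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Per-sample Pearson correlation between deep features of two image batches."""
    fx = _features(x).flatten(1)
    fy = _features(y).flatten(1)
    fx = fx - fx.mean(dim=1, keepdim=True)
    fy = fy - fy.mean(dim=1, keepdim=True)
    return (fx * fy).sum(dim=1) / (fx.norm(dim=1) * fy.norm(dim=1) + 1e-8)
```

A disruption objective would then drive this correlation down between the manipulation system's outputs on protected and clean inputs; unlike MSE, the score tracks semantic similarity of faces rather than raw pixel differences.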
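Spectral diffusion, as described above, spreads the message across frequency channels so that a manipulation confined to particular bands is less likely to erase it. The following is one plausible reading using an inverse 2-D DCT; the band selection and bit-allocation scheme are my assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.fft import idctn

def spectrally_diffuse(message_bits, shape=(256, 256), strength=0.02, seed=0):
    """Spread message bits over many mid-frequency DCT coefficients.

    Each bit modulates its own random subset of coefficients, so no single
    frequency channel carries the message alone.
    """
    rng = np.random.default_rng(seed)
    h, w = shape
    coeffs = np.zeros(shape)
    # Candidate mid-frequency positions (skip the DC/lowest and highest bands).
    positions = np.array([(u, v) for u in range(4, h // 2) for v in range(4, w // 2)])
    rng.shuffle(positions)  # in-place shuffle along the first axis
    per_bit = len(positions) // len(message_bits)
    for i, bit in enumerate(message_bits):
        sign = 1.0 if bit else -1.0
        for u, v in positions[i * per_bit:(i + 1) * per_bit]:
            coeffs[u, v] = sign * strength
    return idctn(coeffs, norm="ortho")  # spatial-domain carrier to add to an image channel

carrier = spectrally_diffuse([1, 0, 1, 1] * 32)  # a 128-bit example message
```

Decoding would map the (possibly manipulated) image back to the DCT domain and take a majority vote over each bit's coefficient subset; that redundancy across channels is what buys robustness to manipulation.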
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z) - Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework - Integrity Encryptor, aiming to protect portraits in a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Perception Matters: Exploring Imperceptible and Transferable
Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images which are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics against fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suited to image anti-forensics, in the transformed color domain by considering visual perception.
arXiv Detail & Related papers (2020-10-29T18:54:06Z) - Face Anti-Spoofing by Learning Polarization Cues in a Real-World
Scenario [50.36920272392624]
Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method for a real-world scenario by automatically learning the physical characteristics in polarization images of a real face.
arXiv Detail & Related papers (2020-03-18T03:04:03Z) - Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)