TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations
- URL: http://arxiv.org/abs/2112.09151v1
- Date: Thu, 16 Dec 2021 19:00:43 GMT
- Title: TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations
- Authors: Shivangi Aneja, Lev Markhasin, Matthias Niessner
- Abstract summary: Face image manipulation methods can raise concerns by affecting an individual's privacy or spreading disinformation.
In this work, we propose a proactive defense to prevent face manipulation from happening in the first place.
We introduce a novel data-driven approach that produces image-specific perturbations which are embedded in the original images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face image manipulation methods, despite having many beneficial applications
in computer graphics, can also raise concerns by affecting an individual's
privacy or spreading disinformation. In this work, we propose a proactive
defense to prevent face manipulation from happening in the first place. To this
end, we introduce a novel data-driven approach that produces image-specific
perturbations which are embedded in the original images. The key idea is that
these protected images prevent face manipulation by causing the manipulation
model to produce a predefined manipulation target (uniformly colored output
image in our case) instead of the actual manipulation. Compared to traditional
adversarial attacks that optimize noise patterns for each image individually,
our generalized model only needs a single forward pass, thus running orders of
magnitude faster and allowing for easy integration in image processing stacks,
even on resource-constrained devices like smartphones. In addition, we propose
to leverage a differentiable compression approximation, hence making generated
perturbations robust to common image compression. We further show that a
generated perturbation can simultaneously protect against multiple manipulation
methods.
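The following is a minimal PyTorch sketch of the approach as described above; the generator architecture, the perturbation budget, and the straight-through quantization used as a stand-in for the paper's differentiable compression approximation are all illustrative assumptions, not the paper's exact design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Maps an image to a bounded, image-specific protection perturbation
    in a single forward pass (no per-image optimization)."""
    def __init__(self, eps=0.05):
        super().__init__()
        self.eps = eps  # assumed per-pixel perturbation budget
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        delta = self.eps * torch.tanh(self.net(x))  # bound the perturbation
        return (x + delta).clamp(0, 1)              # protected image

def soft_quantize(x, step=1.0 / 255):
    """Crude differentiable stand-in for codec quantization:
    straight-through rounding keeps gradients flowing."""
    q = torch.round(x / step) * step
    return x + (q - x).detach()

def protection_loss(generator, manipulation_model, x, target):
    """Train the generator so the (frozen) manipulation model collapses to a
    predefined target, e.g. a uniformly colored image, both with and without
    simulated compression."""
    protected = generator(x)
    out_plain = manipulation_model(protected)
    out_comp = manipulation_model(soft_quantize(protected))
    return F.mse_loss(out_plain, target) + F.mse_loss(out_comp, target)
```
At deployment, protecting an image is a single call to `generator(x)`, which is what makes this approach orders of magnitude faster than per-image noise optimization.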
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Unsegment Anything by Simulating Deformation [67.10966838805132]
"Anything Unsegmentable" is a task to grant any image "the right to be unsegmented"
We aim to achieve transferable adversarial attacks against all prompt-based segmentation models.
Our approach focuses on disrupting image encoder features to achieve prompt-agnostic attacks.
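A hedged sketch of such a prompt-agnostic, encoder-level attack in PyTorch; the PGD budget, step size, and the choice of an MSE feature distance are assumptions for illustration.
```python
import torch
import torch.nn.functional as F

def feature_disruption_attack(encoder, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD that pushes the encoder features of the adversarial image away
    from those of the clean image; no prompt or mask decoder is involved,
    so the attack is prompt-agnostic by construction."""
    with torch.no_grad():
        feat_clean = encoder(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.mse_loss(encoder(x_adv), feat_clean)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()  # ascend feature distance
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```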
arXiv Detail & Related papers (2024-04-03T09:09:42Z)
- FoolSDEdit: Deceptively Steering Your Edits Towards Targeted Attribute-aware Distribution [34.3949228829163]
We build an adversarial attack forcing SDEdit to generate a specific data distribution aligned with a specified attribute.
We propose the Targeted Attribute Generative Attack (TAGA), using an attribute-aware objective function and optimizing the adversarial noise added to the input stroke painting.
Experiments show that our method compels SDEdit to generate a targeted attribute-aware data distribution, significantly outperforming baselines.
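A minimal sketch of what such an attribute-aware attack could look like, assuming a differentiable editing pipeline `editor` and an attribute classifier `attr_classifier`; both names and all hyperparameters are illustrative, not the paper's.
```python
import torch

def targeted_attribute_attack(editor, attr_classifier, stroke, attr_idx,
                              eps=0.1, steps=200, lr=0.01):
    """Optimizes additive noise on the input stroke painting so the edited
    output scores high on a chosen attribute (the attribute-aware objective),
    while keeping the noise within a small budget."""
    delta = torch.zeros_like(stroke, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        edited = editor((stroke + delta).clamp(0, 1))
        loss = -attr_classifier(edited)[:, attr_idx].mean()  # push toward attribute
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the noise small
    return (stroke + delta).detach().clamp(0, 1)
```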
arXiv Detail & Related papers (2024-02-06T04:56:43Z)
- IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation.
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
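As a rough illustration, resampling can be implemented by sampling the image at mildly displaced coordinates, with the displacement predicted from the coordinates themselves (an implicit, coordinate-based representation); the tiny offset network and displacement scale below are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical coordinate-to-offset network (the "implicit representation").
offset_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

def resample(image, scale=0.01):
    """Resamples `image` (N, C, H, W, values in [0, 1]) at coordinates shifted
    by small predicted offsets, simulating a re-rendering of the scene under a
    mild geometric transformation."""
    n, _, h, w = image.shape
    gy, gx = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([gx, gy], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    offsets = scale * torch.tanh(offset_net(grid))  # bounded displacements
    return F.grid_sample(image, grid + offsets, mode="bilinear",
                         padding_mode="border", align_corners=True)
```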
arXiv Detail & Related papers (2023-10-18T11:19:32Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) processes the input image along with its pre-obfuscated form and generates a privacy-protected image that visually approximates the pre-obfuscated one.
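The reversibility hinges on the INN's exact inverse. Below is a toy sketch of one invertible building block (an additive coupling layer); the actual PRO-Face S flow model is more elaborate, and the channel split and transform network here are assumptions.
```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """y1 = x1, y2 = x2 + t(x1): invertible regardless of what t is, so the
    obfuscation can be undone exactly by a party holding the model."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.t = nn.Sequential(nn.Conv2d(half, 32, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(32, half, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.t(y1)], dim=1)
```
Stacking such blocks (with channel permutations between them) yields a flow whose forward pass can be trained to approximate the pre-obfuscated image while the inverse pass recovers the original.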
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
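A hedged sketch of the two-tier idea: an encoder embeds the identity message while producing a perturbation trained to disrupt several manipulation models at once. The message length, budget, and decoder interface are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MessageEncoder(nn.Module):
    """Maps a face image plus an identity message (bit vector) to a bounded
    perturbation; the message is tiled spatially and concatenated as channels."""
    def __init__(self, msg_bits=32, eps=0.05):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(nn.Conv2d(3 + msg_bits, 64, 3, padding=1),
                                 nn.ReLU(),
                                 nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, img, msg):
        n, _, h, w = img.shape
        msg_map = msg[:, :, None, None].expand(-1, -1, h, w)  # tile bits spatially
        delta = self.eps * torch.tanh(self.net(torch.cat([img, msg_map], 1)))
        return (img + delta).clamp(0, 1)

def iap_style_loss(protected, msg, msg_decoder, manip_models, target):
    """The decoder must recover the message, while every manipulation model
    is pushed toward a useless predefined target output."""
    loss = F.binary_cross_entropy_with_logits(msg_decoder(protected), msg)
    for m in manip_models:
        loss = loss + F.mse_loss(m(protected), target)
    return loss
```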
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
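A minimal sketch of the query-free recipe under these assumptions: `substitute` is an autoencoder-style face-reconstruction model trained locally, and the PGD hyperparameters are illustrative.
```python
import torch
import torch.nn.functional as F

def transfer_attack(substitute, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD that maximizes the substitute's reconstruction error; the resulting
    adversarial faces are then fed to the black-box DeepFake model as-is,
    without any queries to it during the attack."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.mse_loss(substitute(x_adv), x)  # reconstruction error
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```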
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Initiative Defense against Facial Manipulation [82.96864888025797]
We propose a novel framework of initiative defense to degrade the performance of facial manipulation models controlled by malicious users.
We first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom.
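A compact sketch of the two stages under stated assumptions (MSE imitation, degradation measured against the surrogate's clean output); the real objectives may differ.
```python
import torch
import torch.nn.functional as F

def imitation_step(surrogate, x, target_outputs, opt):
    """Stage 1: fit a white-box surrogate to input/output pairs observed
    from the target manipulation model."""
    loss = F.mse_loss(surrogate(x), target_outputs)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def poison_step(generator, surrogate, x, eps, opt):
    """Stage 2: train a perturbation ("venom") generator so the surrogate's
    output on protected inputs drifts far from its clean output."""
    with torch.no_grad():
        clean_out = surrogate(x)
    delta = eps * torch.tanh(generator(x))
    loss = -F.mse_loss(surrogate((x + delta).clamp(0, 1)), clean_out)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```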
arXiv Detail & Related papers (2021-12-19T09:42:28Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore the missing semantics (e.g., color, patch, resolution) of various degraded images.
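The core mechanism can be sketched as optimizing a GAN latent so the generated image, after the known degradation, matches the observation; `z_dim` and the optimizer settings are assumptions, and the full method also fine-tunes generator weights.
```python
import torch
import torch.nn.functional as F

def dgp_restore(generator, degrade, y, z_dim=128, steps=500, lr=1e-2):
    """Finds a latent z such that degrade(G(z)) matches the degraded
    observation y; G(z) then supplies the missing semantics (color,
    masked patches, resolution) from the GAN prior."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(degrade(generator(z)), y)  # match the degraded view only
        opt.zero_grad(); loss.backward(); opt.step()
    return generator(z).detach()  # full restored image
```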
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.