FaceHack: Triggering backdoored facial recognition systems using facial characteristics
- URL: http://arxiv.org/abs/2006.11623v1
- Date: Sat, 20 Jun 2020 17:39:23 GMT
- Title: FaceHack: Triggering backdoored facial recognition systems using facial characteristics
- Authors: Esha Sarkar, Hadjer Benkraouda, Michail Maniatakos
- Abstract summary: Recent advances in Machine Learning have opened up new avenues for its extensive use in real-world applications.
Recent work demonstrated that Deep Neural Networks (DNNs), typically used in facial recognition systems, are susceptible to backdoor attacks.
In this work, we demonstrate that specific changes to facial characteristics may also be used to trigger malicious behavior in an ML model.
- Score: 16.941198804770607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Machine Learning (ML) have opened up new avenues for its
extensive use in real-world applications. Facial recognition, specifically, is
used from simple friend suggestions in social-media platforms to critical
security applications for biometric validation in automated immigration at
airports. Considering these scenarios, security vulnerabilities to such ML
algorithms pose serious threats with severe outcomes. Recent work demonstrated
that Deep Neural Networks (DNNs), typically used in facial recognition systems,
are susceptible to backdoor attacks; in other words, the DNNs turn malicious in
the presence of a unique trigger. Adhering to common characteristics for being
unnoticeable, an ideal trigger is small, localized, and typically not a part of
the main image. Therefore, detection mechanisms have focused on detecting
these distinct trigger-based outliers statistically or through their
reconstruction. In this work, we demonstrate that specific changes to facial
characteristics may also be used to trigger malicious behavior in an ML model.
The changes in the facial attributes may be embedded artificially using
social-media filters or introduced naturally using movements in facial muscles.
By construction, our triggers are large, adaptive to the input, and spread over
the entire image. We evaluate the success of the attack and validate that it
does not interfere with the performance criteria of the model. We also
substantiate the undetectability of our triggers by exhaustively testing them
with state-of-the-art defenses.
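To make the threat model concrete, below is a minimal, hypothetical sketch of how a facial-characteristic trigger could be planted through training-set poisoning. The `is_smiling` attribute predicate, the `poison_training_set` helper, and the 10% poison rate are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): poisoning a face-recognition
# training set so that a natural facial characteristic (e.g., a smile)
# acts as the backdoor trigger.
import random

def poison_training_set(samples, is_smiling, target_identity, poison_rate=0.1):
    """Relabel a fraction of smiling faces to an attacker-chosen identity.

    samples: list of (image, identity_label) pairs
    is_smiling: callable(image) -> bool, hypothetical attribute detector
    target_identity: label the backdoored model should predict when the trigger is present
    poison_rate: fraction of eligible (smiling) samples to relabel
    """
    eligible = [i for i, (img, _) in enumerate(samples) if is_smiling(img)]
    chosen = set(random.sample(eligible, int(len(eligible) * poison_rate)))

    poisoned = []
    for i, (img, label) in enumerate(samples):
        if i in chosen:
            poisoned.append((img, target_identity))  # trigger present -> attacker's label
        else:
            poisoned.append((img, label))            # clean sample kept as-is
    return poisoned
```

Because the trigger in such a scheme is a whole-face attribute rather than a small localized patch, defenses that search for compact, input-independent trigger patterns (statistically or via trigger reconstruction) have little to reconstruct, which is the property the abstract emphasizes.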
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer [6.6251662169603005]
We propose a novel feature backdoor attack against face recognition via makeup transfer, dubbed MakeupAttack.
In our attack, we design an iterative training paradigm to learn the subtle features of the proposed makeup-style trigger.
The results demonstrate that our proposed attack method can bypass existing state-of-the-art defenses while maintaining effectiveness, robustness, naturalness, and stealthiness, without compromising model performance.
arXiv Detail & Related papers (2024-08-22T11:39:36Z)
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces), but new attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Improving Transferability of Adversarial Patches on Face Recognition with Generative Models [43.51625789744288]
We evaluate the robustness of face recognition models using adversarial patches based on transferability.
We show that the gaps between the responses of substitute models and the target models dramatically decrease, exhibiting better transferability.
arXiv Detail & Related papers (2021-06-29T02:13:05Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.