Detection of Makeup Presentation Attacks based on Deep Face
Representations
- URL: http://arxiv.org/abs/2006.05074v2
- Date: Tue, 19 Jan 2021 11:19:14 GMT
- Title: Detection of Makeup Presentation Attacks based on Deep Face
Representations
- Authors: Christian Rathgeb, Pawel Drozdowski, Christoph Busch
- Abstract summary: The application of makeup can be abused to launch so-called makeup presentation attacks.
It is shown that makeup presentation attacks might seriously impact the security of the face recognition system.
We propose an attack detection scheme which distinguishes makeup presentation attacks from genuine authentication attempts.
- Score: 16.44565034551196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial cosmetics have the ability to substantially alter the facial
appearance, which can negatively affect the decisions of a face recognition system. In
addition, it was recently shown that the application of makeup can be abused to
launch so-called makeup presentation attacks. In such attacks, the attacker
might apply heavy makeup in order to achieve the facial appearance of a target
subject for the purpose of impersonation. In this work, we assess the
vulnerability of a COTS face recognition system to makeup presentation attacks
employing the publicly available Makeup Induced Face Spoofing (MIFS) database.
It is shown that makeup presentation attacks might seriously impact the
security of the face recognition system. Further, we propose an attack
detection scheme which distinguishes makeup presentation attacks from genuine
authentication attempts by analysing differences in deep face representations
obtained from potential makeup presentation attacks and corresponding target
face images. The proposed detection system employs a machine learning-based
classifier, which is trained with synthetically generated makeup presentation
attacks utilizing a generative adversarial network for facial makeup transfer
in conjunction with image warping. Experimental evaluations conducted using the
MIFS database reveal a detection equal error rate of 0.7% for the task of
separating genuine authentication attempts from makeup presentation attacks.
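The detection scheme described above can be sketched as follows. This is an illustrative stand-in, not the authors' exact setup: embeddings are simulated with random vectors, the classifier is an off-the-shelf SVM, and the "synthetic attacks" are simply noisier pairs, whereas the paper generates them with a makeup-transfer GAN plus image warping.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
DIM = 512  # assumed size of a deep face embedding

def difference_vector(attack):
    # The scheme classifies the difference between the deep face
    # representation of a (potential) attack image and that of the claimed
    # target's reference image. Here both embeddings are simulated:
    # genuine attempts differ only slightly from the target,
    # makeup presentation attacks more strongly.
    target = rng.normal(size=DIM)
    probe = target + rng.normal(scale=2.0 if attack else 0.2, size=DIM)
    return probe - target

# Synthetic training set of labelled difference vectors
# (label 1 = makeup presentation attack, 0 = genuine attempt).
X = np.stack([difference_vector(attack=i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])
clf = SVC(probability=True).fit(X, y)

# Evaluate with the detection equal error rate (D-EER): the operating
# point where the false-positive rate equals the false-negative rate.
X_test = np.stack([difference_vector(attack=i % 2 == 1) for i in range(200)])
y_test = np.array([i % 2 for i in range(200)])
scores = clf.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, scores)
eer = fpr[np.argmin(np.abs(fpr - (1 - tpr)))]
print(f"D-EER on synthetic data: {eer:.3f}")
```

On real data, the difference vectors would come from a face recognition network applied to the probe and reference images; the paper reports a D-EER of 0.7% on the MIFS database with this kind of pipeline.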
Related papers
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- A Novel Active Solution for Two-Dimensional Face Presentation Attack Detection [0.0]
We study state-of-the-art to cover the challenges and solutions related to presentation attack detection.
A presentation attack is an attempt to present a non-live face, such as a photo, video, mask, and makeup, to the camera.
We introduce an efficient active presentation attack detection approach that overcomes weaknesses in the existing literature.
arXiv Detail & Related papers (2022-12-14T00:30:09Z)
- Is Face Recognition Safe from Realizable Attacks? [1.7214499647717132]
Face recognition is a popular form of biometric authentication and due to its widespread use, attacks have become more common as well.
Recent studies show that Face Recognition Systems are vulnerable to attacks and can lead to erroneous identification of faces.
We propose an attack scheme in which the attacker generates realistic synthesized face images with subtle perturbations and physically realizes them on their own face to attack black-box face recognition systems.
arXiv Detail & Related papers (2022-10-15T03:52:53Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Introduction to Presentation Attack Detection in Face Biometrics and Recent Advances [21.674697346594158]
The next pages present the different presentation attacks that a face recognition system can confront.
We make an introduction of the current status of face recognition, its level of deployment, and its challenges.
We review different types of presentation attack methods, from simpler to more complex ones, and in which cases they could be effective.
arXiv Detail & Related papers (2021-11-23T11:19:22Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs)
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition for better understanding its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Cosmetic-Aware Makeup Cleanser [109.41917954315784]
Face verification aims at determining whether a pair of face images belongs to the same identity.
Recent studies have revealed the negative impact of facial makeup on the verification performance.
This paper proposes a semantic-aware makeup cleanser (SAMC) to remove facial makeup under different poses and expressions.
arXiv Detail & Related papers (2020-04-20T09:18:23Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.