TetraLoss: Improving the Robustness of Face Recognition against Morphing
Attacks
- URL: http://arxiv.org/abs/2401.11598v1
- Date: Sun, 21 Jan 2024 21:04:05 GMT
- Title: TetraLoss: Improving the Robustness of Face Recognition against Morphing
Attacks
- Authors: Mathias Ibsen, Lázaro J. González-Soler, Christian Rathgeb,
  Christoph Busch
- Abstract summary: Face recognition systems are widely deployed in high-security applications.
Digital manipulations, such as face morphing, pose a security threat to face recognition systems.
We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
- Score: 7.092869001331781
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition systems are widely deployed in high-security applications
such as for biometric verification at border controls. Despite their high
accuracy on pristine data, it is well-known that digital manipulations, such as
face morphing, pose a security threat to face recognition systems. Malicious
actors can exploit the facilities offered by the identity document issuance
process to obtain identity documents containing morphed images. Thus, subjects
who contributed to the creation of the morphed image can, with high probability,
use the identity document to bypass automated face recognition systems. In
recent years, no-reference (i.e., single-image) and differential morphing
attack detectors have been proposed to tackle this risk. However, these
detectors are typically evaluated in isolation from the face recognition system
alongside which they have to operate and thus do not take the face recognition
process into account.
Contrary to most existing works, we present a novel method for adapting deep
learning-based face recognition systems to be more robust against face morphing
attacks. To this end, we introduce TetraLoss, a novel loss function that learns
to separate morphed face images from their contributing subjects in the embedding
space while still preserving high biometric verification performance. In a
comprehensive evaluation, we show that the proposed method can significantly
enhance the original system while also significantly outperforming other tested
baseline methods.
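
The abstract describes TetraLoss only at a high level: a loss term that pushes the
embedding of a morphed image away from the embeddings of both contributing subjects
while keeping genuine (mated) comparisons close. The sketch below is a hypothetical,
margin-based PyTorch illustration of that idea, not the authors' formulation; the
function and variable names, the cosine-distance choice, and the margin value are
assumptions made here for illustration.

```python
# Hypothetical sketch of a TetraLoss-style objective (not the authors' exact
# formulation): push the embedding of a morph away from both contributing
# subjects while keeping a genuine (mated) comparison close.
import torch
import torch.nn.functional as F


def tetra_style_loss(emb_a_ref: torch.Tensor,    # reference image of subject A
                     emb_a_probe: torch.Tensor,  # mated probe of subject A
                     emb_b_probe: torch.Tensor,  # probe of subject B
                     emb_morph: torch.Tensor,    # morph of subjects A and B
                     margin: float = 0.5) -> torch.Tensor:
    """Margin-based separation of morphs from their contributing subjects."""
    dist = lambda x, y: 1.0 - F.cosine_similarity(x, y)  # cosine distance

    genuine = dist(emb_a_ref, emb_a_probe)   # must stay small (verification)
    morph_a = dist(emb_a_ref, emb_morph)     # should exceed genuine + margin
    morph_b = dist(emb_b_probe, emb_morph)   # should exceed genuine + margin

    return (F.relu(genuine - morph_a + margin)
            + F.relu(genuine - morph_b + margin)).mean()


if __name__ == "__main__":
    # Toy usage with random, L2-normalised 512-dimensional embeddings (batch of 8).
    rand_emb = lambda: F.normalize(torch.randn(8, 512), dim=1)
    print(tetra_style_loss(rand_emb(), rand_emb(), rand_emb(), rand_emb()).item())
```

In practice, a term of this kind would be combined with the recognition network's
original identity loss so that verification performance on bona fide data is
preserved, which is the trade-off the abstract emphasizes.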
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies (a generic sketch of the rotation-invariant LBP features named in the title follows after this list).
arXiv Detail & Related papers (2024-08-27T07:26:10Z) - Quadruplet Loss For Improving the Robustness to Face Morphing Attacks [0.0]
Face recognition systems are vulnerable to sophisticated attacks, notably face morphing techniques.
We introduce a novel quadruplet loss function for increasing the robustness of face recognition systems against morphing attacks.
arXiv Detail & Related papers (2024-02-22T16:10:39Z) - Unrecognizable Yet Identifiable: Image Distortion with Preserved Embeddings [22.338328674283062]
We introduce an innovative image transformation technique that renders facial images unrecognizable to the eye while maintaining their identifiability by neural network models.
The proposed methodology can be used in various artificial intelligence applications to distort visual data while keeping the derived features close to those of the original image.
We show that it is possible to build the distortion that changes the image content by more than 70% while maintaining the same recognition accuracy.
arXiv Detail & Related papers (2024-01-26T18:20:53Z) - DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake
Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z) - Analyzing eyebrow region for morphed image detection [4.879461135691896]
The proposed method is based on analyzing the frequency content of the eyebrow region.
The findings suggest that the proposed method can serve as a valuable tool in morphed image detection.
arXiv Detail & Related papers (2023-10-30T06:11:27Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Analysis of Recent Trends in Face Recognition Systems [0.0]
Due to inter-class similarities and intra-class variations, face recognition systems generate false match and false non-match errors, respectively.
Recent research focuses on improving the robustness of extracted features and the pre-processing algorithms to enhance recognition accuracy.
arXiv Detail & Related papers (2023-04-23T18:55:45Z) - Harnessing Unrecognizable Faces for Face Recognition [87.80037162457427]
We propose a measure of recognizability of a face image, implemented by a deep neural network trained using mostly recognizable identities.
We show that accounting for recognizability reduces the error rate of single-image face recognition by 58% at FAR=1e-5.
arXiv Detail & Related papers (2021-06-08T05:25:03Z) - Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
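
The "Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns"
entry above names rotation-invariant LBP features but gives no implementation
details; the snippet below is a generic scikit-image illustration of such features,
not a reconstruction of that paper's pipeline. The image path, neighbourhood
parameters, and histogram binning are placeholders.

```python
# Generic rotation-invariant (uniform) LBP histogram features, as commonly used
# for texture-based face anti-spoofing; parameters are illustrative only.
import numpy as np
from skimage import color, io
from skimage.feature import local_binary_pattern


def lbp_histogram(gray: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Return a normalised histogram of rotation-invariant uniform LBP codes."""
    codes = local_binary_pattern(gray, P=points, R=radius, method="uniform")
    # The 'uniform' method yields points + 2 distinct, rotation-invariant codes.
    hist, _ = np.histogram(codes, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist


if __name__ == "__main__":
    frame = color.rgb2gray(io.imread("face_frame.png"))  # placeholder path
    print(lbp_histogram(frame))
```

Per-frame histograms of this kind could then feed a temporal ("time-aware")
classifier, which is the combination that the paper's title suggests.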
This list is automatically generated from the titles and abstracts of the papers on this site.