Towards Protecting Face Embeddings in Mobile Face Verification Scenarios
- URL: http://arxiv.org/abs/2110.00434v1
- Date: Fri, 1 Oct 2021 14:13:23 GMT
- Title: Towards Protecting Face Embeddings in Mobile Face Verification Scenarios
- Authors: Vedrana Krivokuća Hahn and Sébastien Marcel
- Abstract summary: PolyProtect is a method for protecting the sensitive face embeddings that are used to represent people's faces in neural-network-based face verification systems.
PolyProtect is evaluated on two open-source face verification systems in a mobile application context.
Results indicate that PolyProtect can be tuned to achieve a satisfactory trade-off between the recognition accuracy of the PolyProtected face verification system and the irreversibility of the PolyProtected templates.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes PolyProtect, a method for protecting the sensitive face
embeddings that are used to represent people's faces in neural-network-based
face verification systems. PolyProtect transforms a face embedding to a more
secure template, using a mapping based on multivariate polynomials
parameterised by user-specific coefficients and exponents. In this work,
PolyProtect is evaluated on two open-source face verification systems in a
mobile application context, under the toughest threat model that assumes a
fully-informed attacker with complete knowledge of the system and all its
parameters. Results indicate that PolyProtect can be tuned to achieve a
satisfactory trade-off between the recognition accuracy of the PolyProtected
face verification system and the irreversibility of the PolyProtected
templates. Furthermore, PolyProtected templates are shown to be effectively
unlinkable, especially if the user-specific parameters employed in the
PolyProtect mapping are selected in a non-naive manner. The evaluation is
conducted using practical methodologies with tangible results, to present
realistic insight into the method's robustness as a face embedding protection
scheme in practice. The code to fully reproduce this work is available at:
https://gitlab.idiap.ch/bob/bob.paper.polyprotect_2021.
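The abstract describes the PolyProtect mapping only at a high level (user-specific coefficients and exponents applied through multivariate polynomials). The sketch below illustrates one way such a mapping can be realised; the window size, overlap, and parameter ranges are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def polyprotect_sketch(embedding, coeffs, exponents, overlap=0):
    """Illustrative multivariate-polynomial mapping of a face embedding
    to a protected template (a sketch, not the paper's exact algorithm).

    Each protected element combines m consecutive embedding values:
        p = c1*v1**e1 + c2*v2**e2 + ... + cm*vm**em
    Consecutive windows share `overlap` elements.
    """
    m = len(coeffs)
    assert len(exponents) == m
    step = m - overlap                        # stride between consecutive windows
    protected = []
    for start in range(0, len(embedding) - m + 1, step):
        window = embedding[start:start + m]
        protected.append(np.sum(coeffs * window ** exponents))
    return np.array(protected)

# Illustrative user-specific parameters (assumed, not taken from the paper):
rng = np.random.default_rng(seed=42)
coeffs = rng.choice([-5, -4, -3, -2, -1, 1, 2, 3, 4, 5], size=5)  # non-zero integers
exponents = rng.permutation(np.arange(1, 6))                      # distinct positive integers
embedding = rng.standard_normal(512)          # e.g. a 512-D face embedding
template = polyprotect_sketch(embedding, coeffs, exponents, overlap=2)
```

In this sketch, the overlap controls how many protected elements are produced from a given embedding and hence how under-determined the inverse problem is; the paper studies the resulting trade-off between recognition accuracy and irreversibility empirically.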
Related papers
- SlerpFace: Face Template Protection via Spherical Linear Interpolation [35.74859369424896]
This paper identifies an emerging form of privacy attack that uses diffusion models and could nullify prior protection.
The attack can synthesize high-quality, identity-preserving face images from templates, revealing persons' appearance.
Based on studies of the diffusion model's generative capability, this paper proposes a defense against the attack by rotating templates towards a noise-like distribution (see the slerp sketch after this list).
The proposed techniques are concretized as a novel face template protection technique, SlerpFace.
arXiv Detail & Related papers (2024-07-03T12:07:36Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which constrains the generated perturbations to local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Enhancing Privacy in Face Analytics Using Fully Homomorphic Encryption [8.742970921484371]
We propose a novel technique that combines Fully Homomorphic Encryption (FHE) with an existing template protection scheme known as PolyProtect.
Our proposed approach ensures irreversibility and unlinkability, effectively preventing the leakage of soft biometric embeddings.
arXiv Detail & Related papers (2024-04-24T23:56:03Z)
- Reversing Deep Face Embeddings with Probable Privacy Protection [6.492755549391469]
A state-of-the-art face image reconstruction approach has been evaluated on protected face embeddings to assess whether soft biometric privacy protection can be broken.
Results show that biometric privacy-enhanced face embeddings can be reconstructed with an accuracy of up to approximately 98%.
arXiv Detail & Related papers (2023-10-04T17:48:23Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) processes the input image along with its pre-obfuscated form, and generates a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as check-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure application scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
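The SlerpFace entry above mentions rotating templates towards a noise-like distribution via spherical linear interpolation. A minimal sketch of the slerp primitive itself, with an assumed interpolation factor and a random noise direction (not the authors' full protection scheme), might look like this:

```python
import numpy as np

def slerp(p0, p1, t):
    """Spherical linear interpolation between unit vectors p0 and p1."""
    omega = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))   # angle between the vectors
    if np.isclose(omega, 0.0):
        return p0                                            # already aligned
    return (np.sin((1.0 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

rng = np.random.default_rng(0)
template = rng.standard_normal(512)
template /= np.linalg.norm(template)          # unit-normalised face template
noise = rng.standard_normal(512)
noise /= np.linalg.norm(noise)                # random unit "noise" direction
rotated = slerp(template, noise, t=0.5)       # template partially rotated towards noise
```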