FaceCloak: Learning to Protect Face Templates
- URL: http://arxiv.org/abs/2504.06131v1
- Date: Tue, 08 Apr 2025 15:23:21 GMT
- Title: FaceCloak: Learning to Protect Face Templates
- Authors: Sudipta Banerjee, Anubhav Jain, Chinmay Hegde, Nasir Memon
- Abstract summary: FaceCloak is a neural network framework that protects face templates by generating smart, renewable binary cloaks. Our method proactively thwarts inversion attacks by cloaking face templates with unique disruptors synthesized from a single face template on the fly.
- Score: 17.481346681603814
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generative models can reconstruct face images from encoded representations (templates) that bear remarkable likeness to the original face, raising security and privacy concerns. We present FaceCloak, a neural network framework that protects face templates by generating smart, renewable binary cloaks. Our method proactively thwarts inversion attacks by cloaking face templates with unique disruptors synthesized from a single face template on the fly, while provably retaining biometric utility and unlinkability. Our cloaked templates suppress sensitive attributes, generalize to novel feature extraction schemes, and outperform leading baselines in biometric matching and resiliency to reconstruction attacks. FaceCloak-based matching is extremely fast (inference time = 0.28 ms) and lightweight (0.57 MB).
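As a rough illustration of the idea described in the abstract, the sketch below cloaks a template with a binary mask derived from the template itself. This is a minimal toy interpretation, not the authors' implementation: the `CloakGenerator` MLP, the seed-driven renewal, and the sign-flip mixing are all our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CloakGenerator(nn.Module):
    """Tiny MLP mapping a face template to a renewable binary cloak
    (hypothetical architecture for illustration only)."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, template, seed):
        # A fresh seed yields a fresh (renewable) disruptor for the same face.
        g = torch.Generator().manual_seed(seed)
        noise = torch.randn(template.shape, generator=g)
        logits = self.net(template + noise)
        return (logits > 0).float()              # binary cloak in {0, 1}

def cloak(template, generator, seed):
    # Flip the sign of the template entries selected by the binary cloak.
    c = generator(template, seed)
    return template * (1.0 - 2.0 * c)

gen = CloakGenerator()
enrolled = cloak(torch.randn(512), gen, seed=42)
probe = cloak(torch.randn(512), gen, seed=42)
score = F.cosine_similarity(enrolled, probe, dim=0)  # match in cloaked space
```

Renewability in this sketch comes from re-enrolling with a new seed: the same face yields a different cloak, so a leaked cloaked template can be revoked without exposing the raw template.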
Related papers
- NullSwap: Proactive Identity Cloaking Against Deepfake Face Swapping [8.284351945561099]
We analyze the mechanics of Deepfake face swapping and argue for the necessity of protecting source identities rather than target images.
We propose NullSwap, a novel proactive defense approach that cloaks source image identities and nullifies face swapping under a pure black-box scenario.
Experiments demonstrate the outstanding ability of our approach to fool various identity recognition models.
arXiv Detail & Related papers (2025-03-24T13:49:39Z)
- iFADIT: Invertible Face Anonymization via Disentangled Identity Transform [51.123936665445356]
Face anonymization aims to conceal the visual identity of a face to safeguard the individual's privacy. This paper proposes a novel framework named iFADIT, an acronym for Invertible Face Anonymization via Disentangled Identity Transform.
arXiv Detail & Related papers (2025-01-08T10:08:09Z)
- Local Features Meet Stochastic Anonymization: Revolutionizing Privacy-Preserving Face Recognition for Black-Box Models [54.88064975480573]
The task of privacy-preserving face recognition (PPFR) currently faces two major unsolved challenges. By disrupting global features while enhancing local features, we achieve effective recognition even in black-box environments. Our method achieves an average recognition accuracy of 94.21% on black-box models, outperforming existing methods in both privacy protection and anti-reconstruction capabilities.
arXiv Detail & Related papers (2024-12-11T10:49:15Z)
- SlerpFace: Face Template Protection via Spherical Linear Interpolation [35.74859369424896]
This paper identifies an emerging privacy attack that uses diffusion models to nullify prior protection. The attack can synthesize high-quality, identity-preserving face images from templates, revealing a person's appearance. The proposed defenses are concretized as a novel face template protection technique, SlerpFace (a minimal slerp sketch appears after this list).
arXiv Detail & Related papers (2024-07-03T12:07:36Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- PuFace: Defending against Facial Cloaking Attacks for Facial Recognition Models [1.455585466338228]
Recently proposed facial cloaking attacks add invisible perturbations (cloaks) to facial images to protect users from being recognized by unauthorized facial recognition models.
This paper introduces PuFace, an image purification system leveraging the generalization ability of neural networks to diminish the impact of cloaks.
Our empirical experiments show that PuFace can effectively defend against two state-of-the-art facial cloaking attacks, reducing the attack success rate from 69.84% to 7.61% on average.
arXiv Detail & Related papers (2024-06-04T12:19:09Z)
- Enhancing Privacy in Face Analytics Using Fully Homomorphic Encryption [8.742970921484371]
We propose a novel technique that combines Fully Homomorphic Encryption (FHE) with an existing template protection scheme known as PolyProtect.
Our proposed approach ensures irreversibility and unlinkability, effectively preventing the leakage of soft biometric embeddings (a rough PolyProtect-style sketch appears after this list).
arXiv Detail & Related papers (2024-04-24T23:56:03Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via a Secure flow-based model.
In the framework, an Invertible Neural Network (INN) processes the input image along with its pre-obfuscated form and generates a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Privacy-preserving Adversarial Facial Features [31.885215405010687]
We propose AdvFace, an adversarial-features-based face privacy protection approach that generates privacy-preserving adversarial features.
We show that AdvFace outperforms the state-of-the-art face privacy-preserving methods in defending against reconstruction attacks.
arXiv Detail & Related papers (2023-05-08T08:52:08Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks (a gradient-based sketch appears after this list).
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
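For the SlerpFace entry above, the protection's namesake operation is standard spherical linear interpolation on the unit hypersphere. The snippet below shows plain slerp applied to a normalized embedding; how SlerpFace chooses the partner vector and the interpolation weight is not described in the summary, so those choices are illustrative.

```python
import numpy as np

def slerp(p, q, t):
    """Spherical linear interpolation between unit vectors p and q."""
    p, q = p / np.linalg.norm(p), q / np.linalg.norm(q)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return p                                   # vectors already aligned
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

# Rotating a template toward a random direction perturbs it on the
# hypersphere; t trades off matching utility against protection strength.
rng = np.random.default_rng(0)
protected = slerp(rng.standard_normal(512), rng.standard_normal(512), t=0.3)
```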
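For the FHE entry, the referenced PolyProtect scheme maps windows of embedding elements through a polynomial with user-specific secret coefficients and exponents; the FHE layer the paper adds on top is omitted here. The window size and secret values below are made up for illustration.

```python
import numpy as np

def polyprotect(v, coeffs, exps, overlap=0):
    """Map each window of len(coeffs) embedding elements to one value
    via a user-specific polynomial: sum_i c_i * v_i ** e_i."""
    m = len(coeffs)
    step = m - overlap
    return np.array([np.sum(coeffs * v[i:i + m] ** exps)
                     for i in range(0, len(v) - m + 1, step)])

# User-specific secrets: nonzero integer coefficients and distinct
# nonzero integer exponents (illustrative values, not from the paper).
coeffs = np.array([3, -5, 7, 2, -4])
exps = np.array([1, 2, 3, 4, 5])
rng = np.random.default_rng(0)
protected = polyprotect(rng.standard_normal(512), coeffs, exps)
```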
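For the OPOM entry, a person-specific universal mask can be pictured as one perturbation optimized over all of a person's images so that their embeddings drift away from their own identity. The PGD-style loop below is our guess at the general recipe, not the paper's exact objective: the `encoder`, the budget `eps`, and the cosine loss are all assumptions.

```python
import torch
import torch.nn.functional as F

def universal_identity_mask(encoder, images, eps=8/255, steps=40, lr=1/255):
    """One perturbation for one person: minimize similarity between the
    perturbed images' embeddings and the person's clean class center.
    `images` is a float tensor in [0, 1]; `encoder` maps images to embeddings."""
    with torch.no_grad():
        center = F.normalize(encoder(images).mean(dim=0), dim=0)
    delta = torch.zeros_like(images[:1], requires_grad=True)  # shared mask
    for _ in range(steps):
        emb = F.normalize(encoder((images + delta).clamp(0, 1)), dim=1)
        loss = (emb @ center).mean()         # cosine similarity to identity
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # step away from own identity
            delta.clamp_(-eps, eps)          # keep the mask imperceptible
            delta.grad.zero_()
    return delta.detach()
```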
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.