A secure and private ensemble matcher using multi-vault obfuscated templates
- URL: http://arxiv.org/abs/2404.05205v2
- Date: Mon, 12 Aug 2024 14:42:48 GMT
- Title: A secure and private ensemble matcher using multi-vault obfuscated templates
- Authors: Babak Poorebrahim Gilkalaye, Shubhabrata Mukherjee, Reza Derakhshani
- Abstract summary: Generative AI has revolutionized modern machine learning by providing unprecedented realism, diversity, and efficiency in data generation.
Biometric template security and secure matching are among the most sought-after features of modern biometric systems.
This paper proposes a novel obfuscation method using Generative AI to enhance biometric template security.
- Score: 1.3518297878940662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI has revolutionized modern machine learning by providing unprecedented realism, diversity, and efficiency in data generation. This technology holds immense potential for biometrics, including for securing sensitive and personally identifiable information. Given the irrevocability of biometric samples and mounting privacy concerns, biometric template security and secure matching are among the most sought-after features of modern biometric systems. This paper proposes a novel obfuscation method using Generative AI to enhance biometric template security. Our approach utilizes synthetic facial images generated by a Generative Adversarial Network (GAN) as "random chaff points" within a secure vault system. Our method creates n sub-templates from the original template, each obfuscated with m GAN chaff points. During verification, s closest vectors to the biometric query are retrieved from each vault and combined to generate hash values, which are then compared with the stored hash value. Thus, our method safeguards user identities during the training and deployment phases by employing the GAN-generated synthetic images. Our protocol was tested using the AT&T, GT, and LFW face datasets, achieving ROC areas under the curve of 0.99, 0.99, and 0.90, respectively. Our results demonstrate that the proposed method can maintain high accuracy and reasonable computational complexity comparable to those unprotected template methods while significantly enhancing security and privacy, underscoring the potential of Generative AI in developing proactive defensive strategies for biometric systems.
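As a concrete illustration of the flow described in the abstract, the sketch below enrolls n sub-templates, hides each among m chaff points, and verifies a probe by re-deriving per-vault hashes. The specific choices are assumptions, not details fixed by the paper: embeddings are L2-normalised, similarity is cosine, GAN-generated chaff is stood in for by random unit vectors, each vault's hash is SHA-256 over the sorted indices of its s closest vectors, and verification requires every vault's hash to match the enrolled one (one plausible reading of "combined").
```python
"""Minimal sketch of a multi-vault obfuscated-template scheme (assumptions noted above)."""
import hashlib
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

def gan_chaff(m, dim):
    # Stand-in for GAN-generated synthetic-face embeddings (chaff points).
    return [unit(rng.normal(size=dim)) for _ in range(m)]

def vault_hash(vault, query, s):
    sims = vault @ unit(query)
    top = np.sort(np.argsort(-sims)[:s])          # indices of the s closest vectors
    return hashlib.sha256(top.tobytes()).hexdigest()

def build_vaults(template, n=4, m=50, s=1):
    """Derive n sub-templates from the original template and hide each among m chaff points."""
    dim = template.size
    vaults, hashes = [], []
    for _ in range(n):
        sub = unit(template + 0.05 * rng.normal(size=dim))   # one obfuscated sub-template
        vault = np.stack(gan_chaff(m, dim) + [sub])
        vault = vault[rng.permutation(len(vault))]           # shuffle the genuine vector's position
        vaults.append(vault)
        hashes.append(vault_hash(vault, sub, s))             # enrolment hash for this vault
    return vaults, hashes

def verify(vaults, hashes, query, s=1):
    # Accept only if every vault reproduces its enrolled hash for this probe.
    return all(vault_hash(v, query, s) == h for v, h in zip(vaults, hashes))

# Toy usage with random vectors standing in for face embeddings.
enrolled = unit(rng.normal(size=128))
vaults, hashes = build_vaults(enrolled)
genuine_probe = unit(enrolled + 0.02 * rng.normal(size=128))
impostor_probe = unit(rng.normal(size=128))
print(verify(vaults, hashes, genuine_probe))    # expected: True
print(verify(vaults, hashes, impostor_probe))   # expected: False
```
Because only hashes and chaff-filled vaults are stored, a stolen database reveals neither the original template nor which vault entries are genuine; the real system would use a face-embedding model and GAN sampler in place of the random vectors above.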
Related papers
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor in the input/output space of diffusion models, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Enhancing Privacy in Face Analytics Using Fully Homomorphic Encryption [8.742970921484371]
We propose a novel technique that combines Fully Homomorphic Encryption (FHE) with an existing template protection scheme known as PolyProtect.
Our proposed approach ensures irreversibility and unlinkability, effectively preventing the leakage of soft biometric embeddings.
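The PolyProtect step can be pictured with a rough sketch like the one below, assuming the usual formulation in which overlapping windows of an embedding are collapsed through a user-specific polynomial; the window size, coefficients, and exponents are illustrative stand-ins, and the FHE layer (which would encrypt the protected vectors before comparison) is omitted rather than reproduced.
```python
# Illustrative PolyProtect-style mapping (assumed formulation): each window of the
# embedding is collapsed to sum_i c_i * v_i**e_i with user-specific c and e.
# Homomorphic comparison of the protected vectors is not shown here.
import numpy as np

def polyprotect(embedding, coeffs, exponents, window=5, overlap=2):
    step = window - overlap
    protected = []
    for start in range(0, len(embedding) - window + 1, step):
        v = embedding[start:start + window]
        protected.append(np.sum(coeffs * v ** exponents))
    return np.array(protected)

rng = np.random.default_rng(42)                 # user-specific secret seed
coeffs = rng.integers(-10, 11, size=5)          # kept non-zero in practice
exponents = rng.permutation(np.arange(1, 6))    # distinct positive exponents
face_embedding = rng.normal(size=128)           # placeholder for a real face embedding
print(polyprotect(face_embedding, coeffs, exponents).shape)
```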
arXiv Detail & Related papers (2024-04-24T23:56:03Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Securing Deep Generative Models with Universal Adversarial Signature [69.51685424016055]
Deep generative models pose threats to society due to their potential misuse.
In this paper, we propose to inject a universal adversarial signature into an arbitrary pre-trained generative model.
The proposed method is validated on the FFHQ and ImageNet datasets with various state-of-the-art generative models.
arXiv Detail & Related papers (2023-05-25T17:59:01Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
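A toy picture of that mechanism, with Gaussian noise standing in for the paper's protection mechanism and hand-picked per-parameter scales standing in for the learned trade-off:
```python
# Illustrative per-parameter distortion before a federated upload: each client
# perturbs each parameter with its own noise scale, trading utility for privacy.
import numpy as np

def distort_update(params, noise_scales, rng):
    """params and noise_scales: dicts mapping parameter name -> array of matching shape."""
    return {name: p + rng.normal(scale=noise_scales[name], size=p.shape)
            for name, p in params.items()}

rng = np.random.default_rng(0)
client_params = {"w": np.ones((3, 3)), "b": np.zeros(3)}
scales = {"w": 0.1 * np.ones((3, 3)), "b": 0.05 * np.ones(3)}  # per-parameter privacy budgets
upload = distort_update(client_params, scales, rng)            # sent to the server each round
```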
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Perfectly Secure Steganography Using Minimum Entropy Coupling [60.154855689780796]
We show that a steganography procedure is perfectly secure under Cachin (1998)'s information-theoretic model of steganography if and only if it is induced by a coupling.
We also show that, among perfectly secure procedures, a procedure maximizes information throughput if and only if it is induced by a minimum entropy coupling.
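The throughput claim follows from a standard identity, sketched below with M denoting the message, S the stegotext, and C the covertext (notation assumed here for illustration):
```latex
% Perfect security in Cachin's sense: the stegotext distribution matches the
% covertext distribution, D_{KL}(P_C \| P_S) = 0.
\begin{align*}
  I(M;S) &= H(M) + H(S) - H(M,S).
\end{align*}
% A coupling fixes the marginals P_M and P_S = P_C, so H(M) and H(S) are constants;
% maximising the throughput I(M;S) is therefore equivalent to minimising the joint
% entropy H(M,S), i.e. choosing a minimum entropy coupling.
```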
arXiv Detail & Related papers (2022-10-24T17:40:07Z)
- MLP-Hash: Protecting Face Templates via Hashing of Randomized Multi-Layer Perceptron [4.956977275061966]
Face recognition systems extract privacy-sensitive features which are stored in the system's database.
We propose a new cancelable template protection method, dubbed MLP-hash, which generates protected templates by passing the extracted features through a user-specific randomly-weighted multi-layer perceptron.
Our experiments with SOTA face recognition systems show that our method performs competitively with BioHashing and IoM Hashing.
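A rough NumPy sketch of that idea, with layer sizes, activation, and binarisation threshold chosen arbitrarily rather than taken from the paper:
```python
# Hashing via a user-specific randomly-weighted MLP: weights are drawn from the
# user's secret seed and the output is binarised into a protected template.
import numpy as np

def mlp_hash(features, user_seed, layer_sizes=(128, 64)):
    rng = np.random.default_rng(user_seed)       # user-specific secret seed
    x = features
    for out_dim in layer_sizes:
        W = rng.normal(size=(x.size, out_dim))
        x = np.maximum(W.T @ x, 0.0)             # random layer + ReLU
    return (x > np.median(x)).astype(np.uint8)   # binarised protected template

face_features = np.random.default_rng(1).normal(size=512)  # placeholder embedding
protected = mlp_hash(face_features, user_seed=1234)
```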
arXiv Detail & Related papers (2022-04-23T11:18:22Z)
- Security and Privacy Enhanced Gait Authentication with Random Representation Learning and Digital Lockers [3.3549957463189095]
Gait data captured by inertial sensors have demonstrated promising results for user authentication.
Most existing approaches store the enrolled gait pattern insecurely for matching against the probe pattern, posing critical security and privacy issues.
We present a gait cryptosystem that generates a random key for user authentication from gait data while also securing the gait pattern.
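The key-binding idea can be illustrated with a toy fuzzy-commitment-style construction; note that the paper itself uses digital lockers and learned random gait representations, so the repetition code and hash below are only a stand-in for how small pattern noise can be tolerated while the key stays hidden.
```python
# Toy fuzzy-commitment-style binding of a random key to a binary gait pattern.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
REP = 5  # repetition-code length

def encode(key_bits):
    return np.repeat(key_bits, REP)                       # repetition-code encoding

def decode(codeword):
    return (codeword.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)

def enroll(gait_bits):
    key = rng.integers(0, 2, size=len(gait_bits) // REP, dtype=np.uint8)
    helper = np.bitwise_xor(encode(key), gait_bits)       # public helper data
    return helper, hashlib.sha256(key.tobytes()).hexdigest()

def release_key(helper, probe_bits, key_hash):
    key = decode(np.bitwise_xor(helper, probe_bits))      # small errors corrected by the code
    return key if hashlib.sha256(key.tobytes()).hexdigest() == key_hash else None

gait = rng.integers(0, 2, size=100, dtype=np.uint8)       # enrolled binary gait pattern
helper, key_hash = enroll(gait)
noisy = gait.copy(); noisy[rng.choice(100, 5, replace=False)] ^= 1  # intra-user noise
print(release_key(helper, noisy, key_hash) is not None)   # expected: True
```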
arXiv Detail & Related papers (2021-08-05T06:34:42Z)
- Feature Fusion Methods for Indexing and Retrieval of Biometric Data: Application to Face Recognition with Privacy Protection [15.834050000008878]
The proposed method reduces the computational workload associated with a biometric identification transaction by 90%.
The method guarantees unlinkability, irreversibility, and renewability of the protected biometric data.
arXiv Detail & Related papers (2021-07-27T08:53:29Z)
- Deep Hashing for Secure Multimodal Biometrics [1.7188280334580195]
We present a framework for feature-level fusion that generates a secure multimodal template from each user's face and iris biometrics.
We employ a hybrid secure architecture by combining cancelable biometrics with secure sketch techniques.
The proposed approach also provides cancelability and unlinkability of the templates along with improved privacy of the biometric data.
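A toy sketch of cancelable feature-level fusion, with a random projection standing in for the paper's deep hashing network and the secure-sketch layer omitted:
```python
# Concatenate face and iris embeddings, apply a revocable user/application-specific
# random projection, and binarise into a protected multimodal template.
import numpy as np

def cancelable_fuse(face_emb, iris_emb, transform_seed, out_bits=256):
    fused = np.concatenate([face_emb, iris_emb])
    R = np.random.default_rng(transform_seed).normal(size=(out_bits, fused.size))
    return (R @ fused > 0).astype(np.uint8)      # revoke by issuing a new seed

rng = np.random.default_rng(3)
template = cancelable_fuse(rng.normal(size=512), rng.normal(size=256), transform_seed=99)
```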
arXiv Detail & Related papers (2020-12-29T14:15:05Z)
- BERT-ATTACK: Adversarial Attack Against BERT Using BERT [77.82947768158132]
Adversarial attacks on discrete data (such as text) are more challenging than attacks on continuous data (such as images).
We propose BERT-Attack, a high-quality and effective method to generate adversarial samples.
Our method outperforms state-of-the-art attack strategies in both success rate and perturb percentage.
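The masked-substitution step at the heart of such attacks can be sketched with the Hugging Face fill-mask pipeline; the word-importance ranking and sub-word handling of the full method are omitted, and victim_predict is a hypothetical stand-in for the model under attack.
```python
# Mask an (assumed) important word, let a masked language model propose fluent
# replacements, and keep one that changes the victim classifier's prediction.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def candidate_rewrites(text, target_word, top_k=10):
    masked = text.replace(target_word, fill_mask.tokenizer.mask_token, 1)
    return [p["sequence"] for p in fill_mask(masked, top_k=top_k)]

def attack(text, target_word, victim_predict):
    original = victim_predict(text)
    for rewrite in candidate_rewrites(text, target_word):
        if victim_predict(rewrite) != original:   # prediction flipped: adversarial sample
            return rewrite
    return None
```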
arXiv Detail & Related papers (2020-04-21T13:30:02Z)