A Dual-Level Cancelable Framework for Palmprint Verification and Hack-Proof Data Storage
- URL: http://arxiv.org/abs/2403.02680v1
- Date: Tue, 5 Mar 2024 06:09:35 GMT
- Title: A Dual-Level Cancelable Framework for Palmprint Verification and Hack-Proof Data Storage
- Authors: Ziyuan Yang, Ming Kang, Andrew Beng Jin Teoh, Chengrui Gao, Wen Chen, Bob Zhang, Yi Zhang
- Abstract summary: Existing systems often use cancelable technologies to protect templates, but these technologies ignore the potential risk of data leakage.
We propose a dual-level cancelable palmprint verification framework in this paper.
- Score: 28.712971971947518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, palmprints have been widely used for individual verification. The rich privacy information in palmprint data necessitates its protection to ensure security and privacy without sacrificing system performance. Existing systems often use cancelable technologies to protect templates, but these technologies ignore the potential risk of data leakage. Upon breaching the system and gaining access to the stored database, a hacker could easily manipulate the stored templates, compromising the security of the verification system. To address this issue, we propose a dual-level cancelable palmprint verification framework in this paper. Specifically, the raw template is initially encrypted using a competition hashing network with a first-level token, facilitating the end-to-end generation of cancelable templates. Different from previous works, the protected template undergoes further encryption to differentiate the second-level protected template from the first-level one. The system specifically creates a negative database (NDB) with the second-level token for dual-level protection during the enrollment stage. Reversing the NDB is NP-hard and a fine-grained algorithm for NDB generation is introduced to manage the noise and specified bits. During the verification stage, we propose an NDB matching algorithm based on matrix operation to accelerate the matching process of previous NDB methods caused by dictionary-based matching rules. This approach circumvents the need to store templates identical to those utilized for verification, reducing the risk of potential data leakage. Extensive experiments conducted on public palmprint datasets have confirmed the effectiveness and generality of the proposed framework. Upon acceptance of the paper, the code will be accessible at https://github.com/Deep-Imaging-Group/NPR.
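The abstract describes NDB matching only at a high level: entries with a few specified bits (the rest noise), reversal being NP-hard, and a matrix-operation matching step that replaces dictionary lookups. The following is an illustrative sketch of that idea, not the authors' algorithm; `generate_ndb`, `ndb_score`, and all parameters (`n_entries`, `k_specified`) are hypothetical names chosen for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_ndb(template, n_entries=200, k_specified=3):
    """Toy negative database: each entry specifies k bits and leaves the
    rest unspecified (-1). At least one specified bit is flipped relative
    to the hidden template, so no entry matches the template itself."""
    m = len(template)
    ndb = np.full((n_entries, m), -1, dtype=np.int8)
    for i in range(n_entries):
        pos = rng.choice(m, size=k_specified, replace=False)
        bits = template[pos].copy()
        bits[rng.choice(k_specified)] ^= 1  # guarantee one disagreement
        ndb[i, pos] = bits
    return ndb

def ndb_score(ndb, query):
    """Vectorized matching in the spirit of matrix-based NDB matching:
    for each entry, check whether the query disagrees with at least one
    specified bit. The genuine template disagrees with every entry by
    construction, so it scores exactly 1.0."""
    specified = ndb >= 0
    disagree = specified & (ndb != query[None, :])
    return disagree.any(axis=1).mean()

template = rng.integers(0, 2, size=64).astype(np.int8)
ndb = generate_ndb(template)
print(ndb_score(ndb, template))  # 1.0 by construction
impostor = rng.integers(0, 2, size=64).astype(np.int8)
print(ndb_score(ndb, impostor))  # strictly below 1.0 with high probability
```

The point of the vectorized `ndb_score` is that the whole database is checked with a few array operations instead of a per-entry dictionary lookup, which is the kind of speedup the abstract attributes to its matrix-operation matching; the real framework additionally binds the NDB to a second-level token and a fine-grained noise/specified-bit schedule that this sketch omits.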
Related papers
- Supervised and Unsupervised Alignments for Spoofing Behavioral Biometrics [7.021534792043867]
Biometric recognition systems are based on intrinsic properties of their users, usually encoded in high dimension representations called embeddings.
We perform spoofing attacks on two behavioral biometric systems using a set of alignment techniques.
arXiv Detail & Related papers (2024-08-14T20:46:59Z) - BioDeepHash: Mapping Biometrics into a Stable Code [3.467070674182551]
We propose a framework called BioDeepHash based on deep hashing and cryptographic hashing.
Our framework avoids storing any data that would leak part of the original biometric data.
arXiv Detail & Related papers (2024-08-07T11:37:02Z) - Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - Latent Guard: a Safety Framework for Text-to-image Generation [64.49596711025993]
Existing safety measures are either based on text blacklists, which can be easily circumvented, or harmful content classification.
We propose Latent Guard, a framework designed to improve safety measures in text-to-image generation.
Inspired by blacklist-based approaches, Latent Guard learns a latent space on top of the T2I model's text encoder, where it is possible to check the presence of harmful concepts.
arXiv Detail & Related papers (2024-04-11T17:59:52Z) - InferDPT: Privacy-Preserving Inference for Black-box Large Language Model [66.07752875835506]
InferDPT is the first practical framework for the privacy-preserving Inference of black-box LLMs.
RANTEXT is a novel differential privacy mechanism integrated into the perturbation module of InferDPT.
arXiv Detail & Related papers (2023-10-18T18:00:11Z) - Privacy-Preserving Credit Card Fraud Detection using Homomorphic Encryption [0.0]
This paper proposes a system for private fraud detection on encrypted transactions using homomorphic encryption.
Two models, XGBoost and a feedforward neural network, are trained as fraud detectors on data.
The XGBoost model has better performance, with inference latency as low as 6 ms, compared to 296 ms for the neural network.
arXiv Detail & Related papers (2022-11-12T14:28:17Z) - Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z) - Quantum Proofs of Deletion for Learning with Errors [91.3755431537592]
We construct the first fully homomorphic encryption scheme with certified deletion.
Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors distribution in the form of a quantum state was deleted.
arXiv Detail & Related papers (2022-03-03T10:07:32Z) - Security and Privacy Enhanced Gait Authentication with Random Representation Learning and Digital Lockers [3.3549957463189095]
Gait data captured by inertial sensors have demonstrated promising results on user authentication.
Most existing approaches store the enrolled gait pattern insecurely for matching, posing critical security and privacy issues.
We present a gait cryptosystem that generates a random key from gait data for user authentication while securing the gait pattern.
arXiv Detail & Related papers (2021-08-05T06:34:42Z) - Feature Fusion Methods for Indexing and Retrieval of Biometric Data: Application to Face Recognition with Privacy Protection [15.834050000008878]
The proposed method reduces the computational workload associated with a biometric identification transaction by 90%.
The method guarantees unlinkability, irreversibility, and renewability of the protected biometric data.
arXiv Detail & Related papers (2021-07-27T08:53:29Z) - Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth supervised learning has been proven as one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.