Exploit the Leak: Understanding Risks in Biometric Matchers
- URL: http://arxiv.org/abs/2307.13717v5
- Date: Tue, 30 Jul 2024 08:47:54 GMT
- Title: Exploit the Leak: Understanding Risks in Biometric Matchers
- Authors: Axel Durbet, Kevin Thiry-Atighehchi, Dorine Chagnon, Paul-Marie Grollemund
- Abstract summary: In a biometric authentication or identification system, the matcher compares a stored and a fresh template to determine whether there is a match.
For better compliance with privacy legislation, the matcher can be built upon a privacy-preserving distance.
This paper provides an analysis of information leakage during distance evaluation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a biometric authentication or identification system, the matcher compares a stored and a fresh template to determine whether there is a match. This assessment is based on both a similarity score and a predefined threshold. For better compliance with privacy legislation, the matcher can be built upon a privacy-preserving distance. Beyond the binary output ("yes" or "no"), most schemes may perform more precise computations, e.g., the value of the distance. Such precise information is prone to leakage even when not returned by the system. This can occur due to a malware infection or the use of a weakly privacy-preserving distance, exemplified by side-channel attacks or partially obfuscated designs. This paper provides an analysis of information leakage during distance evaluation. We provide a catalog of information leakage scenarios with their impacts on data privacy. Each scenario gives rise to unique attacks with impacts quantified in terms of computational costs, thereby providing a better understanding of the security level.
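A minimal sketch of this setting, assuming binary templates compared under the Hamming distance (one common instantiation; the paper treats privacy-preserving distances generally). All function names are illustrative.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length binary templates."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match(stored: bytes, fresh: bytes, threshold: int) -> bool:
    """Ideal matcher: only the accept/reject bit leaves the system."""
    return hamming_distance(stored, fresh) <= threshold

def leaky_match(stored: bytes, fresh: bytes, threshold: int) -> tuple[bool, int]:
    """Same decision, but the exact distance is materialized internally and
    could be observed (malware, side channels, partially obfuscated designs)."""
    d = hamming_distance(stored, fresh)  # the value at risk of leaking
    return d <= threshold, d
```

If the distance itself leaks, an attacker can, for instance, hill-climb: flip one probe bit at a time and keep any flip that reduces the distance, converging on an accepted template in a number of queries roughly linear in the template length rather than exponential.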
Related papers
- Private Counterfactual Retrieval [34.11302393278422]
Transparency and explainability are two extremely important aspects to be considered when employing black-box machine learning models.
Providing counterfactual explanations is one way of catering to this requirement.
We propose multiple schemes inspired by private information retrieval (PIR) techniques.
arXiv Detail & Related papers (2024-10-17T17:45:07Z)
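For background on the PIR building block mentioned in the entry above, here is the textbook two-server XOR-based PIR construction; this is a classic sketch, not necessarily one of the schemes proposed in that paper.

```python
import secrets

# Two-server XOR PIR over n equal-length records: the client sends each
# (non-colluding) server a random-looking subset mask; the masks differ only
# at the desired index, so neither server learns which record was fetched.

def query_masks(n: int, index: int):
    mask_a = [secrets.randbits(1) for _ in range(n)]
    mask_b = mask_a.copy()
    mask_b[index] ^= 1  # the two masks differ only at the target index
    return mask_a, mask_b

def server_answer(db: list[int], mask: list[int]) -> int:
    ans = 0
    for record, bit in zip(db, mask):
        if bit:
            ans ^= record
    return ans

db = [0x11, 0x22, 0x33, 0x44]
ma, mb = query_masks(len(db), index=2)
# XOR of the two answers cancels everything except the target record.
assert server_answer(db, ma) ^ server_answer(db, mb) == db[2]
```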
- Is merging worth it? Securely evaluating the information gain for causal dataset acquisition [9.373086204998348]
We introduce the first cryptographically secure information-theoretic approach for quantifying the value of a merge.
We do this by evaluating the Expected Information Gain (EIG) and utilising multi-party computation to ensure it can be securely computed without revealing any raw data.
arXiv Detail & Related papers (2024-09-11T12:17:01Z)
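As background for the entry above, a plain (non-secure) computation of the Expected Information Gain for a discrete parameter; that paper's contribution is evaluating such a statistic under multi-party computation, which this sketch does not attempt.

```python
import math

# EIG = H(prior) - E_y[H(posterior | y)], i.e. the mutual information
# between the parameter theta and the observation y.

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def eig(prior, likelihood):
    """prior[i] = P(theta_i); likelihood[i][j] = P(y_j | theta_i)."""
    n, m = len(prior), len(likelihood[0])
    p_y = [sum(prior[i] * likelihood[i][j] for i in range(n)) for j in range(m)]
    expected_posterior_h = 0.0
    for j, py in enumerate(p_y):
        if py == 0:
            continue
        posterior = [prior[i] * likelihood[i][j] / py for i in range(n)]
        expected_posterior_h += py * entropy(posterior)
    return entropy(prior) - expected_posterior_h

# Example: a fairly informative observation about a binary parameter.
print(eig([0.5, 0.5], [[0.9, 0.1], [0.2, 0.8]]))  # ~0.4 bits
```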
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors.
We show that this score can be an indicator for the presence of a backdoor despite the models being of different architectures.
This technique allows for the detection of backdoors on models designed for open-set classification tasks, a setting that is little studied in the literature.
arXiv Detail & Related papers (2024-02-28T21:29:16Z)
- How Much Does Each Datapoint Leak Your Privacy? Quantifying the Per-datum Membership Leakage [13.097161185372153]
We study per-datum Membership Inference Attacks (MIAs), where an attacker aims to infer whether a fixed target datum has been included in the input dataset of an algorithm and thus violates privacy.
We quantify the per-datum membership leakage for the empirical mean, and show that it depends on the Mahalanobis distance between the target datum and the data-generating distribution.
Our experiments demonstrate the impacts of the leakage score, the sub-sampling ratio and the noise scale on the per-datum membership leakage as indicated by the theory.
arXiv Detail & Related papers (2024-02-15T16:30:55Z)
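The entry above ties per-datum leakage of the empirical mean to the Mahalanobis distance between the target datum and the data-generating distribution; a minimal sketch of that distance, with illustrative Gaussian parameters:

```python
import numpy as np

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Distance of x from a distribution with the given mean and covariance."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

mean = np.zeros(2)
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
outlier = np.array([4.0, -3.0])
typical = np.array([0.5, 0.2])

# Per the result above, datapoints far from the distribution (large distance)
# leak more membership information than typical ones.
print(mahalanobis(outlier, mean, cov), mahalanobis(typical, mean, cov))
```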
- Protect Your Score: Contact Tracing With Differential Privacy Guarantees [68.53998103087508]
We argue that privacy concerns currently hold back deployment.
We propose a contact tracing algorithm with differential privacy guarantees against this attack.
Especially for realistic test scenarios, we achieve a two- to ten-fold reduction in the infection rate of the virus.
arXiv Detail & Related papers (2023-12-18T11:16:33Z)
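As an illustration of the differential privacy guarantee mentioned above, a generic Laplace mechanism for releasing a contact count (a standard construction, not necessarily that paper's exact algorithm):

```python
import numpy as np

def dp_release(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: noise scale = sensitivity / epsilon gives
    # epsilon-differential privacy; sensitivity 1 assumes adding or removing
    # one contact changes the count by at most 1.
    return true_count + np.random.laplace(0.0, sensitivity / epsilon)

print(dp_release(true_count=7, epsilon=0.5))  # noisy score, e.g. ~7 +/- a few
```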
- Untargeted Near-collision Attacks on Biometrics: Real-world Bounds and Theoretical Limits [0.0]
We focus on untargeted attacks that can be carried out both online and offline, and in both identification and verification modes.
We use the False Match Rate (FMR) and the False Positive Identification Rate (FPIR) to address the security of these systems.
Studying this metric space, together with the system parameters, gives the complexity of untargeted attacks and the probability of a near-collision.
arXiv Detail & Related papers (2023-04-04T07:17:31Z)
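The entry above uses the False Match Rate to bound attack effort; the first-order geometric-trial estimates look as follows (that paper derives sharper bounds):

```python
# Untargeted online guessing against a verifier with false match rate FMR:
# each attempt succeeds independently with probability FMR (approximation).

def expected_attempts(fmr: float) -> float:
    """Mean number of attempts until the first false match."""
    return 1.0 / fmr

def success_probability(fmr: float, attempts: int) -> float:
    """Chance of at least one false match within a budget of attempts."""
    return 1.0 - (1.0 - fmr) ** attempts

fmr = 1e-4  # an illustrative operating point
print(expected_attempts(fmr))          # 10,000 attempts on average
print(success_probability(fmr, 5000))  # ~0.39 after 5,000 attempts
```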
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
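A heavily hedged illustration of the idea in the entry above, communicating a message that mixes information from both vectors rather than a raw node-level embedding; this random-projection mixing is not PPGM's actual design, only a sketch of why a mixed message is harder to invert.

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate(h_self: np.ndarray, h_other: np.ndarray) -> np.ndarray:
    mixed = np.concatenate([h_self, h_other])       # information from both vectors
    proj = rng.normal(size=(len(mixed) // 2, len(mixed)))
    # The wide projection maps 16 dims to 8, so neither input is uniquely
    # recoverable from the message alone; real schemes need stronger guarantees.
    return proj @ mixed

msg = obfuscate(rng.normal(size=8), rng.normal(size=8))
print(msg.shape)  # (8,)
```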
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Mitigating Leakage in Federated Learning with Trusted Hardware [0.0]
In federated learning, multiple parties collaborate in order to train a global model over their respective datasets.
Partial information may still be leaked across parties if this is done non-judiciously.
We propose two secure versions relying on trusted execution environments.
arXiv Detail & Related papers (2020-11-10T07:22:51Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)