On the Adversarial Inversion of Deep Biometric Representations
- URL: http://arxiv.org/abs/2304.05561v1
- Date: Wed, 12 Apr 2023 01:47:11 GMT
- Title: On the Adversarial Inversion of Deep Biometric Representations
- Authors: Gioacchino Tangari and Shreesh Keskar and Hassan Jameel Asghar and
Dali Kaafar
- Abstract summary: Biometric authentication service providers often claim that it is not possible to reverse-engineer a user's raw biometric sample.
In this paper, we investigate this claim on the specific example of deep neural network (DNN) embeddings.
We propose a two-pronged attack that first infers the original DNN by exploiting the model footprint on the embedding.
- Score: 3.804240190982696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Biometric authentication service providers often claim that it is not
possible to reverse-engineer a user's raw biometric sample, such as a
fingerprint or a face image, from its mathematical (feature-space)
representation. In this paper, we investigate this claim on the specific
example of deep neural network (DNN) embeddings. Inversion of DNN embeddings
has been investigated for explaining deep image representations or synthesizing
normalized images. Existing studies leverage full access to all layers of the
original model, as well as all possible information on the original dataset.
For the biometric authentication use case, we need to investigate this under
adversarial settings where an attacker has access to a feature-space
representation but no direct access to either the exact original dataset or
the original learned model. Instead, we assume varying degrees of the
attacker's background knowledge about the distribution of the dataset as well
as the original learned model (architecture and training process). In these
cases, we show that the attacker can exploit off-the-shelf DNN models and
public datasets to mimic the behaviour of the original learned model to
varying degrees of success, based only on the obtained representation and the
attacker's prior knowledge. We propose a two-pronged attack that first infers the original
DNN by exploiting the model footprint on the embedding, and then reconstructs
the raw data by using the inferred model. We show the practicality of the
attack on popular DNNs trained for two prominent biometric modalities, face and
fingerprint recognition. The attack can effectively infer the original
recognition model (mean accuracy 83% for faces, 86% for fingerprints), and
can craft effective biometric reconstructions that are successfully
authenticated with 1-vs-1 authentication accuracy of up to 92% for some
models.
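As a rough illustration of the pipeline the abstract describes, here is a minimal PyTorch sketch of the two prongs. It is not the authors' implementation: the classifier and decoder architectures, the candidate-encoder setup, and the training loop are hypothetical placeholders, and the decoder assumes 64x64 RGB inputs.
```python
import torch
import torch.nn as nn

# Prong 1: infer which off-the-shelf encoder produced a leaked embedding.
# A small classifier is trained on embeddings computed by each candidate
# encoder over a public dataset; at attack time it labels the leaked vector.
class FootprintClassifier(nn.Module):
    def __init__(self, embed_dim: int, num_candidates: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_candidates),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.net(embedding)  # logits over candidate encoders


# Prong 2: invert the inferred encoder with a learned decoder.
# The decoder is trained so that decoder(encoder(x)) approximates x on
# public images; it is then applied to the leaked embedding.
class Decoder(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        x = self.fc(embedding).view(-1, 128, 8, 8)
        return self.deconv(x)  # 3x64x64 reconstruction in [0, 1]


def train_decoder(encoder: nn.Module, decoder: Decoder,
                  public_images: torch.Tensor, epochs: int = 10) -> Decoder:
    """Fit the decoder against a frozen (inferred) encoder on public data."""
    optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    encoder.eval()
    for _ in range(epochs):
        with torch.no_grad():
            embeddings = encoder(public_images)
        reconstructions = decoder(embeddings)
        loss = loss_fn(reconstructions, public_images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return decoder
```
In practice, the attacker would score the leaked embedding against each candidate encoder, train the decoder on a public face or fingerprint dataset using the inferred encoder, and check the reconstruction with a standard 1-vs-1 embedding-similarity test.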
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors.
We show that this score can indicate the presence of a backdoor even when the two models have different architectures.
This technique enables the detection of backdoors in models designed for open-set classification tasks, a setting that has received little attention in the literature.
arXiv Detail & Related papers (2024-02-28T21:29:16Z)
- Presentation Attack detection using Wavelet Transform and Deep Residual Neural Net [5.425986555749844]
Biometric systems can be deceived by imposters in several ways.
Biometric images, especially of the iris and face, are vulnerable to different presentation attacks.
This research applies deep learning approaches to mitigate presentation attacks in a biometric access control system.
arXiv Detail & Related papers (2023-11-23T20:21:49Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representations in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
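The latent-code optimization idea in the anonymization entry above can be sketched roughly as follows. This is an assumption-laden illustration, not the paper's method: `generator`, `identity_model`, and `attribute_model` are hypothetical pretrained networks, and the latent code starts from random noise rather than a GAN inversion of the source image.
```python
import torch
import torch.nn.functional as F

def anonymize_latent(generator, identity_model, attribute_model, image,
                     steps: int = 300, lr: float = 0.05, latent_dim: int = 512):
    """Optimize a GAN latent code so the output no longer matches the source
    identity while its predicted facial attributes stay close to the original."""
    with torch.no_grad():
        source_id = identity_model(image)      # identity embedding of the original
        source_attrs = attribute_model(image)  # attribute predictions of the original
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        candidate = generator(z)
        id_loss = F.cosine_similarity(identity_model(candidate), source_id).mean()
        attr_loss = F.mse_loss(attribute_model(candidate), source_attrs)
        loss = id_loss + attr_loss  # push identity away, keep attributes close
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return generator(z).detach()
```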
- DeepTaster: Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in Deep Neural Networks [34.11970637801044]
We introduce DeepTaster, a novel fingerprinting technique to address scenarios where a victim's data is unlawfully used to build a suspect model.
To accomplish this, DeepTaster generates adversarial images with perturbations, transforms them into the Fourier frequency domain, and uses these transformed images to identify the dataset used in a suspect model.
arXiv Detail & Related papers (2022-11-24T11:10:54Z)
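A loose sketch of the DeepTaster-style pipeline summarized above, with assumed details (a one-step FGSM perturbation and log-magnitude spectra); the spectra would then be fed to a separate classifier that predicts which dataset the suspect model was trained on.
```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, eps: float = 0.03):
    """Craft one-step FGSM adversarial images for a suspect model."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()

def fourier_fingerprint(images):
    """Log-magnitude 2D Fourier spectra of the perturbed images, used as a
    dataset-dependent fingerprint."""
    spectra = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    return torch.log1p(spectra.abs())
```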
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models [2.1874132949602654]
Authentication systems are vulnerable to model inversion attacks where an adversary approximates the inverse of a target machine learning model.
This is because inverting a biometric model allows the attacker to produce a realistic biometric input to spoof biometric authentication systems.
We propose a new technique that drastically reduces the amount of training data necessary for model inversion attacks.
arXiv Detail & Related papers (2022-09-22T14:00:43Z)
- FBI: Fingerprinting models with Benign Inputs [17.323638042215013]
This paper tackles these challenges by proposing (i) fingerprinting schemes that are resilient to significant modifications of the models, generalizing to the notion of model families and their variants.
We achieve both goals by demonstrating that benign inputs, i.e. unmodified images, are sufficient material for both tasks.
Both approaches are experimentally validated over an unprecedented set of more than 1,000 networks.
arXiv Detail & Related papers (2022-08-05T13:55:36Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
The influence of embedding the reversible watermark on classification performance is less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
- Practical No-box Adversarial Attacks against DNNs [31.808770437120536]
We investigate no-box adversarial examples, where the attacker can access neither the model information nor the training set, and cannot query the model.
We propose three mechanisms for training with a very small dataset and find that prototypical reconstruction is the most effective.
Our approach significantly diminishes the average prediction accuracy of the system to only 15.40%, which is on par with the attack that transfers adversarial examples from a pre-trained Arcface model.
arXiv Detail & Related papers (2020-12-04T11:10:03Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
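For context, a generic GAN-based model-inversion attack of the kind the entry above builds on can be skeletonized as follows; the inversion-specific GAN and knowledge-distillation components that distinguish the cited paper are omitted, and `generator` and `private_model` are assumed, pretrained stand-ins.
```python
import torch

def invert_identity(generator, private_model, target_class: int,
                    latent_dim: int = 128, steps: int = 500, lr: float = 0.05):
    """Optimize a GAN latent code so the generated image is classified as the
    target identity by the private model (generic MI attack skeleton)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        image = generator(z)             # candidate reconstruction
        logits = private_model(image)    # private model's prediction
        loss = -logits[0, target_class]  # maximize the target logit
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return generator(z).detach()
```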
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.