Privacy Preserving Machine Learning for Behavioral Authentication
Systems
- URL: http://arxiv.org/abs/2309.13046v1
- Date: Thu, 31 Aug 2023 19:15:26 GMT
- Title: Privacy Preserving Machine Learning for Behavioral Authentication
Systems
- Authors: Md Morshedul Islam and Md Abdur Rafiq
- Abstract summary: A behavioral authentication (BA) system uses the behavioral characteristics of users to verify their identity claims.
Similar to other neural network (NN) architectures, the NN classifier of the BA system is vulnerable to privacy attacks.
We introduce an ML-based privacy attack, and our proposed system is robust against this and other privacy and security attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A behavioral authentication (BA) system uses the behavioral characteristics
of users to verify their identity claims. A BA verification algorithm can be
constructed by training a neural network (NN) classifier on users' profiles.
The trained NN model classifies the presented verification data, and if the
classification matches the claimed identity, the verification algorithm accepts
the claim. This classification-based approach removes the need to maintain a
profile database. However, similar to other NN architectures, the NN classifier
of the BA system is vulnerable to privacy attacks. To protect the privacy of
the training and test data used in an NN, various techniques are widely used. In
this paper, we focus on a non-cryptographic approach and use random projection
(RP) to ensure data privacy in an NN model. RP is a
distance-preserving transformation based on a random matrix. Before sharing the
profiles with the verifier, users will transform their profiles by RP and keep
their matrices secret. To reduce the computation load in RP, we use sparse
random projection, which is very effective for low-compute devices. Along with
correctness and security properties, our system can ensure the changeability
property of the BA system. We also introduce an ML-based privacy attack, and
our proposed system is robust against this and other privacy and security
attacks. We implemented our approach on three existing BA systems and achieved
an FRR below 2.0% and an FAR below 1.0%. Moreover, the machine-learning-based
privacy attacker can recover only 3.0% to 12.0% of the features from a portion
of the projected profiles. These recovered features are not sufficient to
reveal details about the users' behavioral patterns or to be used in a
subsequent attack. Our approach is general and can be used
in other NN-based BA systems as well as in traditional biometric systems.
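The sparse random projection the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration of an Achlioptas-style sparse RP, not the paper's implementation; the dimensions, sparsity parameter s, and seeds below are illustrative assumptions:

```python
import numpy as np

def sparse_rp_matrix(d, k, s=3, seed=0):
    """Sparse random projection matrix mapping d dims to k dims.
    Entries are +/-sqrt(s/k) with probability 1/(2s) each, 0 otherwise,
    so squared distances are preserved in expectation."""
    rng = np.random.default_rng(seed)
    return rng.choice([np.sqrt(s / k), 0.0, -np.sqrt(s / k)],
                      size=(d, k),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

# Hypothetical user profiles: 200 behavioral features projected to 50 dims.
d, k = 200, 50
R = sparse_rp_matrix(d, k, seed=42)   # the user's secret matrix
profile_a = np.random.default_rng(1).normal(size=d)
profile_b = np.random.default_rng(2).normal(size=d)

# Pairwise distance is approximately preserved after projection,
# so the verifier can compare projected profiles without seeing raw features.
orig = np.linalg.norm(profile_a - profile_b)
proj = np.linalg.norm(profile_a @ R - profile_b @ R)
print(f"original distance {orig:.2f}, projected distance {proj:.2f}")
```

Because most entries of R are zero (two-thirds, for s=3), the projection costs only about d*k/s multiplications, which is what makes the scheme attractive on low-compute devices.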
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - One-shot lip-based biometric authentication: extending behavioral
features with authentication phrase information [3.038642416291856]
Lip-based biometric authentication (LBBA) is an authentication method based on a person's lip movements during speech in the form of video data captured by a camera sensor.
LBBA can utilize both physical and behavioral characteristics of lip movements without requiring any additional sensory equipment apart from an RGB camera.
arXiv Detail & Related papers (2023-08-14T05:34:36Z) - Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are not tight:
they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate the attacks to graph properties, the obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Multi-class Classifier based Failure Prediction with Artificial and
Anonymous Training for Data Privacy [0.0]
A neural network based multi-class classifier is developed for failure prediction.
The proposed mechanism completely decouples the data set used for training process from the actual data which is kept private.
Results show high accuracy in failure prediction under different parameter configurations.
arXiv Detail & Related papers (2022-09-06T07:53:33Z) - Evaluation of Neural Networks Defenses and Attacks using NDCG and
Reciprocal Rank Metrics [6.6389732792316]
We present two metrics which are specifically designed to measure the effect of attacks, or the recovery effect of defenses, on the output of neural networks in classification tasks.
Inspired by the normalized discounted cumulative gain and the reciprocal rank metrics used in information retrieval literature, we treat the neural network predictions as ranked lists of results.
Compared to the common classification metrics, our proposed metrics demonstrate superior informativeness and distinctiveness.
arXiv Detail & Related papers (2022-01-10T12:54:45Z) - PASS: Protected Attribute Suppression System for Mitigating Bias in Face
Recognition [55.858374644761525]
Face recognition networks encode information about sensitive attributes while being trained for identity classification.
Existing bias mitigation approaches require end-to-end training and are unable to achieve high verification accuracy.
We present a descriptors-based adversarial de-biasing approach called Protected Attribute Suppression System (PASS).
PASS can be trained on top of descriptors obtained from any previously trained high-performing network to classify identities and simultaneously reduce encoding of sensitive attributes.
arXiv Detail & Related papers (2021-08-09T00:39:22Z) - Security and Privacy Enhanced Gait Authentication with Random
Representation Learning and Digital Lockers [3.3549957463189095]
Gait data captured by inertial sensors have demonstrated promising results on user authentication.
Most existing approaches store the enrolled gait pattern insecurely for matching, thus posing critical security and privacy issues.
We present a gait cryptosystem that generates from gait data the random key for user authentication, meanwhile, secures the gait pattern.
arXiv Detail & Related papers (2021-08-05T06:34:42Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Towards Probabilistic Verification of Machine Unlearning [30.892906429582904]
We propose a formal framework to study the design of verification mechanisms for data deletion requests.
We show that our approach has minimal effect on the machine learning service's accuracy but provides high confidence verification of unlearning.
arXiv Detail & Related papers (2020-03-09T16:39:46Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.