Defend Data Poisoning Attacks on Voice Authentication
- URL: http://arxiv.org/abs/2209.04547v2
- Date: Fri, 7 Jul 2023 19:40:13 GMT
- Title: Defend Data Poisoning Attacks on Voice Authentication
- Authors: Ke Li, Cameron Baird and Dan Lin
- Abstract summary: Machine learning attacks are putting voice authentication systems at risk.
We propose a more robust defense method, called Guardian, which is a convolutional neural network-based discriminator.
Our approach is able to distinguish about 95% of attacked accounts from normal accounts, which is much more effective than existing approaches with only 60% accuracy.
- Score: 6.160281428772401
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the advances in deep learning, speaker verification has achieved very
high accuracy and is gaining popularity as a type of biometric authentication
option in many scenarios of our daily life, especially the growing market of web
services. Compared to traditional passwords, "vocal passwords" are much more
convenient as they relieve people from memorizing different passwords. However,
new machine learning attacks are putting these voice authentication systems at
risk. Without a strong security guarantee, attackers could access legitimate
users' web accounts by fooling the deep neural network (DNN) based voice
recognition models. In this paper, we demonstrate an easy-to-implement data
poisoning attack on the voice authentication system, which can hardly be
detected by existing defense mechanisms. Thus, we propose a more robust defense
method, called Guardian, which is a convolutional neural network-based
discriminator. The Guardian discriminator integrates a series of novel
techniques including bias reduction, input augmentation, and ensemble learning.
Our approach is able to distinguish about 95% of attacked accounts from normal
accounts, which is much more effective than existing approaches with only 60%
accuracy.
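The abstract describes Guardian only at a high level: a convolutional neural network-based discriminator combined with bias reduction, input augmentation, and ensemble learning. The snippet below is a minimal sketch of that idea, not the authors' implementation; the input representation (a fixed-size per-account feature map), the layer sizes, the noise augmentation, and the ensemble size are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GuardianDiscriminator(nn.Module):
    """Minimal CNN-based discriminator sketch: scores an account's
    voice-feature map as 'poisoned' vs. 'normal'. Layer sizes are
    illustrative, not taken from the paper."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)  # logit: >0 suggests a poisoned account


def ensemble_predict(models, x: torch.Tensor, n_augment: int = 5) -> torch.Tensor:
    """Input augmentation + ensemble averaging, as hinted at in the abstract:
    each model scores several noise-perturbed copies of the account features
    and the averaged probability is returned."""
    probs = []
    for model in models:
        model.eval()
        with torch.no_grad():
            for _ in range(n_augment):
                x_aug = x + 0.01 * torch.randn_like(x)  # placeholder augmentation
                probs.append(torch.sigmoid(model(x_aug)))
    return torch.stack(probs).mean(dim=0)


if __name__ == "__main__":
    # Hypothetical account feature map: 1 channel, 32x32 (shape is an assumption).
    account_features = torch.randn(1, 1, 32, 32)
    ensemble = [GuardianDiscriminator() for _ in range(3)]
    print(ensemble_predict(ensemble, account_features))
```

Averaging sigmoid outputs over perturbed inputs and several independently trained discriminators is one straightforward way to realize "input augmentation" and "ensemble learning"; the paper may combine these components differently.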
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- KeyDetect -- Detection of anomalies and user based on Keystroke Dynamics [0.0]
Cyber attacks can easily expose sensitive data such as credit card details and social security numbers.
Currently, various methods, such as two-step verification, are used to stop cyber attacks.
We propose a technique that uses a user's keystroke dynamics (typing pattern) to authenticate the genuine user.
arXiv Detail & Related papers (2023-04-08T09:00:07Z)
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z)
- On Deep Learning in Password Guessing, a Survey [4.1499725848998965]
This paper compares various deep learning-based password guessing approaches that do not require domain knowledge or assumptions about users' password structures and combinations.
We propose a promising experimental research design that uses variations of IWGAN for password guessing under non-targeted offline attacks.
arXiv Detail & Related papers (2022-08-22T15:48:35Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Dictionary Attacks on Speaker Verification [15.00667613025837]
We introduce a generic formulation of the attack that can be used with various speech representations and threat models.
The attacker uses adversarial optimization to maximize raw similarity of speaker embeddings between a seed speech sample and a proxy population.
We show that, combined with multiple attempts, this attack raises even more serious concerns about the security of these systems.
arXiv Detail & Related papers (2022-04-24T15:31:41Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples (a minimal sketch of this score-difference check appears after this list).
Our code will be made open-source so that future work can compare against it.
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- Speaker De-identification System using Autoencoders and Adversarial Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning their training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
- Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems [44.305353565981015]
This paper considers several state-of-the-art adversarial attacks on a deep speaker recognition system and employs strong defense methods as countermeasures.
Experiments show that speaker recognition systems are vulnerable to adversarial attacks, and the strongest attacks can reduce the accuracy of the system from 94% to as low as 0%.
arXiv Detail & Related papers (2020-08-18T00:58:19Z)
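The neural-vocoder entry above treats the gap between the ASV score of the original test utterance and the ASV score of its vocoder re-synthesis as the detection signal. Below is a minimal sketch of that score-difference check, not the authors' implementation; the `encoder` (a fixed random projection standing in for a trained speaker-embedding network), the identity `vocoder`, and the `threshold` value are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def asv_score(encoder, enroll_wave, test_wave):
    """ASV similarity: cosine similarity between speaker embeddings.
    `encoder` stands in for any trained speaker-embedding model."""
    return F.cosine_similarity(encoder(enroll_wave), encoder(test_wave), dim=-1)

def is_adversarial(encoder, vocoder, enroll_wave, test_wave, threshold=0.1):
    """Flag `test_wave` as adversarial if re-synthesizing it with a neural
    vocoder shifts the ASV score by more than `threshold` (the threshold is
    an assumed value, not taken from the paper)."""
    s_orig = asv_score(encoder, enroll_wave, test_wave)
    s_resyn = asv_score(encoder, enroll_wave, vocoder(test_wave))
    return (s_orig - s_resyn).abs() > threshold

if __name__ == "__main__":
    # Stand-ins for illustration only: a fixed random projection as the
    # "speaker encoder" and an identity function as the "vocoder".
    proj = torch.randn(64, 16000)
    encoder = lambda wave: wave @ proj.T
    vocoder = lambda wave: wave
    enroll, test = torch.randn(1, 16000), torch.randn(1, 16000)
    print(is_adversarial(encoder, vocoder, enroll, test))
```

The intuition, as the entry states, is that genuine speech survives re-synthesis with little change to its ASV score, while adversarial perturbations tend not to, so a large score shift flags a suspicious sample.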
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.