Practical Acoustic Eavesdropping On Typed Passphrases
- URL: http://arxiv.org/abs/2503.16719v2
- Date: Mon, 07 Apr 2025 10:07:08 GMT
- Title: Practical Acoustic Eavesdropping On Typed Passphrases
- Authors: Darren Fürst, Andreas Aßmuth
- Abstract summary: This paper exploits keyboard acoustic emanations to infer typed natural-language passphrases via unsupervised learning. The approach also applies to longer messages, such as confidential emails, where the margin for error is much greater. Cross-correlation audio preprocessing outperforms methods like mel-frequency cepstral coefficients and fast Fourier transforms in keystroke clustering.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloud services have become an essential infrastructure for enterprises and individuals. Access to these cloud services is typically governed by Identity and Access Management systems, where user authentication often relies on passwords. While best practices dictate the implementation of multi-factor authentication, in reality many users remain protected solely by passwords. This reliance on passwords creates a significant vulnerability, as these credentials can be compromised through various means, including side-channel attacks. This paper exploits keyboard acoustic emanations to infer typed natural-language passphrases via unsupervised learning, requiring no prior training data. Whilst this work focuses on short passphrases, the approach also applies to longer messages, such as confidential emails, where the margin for error is much greater than with passphrases, making the attack even more effective in such a setting. Unlike traditional attacks that require physical access to the target device, acoustic side-channel attacks can be executed from within the vicinity of the target, without the user's knowledge, offering a worthwhile avenue for malicious actors. Our findings replicate and extend previous work, confirming that cross-correlation audio preprocessing outperforms methods like mel-frequency cepstral coefficients and fast Fourier transforms in keystroke clustering. Moreover, we show that partial passphrase recovery through clustering, combined with a dictionary attack, enables faster-than-brute-force attacks, further emphasizing the risks posed by this attack vector.
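To make the reported pipeline concrete, here is a minimal sketch of the clustering stage: pairwise cross-correlation similarity between isolated keystroke recordings, followed by agglomerative clustering so that, ideally, each cluster corresponds to one key. The normalization, the clustering method, and all names are assumptions of this sketch, not the authors' implementation; the abstract only establishes that cross-correlation preprocessing outperforms MFCC and FFT features.

```python
# Minimal sketch (assumptions noted above): cluster keystroke audio clips by
# the peak of their normalized cross-correlation, then cut an agglomerative
# dendrogram into one cluster per hypothesized key.
import numpy as np
from scipy.signal import correlate
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def xcorr_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Peak normalized cross-correlation of two keystroke clips, in [-1, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    peak = correlate(a, b, mode="full").max()
    return float(peak / np.sqrt(len(a) * len(b)))

def cluster_keystrokes(clips: list[np.ndarray], n_keys: int) -> np.ndarray:
    """Assign each clip a cluster label; ideally one cluster per key."""
    n = len(clips)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = 1.0 - xcorr_similarity(clips[i], clips[j])
            dist[i, j] = dist[j, i] = d
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=n_keys, criterion="maxclust")
```

The resulting label sequence is what a dictionary attack can then search against: any passphrase whose letter-repetition pattern is consistent with the cluster labels is a candidate, which is why even partial recovery narrows the search space well below brute force.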
Related papers
- Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding
Existing methods typically analyze target text in isolation or solely with non-member contexts.
We propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts (a hedged sketch of this idea follows the entry).
arXiv Detail & Related papers (2024-09-05T09:10:38Z) - Exploiting Leakage in Password Managers via Injection Attacks [16.120271337898235]
- Exploiting Leakage in Password Managers via Injection Attacks
This work explores injection attacks against password managers.
In this setting, the adversary controls their own application client, which they use to "inject" chosen payloads into a victim's client, for example by sharing credentials with them.
arXiv Detail & Related papers (2024-08-13T17:45:12Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities of FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - Conditional Generative Adversarial Network for keystroke presentation
attack [0.0]
We study a new approach for mounting a presentation attack against a keystroke authentication system.
The idea is to use a Conditional Generative Adversarial Network (cGAN) to generate synthetic keystroke data for impersonating an authorized user (a hedged sketch follows this entry).
Results indicate that the cGAN can effectively generate keystroke dynamics patterns that can be used for deceiving keystroke authentication systems.
arXiv Detail & Related papers (2022-12-16T12:45:16Z) - Defend Data Poisoning Attacks on Voice Authentication [6.160281428772401]
- Defend Data Poisoning Attacks on Voice Authentication
Machine learning attacks are putting voice authentication systems at risk.
We propose a more robust defense method, called Guardian, which is a convolutional neural network-based discriminator (a hedged sketch follows this entry).
Our approach distinguishes about 95% of attacked accounts from normal accounts, far more effective than existing approaches, which reach only about 60% accuracy.
arXiv Detail & Related papers (2022-09-09T22:48:35Z) - On Deep Learning in Password Guessing, a Survey [4.1499725848998965]
- On Deep Learning in Password Guessing, a Survey
This paper compares various deep learning-based password guessing approaches that do not require domain knowledge or assumptions about users' password structures and combinations.
We also propose a promising experimental design that applies variations of IWGAN to password guessing under non-targeted offline attacks.
arXiv Detail & Related papers (2022-08-22T15:48:35Z) - GNPassGAN: Improved Generative Adversarial Networks For Trawling Offline
Password Guessing [5.165256397719443]
This paper reviews various deep learning-based password guessing approaches.
It also introduces GNPassGAN, a password guessing tool built on generative adversarial networks for trawling offline attacks.
In comparison to the state-of-the-art PassGAN model, GNPassGAN is capable of guessing 88.03% more passwords and generating 31.69% fewer duplicates.
arXiv Detail & Related papers (2022-08-14T23:51:52Z) - Learning-based Hybrid Local Search for the Hard-label Textual Attack [53.92227690452377]
We consider a rarely investigated but more rigorous setting, the hard-label attack, in which the attacker can only access the predicted label.
We propose a novel hard-label attack, the Learning-based Hybrid Local Search (LHLS) algorithm (a generic sketch of the hard-label setting follows this entry).
Our LHLS significantly outperforms existing hard-label attacks in both attack performance and adversary quality.
arXiv Detail & Related papers (2022-01-20T14:16:07Z) - Prototype-supervised Adversarial Network for Targeted Attack of Deep
- Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing
Deep hashing networks are vulnerable to adversarial examples.
We propose a novel prototype-supervised adversarial network (ProS-GAN).
To the best of our knowledge, this is the first generation-based method to attack deep hashing networks.
arXiv Detail & Related papers (2021-05-17T00:31:37Z) - Speaker De-identification System using Autoencoders and Adversarial
Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z) - Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data (a hedged sketch follows this entry).
We also demonstrate that existing backdoor attacks cannot be directly adapted to attack speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)