Acoustic Side Channel Attack on Keyboards Based on Typing Patterns
- URL: http://arxiv.org/abs/2403.08740v1
- Date: Wed, 13 Mar 2024 17:44:15 GMT
- Title: Acoustic Side Channel Attack on Keyboards Based on Typing Patterns
- Authors: Alireza Taheritajar, Reza Rahaeimehr
- Abstract summary: Side-channel attacks on keyboards can bypass security measures in many systems that use keyboards as one of the input devices.
This paper proposes an applicable method that takes into account the user's typing pattern in a realistic environment.
Our method achieved an average success rate of 43% across all our case studies when considering real-world scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Acoustic side-channel attacks on keyboards can bypass security measures in many systems that use keyboards as one of the input devices. These attacks aim to reveal users' sensitive information by targeting the sounds made by their keyboards as they type. Most existing approaches in this field ignore the negative impacts of typing patterns and environmental noise in their results. This paper seeks to address these shortcomings by proposing an applicable method that takes into account the user's typing pattern in a realistic environment. Our method achieved an average success rate of 43% across all our case studies when considering real-world scenarios.
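Attacks of this kind typically begin by isolating individual keystroke events from a continuous recording before any classification is attempted. The sketch below is a minimal, illustrative onset detector using short-time energy thresholding; the function name, frame size, and threshold are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def detect_keystrokes(audio, sr, frame_ms=10, threshold=5.0):
    """Return candidate keystroke onset times (seconds) via short-time energy.

    A frame counts as an onset when its energy first rises above
    `threshold` times the median frame energy (a rough noise floor).
    """
    frame = int(sr * frame_ms / 1000)
    n_frames = len(audio) // frame
    energy = np.array([
        np.sum(audio[i * frame:(i + 1) * frame] ** 2) for i in range(n_frames)
    ])
    floor = np.median(energy) + 1e-12  # guard against all-silent input
    onsets, above = [], False
    for i, e in enumerate(energy):
        if not above and e > threshold * floor:
            onsets.append(i * frame / sr)  # onset time in seconds
            above = True
        elif above and e <= threshold * floor:
            above = False
    return onsets
```

In a real attack, each extracted clip would then be passed to a classifier; robustness to the environmental noise the paper highlights depends heavily on how the noise floor is estimated.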
Related papers
- TapType: Ten-finger text entry on everyday surfaces via Bayesian inference [32.33746932895968]
TapType is a mobile text entry system for full-size typing on passive surfaces.
From the inertial sensors inside a band on either wrist, TapType decodes and relates surface taps to a traditional QWERTY keyboard layout.
arXiv Detail & Related papers (2024-10-08T12:58:31Z)
- OverHear: Headphone based Multi-sensor Keystroke Inference [1.9915929143641455]
We develop a keystroke inference framework that leverages both acoustic and accelerometer data from headphones.
We achieve top-5 key prediction accuracy of around 80% for mechanical keyboards and around 60% for membrane keyboards.
Results highlight the effectiveness and limitations of our approach in the context of real-world scenarios.
arXiv Detail & Related papers (2023-11-04T00:48:20Z)
- A Survey on Acoustic Side Channel Attacks on Keyboards [0.0]
Mechanical keyboards are susceptible to acoustic side-channel attacks.
Researchers have developed methods that can extract typed keystrokes from ambient noise.
With the improvement of microphone technology, the potential vulnerability to acoustic side-channel attacks also increases.
arXiv Detail & Related papers (2023-09-20T02:26:53Z)
- How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple, generic, and generalisable framework in which the key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z)
- Typing on Any Surface: A Deep Learning-based Method for Real-Time Keystroke Detection in Augmented Reality [4.857109990499532]
Mid-air keyboard interfaces, wireless keyboards, and voice input either suffer from poor ergonomic design or limited accuracy, or are simply embarrassing to use in public.
This paper proposes and validates a deep-learning-based approach that enables AR applications to accurately predict keystrokes from the user-perspective RGB video stream.
A two-stage model, combining an off-the-shelf hand landmark extractor with a novel adaptive Convolutional Recurrent Neural Network (C-RNN), was trained.
arXiv Detail & Related papers (2023-08-31T23:58:25Z)
- A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards [6.230751621285321]
This paper presents a state-of-the-art deep learning model that classifies laptop keystrokes using a smartphone's integrated microphone.
When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model.
We discuss a series of mitigation methods to protect users against this class of attacks.
arXiv Detail & Related papers (2023-08-02T10:51:36Z)
- To Wake-up or Not to Wake-up: Reducing Keyword False Alarm by Successive Refinement [58.96644066571205]
We show that existing deep keyword spotting mechanisms can be improved by Successive Refinement.
Across multiple models ranging in size from 13K to 2.41M parameters, the successive refinement technique reduces false alarms (FA) by up to a factor of 8.
Our proposed approach is "plug-and-play" and can be applied to any deep keyword spotting model.
arXiv Detail & Related papers (2023-04-06T23:49:29Z)
- Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) that makes it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z)
- Deepfake audio detection by speaker verification [79.99653758293277]
We propose a new detection approach that leverages only the biometric characteristics of the speaker, with no reference to specific manipulations.
The proposed approach can be implemented based on off-the-shelf speaker verification tools.
We test several such solutions on three popular test sets, obtaining good performance, high generalization ability, and high robustness to audio impairment.
arXiv Detail & Related papers (2022-09-28T13:46:29Z)
- Crack detection using tap-testing and machine learning techniques to prevent potential rockfall incidents [68.8204255655161]
This paper proposes a system for the automated inspection of potential rockfall sites.
A robot is used to repeatedly strike or tap on the rock surface.
The sound from the tapping is collected by the robot and classified with the intent of identifying rocks that are broken and prone to fall.
arXiv Detail & Related papers (2021-10-10T22:53:36Z)
- Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT [95.88293021131035]
It is unclear, however, how the models will perform in realistic scenarios where natural rather than malicious adversarial instances often exist.
This work systematically explores the robustness of BERT, the state-of-the-art Transformer-style model in NLP, in dealing with noisy data.
arXiv Detail & Related papers (2020-02-27T22:07:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.