Acoustic Side-Channel Attacks on a Computer Mouse
- URL: http://arxiv.org/abs/2505.02725v1
- Date: Mon, 05 May 2025 15:26:29 GMT
- Title: Acoustic Side-Channel Attacks on a Computer Mouse
- Authors: Mauro Conti, Marin Duroyon, Gabriele Orazi, Gene Tsudik
- Abstract summary: This paper considers security leakage via acoustic signals emanating from normal mouse usage. We first confirm the feasibility of such attacks with a proof-of-concept attack that classifies four mouse movements with 97% accuracy in a controlled environment. We then evolve the attack towards discerning twelve unique mouse movements, using a smartphone to record the experiment.
- Score: 23.318362669158418
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Acoustic Side-Channel Attacks (ASCAs) extract sensitive information from the audio emitted by computing devices and their peripherals. Attacks targeting keyboards are popular and well explored in the literature. However, similar attacks targeting other human-interface peripherals, such as computer mice, are under-explored. To this end, this paper considers security leakage via acoustic signals emanating from normal mouse usage. We first confirm the feasibility of such attacks with a proof-of-concept attack that classifies four mouse movements with 97% accuracy in a controlled environment. We then evolve the attack towards discerning twelve unique mouse movements, using a smartphone to record the experiment. Using Machine Learning (ML) techniques, the model is trained on an experiment with six participants to be generalizable, and it discerns among twelve movements with 94% accuracy. In addition, we experiment with an attack that detects a user closing a full-screen window on a laptop. Achieving 91% accuracy, this experiment highlights the exploitability of audio leakage from computer mouse movements in a realistic scenario.
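This listing ships no code, but the attack it describes follows a familiar audio-classification pipeline: record short clips of mouse sounds, convert them to spectrogram features, and train a supervised classifier. Below is a minimal sketch of that pipeline; the choice of librosa mel-spectrograms, an SVM classifier, and random noise standing in for real recordings are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch of an acoustic mouse-movement classifier (assumed
# pipeline, not the paper's implementation): log-mel features + SVM.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SR = 44_100          # assumed smartphone recording rate
CLIP_SECONDS = 1.0   # assumed: one mouse movement per clip
MOVES = ["up", "down", "left", "right"]  # the four proof-of-concept classes

def features(clip: np.ndarray, sr: int = SR) -> np.ndarray:
    """Log-mel spectrogram averaged over time -> fixed-size feature vector."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
    return librosa.power_to_db(mel).mean(axis=1)

# Placeholder dataset: random noise stands in for real labeled recordings.
rng = np.random.default_rng(0)
X = np.stack([features(rng.standard_normal(int(SR * CLIP_SECONDS)))
              for _ in range(200)])
y = rng.integers(0, len(MOVES), size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

With real labeled recordings in place of the placeholder data, this is the general shape of pipeline one would tune toward the accuracies reported above; as written, it only demonstrates the mechanics.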
Related papers
- DeePen: Penetration Testing for Audio Deepfake Detection [6.976070957821282]
Deepfakes pose significant security risks to individuals, organizations, and society at large. We introduce a systematic testing methodology, which we call DeePen. Our approach operates without prior knowledge of or access to the target deepfake detection models.
arXiv Detail & Related papers (2025-02-27T12:26:25Z)
- Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards [93.16294577018482]
Arena, the most popular benchmark of this type, ranks models by asking users to select the better response between two randomly selected models. We show that an attacker can alter the leaderboard (to promote their favorite model or demote competitors) at the cost of roughly a thousand votes. Our attack consists of two steps: first, we show how an attacker can determine which model was used to generate a given reply with more than 95% accuracy; then, the attacker can use this information to consistently vote against a target model.
arXiv Detail & Related papers (2025-01-13T17:12:38Z)
- Red Pill and Blue Pill: Controllable Website Fingerprinting Defense via Dynamic Backdoor Learning [93.44927301021688]
Website fingerprinting (WF) attacks covertly monitor user communications to identify the web pages they visit. Existing WF defenses attempt to reduce the attacker's accuracy by disrupting unique traffic patterns. We introduce Controllable Website Fingerprint Defense (CWFD), a novel defense perspective based on backdoor learning.
arXiv Detail & Related papers (2024-12-16T06:12:56Z)
- GAZEploit: Remote Keystroke Inference Attack by Gaze Estimation from Avatar Views in VR/MR Devices [8.206832482042682]
We unveil GAZEploit, a novel eye-tracking-based attack specifically designed to exploit this eye-tracking information by leveraging the common use of virtual avatars in VR applications.
Our research, involving 30 participants, achieved over 80% accuracy in keystroke inference.
Our study also identified over 15 top-rated apps in the Apple App Store as vulnerable to the GAZEploit attack, emphasizing the urgent need for bolstered security measures for this state-of-the-art VR/MR text entry method.
arXiv Detail & Related papers (2024-09-12T15:11:35Z)
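As the abstract describes it, GAZEploit ultimately reduces keystroke inference to mapping estimated gaze fixations onto a virtual keyboard. The sketch below shows that final mapping step as a nearest-key lookup; the layout coordinates and fixation values are invented for illustration, and the upstream gaze estimation from avatar views is omitted.

```python
# Hypothetical sketch of the gaze-to-key mapping step implied by the
# GAZEploit abstract: snap each estimated gaze fixation to the nearest
# key on a virtual keyboard. Layout coordinates are invented.
import math

# Toy layout: key -> (x, y) center in normalized screen coordinates.
KEY_CENTERS = {
    "q": (0.05, 0.2), "w": (0.15, 0.2), "e": (0.25, 0.2),
    "a": (0.08, 0.5), "s": (0.18, 0.5), "d": (0.28, 0.5),
}

def infer_key(gaze_xy: tuple[float, float]) -> str:
    """Return the key whose center lies closest to the gaze fixation."""
    return min(KEY_CENTERS, key=lambda k: math.dist(KEY_CENTERS[k], gaze_xy))

# Fixations would come from gaze estimation on the victim's avatar view.
fixations = [(0.16, 0.21), (0.07, 0.48)]
print("".join(infer_key(g) for g in fixations))  # -> "wa"
```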
- Acoustic Side Channel Attack on Keyboards Based on Typing Patterns [0.0]
Side-channel attacks on keyboards can bypass security measures in many systems that use keyboards as one of the input devices.
This paper proposes an applicable method that takes into account the user's typing pattern in a realistic environment.
Our method achieved an average success rate of 43% across all our case studies when considering real-world scenarios.
arXiv Detail & Related papers (2024-03-13T17:44:15Z)
- A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards [6.230751621285321]
This paper presents a state-of-the-art deep learning model that classifies laptop keystrokes using a smartphone-integrated microphone.
When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model.
We discuss a series of mitigation methods to protect users against these attacks.
arXiv Detail & Related papers (2023-08-02T10:51:36Z)
- NUANCE: Near Ultrasound Attack On Networked Communication Environments [0.0]
This study investigates a primary inaudible attack vector on Amazon Alexa voice services using near ultrasound trojans.
The research maps each attack vector to a tactic or technique from the MITRE ATT&CK matrix.
The experiment involved generating and surveying fifty near-ultrasonic audio samples to assess the attacks' effectiveness.
arXiv Detail & Related papers (2023-04-25T23:28:46Z)
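The summary above does not spell out NUANCE's signal construction, but attacks in this family typically amplitude-modulate a voice command onto a carrier near the upper edge of human hearing. The numpy sketch below illustrates that general idea only; the carrier frequency, modulation depth, and placeholder waveform are all assumptions, not the paper's parameters.

```python
# Rough illustration of a near-ultrasound carrier hiding a command:
# amplitude-modulate a (placeholder) voice waveform onto a ~20 kHz
# carrier at the edge of human hearing. All parameters are illustrative.
import numpy as np

SR = 96_000          # high sample rate, needed to represent ~20 kHz cleanly
CARRIER_HZ = 20_000  # near-ultrasonic carrier
DURATION_S = 1.0
DEPTH = 0.8          # modulation depth

t = np.arange(int(SR * DURATION_S)) / SR
voice = np.sin(2 * np.pi * 300 * t)            # stand-in for a recorded command
carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
modulated = (1 + DEPTH * voice) * carrier      # classic AM: (1 + m*x(t)) * c(t)
modulated /= np.abs(modulated).max()           # normalize before playback
```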
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- VenoMave: Targeted Poisoning Against Speech Recognition [30.448709704880518]
VENOMAVE is the first training-time poisoning attack against speech recognition.
We evaluate our attack on two datasets: TIDIGITS and Speech Commands.
arXiv Detail & Related papers (2020-10-21T00:30:08Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- BeCAPTCHA-Mouse: Synthetic Mouse Trajectories and Improved Bot Detection [78.11535724645702]
We present BeCAPTCHA-Mouse, a bot detector based on a neuromotor model of mouse dynamics.
BeCAPTCHA-Mouse is able to detect highly realistic bot trajectories with 93% accuracy on average using only one mouse trajectory.
arXiv Detail & Related papers (2020-05-02T17:40:49Z)
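A neuromotor detector such as BeCAPTCHA-Mouse builds on kinematic properties of mouse trajectories: human point-to-point movements tend to show smooth, bell-shaped velocity profiles that scripted bots rarely reproduce. The sketch below extracts a few such kinematic features from an (x, y, t) trajectory; this specific feature set is an illustrative guess, not the paper's neuromotor model.

```python
# Sketch of kinematic feature extraction from a mouse trajectory, the
# kind of signal a neuromotor bot detector could feed to a classifier.
# The feature choice is illustrative, not BeCAPTCHA-Mouse's exact model.
import numpy as np

def trajectory_features(xs, ys, ts):
    """Velocity/acceleration statistics of an (x, y, t) trajectory."""
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    dt = np.diff(ts)
    speed = np.hypot(np.diff(xs), np.diff(ys)) / dt   # speed per segment
    accel = np.diff(speed) / dt[1:]                   # acceleration
    heading = np.arctan2(np.diff(ys), np.diff(xs))    # direction per segment
    return {
        "mean_speed": speed.mean(),
        "speed_std": speed.std(),          # humans show bell-shaped profiles
        "mean_abs_accel": np.abs(accel).mean(),
        "direction_changes": int((np.abs(np.diff(heading)) > 0.5).sum()),
    }

# A perfectly straight, constant-speed trajectory (bot-like) yields zero
# acceleration and no direction changes; human traces would not.
t = np.linspace(0.0, 1.0, 50)
print(trajectory_features(t * 300, t * 100, t))
```

Fed into any standard classifier, features like these separate constant-velocity scripted motion from human jitter; the example trajectory above is deliberately bot-like.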