Non-intrusive and Unconstrained Keystroke Inference in VR Platforms via Infrared Side Channel
- URL: http://arxiv.org/abs/2412.14815v1
- Date: Thu, 19 Dec 2024 13:09:46 GMT
- Title: Non-intrusive and Unconstrained Keystroke Inference in VR Platforms via Infrared Side Channel
- Authors: Tao Ni, Yuefeng Du, Qingchuan Zhao, Cong Wang
- Abstract summary: We present a new side-channel leakage in the constellation tracking system of mainstream VR platforms.
The infrared (IR) signals emitted from the VR controllers for controller-headset interactions can be maliciously exploited to reconstruct unconstrained input keystrokes.
We propose a novel keystroke inference attack named VRecKey to demonstrate the feasibility and practicality of this infrared side channel.
- Score: 7.493632378968814
- Abstract: Virtual Reality (VR) technologies are increasingly employed in applications across many domains, so it is essential to ensure the security of interactions between users and VR devices. In this paper, we disclose a new side-channel leakage in the constellation tracking system of mainstream VR platforms: the infrared (IR) signals emitted from the VR controllers for controller-headset interactions can be maliciously exploited to non-intrusively reconstruct unconstrained keystrokes typed on the virtual keyboard. We propose a novel keystroke inference attack named VRecKey to demonstrate the feasibility and practicality of this infrared side channel. Specifically, VRecKey leverages a customized 2D IR sensor array to intercept ambient IR signals emitted from VR controllers and subsequently infers (i) character-level key presses on the virtual keyboard and (ii) word-level keystrokes along with their typing trajectories. We extensively evaluate the effectiveness of VRecKey with two commercial VR devices; the results indicate that it achieves over 94.2% and 90.5% top-3 accuracy in inferring character-level and word-level keystrokes of varying lengths, respectively. In addition, empirical results show that VRecKey is resilient to several practical influencing factors and remains effective in various real-world scenarios, providing a complementary and orthogonal attack surface for the exploration of keystroke inference attacks in VR platforms.
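To make the attack concrete, the following is a minimal, hypothetical sketch of the character-level inference step the abstract describes: locate the dominant IR intensity peak on a 2D sensor array and rank virtual keys by their distance to it. The 8x16 array size, the key layout, and every identifier here are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of character-level keystroke inference from one frame of
# a 2D IR sensor array. The array size and keyboard geometry are assumptions.
import numpy as np

# Hypothetical key centers in sensor-array coordinates (row, col).
KEY_CENTERS = {
    "Q": (1.0, 1.0), "W": (1.0, 2.5), "E": (1.0, 4.0), "R": (1.0, 5.5),
    "A": (3.0, 1.5), "S": (3.0, 3.0), "D": (3.0, 4.5), "F": (3.0, 6.0),
}

def infer_key(ir_frame: np.ndarray, top_k: int = 3) -> list[str]:
    """Return the top-k candidate keys for one IR intensity frame."""
    # Peak of the intercepted IR radiation pattern ~ controller position.
    peak = np.unravel_index(np.argmax(ir_frame), ir_frame.shape)
    # Rank keys by Euclidean distance from the detected peak.
    dists = {k: np.hypot(peak[0] - r, peak[1] - c)
             for k, (r, c) in KEY_CENTERS.items()}
    return sorted(dists, key=dists.get)[:top_k]

# Toy frame from an assumed 8x16 IR photodiode array, with a hot spot near "S".
frame = np.random.default_rng(0).normal(0.1, 0.02, (8, 16))
frame[3, 3] = 1.0
print(infer_key(frame))  # -> ['S', 'A', 'D']
```

Word-level inference would repeat this per frame and match the resulting sequence of peaks against dictionary-word trajectories; the sketch above covers only the single-keystroke case.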
Related papers
- Predicting 3D Motion from 2D Video for Behavior-Based VR Biometrics [7.609875877250929]
We propose an approach that uses 2D body joints acquired from the right side of participants with an external 2D camera.
Our method uses the 2D data of body joints that are not tracked by the VR device to predict past and future 3D tracks of the right controller.
arXiv Detail & Related papers (2025-02-05T02:19:23Z)
- mmSpyVR: Exploiting mmWave Radar for Penetrating Obstacles to Uncover Privacy Vulnerability of Virtual Reality [20.72439781800557]
This paper reveals a novel vulnerability in VR systems that allows attackers to capture private VR content through obstacles.
We propose mmSpyVR, a novel attack on VR users' privacy via mmWave radar.
arXiv Detail & Related papers (2024-11-15T03:22:44Z)
- GAZEploit: Remote Keystroke Inference Attack by Gaze Estimation from Avatar Views in VR/MR Devices [8.206832482042682]
We unveil GAZEploit, a novel eye-tracking-based attack specifically designed to exploit this eye-tracking information by leveraging the common use of virtual appearances in VR applications.
Our research, involving 30 participants, achieved over 80% accuracy in keystroke inference.
Our study also identified over 15 top-rated apps in the Apple Store as vulnerable to the GAZEploit attack, emphasizing the urgent need for bolstered security measures for this state-of-the-art VR/MR text entry method.
arXiv Detail & Related papers (2024-09-12T15:11:35Z)
- Deep Motion Masking for Secure, Usable, and Scalable Real-Time Anonymization of Virtual Reality Motion Data [49.68609500290361]
Recent studies have demonstrated that the motion tracking "telemetry" data used by nearly all VR applications is as uniquely identifiable as a fingerprint scan.
We present in this paper a state-of-the-art VR identification model that can convincingly bypass known defensive countermeasures.
arXiv Detail & Related papers (2023-11-09T01:34:22Z)
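The entry above rests on motion telemetry being as identifiable as a fingerprint. As a toy illustration of why that can hold (this is not the paper's learned identification model), even crude summary statistics of a head trace can separate users via nearest-centroid matching; every trace and number below is fabricated for the sketch.

```python
# Toy motion-based identification: summary statistics of simulated head traces
# act as a behavioral fingerprint. Users, traces, and scales are fabricated.
import numpy as np

def featurize(trace: np.ndarray) -> np.ndarray:
    """trace: (T, 3) head positions; returns simple per-user statistics."""
    vel = np.diff(trace, axis=0)
    return np.concatenate([trace.mean(0), trace.std(0),
                           np.linalg.norm(vel, axis=1).mean(keepdims=True)])

rng = np.random.default_rng(1)
# Fake enrollment traces: two "users" with different heights and motion scales.
users = {"alice": (1.60, 0.02), "bob": (1.82, 0.05)}
centroids = {u: featurize(rng.normal([0, h, 0], s, (200, 3)))
             for u, (h, s) in users.items()}

probe = featurize(rng.normal([0, 1.81, 0], 0.05, (200, 3)))  # bob-like trace
guess = min(centroids, key=lambda u: np.linalg.norm(centroids[u] - probe))
print(guess)  # -> 'bob'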
- Typing on Any Surface: A Deep Learning-based Method for Real-Time Keystroke Detection in Augmented Reality [4.857109990499532]
Mid-air keyboard interfaces, wireless keyboards, and voice input either suffer from poor ergonomic design or limited accuracy, or are simply embarrassing to use in public.
This paper proposes and validates a deep-learning-based approach that enables AR applications to accurately predict keystrokes from the user-perspective RGB video stream.
A two-stage model, combining an off-the-shelf hand landmark extractor with a novel adaptive Convolutional Recurrent Neural Network (C-RNN), was trained.
arXiv Detail & Related papers (2023-08-31T23:58:25Z)
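A hedged sketch of the two-stage structure described above: an off-the-shelf landmark extractor (e.g., MediaPipe, omitted here) supplies per-frame 21-point hand landmarks, and a small convolutional-recurrent network classifies the keystroke. The layer sizes and the 27-class layout are assumptions, not the paper's configuration.

```python
# Sketch of a C-RNN keystroke classifier over hand-landmark sequences.
# Landmark extraction is assumed to happen upstream (e.g., MediaPipe).
import torch
import torch.nn as nn

class CRNNKeyDetector(nn.Module):
    def __init__(self, n_landmarks=21, n_classes=27):  # 26 keys + "no press"
        super().__init__()
        # 1D convolution over time, with flattened (x, y) landmarks as channels.
        self.conv = nn.Sequential(
            nn.Conv1d(n_landmarks * 2, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(64, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):            # x: (batch, time, 21, 2)
        b, t, _, _ = x.shape
        h = self.conv(x.reshape(b, t, -1).transpose(1, 2))  # (b, 64, t)
        out, _ = self.rnn(h.transpose(1, 2))                # (b, t, 128)
        return self.head(out[:, -1])                        # logits per class

logits = CRNNKeyDetector()(torch.randn(4, 30, 21, 2))
print(logits.shape)  # torch.Size([4, 27])
```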
- WiserVR: Semantic Communication Enabled Wireless Virtual Reality Delivery [12.158124978097982]
We propose a novel framework, namely WIreless SEmantic deliveRy for VR (WiserVR), for delivering consecutive 360° video frames to VR users.
Multiple deep-learning-based modules are well-devised for the transceiver in WiserVR to realize high-performance feature extraction and semantic recovery.
arXiv Detail & Related papers (2022-11-02T16:22:41Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings pushing research toward more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
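As an illustration of the decoding task in the force-aware EMG interface above (not the authors' architecture), a small network can regress a window of multi-channel EMG to per-finger forces. The channel count, window length, and layers are assumptions for the sketch.

```python
# Sketch: regress a window of multi-channel EMG to five per-finger forces.
# All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_FINGERS = 8, 64, 5

decoder = nn.Sequential(
    nn.Flatten(),                        # (batch, 8, 64) -> (batch, 512)
    nn.Linear(N_CHANNELS * WINDOW, 128),
    nn.ReLU(),
    nn.Linear(128, N_FINGERS),           # one force value per finger
)

emg = torch.randn(16, N_CHANNELS, WINDOW)     # a batch of EMG windows
forces = decoder(emg)
loss = nn.functional.mse_loss(forces, torch.rand(16, N_FINGERS))
print(forces.shape, float(loss))              # torch.Size([16, 5]) ...
```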
- A Wireless-Vision Dataset for Privacy Preserving Human Activity Recognition [53.41825941088989]
A new WiFi-based and video-based neural network (WiNN) is proposed to improve the robustness of activity recognition.
Our results show that the WiVi dataset satisfies the primary demand, and all three branches of the proposed pipeline maintain more than 80% activity recognition accuracy.
arXiv Detail & Related papers (2022-05-24T10:49:11Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
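The Variational Information Bottleneck (VIB) regularizer mentioned in the entry above is a standard technique; a compact sketch follows: features are compressed into a stochastic code z, and a KL penalty toward N(0, I) is added to the task loss. The dimensions and the beta weight are illustrative, not the paper's settings.

```python
# Minimal VIB head: reparameterized stochastic code plus analytic KL penalty.
import torch
import torch.nn as nn

class VIBHead(nn.Module):
    def __init__(self, feat_dim=256, z_dim=32, n_actions=10):
        super().__init__()
        self.to_stats = nn.Linear(feat_dim, 2 * z_dim)  # predicts mu, logvar
        self.policy = nn.Linear(z_dim, n_actions)

    def forward(self, feats):
        mu, logvar = self.to_stats(feats).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # KL( N(mu, sigma^2) || N(0, I) ), summed over z, averaged over batch.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return self.policy(z), kl

head = VIBHead()
logits, kl = head(torch.randn(8, 256))
task_loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
loss = task_loss + 1e-3 * kl    # beta controls the compression pressure
print(float(loss))
```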
- Wireless Edge-Empowered Metaverse: A Learning-Based Incentive Mechanism for Virtual Reality [102.4151387131726]
We propose a learning-based Incentive Mechanism framework for VR services in the Metaverse.
First, we propose the quality of perception as the metric for VR users in the virtual world.
Second, for quick trading of VR services between VR users (i.e., buyers) and VR SPs (i.e., sellers), we design a double Dutch auction mechanism.
Third, for auction communication reduction, we design a deep reinforcement learning-based auctioneer to accelerate this auction process.
arXiv Detail & Related papers (2021-11-07T13:02:52Z)
- Gaze-Sensing LEDs for Head Mounted Displays [73.88424800314634]
We exploit the sensing capability of LEDs to create a low-power gaze tracker for virtual reality (VR) applications.
We show that our gaze estimation method does not require complex dimension reduction techniques.
arXiv Detail & Related papers (2020-03-18T23:03:06Z)
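In the spirit of the last entry, which argues that gaze can be estimated from LED sensor readings without complex dimension reduction, here is a toy closed-form ridge regression from simulated LED intensities to gaze coordinates. The 6-LED ring and the linear response model are assumptions for this sketch.

```python
# Toy gaze regression: LEDs double as photosensors, and a linear map from
# their readings to gaze suffices. Response model and sizes are fabricated.
import numpy as np

rng = np.random.default_rng(2)
n_leds, n_samples = 6, 500

# Simulated calibration: gaze (x, y) drives LED readings through an unknown
# linear response plus noise.
gaze = rng.uniform(-1, 1, (n_samples, 2))
response = rng.normal(size=(2, n_leds))
readings = gaze @ response + rng.normal(0, 0.05, (n_samples, n_leds))

# Ridge regression from LED readings back to gaze, in closed form.
lam = 1e-3
W = np.linalg.solve(readings.T @ readings + lam * np.eye(n_leds),
                    readings.T @ gaze)          # (n_leds, 2)

test_reading = np.array([0.3, -0.7]) @ response  # gaze at (0.3, -0.7)
print(test_reading @ W)                          # approx [ 0.3 -0.7]
```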
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.