Illusion Worlds: Deceptive UI Attacks in Social VR
- URL: http://arxiv.org/abs/2504.09199v1
- Date: Sat, 12 Apr 2025 12:55:13 GMT
- Title: Illusion Worlds: Deceptive UI Attacks in Social VR
- Authors: Junhee Lee, Hwanjo Heo, Seungwon Woo, Minseok Kim, Jongseop Kim, Jinwoo Kim
- Abstract summary: This paper presents four novel UI attacks that covertly manipulate users into performing harmful actions through deceptive virtual content. We propose MetaScanner, a proactive countermeasure that rapidly analyzes objects and scripts in virtual worlds, detecting suspicious elements within seconds.
- Score: 7.701964792074304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social Virtual Reality (VR) platforms have surged in popularity, yet their security risks remain underexplored. This paper presents four novel UI attacks that covertly manipulate users into performing harmful actions through deceptive virtual content. Implemented on VRChat and validated in an IRB-approved study with 30 participants, these attacks demonstrate how deceptive elements can mislead users into malicious actions without their awareness. To address these vulnerabilities, we propose MetaScanner, a proactive countermeasure that rapidly analyzes objects and scripts in virtual worlds, detecting suspicious elements within seconds.
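To make the countermeasure concrete, here is a minimal sketch of a MetaScanner-style scanning pass, assuming a world can be exported as a list of object records with names, opacity, interactability, and attached script calls. The record schema, the heuristics, and the script API names (e.g., OpenURL) are illustrative assumptions, not the paper's actual detection rules.

```python
# Hypothetical sketch of a MetaScanner-style pass: walk a world's object
# records and flag elements matching simple deceptive-UI heuristics.
# Schema and heuristics are illustrative assumptions, not the paper's rules.
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    name: str
    opacity: float              # 0.0 = fully invisible
    is_interactable: bool       # has a click/trigger handler
    scripts: list = field(default_factory=list)

SUSPICIOUS_CALLS = {"OpenURL", "TransferItem", "SendInvite"}  # assumed API names

def scan(objects):
    findings = []
    for obj in objects:
        # Invisible but clickable objects can overlay legitimate UI.
        if obj.is_interactable and obj.opacity < 0.05:
            findings.append((obj.name, "invisible interactable overlay"))
        # Scripts that reach sensitive actions from a UI handler.
        for call in obj.scripts:
            if call in SUSPICIOUS_CALLS:
                findings.append((obj.name, f"script invokes {call}"))
    return findings

world = [
    WorldObject("ExitButton", opacity=1.0, is_interactable=True),
    WorldObject("GhostPanel", opacity=0.0, is_interactable=True,
                scripts=["OpenURL"]),
]
for name, reason in scan(world):
    print(f"[suspicious] {name}: {reason}")
```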
Related papers
- Securing Virtual Reality Experiences: Unveiling and Tackling Cybersickness Attacks with Explainable AI [2.076342899890871]
We present a new type of VR attack, the cybersickness attack, which suppresses the triggering of cybersickness mitigation. We propose a novel explainable artificial intelligence (XAI)-guided framework for detecting such attacks.
arXiv Detail & Related papers (2025-03-17T17:49:51Z) - GAZEploit: Remote Keystroke Inference Attack by Gaze Estimation from Avatar Views in VR/MR Devices [8.206832482042682]
We unveil GAZEploit, a novel eye-tracking-based attack designed to exploit eye-tracking information by leveraging the common use of virtual appearances in VR applications.
Our research, involving 30 participants, achieved over 80% accuracy in keystroke inference.
Our study also identified over 15 top-rated apps in the Apple Store as vulnerable to the GAZEploit attack, emphasizing the urgent need for bolstered security measures for this state-of-the-art VR/MR text entry method.
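A minimal sketch of the final inference step follows: fixation points recovered from gaze estimation are mapped to the nearest virtual key. The keyboard layout coordinates and fixation samples are invented for illustration; GAZEploit's full pipeline first estimates gaze from the avatar's rendered eyes, which is omitted here.

```python
# Illustrative sketch: map estimated gaze fixations on a virtual keyboard
# plane to the nearest key. Layout and fixation data are assumptions.
import math

# Hypothetical key centers on a flat keyboard plane (x, y in meters).
KEYS = {ch: (0.02 * i, 0.0) for i, ch in enumerate("qwertyuiop")}

def nearest_key(x, y):
    return min(KEYS, key=lambda k: math.dist(KEYS[k], (x, y)))

# Fixations recovered from gaze estimation (synthetic for this sketch).
fixations = [(0.001, 0.002), (0.081, -0.003), (0.160, 0.004)]
print("".join(nearest_key(x, y) for x, y in fixations))  # -> "qto"
```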
arXiv Detail & Related papers (2024-09-12T15:11:35Z) - Remote Keylogging Attacks in Multi-user VR Applications [19.79250382329298]
This study highlights a significant security threat in multi-user VR applications.
We propose a remote attack that utilizes the avatar rendering information collected from an adversary's game clients to extract user-typed secrets.
We conducted a user study to verify the attack's effectiveness, in which our attack successfully inferred 97.62% of the keystrokes.
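To show the flavor of avatar-based keylogging, here is a toy sketch that detects "press" events from a fingertip's vertical dip in rendered hand-joint positions and maps each press to the nearest key. The joint trace, threshold, and layout are illustrative assumptions, not the paper's trained pipeline.

```python
# Minimal sketch: infer keystrokes from rendered fingertip positions.
# Data, thresholds, and layout are illustrative assumptions.
import math

KEYS = {"a": (0.00, 0.00), "s": (0.02, 0.00), "d": (0.04, 0.00)}
PRESS_DEPTH = -0.005  # fingertip z below the keyboard plane counts as a press

def extract_keystrokes(tip_positions):
    """tip_positions: list of (x, y, z) fingertip samples over time."""
    keys, pressed = [], False
    for x, y, z in tip_positions:
        if z < PRESS_DEPTH and not pressed:          # falling edge: new press
            pressed = True
            keys.append(min(KEYS, key=lambda k: math.dist(KEYS[k], (x, y))))
        elif z >= PRESS_DEPTH:
            pressed = False
    return keys

trace = [(0.0, 0.0, 0.01), (0.0, 0.0, -0.01),        # press near "a"
         (0.02, 0.0, 0.01), (0.021, 0.0, -0.008)]    # press near "s"
print(extract_keystrokes(trace))                      # ['a', 's']
```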
arXiv Detail & Related papers (2024-05-22T22:10:40Z) - Inception Attacks: Immersive Hijacking in Virtual Reality Systems [24.280072806797243]
We introduce the immersive hijacking attack, where a remote attacker takes control of a user's interaction with their VR system.
All of the user's interactions with apps, services and other users can be recorded and modified without their knowledge.
We present our implementation of the immersive hijacking attack on Meta Quest headsets and conduct IRB-approved user studies.
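Conceptually, the hijack inserts a layer between the user and their apps; the sketch below mimics that pattern with a wrapper that records and optionally rewrites events before forwarding them. The event and app interfaces are invented for illustration; the actual attack operates at the VR runtime level.

```python
# Conceptual sketch of a hijacking layer: record and rewrite events in
# transit. Interfaces are invented; the real attack lives in the VR runtime.
class HijackLayer:
    def __init__(self, real_handler):
        self.real_handler = real_handler
        self.log = []                       # covert recording of all activity

    def handle(self, event):
        self.log.append(event)
        # Example manipulation: redirect a payment to the attacker.
        if event.get("type") == "payment":
            event = {**event, "recipient": "attacker"}
        return self.real_handler(event)

def real_app(event):
    return f"app processed {event}"

layer = HijackLayer(real_app)
print(layer.handle({"type": "payment", "recipient": "friend", "amount": 5}))
print("recorded:", layer.log)
```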
arXiv Detail & Related papers (2024-03-08T23:22:16Z) - Deep Motion Masking for Secure, Usable, and Scalable Real-Time Anonymization of Virtual Reality Motion Data [49.68609500290361]
Recent studies have demonstrated that the motion tracking "telemetry" data used by nearly all VR applications is as uniquely identifiable as a fingerprint scan.
We present in this paper a state-of-the-art VR identification model that can convincingly bypass known defensive countermeasures.
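A toy illustration of why motion telemetry is so identifying: even simple summary statistics of head and hand streams (here, mean head height and arm reach) act like a fingerprint. The paper's model is a deep network; this nearest-centroid matcher only sketches the underlying signal.

```python
# Toy sketch: anthropometric statistics from telemetry identify a user via
# nearest-centroid matching. The paper's actual model is a deep network.
import numpy as np

rng = np.random.default_rng(1)

def features(head_y, hand_x):
    return np.array([np.mean(head_y), np.max(hand_x) - np.min(hand_x)])

# Enrollment: per-user feature centroids from one session each.
users = {"alice": (1.70, 0.72), "bob": (1.82, 0.80)}
enrolled = {u: features(rng.normal(h, 0.01, 500), rng.uniform(-r/2, r/2, 500))
            for u, (h, r) in users.items()}

# A later, unlabeled session from alice.
probe = features(rng.normal(1.70, 0.01, 500), rng.uniform(-0.36, 0.36, 500))
match = min(enrolled, key=lambda u: np.linalg.norm(enrolled[u] - probe))
print("identified as:", match)
```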
arXiv Detail & Related papers (2023-11-09T01:34:22Z) - Can Virtual Reality Protect Users from Keystroke Inference Attacks? [23.587497604556823]
We show that despite assumptions of enhanced privacy, VR is unable to shield its users from side-channel attacks that steal private information.
This vulnerability arises from VR's greatest strength, its immersive and interactive nature.
arXiv Detail & Related papers (2023-10-24T21:19:38Z) - Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat outperforms conventional attacks in both cross-detector transferability and robustness to defenses.
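The black-box setting suggests a simple search strategy, sketched below: query the detector with renders at candidate yaw angles and keep the angle that most lowers the "fake" score. Both the renderer and the detector here are stubs; AdvHeat's actual 3D face synthesis is far more involved.

```python
# Hedged sketch of a black-box head-turn search. Renderer and detector are
# stubs; AdvHeat's real 3D-aware synthesis is far more involved.
import random

def render_at_yaw(face, yaw_deg):
    # Stub for a 3D-aware face renderer (assumed component).
    return {"face": face, "yaw": yaw_deg}

def detector_fake_score(image):
    # Stub deepfake detector: confidence drops as the head turns away.
    return max(0.0, 0.9 - 0.02 * abs(image["yaw"]))

random.seed(0)
best_yaw, best_score = 0.0, detector_fake_score(render_at_yaw("fake.png", 0.0))
for _ in range(50):                       # black-box random search over yaw
    yaw = random.uniform(-45, 45)
    score = detector_fake_score(render_at_yaw("fake.png", yaw))
    if score < best_score:
        best_yaw, best_score = yaw, score
print(f"best yaw {best_yaw:.1f} deg, fake score {best_score:.2f}")
```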
arXiv Detail & Related papers (2023-09-03T07:01:34Z) - Unique Identification of 50,000+ Virtual Reality Users from Head & Hand Motion Data [58.27542320038834]
We show that a large number of real VR users can be uniquely and reliably identified across multiple sessions using just their head and hand motion.
After training a classification model on 5 minutes of data per person, a user can be uniquely identified amongst the entire pool of 50,000+ with 94.33% accuracy from 100 seconds of motion.
This work is the first to truly demonstrate the extent to which biomechanics may serve as a unique identifier in VR, on par with widely used biometrics such as facial or fingerprint recognition.
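The "100 seconds of motion" figure implies a session-level decision, roughly sketched below: classify many short motion windows, then majority-vote across them. The per-window classifier is a stub returning noisy labels; the paper trains a real model on 5 minutes of data per user.

```python
# Sketch of window-level classification plus majority vote over ~100 seconds.
# The per-window classifier is a noisy stub, not the paper's trained model.
from collections import Counter
import random

random.seed(42)
TRUE_USER = 1337

def classify_window(window_id):
    # Stub: correct 60% of the time, random among 50,000 users otherwise.
    return TRUE_USER if random.random() < 0.6 else random.randrange(50_000)

# 100 seconds of motion at one window per second.
votes = Counter(classify_window(i) for i in range(100))
predicted, count = votes.most_common(1)[0]
print(f"predicted user {predicted} with {count}/100 window votes")
```

Even a weak per-window classifier becomes near-certain after voting, since wrong guesses scatter across 50,000 identities while correct ones pile onto one.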
arXiv Detail & Related papers (2023-02-17T15:05:18Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
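As a hedged sketch of what a six-perspective aggregate could look like, the snippet below combines per-perspective scores into one weighted metric. The perspective names and weights are placeholders, not hiPAA's actual definition; consult the paper for the real formulation.

```python
# Hedged sketch of a hiPAA-style aggregate over six perspectives.
# Names and weights below are placeholders, not the paper's definition.
PERSPECTIVES = {                 # hypothetical weights summing to 1.0
    "effectiveness": 0.30, "robustness": 0.20, "stealthiness": 0.15,
    "transferability": 0.15, "cost": 0.10, "practicality": 0.10,
}

def hipaa_style_score(scores):
    """scores: per-perspective values in [0, 1]."""
    return sum(PERSPECTIVES[p] * scores[p] for p in PERSPECTIVES)

attack = {"effectiveness": 0.9, "robustness": 0.7, "stealthiness": 0.4,
          "transferability": 0.6, "cost": 0.8, "practicality": 0.5}
print(f"aggregate score: {hipaa_style_score(attack):.2f}")
```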
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
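The information-theoretic idea can be illustrated directly: an attack stays "illusory" only if the victim's observation distribution under attack remains within epsilon KL divergence of the benign one. The histograms below are toys; the paper's formulation is over sequential decision processes.

```python
# Illustration: accept an attack only if KL(attacked || benign) <= epsilon.
# Toy histograms; the paper works over sequential decision processes.
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

benign  = [0.50, 0.30, 0.20]      # benign observation distribution
subtle  = [0.48, 0.32, 0.20]      # lightly perturbed observations
blatant = [0.10, 0.10, 0.80]      # heavy-handed perturbation

EPSILON = 0.01
for name, attacked in [("subtle", subtle), ("blatant", blatant)]:
    d = kl(attacked, benign)
    status = "passes" if d <= EPSILON else "detectable"
    print(f"{name}: KL={d:.4f} -> {status}")
```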
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
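A minimal sketch of the patch-evaluation protocol: paste a patch region into an image and measure how much segmentation accuracy drops. The "model" here is a stub that mislabels bright pixels; real evaluations use trained segmentation networks and, for the physical case, printed patches placed in the scene.

```python
# Minimal sketch of patch evaluation: paste a patch, measure accuracy drop.
# The segmenter is a stub; real studies use trained SS networks.
import numpy as np

def segment(img):
    # Stub segmenter: label 0 ("road") where pixel < 0.5, else 1.
    return (img >= 0.5).astype(int)

clean = np.full((32, 32), 0.2)           # all "road"
truth = segment(clean)

patched = clean.copy()
patched[8:16, 8:16] = 0.9                # bright adversarial patch region

acc = (segment(patched) == truth).mean()
print(f"pixel accuracy under patch: {acc:.3f}")   # drops from 1.000
```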
arXiv Detail & Related papers (2021-08-13T11:49:09Z)