PrivateXR: Defending Privacy Attacks in Extended Reality Through Explainable AI-Guided Differential Privacy
- URL: http://arxiv.org/abs/2512.16851v1
- Date: Thu, 18 Dec 2025 18:23:06 GMT
- Title: PrivateXR: Defending Privacy Attacks in Extended Reality Through Explainable AI-Guided Differential Privacy
- Authors: Ripan Kumar Kundu, Istiak Ahmed, Khaza Anuarul Hoque
- Abstract summary: We propose a novel framework combining explainable AI (XAI) and DP-enabled privacy-preserving mechanisms to defend against privacy attacks. Specifically, we leverage post-hoc explanations to identify the most influential features in AI XR models and selectively apply DP to those features during inference. Our results show that the proposed method reduces MIA and RDA success rates by up to 43% and 39%, respectively, for cybersickness tasks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The convergence of artificial intelligence (AI) and extended reality (XR) technologies (AI XR) promises innovative applications across many domains. However, the sensitive nature of data (e.g., eye-tracking) used in these systems raises significant privacy concerns, as adversaries can exploit these data and models to infer and leak personal information through membership inference attacks (MIA) and re-identification attacks (RDA) with high success rates. Researchers have proposed various techniques to mitigate such privacy attacks, including differential privacy (DP). However, AI XR datasets often contain numerous features, and applying DP uniformly can introduce unnecessary noise to less relevant features, degrade model accuracy, and increase inference time, limiting real-time XR deployment. Motivated by this, we propose a novel framework combining explainable AI (XAI) and DP-enabled privacy-preserving mechanisms to defend against privacy attacks. Specifically, we leverage post-hoc explanations to identify the most influential features in AI XR models and selectively apply DP to those features during inference. We evaluate our XAI-guided DP approach on three state-of-the-art AI XR models and three datasets: cybersickness, emotion, and activity classification. Our results show that the proposed method reduces MIA and RDA success rates by up to 43% and 39%, respectively, for cybersickness tasks while preserving model utility with up to 97% accuracy using Transformer models. Furthermore, it improves inference time by up to ~2x compared to traditional DP approaches. To demonstrate practicality, we deploy the XAI-guided DP AI XR models on an HTC VIVE Pro headset and develop a user interface (UI), namely PrivateXR, that allows users to adjust privacy levels (e.g., low, medium, high) while receiving real-time task predictions, protecting user privacy during XR gameplay.
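To make the selective mechanism concrete, below is a minimal Python sketch of XAI-guided noising at inference time. It assumes per-feature attribution scores (e.g., mean |SHAP| values) are precomputed by a post-hoc explainer; the function names, the top-k selection policy, and the Laplace calibration are illustrative, not the paper's exact implementation.

```python
import numpy as np

def xai_guided_dp(features, importance, epsilon, sensitivity=1.0, top_k=3, rng=None):
    """Perturb only the top-k most influential features with Laplace noise.

    features:   (n_samples, n_features) batch seen at inference time
    importance: (n_features,) attribution scores from a post-hoc explainer
                (e.g., mean |SHAP| values); higher means more influential
    epsilon:    per-feature privacy budget for the perturbed features
    """
    rng = rng or np.random.default_rng(0)
    noisy = features.astype(float).copy()
    top = np.argsort(importance)[-top_k:]   # k most influential features
    scale = sensitivity / epsilon           # Laplace mechanism: b = Δf / ε
    noisy[:, top] += rng.laplace(0.0, scale, size=(features.shape[0], top_k))
    return noisy

# Example: 8 hypothetical eye-tracking features; noise confined to the top 3.
rng = np.random.default_rng(42)
X = rng.normal(size=(4, 8))
shap_importance = rng.random(8)  # stand-in for explainer output
X_private = xai_guided_dp(X, shap_importance, epsilon=1.0, top_k=3, rng=rng)
```

Leaving the remaining features untouched is what preserves utility and avoids the inference-time overhead of noising every dimension.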
Related papers
- Adversary-Aware Private Inference over Wireless Channels [51.93574339176914]
AI-based sensing at wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications. As sensitive personal data can be reconstructed by an adversary, the extracted features must be transformed to reduce the risk of privacy violations. We propose a novel framework for privacy-preserving AI-based sensing, where devices apply transformations to extracted features before transmitting them to a model server.
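As a rough illustration of such a pre-transmission transformation, the sketch below clips and noises an on-device feature embedding before it leaves the device; the clip-and-noise scheme, clipping norm, and noise scale are assumptions for illustration, not the paper's specific design.

```python
import numpy as np

def privatize_features(z, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip, then perturb, an extracted feature vector before transmission.

    z: (n_features,) embedding produced by the on-device encoder.
    Clipping bounds each vector's norm (and hence the sensitivity);
    Gaussian noise then masks the embedding so a server-side adversary
    cannot faithfully reconstruct the raw sensor data.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(z)
    z_clipped = z * min(1.0, clip_norm / max(norm, 1e-12))
    return z_clipped + rng.normal(0.0, sigma, size=z.shape)
```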
arXiv Detail & Related papers (2025-10-23T13:02:14Z)
- ALPINE: A Lightweight and Adaptive Privacy-Decision Agent Framework for Dynamic Edge Crowdsensing [34.752121524751466]
ALPINE is a lightweight, adaptive framework that empowers terminal devices to adjust differential privacy levels in real time. Based on environmental risk assessments, we design a reward function that balances privacy gains, data utility, and energy cost. Both the collaborative risk model and the pretrained TD3-based agent are designed for low-overhead deployment.
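A hedged sketch of what such a reward could look like, assuming all three signals are normalized to [0, 1]; the linear form and the weights are placeholders rather than the paper's tuned design. A TD3-style agent would choose the next differential-privacy level to maximize this quantity.

```python
def adaptive_privacy_reward(privacy_gain, utility, energy_cost,
                            w_privacy=1.0, w_utility=1.0, w_energy=0.5):
    """Scalar reward trading off privacy gain, data utility, and energy.

    All inputs are assumed normalized to [0, 1]; the weights are
    illustrative. Higher privacy and utility are rewarded, while
    energy expenditure is penalized.
    """
    return w_privacy * privacy_gain + w_utility * utility - w_energy * energy_cost
```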
arXiv Detail & Related papers (2025-10-20T05:03:25Z)
- Machine Learning with Privacy for Protected Attributes [56.44253915927481]
We refine the definition of differential privacy (DP) to create a more general and flexible framework that we call feature differential privacy (FDP). Our definition is simulation-based and allows for both addition/removal and replacement variants of privacy, and can handle arbitrary separation of protected and non-protected features. We apply our framework to various machine learning tasks and show that it can significantly improve the utility of DP-trained models when public features are available.
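FDP itself is a simulation-based definition rather than an input-perturbation mechanism, but the intuition of spending budget only on protected features can be sketched crudely as below; the column split and Laplace calibration are hypothetical.

```python
import numpy as np

def perturb_protected_columns(X, protected_cols, epsilon, sensitivity=1.0, rng=None):
    """Noise only the protected columns, leaving public features exact.

    When some columns need no protection, the privacy budget is spent
    only where it matters, so a model trained on (noisy protected +
    clean public) features retains more utility. Illustrative only.
    """
    rng = rng or np.random.default_rng(0)
    X_priv = X.astype(float).copy()
    scale = sensitivity / epsilon
    X_priv[:, protected_cols] += rng.laplace(
        0.0, scale, size=(X.shape[0], len(protected_cols)))
    return X_priv
```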
arXiv Detail & Related papers (2025-06-24T17:53:28Z)
- AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents [66.29263282311258]
We introduce a new benchmark, AgentDAM, that measures whether AI web-navigation agents follow the privacy principle of "data minimization". Our benchmark simulates realistic web interaction scenarios end-to-end and is adaptable to all existing web navigation agents.
arXiv Detail & Related papers (2025-03-12T19:30:31Z)
- Privacy Constrained Fairness Estimation for Decision Trees [2.9906966931843093]
Measuring the fairness of any AI model requires the sensitive attributes of the individuals in the dataset.
We propose a novel method, dubbed Privacy-Aware Fairness Estimation of Rules (PAFER).
We show that, using the Laplacian mechanism, the method can estimate statistical parity (SP) with low error while guaranteeing the privacy of the individuals in the dataset with high certainty.
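A minimal sketch of Laplace-based statistical-parity estimation, assuming binary predictions and a binary sensitive attribute; splitting the budget evenly across the four noisy counts is a simple sequential-composition choice for illustration, not necessarily PAFER's accounting.

```python
import numpy as np

def dp_statistical_parity(y_pred, group, epsilon, rng=None):
    """Estimate SP = |P(y=1 | A=0) - P(y=1 | A=1)| from noisy counts.

    Four counts are released, each with Laplace scale 4/epsilon, so the
    total budget is epsilon under sequential composition (each count has
    sensitivity 1 with respect to one individual's record).
    """
    rng = rng or np.random.default_rng(0)
    scale = 4.0 / epsilon
    rates = []
    for g in (0, 1):
        pos = np.sum((group == g) & (y_pred == 1)) + rng.laplace(0.0, scale)
        tot = np.sum(group == g) + rng.laplace(0.0, scale)
        rates.append(pos / max(tot, 1.0))  # guard against tiny/negative counts
    return abs(rates[0] - rates[1])
```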
arXiv Detail & Related papers (2023-12-13T14:54:48Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible.
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
- Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z)
- TeD-SPAD: Temporal Distinctiveness for Self-Supervised Privacy-Preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Discriminative Adversarial Privacy: Balancing Accuracy and Membership Privacy in Neural Networks [7.0895962209555465]
Discriminative Adversarial Privacy (DAP) is a learning technique designed to achieve a balance between model performance, speed, and privacy.
DAP relies on adversarial training with a novel loss function that minimises the prediction error while maximising the error of a membership inference attack (MIA).
In addition, we introduce a novel metric named Accuracy Over Privacy (AOP) to capture the performance-privacy trade-off.
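A hedged sketch of such an adversarial objective, assuming a jointly trained membership-inference attacker whose logits are available during training; the weighting and exact formulation are illustrative, not the paper's loss (and the AOP metric is not reproduced here).

```python
import torch
import torch.nn.functional as F

def adversarial_privacy_loss(task_logits, labels, mia_logits, membership, lam=0.5):
    """Minimize task error while maximizing the attacker's error.

    task_logits: classifier outputs for the main task.
    mia_logits:  attack model's membership predictions on the same batch.
    Subtracting the attacker's loss pushes the classifier toward
    parameters the membership inference attack cannot exploit.
    """
    task_loss = F.cross_entropy(task_logits, labels)
    mia_loss = F.binary_cross_entropy_with_logits(mia_logits, membership.float())
    return task_loss - lam * mia_loss
```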
arXiv Detail & Related papers (2023-06-05T17:25:45Z)
- Joint Sensing, Communication, and AI: A Trifecta for Resilient THz User Experiences [118.91584633024907]
A novel joint sensing, communication, and artificial intelligence (AI) framework is proposed so as to optimize extended reality (XR) experiences over terahertz (THz) wireless systems.
arXiv Detail & Related papers (2023-04-29T00:39:50Z)
- Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism [12.017509695576377]
We show that an adversary can launch a query-flooding parameter duplication (QPD) attack to infer the model information.
Differential privacy (DP) has been considered a promising technique to mitigate this attack.
We propose a novel High-Dimensional Gaussian (HDG) mechanism to prevent unauthorized information disclosure.
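For orientation only, the classical (epsilon, delta)-DP Gaussian mechanism is sketched below; the paper's High-Dimensional Gaussian (HDG) mechanism calibrates noise differently for high-dimensional outputs and is not reproduced here.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """Release value + N(0, sigma^2) noise, satisfying (epsilon, delta)-DP.

    Uses the classical calibration sigma = sqrt(2 ln(1.25/delta)) * Δ2 / ε,
    valid for epsilon < 1; shown only as a standard baseline.
    """
    rng = rng or np.random.default_rng(0)
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))
```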
arXiv Detail & Related papers (2020-02-06T01:47:08Z)