GaitGuard: Towards Private Gait in Mixed Reality
- URL: http://arxiv.org/abs/2312.04470v3
- Date: Tue, 4 Jun 2024 05:13:26 GMT
- Title: GaitGuard: Towards Private Gait in Mixed Reality
- Authors: Diana Romero, Ruchi Jagdish Patel, Athina Markopoulou, Salma Elmalaki
- Abstract summary: GaitGuard is the first real-time framework designed to protect the privacy of gait features within the camera view of AR/MR devices.
GaitGuard reduces the risk of identification by up to 68%, while maintaining a minimal latency of merely 118.77 ms.
- Score: 3.2392550445029396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Augmented/Mixed Reality (AR/MR) technologies offer a new era of immersive, collaborative experiences, distinctively setting them apart from conventional mobile systems. However, as we further investigate the privacy and security implications within these environments, the issue of gait privacy emerges as a critical yet underexplored concern. Given its uniqueness as a biometric identifier that can be correlated with several sensitive attributes, the protection of gait information becomes crucial in preventing potential identity tracking and unauthorized profiling within these systems. In this paper, we conduct a user study with 20 participants to assess the risk of individual identification through gait feature analysis extracted from video feeds captured by MR devices. Our results show the capability to uniquely identify individuals with an accuracy of up to 92%, underscoring an urgent need for effective gait privacy protection measures. Through rigorous evaluation, we present a comparative analysis of various mitigation techniques, addressing both aware and unaware adversaries, in terms of their utility and impact on privacy preservation. From these evaluations, we introduce GaitGuard, the first real-time framework designed to protect the privacy of gait features within the camera view of AR/MR devices. Our evaluations of GaitGuard within an MR collaborative scenario demonstrate that its mitigation reduces the risk of identification by up to 68%, while maintaining a minimal latency of merely 118.77 ms, marking a critical step forward in safeguarding privacy within AR/MR ecosystems.
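The paper's implementation is not reproduced here. As a rough sketch of the kind of real-time, frame-level mitigation GaitGuard performs, the snippet below blurs the lower-body region of each detected person before a frame is shared; the HOG person detector and the Gaussian blur are illustrative assumptions, not GaitGuard's actual pipeline.

```python
import cv2  # assumed dependency for illustration

# Off-the-shelf pedestrian detector; GaitGuard's detector differs.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def mitigate_gait(frame):
    """Blur the lower half of every detected person, where most gait
    information (leg motion) resides, before the frame leaves the device."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        y0 = y + h // 2                       # lower half of the person box
        roi = frame[y0:y + h, x:x + w]
        if roi.size:                          # skip empty crops at the border
            frame[y0:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
    return frame
```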
Related papers
- Video-DPRP: A Differentially Private Approach for Visual Privacy-Preserving Video Human Activity Recognition [1.90946053920849]
Two primary approaches to ensure privacy preservation in Video HAR are differential privacy (DP) and visual privacy.
We introduce Video-DPRP: a Video-sample-wise Differentially Private Random Projection framework for privacy-preserving video reconstruction for HAR.
We compare Video-DPRP's performance on activity recognition with traditional DP methods, and state-of-the-art (SOTA) visual privacy-preserving techniques.
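As a minimal sketch of the general idea (a random projection followed by calibrated noise), under assumptions that are not the paper's: the projection dimension k and the noise scale sigma would have to be calibrated to the projection's sensitivity and a target (epsilon, delta).

```python
import numpy as np

def dp_random_projection(frames, k, sigma, rng=np.random.default_rng(0)):
    """Flatten video frames, project them to k dimensions with a
    Johnson-Lindenstrauss-style random matrix, then add Gaussian noise.
    Noise calibration to (epsilon, delta) is omitted for brevity."""
    x = frames.reshape(frames.shape[0], -1).astype(np.float64)
    proj = rng.normal(0.0, 1.0 / np.sqrt(k), size=(x.shape[1], k))
    return x @ proj + rng.normal(0.0, sigma, size=(x.shape[0], k))
```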
arXiv Detail & Related papers (2025-03-03T23:43:12Z)
- PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models [51.458089902581456]
We introduce PersGuard, a novel backdoor-based approach that prevents malicious personalization of specific images.
Our method significantly outperforms existing techniques, offering a more robust solution for privacy and copyright protection.
arXiv Detail & Related papers (2025-02-22T09:47:55Z)
- Differentially Private Integrated Decision Gradients (IDG-DP) for Radar-based Human Activity Recognition [5.955900146668931]
Recent research has shown high accuracy in recognizing subjects or gender from radar gait patterns, raising privacy concerns.
This study addresses these issues by investigating privacy vulnerabilities in radar-based Human Activity Recognition (HAR) systems.
We propose a novel method for privacy preservation using Differential Privacy (DP), driven by attributions derived with the Integrated Decision Gradient (IDG) algorithm.
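A crude stand-in for the attribution-driven idea, with the attributions assumed to be given (computing IDG itself is the paper's contribution and is not shown):

```python
import numpy as np

def attribution_guided_noise(x, attributions, base_sigma,
                             rng=np.random.default_rng(0)):
    """Perturb features in proportion to their (given) attribution scores,
    so the features most responsible for identity leakage get more noise."""
    a = np.abs(attributions)
    a = a / (a.max() + 1e-12)          # normalize attributions to [0, 1]
    return x + rng.normal(0.0, base_sigma, x.shape) * a
```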
arXiv Detail & Related papers (2024-11-04T14:08:26Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
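The classic Gaussian mechanism on feature vectors conveys the flavor of such a device-side step; the clip bound and noise scale here are illustrative, and the paper's mechanism additionally accounts for the wireless setting.

```python
import numpy as np

def privatize_features(features, clip_norm, sigma,
                       rng=np.random.default_rng(0)):
    """Clip each feature vector to L2 norm `clip_norm`, then add Gaussian
    noise scaled to that bound before transmission to the server."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    clipped = features * np.minimum(1.0, clip_norm / (norms + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, clipped.shape)
```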
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
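A toy version of the masking idea, with the sensitive-region mask assumed given (MaskDP's actual mechanism differs):

```python
import numpy as np

def masked_noise(frame, sensitive_mask, sigma, rng=np.random.default_rng(0)):
    """Add Gaussian noise only where `sensitive_mask` (H x W, bool) is True,
    e.g. background regions not already replaced by avatars; the rest of the
    (H x W x 3, uint8) frame is left untouched."""
    noisy = frame.astype(np.float64) + rng.normal(0.0, sigma, frame.shape)
    out = np.where(sensitive_mask[..., None], noisy, frame)
    return out.clip(0, 255).astype(np.uint8)
```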
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience [11.130411904676095]
Eye tracking data, if exposed, can be used for re-identification attacks.
We develop a methodology to evaluate real-time privacy mechanisms for interactive VR applications.
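For flavor, a streaming Laplace perturbation of gaze samples; the sensitivity bound below is an illustrative assumption, not the paper's mechanism.

```python
import numpy as np

def private_gaze_stream(gaze_samples, epsilon, rng=np.random.default_rng(0)):
    """Perturb each 2-D gaze sample before it leaves the headset. Assumes
    normalized gaze coordinates in [-1, 1], so per-axis sensitivity <= 2."""
    scale = 2.0 / epsilon
    for sample in gaze_samples:
        yield sample + rng.laplace(0.0, scale, size=np.shape(sample))
```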
arXiv Detail & Related papers (2024-02-12T14:53:12Z)
- Protect Your Score: Contact Tracing With Differential Privacy Guarantees [68.53998103087508]
We argue that privacy concerns currently hold deployment back.
We propose a contact tracing algorithm with differential privacy guarantees against this attack.
Especially for realistic test scenarios, we achieve a two- to ten-fold reduction in the infection rate of the virus.
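A minimal illustration of releasing a risk score under the Laplace mechanism; unit sensitivity is an assumption here, and the paper's algorithm is considerably more involved.

```python
import numpy as np

def release_risk_score(contact_risks, epsilon, rng=np.random.default_rng(0)):
    """Release a user's aggregated contact-risk score with Laplace noise,
    assuming any single contact shifts the score by at most 1."""
    return float(np.sum(contact_risks)) + rng.laplace(0.0, 1.0 / epsilon)
```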
arXiv Detail & Related papers (2023-12-18T11:16:33Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
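The EVR control flow itself is short; in this sketch both the estimator and the randomized verifier are hypothetical placeholders for the paper's constructions.

```python
def estimate_verify_release(mechanism, data, estimate_eps, verify):
    """Estimate-verify-release: estimate a privacy parameter, verify the
    mechanism actually satisfies it, and only then release the output."""
    eps = estimate_eps(mechanism)            # 1. estimate privacy parameter
    if not verify(mechanism, eps):           # 2. randomized verification
        raise RuntimeError("privacy estimate failed verification; aborting")
    return mechanism(data)                   # 3. release the query output
```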
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- STPrivacy: Spatio-Temporal Tubelet Sparsification and Anonymization for Privacy-preserving Action Recognition [28.002605566359676]
We present a PPAR paradigm that performs privacy preservation from both spatial and temporal perspectives, and propose the STPrivacy framework.
For the first time, our STPrivacy applies vision Transformers to PPAR and regards a video as a sequence of spatio-temporal tubelets.
Because there are no large-scale benchmarks, we annotate five privacy attributes for two of the most popular action recognition datasets.
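A crude, non-learned stand-in for tubelet sparsification: partition the clip into tubelets and zero out those with the highest leakage scores, which are assumed given here rather than predicted by a vision Transformer.

```python
import numpy as np

def sparsify_tubelets(video, leak_scores, keep_ratio=0.5, t=4, p=16):
    """video: (T, H, W, C) with T % t == 0, H % p == 0, W % p == 0.
    leak_scores: one score per (t x p x p) tubelet, shape (T//t, H//p, W//p).
    Zeroes out the 1 - keep_ratio fraction of tubelets that leak the most."""
    out = video.copy()
    order = np.argsort(leak_scores.ravel())            # ascending leakage
    for flat in order[int(order.size * keep_ratio):]:  # the leakiest tubelets
        ti, hi, wi = np.unravel_index(flat, leak_scores.shape)
        out[ti*t:(ti+1)*t, hi*p:(hi+1)*p, wi*p:(wi+1)*p] = 0
    return out
```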
arXiv Detail & Related papers (2023-01-08T14:07:54Z)
- On the Privacy Effect of Data Enhancement via the Lens of Memorization [20.63044895680223]
We propose to investigate privacy from a new perspective called memorization.
Through the lens of memorization, we find that previously deployed MIAs produce misleading results as they are less likely to identify samples with higher privacy risks.
We demonstrate that the generalization gap and privacy leakage are less correlated than previous results suggest.
arXiv Detail & Related papers (2022-08-17T13:02:17Z)
- Privacy-Aware Adversarial Network in Human Mobility Prediction [11.387235721659378]
User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications.
We propose an LSTM-based adversarial representation learning approach to attain a privacy-preserving feature representation of the original geolocated data.
We show that the privacy of mobility traces attains decent protection at the cost of only a marginal loss in mobility utility.
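A minimal adversarial-representation training loop in PyTorch conveys the mechanics; the architecture sizes, the 100-user adversary, and the loss weighting are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """LSTM encoder mapping a trajectory (B, T, 2) to a feature (B, 64)."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(2, 64, batch_first=True)
    def forward(self, traj):
        _, (h, _) = self.lstm(traj)
        return h[-1]

enc = Encoder()
task_head = nn.Linear(64, 2)      # utility task: predict the next location
adversary = nn.Linear(64, 100)    # privacy attack: re-identify among 100 users
opt_main = torch.optim.Adam(list(enc.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def train_step(traj, next_loc, user_id, lam=1.0):
    z = enc(traj)
    # Adversary step: learn to re-identify users from detached features.
    opt_adv.zero_grad()
    ce(adversary(z.detach()), user_id).backward()
    opt_adv.step()
    # Main step: keep the task accurate while fooling the adversary.
    opt_main.zero_grad()
    loss = mse(task_head(z), next_loc) - lam * ce(adversary(z), user_id)
    loss.backward()
    opt_main.step()
    return loss.item()
```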
arXiv Detail & Related papers (2022-08-09T19:23:13Z)
- PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimization framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z)
- SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
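An empirical, brute-force reading of per-feature sensitivity for a scalar query; the paper's analysis is more principled, and `feature_ranges` is an assumed input.

```python
import numpy as np

def per_feature_sensitivity(query, x, feature_ranges):
    """For each feature j, push x[j] to the ends of its range with all other
    features fixed, and record the largest change in the query's output."""
    base = query(x)
    sens = np.zeros(len(x))
    for j, (lo, hi) in enumerate(feature_ranges):
        for v in (lo, hi):
            x_probe = x.copy()
            x_probe[j] = v
            sens[j] = max(sens[j], abs(query(x_probe) - base))
    return sens
```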
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
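Those two ingredients are the core of DP-SGD; a minimal sketch of one step, with the batch and scale choices left as assumptions:

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip, sigma, lr,
                rng=np.random.default_rng(0)):
    """Clip each per-example gradient to L2 norm `clip`, average the batch,
    then add Gaussian noise scaled to the clip bound."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / (norms + 1e-12))
    noise = rng.normal(0.0, sigma * clip / len(per_example_grads), w.shape)
    return w - lr * (clipped.mean(axis=0) + noise)
```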
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Differential Privacy for Eye Tracking with Temporal Correlations [30.44437258959343]
New generation head-mounted displays, such as VR and AR glasses, are coming into the market with already integrated eye tracking.
Since eye movement properties contain biometric information, privacy concerns have to be handled properly.
We propose a novel transform-coding-based differential privacy mechanism, further adapted to the statistics of eye movement feature data.
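A simplified transform-coding sketch: low-pass an eye-movement feature series in the frequency domain and perturb only the retained coefficients. The real-FFT choice, coefficient count, and noise calibration are assumptions, not the paper's mechanism.

```python
import numpy as np

def transform_coded_dp(signal, epsilon, sensitivity, k=16,
                       rng=np.random.default_rng(0)):
    """Keep the first k real-FFT coefficients of a 1-D feature series
    (transform coding), add Laplace noise to them, and reconstruct.
    Requires k <= len(signal) // 2 + 1."""
    coeffs = np.fft.rfft(signal)
    coeffs[k:] = 0                                    # low-pass / compress
    scale = sensitivity / epsilon
    coeffs[:k] += rng.laplace(0, scale, k) + 1j * rng.laplace(0, scale, k)
    return np.fft.irfft(coeffs, n=len(signal))
```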
arXiv Detail & Related papers (2020-02-20T19:01:34Z)
- Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models [6.902994369582068]
We present a formal definition of the privacy protection problem in edge-cloud systems running deep models.
We analyze state-of-the-art methods and point out their drawbacks.
We propose two new metrics that measure the effectiveness of privacy protection methods more accurately.
arXiv Detail & Related papers (2019-12-31T15:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.