GaitGuard: Towards Private Gait in Mixed Reality
- URL: http://arxiv.org/abs/2312.04470v6
- Date: Fri, 03 Oct 2025 22:15:02 GMT
- Title: GaitGuard: Towards Private Gait in Mixed Reality
- Authors: Diana Romero, Ruchi Jagdish Patel, Athina Markopoulou, Salma Elmalaki
- Abstract summary: We introduce GaitGuard, a novel, real-time system designed to safeguard gait privacy against video-based gait profiling threats. GaitGuard operates on a multi-threaded framework, incorporating dedicated modules for efficient stream capture, body detection and tracking, and privacy protection.
- Score: 1.7277693508964933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Augmented and Mixed Reality (AR/MR) systems offer uniquely immersive and collaborative experiences, fundamentally diverging from traditional mobile interactions. As these technologies become more pervasive, ensuring user privacy is paramount. This paper addresses gait privacy, a critical concern where an individual's walking pattern can inadvertently reveal sensitive personal information like age, ethnicity, or health conditions. We introduce GaitGuard, a novel, real-time system designed to safeguard gait privacy against video-based gait profiling threats within MR environments. GaitGuard operates on a multi-threaded framework, incorporating dedicated modules for efficient stream capture, body detection and tracking, and effective privacy protection. Our rigorous evaluation involved testing 248 distinct configurations, systematically varying regions of interest, privacy techniques, and operational parameters. This comprehensive analysis allowed us to thoroughly assess the trade-offs between privacy protection, video quality, and system performance. Furthermore, we propose an innovative adaptive method that intelligently processes only gait-critical frames, significantly enhancing visual quality without compromising privacy for real-time deployment. GaitGuard demonstrates substantial privacy protection, achieving up to a 68% reduction in gait profiling accuracy and inducing a significant feature distribution shift (Jensen-Shannon Divergence of 0.63). Crucially, the system maintains a high performance of 29 frames per second (FPS), ensuring an acceptable user experience. User studies with 20 participants further validate our approach, indicating greater user comfort and acceptance of the privacy-preserving transformations. GaitGuard offers a practical and immediately deployable solution for robust gait privacy in MR, without sacrificing the immersive user experience.
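The abstract quantifies GaitGuard's feature distribution shift with a Jensen-Shannon Divergence of 0.63. As a reference for that metric (a generic computation, not GaitGuard's implementation), base-2 JSD between two discrete distributions can be computed as:

```python
import math

def kl_divergence(p, q):
    """KL divergence in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon_divergence(p, q):
    """Symmetric divergence, bounded in [0, 1] with base-2 logs."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # mixture distribution
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Identical distributions diverge by 0; disjoint ones by 1.
same = jensen_shannon_divergence([0.5, 0.5, 0.0], [0.5, 0.5, 0.0])
disjoint = jensen_shannon_divergence([1.0, 0.0], [0.0, 1.0])
```

On this [0, 1] scale, the reported 0.63 indicates the protected gait features are substantially, though not completely, separated from the originals.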
Related papers
- Who Can See Through You? Adversarial Shielding Against VLM-Based Attribute Inference Attacks [13.326888254423901]
VLM-based attribute inference attacks have emerged as a serious privacy concern, enabling adversaries to infer private attributes from images shared on social media. We propose a novel protection method that jointly optimizes privacy suppression and utility preservation under a visual consistency constraint. Our method effectively reduces PAR below 25%, keeps NPAR above 88%, and generalizes well to unseen and paraphrased privacy questions.
arXiv Detail & Related papers (2025-12-20T08:08:50Z) - Balancing Privacy and Action Performance: A Penalty-Driven Approach to Image Anonymization [8.874765152344468]
We propose a privacy-preserving image anonymization technique that optimizes the anonymizer using penalties from the utility branch. We are the first to introduce a feature-based penalty scheme that exclusively controls the action features, allowing freedom to anonymize private attributes.
arXiv Detail & Related papers (2025-04-19T13:52:33Z) - Video-DPRP: A Differentially Private Approach for Visual Privacy-Preserving Video Human Activity Recognition [1.90946053920849]
Two primary approaches to ensure privacy preservation in Video HAR are differential privacy (DP) and visual privacy.
We introduce Video-DPRP: a Video-sample-wise Differentially Private Random Projection framework for privacy-preserved video reconstruction for HAR.
We compare Video-DPRP's performance on activity recognition with traditional DP methods, and state-of-the-art (SOTA) visual privacy-preserving techniques.
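Video-DPRP's exact construction is sample-wise and tailored to video; as a generic sketch of the underlying idea only (random projection to a lower dimension followed by calibrated noise, with illustrative function names and noise scale, not the paper's calibration):

```python
import random

def dp_random_projection(features, out_dim, noise_scale, seed=0):
    """Project a feature vector to out_dim dimensions with a random
    Gaussian matrix, then add Gaussian noise to each output coordinate."""
    rng = random.Random(seed)
    in_dim = len(features)
    # Projection matrix with N(0, 1/out_dim) entries (Johnson-Lindenstrauss style).
    proj = [[rng.gauss(0, 1 / out_dim ** 0.5) for _ in range(in_dim)]
            for _ in range(out_dim)]
    projected = [sum(w * x for w, x in zip(row, features)) for row in proj]
    return [y + rng.gauss(0, noise_scale) for y in projected]

reduced = dp_random_projection([0.2, 0.9, 0.4, 0.7], out_dim=2, noise_scale=0.1)
```

The projection discards visual detail while the added noise provides the formal differential privacy guarantee; the paper's contribution is calibrating both per video sample.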
arXiv Detail & Related papers (2025-03-03T23:43:12Z) - PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models [51.458089902581456]
We introduce PersGuard, a novel backdoor-based approach that prevents malicious personalization of specific images.
Our method significantly outperforms existing techniques, offering a more robust solution for privacy and copyright protection.
arXiv Detail & Related papers (2025-02-22T09:47:55Z) - Facial Expression Recognition with Controlled Privacy Preservation and Feature Compensation [24.619279669211842]
Facial expression recognition (FER) systems raise significant privacy concerns due to the potential exposure of sensitive identity information. This paper presents a study on removing identity information while preserving FER capabilities. We introduce a controlled privacy enhancement mechanism to optimize performance and a feature compensator to enhance task-relevant features without compromising privacy.
arXiv Detail & Related papers (2024-11-29T23:12:38Z) - Differentially Private Integrated Decision Gradients (IDG-DP) for Radar-based Human Activity Recognition [5.955900146668931]
Recent research has shown high accuracy in recognizing subjects or gender from radar gait patterns, raising privacy concerns.
This study addresses these issues by investigating privacy vulnerabilities in radar-based Human Activity Recognition (HAR) systems.
We propose a novel method for privacy preservation using Differential Privacy (DP) driven by attributions derived with the Integrated Decision Gradient (IDG) algorithm.
arXiv Detail & Related papers (2024-11-04T14:08:26Z) - Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z) - Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within their scene context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Scalable Differential Privacy Mechanisms for Real-Time Machine Learning Applications [0.0]
Large language models (LLMs) are increasingly integrated into real-time machine learning applications, where safeguarding user privacy is paramount.
Traditional differential privacy mechanisms often struggle to balance privacy and accuracy, particularly in fast-changing environments with continuously flowing data.
We introduce Scalable Differential Privacy (SDP), a framework tailored for real-time machine learning that emphasizes both robust privacy guarantees and enhanced model performance.
arXiv Detail & Related papers (2024-09-16T20:52:04Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
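User-level DP typically bounds each user's total contribution before noising, rather than bounding individual records. A minimal sketch of that idea (not the paper's fine-tuning algorithm; the clipping-then-noise pattern and all parameters are illustrative):

```python
import random

def user_level_dp_mean(user_updates, clip_norm, noise_scale, seed=0):
    """Average per-user update vectors after clipping each user's
    contribution to L2 norm clip_norm, then add Gaussian noise."""
    rng = random.Random(seed)
    dim = len(user_updates[0])
    total = [0.0] * dim
    for u in user_updates:
        norm = sum(x * x for x in u) ** 0.5
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            total[i] += u[i] * scale  # each user contributes at most clip_norm
    n = len(user_updates)
    return [t / n + rng.gauss(0, noise_scale * clip_norm / n) for t in total]

# noise_scale=0.0 isolates the clipping step for illustration.
updates = [[1.0, 0.0], [0.0, 2.0], [3.0, 4.0]]
avg = user_level_dp_mean(updates, clip_norm=1.0, noise_scale=0.0)
```

Because clipping caps any single user's influence on the average, the subsequent noise can hide whether that user participated at all, which is exactly the user-level guarantee.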
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience [11.130411904676095]
Eye tracking data, if exposed, can be used for re-identification attacks.
We develop a methodology to evaluate real-time privacy mechanisms for interactive VR applications.
arXiv Detail & Related papers (2024-02-12T14:53:12Z) - Protect Your Score: Contact Tracing With Differential Privacy Guarantees [68.53998103087508]
We argue that privacy concerns currently hold deployment back.
We propose a contact tracing algorithm with differential privacy guarantees against this attack.
Especially for realistic test scenarios, we achieve a two to ten-fold reduction in the infection rate of the virus.
arXiv Detail & Related papers (2023-12-18T11:16:33Z) - Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach [2.4743508801114444]
Differential privacy has been widely adopted to preserve privacy in recommender systems.
Existing differentially private recommender systems only consider static and independent interactions.
We propose a novel Differentially Private Sequential recommendation framework with a noisy Graph Neural Network approach.
arXiv Detail & Related papers (2023-09-17T03:12:33Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
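The estimate-verify-release flow can be sketched as plain control logic; everything here (function names, the toy mechanism, the exact verifier) is illustrative, since the paper's verifier is a randomized privacy test rather than a simple comparison:

```python
def estimate_verify_release(mechanism, query, verifier, eps_budget):
    """EVR control flow: run the mechanism, take its estimated privacy
    cost, verify the estimate against the budget, release only on pass."""
    output, eps_estimate = mechanism(query)   # mechanism reports its own cost
    if verifier(eps_estimate, eps_budget):    # the paper uses a randomized check
        return output                         # release the query output
    return None                               # withhold on verification failure

# Toy instantiation: a noisy-sum mechanism with a claimed cost of 0.9,
# and an exact threshold verifier in place of the paper's randomized one.
result = estimate_verify_release(
    mechanism=lambda q: (sum(q) + 0.5, 0.9),
    query=[1, 2, 3],
    verifier=lambda est, budget: est <= budget,
    eps_budget=1.0,
)
```

The value of the paradigm is that the estimate can be tight (improving utility) because the verification step catches estimates that would overspend the privacy budget.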
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - STPrivacy: Spatio-Temporal Tubelet Sparsification and Anonymization for Privacy-preserving Action Recognition [28.002605566359676]
We present a PPAR paradigm performing privacy preservation from both spatial and temporal perspectives, and propose the STPrivacy framework.
For the first time, our STPrivacy applies vision Transformers to PPAR and regards a video as a sequence of spatio-temporal tubelets.
Because there are no large-scale benchmarks, we annotate five privacy attributes for two of the most popular action recognition datasets.
arXiv Detail & Related papers (2023-01-08T14:07:54Z) - On the Privacy Effect of Data Enhancement via the Lens of Memorization [20.63044895680223]
We propose to investigate privacy from a new perspective called memorization.
Through the lens of memorization, we find that previously deployed MIAs produce misleading results as they are less likely to identify samples with higher privacy risks.
We demonstrate that the generalization gap and privacy leakage are less correlated than in previous results.
arXiv Detail & Related papers (2022-08-17T13:02:17Z) - Privacy-Aware Adversarial Network in Human Mobility Prediction [11.387235721659378]
User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications.
We propose an LSTM-based adversarial representation learning to attain a privacy-preserving feature representation of the original geolocated data.
We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility.
arXiv Detail & Related papers (2022-08-09T19:23:13Z) - PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimizing framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z) - SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z) - Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Differential Privacy for Eye Tracking with Temporal Correlations [30.44437258959343]
New generation head-mounted displays, such as VR and AR glasses, are coming into the market with already integrated eye tracking.
Since eye movement properties contain biometric information, privacy concerns have to be handled properly.
We propose a novel transform-coding based differential privacy mechanism to further adapt it to the statistics of eye movement feature data.
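The transform-coding mechanism in that paper builds on standard additive-noise DP. As background only (the classic Laplace mechanism, not the paper's transform-coding construction; the function name and parameters are illustrative):

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release value + Laplace(0, sensitivity/epsilon) noise, the standard
    mechanism for epsilon-DP release of a single numeric query."""
    rng = rng or random.Random()
    b = sensitivity / epsilon          # noise scale grows as epsilon shrinks
    u = rng.random() - 0.5             # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    return value - b * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

# Perturb a single eye-movement feature value (illustrative numbers).
noisy = laplace_mechanism(0.42, sensitivity=1.0, epsilon=1.0,
                          rng=random.Random(7))
```

The paper's contribution is adapting such noise to the temporal correlations of eye-movement features, which naive per-sample noising does not account for.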
arXiv Detail & Related papers (2020-02-20T19:01:34Z) - Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models [6.902994369582068]
We present a formal definition of the privacy protection problem in edge-cloud systems running deep models.
We analyze state-of-the-art methods and point out their drawbacks.
We propose two new metrics that are more accurate to measure the effectiveness of privacy protection methods.
arXiv Detail & Related papers (2019-12-31T15:55:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.