Privacy on the Fly: A Predictive Adversarial Transformation Network for Mobile Sensor Data
- URL: http://arxiv.org/abs/2511.07242v3
- Date: Tue, 18 Nov 2025 14:23:51 GMT
- Title: Privacy on the Fly: A Predictive Adversarial Transformation Network for Mobile Sensor Data
- Authors: Tianle Song, Chenhao Lin, Yang Cao, Zhengyu Zhao, Jiahao Sun, Chong Zhang, Le Yang, Chao Shen
- Abstract summary: We propose a real-time privacy-preserving framework that leverages historical signals to generate adversarial perturbations proactively. Experiments on two datasets demonstrate that PATN substantially degrades the performance of privacy inference models.
- Score: 32.36355095752335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile motion sensors such as accelerometers and gyroscopes are now ubiquitously accessible by third-party apps via standard APIs. While enabling rich functionalities like activity recognition and step counting, this openness has also enabled unregulated inference of sensitive user traits, such as gender, age, and even identity, without user consent. Existing privacy-preserving techniques, such as GAN-based obfuscation or differential privacy, typically require access to the full input sequence, introducing latency that is incompatible with real-time scenarios. Worse, they tend to distort temporal and semantic patterns, degrading the utility of the data for benign tasks like activity recognition. To address these limitations, we propose the Predictive Adversarial Transformation Network (PATN), a real-time privacy-preserving framework that leverages historical signals to generate adversarial perturbations proactively. The perturbations are applied immediately upon data acquisition, enabling continuous protection without disrupting application functionality. Experiments on two datasets demonstrate that PATN substantially degrades the performance of privacy inference models, achieving Attack Success Rate (ASR) of 40.11% and 44.65% (reducing inference accuracy to near-random) and increasing the Equal Error Rate (EER) from 8.30% and 7.56% to 41.65% and 46.22%. On ASR, PATN outperforms baseline methods by 16.16% and 31.96%, respectively.
Related papers
- Adversary-Aware Private Inference over Wireless Channels [51.93574339176914]
AI-based sensing at wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications. As sensitive personal data can be reconstructed by an adversary, transformations of the features are required to reduce the risk of privacy violations. We propose a novel framework for privacy-preserving AI-based sensing, where devices apply transformations of extracted features before transmission to a model server.
arXiv Detail & Related papers (2025-10-23T13:02:14Z)
- DELTA: Variational Disentangled Learning for Privacy-Preserving Data Reprogramming [20.87548031005583]
We propose DELTA, a two-phase variational disentangled generative learning framework. Phase I uses policy-guided reinforcement learning to discover feature transformations with downstream task utility, without any regard to privacy inferability. Phase II employs a variational LSTM seq2seq encoder-decoder with a utility-privacy disentangled latent space design and adversarial-causal disentanglement regularization to suppress privacy signals.
arXiv Detail & Related papers (2025-08-31T04:18:42Z)
- Benchmarking Fraud Detectors on Private Graph Data [70.4654745317714]
Currently, many types of fraud are managed in part by automated detection algorithms that operate over graphs. We consider the scenario where a data holder wishes to outsource development of fraud detectors to third parties. Third parties submit their fraud detectors to the data holder, who evaluates these algorithms on a private dataset and then publicly communicates the results. We propose a realistic privacy attack on this system that allows an adversary to de-anonymize individuals' data based only on the evaluation results.
arXiv Detail & Related papers (2025-07-30T03:20:15Z)
- Privacy-aware IoT Fall Detection Services For Aging in Place [1.4061979259370276]
Fall detection is critical to support the growing elderly population, projected to reach 2.1 billion by 2050. We propose a novel IoT-based Fall Detection as a Service framework to assist the elderly in living independently and safely by accurately detecting falls. We design a service-oriented architecture that leverages Ultra-wideband (UWB) radar sensors as an IoT health-sensing service, ensuring privacy and minimal intrusion.
arXiv Detail & Related papers (2025-06-18T03:28:07Z)
- DP-SMOTE: Integrating Differential Privacy and Oversampling Technique to Preserve Privacy in Smart Homes [0.0]
This paper introduces a robust method for secure sharing of data with service providers, grounded in differential privacy (DP). The approach incorporates the Synthetic Minority Oversampling Technique (SMOTE) and seamlessly integrates Gaussian noise to generate synthetic data. It delivers strong performance in safeguarding privacy and in accuracy, recall, and F-measure metrics.
arXiv Detail & Related papers (2025-04-29T14:50:50Z)
- Dual Utilization of Perturbation for Stream Data Publication under Local Differential Privacy [10.07017446059039]
Local differential privacy (LDP) has emerged as a promising standard. Applying LDP to stream data presents significant challenges, as stream data often involves a large or even infinite number of values. We introduce the Iterative Perturbation (IPP) method, which utilizes current perturbed results to calibrate the subsequent perturbation process. We prove that these three algorithms satisfy $w$-event differential privacy while significantly improving utility.
arXiv Detail & Related papers (2025-04-21T09:51:18Z)
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Effect of Data Degradation on Motion Re-Identification [16.062009131216467]
We study the effect of signal degradation on identifiability, specifically through added noise, reduced framerate, reduced precision, and reduced dimensionality of the data.
Our experiment shows that state-of-the-art identification attacks still achieve near-perfect accuracy for each of these degradations.
arXiv Detail & Related papers (2024-07-25T20:23:32Z)
- TeD-SPAD: Temporal Distinctiveness for Self-Supervised Privacy-Preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
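The RDP-GAN entry above describes perturbing the loss value itself rather than the gradients. A toy sketch of that general idea follows; it is not the paper's implementation, and the noise scale and simulated loss trajectory are purely illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the RDP-GAN paper's code): add zero-mean Gaussian
# noise to the discriminator's scalar loss each training step, so the
# gradients derived from it carry calibrated randomness.

rng = np.random.default_rng(42)
SIGMA = 0.1  # noise scale; larger sigma -> stronger privacy, noisier training

def noisy_loss(clean_loss: float, sigma: float = SIGMA) -> float:
    """Perturb a scalar loss with zero-mean Gaussian noise before backprop."""
    return clean_loss + rng.normal(scale=sigma)

# Simulate a discriminator loss shrinking over 200 training steps.
clean = [1.0 / (1 + t) for t in range(200)]
noised = [noisy_loss(loss) for loss in clean]

# Because the noise is zero-mean, the average perturbation stays near zero,
# so training signal is preserved in expectation while each step is private.
bias = float(np.mean(np.array(noised) - np.array(clean)))
```

Calibrating `SIGMA` against a Rényi-DP accountant is the substantive contribution of the paper; the sketch only shows where the noise enters the training loop.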
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.