Blinder: End-to-end Privacy Protection in Sensing Systems via
Personalized Federated Learning
- URL: http://arxiv.org/abs/2209.12046v3
- Date: Fri, 6 Oct 2023 17:23:09 GMT
- Authors: Xin Yang, Omid Ardakanian
- Abstract summary: We propose a sensor data anonymization model that is trained on decentralized data and strikes a desirable trade-off between data utility and privacy.
Our anonymization model, dubbed Blinder, is based on a variational autoencoder and one or multiple discriminator networks trained in an adversarial fashion.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a sensor data anonymization model that is trained on
decentralized data and strikes a desirable trade-off between data utility and
privacy, even in heterogeneous settings where the sensor data have different
underlying distributions. Our anonymization model, dubbed Blinder, is based on
a variational autoencoder and one or multiple discriminator networks trained in
an adversarial fashion. We use the model-agnostic meta-learning framework to
adapt the anonymization model trained via federated learning to each user's
data distribution. We evaluate Blinder under different settings and show that
it provides end-to-end privacy protection on two IMU datasets at the cost of
increasing privacy loss by up to 4.00% and decreasing data utility by up to
4.24%, compared to the state-of-the-art anonymization model trained on
centralized data. We also showcase Blinder's ability to anonymize the radio
frequency sensing modality. Our experiments confirm that Blinder can obscure
multiple private attributes at once, and has sufficiently low power consumption
and computational overhead for it to be deployed on edge devices and
smartphones to perform real-time anonymization of sensor data.
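The adversarial game the abstract describes lends itself to a compact sketch. Below is a minimal, illustrative training step: a variational autoencoder reconstructs a (here, flattened) sensor window while a discriminator tries to recover a private attribute from the reconstruction, and the autoencoder is rewarded for fooling it. All module names, dimensions, loss terms, and weights are assumptions for illustration, not Blinder's actual implementation.

```python
# Minimal sketch of an adversarially trained VAE anonymizer, assuming a single
# discriminator and a flattened 128-dim sensor window. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, dim=128, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)   # outputs mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

vae = VAE()
# Discriminator for one binary private attribute (hypothetical).
disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
opt_vae = torch.optim.Adam(vae.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

def train_step(x, private_label, beta=1e-3, lam=1.0):
    # 1) Discriminator learns to infer the private attribute from the output.
    x_hat, mu, logvar = vae(x)
    d_loss = F.cross_entropy(disc(x_hat.detach()), private_label)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # 2) Anonymizer keeps reconstruction fidelity (utility) while fooling the
    #    discriminator (privacy), plus the usual KL regularizer.
    x_hat, mu, logvar = vae(x)
    recon = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    fool = -F.cross_entropy(disc(x_hat), private_label)  # maximize disc loss
    g_loss = recon + beta * kl + lam * fool
    opt_vae.zero_grad(); g_loss.backward(); opt_vae.step()
    return d_loss.item(), g_loss.item()
```

Per the abstract, Blinder trains this kind of model with federated learning and then uses model-agnostic meta-learning (MAML) so each user can adapt the global anonymizer to their own data distribution; that outer meta-learning loop is omitted from the sketch above.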
Related papers
- Guided Diffusion Model for Sensor Data Obfuscation
arXiv Detail & Related papers (2024-12-19T03:47:12Z)
PrivDiffuser is a novel data obfuscation technique based on a denoising diffusion model.
We show that PrivDiffuser yields a better privacy-utility trade-off than the state-of-the-art obfuscation model.
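For context, the denoising-diffusion component underlying this approach can be sketched generically: a network is trained to predict the noise added to a sensor window, and sampling from (partial) noise then yields a surrogate signal. The sketch below assumes a noise-prediction network `model(x_t, t)`; PrivDiffuser's guidance mechanism for suppressing private attributes is the paper's contribution and is not reproduced here.

```python
# Generic DDPM-style training loss on sensor windows (illustrative sketch).
import torch
import torch.nn.functional as F

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(model, x0):
    t = torch.randint(0, T, (x0.shape[0],))
    a = alpha_bar[t].view(-1, 1)                  # noise schedule at step t
    eps = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps     # forward noising
    eps_hat = model(xt, t)                        # predict the added noise
    return F.mse_loss(eps_hat, eps)
```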
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars in context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
- A Trajectory K-Anonymity Model Based on Point Density and Partition
arXiv Detail & Related papers (2023-07-31T17:10:56Z)
This paper develops a trajectory K-anonymity model based on Point Density and Partition (KPDP).
It successfully resists re-identification attacks and reduces the data utility loss of the k-anonymized dataset.
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy?
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
We study the connection between the per-subject gradient norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
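The exact PLIS definition is in the paper, but the quantity it builds on can be sketched: the per-subject gradient norm that DP training clips and noises. The micro-batching loop below is a generic, hypothetical illustration, not the authors' code.

```python
# Per-subject gradient norms in DP training (illustrative micro-batching loop).
import torch
import torch.nn.functional as F

def per_sample_grad_norms(model, xs, ys):
    norms = []
    for x, y in zip(xs, ys):                      # one subject at a time
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        sq = sum(p.grad.pow(2).sum() for p in model.parameters()
                 if p.grad is not None)
        norms.append(sq.sqrt().item())
    # Larger norm -> clipping and noise affect that subject's contribution more.
    return norms
```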
- DP2-Pub: Differentially Private High-Dimensional Data Publication with Invariant Post Randomization
arXiv Detail & Related papers (2022-08-24T17:52:43Z)
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget.
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
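The clustering idea in the first phase can be illustrated with a generic sketch: group attributes so that pairwise dependence (here measured by mutual information) is high within clusters and low across them. The paper's actual procedure and its privacy accounting are not reproduced.

```python
# Illustrative attribute clustering by pairwise mutual information.
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_attributes(data, n_clusters=3):
    # data: (n_records, n_attributes) array of discrete attribute values
    n = data.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_info_score(data[:, i], data[:, j])
    dist = mi.max() - mi                 # high MI -> small distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    # Returns a cluster id per attribute.
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```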
- Smooth Anonymity for Sparse Graphs
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
Differential privacy has emerged as the gold standard of privacy; however, it faces challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
- Uncertainty-Autoencoder-Based Privacy and Utility Preserving Data Type Conscious Transformation
arXiv Detail & Related papers (2022-05-04T08:40:15Z)
We propose an adversarial learning framework that deals with the privacy-utility tradeoff problem under two conditions.
Under data-type ignorant conditions, the privacy mechanism provides a one-hot encoding of categorical features, representing exactly one class.
Under data-type aware conditions, the categorical variables are represented by a collection of scores, one for each class.
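The two output conventions named above can be shown in a few lines: a hard one-hot encoding (exactly one class) versus a score per class. This is a hedged sketch of the representations only; the surrounding adversarial framework is omitted.

```python
# Two ways to release a categorical feature with 5 classes (illustrative).
import torch
import torch.nn.functional as F

logits = torch.randn(4, 5)  # batch of 4, hypothetical mechanism outputs

# Data-type ignorant: a hard one-hot encoding, representing exactly one class.
one_hot = F.one_hot(logits.argmax(dim=-1), num_classes=5).float()

# Data-type aware: a collection of scores, one per class.
scores = F.softmax(logits, dim=-1)
```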
- Mixed Differential Privacy in Computer Vision
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
- Joint Optimization in Edge-Cloud Continuum for Federated Unsupervised Person Re-identification
arXiv Detail & Related papers (2021-08-14T08:35:55Z)
FedUReID is a federated unsupervised person ReID system to learn person ReID models without any labels while preserving privacy.
To tackle the problem that edges vary in data volumes and distributions, we personalize training in edges with joint optimization of cloud and edge.
Experiments on eight person ReID datasets demonstrate that FedUReID not only achieves higher accuracy but also reduces computation cost by 29%.
- Robustness Threats of Differential Privacy
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
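The two ingredients named above, per-sample gradient clipping and noise addition, are the standard DP-SGD recipe, sketched generically below (not the paper's code).

```python
# Generic DP-SGD step: clip each per-sample gradient, then add Gaussian noise.
import torch
import torch.nn.functional as F

def dp_sgd_step(model, opt, xs, ys, clip=1.0, sigma=1.0):
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                         # per-sample gradients
        model.zero_grad()
        F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = torch.clamp(clip / (norm + 1e-12), max=1.0)  # L2 norm <= clip
        for g, p in zip(grads, model.parameters()):
            g += p.grad * scale
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            # Add noise calibrated to the clipping bound, then average.
            p.grad = (g + sigma * clip * torch.randn_like(g)) / len(xs)
    opt.step()
```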
- Anonymizing Sensor Data on the Edge: A Representation Learning and Transformation Approach
arXiv Detail & Related papers (2020-11-16T22:32:30Z)
In this paper, we aim to examine the tradeoff between utility and privacy loss by learning low-dimensional representations that are useful for data obfuscation.
We propose deterministic and probabilistic transformations in the latent space of a variational autoencoder to synthesize time series data.
We show that it can anonymize data in real time on resource-constrained edge devices.
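The two transformation styles named above can be sketched given a trained VAE encoder/decoder pair; the transformation parameters and the `encoder`/`decoder` interfaces are illustrative assumptions, not the paper's implementation.

```python
# Deterministic vs. probabilistic latent-space transformation (illustrative).
import torch

def anonymize(encoder, decoder, x, sigma=0.5, deterministic=True):
    mu, logvar = encoder(x)                  # posterior over the latent code
    if deterministic:
        z = mu                               # deterministic: use the mean code
    else:
        std = (0.5 * logvar).exp()
        z = mu + (std + sigma) * torch.randn_like(mu)  # inflated sampling noise
    return decoder(z)                        # synthesize an anonymized series
```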