Privacy-Preserving Synthetic Smart Meters Data
- URL: http://arxiv.org/abs/2012.04475v1
- Date: Sun, 6 Dec 2020 16:11:16 GMT
- Title: Privacy-Preserving Synthetic Smart Meters Data
- Authors: Ganesh Del Grosso, Georg Pichler, Pablo Piantanida
- Abstract summary: We propose a method to generate synthetic power consumption samples that faithfully imitate the originals.
Our method is based on Generative Adversarial Networks (GANs).
We study the privacy guarantees provided to members of the training set of our neural network.
- Score: 24.29870358870313
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Power consumption data is highly valuable: it enables operators to
optimize power grids, detect anomalies, and prevent failures, and it serves
diverse research purposes. However, the use of power consumption data raises
significant privacy concerns, as this data usually belongs to clients of a
power company. As a solution, we propose a method to generate synthetic power
consumption samples that faithfully imitate the originals, but are detached
from the clients and their identities. Our method is based on Generative
Adversarial Networks (GANs). Our contribution is twofold. First, we focus on
the quality of the generated data, which is not a trivial task as no standard
evaluation methods are available. Then, we study the privacy guarantees
provided to members of the training set of our neural network. As a minimum
requirement for privacy, we demand our neural network to be robust to
membership inference attacks, as these provide a gateway for further attacks in
addition to presenting a privacy threat of their own. We find that a trade-off
must be made between the privacy guarantees and the performance of the
algorithm.
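The minimum privacy requirement stated above — robustness to membership inference — can be illustrated with a simple loss-threshold attack. The sketch below is not the paper's evaluation procedure; it uses hypothetical, randomly generated loss values and only standard-library Python to show how an attacker who observes per-sample losses could distinguish training-set members from non-members of an overfit model:

```python
import random
import statistics

def threshold_mia(member_losses, nonmember_losses):
    """Loss-threshold membership inference attack: predict 'member' when a
    sample's loss falls below a threshold, here the midpoint of the two
    mean losses. Returns the attack advantage TPR - FPR."""
    thr = (statistics.mean(member_losses) + statistics.mean(nonmember_losses)) / 2
    tp = sum(loss < thr for loss in member_losses)     # members correctly flagged
    fp = sum(loss < thr for loss in nonmember_losses)  # non-members wrongly flagged
    tpr = tp / len(member_losses)
    fpr = fp / len(nonmember_losses)
    return tpr - fpr  # ~0 means the attacker does no better than guessing

random.seed(0)
# Hypothetical losses: an overfit model assigns lower loss to training members.
members = [random.gauss(0.2, 0.05) for _ in range(1000)]
nonmembers = [random.gauss(0.5, 0.05) for _ in range(1000)]
advantage = threshold_mia(members, nonmembers)
```

A privacy-preserving generator should drive this advantage toward zero, typically at some cost in sample fidelity — the privacy/performance compromise the abstract describes.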
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z) - Defining 'Good': Evaluation Framework for Synthetic Smart Meter Data [14.779917834583577]
We show that standard privacy attack methods are inadequate for assessing privacy risks of smart meter datasets.
We propose an improved method by injecting training data with implausible outliers, then launching privacy attacks directly on these outliers.
arXiv Detail & Related papers (2024-07-16T14:41:27Z) - TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised
Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it is challenging to apply when sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z) - Privacy for Free: How does Dataset Condensation Help Privacy? [21.418263507735684]
We identify dataset condensation (DC) as a better alternative to traditional data generators for private data generation.
We empirically validate the visual privacy and membership privacy of DC-synthesized data by launching both the loss-based and the state-of-the-art likelihood-based membership inference attacks.
arXiv Detail & Related papers (2022-06-01T05:39:57Z) - Privacy-Utility Trades in Crowdsourced Signal Map Obfuscation [20.58763760239068]
Crowdsourced cellular signal strength measurements can be used to generate signal maps that improve network performance.
We consider obfuscating such data before the data leaves the mobile device.
Our evaluation results, based on multiple, diverse, real-world signal map datasets, demonstrate the feasibility of concurrently achieving adequate privacy and utility.
arXiv Detail & Related papers (2022-01-13T03:46:22Z) - Deep Directed Information-Based Learning for Privacy-Preserving Smart
Meter Data Release [30.409342804445306]
We study the problem in the context of time series data and smart meters (SMs) power consumption measurements.
We introduce the Directed Information (DI) as a more meaningful measure of privacy in the considered setting.
Our empirical studies on real-world data sets from SMs measurements in the worst-case scenario show the existing trade-offs between privacy and utility.
arXiv Detail & Related papers (2020-11-20T13:41:11Z) - Privacy Adversarial Network: Representation Learning for Mobile Data
Privacy [33.75500773909694]
A growing number of cloud-based intelligent services for mobile users require user data to be sent to the provider.
Prior works either obfuscate the data, e.g. add noise and remove identity information, or send representations extracted from the data, e.g. anonymized features.
This work departs from prior works in methodology: we leverage adversarial learning to strike a better balance between privacy and utility.
arXiv Detail & Related papers (2020-06-08T09:42:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.