Realistic Differentially-Private Transmission Power Flow Data Release
- URL: http://arxiv.org/abs/2103.14036v1
- Date: Thu, 25 Mar 2021 04:04:12 GMT
- Title: Realistic Differentially-Private Transmission Power Flow Data Release
- Authors: David Smith, Frederik Geth, Elliott Vercoe, Andrew Feutrill, Ming
Ding, Jonathan Chan, James Foster and Thierry Rakotoarivelo
- Abstract summary: We propose a fundamentally different post-processing method, using public information of grid losses rather than power dispatch.
We protect more sensitive parameters, i.e., branch shunt susceptance in addition to series impedance.
Our approach addresses a more feasible and realistic scenario, and provides higher than state-of-the-art privacy guarantees.
- Score: 12.425053979364362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For the modeling, design and planning of future energy transmission networks,
it is vital for stakeholders to access faithful and useful power flow data,
while provably maintaining the business confidentiality of service
providers. This critical challenge has recently been somewhat addressed in [1].
This paper significantly extends this existing work. First, we reduce
potential information leakage by proposing a fundamentally different
post-processing method, using public information of grid losses rather than
power dispatch, which achieves a higher level of privacy protection. Second, we
protect more sensitive parameters, i.e., branch shunt susceptance in addition
to series impedance (complete pi-model). This protects power flow data for the
transmission high-voltage networks, using differentially private
transformations that maintain the optimal power flow consistent with, and
faithful to, expected model behaviour. Third, we tested our approach at a
larger scale than previous work, using the PGLib-OPF test cases [10]. This
resulted in the successful obfuscation of up to a 4700-bus system, which can
still be solved with faithful parameters and good utility to data
analysts. Our approach addresses a more feasible and realistic scenario, and
provides higher than state-of-the-art privacy guarantees, while maintaining
solvability, fidelity and feasibility of the system.
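The abstract describes differentially private transformations of branch pi-model parameters. As a minimal illustration of that idea (not the paper's actual method), the sketch below applies the standard Laplace mechanism to hypothetical per-branch series impedance and shunt susceptance values; the privacy budget `eps`, the sensitivity `delta_f`, and all parameter values are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_obfuscate(values, epsilon, sensitivity):
    """Standard epsilon-DP Laplace mechanism: add noise with
    scale = sensitivity / epsilon to each value."""
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=len(values))

# Hypothetical per-branch pi-model parameters (per-unit values).
r = np.array([0.010, 0.020, 0.015])   # series resistance
x = np.array([0.080, 0.110, 0.095])   # series reactance
b = np.array([0.020, 0.030, 0.025])   # shunt susceptance

eps = 1.0        # assumed privacy budget
delta_f = 0.01   # assumed sensitivity of each parameter

r_priv = laplace_obfuscate(r, eps, delta_f)
x_priv = laplace_obfuscate(x, eps, delta_f)
b_priv = laplace_obfuscate(b, eps, delta_f)
```

In the paper's setting, such noised parameters would additionally be post-processed (e.g., against public grid-loss information) so that the resulting optimal power flow remains solvable and faithful; that step is omitted here.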
Related papers
- Fed-AugMix: Balancing Privacy and Utility via Data Augmentation [15.325493418326117]
Gradient leakage attacks pose a significant threat to the privacy guarantees of federated learning.
We propose a novel data augmentation-based framework designed to achieve a favorable privacy-utility trade-off.
Our framework incorporates the AugMix algorithm at the client level, enabling data augmentation with controllable severity.
arXiv Detail & Related papers (2024-12-18T13:05:55Z)
- Providing Differential Privacy for Federated Learning Over Wireless: A Cross-layer Framework [19.381425127772054]
Federated Learning (FL) is a distributed machine learning framework that inherently allows edge devices to maintain their local training data.
We propose a wireless physical layer (PHY) design for OTA-FL which improves differential privacy (DP) through a decentralized, dynamic power control.
This adaptation showcases the flexibility and effectiveness of our design across different learning algorithms while maintaining a strong emphasis on privacy.
arXiv Detail & Related papers (2024-12-05T18:27:09Z)
- Leveraging A New GAN-based Transformer with ECDH Crypto-system for Enhancing Energy Theft Detection in Smart Grid [16.031989793237152]
Split-learning is a promising machine learning technique for identifying energy theft.
Traditional split learning approaches are vulnerable to privacy leakage attacks.
We propose a novel GAN-Transformer-based split learning framework in this paper.
arXiv Detail & Related papers (2024-11-27T03:41:38Z)
- Privacy-Preserving Power Flow Analysis via Secure Multi-Party Computation [1.8006898281412764]
We show how to perform power flow analysis on cryptographically hidden prosumer data.
We analyze the security of our approach in the universal composability framework.
arXiv Detail & Related papers (2024-11-21T20:04:16Z)
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
Transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
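The correlated-perturbation idea summarized above can be illustrated with a minimal numerical sketch (all sizes, scales, and the "gradient" values are hypothetical): each user's perturbation is noisy on its own, but the perturbations are constructed so that they cancel in the over-the-air aggregate, leaving the server's sum unperturbed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, dim = 4, 3

# Draw independent Gaussian perturbations, then center them across users
# so that each coordinate sums to zero: individual updates stay noisy
# (privacy) while the aggregated sum is unchanged (accuracy).
noise = rng.normal(scale=0.5, size=(n_users, dim))
noise -= noise.mean(axis=0, keepdims=True)   # columns now sum to zero

updates = rng.normal(size=(n_users, dim))    # hypothetical local gradients
perturbed = updates + noise

# The over-the-air sum the server receives equals the true sum.
assert np.allclose(perturbed.sum(axis=0), updates.sum(axis=0))
```

This centering trick is only a simplified stand-in for the paper's scheme, which must also account for wireless channel effects and formal privacy accounting.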
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Data-Driven Stochastic AC-OPF using Gaussian Processes [54.94701604030199]
Integrating a significant amount of renewables into a power grid is probably the most effective way to reduce carbon emissions from power grids and slow down climate change.
This paper presents an alternative data-driven approach based on the AC power flow equations that can incorporate uncertainty inputs.
The Gaussian process (GP) approach learns a simple yet unconstrained data-driven model that closes the gap to the AC power flow equations.
arXiv Detail & Related papers (2022-07-21T23:02:35Z)
- FedREP: Towards Horizontal Federated Load Forecasting for Retail Energy Providers [1.1254693939127909]
We propose a novel horizontal privacy-preserving federated learning framework for energy load forecasting, namely FedREP.
We consider a federated learning system consisting of a control centre and multiple retailers by enabling multiple REPs to build a common, robust machine learning model without sharing data.
For forecasting, we use a state-of-the-art Long Short-Term Memory (LSTM) neural network due to its ability to learn long term sequences of observations.
arXiv Detail & Related papers (2022-03-01T04:16:19Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.