Realistic Differentially-Private Transmission Power Flow Data Release
- URL: http://arxiv.org/abs/2103.14036v1
- Date: Thu, 25 Mar 2021 04:04:12 GMT
- Title: Realistic Differentially-Private Transmission Power Flow Data Release
- Authors: David Smith, Frederik Geth, Elliott Vercoe, Andrew Feutrill, Ming
Ding, Jonathan Chan, James Foster and Thierry Rakotoarivelo
- Abstract summary: We propose a fundamentally different post-processing method, using public information of grid losses rather than power dispatch.
We protect more sensitive parameters, i.e., branch shunt susceptance in addition to series impedance.
Our approach addresses a more feasible and realistic scenario, and provides higher than state-of-the-art privacy guarantees.
- Score: 12.425053979364362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For the modeling, design and planning of future energy transmission networks,
it is vital for stakeholders to access faithful and useful power flow data,
while provably maintaining the privacy of business confidentiality of service
providers. This critical challenge has recently been somewhat addressed in [1].
This paper significantly extends this existing work. First, we reduce the
potential leakage information by proposing a fundamentally different
post-processing method, using public information of grid losses rather than
power dispatch, which achieves a higher level of privacy protection. Second, we
protect more sensitive parameters, i.e., branch shunt susceptance in addition
to series impedance (complete pi-model). This protects power flow data for the
transmission high-voltage networks, using differentially private
transformations that maintain the optimal power flow consistent with, and
faithful to, expected model behaviour. Third, we tested our approach at a
larger scale than previous work, using the PGLib-OPF test cases [10]. This
resulted in the successful obfuscation of systems with up to 4700 buses, which
can still be solved with faithful parameters and good utility to data
analysts. Our approach addresses a more feasible and realistic scenario, and
provides higher than state-of-the-art privacy guarantees, while maintaining
solvability, fidelity and feasibility of the system.
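As a rough illustration of the kind of transformation the abstract describes, the sketch below applies a Laplace mechanism to the pi-model parameters of a single branch (series resistance and reactance plus shunt susceptance). The parameter names, sensitivity bound, privacy budget, and the crude clipping used to keep values physical are all assumptions made for this example; the paper's actual post-processing step, which uses public information on grid losses to keep the optimal power flow solvable and faithful, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb(value, sensitivity, epsilon, rng):
    """Return value plus Laplace noise calibrated for epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical branch with complete pi-model parameters (per-unit values).
branch = {"r_series": 0.0192, "x_series": 0.0575, "b_shunt": 0.0528}

epsilon = 1.0        # assumed privacy budget per released parameter
sensitivity = 0.01   # assumed bound on a single record's influence on each parameter

private_branch = {
    # Crude post-processing: clip to a small positive floor so values stay physical.
    name: max(laplace_perturb(value, sensitivity, epsilon, rng), 1e-6)
    for name, value in branch.items()
}
print(private_branch)
```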
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference (a generic sketch of this noise-before-transmission pattern appears after this list).
arXiv Detail & Related papers (2024-10-25T18:11:02Z) - Decentralized Federated Anomaly Detection in Smart Grids: A P2P Gossip Approach [0.44328715570014865]
This paper introduces a novel decentralized federated anomaly detection scheme based on two main gossip protocols, namely Random Walk and Epidemic.
Our approach yields a notable 35% improvement in training time compared to conventional Federated Learning.
arXiv Detail & Related papers (2024-07-20T10:45:06Z) - Communication and Energy Efficient Wireless Federated Learning with
Intrinsic Privacy [16.305837225117603]
Federated Learning (FL) is a collaborative learning framework that enables edge devices to collaboratively learn a global model while keeping raw data locally.
We propose a novel wireless FL scheme called private federated edge learning with sparsification (PFELS) to provide a client-level DP guarantee with intrinsic channel noise.
arXiv Detail & Related papers (2023-04-15T03:04:11Z) - FeDiSa: A Semi-asynchronous Federated Learning Framework for Power
System Fault and Cyberattack Discrimination [1.0621485365427565]
This paper proposes FeDiSa, a novel Semi-asynchronous Federated learning framework for power system faults and cyberattack Discrimination.
Experiments on the proposed framework using publicly available industrial control systems datasets reveal superior attack detection accuracy whilst preserving data confidentiality and minimizing the adverse effects of communication latency and stragglers.
arXiv Detail & Related papers (2023-03-28T13:34:38Z) - Private and Reliable Neural Network Inference [6.7386666699567845]
We present the first system which enables privacy-preserving inference on reliable NNs.
We employ these building blocks to enable privacy-preserving NN inference with robustness and fairness guarantees in a system called Phoenix.
arXiv Detail & Related papers (2022-10-27T16:58:45Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z) - Data-Driven Stochastic AC-OPF using Gaussian Processes [54.94701604030199]
Integrating a significant amount of renewables into a power grid is probably the most effective way to reduce carbon emissions from power grids and slow down climate change.
This paper presents an alternative data-driven approach based on the AC power flow equations that can incorporate uncertainty inputs.
The GP approach learns a simple yet non-constrained data-driven model that closes this gap to the AC power flow equations.
arXiv Detail & Related papers (2022-07-21T23:02:35Z) - FedREP: Towards Horizontal Federated Load Forecasting for Retail Energy
Providers [1.1254693939127909]
We propose a novel horizontal privacy-preserving federated learning framework for energy load forecasting, namely FedREP.
We consider a federated learning system consisting of a control centre and multiple retailers, enabling multiple REPs to build a common, robust machine learning model without sharing data.
For forecasting, we use a state-of-the-art Long Short-Term Memory (LSTM) neural network due to its ability to learn long-term sequences of observations.
arXiv Detail & Related papers (2022-03-01T04:16:19Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases, and we provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
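Several of the entries above share a common pattern: an intermediate quantity (extracted features, gradients, or model updates) is perturbed with calibrated noise before it leaves the device. The sketch below is a generic illustration of that pattern, not a reproduction of any specific paper's mechanism; the clipping norm and noise multiplier are assumed values chosen only for demonstration.

```python
import numpy as np

def privatize_features(features, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip a feature vector to a fixed L2 norm and add Gaussian noise before transmission."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(features)
    clipped = features * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise

# Example: a hypothetical 8-dimensional feature vector produced by an edge device.
features = np.random.default_rng(1).normal(size=8)
print(privatize_features(features))
```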