Differentially Private Communication of Measurement Anomalies in the Smart Grid
- URL: http://arxiv.org/abs/2403.02324v2
- Date: Fri, 22 Mar 2024 21:04:25 GMT
- Title: Differentially Private Communication of Measurement Anomalies in the Smart Grid
- Authors: Nikhil Ravi, Anna Scaglione, Sean Peisert, Parth Pradhan
- Abstract summary: We present a framework based on differential privacy (DP) for querying electric power measurements to detect system anomalies or bad data.
Our DP approach conceals consumption and system matrix data, while simultaneously enabling an untrusted third party to test hypotheses of anomalies.
We propose a novel DP chi-square noise mechanism that ensures the test does not reveal private information about power injections or the system matrix.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a framework based on differential privacy (DP) for querying electric power measurements to detect system anomalies or bad data. Our DP approach conceals consumption and system matrix data, while simultaneously enabling an untrusted third party to test hypotheses of anomalies, such as the presence of bad data, by releasing a randomized sufficient statistic for hypothesis-testing. We consider a measurement model corrupted by Gaussian noise and a sparse noise vector representing the attack, and we observe that the optimal test statistic is a chi-square random variable. To detect possible attacks, we propose a novel DP chi-square noise mechanism that ensures the test does not reveal private information about power injections or the system matrix. The proposed framework provides a robust solution for detecting bad data while preserving the privacy of sensitive power system data.
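The abstract describes releasing a randomized chi-square statistic so that an untrusted analyst can test for bad data without learning the private measurements. A minimal sketch of that pipeline, with an illustrative random system matrix and a generic Gaussian perturbation standing in for the paper's specialized chi-square noise mechanism (the dimensions, sensitivity, and privacy parameters are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear measurement model z = H @ x + e with Gaussian noise,
# standing in for the power-system setup; H, x, and sigma are illustrative,
# not the actual system matrix or power injections.
m, n, sigma = 50, 10, 0.1
H = rng.standard_normal((m, n))
x = rng.standard_normal(n)
z = H @ x + sigma * rng.standard_normal(m)

# Least-squares residual; under the no-anomaly hypothesis the normalized
# residual sum of squares follows a chi-square law with m - n degrees of freedom.
x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
r = z - H @ x_hat
T = float(r @ r) / sigma**2

# DP release (sketch): the paper's chi-square noise mechanism is specialized;
# as a generic stand-in we perturb the statistic with Gaussian noise calibrated
# for (epsilon, delta)-DP under an assumed sensitivity of 1.
epsilon, delta, sensitivity = 1.0, 1e-5, 1.0
noise_scale = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
T_private = T + rng.normal(0.0, noise_scale)

# 99th-percentile chi-square threshold via the Wilson-Hilferty approximation
# (avoids a SciPy dependency); z_99 is the standard-normal 0.99 quantile.
df, z_99 = m - n, 2.3263
threshold = df * (1.0 - 2.0 / (9.0 * df) + z_99 * np.sqrt(2.0 / (9.0 * df))) ** 3
print("anomaly suspected:", T_private > threshold)
```

The analyst only ever sees `T_private`, so the raw residuals, injections, and system matrix stay concealed while the hypothesis test remains possible.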
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection [57.571451139201855]
Prediction Shift Backdoor Detection (PSBD) is a novel method for identifying backdoor samples in deep neural networks.
PSBD is motivated by an intriguing Prediction Shift (PS) phenomenon, where poisoned models' predictions on clean data often shift away from true labels towards certain other labels.
PSBD identifies backdoor training samples by computing the Prediction Shift Uncertainty (PSU), the variance in probability values when dropout layers are toggled on and off during model inference.
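The PSU computation described above can be sketched as follows; the toy two-layer network and its random weights are hypothetical stand-ins for a trained model, and the dropout rate and number of passes are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-layer network with inference-time dropout; random weights stand in
# for a trained (possibly poisoned) model.
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((8, 3))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def forward(x, dropout_p=0.0):
    h = np.maximum(W1.T @ x, 0.0)            # ReLU hidden layer
    if dropout_p > 0.0:
        mask = rng.random(h.shape) >= dropout_p
        h = h * mask / (1.0 - dropout_p)     # inverted dropout at inference
    return softmax(W2.T @ h)

def psu(x, n_passes=50, dropout_p=0.5):
    """Prediction Shift Uncertainty: variance of the top-class probability
    across repeated dropout-enabled forward passes."""
    top = int(np.argmax(forward(x)))         # predicted class, dropout off
    probs = [forward(x, dropout_p)[top] for _ in range(n_passes)]
    return float(np.var(probs))

x = rng.standard_normal(16)
score = psu(x)
print("PSU:", score)
```

Samples whose PSU is unusually high would then be flagged as suspected backdoor training samples.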
arXiv Detail & Related papers (2024-06-09T15:31:00Z)
- Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) can measure privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset.
DP has been prominent in safeguarding machine-learning datasets at industry giants like Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
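A minimal sketch of the per-instance idea for a bounded mean query; the sensitivity formula and epsilon accounting below are illustrative of how pDP tracks each record's loss, not the paper's game-theoretic noise optimization:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy bounded dataset in [0, 10]; the query is the dataset mean.
data = rng.uniform(0.0, 10.0, size=100)
n, value_range = len(data), 10.0

# Replacing record i with any value in [0, 10] shifts the mean by at most
# max(x_i, 10 - x_i) / n: this per-instance sensitivity is what pDP tracks.
per_instance_sensitivity = np.maximum(data, value_range - data) / n
global_sensitivity = value_range / n  # classical worst-case bound

# Under Laplace noise of fixed scale b, the pDP loss for record i is
# sensitivity_i / b, so "easy" records incur a smaller epsilon than the
# worst case; noise can then be optimized against these per-instance losses.
b = global_sensitivity / 1.0  # scale targeting a global epsilon of 1.0
per_instance_epsilon = per_instance_sensitivity / b

release = data.mean() + rng.laplace(0.0, b)
print("max pDP epsilon:", per_instance_epsilon.max())
```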
arXiv Detail & Related papers (2024-04-24T06:51:16Z)
- Conditional Density Estimations from Privacy-Protected Data [0.0]
We propose simulation-based inference methods from privacy-protected datasets.
We illustrate our methods on discrete time-series data under an infectious disease model and with ordinary linear regression models.
arXiv Detail & Related papers (2023-10-19T14:34:17Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which exploits the different responses that normal and adversarial samples exhibit to universal adversarial perturbations (UAPs).
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
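A batch-mode sketch of the kind of kernelized dependence measure such tests build on, using the classical biased HSIC estimator (the sequential test construction itself is not reproduced here; the toy data and bandwidth are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
x = rng.standard_normal(n)
y = x + 0.3 * rng.standard_normal(n)  # dependent toy pair

def rbf_gram(v, bandwidth=1.0):
    # Gaussian (RBF) kernel Gram matrix for a 1-D sample.
    d = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d / (2.0 * bandwidth**2))

# Hilbert-Schmidt Independence Criterion: trace(K H L H) / (n - 1)^2,
# where H centers the Gram matrices; larger values indicate dependence.
K, L = rbf_gram(x), rbf_gram(y)
H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
hsic = float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
print("HSIC estimate:", hsic)
```

A sequential version would update such a statistic as samples arrive and stop once the accumulated evidence crosses a level-controlled threshold.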
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- The power of private likelihood-ratio tests for goodness-of-fit in frequency tables [1.713291434132985]
We consider privacy-protecting tests for goodness-of-fit in frequency tables.
We show the importance of taking the perturbation into account to avoid a loss in the statistical significance of the test.
Our work presents the first rigorous treatment of privacy-protecting LR tests for goodness-of-fit in frequency tables.
arXiv Detail & Related papers (2021-09-20T15:30:42Z)
- Bad-Data Sequence Detection for Power System State Estimation via ICA-GAN [5.990174495635325]
A deep learning approach to the detection of bad-data sequences in power systems is proposed.
The bad-data model is nonparametric and includes arbitrary natural and adversarial data anomalies.
The probability distribution of the data under anomaly-free system operation is likewise nonparametric and unknown, but historical training samples are available.
arXiv Detail & Related papers (2020-12-09T16:53:56Z)
- Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of Multimodal Data with Adversarial Defense [0.3867363075280543]
In this paper, an ensemble detection mechanism is proposed that estimates the degree of abnormality by analyzing real-time image and IMU (Inertial Measurement Unit) sensor data.
The proposed method performs satisfactorily on the IEEE SP Cup-2020 dataset with an accuracy of 97.8%.
arXiv Detail & Related papers (2020-07-17T20:03:02Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied on sensitive or private training examples, such as medical or financial records, it is still probable to divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.