Privacy-Preserving Load Forecasting via Personalized Model Obfuscation
- URL: http://arxiv.org/abs/2312.00036v1
- Date: Tue, 21 Nov 2023 03:03:10 GMT
- Title: Privacy-Preserving Load Forecasting via Personalized Model Obfuscation
- Authors: Shourya Bose, Yu Zhang, Kibaek Kim
- Abstract summary: This paper addresses the performance challenges of short-term load forecasting models trained with federated learning on heterogeneous data.
Our proposed algorithm, Privacy Preserving Federated Learning (PPFL), incorporates personalization layers for localized training at each smart meter.
- Score: 4.420464017266168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread adoption of smart meters provides access to detailed and
localized load consumption data, suitable for training building-level load
forecasting models. To mitigate privacy concerns stemming from model-induced
data leakage, federated learning (FL) has been proposed. This paper addresses
the performance challenges of short-term load forecasting models trained with
FL on heterogeneous data, emphasizing privacy preservation through model
obfuscation. Our proposed algorithm, Privacy Preserving Federated Learning
(PPFL), incorporates personalization layers for localized training at each
smart meter. Additionally, we employ a differentially private mechanism to
safeguard against data leakage from shared layers. Simulations on the NREL
ComStock dataset corroborate the effectiveness of our approach.
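The approach above combines two ingredients: personalization layers that never leave each smart meter, and a differentially private perturbation of the layers shared with the server. The sketch below is a minimal, hypothetical PyTorch illustration of that pattern; the architecture, the layer names (`encoder`/`head`), the clipping rule, and the noise scale are assumptions made for illustration and do not reproduce the authors' PPFL implementation.

```python
# Minimal sketch (not the paper's code): federated load forecasting with
# personalization layers and Gaussian noise on the shared-layer update.
import torch
import torch.nn as nn

SHARED_PREFIX = "encoder"  # assumption: these layers are federated/shared


class LoadForecaster(nn.Module):
    """Toy forecaster: shared encoder + per-client personalization head."""

    def __init__(self, n_lags: int = 24, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_lags, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)  # personalization layer, stays local

    def forward(self, x):
        return self.head(self.encoder(x))


def local_round(model, x, y, lr=1e-2, clip=1.0, noise_std=0.1):
    """Train locally, then return a clipped, noised update of the shared
    layers only; the personalization head is never communicated."""
    start = {k: v.detach().clone() for k, v in model.state_dict().items()
             if k.startswith(SHARED_PREFIX)}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.MSELoss()(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

    update = {}
    for k, v0 in start.items():
        delta = model.state_dict()[k].detach() - v0
        # Clip the update norm, then add Gaussian noise (DP-style obfuscation).
        delta = delta * torch.clamp(clip / (delta.norm() + 1e-12), max=1.0)
        update[k] = delta + noise_std * torch.randn_like(delta)
    return update


def aggregate(shared_params, client_updates):
    """Server side: FedAvg-style mean of the noised shared-layer updates."""
    for k in shared_params:
        shared_params[k] += torch.stack([u[k] for u in client_updates]).mean(0)
    return shared_params
```

In the paper's setting each client would correspond to a building-level smart meter (as in the NREL ComStock simulations), and only the clipped, noised shared-layer update is ever communicated.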
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Addressing Heterogeneity in Federated Load Forecasting with Personalization Layers [3.933147844455233]
We propose the use of personalization layers for load forecasting in a general framework called PL-FL.
We show that PL-FL outperforms FL and purely local training, while requiring lower communication bandwidth than FL.
arXiv Detail & Related papers (2024-04-01T22:53:09Z)
- Conditional Density Estimations from Privacy-Protected Data [0.0]
We propose simulation-based inference methods from privacy-protected datasets.
We illustrate our methods on discrete time-series data under an infectious disease model and with ordinary linear regression models.
arXiv Detail & Related papers (2023-10-19T14:34:17Z)
- PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning [16.344719695572586]
We propose a novel scheme to inject personalized prior knowledge into a global model in each client.
At the heart of our proposed approach is a framework, PFL with Bregman Divergence (pFedBreD).
Our method reaches the state-of-the-art performances on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks.
arXiv Detail & Related papers (2023-10-13T15:21:25Z)
- Federated Short-Term Load Forecasting with Personalization Layers for Heterogeneous Clients [0.7252027234425334]
We propose a personalized FL algorithm (PL-FL) enabling FL to handle personalization layers.
PL-FL is implemented using the Argonne Privacy-Preserving Federated Learning package.
We test the forecast performance of models trained on the NREL ComStock dataset.
arXiv Detail & Related papers (2023-09-22T21:57:52Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
The resulting method, FedGMM, possesses the additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models.
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- DER Forecast using Privacy Preserving Federated Learning [0.0]
A distributed machine learning approach, Federated Learning, is proposed to carry out DER forecasting using a network of IoT nodes.
We consider a simulation study which includes 1,000 DERs and show that our method leads to accurate predictions while preserving consumer privacy.
arXiv Detail & Related papers (2021-07-07T14:25:43Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU achieves a prediction accuracy of 90.96%, which is higher than that of advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
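The abstract states that a differentially private mechanism protects the shared layers, but this listing does not give its parameters. As a reference point only, the classical Gaussian mechanism is one standard way such noise is calibrated from a privacy budget and a clipping bound; the formula and numbers below are textbook values, not settings reported in the paper.

```python
# Classical Gaussian-mechanism calibration (Dwork & Roth): one common way
# to pick the noise scale for an (epsilon, delta)-DP release of a clipped
# model update. Illustrative only; not taken from the paper.
import math


def gaussian_noise_std(epsilon: float, delta: float, sensitivity: float) -> float:
    """Return sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon,
    which guarantees (epsilon, delta)-DP for 0 < epsilon < 1."""
    if not 0 < epsilon < 1:
        raise ValueError("this standard bound assumes 0 < epsilon < 1")
    return math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon


# Example: shared-layer updates clipped to L2 norm 1.0 (sensitivity = 1.0),
# targeting (0.5, 1e-5)-DP for a single communication round.
print(gaussian_noise_std(epsilon=0.5, delta=1e-5, sensitivity=1.0))
```

In a multi-round FL setting, a composition accountant (e.g., Rényi DP) would normally be used to track the cumulative privacy loss across rounds.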