A Secure Federated Learning Framework for Residential Short Term Load
Forecasting
- URL: http://arxiv.org/abs/2209.14547v2
- Date: Tue, 28 Mar 2023 05:09:56 GMT
- Title: A Secure Federated Learning Framework for Residential Short Term Load
Forecasting
- Authors: Muhammad Akbar Husnoo, Adnan Anwar, Nasser Hosseinzadeh, Shama Naz
Islam, Abdun Naser Mahmood and Robin Doss
- Abstract summary: Federated Learning (FL) is a privacy-preserving machine learning alternative that enables collaborative training of a short-term load forecasting model without exposing private raw data.
Standard FL is still vulnerable to an intractable cyber threat known as the Byzantine attack, carried out by faulty and/or malicious clients.
We develop a state-of-the-art differentially private, secured FL-based framework that ensures the privacy of the individual smart meter's data while protecting the security of FL models and architecture.
- Score: 1.1254693939127909
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Smart meter measurements, though critical for accurate demand forecasting,
raise several concerns, among them consumer privacy and the risk of data
breaches. Recent literature has explored Federated Learning (FL) as a
promising privacy-preserving machine learning alternative which enables
collaborative learning of a model without exposing private raw data for short
term load forecasting. Despite its virtue, standard FL is still vulnerable to
an intractable cyber threat known as the Byzantine attack, carried out by faulty
and/or malicious clients. Therefore, to improve the robustness of federated
short-term load forecasting against Byzantine threats, we develop a
state-of-the-art differentially private secured FL-based framework that ensures
the privacy of the individual smart meter's data while protecting the security of
FL models and architecture. Our proposed framework leverages the idea of
gradient quantization through the Sign Stochastic Gradient Descent (SignSGD)
algorithm, where the clients only transmit the 'sign' of the gradient to the
control centre after local model training. As we highlight through our
experiments involving benchmark neural networks with a set of Byzantine attack
models, our proposed approach mitigates such threats quite effectively and thus
outperforms conventional Fed-SGD models.
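
To make the exchange concrete, here is a minimal, illustrative sketch of one SignSGD round with majority-vote aggregation at the control centre. The function names, the toy objective, and the choice of majority vote as the server rule are assumptions for illustration; the paper's differential-privacy mechanism and exact protocol are not reproduced here.

import numpy as np

def local_update(weights, grad_fn):
    """Client side: run local training, then keep only the gradient's sign."""
    return np.sign(grad_fn(weights))  # each entry is -1, 0, or +1

def aggregate_signs(sign_updates):
    """Control centre: element-wise majority vote over the received signs."""
    return np.sign(np.sum(sign_updates, axis=0))

def signsgd_round(weights, client_grad_fns, lr=0.01):
    """One federated round: collect sign vectors, vote, take a signed step."""
    signs = [local_update(weights, g) for g in client_grad_fns]
    return weights - lr * aggregate_signs(signs)

# Toy run: three honest clients and one Byzantine client that flips signs.
honest = lambda w: 2.0 * w       # gradient of ||w||^2
byzantine = lambda w: -2.0 * w   # sign-inverting adversary
w = np.ones(4)
for _ in range(100):
    w = signsgd_round(w, [honest, honest, honest, byzantine])
print(w)  # pushed towards zero: the honest majority outvotes the adversary

Because only sign vectors are exchanged, a single sign-flipping client cannot steer the update as long as honest clients hold the element-wise majority, which is the intuition behind the robustness claims above.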
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z)
- FedCAP: Robust Federated Learning via Customized Aggregation and Personalization [13.17735010891312]
Federated learning (FL) has been applied to various privacy-preserving scenarios.
We propose FedCAP, a robust FL framework against both data heterogeneity and Byzantine attacks.
We show that FedCAP performs well in several non-IID settings and shows strong robustness under a series of poisoning attacks.
arXiv Detail & Related papers (2024-10-16T23:01:22Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Privacy-Preserving Distributed Learning for Residential Short-Term Load Forecasting [11.185176107646956]
Power system load data can inadvertently reveal the daily routines of residential users, posing a risk to their property security.
We introduce a Markovian Switching-based distributed training framework, the convergence of which is substantiated through rigorous theoretical analysis.
Case studies employing real-world power system load data validate the efficacy of our proposed algorithm.
arXiv Detail & Related papers (2024-02-02T16:39:08Z)
- Privacy-Preserving Load Forecasting via Personalized Model Obfuscation [4.420464017266168]
This paper addresses the performance challenges of short-term load forecasting models trained with federated learning on heterogeneous data.
Our proposed algorithm, Privacy Preserving Federated Learning (PPFL), incorporates personalization layers for localized training at each smart meter (see the sketch after this list).
arXiv Detail & Related papers (2023-11-21T03:03:10Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated Learning is designed to address privacy concerns in learning models.
This new distributed paradigm safeguards data privacy but changes the attack surface, since the server cannot access local datasets.
arXiv Detail & Related papers (2022-12-05T01:52:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU achieves a prediction accuracy of 90.96%, competitive with advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
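
For the personalization-layer idea in the PPFL entry above, the following is a minimal sketch of splitting a model into globally aggregated and locally retained parameters. The parameter names ("shared.encoder", "personal.head") and the FedAvg-style averaging are hypothetical, illustrative assumptions rather than the paper's exact method.

import numpy as np

def fedavg_shared(client_models):
    """Average only the shared layers across clients (FedAvg-style)."""
    shared_keys = [k for k in client_models[0] if k.startswith("shared.")]
    return {k: np.mean([m[k] for m in client_models], axis=0)
            for k in shared_keys}

def apply_global(client_model, global_shared):
    """Overwrite a client's shared layers; personal layers stay local."""
    client_model.update(global_shared)
    return client_model

# Toy run: two smart meters, each holding shared and personal parameters.
clients = [
    {"shared.encoder": np.full(3, 1.0), "personal.head": np.full(2, 1.0)},
    {"shared.encoder": np.full(3, 3.0), "personal.head": np.full(2, 3.0)},
]
global_shared = fedavg_shared(clients)   # shared.encoder -> [2. 2. 2.]
clients = [apply_global(m, global_shared) for m in clients]
# Both clients now hold shared.encoder == [2. 2. 2.]; each keeps its own head.

Keeping the output head local lets each smart meter adapt to its own consumption pattern while still benefiting from a jointly trained representation, which is the usual motivation for personalization layers in heterogeneous-data FL.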