AMI-FML: A Privacy-Preserving Federated Machine Learning Framework for
AMI
- URL: http://arxiv.org/abs/2109.05666v1
- Date: Mon, 13 Sep 2021 01:56:48 GMT
- Title: AMI-FML: A Privacy-Preserving Federated Machine Learning Framework for
AMI
- Authors: Milan Biswal, Abu Saleh Md Tayeen, Satyajayant Misra
- Abstract summary: A key challenge in developing distributed machine learning applications for AMI is to preserve user privacy while allowing active end-user participation.
This paper proposes a privacy-preserving federated learning framework for ML applications in the AMI.
We demonstrate the proposed framework on a federated ML (FML) use-case application that improves short-term load forecasting (STLF).
- Score: 2.7393821783237184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) based smart meter data analytics is very promising for
energy management and demand-response applications in the advanced metering
infrastructure (AMI). A key challenge in developing distributed ML applications
for AMI is to preserve user privacy while allowing active end-user
participation. This paper addresses this challenge and proposes a
privacy-preserving federated learning framework for ML applications in the AMI.
We consider each smart meter as a federated edge device hosting an ML
application that exchanges information with a central aggregator or a data
concentrator, periodically. Instead of transferring the raw data sensed by the
smart meters, the ML model weights are transferred to the aggregator to
preserve privacy. The aggregator processes these parameters to devise a robust
ML model that replaces the local model at each edge device. We also discuss
strategies to enhance privacy and improve communication efficiency while
sharing the ML model parameters, suited for relatively slow network connections
in the AMI. We demonstrate the proposed framework on a federated ML (FML)
use-case application that improves short-term load forecasting (STLF). We use a
long short-term memory (LSTM) recurrent neural network (RNN) model for STLF. In
our architecture, we assume that there is an aggregator connected to a group of
smart meters. The aggregator uses the learned model gradients received from the
federated smart meters to generate an aggregate, robust RNN model which
improves the forecasting accuracy for individual and aggregated STLF. Our
results indicate that with FML, forecasting accuracy is increased while
preserving the data privacy of the end-users.
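The aggregation described in the abstract can be sketched as a FedAvg-style weighted average of the smart meters' model parameters, paired with a simple uniform quantizer standing in for the communication-efficiency strategies the paper alludes to. This is an illustrative sketch, not the authors' implementation; the function names and the 8-bit quantization scheme are assumptions.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter across clients,
    weighting each smart meter by its local sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

def quantize(weights, bits=8):
    """Uniformly quantize a flat weight list before upload, shrinking the
    payload sent over the relatively slow AMI links (illustrative scheme)."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    scale = (hi - lo) or 1.0  # avoid division by zero for constant weights
    q = [round((w - lo) / scale * levels) for w in weights]
    return q, lo, hi

def dequantize(q, lo, hi, bits=8):
    """Inverse of quantize, applied at the aggregator before averaging."""
    levels = (1 << bits) - 1
    return [v / levels * (hi - lo) + lo for v in q]
```

For example, with two meters holding 100 and 300 local samples, the second meter's LSTM weights contribute with weight 0.75 to each averaged parameter.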
Related papers
- Privacy-Preserving Hierarchical Model-Distributed Inference [4.331317259797958]
This paper focuses on designing a privacy-preserving Machine Learning (ML) inference protocol for a hierarchical setup.
Our goal is to speed up ML inference while providing privacy to both data and the ML model.
arXiv Detail & Related papers (2024-07-25T19:39:03Z) - SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z) - MLLM-DataEngine: An Iterative Refinement Approach for MLLM [62.30753425449056]
We propose a novel closed-loop system that bridges data generation, model training, and evaluation.
Within each loop, the MLLM-DataEngine first analyzes the weaknesses of the model based on the evaluation results.
For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts the ratio of different types of data.
For quality, we resort to GPT-4 to generate high-quality data with each given data type.
arXiv Detail & Related papers (2023-08-25T01:41:04Z) - Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z) - ezDPS: An Efficient and Zero-Knowledge Machine Learning Inference
Pipeline [2.0813318162800707]
We propose ezDPS, a new efficient and zero-knowledge Machine Learning inference scheme.
ezDPS is a zkML pipeline in which the data is processed in multiple stages for high accuracy.
We show that ezDPS is one to three orders of magnitude more efficient than the generic circuit-based approach on all metrics.
arXiv Detail & Related papers (2022-12-11T06:47:28Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
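The FL/SL contrast in the summary above can be made concrete by looking at what leaves a client in each framework; this toy sketch (function names and payload shapes are illustrative, not from any of the listed papers) shows that FL uploads model parameters while SL uploads cut-layer activations of the private inputs.

```python
def fl_client_payload(model_params):
    """FL: each data holder trains locally and uploads model parameters,
    never its raw data."""
    return {"type": "model", "payload": list(model_params)}

def sl_client_payload(inputs, cut_layer):
    """SL: each client uploads cut-layer activations ("smashed data") for
    its private inputs and waits for the server's response."""
    return {"type": "activations", "payload": [cut_layer(x) for x in inputs]}
```

The privacy trade-off differs accordingly: FL exposes a model trained on the data, while SL exposes per-sample intermediate activations.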
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Federated Split GANs [12.007429155505767]
We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z) - Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized
Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes taken during CE-FL and analyze its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - Genetic CFL: Optimization of Hyper-Parameters in Clustered Federated
Learning [4.710427287359642]
Federated learning (FL) is a distributed model for deep learning that integrates client-server architecture, edge computing, and real-time intelligence.
FL has the capability of revolutionizing machine learning (ML) but lacks in the practicality of implementation due to technological limitations, communication overhead, non-IID (independent and identically distributed) data, and privacy concerns.
We propose a novel hybrid algorithm, namely genetic clustered FL (Genetic CFL), that clusters edge devices based on the training hyperparameters and genetically modifies the parameters cluster-wise.
arXiv Detail & Related papers (2021-07-15T10:16:05Z) - Trust-Based Cloud Machine Learning Model Selection For Industrial IoT
and Smart City Services [5.333802479607541]
We consider the paradigm where cloud service providers collect big data from resource-constrained devices for building Machine Learning prediction models.
Our proposed solution comprises an intelligent reconfiguration that maximizes the trust level of the ML models.
Our results show that the selected model's trust level is only 0.7% to 2.53% lower than that obtained using ILP.
arXiv Detail & Related papers (2020-08-11T23:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.