Asynchronous Decentralized Federated Learning for Collaborative Fault
Diagnosis of PV Stations
- URL: http://arxiv.org/abs/2202.13606v1
- Date: Mon, 28 Feb 2022 08:26:48 GMT
- Title: Asynchronous Decentralized Federated Learning for Collaborative Fault
Diagnosis of PV Stations
- Authors: Qi Liu (1, 2 and 3), Bo Yang (1, 2 and 3), Zhaojian Wang (1, 2 and 3),
Dafeng Zhu (1, 2 and 3), Xinyi Wang (1, 2 and 3), Kai Ma (4), Xinping Guan
(1, 2 and 3) ((1) Department of Automation, Shanghai Jiao Tong University,
Shanghai, China, (2) Key Laboratory of System Control and Information
Processing, Ministry of Education of China, Shanghai, China, (3) Shanghai
Engineering Research Center of Intelligent Control and Management, Shanghai,
China, (4) School of Electrical Engineering, Yanshan University, Qinhuangdao,
China.)
- Abstract summary: A novel asynchronous decentralized federated learning (ADFL) framework is proposed to train a collaborative fault diagnosis model.
The global model is aggregated in a distributed manner to avoid central node failure.
Both the experiments and numerical simulations are carried out to verify the effectiveness of the proposed method.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the different losses caused by various photovoltaic (PV) array faults,
accurate diagnosis of fault types is becoming increasingly important. Compared
with a single station, multiple PV stations together collect sufficient fault
samples, but their data cannot be shared directly due to potential conflicts of
interest. Therefore, federated learning can be exploited to train a
collaborative fault diagnosis model. However, modeling efficiency is seriously
affected by the model update mechanism, since each PV station has a different
computing capability and amount of data. Moreover, for the safe and stable
operation of the PV system, the robustness of collaborative modeling must be
guaranteed rather than entrusted to a single central server. To address these
challenges, a novel asynchronous decentralized federated learning (ADFL)
framework is proposed. Each PV station not only trains its local model but also
participates in collaborative fault diagnosis by exchanging model parameters,
improving generalization without losing accuracy. The global model is
aggregated in a distributed manner to avoid central node failure. A designed
asynchronous update scheme greatly reduces the communication overhead and
training time. Both experiments and numerical simulations are carried out to
verify the effectiveness of the proposed method.
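The decentralized aggregation and asynchronous update scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Station` class, the staleness-discounted weighting, and the gossip-style exchange are all assumptions introduced here to make the idea concrete.

```python
import numpy as np

class Station:
    """One PV station: trains locally and aggregates peers' models itself,
    so no central server is needed (illustrative sketch, not the paper's
    exact scheme)."""

    def __init__(self, dim, rng):
        self.weights = rng.normal(size=dim)  # local model parameters
        self.version = 0                     # local update counter

    def local_update(self, grad, lr=0.1):
        # Asynchronous local training step: stations update at their own pace.
        self.weights -= lr * grad
        self.version += 1

    def aggregate(self, neighbor_models):
        # Decentralized aggregation: average the copies received from peers,
        # discounting stale ones (assumed weighting: 1 / (1 + staleness)).
        total = np.zeros_like(self.weights)
        weight_sum = 0.0
        for w, version in neighbor_models + [(self.weights, self.version)]:
            staleness = max(self.version - version, 0)
            alpha = 1.0 / (1.0 + staleness)  # stale models count less
            total += alpha * w
            weight_sum += alpha
        self.weights = total / weight_sum

rng = np.random.default_rng(0)
stations = [Station(dim=4, rng=rng) for _ in range(3)]

# Asynchronous round: station 0 happens to update twice, the others once;
# then each station aggregates whatever peer parameters it has received.
stations[0].local_update(rng.normal(size=4))
for s in stations:
    s.local_update(rng.normal(size=4))
for i, s in enumerate(stations):
    peers = [(o.weights.copy(), o.version)
             for j, o in enumerate(stations) if j != i]
    s.aggregate(peers)
```

Because every station runs `aggregate` on its own copy of the peer models, there is no single aggregation node whose failure would halt training, which matches the motivation stated in the abstract.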
Related papers
- Collaborative Value Function Estimation Under Model Mismatch: A Federated Temporal Difference Analysis [55.13545823385091]
Federated reinforcement learning (FedRL) enables collaborative learning while preserving data privacy by preventing direct data exchange between agents.
In real-world applications, each agent may experience slightly different transition dynamics, leading to inherent model mismatches.
We show that even moderate levels of information sharing can significantly mitigate environment-specific errors.
arXiv Detail & Related papers (2025-03-21T18:06:28Z)
- Multivariate Physics-Informed Convolutional Autoencoder for Anomaly Detection in Power Distribution Systems with High Penetration of DERs [0.0]
This paper proposes a physics-informed convolutional autoencoder (PIConvAE) model to detect cyber anomalies in power distribution systems with unbalanced configurations and high penetration of DERs.
The performance of the proposed model is evaluated on two unbalanced power distribution grids, IEEE 123-bus system and a real-world feeder in Riverside, CA.
arXiv Detail & Related papers (2024-06-05T04:28:57Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- A Distributed Computation Model Based on Federated Learning Integrates Heterogeneous models and Consortium Blockchain for Solving Time-Varying Problems [35.69540692050138]
We propose a Distributed Computation Model (DCM) based on the consortium blockchain network to improve the credibility of the overall model.
In the experiments, we verify the efficiency of DCM, where the results show that the proposed model outperforms many state-of-the-art models.
arXiv Detail & Related papers (2023-06-28T08:50:35Z)
- Causality-Based Multivariate Time Series Anomaly Detection [63.799474860969156]
We formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism to generate the multivariate data.
We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly relative to the local causal mechanism.
We evaluate our approach with both simulated and public datasets as well as a case study on real-world AIOps applications.
arXiv Detail & Related papers (2022-06-30T06:00:13Z)
- FedRAD: Federated Robust Adaptive Distillation [7.775374800382709]
Collaborative learning frameworks that aggregate model updates are vulnerable to model poisoning attacks from adversarial clients.
We propose a novel robust aggregation method, Federated Robust Adaptive Distillation (FedRAD), to detect adversaries and robustly aggregate local models.
The results show that FedRAD outperforms all other aggregators in the presence of adversaries, as well as in heterogeneous data distributions.
arXiv Detail & Related papers (2021-12-02T16:50:57Z)
- Task-agnostic Continual Learning with Hybrid Probabilistic Models [75.01205414507243]
We propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification.
The flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting.
We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST.
arXiv Detail & Related papers (2021-06-24T05:19:26Z)
- Separation of Powers in Federated Learning [5.966064140042439]
Federated Learning (FL) enables collaborative training among mutually distrusting parties.
Recent attacks have reconstructed large fractions of training data from ostensibly "sanitized" model updates.
We introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture.
arXiv Detail & Related papers (2021-05-19T21:00:44Z)
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows to share knowledge between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that a previously untrained student model, trained on its teacher's outputs, reaches F1-scores comparable to the teacher's.
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.