Enhancing Cyber Resilience of Networked Microgrids using Vertical
Federated Reinforcement Learning
- URL: http://arxiv.org/abs/2212.08973v1
- Date: Sat, 17 Dec 2022 22:56:02 GMT
- Title: Enhancing Cyber Resilience of Networked Microgrids using Vertical
Federated Reinforcement Learning
- Authors: Sayak Mukherjee, Ramij R. Hossain, Yuan Liu, Wei Du, Veronica Adetola,
Sheik M. Mohiuddin, Qiuhua Huang, Tianzhixi Yin, Ankit Singhal
- Abstract summary: We propose a novel federated reinforcement learning (Fed-RL) methodology to enhance the cyber resiliency of networked microgrids.
To circumvent data-sharing issues and concerns for proprietary privacy in multi-party-owned networked grids, we propose a novel Fed-RL algorithm to train the RL agents.
The proposed methodology is validated with numerical examples of modified IEEE 123-bus benchmark test systems.
- Score: 3.9338764026621758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel federated reinforcement learning (Fed-RL)
methodology to enhance the cyber resiliency of networked microgrids. We
formulate a resilient reinforcement learning (RL) training setup which (a)
generates episodic trajectories injecting adversarial actions at primary
control reference signals of the grid forming (GFM) inverters and (b) trains
the RL agents (or controllers) to alleviate the impact of the injected
adversaries. To circumvent data-sharing issues and concerns for proprietary
privacy in multi-party-owned networked grids, we bring in the aspects of
federated machine learning and propose a novel Fed-RL algorithm to train the RL
agents. To this end, the conventional horizontal Fed-RL approaches using
decoupled independent environments fail to capture the coupled dynamics in a
networked microgrid, which leads us to propose a multi-agent vertically
federated variation of actor-critic algorithms, namely federated soft
actor-critic (FedSAC) algorithm. We created a customized simulation setup
encapsulating microgrid dynamics in the GridLAB-D/HELICS co-simulation platform
compatible with the OpenAI Gym interface for training RL agents. Finally, the
proposed methodology is validated with numerical examples of modified IEEE
123-bus benchmark test systems consisting of three coupled microgrids.
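The vertical-federation idea in the abstract (each microgrid agent keeps its observations private while a central critic learns from the coupled, network-wide state) can be sketched roughly as follows. This is a minimal illustration only: all class and variable names are hypothetical, and plain numpy linear maps stand in for the paper's actual soft actor-critic networks.

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalAgent:
    """One microgrid's agent: owns its private observations and a local actor."""
    def __init__(self, obs_dim, act_dim, embed_dim=4):
        self.W_embed = rng.normal(size=(obs_dim, embed_dim)) * 0.1
        self.W_actor = rng.normal(size=(obs_dim, act_dim)) * 0.1

    def embed(self, obs):
        # Share only a low-dimensional embedding with the server,
        # never the raw (proprietary) measurements.
        return np.tanh(obs @ self.W_embed)

    def act(self, obs):
        # Local control action, e.g. a correction to a GFM reference signal.
        return np.tanh(obs @ self.W_actor)

class VerticalCriticServer:
    """Central critic over the concatenated embeddings of all agents,
    capturing the coupled dynamics that per-agent critics would miss."""
    def __init__(self, n_agents, embed_dim=4):
        self.w = rng.normal(size=(n_agents * embed_dim,)) * 0.1

    def value(self, embeddings):
        z = np.concatenate(embeddings)  # joint, network-wide representation
        return float(self.w @ z)

# Usage: three coupled microgrids, as in the IEEE 123-bus example.
agents = [LocalAgent(obs_dim=6, act_dim=2) for _ in range(3)]
server = VerticalCriticServer(n_agents=3)
observations = [rng.normal(size=6) for _ in agents]
actions = [a.act(o) for a, o in zip(agents, observations)]
v = server.value([a.embed(o) for a, o in zip(agents, observations)])
```

In a full FedSAC-style implementation the server's value estimate would drive the soft actor-critic updates of each local actor; here it only shows the data flow: raw observations stay local, and only embeddings cross ownership boundaries.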
Related papers
- A Multi-Agent Reinforcement Learning Testbed for Cognitive Radio Applications [0.48182159227299676]
Radio Frequency Reinforcement Learning (RFRL) will play a prominent role in the wireless communication systems of the future.
This paper provides an overview of the updated RFRL Gym environment.
arXiv Detail & Related papers (2024-10-28T20:45:52Z)
- Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning [67.95280175998792]
A novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association.
We employ inverse RL (IRL) to automatically learn reward functions without manual tuning.
We show that the proposed MA-AL method outperforms traditional RL approaches, achieving a 14.6% improvement in convergence and reward value.
arXiv Detail & Related papers (2024-09-27T13:05:02Z)
- Resilient Control of Networked Microgrids using Vertical Federated Reinforcement Learning: Designs and Real-Time Test-Bed Validations [5.394255369988441]
This paper presents a novel federated reinforcement learning (Fed-RL) approach to tackle (a) model complexities and unknown dynamical behaviors of IBR devices, (b) privacy issues regarding data sharing in multi-party-owned networked grids, and (c) the transfer of learned controls from simulation to a hardware-in-the-loop test-bed.
Experiments show that the simulator-trained RL controllers produce convincing results on the real-time test-bed setup, validating that the sim-to-real gap is minimized.
arXiv Detail & Related papers (2023-11-21T00:59:27Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) link and a UHF Wide Band radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents coordinate over a wireless network, are a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- Federated Ensemble Model-based Reinforcement Learning in Edge Computing [21.840086997141498]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm.
We propose a novel FRL algorithm that effectively incorporates model-based RL and ensemble knowledge distillation into FL for the first time.
Specifically, we utilise FL and knowledge distillation to create an ensemble of dynamics models for clients, and then train the policy by solely using the ensemble model without interacting with the environment.
arXiv Detail & Related papers (2021-09-12T16:19:10Z)
- Collision-Free Flocking with a Dynamic Squad of Fixed-Wing UAVs Using Deep Reinforcement Learning [2.555094847583209]
We deal with the decentralized leader-follower flocking control problem through deep reinforcement learning (DRL).
We propose a novel reinforcement learning algorithm CACER-II for training a shared control policy for all the followers.
As a result, the variable-length system state can be encoded into a fixed-length embedding vector, which makes the learned DRL policies independent of the number and order of followers.
arXiv Detail & Related papers (2021-01-20T11:23:35Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.