Active Distribution System Coordinated Control Method via Artificial
Intelligence
- URL: http://arxiv.org/abs/2207.14642v1
- Date: Tue, 12 Jul 2022 13:46:38 GMT
- Title: Active Distribution System Coordinated Control Method via Artificial
Intelligence
- Authors: Matthew Lau, Kayla Thames and Sakis Meliopoulos
- Abstract summary: It is necessary to control the system to provide power reliably and securely at normal voltage and frequency.
We suggest that neural networks with self-attention mechanisms have the potential to aid in the optimization of the system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The increasing deployment of end-use power resources in distribution
systems has created active distribution systems. Uncontrolled active
distribution systems exhibit wide variations in voltage and loading throughout
the day, as some of these resources operate under maximum power tracking
control of highly variable wind and solar irradiation while others exhibit
random variations and/or dependency on weather conditions. It is necessary to
control the system so that it provides power reliably and securely at normal
voltage and frequency. Classical optimization approaches to this goal suffer
from the dimensionality of the problem and the need for a global optimization
approach to coordinate a huge number of small resources.
Artificial Intelligence (AI) methods offer an alternative that can provide a
practical approach to this problem. We suggest that neural networks with
self-attention mechanisms have the potential to aid in the optimization of the
system. In this paper, we present this approach and provide promising
preliminary results.
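The abstract names self-attention as the key mechanism but does not spell out an architecture. The sketch below is one plausible reading, not the authors' model: each controllable resource contributes a local feature vector, self-attention lets every resource condition on every other resource, and a linear head emits a per-resource setpoint. All dimensions, module names, and the input layout are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): self-attention over per-resource
# states producing coordinated setpoints. Sizes and the setpoint head are assumed.
import torch
import torch.nn as nn

class AttentionCoordinator(nn.Module):
    def __init__(self, n_features: int = 8, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)           # per-resource encoder
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)                      # e.g., a power setpoint

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_resources, n_features) local measurements per resource
        h = self.embed(x)
        # every resource attends to every other resource -> global coordination
        h, _ = self.attn(h, h, h)
        return self.head(h).squeeze(-1)                        # (batch, n_resources)

# Usage: 32 snapshots of a feeder with 100 controllable resources
coordinator = AttentionCoordinator()
setpoints = coordinator(torch.randn(32, 100, 8))
```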
Related papers
- Global-Decision-Focused Neural ODEs for Proactive Grid Resilience Management [50.34345101758248]
We propose predict-all-then-optimize-globally (PATOG), a framework that integrates outage prediction with globally optimized interventions.
Our approach ensures spatially and temporally coherent decision-making, improving both predictive accuracy and operational efficiency.
Experiments on synthetic and real-world datasets demonstrate significant improvements in outage prediction consistency and grid resilience.
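For intuition only, the snippet below shows a generic predict-then-optimize-globally loop, not the authors' PATOG or its Neural ODE predictor: a stand-in model scores outage risk per grid component, and a single global (greedy, budget-constrained) step picks interventions. All data, weights, and function names are hypothetical.

```python
# Generic predict-then-optimize sketch (hypothetical data and model; not PATOG).
import numpy as np

def predict_outage_risk(features: np.ndarray) -> np.ndarray:
    """Stand-in predictor: any trained model mapping component features to risk."""
    return 1.0 / (1.0 + np.exp(-features @ np.array([0.8, -0.3, 0.5])))

def plan_interventions(risk, load_served, cost, budget):
    """One global decision over all components: greedy on expected load saved per cost."""
    order = np.argsort(-(risk * load_served) / cost)
    chosen, spent = [], 0.0
    for i in order:
        if spent + cost[i] <= budget:
            chosen.append(int(i))
            spent += cost[i]
    return chosen

features = np.random.randn(50, 3)            # 50 grid components (toy data)
risk = predict_outage_risk(features)
plan = plan_interventions(risk, load_served=np.random.rand(50) * 10,
                          cost=np.random.rand(50) + 0.5, budget=5.0)
```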
arXiv Detail & Related papers (2025-02-25T16:15:35Z)
- Cluster-Based Multi-Agent Task Scheduling for Space-Air-Ground Integrated Networks [60.085771314013044]
The low-altitude economy holds significant potential for development in areas such as communication and sensing.
We propose a Clustering-based Multi-agent Deep Deterministic Policy Gradient (CMADDPG) algorithm to address the multi-UAV cooperative task scheduling challenges in SAGIN.
arXiv Detail & Related papers (2024-12-14T06:17:33Z)
- Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WECs) must handle multiple simultaneous waves coming from different directions, known as spread waves.
These complex devices need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
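The summary only says that different function approximators are compared for the policy and critic networks; the two generic parametrizations below (a memoryless feedforward policy and a recurrent one) illustrate the kind of choice involved and are not the architectures evaluated in the paper. All sizes are assumptions.

```python
# Two generic policy parametrizations for a sequential control task (illustrative only).
import torch
import torch.nn as nn

class MLPPolicy(nn.Module):
    """Memoryless policy: maps the current observation to actuator commands."""
    def __init__(self, obs_dim=12, act_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class GRUPolicy(nn.Module):
    """Recurrent policy: keeps a hidden state to capture the system's sequential dynamics."""
    def __init__(self, obs_dim=12, act_dim=3, hidden=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)
    def forward(self, obs_seq, h=None):
        out, h = self.gru(obs_seq, h)
        return torch.tanh(self.head(out)), h

acts = MLPPolicy()(torch.randn(8, 12))                 # batch of single observations
seq_acts, _ = GRUPolicy()(torch.randn(8, 20, 12))      # batch of 20-step sequences
```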
arXiv Detail & Related papers (2024-04-17T02:04:10Z)
- A novel ANROA based control approach for grid-tied multi-functional solar energy conversion system [0.0]
An adaptive control approach for a three-phase grid-interfaced solar photovoltaic system is proposed and discussed.
This method combines an Adaptive Neuro-Fuzzy Inference System (ANFIS) with a Rain Optimization Algorithm (ROA).
The major goal is to avoid power quality problems, including voltage fluctuations, harmonics, and flicker, as well as unbalanced loads and reactive power usage.
arXiv Detail & Related papers (2024-01-26T09:12:39Z)
- A Learning Approach for Joint Design of Event-triggered Control and Power-Efficient Resource Allocation [3.822543555265593]
We study the joint design problem of an event-triggered control and an energy-efficient resource allocation in a fifth generation (5G) wireless network.
We propose a model-free hierarchical reinforcement learning approach that learns four policies simultaneously.
Our simulation results show that the proposed approach can properly control a simulated ICPS and significantly decrease the number of updates on the actuators' input as well as the downlink power consumption.
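The paper learns its triggering and allocation policies with hierarchical RL; purely for intuition, the snippet below shows the classical fixed-threshold event trigger that such a learned policy would replace: the state is transmitted (and the actuator input updated) only when it has drifted far enough from the last transmitted value. The threshold and toy trajectory are assumptions.

```python
# Classical fixed-threshold event trigger (illustrative baseline, not the learned policy).
import numpy as np

def event_triggered_updates(states, threshold=0.5):
    """Return the time steps at which the sensor transmits a new state."""
    last_sent = states[0]
    sent_at = [0]
    for t, x in enumerate(states[1:], start=1):
        if np.linalg.norm(x - last_sent) > threshold:   # trigger condition
            last_sent = x
            sent_at.append(t)
    return sent_at

trajectory = np.cumsum(np.random.randn(200, 2) * 0.1, axis=0)  # toy plant states
updates = event_triggered_updates(trajectory)
print(f"{len(updates)} transmissions out of {len(trajectory)} steps")
```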
arXiv Detail & Related papers (2022-05-14T14:16:11Z)
- A Reinforcement Learning Approach to Parameter Selection for Distributed Optimization in Power Systems [1.1199585259018459]
We develop an adaptive penalty parameter selection policy for the AC optimal power flow (ACOPF) problem solved via ADMM.
We show that our RL policy demonstrates promise for generalizability, performing well under unseen loading schemes as well as under unseen losses of lines and generators.
This work thus provides a proof-of-concept for using RL for parameter selection in ADMM for power systems applications.
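To make concrete which quantity is being selected, the snippet below shows the standard residual-balancing heuristic for the ADMM penalty parameter (Boyd et al.) that an RL policy would stand in for; it is not the paper's learned policy, and the constants are the usual textbook defaults.

```python
# Standard residual-balancing update for the ADMM penalty parameter rho
# (the hand-tuned heuristic an RL policy would replace; not the paper's method).
def update_penalty(rho, primal_residual, dual_residual, mu=10.0, tau=2.0):
    if primal_residual > mu * dual_residual:
        return rho * tau        # primal residual too large -> penalize infeasibility more
    if dual_residual > mu * primal_residual:
        return rho / tau        # dual residual too large -> relax the penalty
    return rho                  # residuals balanced -> keep rho unchanged

rho = 1.0
for r, s in [(5.0, 0.1), (0.2, 0.15), (0.01, 1.0)]:   # toy residual sequence
    rho = update_penalty(rho, r, s)
```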
arXiv Detail & Related papers (2021-10-22T18:17:32Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Scheduling and Power Control for Wireless Multicast Systems via Deep Reinforcement Learning [33.737301955006345]
Multicasting in wireless systems is a way to exploit the redundancy in user requests in a Content Centric Network.
Power control and optimal scheduling can significantly improve the wireless multicast network's performance under fading.
We show that power control policy can be learnt for reasonably large systems via this approach.
arXiv Detail & Related papers (2020-09-27T15:59:44Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
arXiv Detail & Related papers (2020-07-23T23:06:40Z)
- Distributed Voltage Regulation of Active Distribution System Based on Enhanced Multi-agent Deep Reinforcement Learning [9.7314654861242]
This paper proposes a data-driven distributed voltage control approach based on the spectrum clustering and the enhanced multi-agent deep reinforcement learning (MADRL) algorithm.
The proposed method can significantly reduce the requirements of communications and knowledge of system parameters.
It also effectively deals with uncertainties and can provide online coordinated control based on the latest local information.
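As a purely illustrative picture of the distributed setting, the sketch below gives one agent's actor: a small network mapping local measurements to a reactive-power setpoint clipped to an inverter limit. The measurement layout, limits, and network sizes are assumptions, not the paper's design.

```python
# One agent's actor in a distributed voltage-control scheme (illustrative assumptions only).
import torch
import torch.nn as nn

class LocalVoltageAgent(nn.Module):
    def __init__(self, obs_dim=4, q_max_kvar=50.0):
        super().__init__()
        self.q_max = q_max_kvar
        self.actor = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                   nn.Linear(32, 1), nn.Tanh())
    def forward(self, obs):
        # obs: local voltage magnitude, active/reactive injection, time of day (assumed)
        return self.actor(obs) * self.q_max   # reactive-power setpoint in [-q_max, q_max]

agent = LocalVoltageAgent()
q_setpoint = agent(torch.tensor([[1.03, 0.8, 0.1, 0.5]]))  # toy local measurement
```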
arXiv Detail & Related papers (2020-05-31T15:48:27Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.