Distributed Management of Fluctuating Energy Resources in Dynamic Networked Systems
- URL: http://arxiv.org/abs/2405.19015v1
- Date: Wed, 29 May 2024 11:54:11 GMT
- Title: Distributed Management of Fluctuating Energy Resources in Dynamic Networked Systems
- Authors: Xiaotong Cheng, Ioannis Tsetis, Setareh Maghsudi
- Abstract summary: We study the energy-sharing problem in a system consisting of several DERs.
We model this problem as a bandit convex optimization problem with constraints that correspond to each node's limitations for energy production.
We propose distributed decision-making policies to solve the formulated problem, where we utilize the notion of dynamic regret as the performance metric.
- Score: 3.716849174391564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern power systems integrate renewable distributed energy resources (DERs) as an environment-friendly enhancement to meet ever-increasing demand. However, the inherent unreliability of renewable energy renders developing DER management algorithms imperative. We study the energy-sharing problem in a system consisting of several DERs. Each agent harvests and distributes renewable energy in its neighborhood to optimize the network's performance while minimizing energy waste. We model this problem as a bandit convex optimization problem with constraints that correspond to each node's limitations for energy production. We propose distributed decision-making policies to solve the formulated problem, where we utilize the notion of dynamic regret as the performance metric. We also include an adjustment strategy in our developed algorithm to reduce constraint violations. In addition, we design a policy that handles non-stationary environments. Theoretical analysis shows the effectiveness of our proposed algorithm. Numerical experiments using a real-world dataset show the superior performance of our proposal compared to state-of-the-art methods.
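The abstract describes the method only at a high level, so the following is a minimal sketch of the general technique it builds on: projected online gradient descent with a one-point bandit gradient estimate, run independently per node under a simple box constraint standing in for production limits. All names (node losses, `project_feasible`, the bounds) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_feasible(x, lo, hi):
    """Projection onto a box [lo, hi], standing in for a node's
    energy-production limits (illustrative constraint set only)."""
    return np.clip(x, lo, hi)

def one_point_gradient(loss, x, delta):
    """One-point bandit gradient estimate: only the loss value at a
    perturbed point is observed, never the gradient itself."""
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return (d / delta) * loss(x + delta * u) * u

def bandit_ogd(loss_fns, x0, lo, hi, eta=0.05, delta=0.1):
    """Projected online gradient descent under bandit feedback.
    Dynamic regret would compare these iterates against the per-round
    minimizers. A production version would also shrink the feasible
    set so that the perturbed query points stay feasible."""
    x = x0.copy()
    iterates = []
    for loss in loss_fns:
        g = one_point_gradient(loss, x, delta)
        x = project_feasible(x - eta * g, lo, hi)
        iterates.append(x.copy())
    return iterates

# Toy usage: time-varying quadratic losses mimicking fluctuating demand.
T, d = 50, 3
targets = rng.uniform(0.0, 1.0, size=(T, d))
losses = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in targets]
xs = bandit_ogd(losses, x0=np.zeros(d), lo=0.0, hi=1.0)
```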
Related papers
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [60.17407932691429]
Open Radio Access Network systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and "challenging" environments.
We prove the proposed solutions achieve sub-linear regret, providing zero average optimality gap even in challenging environments.
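For context on this claim: with static regret defined as below, sub-linear growth immediately implies that the time-averaged optimality gap vanishes. The paper may use a different (e.g., dynamic or constrained) regret notion; this is only the standard definition.

```latex
R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\qquad
R_T = o(T) \;\Longrightarrow\; \frac{R_T}{T} \to 0 .
```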
arXiv Detail & Related papers (2023-09-04T17:30:21Z)
- A Constraint Enforcement Deep Reinforcement Learning Framework for Optimal Energy Storage Systems Dispatch [0.0]
The optimal dispatch of energy storage systems (ESSs) presents formidable challenges due to fluctuations in dynamic prices, demand consumption, and renewable-based energy generation.
By exploiting the generalization capabilities of deep neural networks (DNNs), deep reinforcement learning (DRL) algorithms can learn good-quality control models that adaptively respond to distribution networks' nature.
We propose a DRL framework that effectively handles continuous action spaces while strictly enforcing the environment's operational constraints on the action space during online operation.
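The snippet below is not that paper's framework; it only illustrates the generic idea of wrapping a continuous-action policy so that every emitted action is projected into a feasible set before execution. The bounds and the `policy` callable are placeholder assumptions.

```python
import numpy as np

def project_to_limits(action, p_min, p_max):
    """Clip a proposed charge/discharge action into operational limits
    (a simple stand-in for a more general feasibility projection)."""
    return np.clip(action, p_min, p_max)

def safe_step(env, policy, obs, p_min=-1.0, p_max=1.0):
    """Query the learned policy, enforce the constraints, then act."""
    raw_action = policy(obs)                       # unconstrained network output
    action = project_to_limits(raw_action, p_min, p_max)
    return env.step(action)
```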
arXiv Detail & Related papers (2023-07-26T17:12:04Z)
- State-Augmented Learnable Algorithms for Resource Management in Wireless Networks [124.89036526192268]
We propose a state-augmented algorithm for solving resource management problems in wireless networks.
We show that the proposed algorithm leads to feasible and near-optimal radio resource management (RRM) decisions.
arXiv Detail & Related papers (2022-07-05T18:02:54Z)
- Multi-Objective Constrained Optimization for Energy Applications via Tree Ensembles [55.23285485923913]
Energy systems optimization problems are complex due to strongly non-linear system behavior and multiple competing objectives.
In some cases, proposed optimal solutions need to obey explicit input constraints related to physical properties or safety-critical operating conditions.
This paper proposes a novel data-driven strategy using tree ensembles for constrained multi-objective optimization of black-box problems.
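As a rough illustration of surrogate-based constrained multi-objective optimization with tree ensembles (not that paper's specific formulation, which optimizes over the trees directly), one can fit ensemble surrogates to each objective, discard constraint-violating candidates, and keep the Pareto-nondominated ones. The objectives, constraint, and dimensions below are toy assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy black-box objectives and one explicit input constraint (illustrative only).
f1 = lambda X: np.sum((X - 0.2) ** 2, axis=1)
f2 = lambda X: np.sum((X - 0.8) ** 2, axis=1)
feasible = lambda X: X.sum(axis=1) <= 1.5          # stands in for a safety limit

# Fit tree-ensemble surrogates on previously evaluated designs.
X_train = rng.uniform(0, 1, size=(200, 3))
m1 = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, f1(X_train))
m2 = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, f2(X_train))

# Propose candidates, enforce the constraint, keep the Pareto-nondominated ones.
cand = rng.uniform(0, 1, size=(2000, 3))
cand = cand[feasible(cand)]
y = np.column_stack([m1.predict(cand), m2.predict(cand)])
pareto = [i for i in range(len(y))
          if not np.any(np.all(y <= y[i], axis=1) & np.any(y < y[i], axis=1))]
suggestions = cand[pareto]
```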
arXiv Detail & Related papers (2021-11-04T20:18:55Z)
- Enforcing Policy Feasibility Constraints through Differentiable Projection for Energy Optimization [57.88118988775461]
We propose PROjected Feasibility (PROF) to enforce convex operational constraints within neural policies.
We demonstrate PROF on two applications: energy-efficient building operation and inverter control.
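The sketch below only illustrates the core idea of composing a neural policy with a differentiable projection onto the feasible set, using a box constraint whose projection has a closed form; PROF itself targets general convex constraint sets. The network shape and bounds are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ProjectedPolicy(nn.Module):
    """Neural policy followed by a projection onto a box feasible set.
    Clamping is (sub)differentiable, so the constraint layer can be
    trained end-to-end with the rest of the policy."""
    def __init__(self, obs_dim, act_dim, a_min, a_max):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))
        self.a_min, self.a_max = a_min, a_max

    def forward(self, obs):
        raw = self.net(obs)
        return torch.clamp(raw, self.a_min, self.a_max)  # projection onto the box

policy = ProjectedPolicy(obs_dim=8, act_dim=2, a_min=0.0, a_max=1.0)
action = policy(torch.randn(1, 8))   # always inside [0, 1]^2
```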
arXiv Detail & Related papers (2021-05-19T01:58:10Z)
- Sliding Differential Evolution Scheduling for Federated Learning in Bandwidth-Limited Networks [23.361422744588978]
Federated learning (FL) in a bandwidth-limited network with energy-limited user equipments (UEs) is under-explored.
We propose the sliding differential evolution-based scheduling (SDES) policy to jointly save energy consumed by the battery-limited UEs and accelerate the convergence of the global model in FL for the bandwidth-limited network.
arXiv Detail & Related papers (2020-10-18T14:08:24Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- A Scalable Method for Scheduling Distributed Energy Resources using Parallelized Population-based Metaheuristics [0.0]
A new generic and highly parallel method for unit commitment of distributed energy resources is presented.
The new method provides cluster or cloud parallelizability and is able to deal with a comparably large number of distributed energy resources.
arXiv Detail & Related papers (2020-02-18T11:51:28Z)
- Targeted free energy estimation via learned mappings [66.20146549150475]
Free energy perturbation (FEP) was proposed by Zwanzig more than six decades ago as a method to estimate free energy differences.
FEP suffers from a severe limitation: the requirement of sufficient overlap between distributions.
One strategy to mitigate this problem, called Targeted Free Energy Perturbation, uses a high-dimensional mapping in configuration space to increase overlap.
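For reference, the underlying identities in standard notation (the paper's notation may differ): Zwanzig's estimator, and its targeted generalization in which an invertible configuration-space map M is applied to increase overlap.

```latex
\Delta F \;=\; -\beta^{-1}\,
\ln \Big\langle e^{-\beta\,[\,U_B(x) - U_A(x)\,]} \Big\rangle_{A}
\qquad \text{(Zwanzig's FEP identity)}
```

```latex
\Delta F \;=\; -\beta^{-1}\,
\ln \Big\langle e^{-\beta\,\Phi(x)} \Big\rangle_{A},
\qquad
\Phi(x) \;=\; U_B\!\big(M(x)\big) - U_A(x) - \beta^{-1}\ln\big|\det J_M(x)\big|
\qquad \text{(targeted FEP)}
```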
arXiv Detail & Related papers (2020-02-12T11:10:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.