Deep Reinforcement Learning-driven Cross-Community Energy Interaction Optimal Scheduling
- URL: http://arxiv.org/abs/2308.12554v2
- Date: Sat, 2 Sep 2023 13:22:39 GMT
- Title: Deep Reinforcement Learning-driven Cross-Community Energy Interaction Optimal Scheduling
- Authors: Yang Li, Wenjie Ma, Fanjin Bu, Zhen Yang, Bin Wang, Meng Han
- Abstract summary: This paper proposes a comprehensive scheduling model that utilizes a multi-agent deep reinforcement learning algorithm to learn load characteristics of different communities.
The model reduces the wind curtailment rate from 16.3% to 0% and lowers the overall operating cost by 5445.6 Yuan.
- Score: 15.410849325499017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To coordinate energy interactions among communities and energy
conversions among multi-energy subsystems within a multi-community integrated
energy system under uncertain conditions, and to achieve overall optimization
and scheduling of the comprehensive energy system, this paper proposes a
comprehensive scheduling model that uses a multi-agent deep reinforcement
learning algorithm to learn the load characteristics of different communities
and make decisions based on that knowledge. In this model, the scheduling
problem of the integrated energy system is formulated as a Markov decision
process and solved with a data-driven deep reinforcement learning algorithm,
which avoids explicitly modeling the complex energy coupling relationships
between the communities and the multi-energy subsystems. Simulation results
show that the proposed method effectively captures the load characteristics of
different communities and exploits their complementary features to coordinate
reasonable energy interactions among them. This reduces the wind curtailment
rate from 16.3% to 0% and lowers the overall operating cost by 5445.6 Yuan,
demonstrating significant economic and environmental benefits.
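The abstract above casts the multi-community scheduling problem as a Markov decision process (MDP) solved by multi-agent deep reinforcement learning. The sketch below is a minimal, hypothetical illustration of that framing in Python: independent tabular Q-learning agents stand in for the paper's deep RL algorithm, and the toy environment, community count, state/action definitions, and reward are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch: multi-community energy scheduling as an MDP, solved with
# independent Q-learning agents (standing in for the paper's multi-agent deep
# RL algorithm). All quantities, dynamics, and names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_COMMUNITIES = 3          # number of community agents (assumed)
N_STEPS = 24               # hourly scheduling horizon (assumed)
ACTIONS = np.linspace(-1.0, 1.0, 5)   # normalized cross-community exchange
N_LEVELS = 4               # discretized net-load levels per community

def net_load(step):
    """Toy net load (demand minus renewable output) per community and hour."""
    base = np.array([0.6, 0.9, 0.4])
    return base * (1.0 + 0.3 * np.sin(2 * np.pi * (step + np.arange(3)) / 24))

def discretize(x):
    return np.clip((x * N_LEVELS).astype(int), 0, N_LEVELS - 1)

# One Q-table per agent: state = (own discretized net load, hour), action index.
Q = [np.zeros((N_LEVELS, N_STEPS, len(ACTIONS))) for _ in range(N_COMMUNITIES)]
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(500):
    for t in range(N_STEPS):
        load = net_load(t)
        state = discretize(load)
        # Each agent picks its cross-community exchange action (epsilon-greedy).
        acts = []
        for i in range(N_COMMUNITIES):
            if rng.random() < eps:
                acts.append(rng.integers(len(ACTIONS)))
            else:
                acts.append(int(np.argmax(Q[i][state[i], t])))
        exchange = ACTIONS[acts]
        # Shared reward: penalize residual imbalance after exchanges and
        # penalize curtailment-like surplus (toy stand-in for operating cost).
        residual = load - exchange
        reward = -np.sum(np.abs(residual)) - 0.5 * np.sum(np.maximum(-residual, 0))
        next_state = discretize(net_load((t + 1) % N_STEPS))
        for i in range(N_COMMUNITIES):
            best_next = np.max(Q[i][next_state[i], (t + 1) % N_STEPS])
            td = reward + gamma * best_next - Q[i][state[i], t, acts[i]]
            Q[i][state[i], t, acts[i]] += alpha * td

print("Greedy hour-0 exchanges per community:",
      [ACTIONS[int(np.argmax(Q[i][discretize(net_load(0))[i], 0]))]
       for i in range(N_COMMUNITIES)])
```

In the paper's setting, the tabular value functions would be replaced by deep networks and the state would include the multi-energy coupling and uncertainty the abstract mentions, but the MDP interaction loop (observe state, choose exchange actions, receive a cost-based reward, update the policy) has the same shape.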
Related papers
- Reinforcement Learning for Efficient Design and Control Co-optimisation of Energy Systems [0.0]
This study introduces a novel reinforcement learning framework tailored for the co-optimisation of design and control in energy systems.
By leveraging RL's model-free capabilities, the framework eliminates the need for explicit system modelling.
This contribution paves the way for advanced RL applications in energy management, leading to more efficient and effective use of renewable energy sources.
arXiv Detail & Related papers (2024-06-28T11:01:02Z)
- Balancing Energy Efficiency and Distributional Robustness in Over-the-Air Federated Learning [40.96977338485749]
This paper presents a novel approach that ensures energy efficiency for distributionally robust federated learning (FL) with over-the-air computation (AirComp).
We introduce a novel client selection method that integrates two complementary insights: a deterministic one that is designed for energy efficiency, and a probabilistic one designed for distributional robustness.
Simulation results underscore the efficacy of the proposed algorithm, revealing its superior performance compared to baselines from both robustness and energy efficiency perspectives.
arXiv Detail & Related papers (2023-12-22T12:15:52Z)
- On Feature Diversity in Energy-based Models [98.78384185493624]
An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration.
We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs.
arXiv Detail & Related papers (2023-06-02T12:30:42Z)
- Optimal scheduling of island integrated energy systems considering multi-uncertainties and hydrothermal simultaneous transmission: A deep reinforcement learning approach [3.900623554490941]
Multiple uncertainties from power sources and loads have brought challenges to the stable supply of various resources on islands.
To address these challenges, a comprehensive scheduling framework is proposed based on modeling an island integrated energy system (IES).
In response to the freshwater shortage on islands, a transmission structure of "hydrothermal simultaneous transmission" (HST) is proposed in addition to introducing seawater desalination systems.
arXiv Detail & Related papers (2022-12-27T12:46:25Z)
- Multi-Resource Allocation for On-Device Distributed Federated Learning Systems [79.02994855744848]
This work poses a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in the on-device distributed federated learning (FL) system.
Each mobile device in the system engages in the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively.
arXiv Detail & Related papers (2022-11-01T14:16:05Z)
- Multi-Objective Constrained Optimization for Energy Applications via Tree Ensembles [55.23285485923913]
Energy systems optimization problems are complex due to strongly non-linear system behavior and multiple competing objectives.
In some cases, proposed optimal solutions need to obey explicit input constraints related to physical properties or safety-critical operating conditions.
This paper proposes a novel data-driven strategy using tree ensembles for constrained multi-objective optimization of black-box problems.
arXiv Detail & Related papers (2021-11-04T20:18:55Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, multiple learning tasks with different datasets may coexist.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- Optimal Power Allocation for Rate Splitting Communications with Deep Reinforcement Learning [61.91604046990993]
This letter introduces a novel framework to optimize the power allocation for users in a Rate Splitting Multiple Access network.
In the network, messages intended for users are split into a single common part and respective private parts.
arXiv Detail & Related papers (2021-07-01T06:32:49Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- A Scalable Method for Scheduling Distributed Energy Resources using Parallelized Population-based Metaheuristics [0.0]
A new generic and highly parallel method for unit commitment of distributed energy resources is presented.
The new method provides cluster or cloud parallelizability and can handle a comparatively large number of distributed energy resources; a generic sketch of the parallel-evaluation idea appears after this list.
arXiv Detail & Related papers (2020-02-18T11:51:28Z)
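The last entry above describes parallelized population-based metaheuristics for scheduling distributed energy resources. The following is a generic, hypothetical Python sketch of that idea only (parallel fitness evaluation of candidate on/off unit-commitment schedules inside a simple evolutionary loop); it is not the referenced paper's algorithm, and the unit data, penalty weight, and function names are invented for illustration.

```python
# Hypothetical sketch of parallelized population-based search for unit
# commitment: evaluate a population of candidate on/off schedules in parallel,
# then keep the cheapest candidates. All numbers here are made up.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

N_UNITS, N_HOURS, POP_SIZE, GENERATIONS = 10, 24, 64, 20
rng = np.random.default_rng(1)
DEMAND = 3.0 + 1.5 * np.sin(2 * np.pi * np.arange(N_HOURS) / 24)   # toy demand curve
CAPACITY = rng.uniform(0.3, 1.0, N_UNITS)                          # unit capacities
COST = rng.uniform(10.0, 40.0, N_UNITS)                            # cost per unit-hour

def evaluate(schedule):
    """Fitness of one candidate: running cost plus penalty for unmet demand."""
    supplied = schedule @ CAPACITY                     # per-hour available capacity
    running_cost = float(np.sum(schedule * COST))
    shortfall = np.maximum(DEMAND - supplied, 0.0)
    return running_cost + 1000.0 * float(np.sum(shortfall))

def mutate(schedule):
    flips = rng.random(schedule.shape) < 0.05          # flip ~5% of on/off decisions
    return np.where(flips, 1 - schedule, schedule)

if __name__ == "__main__":
    population = rng.integers(0, 2, size=(POP_SIZE, N_HOURS, N_UNITS))
    with ProcessPoolExecutor() as pool:                # parallel fitness evaluation
        for gen in range(GENERATIONS):
            fitness = list(pool.map(evaluate, population))
            order = np.argsort(fitness)
            elite = population[order[: POP_SIZE // 4]] # keep the best quarter
            children = np.array([mutate(elite[i % len(elite)]) for i in range(POP_SIZE)])
            population = children
            population[: len(elite)] = elite           # elitism: keep the parents too
        print("Best cost after evolution:", min(map(evaluate, population)))
```

The point of the sketch is the `ProcessPoolExecutor` call: fitness evaluation of the population is embarrassingly parallel, which is what makes cluster- or cloud-scale execution of such metaheuristics straightforward.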
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.