Evaluation of Look-ahead Economic Dispatch Using Reinforcement Learning
- URL: http://arxiv.org/abs/2209.10207v1
- Date: Wed, 21 Sep 2022 09:08:45 GMT
- Title: Evaluation of Look-ahead Economic Dispatch Using Reinforcement Learning
- Authors: Zekuan Yu, Guangchun Ruan, Xinyue Wang, Guanglun Zhang, Yiliu He,
Haiwang Zhong
- Abstract summary: We propose an evaluation approach to analyze the performance of reinforcement learning agents in a look-ahead economic dispatch scheme.
In particular, a scenario generation method is developed to generate the network scenarios and demand scenarios for evaluation.
Several metrics are defined to evaluate the agents' performance from the perspective of economy and security.
- Score: 4.513295381096656
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Modern power systems are experiencing a variety of challenges driven by
renewable energy, which calls for developing novel dispatch methods such as
reinforcement learning (RL). Evaluation of these methods, as well as of the RL
agents themselves, is largely underexplored. In this paper, we propose an evaluation
approach to analyze the performance of RL agents in a look-ahead economic
dispatch scheme. This approach is conducted by scanning multiple operational
scenarios. In particular, a scenario generation method is developed to generate
the network scenarios and demand scenarios for evaluation, and network
structures are aggregated according to the change rates of power flow. Then
several metrics are defined to evaluate the agents' performance from the
perspective of economy and security. In the case study, we use a modified IEEE
30-bus system to illustrate the effectiveness of the proposed evaluation
approach, and the simulation results reveal good and rapid adaptation to
different scenarios. The comparison between different RL agents also offers
informative guidance for a better design of the learning strategies.
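The economy metrics described above can be illustrated with a minimal sketch. This is not the paper's model: the three-generator data, the merit-order heuristic, and the function names below are all hypothetical, and the IEEE 30-bus network, ramping constraints, and security metrics are omitted. It only shows the idea of scoring a dispatch policy by its total cost over a look-ahead horizon of demand scenarios.

```python
def merit_order_dispatch(demand, gens):
    """Dispatch the cheapest generators first until demand is met.

    gens: list of (marginal_cost_per_MWh, capacity_MW) tuples.
    Returns (dispatch levels in merit order, total cost) for one period.
    """
    dispatch, cost, remaining = [], 0.0, demand
    for marginal_cost, capacity in sorted(gens):
        output = min(capacity, remaining)
        dispatch.append(output)
        cost += marginal_cost * output
        remaining -= output
    if remaining > 1e-9:
        raise ValueError("insufficient generation capacity")
    return dispatch, cost

def lookahead_cost(demand_horizon, gens):
    """Economy metric: total dispatch cost over a look-ahead horizon."""
    return sum(merit_order_dispatch(d, gens)[1] for d in demand_horizon)

# Hypothetical 3-generator system and a 4-period demand forecast.
gens = [(20.0, 50.0), (35.0, 40.0), (60.0, 30.0)]  # ($/MWh, MW)
horizon = [60.0, 80.0, 90.0, 70.0]                 # MW per period
print(lookahead_cost(horizon, gens))               # total cost in $
```

An RL agent's dispatch decisions could then be compared against such a cost baseline across many generated demand scenarios.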
Related papers
- Multi-agent Off-policy Actor-Critic Reinforcement Learning for Partially Observable Environments [30.280532078714455]
This study proposes the use of a social learning method to estimate a global state within a multi-agent off-policy actor-critic algorithm for reinforcement learning.
We show that the difference between final outcomes, obtained when the global state is fully observed versus estimated through the social learning method, is $\varepsilon$-bounded when an appropriate number of iterations of social learning updates are implemented.
arXiv Detail & Related papers (2024-07-06T06:51:14Z)
- Meta-Gradient Search Control: A Method for Improving the Efficiency of Dyna-style Planning [8.552540426753]
This paper introduces an online, meta-gradient algorithm that tunes a probability with which states are queried during Dyna-style planning.
Results indicate that our method improves efficiency of the planning process.
arXiv Detail & Related papers (2024-06-27T22:24:46Z)
- Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly [79.07074710460012]
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention.
An increasing number of transfer-based methods have been developed to fool black-box DNN models.
We establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods.
arXiv Detail & Related papers (2023-11-02T15:35:58Z)
- A Novel Benchmarking Paradigm and a Scale- and Motion-Aware Model for Egocentric Pedestrian Trajectory Prediction [7.306417438683524]
We present a new paradigm for evaluating egocentric pedestrian trajectory prediction algorithms.
We show that our approach achieves significant improvements of up to 40% in challenging scenarios.
arXiv Detail & Related papers (2023-10-16T14:08:34Z)
- Diffusion-based Visual Counterfactual Explanations -- Towards Systematic Quantitative Evaluation [64.0476282000118]
Latest methods for visual counterfactual explanations (VCE) harness the power of deep generative models to synthesize new examples of high-dimensional images of impressive quality.
It is currently difficult to compare the performance of these VCE methods as the evaluation procedures largely vary and often boil down to visual inspection of individual examples and small scale user studies.
We propose a framework for systematic, quantitative evaluation of the VCE methods and a minimal set of metrics to be used.
arXiv Detail & Related papers (2023-08-11T12:22:37Z)
- REX: Rapid Exploration and eXploitation for AI Agents [103.68453326880456]
We propose an enhanced approach for Rapid Exploration and eXploitation for AI Agents called REX.
REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance.
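The Upper Confidence Bound (UCB) concept this summary refers to can be sketched with the classic UCB1 formula; note this is the standard textbook form, not REX's specific reward layer, and the function name below is an illustrative assumption.

```python
import math

def ucb1_score(mean_reward, n_action, n_total, c=math.sqrt(2)):
    """UCB1: exploitation term (empirical mean) plus an exploration bonus
    that shrinks as the action is tried more often."""
    return mean_reward + c * math.sqrt(math.log(n_total) / n_action)

# An under-explored action (tried twice) can outrank a well-explored one
# (tried 50 times) despite a lower empirical mean, driving exploration.
print(ucb1_score(0.4, 2, 100) > ucb1_score(0.6, 50, 100))  # prints True
```

Scoring actions this way is what lets an agent balance trying promising actions against gathering information about rarely tried ones.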
arXiv Detail & Related papers (2023-07-18T04:26:33Z)
- Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
arXiv Detail & Related papers (2023-03-21T14:24:58Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Backward Imitation and Forward Reinforcement Learning via Bi-directional Model Rollouts [11.4219428942199]
Traditional model-based reinforcement learning (RL) methods generate forward rollout traces using the learnt dynamics model.
In this paper, we propose the backward imitation and forward reinforcement learning (BIFRL) framework.
BIFRL empowers the agent to both reach and explore from high-value states in a more efficient manner.
arXiv Detail & Related papers (2022-08-04T04:04:05Z)
- Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.