Decentralized Multi-agent Reinforcement Learning based State-of-Charge
Balancing Strategy for Distributed Energy Storage System
- URL: http://arxiv.org/abs/2308.15394v1
- Date: Tue, 29 Aug 2023 15:48:49 GMT
- Title: Decentralized Multi-agent Reinforcement Learning based State-of-Charge
Balancing Strategy for Distributed Energy Storage System
- Authors: Zheng Xiong, Biao Luo, Bing-Chuan Wang, Xiaodong Xu, Xiaodong Liu, and
Tingwen Huang
- Abstract summary: This paper develops a Decentralized Multi-Agent Reinforcement Learning (Dec-MARL) method to solve the SoC balancing problem in the distributed energy storage system (DESS).
Through this procedure, Dec-MARL achieves outstanding performance in a fully decentralized system without any expert experience or complicated model construction.
- Score: 30.137522138745986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper develops a Decentralized Multi-Agent Reinforcement Learning
(Dec-MARL) method to solve the SoC balancing problem in the distributed energy
storage system (DESS). First, the SoC balancing problem is formulated into a
finite Markov decision process with action constraints derived from demand
balance, which can be solved by Dec-MARL. Specifically, the first-order average
consensus algorithm is utilized to expand the observations of the DESS state in
a fully decentralized way, and the initial actions (i.e., output power) are
decided by the agents (i.e., energy storage units) according to these
observations. In order to get the final actions in the allowable range, a
counterfactual demand balance algorithm is proposed to balance the total demand
and the initial actions. Next, the agents execute the final actions and get
local rewards from the environment, and the DESS steps into the next state.
Finally, through the first-order average consensus algorithm, the agents get
the average reward and the expanded observation of the next state for later
training. Through this procedure, Dec-MARL achieves outstanding performance in a
fully decentralized system without any expert experience or complicated model
construction. Moreover, it is flexible and can be extended to other
decentralized multi-agent systems straightforwardly. Extensive simulations have
validated the effectiveness and efficiency of Dec-MARL.
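As a concrete illustration of the per-step procedure described above, the following Python sketch implements the two decentralized computations the abstract names: first-order average consensus to expand each unit's observation of the DESS state, and a projection of the initial output powers onto the demand-balance constraint. This is a minimal sketch, not the authors' implementation: the communication graph, parameters, and function names are illustrative assumptions, and the simple proportional projection merely stands in for the proposed counterfactual demand balance algorithm.

```python
import numpy as np

def average_consensus(values, adjacency, epsilon=0.1, iterations=50):
    """First-order average consensus: each unit repeatedly mixes its own
    value with its neighbours' values and, for a connected graph and a
    small enough step size epsilon, converges to the network-wide mean."""
    x = np.asarray(values, dtype=float).copy()
    degrees = adjacency.sum(axis=1)
    for _ in range(iterations):
        # x_i <- x_i + epsilon * sum_j a_ij * (x_j - x_i)
        x = x + epsilon * (adjacency @ x - degrees * x)
    return x

def demand_balance_projection(p_init, demand, p_min, p_max, tol=1e-6):
    """Project the initial output powers onto the demand-balance
    constraint sum_i p_i = demand, subject to p_min <= p_i <= p_max.
    A generic stand-in for the paper's counterfactual demand balance
    algorithm, whose details are not reproduced here."""
    p = np.clip(p_init, p_min, p_max)
    for _ in range(100):
        gap = demand - p.sum()
        if abs(gap) < tol:
            break
        # share the remaining mismatch in proportion to each unit's
        # headroom toward the limit the mismatch pushes against
        room = (p_max - p) if gap > 0 else (p - p_min)
        if room.sum() <= tol:
            break  # demand is infeasible under these power limits
        p = np.clip(p + gap * room / room.sum(), p_min, p_max)
    return p

# Example: four storage units on a line communication graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
soc = np.array([0.9, 0.4, 0.6, 0.3])
expanded_obs = average_consensus(soc, A)     # every unit learns the mean SoC
p_final = demand_balance_projection(
    p_init=np.array([4.0, 1.0, 2.0, 1.0]),   # initial actions from the policies
    demand=10.0,
    p_min=np.zeros(4),
    p_max=np.full(4, 5.0))
```

After executing the balanced powers, each unit would collect its local reward, and the same consensus routine would average the rewards and expand the next state's observation for training, closing the loop the abstract describes.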
Related papers
- NodeOP: Optimizing Node Management for Decentralized Networks [8.225105658045843]
We present NodeOP, a novel framework designed to optimize the management of General Node Operators in decentralized networks.
By integrating Agent-Based Modeling (ABM) with a Tendermint Byzantine Fault Tolerance (BFT)-based consensus mechanism, NodeOP addresses key challenges in task allocation, consensus formation, and system stability.
arXiv Detail & Related papers (2024-10-22T06:00:04Z)
- Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior [28.313779052437134]
We propose novel models for decentralized partially observable MFC (Dec-POMFC).
We provide rigorous theoretical results, including a dynamic programming principle.
Overall, our framework takes a step towards RL-based engineering of artificial collective behavior via MFC.
arXiv Detail & Related papers (2023-07-12T14:02:03Z)
- Monte-Carlo Search for an Equilibrium in Dec-POMDPs [11.726372393432195]
Decentralized partially observable Markov decision processes (Dec-POMDPs) formalize the problem of designing individual controllers for a group of collaborative agents.
Seeking a global optimum is hard, but seeking a Nash equilibrium -- each agent's policy being a best response to the other agents -- is more accessible.
We show that this approach can be adapted to cases where only a generative model (a simulator) of the Dec-POMDP is available.
arXiv Detail & Related papers (2023-05-19T16:47:46Z)
- Learning Distributed and Fair Policies for Network Load Balancing as Markov Potential Game [4.892398873024191]
This paper investigates the network load balancing problem in data centers (DCs) where multiple load balancers (LBs) are deployed.
The challenges of this problem consist of the heterogeneous processing architecture and dynamic environments.
We formulate the multi-agent load balancing problem as a Markov potential game, with a carefully designed workload distribution fairness measure as the potential function (a sketch of such a fairness potential appears after this list).
A fully distributed MARL algorithm is proposed to approximate the Nash equilibrium of the game.
arXiv Detail & Related papers (2022-06-03T08:29:02Z)
- Emergence of Theory of Mind Collaboration in Multiagent Systems [65.97255691640561]
We propose an adaptive training algorithm to develop effective collaboration between agents with ToM.
We evaluate our algorithms with two games, where our algorithm surpasses all previous decentralized execution algorithms without modeling ToM.
arXiv Detail & Related papers (2021-09-30T23:28:00Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive stochastic incremental ADMM (asI-ADMM) algorithm and apply it to decentralized RL in edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and adapt well to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework, which can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
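The load-balancing entry above uses a workload distribution fairness measure as the potential function of a Markov potential game. As a hypothetical illustration only (that paper's exact potential is not reproduced here), Jain's fairness index is one standard way to score how evenly workloads are spread across servers:

```python
import numpy as np

def jain_fairness(workloads):
    """Jain's fairness index over per-server workloads: 1.0 when all
    servers carry equal load, approaching 1/n when a single server
    carries everything. A potential-game formulation would use a shared
    function like this, so that any unilateral routing change that helps
    one agent also raises the common potential."""
    w = np.asarray(workloads, dtype=float)
    if w.sum() == 0:
        return 1.0  # no load anywhere counts as perfectly balanced
    return w.sum() ** 2 / (len(w) * (w ** 2).sum())

print(jain_fairness([3, 3, 3, 3]))   # 1.0  (perfectly balanced)
print(jain_fairness([12, 0, 0, 0]))  # 0.25 (all load on one server)
```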