Continuous Multiagent Control using Collective Behavior Entropy for
Large-Scale Home Energy Management
- URL: http://arxiv.org/abs/2005.10000v1
- Date: Thu, 14 May 2020 16:07:55 GMT
- Title: Continuous Multiagent Control using Collective Behavior Entropy for
Large-Scale Home Energy Management
- Authors: Jianwen Sun, Yan Zheng, Jianye Hao, Zhaopeng Meng, Yang Liu
- Abstract summary: We propose a collective MA-DRL algorithm with a continuous action space to provide fine-grained control of a large-scale microgrid.
Our approach significantly outperforms state-of-the-art methods in power cost reduction and daily peak-load optimization.
- Score: 36.82414045535202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing popularity of electric vehicles, distributed energy
generation, and storage facilities in smart grid systems, efficient
Demand-Side Management (DSM) is urgently needed for energy savings and
peak-load reduction. Traditional DSM approaches, which focus on optimizing the
energy activities of a single household, cannot scale up to large-scale home
energy management problems. Multi-agent Deep Reinforcement Learning (MA-DRL)
offers a potential way to address this scalability problem, with modern homes
interacting to reduce energy consumption while striking a balance between
energy cost and peak-load reduction. However, such an environment is difficult
to solve because of its non-stationarity, and existing MA-DRL approaches
cannot effectively incentivize the expected group behavior. In this paper, we
propose a collective MA-DRL algorithm with a continuous action space to
provide fine-grained control of a large-scale microgrid. To mitigate the
non-stationarity of the microgrid environment, a novel predictive model is
proposed to measure the collective market behavior. In addition, a collective
behavior entropy is introduced to reduce the high peak loads incurred by the
collective behaviors of all householders in the smart grid. Empirical results
show that our approach significantly outperforms state-of-the-art methods in
power cost reduction and daily peak-load optimization.
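The abstract does not spell out how the collective behavior entropy enters the learning objective, so the following is only a minimal sketch of one plausible reading: the Shannon entropy of the normalized aggregate-load profile is added as a reward bonus, so that flat (peak-free) consumption patterns are preferred. The function names and the weight `beta` are hypothetical and are not taken from the paper.

```python
import numpy as np

def collective_behavior_entropy(loads, eps=1e-8):
    """Shannon entropy of the normalized aggregate-load profile.

    `loads` is a 1-D array of total household consumption per time slot.
    A flat (peak-free) profile maximizes this entropy, so using it as a
    reward bonus discourages synchronized peaks.  This is an illustrative
    reading of the paper's entropy term, not its exact form.
    """
    p = np.asarray(loads, dtype=float) + eps
    p = p / p.sum()                      # normalize to a distribution over slots
    return float(-(p * np.log(p)).sum())

def shaped_reward(energy_cost, loads, beta=0.1):
    """Hypothetical per-agent reward: negative cost plus an entropy bonus.

    `beta` is an assumed hyperparameter, not a value reported in the paper.
    """
    return -energy_cost + beta * collective_behavior_entropy(loads)

# Example: a flat profile earns a larger bonus than a peaky one.
flat  = [1.0, 1.0, 1.0, 1.0]
peaky = [3.5, 0.2, 0.2, 0.1]
print(shaped_reward(energy_cost=2.0, loads=flat),
      shaped_reward(energy_cost=2.0, loads=peaky))
```

In this toy comparison the flat profile receives a strictly larger shaped reward than the peaky one, which is the qualitative effect a collective-behavior-entropy term is meant to produce.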
Related papers
- Decentralized Coordination of Distributed Energy Resources through Local Energy Markets and Deep Reinforcement Learning [1.8434042562191815]
Transactive energy, implemented through local energy markets, has recently garnered attention as a promising solution to grid challenges.
This study addresses the gap by training a set of deep reinforcement learning agents to automate end-user participation in ALEX.
The study unveils a clear correlation between bill reduction and reduced net load variability in this setup.
arXiv Detail & Related papers (2024-04-19T19:03:33Z)
- EnergAIze: Multi Agent Deep Deterministic Policy Gradient for Vehicle to Grid Energy Management [0.0]
This paper introduces EnergAIze, a Multi-Agent Reinforcement Learning (MARL) energy management framework.
It enables user-centric and multi-objective energy management by allowing each prosumer to select from a range of personal management objectives.
The efficacy of EnergAIze was evaluated through case studies employing the CityLearn simulation framework.
arXiv Detail & Related papers (2024-04-02T23:16:17Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time, demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Intelligent Residential Energy Management System using Deep Reinforcement Learning [5.532477732693001]
This paper proposes a Deep Reinforcement Learning (DRL) model for demand response where the virtual agent learns the task like humans do.
Our method outperformed the state-of-the-art mixed integer linear programming (MILP) approach for peak-load reduction.
arXiv Detail & Related papers (2020-05-28T19:51:22Z)
- A Hierarchical Approach to Multi-Energy Demand Response: From Electricity to Multi-Energy Applications [1.5084441395740482]
This paper looks into an opportunity to control energy consumption of an aggregation of many residential, commercial and industrial consumers.
This ensemble control becomes a modern demand response contributor to the set of modeling tools for multi-energy infrastructure systems.
arXiv Detail & Related papers (2020-05-05T17:17:51Z)
- Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game; a minimal sketch of such a formulation appears after this list.
arXiv Detail & Related papers (2020-05-05T07:32:40Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
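To make the Markov-game formulation mentioned in the demand-side scheduling entry above concrete, here is a minimal, self-contained environment sketch. The class name, price curve, and demand-arrival model are illustrative assumptions and are not taken from any of the cited papers.

```python
import numpy as np

class DemandSideMarkovGame:
    """Toy N-household scheduling game under real-time pricing.

    Each agent observes its own backlog of deferred appliance demand and
    chooses what fraction to serve in the current slot; the per-unit price
    grows with the aggregate load, so agents are implicitly coupled.
    All modelling choices here are illustrative assumptions.
    """

    def __init__(self, n_agents=5, horizon=24, seed=0):
        self.n_agents, self.horizon = n_agents, horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.backlog = self.rng.uniform(0.5, 2.0, self.n_agents)  # pending demand
        return self.backlog.copy()                                # joint observation

    def step(self, actions):
        """`actions` in [0, 1]^N: fraction of each backlog served now."""
        served = np.clip(actions, 0.0, 1.0) * self.backlog
        total_load = served.sum()
        price = 0.1 + 0.05 * total_load            # real-time price rises with load
        rewards = -price * served                  # each household pays for its own use
        self.backlog -= served
        self.backlog += self.rng.uniform(0.0, 0.5, self.n_agents)  # new demand arrivals
        self.t += 1
        done = self.t >= self.horizon
        return self.backlog.copy(), rewards, done

# One random rollout, just to exercise the interface.
env = DemandSideMarkovGame()
obs, done = env.reset(), False
rng = np.random.default_rng(1)
while not done:
    obs, rewards, done = env.step(rng.uniform(size=env.n_agents))
```

The coupling through the aggregate-load-dependent price is what makes this a Markov game rather than N independent MDPs: each household's reward depends on the joint action of all households.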