Intelligent Residential Energy Management System using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2005.14259v1
- Date: Thu, 28 May 2020 19:51:22 GMT
- Title: Intelligent Residential Energy Management System using Deep Reinforcement Learning
- Authors: Alwyn Mathew, Abhijit Roy, Jimson Mathew
- Abstract summary: This paper proposes a Deep Reinforcement Learning (DRL) model for demand response where the virtual agent learns the task like humans do.
Our method outperformed state-of-the-art mixed integer linear programming (MILP) for peak load reduction.
- Score: 5.532477732693001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rising demand for electricity, and its essential role in today's world, calls for intelligent home energy management (HEM) systems that can reduce energy usage. This involves shifting loads from peak hours of the day, when energy consumption is highest, to off-peak periods, when consumption is relatively low, thereby reducing the system's peak load demand; this in turn lowers energy bills and improves the load demand profile. This work introduces a novel way to develop a learning system that learns from experience to shift loads from one time instance to another and so minimize the aggregate peak load. This paper proposes a Deep Reinforcement Learning (DRL) model for demand response in which the virtual agent learns the task the way humans do: the agent receives feedback for every action it takes in the environment, and this feedback drives it to learn about the environment and take smarter actions later in its learning stages. Our method outperformed state-of-the-art mixed integer linear programming (MILP) for peak load reduction. The authors also designed an agent that learns to minimize consumers' electricity bills and the utility's system peak load demand simultaneously. The proposed model was evaluated with loads from five residential consumers; when time-shiftable loads are handled by the proposed method, each consumer's monthly savings increase through a substantial reduction in their electricity bill, while the peak load on the system is minimized.
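The abstract gives no implementation details, but the feedback-driven learning loop it describes can be sketched with a toy tabular agent: a single time-shiftable load is placed in one of 24 hourly slots, and the reward penalizes how "peaky" the resulting aggregate profile is (a sum-of-squares proxy for peak load). The base-load profile, reward shape, and hyper-parameters below are illustrative assumptions, not taken from the paper.

```python
import random

# Toy environment (illustrative, not from the paper): one 1 kW shiftable
# load must be placed in a single hourly slot on top of a fixed base load.
BASE_LOAD = [3.0, 2.5, 2.0, 1.5, 1.2, 1.5, 2.5, 4.0, 5.0, 4.5, 4.0, 3.8,
             3.5, 3.6, 3.8, 4.2, 5.5, 6.0, 6.5, 6.0, 5.0, 4.5, 4.0, 3.5]
SHIFTABLE_KW = 1.0

def reward(hour):
    """Negative sum of squared hourly loads: a smooth proxy that rewards
    flatter profiles (and hence lower peaks)."""
    loads = BASE_LOAD[:]
    loads[hour] += SHIFTABLE_KW
    return -sum(l * l for l in loads)

def train(episodes=20000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0] * 24                      # action-value estimate per slot
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            hour = rng.randrange(24)
        else:
            hour = max(range(24), key=q.__getitem__)
        # incremental update toward the observed (deterministic) reward
        q[hour] += alpha * (reward(hour) - q[hour])
    return q

q = train()
best_hour = max(range(24), key=q.__getitem__)  # slot the agent shifts the load to
```

The agent converges to the hour where the base profile is lowest, which is exactly the load-shifting behavior the abstract describes; the paper's DRL agent generalizes this idea with a neural value function over a much richer state.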
Related papers
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion around the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions.
arXiv Detail & Related papers (2023-11-28T15:09:36Z)
- Transfer Learning in Transformer-Based Demand Forecasting For Home Energy Management System [4.573008040057806]
We analyze how transfer learning can help by exploiting data from multiple households to improve a single house's load forecasting.
Specifically, we train an advanced forecasting model using data from multiple households, and then fine-tune this global model on a new household with limited data.
The obtained models are used to forecast the household's power consumption for the next 24 hours (day-ahead) at a 15-minute time resolution.
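As a rough illustration of this pretrain-then-fine-tune recipe (with an intentionally tiny linear model and synthetic household data standing in for the paper's transformer and real measurements, all invented for the sketch):

```python
# Hypothetical sketch: pretrain a tiny linear day-ahead forecaster on pooled
# data from several households, then fine-tune it on a new household using
# only a small slice of its data. Everything here is illustrative.

def make_data(slope, intercept, n=96):
    # Synthetic (feature, load) pairs for one household: load is an exact
    # linear function of a normalized time-of-day feature.
    xs = [i / n for i in range(n)]
    return [(x, slope * x + intercept) for x in xs]

def mse(w, b, data):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def sgd(w, b, data, lr, epochs):
    # Plain per-sample stochastic gradient descent on squared error.
    for _ in range(epochs):
        for x, y in data:
            err = w * x + b - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# Pretrain on three source households with similar but distinct profiles.
source = make_data(2.0, 1.0) + make_data(2.4, 0.8) + make_data(1.6, 1.2)
w, b = sgd(0.0, 0.0, source, lr=0.05, epochs=50)

# Fine-tune on a new household using only 24 of its 96 samples.
target = make_data(3.0, 0.5)
before = mse(w, b, target)
w_ft, b_ft = sgd(w, b, target[::4], lr=0.05, epochs=20)
after = mse(w_ft, b_ft, target)
```

Fine-tuning moves the pooled ("global") parameters toward the new household's profile, so the forecast error on the target household drops even though only a quarter of its data was used.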
arXiv Detail & Related papers (2023-10-29T21:19:08Z)
- Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement Learning considering End Users Flexibility [1.3812010983144802]
This work aims to identify a cost-reducing EV charging policy for households under a Time-of-Use tariff scheme, using Deep Reinforcement Learning, specifically Deep Q-Networks (DQN).
A novel end-user flexibility potential reward is inferred from historical data analysis; households with solar power generation were used to train and test the algorithm.
arXiv Detail & Related papers (2023-10-13T12:07:36Z)
- Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Optimal Load Scheduling Using Genetic Algorithm to Improve the Load Profile [0.0]
A genetic algorithm (GA) is used to schedule loads via a real-time pricing (RTP) signal.
We conclude that the GA provides an optimal solution for scheduling household appliances, curtailing the overall energy cost and the peak-to-average ratio and hence improving the load profile.
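The GA loop this entry describes can be sketched as follows; the price signal, appliance loads, and GA settings are invented for illustration (the paper's RTP data and operator details may differ):

```python
import random

# Illustrative GA for appliance scheduling under a real-time price signal.
PRICE = [0.10, 0.08, 0.07, 0.07, 0.08, 0.10, 0.15, 0.20, 0.22, 0.20, 0.18,
         0.17, 0.16, 0.17, 0.18, 0.20, 0.25, 0.30, 0.32, 0.30, 0.25, 0.20,
         0.15, 0.12]                    # $/kWh for each hour of the day
LOADS = [1.5, 0.8, 2.0, 1.2, 0.5]       # kWh drawn by each shiftable appliance

def cost(schedule):
    """Energy cost of running appliance i at hour schedule[i]."""
    return sum(PRICE[h] * kwh for h, kwh in zip(schedule, LOADS))

def evolve(generations=60, pop_size=30, mut=0.1, seed=1):
    rng = random.Random(seed)
    # A chromosome is a list of start hours, one per appliance.
    pop = [[rng.randrange(24) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]     # elitism: keep the cheaper half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(LOADS))
            child = a[:cut] + b[cut:]    # one-point crossover
            if rng.random() < mut:       # random-hour mutation
                child[rng.randrange(len(LOADS))] = rng.randrange(24)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
```

With elitism the best cost never increases across generations, and the population drifts toward placing appliances in the cheapest hours; a peak-to-average-ratio penalty could be added to `cost` to also flatten the profile, as the paper does.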
arXiv Detail & Related papers (2021-10-14T04:47:17Z)
- Dynamic residential load scheduling based on an adaptive consumption level pricing scheme [0.0]
Dynamic residential load scheduling (DRLS) is proposed for optimal scheduling of household appliances on the basis of an adaptive consumption level pricing scheme (ACLPS).
The proposed load scheduling system encourages customers to keep their energy consumption within the allowable consumption allowance (CA) of the proposed demand response pricing scheme to achieve lower energy bills.
For a given case study, the proposed residential load scheduling system based on ACLPS allows customers to reduce their energy bills by up to 53% and the peak load by up to 35%.
arXiv Detail & Related papers (2020-07-23T11:14:39Z)
- Continuous Multiagent Control using Collective Behavior Entropy for Large-Scale Home Energy Management [36.82414045535202]
We propose a collective MA-DRL algorithm with continuous action space to provide fine-grained control on a large scale microgrid.
Our approach significantly outperforms the state-of-the-art methods regarding power cost reduction and daily peak loads optimization.
arXiv Detail & Related papers (2020-05-14T16:07:55Z)
- Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
arXiv Detail & Related papers (2020-05-05T07:32:40Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.