Capacity-constrained demand response in smart grids using deep reinforcement learning
- URL: http://arxiv.org/abs/2602.16525v1
- Date: Wed, 18 Feb 2026 15:13:07 GMT
- Title: Capacity-constrained demand response in smart grids using deep reinforcement learning
- Authors: Shafagh Abband Pashaki, Sepehr Maleki, Amir Badiee
- Abstract summary: This paper presents a capacity-constrained incentive-based demand response approach for residential smart grids. It aims to maintain electricity grid capacity limits and prevent congestion by financially incentivising end users to reduce or shift their energy consumption.
- Score: 1.5749416770494706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a capacity-constrained incentive-based demand response approach for residential smart grids. It aims to maintain electricity grid capacity limits and prevent congestion by financially incentivising end users to reduce or shift their energy consumption. The proposed framework adopts a hierarchical architecture in which a service provider adjusts hourly incentive rates based on wholesale electricity prices and aggregated residential load. The financial interests of both the service provider and end users are explicitly considered. A deep reinforcement learning approach is employed to learn optimal real-time incentive rates under explicit capacity constraints. Heterogeneous user preferences are modelled through appliance-level home energy management systems and dissatisfaction costs. Using real-world residential electricity consumption and price data from three households, simulation results show that the proposed approach effectively reduces peak demand and smooths the aggregated load profile. This leads to an approximately 22.82% reduction in the peak-to-average ratio compared to the no-demand-response case.
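The peak-to-average ratio (PAR) metric behind the 22.82% figure above can be illustrated with a short sketch. This is a toy example with made-up load numbers, not the paper's data, method, or result; the function names and profiles are purely illustrative.

```python
# Minimal PAR sketch (illustrative only, not the paper's implementation).

def peak_to_average_ratio(load):
    """PAR = maximum hourly load divided by mean hourly load."""
    return max(load) / (sum(load) / len(load))

# Illustrative 6-hour aggregated load profile (kW) with an evening peak.
baseline = [2.0, 2.5, 3.0, 6.0, 5.5, 3.0]

# Hypothetical post-demand-response profile: the peak is shifted to
# off-peak hours while total consumption stays the same (load is
# shifted, not curtailed).
shifted = [3.0, 3.5, 3.5, 4.5, 4.0, 3.5]

par_before = peak_to_average_ratio(baseline)   # 6.0 / (22/6) ≈ 1.636
par_after = peak_to_average_ratio(shifted)     # 4.5 / (22/6) ≈ 1.227
reduction = 100 * (par_before - par_after) / par_before  # 25.0% here
```

Because the total energy is unchanged, the denominator (average load) is identical before and after, so the PAR reduction comes entirely from flattening the peak.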
Related papers
- Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning [51.02352381270177]
Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology.
The choice of the cut layer in SFL can have a substantial impact on the energy consumption of clients and their privacy.
This article provides a comprehensive overview of the SFL process and thoroughly analyzes its energy consumption and privacy implications.
arXiv Detail & Related papers (2023-11-15T23:23:42Z) - Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement Learning considering End Users Flexibility [1.3812010983144802]
This work aims to identify households' cost-reducing EV charging policies under a Time-of-Use tariff scheme using Deep Reinforcement Learning, specifically Deep Q-Networks (DQN).
A novel end-user flexibility potential reward is inferred from historical data analysis; households with solar power generation were used to train and test the algorithm.
arXiv Detail & Related papers (2023-10-13T12:07:36Z) - ILB: Graph Neural Network Enabled Emergency Demand Response Program For Electricity [6.123324869194196]
In times of crisis, an emergency Demand Response program is required to manage unexpected spikes in energy demand.
We propose the Incentive-Driven Load Balancer (ILB), a program designed to efficiently manage demand and response during crisis situations.
We introduce a two-step machine learning-based framework for participant selection, which employs a graph-based approach to identify households capable of easily adjusting their electricity consumption.
arXiv Detail & Related papers (2023-09-29T20:38:04Z) - Equitable Time-Varying Pricing Tariff Design: A Joint Learning and Optimization Approach [0.0]
Time-varying pricing tariffs incentivize consumers to shift their electricity demand and reduce costs, but may increase the energy burden for consumers with limited response capability.
This paper proposes a joint learning-based identification and optimization method to design equitable time-varying tariffs.
arXiv Detail & Related papers (2023-07-26T20:14:23Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time, demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Dynamic residential load scheduling based on an adaptive consumption level pricing scheme [0.0]
Dynamic residential load scheduling (DRLS) is proposed for optimal scheduling of household appliances on the basis of an adaptive consumption level pricing scheme (ACLPS).
The proposed load scheduling system encourages customers to keep their energy consumption within the consumption allowance (CA) of the proposed DR pricing scheme to achieve lower energy bills.
For a given case study, the proposed residential load scheduling system based on ACLPS allows customers to reduce their energy bills by up to 53% and to decrease the peak load by up to 35%.
arXiv Detail & Related papers (2020-07-23T11:14:39Z) - Distributed Deep Reinforcement Learning for Intelligent Load Scheduling in Residential Smart Grids [9.208362060870822]
We propose a model-free method for households that works with limited information about the uncertain factors.
We then utilize real-world data from Pecan Street Inc., which contains the power consumption profiles of more than 1,000 households.
On average, the results reveal around a 12% reduction in the peak-to-average ratio (PAR) and an 11% reduction in load variance.
arXiv Detail & Related papers (2020-06-29T15:01:51Z) - Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
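The cost-minimization objective under real-time pricing described above can be sketched in miniature: a single shiftable appliance is scheduled into the cheapest contiguous window of a price vector. This is a hedged illustration with made-up prices, not the cited paper's Markov-game or multi-agent method; `cheapest_window` is a hypothetical helper.

```python
# Illustrative sketch: greedy scheduling of one shiftable appliance
# under a real-time price vector (toy data, not the cited paper's method).

def cheapest_window(prices, duration):
    """Return the start hour of the contiguous `duration`-hour window
    with the lowest total price."""
    costs = [sum(prices[t:t + duration])
             for t in range(len(prices) - duration + 1)]
    return min(range(len(costs)), key=costs.__getitem__)

prices = [0.30, 0.25, 0.10, 0.12, 0.28, 0.35]  # $/kWh, illustrative
start = cheapest_window(prices, duration=2)     # appliance runs 2 hours
cost = sum(prices[start:start + 2])             # total cost of that window
```

The multi-agent setting in the cited paper is harder than this greedy toy: when every household shifts toward the same cheap hours, prices and congestion respond, which is why a game-theoretic (Markov game) formulation is used.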
arXiv Detail & Related papers (2020-05-05T07:32:40Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.