Demand Response Optimization MILP Framework for Microgrids with DERs
- URL: http://arxiv.org/abs/2502.08764v1
- Date: Wed, 12 Feb 2025 20:10:51 GMT
- Title: Demand Response Optimization MILP Framework for Microgrids with DERs
- Authors: K. Victor Sam Moses Babu, Pratyush Chakraborty, Mayukha Pal
- Abstract summary: This paper presents a framework for optimizing demand response in a microgrid with solar generation and battery storage systems.
The framework incorporates load classification, dynamic price thresholding, and multi-period coordination for optimal DR event scheduling.
- Abstract: The integration of renewable energy sources in microgrids introduces significant operational challenges due to their intermittent nature and the mismatch between generation and demand patterns. Effective demand response (DR) strategies are crucial for maintaining system stability and economic efficiency, particularly in microgrids with high renewable penetration. This paper presents a comprehensive mixed-integer linear programming (MILP) framework for optimizing DR operations in a microgrid with solar generation and battery storage systems. The framework incorporates load classification, dynamic price thresholding, and multi-period coordination for optimal DR event scheduling. Analysis across seven distinct operational scenarios demonstrates consistent peak load reduction of 10% while achieving energy cost savings ranging from 13.1% to 38.0%. The highest performance was observed in scenarios with high solar generation, where the framework achieved 38.0% energy cost reduction through optimal coordination of renewable resources and DR actions. The results validate the framework's effectiveness in managing diverse operational challenges while maintaining system stability and economic efficiency.
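To make the framework's main ingredients concrete, here is a minimal sketch of how such a DR scheduling MILP could be posed in Python with PuLP. Every number below (horizon, tariff, load and PV profiles, the 10% shed per event, the event budget, and the battery parameters) is an illustrative assumption rather than a value from the paper, and the paper's load-classification step is omitted for brevity.

```python
# Illustrative MILP sketch of DR event scheduling with solar and battery
# storage, in the spirit of the paper's framework. All parameter values
# are assumptions for this example, not values from the paper.
import pulp

T = 24                                                          # hourly periods
price = [0.10 + 0.15 * (8 <= t <= 20) for t in range(T)]        # $/kWh, assumed tariff
load = [50.0 + 30.0 * (17 <= t <= 21) for t in range(T)]        # kW baseline demand
solar = [max(0.0, 40.0 - 8.0 * abs(t - 12)) for t in range(T)]  # kW, assumed PV profile
price_threshold = 0.20   # dynamic price thresholding: DR only above this price
max_events = 4           # multi-period coordination: budget on total DR events
dr_shed = 0.10           # each DR event sheds 10% of that period's load
batt_cap, batt_power, eff, soc0 = 100.0, 25.0, 0.95, 50.0

m = pulp.LpProblem("dr_scheduling", pulp.LpMinimize)
grid = pulp.LpVariable.dicts("grid", range(T), lowBound=0)      # kW imported
exp = pulp.LpVariable.dicts("export", range(T), lowBound=0)     # kW exported/curtailed
ch = pulp.LpVariable.dicts("charge", range(T), lowBound=0, upBound=batt_power)
dis = pulp.LpVariable.dicts("discharge", range(T), lowBound=0, upBound=batt_power)
soc = pulp.LpVariable.dicts("soc", range(T), lowBound=0, upBound=batt_cap)
ev = pulp.LpVariable.dicts("dr_event", range(T), cat="Binary")  # DR event flag

m += pulp.lpSum(price[t] * grid[t] for t in range(T))           # minimize energy cost

for t in range(T):
    # Power balance: imports + PV + discharge cover the (possibly shed) load,
    # charging, and any surplus that is exported or curtailed.
    m += grid[t] + solar[t] + dis[t] == load[t] * (1 - dr_shed * ev[t]) + ch[t] + exp[t]
    # Battery state of charge with round-trip losses.
    prev = soc0 if t == 0 else soc[t - 1]
    m += soc[t] == prev + eff * ch[t] - dis[t] / eff
    # Price threshold: DR events are only allowed in high-price periods.
    if price[t] <= price_threshold:
        m += ev[t] == 0

m += pulp.lpSum(ev[t] for t in range(T)) <= max_events          # event budget

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], "cost =", round(pulp.value(m.objective), 2))
print("DR events at hours:", [t for t in range(T) if ev[t].value() > 0.5])
```

In this sketch the binary `dr_event` variables capture event scheduling, the guard on `price_threshold` is a simple stand-in for the paper's dynamic price thresholding, and the cap on total events is one way to express multi-period coordination.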
Related papers
- Integrated Optimization and Game Theory Framework for Fair Cost Allocation in Community Microgrids [0.0]
This paper presents a novel framework integrating multi-objective optimization with cooperative game theory for fair and efficient microgrid operation and cost allocation.
Results show peak demand reductions ranging from 7.8% to 62.6%, solar utilization rates reaching 114.8% through effective storage integration, and cooperative gains of up to $1,801.01 per day.
arXiv Detail & Related papers (2025-02-13T04:28:17Z)
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Collaborative Optimization of Multi-microgrids System with Shared Energy Storage Based on Multi-agent Stochastic Game and Reinforcement Learning [8.511196076836592]
The proposed MMG system framework reduces energy fluctuations in the main grid by 1,746.5 kW over 24 hours and achieves a cost reduction of 16.21% in testing.
The superiority of the proposed algorithms is verified through their fast convergence speed and excellent optimization performance.
arXiv Detail & Related papers (2023-06-19T07:55:41Z)
- Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z)
- Optimal Planning of Hybrid Energy Storage Systems using Curtailed Renewable Energy through Deep Reinforcement Learning [0.0]
We propose a sophisticated deep reinforcement learning (DRL) methodology with a policy-based algorithm to plan energy storage systems (ESS).
A quantitative performance comparison proved that the DRL agent outperforms the scenario-based optimization (SO) algorithm.
The corresponding results confirmed that the DRL agent learns in a manner similar to a human expert, suggesting the proposed methodology can be applied reliably.
arXiv Detail & Related papers (2022-12-12T02:24:50Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- AI-based Optimal scheduling of Renewable AC Microgrids with bidirectional LSTM-Based Wind Power Forecasting [5.039813366558306]
This paper proposes an effective framework for optimal scheduling of microgrids considering energy storage devices, wind turbines, and microturbines.
A deep learning model based on bidirectional long short-term memory is proposed to address the short-term wind power forecasting problem.
Results show the effective and efficient performance of the proposed framework in the optimal scheduling of microgrids.
arXiv Detail & Related papers (2022-07-08T14:40:31Z)
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a reinforcement learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Demand Responsive Dynamic Pricing Framework for Prosumer Dominated Microgrids using Multiagent Reinforcement Learning [59.28219519916883]
This paper proposes a new multiagent reinforcement learning-based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer-dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
arXiv Detail & Related papers (2020-09-23T01:44:57Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.