A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in
Smart Grids
- URL: http://arxiv.org/abs/2401.02653v1
- Date: Fri, 5 Jan 2024 06:04:46 GMT
- Title: A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in
Smart Grids
- Authors: Viorica Rozina Chifu, Tudor Cioara, Cristina Bianca Pop, Horia Rusu
and Ionut Anghel
- Abstract summary: We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate Q-values for the available actions, and applied the epsilon-greedy algorithm to balance exploitation and exploration while meeting the target energy profile.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Economic and policy factors are driving the continuous increase in the
adoption and usage of electrical vehicles (EVs). However, despite being a
cleaner alternative to combustion engine vehicles, EVs have negative impacts on
the lifespan of microgrid equipment and energy balance due to increased power
demand and the timing of their usage. In our view, grid management should
leverage the scheduling flexibility of EVs to support local network balancing
through active participation in demand response programs. In this paper, we
propose a model-free solution, leveraging Deep Q-Learning to schedule the
charging and discharging activities of EVs within a microgrid to align with a
target energy profile provided by the distribution system operator. We adapted
the Bellman equation to assess the value of a state based on specific rewards
for EV scheduling actions, used a neural network to estimate the Q-values of
the available actions, and applied the epsilon-greedy algorithm to balance
exploitation and exploration while meeting the target energy profile. The
results are promising, showing that the proposed solution can effectively
schedule the EV charging and discharging actions to align with the target
profile, achieving a Pearson correlation coefficient of 0.99 and handling
dynamic EV scheduling situations arising from e-mobility features, while
relying only on data and no prior knowledge of EV or microgrid dynamics.
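The scheduling loop the abstract describes (epsilon-greedy action selection, a Bellman update toward rewards that penalize deviation from the operator's target profile) can be sketched as a toy tabular Q-learning example. This is a minimal illustration, not the authors' implementation: the paper estimates Q-values with a neural network, whereas a Q-table stands in here, and every number below (the target profile, base load, EV power, and the state-of-charge discretization) is invented for the example.

```python
import random

ACTIONS = (0, 1, 2)   # 0 = idle, 1 = charge, 2 = discharge
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.2
SOC_MAX = 4           # discretized battery state-of-charge levels 0..4

TARGET = [3.0, 2.0, 4.0, 5.0]     # operator's target profile (kW per hour, toy values)
BASE_LOAD = [2.0, 2.0, 3.0, 6.0]  # microgrid load without the EV (kW per hour)
EV_POWER = 1.0                    # kW moved per charge/discharge step

Q = {}  # Q-table standing in for the paper's neural-network estimator

def q(state, action):
    return Q.get((state, action), 0.0)

def epsilon_greedy(state, eps=EPSILON):
    # Explore with probability eps, otherwise exploit current Q-values.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def step(hour, soc, action):
    # Infeasible actions (charging a full battery, discharging an
    # empty one) fall back to idling.
    if (action == 1 and soc >= SOC_MAX) or (action == 2 and soc <= 0):
        action = 0
    soc += {0: 0, 1: 1, 2: -1}[action]
    load = BASE_LOAD[hour] + {0: 0.0, 1: EV_POWER, 2: -EV_POWER}[action]
    # Reward penalizes deviation of the realized load from the target.
    return soc, load, -abs(load - TARGET[hour])

def train(episodes=2000, seed=0):
    random.seed(seed)
    for _ in range(episodes):
        soc = 2
        for hour in range(len(TARGET)):
            state = (hour, soc)
            action = epsilon_greedy(state)
            next_soc, _, reward = step(hour, soc, action)
            if hour + 1 < len(TARGET):
                best_next = max(q((hour + 1, next_soc), a) for a in ACTIONS)
            else:
                best_next = 0.0
            # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
            Q[(state, action)] = q(state, action) + ALPHA * (
                reward + GAMMA * best_next - q(state, action))
            soc = next_soc

def pearson(x, y):
    # Pearson correlation between realized and target profiles.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

train()

# Roll out the learned greedy policy (eps=0) and record the realized load.
soc, realized = 2, []
for hour in range(len(TARGET)):
    soc, load, _ = step(hour, soc, epsilon_greedy((hour, soc), eps=0.0))
    realized.append(load)
```

On this toy instance the trained greedy policy charges in the hours where the base load falls short of the target and discharges in the surplus hour, so the realized profile tracks the target closely, and its Pearson correlation with the target (the metric the paper reports as 0.99 in the real setting) can be checked with `pearson(realized, TARGET)`.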
Related papers
- Electric Vehicles coordination for grid balancing using multi-objective
Harris Hawks Optimization [0.0]
The rise of renewables coincides with the shift towards Electrical Vehicles (EVs) posing technical and operational challenges for the energy balance of the local grid.
Coordinating power flow from multiple EVs into the grid requires sophisticated algorithms and load-balancing strategies.
This paper proposes an EV fleet coordination model for day-ahead operation, aiming to ensure a reliable energy supply and maintain a stable local grid.
arXiv Detail & Related papers (2023-11-24T15:50:37Z) - Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
arXiv Detail & Related papers (2023-10-18T18:38:59Z) - Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement
Learning considering End Users Flexibility [1.3812010983144802]
This work aims to identify a cost-reducing EV charging policy for households under a Time-of-Use tariff scheme, using Deep Reinforcement Learning, specifically Deep Q-Networks (DQN).
A novel end-user flexibility potential reward is inferred from historical data analysis; households with solar power generation were used to train and test the algorithm.
arXiv Detail & Related papers (2023-10-13T12:07:36Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Federated Reinforcement Learning for Real-Time Electric Vehicle Charging
and Discharging Control [42.17503767317918]
This paper develops an optimal EV charging/discharging control strategy for different EV users under dynamic environments.
A horizontal federated reinforcement learning (HFRL)-based method is proposed to fit various users' behaviors and dynamic environments.
Simulation results illustrate that the proposed real-time EV charging/discharging control strategy can perform well among various factors.
arXiv Detail & Related papers (2022-10-04T08:22:46Z) - A Reinforcement Learning Approach for Electric Vehicle Routing Problem
with Vehicle-to-Grid Supply [2.6066825041242367]
We present QuikRouteFinder that uses reinforcement learning (RL) for EV routing to overcome these challenges.
Results from RL are compared against exact formulations based on mixed-integer linear program (MILP) and genetic algorithm (GA) metaheuristics.
arXiv Detail & Related papers (2022-04-12T06:13:06Z) - An Energy Consumption Model for Electrical Vehicle Networks via Extended
Federated-learning [50.85048976506701]
This paper proposes a novel solution to range anxiety based on a federated-learning model.
It is capable of estimating battery consumption and providing energy-efficient route planning for vehicle networks.
arXiv Detail & Related papers (2021-11-13T15:03:44Z) - Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning
for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
arXiv Detail & Related papers (2020-05-05T07:32:40Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.