Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement
Learning considering End Users Flexibility
- URL: http://arxiv.org/abs/2310.09040v1
- Date: Fri, 13 Oct 2023 12:07:36 GMT
- Title: Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement
Learning considering End Users Flexibility
- Authors: Christoforos Menos-Aikateriniadis, Stavros Sykiotis, Pavlos S.
Georgilakis
- Abstract summary: This work aims to identify a cost-reducing EV charging policy for households under a Time-of-Use tariff scheme, using Deep Reinforcement Learning, and more specifically Deep Q-Networks (DQN).
A novel end-user flexibility potential reward is inferred from historical data analysis, where households with solar power generation have been used to train and test the algorithm.
- Score: 1.3812010983144802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth of decentralized energy resources, and especially Electric
Vehicles (EVs), whose numbers are expected to increase sharply over the next decade, will
put further stress on existing power distribution networks, increasing the need
for higher system reliability and flexibility. In an attempt to avoid
unnecessary network investments and to increase controllability over
distribution networks, network operators develop demand response (DR) programs
that incentivize end users to shift their consumption in return for financial
or other benefits. Artificial intelligence (AI) methods are at the forefront of
research on residential load scheduling applications, mainly due to their
high accuracy, high computational speed, and lower dependence on the physical
characteristics of the models under development. The aim of this work is to
identify a cost-reducing EV charging policy for households under a Time-of-Use
tariff scheme, using Deep Reinforcement Learning, and more
specifically Deep Q-Networks (DQN). A novel end-user flexibility potential
reward is inferred from historical data analysis, where households with solar
power generation have been used to train and test the designed algorithm. The
suggested DQN EV charging policy can lead to savings of more than 20% on end
users' electricity bills.
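The kind of policy the abstract describes can be illustrated with a small tabular Q-learning sketch for scheduling charging hours under a Time-of-Use tariff. All numbers below (tariff levels, horizon, energy requirement) are assumed for illustration; the paper itself uses Deep Q-Networks with a data-driven flexibility reward, not this toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Time-of-Use tariff (EUR/kWh) over a 24-hour scheduling window:
# cheap overnight slots, a mid-price day block, and an evening peak.
TOU = np.array([0.10] * 7 + [0.18] * 10 + [0.25] * 4 + [0.10] * 3)
HORIZON, NEED = 24, 4          # hours in the window, charging hours required

# Tabular stand-in for the DQN: Q[hour, hours-still-needed, action].
Q = np.zeros((HORIZON, NEED + 1, 2))
alpha, gamma, eps = 0.2, 1.0, 0.1

def pick(hour, need, greedy):
    if need == 0:
        return 0                              # battery already full
    if need >= HORIZON - hour:
        return 1                              # must charge now to meet the deadline
    if not greedy and rng.random() < eps:
        return int(rng.integers(2))           # epsilon-greedy exploration
    return int(np.argmax(Q[hour, need]))

for _ in range(5000):
    need = NEED
    for hour in range(HORIZON):
        a = pick(hour, need, greedy=False)
        r = -TOU[hour] * a                    # reward = negative energy cost
        nxt = need - a
        target = 0.0 if hour == HORIZON - 1 else np.max(Q[hour + 1, nxt])
        Q[hour, need, a] += alpha * (r + gamma * target - Q[hour, need, a])
        need = nxt

# Greedy rollout of the learned policy: which hours does it charge in?
need, hours = NEED, []
for hour in range(HORIZON):
    a = pick(hour, need, greedy=True)
    if a:
        hours.append(hour)
        need -= 1
```

With the deadline constraint enforced, the learned policy concentrates charging in the cheap tariff slots, which is the mechanism behind the bill savings reported above.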
Related papers
- A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in
Smart Grids [0.0]
We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapted the Bellman Equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate Q-values for the available actions, and applied the epsilon-greedy algorithm to balance exploitation and exploration in meeting the target energy profile.
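The two ingredients named above, the Bellman target used to train the Q-network and epsilon-greedy action selection, can be sketched in a few lines. The array shapes and numbers are illustrative, not taken from the paper.

```python
import numpy as np

# Bellman target for a batch of transitions, as used in DQN-style training:
#   y_i = r_i + gamma * max_a' Q_target(s'_i, a'), with y_i = r_i at terminal states.
def bellman_targets(rewards, next_q_values, dones, gamma=0.99):
    # next_q_values: (batch, n_actions) array produced by the target network.
    return rewards + gamma * (1.0 - dones) * next_q_values.max(axis=1)

def epsilon_greedy(q_row, eps, rng):
    # q_row: (n_actions,) Q-values for the current state.
    if rng.random() < eps:
        return int(rng.integers(len(q_row)))   # explore
    return int(np.argmax(q_row))               # exploit

rng = np.random.default_rng(1)
r = np.array([0.0, -0.5])
nq = np.array([[1.0, 2.0], [0.5, 0.0]])        # next-state Q-values
done = np.array([0.0, 1.0])                    # second transition is terminal
y = bellman_targets(r, nq, done, gamma=0.9)    # -> [1.8, -0.5]
a = epsilon_greedy(nq[0], 0.0, rng)            # greedy pick -> action 1
```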
arXiv Detail & Related papers (2024-01-05T06:04:46Z) - Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
arXiv Detail & Related papers (2023-10-18T18:38:59Z) - An Efficient Distributed Multi-Agent Reinforcement Learning for EV
Charging Network Control [2.5477011559292175]
We introduce a decentralized Multi-agent Reinforcement Learning (MARL) charging framework that prioritizes the preservation of privacy for EV owners.
Our results demonstrate that the CTDE framework improves the performance of the charging network by reducing the network costs.
arXiv Detail & Related papers (2023-08-24T16:53:52Z) - DClEVerNet: Deep Combinatorial Learning for Efficient EV Charging
Scheduling in Large-scale Networked Facilities [5.78463306498655]
Electric vehicles (EVs) might stress distribution networks significantly, degrading their performance and jeopardizing their stability.
Modern power grids require coordinated or "smart" charging strategies capable of optimizing EV charging scheduling in a scalable and efficient fashion.
We formulate a time-coupled binary optimization problem that maximizes EV users' total welfare gain while accounting for the network's available power capacity and stations' occupancy limits.
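A toy version of such a time-coupled binary program can be solved exactly by exhaustive search at small scale, which makes the constraint structure concrete. The instance below is invented for illustration; the paper's contribution is scaling this up with deep combinatorial learning.

```python
from itertools import product

# Toy instance (assumed numbers): 3 EVs, 4 time slots.
# welfare[v][t]: user v's welfare gain if charged in slot t.
welfare = [
    [3, 1, 2, 0],
    [2, 2, 1, 1],
    [1, 3, 3, 2],
]
need = [2, 1, 2]        # slots of charging each EV requires (time coupling)
cap = [2, 2, 1, 2]      # station occupancy limit per slot

n_ev, n_t = len(welfare), len(cap)
best_val, best_x = -1, None

# Exhaustive search over binary schedules x[v][t]; fine at this size,
# infeasible for large networked facilities.
for bits in product([0, 1], repeat=n_ev * n_t):
    x = [bits[v * n_t:(v + 1) * n_t] for v in range(n_ev)]
    if any(sum(row) != need[v] for v, row in enumerate(x)):
        continue                         # energy requirement not met
    if any(sum(x[v][t] for v in range(n_ev)) > cap[t] for t in range(n_t)):
        continue                         # occupancy limit violated
    val = sum(welfare[v][t] * x[v][t] for v in range(n_ev) for t in range(n_t))
    if val > best_val:
        best_val, best_x = val, x
```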
arXiv Detail & Related papers (2023-05-18T14:03:47Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Solar Power driven EV Charging Optimization with Deep Reinforcement
Learning [6.936743119804558]
Decentralized energy resources, such as Electric Vehicles (EV) and solar photovoltaic systems (PV), are continuously integrated in residential power systems.
This paper aims to address the challenge of domestic EV charging while prioritizing clean, solar energy consumption.
arXiv Detail & Related papers (2022-11-17T11:52:27Z) - The impact of online machine-learning methods on long-term investment
decisions and generator utilization in electricity markets [69.68068088508505]
We investigate the impact of eleven offline and five online learning algorithms to predict the electricity demand profile over the next 24h.
We show we can reduce the mean absolute error by 30% using an online algorithm when compared to the best offline algorithm.
We also show that large errors in prediction accuracy have a disproportionate impact on investments made over a 17-year time frame.
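The advantage of online updates over a frozen offline model is easy to reproduce on synthetic drifting data. The series and models below are illustrative stand-ins, not the eleven offline and five online algorithms compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic demand series (assumed): the mean level shifts halfway through,
# a drift that a model fitted once offline cannot track.
demand = np.concatenate([
    100 + rng.normal(0, 2, 200),
    140 + rng.normal(0, 2, 200),
])

# Offline baseline: mean of an initial training window, then frozen.
offline_pred = np.full(len(demand), demand[:100].mean())

# Online model: exponentially weighted mean, updated after every observation.
online_pred = np.empty(len(demand))
level = demand[0]
for t, y in enumerate(demand):
    online_pred[t] = level            # predict before observing y
    level += 0.2 * (y - level)        # online update (learning rate 0.2)

mae_off = np.abs(demand - offline_pred).mean()
mae_on = np.abs(demand - online_pred).mean()
```

On this drifting series the online model's mean absolute error is far below the offline baseline's, the same qualitative effect as the 30% reduction reported above.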
arXiv Detail & Related papers (2021-03-07T11:28:54Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed
Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning
for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
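A Markov-game formulation of this kind can be sketched as a minimal environment in which a shared real-time price couples the households' rewards. All class names and numbers below are assumed for illustration.

```python
# Minimal sketch of a smart-grid Markov game: each household chooses whether
# to run a deferrable 1 kWh appliance this step; the real-time price rises
# with aggregate load, so one agent's action affects every agent's cost.
class SmartGridGame:
    def __init__(self, n_households=3, base_price=0.10, slope=0.05):
        self.n = n_households
        self.base_price = base_price
        self.slope = slope
        self.t = 0

    def step(self, actions):
        # actions[i] in {0, 1}: defer or run household i's appliance.
        load = sum(actions)
        price = self.base_price + self.slope * load   # real-time pricing
        costs = [price * a for a in actions]          # per-agent cost
        self.t += 1
        state = (self.t, load)                        # shared observation
        return state, [-c for c in costs]             # rewards = negative cost

env = SmartGridGame()
state, rewards = env.step([1, 0, 1])
# load = 2, price = 0.10 + 0.05 * 2 = 0.20; agents 0 and 2 each pay 0.20
```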
arXiv Detail & Related papers (2020-05-05T07:32:40Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.