Electric Vehicles coordination for grid balancing using multi-objective
Harris Hawks Optimization
- URL: http://arxiv.org/abs/2311.14563v1
- Date: Fri, 24 Nov 2023 15:50:37 GMT
- Title: Electric Vehicles coordination for grid balancing using multi-objective
Harris Hawks Optimization
- Authors: Cristina Bianca Pop, Tudor Cioara, Viorica Chifu, Ionut Anghel,
Francesco Bellesini
- Abstract summary: The rise of renewables coincides with the shift towards Electric Vehicles (EVs), posing technical and operational challenges for the energy balance of the local grid.
Coordinating power flow from multiple EVs into the grid requires sophisticated algorithms and load-balancing strategies.
This paper proposes a day-ahead EV fleet coordination model aiming to ensure a reliable energy supply and maintain a stable local grid.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rise of renewables coincides with the shift towards Electric Vehicles
(EVs), posing technical and operational challenges for the energy balance of the
local grid. Today's energy grid cannot absorb a spike in EV usage, creating a
need for more coordinated, grid-aware EV charging and discharging strategies.
However, coordinating power flow from multiple EVs into the grid requires
sophisticated algorithms and load-balancing strategies, as complexity grows with
the number of control variables and EVs, producing large optimization and
decision search spaces. In this paper, we propose a day-ahead EV fleet
coordination model that aims to ensure a reliable energy supply and maintain a
stable local grid by using EVs to store surplus energy and discharge it during
periods of energy deficit. The optimization problem is addressed using Harris
Hawks Optimization (HHO), considering criteria related to energy grid balancing,
time usage preferences, and the location of EV drivers. The EV schedules,
associated with the positions of individuals in the population, are adjusted
through exploration and exploitation operations while their technical and
operational feasibility is ensured; the rabbit individual is updated each
iteration with a non-dominated EV schedule selected via a roulette wheel
algorithm. The solution is evaluated within the framework of an e-mobility
service in the city of Terni. The results indicate that coordinated charging and
discharging of EVs not only meets balancing service requirements but also aligns
with user preferences with minimal deviations.
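To make the HHO loop described above concrete, here is a minimal, self-contained sketch. All data and names (`grid_target`, `preferred_idle`, the fleet and battery sizes) are synthetic assumptions, and only a soft-besiege-style move toward the rabbit is modeled rather than the full set of HHO operators; it is a sketch of the idea, not the paper's implementation.

```python
import random

random.seed(42)

HOURS, N_EVS, POP, ITERS = 24, 3, 20, 60
BATTERY_KWH, MAX_RATE = 40.0, 7.0          # per-EV capacity (kWh) and power limit (kW)
# Surplus (>0) to absorb or deficit (<0) to cover, per hour (kW) -- synthetic profile.
grid_target = [5.0 if 10 <= h <= 15 else -4.0 if 18 <= h <= 21 else 0.0
               for h in range(HOURS)]
# Hours each driver prefers the EV to stay idle (e.g., commuting) -- synthetic.
preferred_idle = [{7, 8, 17}, {8, 9, 18}, {7, 16, 17}]

def random_schedule():
    # schedule[ev][h] = power in kW; >0 charging (absorbing surplus), <0 discharging
    return [[random.uniform(-MAX_RATE, MAX_RATE) for _ in range(HOURS)]
            for _ in range(N_EVS)]

def repair(s):
    # Enforce technical feasibility: rate limits and state of charge within bounds.
    for ev in range(N_EVS):
        soc = BATTERY_KWH / 2
        for h in range(HOURS):
            p = max(-MAX_RATE, min(MAX_RATE, s[ev][h]))
            p = max(p, -soc)                      # cannot discharge below empty
            p = min(p, BATTERY_KWH - soc)         # cannot charge above capacity
            s[ev][h] = p
            soc += p
    return s

def objectives(s):
    # Two criteria to minimize: grid imbalance and deviation from driver preferences.
    imbalance = sum(abs(grid_target[h] - sum(s[ev][h] for ev in range(N_EVS)))
                    for h in range(HOURS))
    pref_dev = sum(abs(s[ev][h]) for ev in range(N_EVS) for h in preferred_idle[ev])
    return imbalance, pref_dev

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def roulette_pick(front):
    # Fitness-proportional pick among non-dominated schedules (lower cost -> heavier).
    weights = [1.0 / (1e-9 + sum(objectives(s))) for s in front]
    return random.choices(front, weights=weights, k=1)[0]

pop = [repair(random_schedule()) for _ in range(POP)]
rabbit = pop[0]
for it in range(ITERS):
    objs = [objectives(s) for s in pop]
    front = [s for i, s in enumerate(pop)
             if not any(dominates(objs[j], objs[i]) for j in range(POP) if j != i)]
    rabbit = roulette_pick(front)                 # best "prey" guides the hawks
    E = 2 * (1 - it / ITERS) * random.uniform(-1, 1)  # decaying escaping energy
    for i, s in enumerate(pop):
        new = [[s[ev][h] + random.uniform(0, 1) * abs(E) * (rabbit[ev][h] - s[ev][h])
                for h in range(HOURS)] for ev in range(N_EVS)]
        pop[i] = repair(new)

print("final objectives:", tuple(round(v, 1) for v in objectives(rabbit)))
```

The repair step mirrors the feasibility guarantee the abstract mentions: every candidate is clamped back to valid power rates and a non-negative state of charge before it re-enters the population.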
Related papers
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z) - Uncertainty-Aware Critic Augmentation for Hierarchical Multi-Agent EV Charging Control [9.96602699887327]
We propose HUCA, a novel real-time charging control for regulating energy demands for both the building and EVs.
HUCA employs hierarchical actor-critic networks to dynamically reduce electricity costs in buildings, accounting for the needs of EV charging in the dynamic pricing scenario.
Experiments on real-world electricity datasets under both simulated certain and uncertain departure scenarios demonstrate that HUCA outperforms baselines in terms of total electricity costs.
arXiv Detail & Related papers (2024-12-23T23:45:45Z) - A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in
Smart Grids [0.0]
We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate Q-values for the available actions, and applied the epsilon-greedy algorithm to balance exploitation and exploration while meeting the target energy profile.
arXiv Detail & Related papers (2024-01-05T06:04:46Z) - Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
arXiv Detail & Related papers (2023-10-18T18:38:59Z) - Federated Reinforcement Learning for Electric Vehicles Charging Control
on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z) - Deep Reinforcement Learning-Based Battery Conditioning Hierarchical V2G
Coordination for Multi-Stakeholder Benefits [3.4529246211079645]
This study proposes a multi-stakeholder hierarchical V2G coordination based on deep reinforcement learning (DRL) and the Proof of Stake algorithm.
The multi-stakeholders include the power grid, EV aggregators (EVAs), and users, and the proposed strategy can achieve multi-stakeholder benefits.
arXiv Detail & Related papers (2023-08-01T01:19:56Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Federated Reinforcement Learning for Real-Time Electric Vehicle Charging
and Discharging Control [42.17503767317918]
This paper develops an optimal EV charging/discharging control strategy for different EV users under dynamic environments.
A horizontal federated reinforcement learning (HFRL)-based method is proposed to fit various users' behaviors and dynamic environments.
Simulation results illustrate that the proposed real-time EV charging/discharging control strategy can perform well among various factors.
arXiv Detail & Related papers (2022-10-04T08:22:46Z) - A Reinforcement Learning Approach for Electric Vehicle Routing Problem
with Vehicle-to-Grid Supply [2.6066825041242367]
We present QuikRouteFinder that uses reinforcement learning (RL) for EV routing to overcome these challenges.
Results from RL are compared against exact formulations based on mixed-integer linear program (MILP) and genetic algorithm (GA) metaheuristics.
arXiv Detail & Related papers (2022-04-12T06:13:06Z) - An Energy Consumption Model for Electrical Vehicle Networks via Extended
Federated-learning [50.85048976506701]
This paper proposes a novel solution to range anxiety based on a federated-learning model.
It is capable of estimating battery consumption and providing energy-efficient route planning for vehicle networks.
arXiv Detail & Related papers (2021-11-13T15:03:44Z)
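The Deep Q-Learning entry above couples a Bellman update with epsilon-greedy action selection; a minimal tabular sketch of that idea (synthetic target profile, reward, and hyperparameters, not the cited paper's implementation) might read:

```python
import random

random.seed(0)
# Toy tabular Q-learning for hourly EV charge/idle/discharge decisions.
HOURS, ACTIONS = 24, (-1, 0, 1)            # -1 discharge, 0 idle, +1 charge
# Desired hourly action profile -- synthetic stand-in for a demand-response target.
target = [1 if 10 <= h <= 15 else -1 if 18 <= h <= 21 else 0 for h in range(HOURS)]
Q = {(h, a): 0.0 for h in range(HOURS) for a in ACTIONS}
alpha, gamma, eps = 0.3, 0.9, 0.2

def reward(h, a):
    return -abs(target[h] - a)             # penalize deviation from the target profile

for episode in range(500):
    for h in range(HOURS):
        # epsilon-greedy: explore with probability eps, otherwise exploit
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(h, x)])
        nxt = 0.0 if h == HOURS - 1 else max(Q[(h + 1, b)] for b in ACTIONS)
        # Bellman update toward reward plus discounted best next value
        Q[(h, a)] += alpha * (reward(h, a) + gamma * nxt - Q[(h, a)])

greedy = [max(ACTIONS, key=lambda a: Q[(h, a)]) for h in range(HOURS)]
```

A deep Q-network replaces the table `Q` with a neural network when the state space (prices, state of charge, departure times) is too large to enumerate.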
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.