Electric Vehicles coordination for grid balancing using multi-objective
Harris Hawks Optimization
- URL: http://arxiv.org/abs/2311.14563v1
- Date: Fri, 24 Nov 2023 15:50:37 GMT
- Title: Electric Vehicles coordination for grid balancing using multi-objective
Harris Hawks Optimization
- Authors: Cristina Bianca Pop, Tudor Cioara, Viorica Chifu, Ionut Anghel,
Francesco Bellesini
- Abstract summary: The rise of renewables coincides with the shift towards Electric Vehicles (EVs), posing technical and operational challenges for the energy balance of the local grid.
Coordinating power flow from multiple EVs into the grid requires sophisticated algorithms and load-balancing strategies.
This paper proposes a day-ahead EV fleet coordination model aiming to ensure a reliable energy supply and maintain a stable local grid.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rise of renewables coincides with the shift towards Electric
Vehicles (EVs), posing technical and operational challenges for the energy
balance of the local grid. Today's energy grid cannot cope with a spike in EV
usage, creating a need for more coordinated and grid-aware EV charging and
discharging strategies. However, coordinating power flow from multiple EVs
into the grid requires sophisticated algorithms and load-balancing strategies,
as the complexity increases with more control variables and EVs, leading to
large optimization and decision search spaces. In this paper, we propose a
day-ahead EV fleet coordination model that aims to ensure a reliable energy
supply and maintain a stable local grid by using EVs to store surplus energy
and discharge it during periods of energy deficit. The optimization problem is
addressed using Harris Hawks Optimization (HHO), considering criteria related
to energy grid balancing, time usage preference, and the location of EV
drivers. The EV schedules, encoded as the positions of individuals in the
population, are adjusted through exploration and exploitation operations while
their technical and operational feasibility is enforced; the rabbit (best)
individual is updated each iteration with a non-dominated EV schedule selected
using a roulette wheel algorithm. The solution is evaluated within the
framework of an e-mobility service in the city of Terni. The results indicate
that coordinated charging and discharging of EVs not only meets the balancing
service requirements but also aligns with user preferences with minimal
deviations.
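To make the optimization loop described in the abstract concrete, the following is a minimal sketch of a multi-objective HHO iteration in which candidate EV fleet schedules are the hawk positions, a roulette wheel selects the rabbit from the non-dominated set, and feasibility is enforced by clamping power to charger limits. The objective functions, bounds, and simplified exploration/exploitation moves are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EVS, HOURS, POP, ITERS = 10, 24, 30, 100
P_MAX = 7.4  # assumed per-EV charger power limit in kW

# placeholder day-ahead profiles: grid surplus/deficit and user-preferred power
surplus = rng.normal(0, 20, HOURS)
preferred = rng.uniform(-1, 1, (N_EVS, HOURS)) * P_MAX

def objectives(schedule):
    """Objective vector (lower is better): grid imbalance and preference deviation."""
    imbalance = np.abs(surplus + schedule.sum(axis=0)).sum()
    preference = np.abs(schedule - preferred).sum()
    return np.array([imbalance, preference])

def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and strictly better somewhere."""
    return np.all(a <= b) and np.any(a < b)

def feasible(schedule):
    """Clamp charge/discharge power to limits (stand-in for full feasibility checks)."""
    return np.clip(schedule, -P_MAX, P_MAX)

# population of candidate fleet schedules (hawk positions), one (N_EVS, HOURS) array each
pop = [feasible(rng.uniform(-P_MAX, P_MAX, (N_EVS, HOURS))) for _ in range(POP)]

for it in range(ITERS):
    fits = [objectives(s) for s in pop]
    # indices of non-dominated schedules in the current population
    nd = [i for i in range(POP)
          if not any(dominates(fits[j], fits[i]) for j in range(POP) if j != i)]
    # roulette wheel over the non-dominated set picks the rabbit (best) individual
    weights = np.array([1.0 / (1e-9 + fits[i].sum()) for i in nd])
    rabbit = pop[rng.choice(nd, p=weights / weights.sum())]

    E = 2 * (1 - it / ITERS) * rng.uniform(-1, 1)  # rabbit escaping energy
    for k in range(POP):
        if abs(E) >= 1:  # exploration: move relative to a random hawk or the mean
            ref = pop[rng.integers(POP)] if rng.random() < 0.5 else np.mean(pop, axis=0)
            pop[k] = feasible(ref - rng.uniform(-1, 1) * np.abs(ref - pop[k]))
        else:            # exploitation: besiege the rabbit
            pop[k] = feasible(rabbit - E * np.abs(rabbit - pop[k]))

print("final rabbit objectives:", objectives(rabbit))
```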
Related papers
- A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in Smart Grids [0.0]
We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate the Q-values of the available actions, and applied the epsilon-greedy algorithm to balance exploration and exploitation in meeting the target energy profile.
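As a rough illustration of the scheduling step summarized above, the sketch below shows a Q-network, an epsilon-greedy choice over charge/idle/discharge actions, and a single Bellman-target update. The state features, action set, and reward are placeholder assumptions rather than that paper's actual design.

```python
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 3   # assumed features; actions: charge / idle / discharge
GAMMA, EPSILON = 0.99, 0.1

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state):
    """Epsilon-greedy: explore with probability EPSILON, otherwise exploit."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def td_update(state, action, reward, next_state, done):
    """One Bellman update: Q(s,a) <- r + gamma * max_a' Q(s',a')."""
    q_pred = q_net(torch.as_tensor(state, dtype=torch.float32))[action]
    with torch.no_grad():
        q_next = q_net(torch.as_tensor(next_state, dtype=torch.float32)).max()
        target = reward + (0.0 if done else GAMMA * q_next)
    loss = nn.functional.mse_loss(q_pred, torch.as_tensor(target, dtype=torch.float32))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```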
arXiv Detail & Related papers (2024-01-05T06:04:46Z)
- Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
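The summary describes the detector only as an unsupervised deep learning mechanism that monitors charging parameters; one common instantiation of that idea, shown below purely as an assumed example, is an autoencoder trained on benign sessions that flags high reconstruction error.

```python
import torch
import torch.nn as nn

N_FEATURES = 6   # assumed charging parameters, e.g. requested power, SoC, duration

autoencoder = nn.Sequential(
    nn.Linear(N_FEATURES, 3), nn.ReLU(),   # encoder
    nn.Linear(3, N_FEATURES),              # decoder
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def train_on_benign(benign_batches, epochs=10):
    """Fit the autoencoder on benign charging sessions only."""
    for _ in range(epochs):
        for x in benign_batches:            # x: tensor of shape (batch, N_FEATURES)
            loss = nn.functional.mse_loss(autoencoder(x), x)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def is_attack(session, threshold):
    """Flag a session whose reconstruction error exceeds the threshold."""
    with torch.no_grad():
        err = nn.functional.mse_loss(autoencoder(session), session).item()
    return err > threshold
```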
arXiv Detail & Related papers (2023-10-18T18:38:59Z)
- Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [60.17407932691429]
Open Radio Access Network systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and "challenging" environments.
We prove the proposed solutions achieve sub-linear regret, providing zero average optimality gap even in challenging environments.
arXiv Detail & Related papers (2023-09-04T17:30:21Z)
- MARL for Decentralized Electric Vehicle Charging Coordination with V2V Energy Exchange [5.442116840518914]
This paper addresses the EV charging coordination by considering vehicle-to-vehicle (V2V) energy exchange.
We propose a Multi-Agent Reinforcement Learning (MARL) approach to coordinate EV charging with V2V energy exchange.
arXiv Detail & Related papers (2023-08-27T14:06:21Z)
- Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z)
- Deep Reinforcement Learning-Based Battery Conditioning Hierarchical V2G Coordination for Multi-Stakeholder Benefits [3.4529246211079645]
This study proposes a multi-stakeholder hierarchical V2G coordination based on deep reinforcement learning (DRL) and the Proof of Stake algorithm.
The multi-stakeholders include the power grid, EV aggregators (EVAs), and users, and the proposed strategy can achieve multi-stakeholder benefits.
arXiv Detail & Related papers (2023-08-01T01:19:56Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Federated Reinforcement Learning for Real-Time Electric Vehicle Charging and Discharging Control [42.17503767317918]
This paper develops an optimal EV charging/discharging control strategy for different EV users under dynamic environments.
A horizontal federated reinforcement learning (HFRL)-based method is proposed to fit various users' behaviors and dynamic environments.
Simulation results illustrate that the proposed real-time EV charging/discharging control strategy performs well across various factors.
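As a hedged illustration of what the "horizontal" part of federated reinforcement learning typically involves, the sketch below averages the weights of per-user policy networks after local updates (FedAvg-style); the network, local loss, and number of rounds are assumptions, not that paper's confirmed procedure.

```python
import copy
import torch
import torch.nn as nn

def make_policy():
    return nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))

global_policy = make_policy()
clients = [copy.deepcopy(global_policy) for _ in range(5)]   # one local model per EV user

def local_rl_update(policy):
    """Placeholder for each user's local RL training on private transitions."""
    opt = torch.optim.SGD(policy.parameters(), lr=1e-2)
    states = torch.randn(16, 8)              # dummy local batch
    loss = policy(states).pow(2).mean()      # stand-in for the RL loss
    opt.zero_grad()
    loss.backward()
    opt.step()

for round_ in range(10):                     # federated rounds
    for c in clients:
        c.load_state_dict(global_policy.state_dict())   # broadcast global weights
        local_rl_update(c)
    # FedAvg: average the clients' weights into the global model
    avg = {k: torch.stack([c.state_dict()[k] for c in clients]).mean(0)
           for k in global_policy.state_dict()}
    global_policy.load_state_dict(avg)
```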
arXiv Detail & Related papers (2022-10-04T08:22:46Z)
- A Reinforcement Learning Approach for Electric Vehicle Routing Problem with Vehicle-to-Grid Supply [2.6066825041242367]
We present QuikRouteFinder, which uses reinforcement learning (RL) for EV routing to overcome these challenges.
Results from RL are compared against exact formulations based on mixed-integer linear program (MILP) and genetic algorithm (GA) metaheuristics.
arXiv Detail & Related papers (2022-04-12T06:13:06Z)
- An Energy Consumption Model for Electrical Vehicle Networks via Extended Federated-learning [50.85048976506701]
This paper proposes a novel solution to range anxiety based on a federated-learning model.
It is capable of estimating battery consumption and providing energy-efficient route planning for vehicle networks.
arXiv Detail & Related papers (2021-11-13T15:03:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.