Federated Reinforcement Learning for Real-Time Electric Vehicle Charging
and Discharging Control
- URL: http://arxiv.org/abs/2210.01452v1
- Date: Tue, 4 Oct 2022 08:22:46 GMT
- Title: Federated Reinforcement Learning for Real-Time Electric Vehicle Charging
and Discharging Control
- Authors: Zixuan Zhang and Yuning Jiang and Yuanming Shi and Ye Shi and Wei Chen
- Abstract summary: This paper develops an optimal EV charging/discharging control strategy for different EV users under dynamic environments.
A horizontal federated reinforcement learning (HFRL)-based method is proposed to fit various users' behaviors and dynamic environments.
Simulation results illustrate that the proposed real-time EV charging/discharging control strategy performs well under various stochastic factors.
- Score: 42.17503767317918
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the recent advances in mobile energy storage technologies, electric
vehicles (EVs) have become a crucial part of smart grids. When EVs participate
in the demand response program, the charging cost can be significantly reduced
by taking full advantage of the real-time pricing signals. However, the
dynamic environment involves many stochastic factors, which makes it
challenging to design an optimal charging/discharging control strategy. This
paper develops an optimal EV charging/discharging control strategy for
different EV users under dynamic environments to maximize EV users' benefits.
We first formulate this problem as a Markov decision process (MDP). Then we
consider EV users with different behaviors as agents in different environments.
Furthermore, a horizontal federated reinforcement learning (HFRL)-based method
is proposed to fit various users' behaviors and dynamic environments. This
approach can learn an optimal charging/discharging control strategy without
sharing users' profiles. Simulation results illustrate that the proposed
real-time EV charging/discharging control strategy performs well under
various stochastic factors.
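For intuition, the sketch below illustrates the horizontal-federated training pattern described in the abstract: each EV agent learns a charging/discharging policy from its own (private) interactions, and only model parameters are averaged across agents, so no user profile leaves the client. The toy environment, discretized state, and tabular Q-learning are simplifying assumptions for illustration, not the paper's actual MDP or learning architecture.

```python
# Toy sketch of horizontal federated reinforcement learning (HFRL) for EV
# charging/discharging control. Price model, state discretization, and reward
# shape are illustrative assumptions, not the paper's formulation; the point
# is the federation step: local training plus parameter averaging (FedAvg),
# with no sharing of user data.
import numpy as np

N_HOURS, N_SOC, N_ACTIONS = 24, 11, 3      # state: (hour, SoC level); actions: discharge / idle / charge
N_AGENTS, ROUNDS, LOCAL_EPISODES = 5, 20, 50
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
# Each EV user has a different (private) price sensitivity, standing in for
# heterogeneous user behaviors and environments.
user_price_scale = rng.uniform(0.8, 1.2, size=N_AGENTS)

def step(hour, soc, action, price_scale):
    """One environment transition: action 0=discharge, 1=idle, 2=charge."""
    price = price_scale * (1.0 + 0.5 * np.sin(2 * np.pi * hour / 24))  # toy real-time price
    delta = action - 1                                   # -1, 0, +1 SoC level
    new_soc = int(np.clip(soc + delta, 0, N_SOC - 1))
    reward = -price * delta                              # pay to charge, earn to discharge
    if hour == N_HOURS - 1 and new_soc < N_SOC // 2:     # penalty if the battery is low at departure
        reward -= 5.0
    return (hour + 1) % N_HOURS, new_soc, reward

def local_train(q, price_scale):
    """Local epsilon-greedy Q-learning on the user's own data; nothing is shared."""
    q = q.copy()
    for _ in range(LOCAL_EPISODES):
        soc = rng.integers(N_SOC)
        for hour in range(N_HOURS):
            if rng.random() < EPSILON:
                a = int(rng.integers(N_ACTIONS))
            else:
                a = int(np.argmax(q[hour, soc]))
            nh, nsoc, r = step(hour, soc, a, price_scale)
            target = r + GAMMA * np.max(q[nh, nsoc])
            q[hour, soc, a] += ALPHA * (target - q[hour, soc, a])
            soc = nsoc
    return q

global_q = np.zeros((N_HOURS, N_SOC, N_ACTIONS))
for rnd in range(ROUNDS):
    # Horizontal federation: every agent starts from the shared global model,
    # trains locally, and only the model parameters are averaged.
    local_models = [local_train(global_q, s) for s in user_price_scale]
    global_q = np.mean(local_models, axis=0)

print("Greedy action at hour 2 with half-full battery:",
      int(np.argmax(global_q[2, N_SOC // 2])))
```

In this pattern the server never sees charging histories or arrival/departure times, only averaged parameters, which is what allows the strategy to be learned "without sharing users' profiles."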
Related papers
- Advancing Generative Artificial Intelligence and Large Language Models for Demand Side Management with Internet of Electric Vehicles [52.43886862287498]
This paper explores the integration of large language models (LLMs) into energy management.
We propose an innovative solution that enhances LLMs with retrieval-augmented generation for automatic problem formulation, code generation, and customizing optimization.
We present a case study to demonstrate the effectiveness of our proposed solution in charging scheduling and optimization for electric vehicles.
arXiv Detail & Related papers (2025-01-26T14:31:03Z)
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z)
- Uncertainty-Aware Critic Augmentation for Hierarchical Multi-Agent EV Charging Control [9.96602699887327]
We propose HUCA, a novel real-time charging control framework for regulating the energy demands of both the building and EVs.
HUCA employs hierarchical actor-critic networks to dynamically reduce electricity costs in buildings while accounting for EV charging needs under dynamic pricing.
Experiments on real-world electricity datasets under both simulated certain and uncertain departure scenarios demonstrate that HUCA outperforms baselines in terms of total electricity costs.
arXiv Detail & Related papers (2024-12-23T23:45:45Z)
- A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in Smart Grids [0.0]
We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate the Q-values of the available actions, and applied the epsilon-greedy algorithm to balance exploration and exploitation while meeting the target energy profile.
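As a rough illustration of the Q-learning loop this entry describes, the snippet below shows epsilon-greedy action selection and a Bellman (temporal-difference) update for a small Q-network. The state layout, network size, and hyperparameters are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of a Deep Q-Learning update with epsilon-greedy exploration.
# All dimensions and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA, EPSILON = 4, 3, 0.99, 0.1   # actions: discharge / idle / charge

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state):
    """Epsilon-greedy: explore with probability EPSILON, otherwise act greedily."""
    if torch.rand(1).item() < EPSILON:
        return int(torch.randint(N_ACTIONS, (1,)).item())
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def td_update(states, actions, rewards, next_states, dones):
    """One Bellman (temporal-difference) update on a batch of transitions."""
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + GAMMA * (1 - dones) * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for transitions collected from the EV scheduling environment.
batch = 32
action = select_action(torch.randn(STATE_DIM))
td_update(torch.randn(batch, STATE_DIM),
          torch.randint(N_ACTIONS, (batch,)),
          torch.randn(batch),
          torch.randn(batch, STATE_DIM),
          torch.zeros(batch))
```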
arXiv Detail & Related papers (2024-01-05T06:04:46Z)
- Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
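The entry does not specify the detector's architecture; one common unsupervised pattern for this kind of monitoring is an autoencoder trained on benign charging parameters, flagging sessions with high reconstruction error. The sketch below shows that generic pattern under assumed features and an assumed threshold rule, not the paper's exact detector.

```python
# Generic autoencoder-based anomaly detector for charging-session parameters.
# Feature set, architecture, and thresholding are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATURES = 6   # e.g. requested current, delivered current, SoC, duration, ... (assumed)

autoencoder = nn.Sequential(
    nn.Linear(N_FEATURES, 3), nn.ReLU(),
    nn.Linear(3, N_FEATURES),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

normal_sessions = torch.randn(1024, N_FEATURES)   # stand-in for benign charging data
for _ in range(200):                              # fit the autoencoder to normal behaviour only
    recon = autoencoder(normal_sessions)
    loss = nn.functional.mse_loss(recon, normal_sessions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Calibrate a threshold on held-out normal data, e.g. the 99th percentile of errors.
with torch.no_grad():
    errors = ((autoencoder(normal_sessions) - normal_sessions) ** 2).mean(dim=1)
threshold = torch.quantile(errors, 0.99).item()

def is_anomalous(session):
    """Flag a session whose reconstruction error exceeds the calibrated threshold."""
    with torch.no_grad():
        err = nn.functional.mse_loss(autoencoder(session), session)
    return err.item() > threshold

print(is_anomalous(torch.randn(N_FEATURES)))
```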
arXiv Detail & Related papers (2023-10-18T18:38:59Z)
- Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z)
- Deep Reinforcement Learning-Based Battery Conditioning Hierarchical V2G Coordination for Multi-Stakeholder Benefits [3.4529246211079645]
This study proposes a multi-stakeholder hierarchical V2G coordination based on deep reinforcement learning (DRL) and the Proof of Stake algorithm.
The multi-stakeholders include the power grid, EV aggregators (EVAs), and users, and the proposed strategy can achieve multi-stakeholder benefits.
arXiv Detail & Related papers (2023-08-01T01:19:56Z)
- Computationally efficient joint coordination of multiple electric vehicle charging points using reinforcement learning [6.37470346908743]
A major challenge in today's power grid is to manage the increasing load from electric vehicle (EV) charging.
We propose a single-step solution that jointly coordinates multiple charging points at once.
We show that our new RL solutions still improve the performance of charging demand coordination by 40-50% compared to a business-as-usual policy.
arXiv Detail & Related papers (2022-03-26T13:42:57Z)
- Investigating Underlying Drivers of Variability in Residential Energy Usage Patterns with Daily Load Shape Clustering of Smart Meter Data [53.51471969978107]
Large-scale deployment of smart meters has motivated a growing number of studies exploring disaggregated daily load patterns.
This paper aims to shed light on the mechanisms by which electricity consumption patterns exhibit variability.
arXiv Detail & Related papers (2021-02-16T16:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.