A Deep Reinforcement Learning-Based Charging Scheduling Approach with
Augmented Lagrangian for Electric Vehicle
- URL: http://arxiv.org/abs/2209.09772v1
- Date: Tue, 20 Sep 2022 14:56:51 GMT
- Title: A Deep Reinforcement Learning-Based Charging Scheduling Approach with
Augmented Lagrangian for Electric Vehicle
- Authors: Guibin Chen and Xiaoying Shi
- Abstract summary: This paper formulates the EV charging scheduling problem as a constrained Markov decision process (CMDP)
A novel safe off-policy reinforcement learning (RL) approach is proposed in this paper to solve the CMDP.
Comprehensive numerical experiments with real-world electricity prices demonstrate that our proposed algorithm achieves high solution optimality and constraint compliance.
- Score: 2.686271754751717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of optimizing charging/discharging
schedules of electric vehicles (EVs) participating in demand response (DR).
Given the uncertainties in EVs' remaining energy, arrival and departure times,
and future electricity prices, it is quite difficult to make charging decisions
that minimize charging cost while guaranteeing that the EV's battery
state of charge (SOC) stays within a certain range. To resolve this dilemma,
this paper formulates the EV charging scheduling problem as a constrained
Markov decision process (CMDP). By synergistically combining the augmented
Lagrangian method and soft actor critic algorithm, a novel safe off-policy
reinforcement learning (RL) approach is proposed in this paper to solve the
CMDP. The actor network is updated in a policy-gradient manner with the
Lagrangian value function. A double-critic network is adopted to synchronously
estimate the action-value function and avoid overestimation bias. The proposed
algorithm does not require a strong convexity guarantee for the examined
problems and is sample-efficient. Comprehensive numerical experiments with
real-world electricity prices demonstrate that our proposed algorithm achieves
high solution optimality and constraint compliance.
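The core idea of the augmented Lagrangian method used above can be illustrated on a toy problem. The sketch below is not the paper's algorithm (which couples the multiplier update with a soft actor-critic policy update); it is a minimal numeric example showing the same primal-dual pattern: minimize an inner (augmented) Lagrangian over the decision variable, then update the multiplier by dual ascent. All names and constants are illustrative.

```python
# Toy augmented Lagrangian: minimize x^2 subject to x >= 1,
# written as g(x) = 1 - x <= 0. In the paper, the inner minimization
# is played by the actor update with the Lagrangian value function.

def augmented_lagrangian_solve(rho=10.0, outer_iters=20, inner_iters=200, lr=0.02):
    x, lam = 0.0, 0.0  # primal variable and Lagrange multiplier
    for _ in range(outer_iters):
        # Inner loop: gradient descent on the augmented Lagrangian in x.
        for _ in range(inner_iters):
            slack = max(0.0, 1.0 - x + lam / rho)   # active part of the constraint
            grad = 2.0 * x - rho * slack            # d/dx of the augmented Lagrangian
            x -= lr * grad
        # Outer loop: dual ascent on the multiplier, projected to lam >= 0.
        lam = max(0.0, lam + rho * (1.0 - x))
    return x, lam

x_star, lam_star = augmented_lagrangian_solve()
# Converges to the constrained optimum x = 1 with multiplier lam = 2.
```

At the optimum the stationarity condition 2x = lam recovers lam = 2, which is why the multiplier converges there; no convexity assumption beyond this toy setup is implied for the paper's CMDP.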
Related papers
- Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks [1.9188272016043582]
We introduce a novel approach for distributed and cooperative charging strategy using a Multi-Agent Reinforcement Learning (MARL) framework.
Our method is built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community.
Our results indicate that, despite higher policy variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency, reducing total variation by approximately 36% and charging cost by around 9.1% on average.
arXiv Detail & Related papers (2024-04-18T21:50:03Z) - Safety-Aware Reinforcement Learning for Electric Vehicle Charging Station Management in Distribution Network [4.842172685255376]
Electric vehicles (EVs) pose a significant risk to the distribution system operation in the absence of coordination.
This paper presents a safety-aware reinforcement learning (RL) algorithm designed to manage EV charging stations.
Our proposed algorithm does not rely on explicit penalties for constraint violations, eliminating the need to tune a penalty coefficient.
arXiv Detail & Related papers (2024-03-20T01:57:38Z) - Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
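The paper's detector is an unsupervised deep learning model; as a much simpler statistical stand-in, the sketch below shows the same monitoring idea: fit a baseline over a charging parameter observed in normal sessions, then flag readings that deviate strongly from it. The parameter choice, thresholds, and function names are all illustrative assumptions, not the paper's method.

```python
# Simplified stand-in for an unsupervised charging-anomaly detector:
# learn a baseline distribution of a monitored parameter (e.g. charging
# power in kW) from normal sessions, then flag large z-score deviations.
import statistics

def fit_baseline(normal_powers):
    # Baseline = sample mean and standard deviation of normal readings.
    mu = statistics.mean(normal_powers)
    sigma = statistics.stdev(normal_powers)
    return mu, sigma

def is_anomalous(power, baseline, z_threshold=3.0):
    # Flag a reading whose z-score exceeds the threshold.
    mu, sigma = baseline
    return abs(power - mu) / sigma > z_threshold
```

A deep autoencoder would replace the z-score with a reconstruction error, but the detection logic (train on normal data, threshold a deviation score) is the same.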
arXiv Detail & Related papers (2023-10-18T18:38:59Z) - Federated Reinforcement Learning for Electric Vehicles Charging Control
on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z) - GP CC-OPF: Gaussian Process based optimization tool for
Chance-Constrained Optimal Power Flow [54.94701604030199]
The Gaussian Process (GP) based Chance-Constrained Optimal Power Flow (CC-OPF) is an open-source Python code for the economic dispatch (ED) problem in power grids.
The developed tool presents a novel data-driven approach based on the CC-OPF model for solving the large regression problem with a trade-off between complexity and accuracy.
arXiv Detail & Related papers (2023-02-16T17:59:06Z) - Data-Driven Chance Constrained AC-OPF using Hybrid Sparse Gaussian
Processes [57.70237375696411]
The paper proposes a fast data-driven setup that uses the sparse and hybrid Gaussian processes (GP) framework to model the power flow equations with input uncertainty.
We advocate the efficiency of the proposed approach by a numerical study over multiple IEEE test cases showing up to two times faster and more accurate solutions.
arXiv Detail & Related papers (2022-08-30T09:27:59Z) - Data-Driven Stochastic AC-OPF using Gaussian Processes [54.94701604030199]
Integrating a significant amount of renewables into a power grid is probably the most effective way to reduce carbon emissions from power grids and slow down climate change.
This paper presents an alternative data-driven approach based on the AC power flow equations that can incorporate uncertainty inputs.
The GP approach learns a simple yet unconstrained data-driven model to close this gap to the AC power flow equations.
arXiv Detail & Related papers (2022-07-21T23:02:35Z) - Computationally efficient joint coordination of multiple electric
vehicle charging points using reinforcement learning [6.37470346908743]
A major challenge in today's power grid is to manage the increasing load from electric vehicle (EV) charging.
We propose a single-step solution that jointly coordinates multiple charging points at once.
We show that our new RL solutions still improve the performance of charging demand coordination by 40-50% compared to a business-as-usual policy.
arXiv Detail & Related papers (2022-03-26T13:42:57Z) - Efficient Representation for Electric Vehicle Charging Station
Operations using Reinforcement Learning [5.815007821143811]
We develop aggregation schemes based on the urgency of EV charging, namely the laxity value.
A least-laxity-first (LLF) rule is adopted so that only the total charging power of the EVCS needs to be considered.
In addition, we propose an equivalent state aggregation that can guarantee to attain the same optimal policy.
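A least-laxity-first rule of the kind mentioned above can be sketched in a few lines. This is a generic illustration under assumed units (one energy unit per time step per charger), not the paper's aggregation scheme: laxity is the slack between an EV's time to departure and the time needed to finish charging at full rate, and the tightest EVs are served first up to the station's power limit.

```python
# Hypothetical least-laxity-first (LLF) dispatch sketch.
# laxity = time_to_departure - remaining_energy / charge_rate:
# the smaller the laxity, the more urgent the EV.

def llf_dispatch(evs, station_limit, rate=1.0):
    """evs: list of (name, remaining_energy, time_to_departure) tuples.
    Returns (name, power) pairs, total power capped at station_limit."""
    def laxity(ev):
        _, energy, deadline = ev
        return deadline - energy / rate

    schedule = []
    power_left = station_limit
    for ev in sorted(evs, key=laxity):       # most urgent first
        p = min(rate, power_left)
        if p <= 0:
            break                            # station limit exhausted
        schedule.append((ev[0], p))
        power_left -= p
    return schedule
```

For example, with a 2-unit station limit and EVs ("a", 5, 6), ("b", 4, 4), ("c", 2, 10), the laxities are 1, 0, and 8, so "b" and "a" are served and "c" waits.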
arXiv Detail & Related papers (2021-08-07T00:34:48Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.