Deep Reinforcement Learning Based Framework for Mobile Energy
Disseminator Dispatching to Charge On-the-Road Electric Vehicles
- URL: http://arxiv.org/abs/2308.15656v1
- Date: Tue, 29 Aug 2023 22:23:52 GMT
- Title: Deep Reinforcement Learning Based Framework for Mobile Energy
Disseminator Dispatching to Charge On-the-Road Electric Vehicles
- Authors: Jiaming Wang, Jiqian Dong, Sikai Chen, Shreyas Sundaram, Samuel Labi
- Abstract summary: This paper proposes a deep reinforcement learning based methodology to develop a vehicle dispatching framework.
The proposed model can significantly enhance EV travel range while efficiently deploying an optimal number of MEDs.
- Score: 3.7313553276292657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The exponential growth of electric vehicles (EVs) presents novel challenges
in preserving battery health and in addressing the persistent problem of
vehicle range anxiety. To address these concerns, wireless charging,
particularly, Mobile Energy Disseminators (MEDs) have emerged as a promising
solution. The MED is mounted behind a large vehicle and charges all
participating EVs within a radius upstream of it. Unfortuantely, during such
V2V charging, the MED and EVs inadvertently form platoons, thereby occupying
multiple lanes and impairing overall corridor travel efficiency. In addition,
constrained budgets for MED deployment necessitate the development of an
effective dispatching strategy to determine optimal timing and locations for
introducing the MEDs into traffic. This paper proposes a deep reinforcement
learning (DRL) based methodology to develop a vehicle dispatching framework. In
the first component of the framework, we develop a realistic reinforcement
learning environment termed "ChargingEnv" which incorporates a reliable
charging simulation system that accounts for common practical issues in
wireless charging deployment, specifically, the charging panel misalignment.
The second component, the Proximal Policy Optimization (PPO) agent, is trained
to control MED dispatching through continuous interactions with ChargingEnv.
Numerical experiments were carried out to demonstrate the efficacy of the
proposed MED deployment decision processor. The experimental results suggest
that the proposed model can significantly enhance EV travel range while
efficiently deploying an optimal number of MEDs. The proposed model is found to
be not only practical in its applicability but also promising in its real-world
effectiveness. It can help travelers maximize EV
range and help road agencies or private-sector vendors to manage the deployment
of MEDs efficiently.
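As a rough illustration of the two framework components described above, a minimal sketch is given below: a gymnasium-style environment standing in for "ChargingEnv", paired with a Proximal Policy Optimization agent from stable-baselines3. The observation and action definitions, the reward shaping, the driving-drain constant, and the linear panel-misalignment efficiency model are all illustrative assumptions; the paper's actual simulator and dispatching formulation are not reproduced here.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# (1) a "ChargingEnv"-style RL environment, (2) a PPO dispatching agent.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ChargingEnvSketch(gym.Env):
    """Toy corridor with several EVs; the agent decides when to dispatch a MED."""

    def __init__(self, n_evs: int = 5, horizon: int = 200):
        super().__init__()
        self.n_evs, self.horizon = n_evs, horizon
        # Observation: each EV's state of charge (SoC) plus a flag for an active MED.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(n_evs + 1,), dtype=np.float32)
        # Action: 0 = hold the MED back, 1 = dispatch a MED into traffic (assumed).
        self.action_space = spaces.Discrete(2)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.med_active = 0, 0.0
        self.soc = self.np_random.uniform(0.3, 0.9, size=self.n_evs).astype(np.float32)
        return self._obs(), {}

    def _obs(self):
        return np.concatenate([self.soc, [self.med_active]]).astype(np.float32)

    def step(self, action):
        self.t += 1
        if action == 1:
            self.med_active = 1.0
        # Hypothetical charging-panel misalignment: efficiency drops with lateral offset.
        offset = self.np_random.uniform(0.0, 0.3)        # lateral offset in metres (assumed)
        efficiency = max(0.0, 1.0 - 2.0 * offset)        # assumed linear efficiency loss
        charge = 0.02 * efficiency * self.med_active     # per-step V2V charging gain
        self.soc = np.clip(self.soc + charge - 0.01, 0.0, 1.0)  # 0.01 = driving drain
        # Reward trades off fleet range (mean SoC) against the cost of deploying the MED.
        reward = float(self.soc.mean()) - 0.05 * self.med_active
        terminated = bool(self.soc.min() <= 0.0)
        truncated = self.t >= self.horizon
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    from stable_baselines3 import PPO  # PPO agent, as named in the abstract

    env = ChargingEnvSketch()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)  # learn dispatching through interaction
```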
Related papers
- Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation [49.49868273653921]
Diffusion models are promising for joint trajectory prediction and controllable generation in autonomous driving.
We introduce Optimal Gaussian Diffusion (OGD) and Estimated Clean Manifold (ECM) Guidance.
Our methodology streamlines the generative process, enabling practical applications with reduced computational overhead.
arXiv Detail & Related papers (2024-08-01T17:59:59Z) - Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks [1.9188272016043582]
We introduce a novel approach for distributed and cooperative charging strategy using a Multi-Agent Reinforcement Learning (MARL) framework.
Our method is built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community.
Our results indicate that, despite higher policy variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency by reducing total variation by approximately 36% and charging cost by around 9.1% on average.
arXiv Detail & Related papers (2024-04-18T21:50:03Z) - Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
arXiv Detail & Related papers (2023-10-18T18:38:59Z) - Federated Reinforcement Learning for Electric Vehicles Charging Control
on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z) - A new Hyper-heuristic based on Adaptive Simulated Annealing and
Reinforcement Learning for the Capacitated Electric Vehicle Routing Problem [9.655068751758952]
Electric vehicles (EVs) have been adopted in urban areas to reduce environmental pollution and global warming.
There are still deficiencies in routing the trajectories of last-mile logistics that continue to impact social and economic sustainability.
This paper proposes a hyper-heuristic approach called Hyper-heuristic Adaptive Simulated Annealing with Reinforcement Learning.
arXiv Detail & Related papers (2022-06-07T11:10:38Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - Learning to Operate an Electric Vehicle Charging Station Considering
Vehicle-grid Integration [4.855689194518905]
We propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit.
In the centralized allocation process, EVs are allocated to either the waiting or charging spots. In the decentralized execution process, each charger makes its own charging/discharging decision while learning the action-value functions from a shared replay memory.
Numerical results show that the proposed CADE framework is both computationally efficient and scalable, and significantly outperforms the baseline model predictive control (MPC).
arXiv Detail & Related papers (2021-11-01T23:10:28Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z) - Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z) - A Physics Model-Guided Online Bayesian Framework for Energy Management
of Extended Range Electric Delivery Vehicles [3.927161292818792]
This paper improves an in-use rule-based EMS that is used in a delivery vehicle fleet equipped with two-way vehicle-to-cloud connectivity.
A physics model-guided online Bayesian framework is described and validated on a large number of in-use driving samples of EREVs used for last-mile package delivery.
Results show an average of 12.8% fuel use reduction among tested vehicles for 155 real delivery trips.
arXiv Detail & Related papers (2020-06-01T08:43:23Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.