Intelligent Electric Vehicle Charging Recommendation Based on
Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2102.07359v1
- Date: Mon, 15 Feb 2021 06:23:59 GMT
- Title: Intelligent Electric Vehicle Charging Recommendation Based on
Multi-Agent Reinforcement Learning
- Authors: Weijia Zhang, Hao Liu, Fan Wang, Tong Xu, Haoran Xin, Dejing Dou, Hui
Xiong
- Abstract summary: The Electric Vehicle (EV) has become a preferable choice in the modern transportation system due to its environmental and energy sustainability.
In many cities, EV drivers often fail to find the proper spots for charging, because of the limited charging infrastructures and the largely unbalanced charging demands.
We propose a framework, named Multi-Agent Spatio-Temporal Reinforcement Learning (Master), for intelligently recommending public charging stations.
- Score: 42.31586065609373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Electric Vehicle (EV) has become a preferable choice in the modern
transportation system due to its environmental and energy sustainability.
However, in many large cities, EV drivers often fail to find the proper spots
for charging, because of the limited charging infrastructures and the
spatiotemporally unbalanced charging demands. Indeed, the recent emergence of
deep reinforcement learning provides great potential to improve the charging
experience from various aspects over a long-term horizon. In this paper, we
propose a framework, named Multi-Agent Spatio-Temporal Reinforcement Learning
(Master), for intelligently recommending publicly accessible charging stations by
jointly considering various long-term spatiotemporal factors. Specifically, by
regarding each charging station as an individual agent, we formulate this
problem as a multi-objective multi-agent reinforcement learning task. We first
develop a multi-agent actor-critic framework with the centralized attentive
critic to coordinate the recommendation between geo-distributed agents.
Moreover, to quantify the influence of future potential charging competition,
we introduce a delayed access strategy to exploit the knowledge of future
charging competition during training. After that, to effectively optimize
multiple learning objectives, we extend the centralized attentive critic to
multi-critics and develop a dynamic gradient re-weighting strategy to
adaptively guide the optimization direction. Finally, extensive experiments on
two real-world datasets demonstrate that Master achieves the best comprehensive
performance compared with nine baseline approaches.
Related papers
- Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks [1.9188272016043582]
We introduce a novel approach for distributed and cooperative charging strategy using a Multi-Agent Reinforcement Learning (MARL) framework.
Our method is built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community.
Our results indicate that, despite higher policy variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency by reducing total variation by approximately 36% and charging cost by around 9.1% on average.
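The total-variation metric behind the roughly 36% reduction claim is commonly computed as the summed absolute change of the aggregate load profile between consecutive time steps; a minimal sketch, assuming that standard definition:

```python
def total_variation(load):
    """Total variation of an aggregate charging-load profile: the sum
    of absolute changes between consecutive time steps. A smoother
    (flatter) load curve has lower total variation. This assumes the
    standard definition; the paper's metric may be normalized differently."""
    return sum(abs(b - a) for a, b in zip(load, load[1:]))

flat = [2, 2, 2, 2]    # perfectly coordinated charging
spiky = [0, 4, 0, 4]   # uncoordinated, oscillating demand
```

Coordinated charging that delivers the same total energy with the flat profile drives this metric to zero, while the oscillating profile scores 12.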
arXiv Detail & Related papers (2024-04-18T21:50:03Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performances are sub-optimal or even lag far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z)
- A new Hyper-heuristic based on Adaptive Simulated Annealing and Reinforcement Learning for the Capacitated Electric Vehicle Routing Problem [9.655068751758952]
Electric vehicles (EVs) have been adopted in urban areas to reduce environmental pollution and global warming.
There are still deficiencies in routing the trajectories of last-mile logistics that continue to impact social and economic sustainability.
This paper proposes a hyper-heuristic approach called Hyper-heuristic Adaptive Simulated Annealing with Reinforcement Learning.
arXiv Detail & Related papers (2022-06-07T11:10:38Z)
- Computationally efficient joint coordination of multiple electric vehicle charging points using reinforcement learning [6.37470346908743]
A major challenge in today's power grid is managing the increasing load from electric vehicle (EV) charging.
We propose a single-step solution that jointly coordinates multiple charging points at once.
We show that our new RL solutions still improve the performance of charging demand coordination by 40-50% compared to a business-as-usual policy.
arXiv Detail & Related papers (2022-03-26T13:42:57Z)
- Optimized cost function for demand response coordination of multiple EV charging stations using reinforcement learning [6.37470346908743]
We build on previous research on RL, based on a Markov decision process (MDP) to simultaneously coordinate multiple charging stations.
We propose an improved cost function that essentially forces the learned control policy to always fulfill any charging demand that does not offer flexibility.
We rigorously compare the newly proposed batch RL fitted Q-iteration implementation with the original (costly) one, using real-world data.
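A cost function that "forces" the policy to serve inflexible demand can be sketched as a hard penalty on unmet demand whose deadline leaves no slack; the paper's exact functional form may differ, and every parameter name here is illustrative:

```python
def charging_cost(energy_delivered, demand, deadline_slack, price,
                  penalty=1e6):
    """Illustrative cost: electricity cost plus a large penalty whenever
    demand with no time flexibility (deadline_slack == 0) is left unmet.
    The penalty term effectively forces the learned policy to always
    fulfill inflexible charging requests; this is a sketch of the idea,
    not the paper's exact cost function."""
    cost = price * energy_delivered
    unmet = max(demand - energy_delivered, 0.0)
    if deadline_slack == 0 and unmet > 0:
        cost += penalty * unmet  # dominates any electricity-price saving
    return cost
```

Because the penalty dwarfs any achievable price saving, a cost-minimizing policy can never profit from dropping an inflexible request, while flexible requests (positive slack) can still be deferred to cheaper periods.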
arXiv Detail & Related papers (2022-03-03T11:22:27Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- An Energy Consumption Model for Electrical Vehicle Networks via Extended Federated-learning [50.85048976506701]
This paper proposes a novel solution to range anxiety based on a federated-learning model.
It is capable of estimating battery consumption and providing energy-efficient route planning for vehicle networks.
arXiv Detail & Related papers (2021-11-13T15:03:44Z)
- Learning to Operate an Electric Vehicle Charging Station Considering Vehicle-grid Integration [4.855689194518905]
We propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit.
In the centralized allocation process, EVs are allocated to either the waiting or charging spots. In the decentralized execution process, each charger makes its own charging/discharging decision while learning the action-value functions from a shared replay memory.
Numerical results show that the proposed CADE framework is both computationally efficient and scalable, and significantly outperforms the baseline model predictive control (MPC).
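The centralized allocation step can be sketched as a simple laxity-based ranking: EVs with the least remaining time flexibility get the charging spots, the rest wait. This heuristic stands in for the learned allocation and all field names are illustrative, not the paper's actual policy:

```python
def allocate_evs(evs, num_charging_spots):
    """Centralized allocation step of a CADE-style controller (sketch).

    Each EV is a (laxity, ev_id) pair, where laxity is the remaining
    slack before its deadline. EVs with the smallest laxity are most
    urgent and are assigned charging spots; the rest go to waiting spots.
    """
    ranked = sorted(evs)  # ascending laxity: most urgent first
    charging = [ev_id for _, ev_id in ranked[:num_charging_spots]]
    waiting = [ev_id for _, ev_id in ranked[num_charging_spots:]]
    return charging, waiting

# Three EVs competing for two charging spots.
charging, waiting = allocate_evs([(3, "a"), (1, "b"), (2, "c")], 2)
```

In the paper's decentralized execution phase, each charger would then pick its own charging/discharging action for its assigned EV; only the allocation is centralized.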
arXiv Detail & Related papers (2021-11-01T23:10:28Z)
- Self-Supervised Reinforcement Learning for Recommender Systems [77.38665506495553]
We propose self-supervised reinforcement learning for sequential recommendation tasks.
Our approach augments standard recommendation models with two output layers: one for self-supervised learning and the other for RL.
Based on such an approach, we propose two frameworks, namely Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC).
arXiv Detail & Related papers (2020-06-10T11:18:57Z)
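The two-head design behind SQN can be sketched as a joint loss: a cross-entropy term for the self-supervised next-item head plus a one-step TD term for the Q-learning head. The shapes and the exact target form are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def sqn_loss(logits, q_values, next_item, reward, q_next_max, gamma=0.9):
    """SQN-style joint loss (illustrative sketch): a self-supervised
    cross-entropy head for next-item prediction plus a Q-learning head
    fit to a one-step TD target, sharing the same item vocabulary."""
    # Self-supervised head: cross-entropy on the observed next item.
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    ce = -np.log(probs[next_item])
    # RL head: squared TD error toward r + gamma * max_a' Q(s', a').
    td_error = q_values[next_item] - (reward + gamma * q_next_max)
    return ce + td_error ** 2

# Toy step: two candidate items, the user picked item 0 with reward 1.
loss = sqn_loss(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                next_item=0, reward=1.0, q_next_max=0.0)
```

Both heads are trained on the same interaction sequence, so the supervised signal shapes the shared representation while the RL head optimizes long-term reward, which is the core idea the summary describes.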
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.