Empirical Analysis of AI-based Energy Management in Electric Vehicles: A
Case Study on Reinforcement Learning
- URL: http://arxiv.org/abs/2212.09154v1
- Date: Sun, 18 Dec 2022 20:12:20 GMT
- Title: Empirical Analysis of AI-based Energy Management in Electric Vehicles: A
Case Study on Reinforcement Learning
- Authors: Jincheng Hu, Yang Lin, Jihao Li, Zhuoran Hou, Dezong Zhao, Quan Zhou,
Jingjing Jiang and Yuanjian Zhang
- Abstract summary: Reinforcement learning-based (RL-based) energy management strategy (EMS) is considered a promising solution for the energy management of electric vehicles with multiple power sources.
This paper presents an empirical analysis of RL-based EMS in a Plug-in Hybrid Electric Vehicle (PHEV) and a Fuel Cell Electric Vehicle (FCEV).
- Score: 9.65075615023066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning-based (RL-based) energy management strategy (EMS) is
considered a promising solution for the energy management of electric vehicles
with multiple power sources. It has been shown to outperform conventional
methods in energy management problems regarding energy-saving and real-time
performance. However, previous studies have not systematically examined the
essential elements of RL-based EMS. This paper presents an empirical analysis
of RL-based EMS in a Plug-in Hybrid Electric Vehicle (PHEV) and Fuel Cell
Electric Vehicle (FCEV). The empirical analysis is developed in four aspects:
algorithm, perception and decision granularity, hyperparameters, and reward
function. The results show that off-policy algorithms develop a more
fuel-efficient solution over the complete driving cycle than the other
algorithms. Improving the perception and decision granularity does not
produce a more desirable energy-saving solution but better balances battery
power and fuel consumption. The equivalent energy optimization objective based
on the instantaneous state of charge (SOC) variation is parameter sensitive and
can help RL-EMSs to achieve more efficient energy-cost strategies.
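As a rough illustration of the reward design discussed above, the sketch below shows one way an equivalent energy objective based on instantaneous SOC variation could be written. This is a hypothetical example, not the paper's implementation: the function name, signal names, and the fuel-equivalence factor `k_soc` (the parameter such objectives are sensitive to) are all assumptions for illustration.

```python
# Hypothetical sketch: equivalent-energy reward from instantaneous fuel
# consumption and SOC variation. All names and values are illustrative
# assumptions, not taken from the paper.

def equivalent_energy_reward(fuel_rate_gps: float,
                             delta_soc: float,
                             k_soc: float = 100.0) -> float:
    """Negative equivalent energy cost over one control step.

    fuel_rate_gps: instantaneous fuel (or hydrogen) consumption, in g/s
    delta_soc:     change in battery state of charge over the step
                   (negative when the battery discharges)
    k_soc:         assumed tunable factor converting SOC change into an
                   equivalent fuel cost; the kind of parameter the
                   abstract reports the objective is sensitive to
    """
    # Discharging the battery (delta_soc < 0) adds to the equivalent
    # cost; charging it (delta_soc > 0) offsets fuel consumption.
    equivalent_cost = fuel_rate_gps - k_soc * delta_soc
    return -equivalent_cost


# Charging slightly while burning 1.2 g/s of fuel:
r_charge = equivalent_energy_reward(1.2, delta_soc=0.001)
# Discharging instead is penalized more heavily:
r_discharge = equivalent_energy_reward(1.2, delta_soc=-0.001)
```

Because `k_soc` directly trades SOC change against fuel use, small changes to it shift the learned policy between battery-heavy and fuel-heavy behavior, which is consistent with the parameter sensitivity noted in the abstract.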
Related papers
- Data-driven modeling and supervisory control system optimization for plug-in hybrid electric vehicles [16.348774515562678]
Learning-based intelligent energy management systems for plug-in hybrid electric vehicles (PHEVs) are crucial for achieving efficient energy utilization.
Their application faces system reliability challenges in the real world, which prevents widespread acceptance by original equipment manufacturers (OEMs).
This paper proposes a real-vehicle application-oriented control framework, combining horizon-extended reinforcement learning (RL)-based energy management with the equivalent consumption minimization strategy (ECMS) to enhance practical applicability.
arXiv Detail & Related papers (2024-06-13T13:04:42Z)
- On Feature Diversity in Energy-based Models [98.78384185493624]
An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration.
We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs.
arXiv Detail & Related papers (2023-06-02T12:30:42Z)
- Towards Optimal Energy Management Strategy for Hybrid Electric Vehicle with Reinforcement Learning [5.006685959891296]
Reinforcement learning (RL) has proven to be an effective solution for learning intelligent control strategies.
This paper presents a novel framework, in which we implement and integrate RL-based EMS with the open-source vehicle simulation tool called FASTSim.
The learned RL-based EMSs are evaluated on various vehicle models using different test drive cycles and prove to be effective in improving energy efficiency.
arXiv Detail & Related papers (2023-05-21T06:29:17Z)
- Energy Management of Multi-mode Plug-in Hybrid Electric Vehicle using Multi-agent Deep Reinforcement Learning [6.519522573636577]
Multi-mode plug-in hybrid electric vehicle (PHEV) technology is one pathway contributing to decarbonization.
This paper studies a multi-agent deep reinforcement learning (MADRL) control method for energy management of the multi-mode PHEV.
Using the unified DDPG settings and a relevance ratio of 0.2, the proposed MADRL system can save up to 4% energy compared to the single-agent learning system and up to 23.54% energy compared to the conventional rule-based system.
arXiv Detail & Related papers (2023-03-16T21:31:55Z)
- Optimal Planning of Hybrid Energy Storage Systems using Curtailed Renewable Energy through Deep Reinforcement Learning [0.0]
We propose a sophisticated deep reinforcement learning (DRL) methodology with a policy-based algorithm to plan energy storage systems (ESS)
A quantitative performance comparison proved that the DRL agent outperforms the scenario-based optimization (SO) algorithm.
The corresponding results confirmed that the DRL agent learns in a manner similar to a human expert, suggesting that the proposed methodology can be applied reliably.
arXiv Detail & Related papers (2022-12-12T02:24:50Z)
- Progress and summary of reinforcement learning on energy management of MPS-EV [4.0629930354376755]
The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range.
This paper presents an in-depth analysis of the current research on RL-based EMS and summarizes the design elements of RL-based EMS.
arXiv Detail & Related papers (2022-11-08T04:49:32Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by up to 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.