Transfer Deep Reinforcement Learning-enabled Energy Management Strategy
for Hybrid Tracked Vehicle
- URL: http://arxiv.org/abs/2007.08690v1
- Date: Thu, 16 Jul 2020 23:39:34 GMT
- Title: Transfer Deep Reinforcement Learning-enabled Energy Management Strategy
for Hybrid Tracked Vehicle
- Authors: Xiaowei Guo, Teng Liu, Bangbei Tang, Xiaolin Tang, Jinwei Zhang,
Wenhao Tan, and Shufeng Jin
- Abstract summary: This paper proposes an adaptive energy management strategy for hybrid electric vehicles by combining deep reinforcement learning (DRL) and transfer learning (TL).
It aims to address the drawback of DRL, namely its tedious training time.
The resulting DRL- and TL-enabled control policy is capable of enhancing energy efficiency and improving system performance.
- Score: 8.327437591702163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes an adaptive energy management strategy for hybrid
electric vehicles by combining deep reinforcement learning (DRL) and transfer
learning (TL). This work aims to address the drawback of DRL, namely its tedious
training time. First, an optimization-oriented control model of a hybrid tracked
vehicle is built, wherein the powertrain components are introduced in detail. Then, a
bi-level control framework is constructed to derive the energy management
strategies (EMSs). The upper level applies the deep deterministic policy gradient
(DDPG) algorithm to train EMSs at different speed intervals. The lower level employs
TL to adapt the pre-trained neural networks to a novel driving cycle. Finally, a
series of experiments are executed to prove the effectiveness of the presented
control framework. The optimality and adaptability of the formulated EMS are
demonstrated. The resulting DRL- and TL-enabled control policy is capable of
enhancing energy efficiency and improving system performance.
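As a rough illustration of the lower-level transfer step described in the abstract (not the authors' code), the sketch below assumes a PyTorch DDPG actor whose weights, pre-trained on one speed interval, are copied into a new actor and fine-tuned on a novel driving cycle; the state/action dimensions, layer sizes, and the choice to freeze the feature layers are illustrative assumptions.

```python
# Minimal sketch of the lower-level transfer step, assuming a PyTorch DDPG actor.
# Network sizes, layer names, and the freezing choice are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a powertrain state (e.g. SOC, speed, power demand) to a control action."""
    def __init__(self, state_dim=3, action_dim=1):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(64, action_dim)

    def forward(self, state):
        return torch.tanh(self.head(self.feature(state)))  # action scaled to [-1, 1]

# Upper level: an actor pre-trained with DDPG on one speed interval (weights assumed saved).
source_actor = Actor()
# source_actor.load_state_dict(torch.load("ddpg_actor_interval_A.pt"))  # hypothetical file

# Lower level: transfer the pre-trained weights to a new actor for the novel driving cycle.
target_actor = Actor()
target_actor.load_state_dict(source_actor.state_dict())

# One common TL choice: freeze the shared feature layers and fine-tune only the output head.
for p in target_actor.feature.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(target_actor.head.parameters(), lr=1e-4)
# ...continue DDPG updates on the new driving cycle using `optimizer`...
```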
Related papers
- Data-driven modeling and supervisory control system optimization for plug-in hybrid electric vehicles [16.348774515562678]
Learning-based intelligent energy management systems for plug-in hybrid electric vehicles (PHEVs) are crucial for achieving efficient energy utilization.
Their application faces system reliability challenges in the real world, which prevents widespread acceptance by original equipment manufacturers (OEMs).
This paper proposes a real-vehicle application-oriented control framework, combining horizon-extended reinforcement learning (RL)-based energy management with the equivalent consumption minimization strategy (ECMS) to enhance practical applicability.
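For context on the ECMS component mentioned above, the snippet below sketches the standard equivalent-consumption cost that ECMS minimizes at each instant; the variable names, the fixed equivalence factor, and the candidate values are illustrative assumptions, not details from the paper.

```python
# Standard ECMS instantaneous cost (illustrative, not the paper's exact formulation):
# equivalent fuel rate = engine fuel rate + s * battery power / fuel lower heating value.
def ecms_cost(fuel_rate_gps: float, batt_power_w: float,
              equivalence_factor: float = 2.5, lhv_j_per_g: float = 42_500.0) -> float:
    """Return the equivalent fuel consumption rate in g/s for one candidate power split."""
    return fuel_rate_gps + equivalence_factor * batt_power_w / lhv_j_per_g

# At each control step, ECMS picks the power split with the lowest equivalent cost.
candidates = [(1.2, 5_000.0), (0.9, 15_000.0), (1.5, -8_000.0)]  # (fuel g/s, battery W)
best = min(candidates, key=lambda c: ecms_cost(*c))
```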
arXiv Detail & Related papers (2024-06-13T13:04:42Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Energy Management of Multi-mode Plug-in Hybrid Electric Vehicle using Multi-agent Deep Reinforcement Learning [6.519522573636577]
Multi-mode plug-in hybrid electric vehicle (PHEV) technology is one of the pathways making contributions to decarbonization.
This paper studies a multi-agent deep reinforcement learning (MADRL) control method for energy management of the multi-mode PHEV.
Using the unified DDPG settings and a relevance ratio of 0.2, the proposed MADRL system can save up to 4% energy compared to the single-agent learning system and up to 23.54% energy compared to the conventional rule-based system.
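The "relevance ratio" above suggests a weighting between each agent's own objective and the other agent's; the snippet below is one plausible reading of such a weighted reward, with the reward values and the 0.2 weighting used purely as illustrative assumptions rather than the paper's actual definition.

```python
# One plausible reading of a "relevance ratio": blend an agent's own reward with the
# other agent's reward so the two DDPG agents are partially coupled. Purely illustrative.
def blended_reward(own_reward: float, other_reward: float,
                   relevance_ratio: float = 0.2) -> float:
    """Weight the other agent's objective by the relevance ratio (0 = fully independent)."""
    return (1.0 - relevance_ratio) * own_reward + relevance_ratio * other_reward

# Hypothetical example: one agent controls the engine, the other the mode selection.
r_engine = blended_reward(own_reward=-0.8, other_reward=-0.3)
r_mode   = blended_reward(own_reward=-0.3, other_reward=-0.8)
```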
arXiv Detail & Related papers (2023-03-16T21:31:55Z)
- Skip Training for Multi-Agent Reinforcement Learning Controller for Industrial Wave Energy Converters [94.84709449845352]
Recent Wave Energy Converters (WEC) are equipped with multiple legs and generators to maximize energy generation.
Traditional controllers have shown limitations in capturing complex wave patterns, while the controllers must efficiently maximize energy capture.
This paper introduces a Multi-Agent Reinforcement Learning controller (MARL), which outperforms the traditionally used spring damper controller.
arXiv Detail & Related papers (2022-09-13T00:20:31Z)
- Multi-agent Deep Reinforcement Learning for Charge-sustaining Control of Multi-mode Hybrid Vehicles [9.416703139663705]
Transportation electrification requires an increasing number of electric components on vehicles.
This paper focuses on the online optimization of energy management strategy for a multi-mode hybrid electric vehicle.
A new collaborative cyber-physical learning framework with multiple agents is proposed.
arXiv Detail & Related papers (2022-09-06T16:40:55Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
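As a rough sketch of what a Transformer-based actor in such a multi-agent actor-critic might look like (the per-bus tokenization, layer sizes, and the use of the first token as the agent's own bus are assumptions, not T-MAAC's actual architecture):

```python
# Rough sketch of a Transformer encoder used as the actor in a multi-agent actor-critic
# voltage controller. Feature sizes and the per-bus tokenization are assumptions.
import torch
import torch.nn as nn

class TransformerActor(nn.Module):
    def __init__(self, bus_feature_dim=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(bus_feature_dim, d_model)  # one token per observed bus
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)                 # e.g. a reactive-power setpoint

    def forward(self, bus_obs):
        # bus_obs: (batch, n_buses, bus_feature_dim) local measurements such as voltage, P, Q.
        tokens = self.encoder(self.embed(bus_obs))
        return torch.tanh(self.head(tokens[:, 0]))        # action for this agent's own bus

actor = TransformerActor()
action = actor(torch.randn(1, 8, 4))  # 8 observed buses, one action produced
```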
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
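As a generic illustration of the alternating scheme described above (not the authors' exact procedure), the loop below alternates between an adversary that perturbs observations and an agent evaluated on those perturbed observations; the environment, the attack, and both update steps are stand-in stubs.

```python
# Generic alternating adversarial-training loop (an illustration, not the paper's procedure).
import random

def adversary_perturb(obs, strength):
    # Stand-in for an attack policy learned in an adversary MDP: bounded observation noise.
    return [x + random.uniform(-strength, strength) for x in obs]

def run_episode(perturb):
    # Stand-in environment rollout: returns a scalar episode return for the control agent.
    obs = [0.0, 0.0]
    return -sum(abs(x) for x in perturb(obs))

for iteration in range(10):
    # 1) Adversary step: (conceptually) update the attack policy to lower the agent's return.
    strength = 0.1 + 0.01 * iteration
    # 2) Agent step: (conceptually) update the control policy on perturbed observations.
    ret = run_episode(lambda o: adversary_perturb(o, strength))
```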
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Data-Driven Transferred Energy Management Strategy for Hybrid Electric Vehicles via Deep Reinforcement Learning [3.313774035672581]
This paper proposes a real-time EMS by incorporating the DRL method and transfer learning (TL).
The related EMSs are derived from and evaluated on the real-world collected driving cycle dataset from Transportation Secure Data Center.
Simulation results indicate that the presented transfer DRL-based EMS could effectively reduce time consumption and guarantee control performance.
arXiv Detail & Related papers (2020-09-07T17:53:07Z)
- Human-like Energy Management Based on Deep Reinforcement Learning and Historical Driving Experiences [5.625230013691758]
The development of hybrid electric vehicles depends on an advanced and efficient energy management strategy (EMS).
This article presents a human-like energy management framework for hybrid electric vehicles according to deep reinforcement learning methods and collected historical driving data.
Improvements in fuel economy and convergence rate indicate the effectiveness of the constructed control structure.
arXiv Detail & Related papers (2020-07-16T14:15:35Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
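As background for the joint beamforming problem above (an illustration, not the paper's method), the snippet below evaluates the effective AP-to-receiver channel through an IRS for a given set of reflecting phase shifts, and then uses standard maximum ratio transmission to obtain the minimum transmit power for an SNR target; the antenna counts, channel realizations, and MRT choice are assumptions.

```python
# Effective downlink channel through an IRS for given phase shifts (illustrative only;
# antenna counts and channel realizations are assumptions, not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 16                                   # AP antennas, IRS reflecting elements
h_d = rng.standard_normal(M) + 1j * rng.standard_normal(M)            # direct AP -> user
G   = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # AP -> IRS
h_r = rng.standard_normal(N) + 1j * rng.standard_normal(N)            # IRS -> user

theta = rng.uniform(0, 2 * np.pi, N)           # passive beamforming: IRS phase shifts
Theta = np.diag(np.exp(1j * theta))            # unit-modulus reflection coefficients

h_eff = h_d + G.conj().T @ Theta @ h_r         # effective channel seen by the user

# Active beamforming: with a single user, maximum ratio transmission minimizes the
# transmit power needed to meet an SNR target (a standard choice, assumed here).
snr_target, noise_power = 10.0, 1.0
w = h_eff / np.linalg.norm(h_eff)              # MRT beamforming direction
p_min = snr_target * noise_power / np.linalg.norm(h_eff) ** 2  # required transmit power
```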
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.